Handbook of experiences in the design and installation of solar heating and cooling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, D.S.; Oberoi, H.S.
1980-07-01
A large array of problems encountered is detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, and piping/ducting, along with reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations. (MHR)
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we consider the problem of constrained redundancy allocation for a series system with interval-valued component reliabilities. To maximize overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients via a penalty function technique and solved by an advanced GA for integer variables with an interval-valued fitness function, tournament selection, uniform crossover, uniform mutation, and elitism. As a special case, the corresponding problem in which the lower and upper bounds of the interval-valued component reliabilities coincide has also been solved. The model is illustrated with numerical examples, and the results for the series redundancy allocation problem with fixed component reliabilities are compared with existing results in the literature. Finally, sensitivity analyses are presented graphically to study the stability of the developed GA with respect to different GA parameters.
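As a concrete illustration of the formulation above, the following minimal Python sketch (not the authors' GA; all component data, costs, and the penalty weight are assumed for the example) evaluates the interval-valued fitness of a candidate redundancy allocation for a series system, with a penalty for violating a single resource constraint:

```python
def interval_system_reliability(r_lo, r_hi, x):
    """Reliability bounds for a series system in which stage i carries x[i]
    parallel copies of a component with reliability in [r_lo[i], r_hi[i]]."""
    lo = hi = 1.0
    for rl, rh, n in zip(r_lo, r_hi, x):
        lo *= 1.0 - (1.0 - rl) ** n  # stage reliability is monotone in r,
        hi *= 1.0 - (1.0 - rh) ** n  # so bounds propagate endpoint-wise
    return lo, hi

def penalized_fitness(x, r_lo, r_hi, cost, budget, penalty=1e3):
    """Interval midpoint of system reliability minus a penalty for violating
    the resource constraint sum(cost[i] * x[i]) <= budget (assumed form)."""
    lo, hi = interval_system_reliability(r_lo, r_hi, x)
    violation = max(0.0, sum(c * n for c, n in zip(cost, x)) - budget)
    return 0.5 * (lo + hi) - penalty * violation

# Illustrative data: 3 stages, assumed reliability intervals and costs.
r_lo, r_hi = [0.80, 0.85, 0.90], [0.90, 0.95, 0.97]
cost, budget = [2.0, 3.0, 1.5], 20.0
print(penalized_fitness([2, 2, 3], r_lo, r_hi, cost, budget))  # feasible
print(penalized_fitness([5, 4, 4], r_lo, r_hi, cost, budget))  # penalized
```

Because the stage reliability 1-(1-r)^n is monotone in r, the interval endpoints can be propagated bound-wise, which is what keeps the interval fitness cheap inside a GA loop.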
Some Reliability Issues in Very Large Databases.
ERIC Educational Resources Information Center
Lynch, Clifford A.
1988-01-01
Describes the unique reliability problems of very large databases that necessitate specialized techniques for hardware problem management. The discussion covers the use of controlled partial redundancy to improve reliability, issues in operating systems and database management systems design, and the impact of disk technology on very large…
NASA Astrophysics Data System (ADS)
Lamour, B. G.; Harris, R. T.; Roberts, A. G.
2010-06-01
Power system reliability problems are difficult to solve because power systems are complex, geographically widely distributed, and influenced by numerous unexpected events. It is therefore imperative to employ the most efficient optimization methods when solving problems relating to power system reliability. This paper presents a reliability analysis and study of the power interruptions resulting from severe outages in the Nelson Mandela Bay Municipality (NMBM), South Africa, and includes an overview of the important factors influencing reliability and methods to improve it. The Blue Horizon Bay 22 kV overhead line, supplying a 6.6 kV residential sector, was selected for study. It was established that 70% of the outages recorded at the source originate on this feeder.
Scheduling for energy and reliability management on multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Qi, Xuan
Scheduling algorithms for multiprocessor real-time systems have been studied for years, and many well-recognized algorithms have been proposed. However, this is still an evolving research area, and many problems remain open due to their intrinsic complexity. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and to design and develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. Because commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) significantly affect system reliability, we study schedulers with intelligent mechanisms that recuperate system reliability to satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in reducing scheduling overhead, saving energy, and improving reliability. The simulation results show that the proposed reliability-aware power management schemes preserve system reliability while still achieving substantial energy savings.
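For context on the reliability/energy trade-off mentioned above, the sketch below uses a transient-fault model common in the DVFS literature (an exponential rise in fault rate as frequency drops); the constants and the cubic-power assumption are illustrative, not taken from the dissertation:

```python
import math

LAMBDA0 = 1e-6   # fault rate at maximum frequency (per time unit; assumed)
D = 2.0          # fault-rate sensitivity exponent (assumed)
F_MIN = 0.4      # minimum normalized frequency (assumed)

def fault_rate(f):
    """Transient-fault rate at normalized frequency f in [F_MIN, 1]:
    the rate rises exponentially as voltage/frequency are scaled down."""
    return LAMBDA0 * 10.0 ** (D * (1.0 - f) / (1.0 - F_MIN))

def task_reliability(wcet, f):
    """Probability that a task with worst-case execution time `wcet` (at
    f = 1) completes fault-free at frequency f (it then runs wcet/f long)."""
    return math.exp(-fault_rate(f) * wcet / f)

def energy(wcet, f):
    """Dynamic energy with power ~ f^3 and duration wcet/f, so E ~ wcet*f^2."""
    return wcet * f ** 2

# Slowing down saves energy but hurts reliability; a reliability-aware
# scheme must recuperate the difference (e.g., by reserving recovery time).
for f in (1.0, 0.7, 0.5):
    print(f, task_reliability(10.0, f), energy(10.0, f))
```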
Reliability Analysis and Modeling of ZigBee Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services that promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. For star and tree networks, a series-system model and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, whose complexity is higher than that of the others, a division technique is applied: a mesh network is decomposed into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved as a series-parallel system through the proposed scheme. The numerical results demonstrate that mesh-network reliability increases as the number of edges in the parallel systems increases, while reliability drops quickly as the numbers of edges and nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. In short, lower network reliability results from network complexity, greater resource usage, and complex object relationships.
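The series/parallel reductions described above amount to two one-line formulas; the following toy Python helpers (component reliabilities assumed for the example) show the pattern used for the star and mesh cases:

```python
from functools import reduce

def series(rs):
    """A series system works only if every block works."""
    return reduce(lambda acc, r: acc * r, rs, 1.0)

def parallel(rs):
    """A parallel system fails only if every block fails."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

# Star topology: coordinator in series with one end device's stack.
print(series([0.99, 0.97, 0.98]))

# Mesh fragment: two redundant edges in parallel, then in series with a node.
print(series([parallel([0.95, 0.95]), 0.99]))
```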
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of the present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find it too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs, beginning with CARE 3 and including a group of newer programs such as HARP, HARP-PC, the Reliability Analyst's Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and how well a user can model systems with it is investigated. One important objective is to study how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students, who used HARP in two graduate courses, are described. Brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Since no answer can be more accurate than the fidelity of the model, an appendix discussing modeling accuracy is included. A broad viewpoint is taken, and all problems that occurred in the use of HARP are discussed, including computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.
On modeling human reliability in space flights - Redundancy and recovery operations
NASA Astrophysics Data System (ADS)
Aarset, M.; Wright, J. F.
The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has been practically neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if overly simple analogies from the technical domain are used in modeling human behavior. In this paper, redundancy in a man-machine system is addressed. It is shown how simplifications from the technical domain, when applied to the human components of a system, may give non-conservative estimates of system reliability.
Exploiting replication in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, T. A.
1989-01-01
Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.
1977-03-01
system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade-offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade-offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determining the system reliability of randomly vibrating structures arises in many application areas of engineering. In this paper we discuss approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time-variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations, which enables estimation of the probability of failure with a significantly smaller number of samples than is needed in a direct simulation study. Notably, we show that the ideas from Girsanov-transformation-based Monte Carlo simulations can be extended to laboratory testing, to assess the system reliability of engineering structures with fewer samples and hence reduced testing times. Illustrative examples include computational studies on a 10-degree-of-freedom nonlinear system model and laboratory/computational investigations of the road load response of an automotive system tested on a four-post test rig.
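The following Python sketch illustrates the idea of Girsanov-based importance sampling on a much simpler problem than the paper's 10-DOF model: a scalar Ornstein-Uhlenbeck process whose first-passage probability is estimated by adding a drift toward the barrier and reweighting each path by the corresponding likelihood ratio (all parameters assumed):

```python
import numpy as np

# dX = -a X dt + s (u dt + dW): the added drift u pushes paths toward the
# barrier; each path is reweighted by exp(-int u dW - 0.5 int u^2 dt),
# Girsanov's likelihood ratio between the true and sampling measures.
rng = np.random.default_rng(0)
a, s = 1.0, 1.0            # OU drift and diffusion coefficients (assumed)
barrier, T = 2.0, 5.0      # failure threshold and time horizon (assumed)
n_steps, n_paths = 500, 20_000
dt = T / n_steps
u = 1.0                    # importance-sampling drift, a tuning choice

x = np.zeros(n_paths)
log_w = np.zeros(n_paths)              # log likelihood-ratio weights
crossed = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # noise under sampling law
    x += -a * x * dt + s * (u * dt + dW)        # biased Euler-Maruyama step
    log_w += -u * dW - 0.5 * u * u * dt         # Girsanov correction
    crossed |= x >= barrier

p_fail = np.mean(np.where(crossed, np.exp(log_w), 0.0))
print("estimated first-passage probability:", p_fail)
```

The same reweighting logic is what allows a shaker test to be driven with an amplified excitation and still yield an unbiased failure-probability estimate for the nominal loading.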
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, directly supply water or oxygen, or, if necessary, bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty of achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically more reliable systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
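The spares argument above can be made concrete with a Poisson model: if a component fails at a constant (assumed) rate, the number of spares needed for a given confidence is the smallest s for which the Poisson CDF reaches the reliability goal, and an underestimated rate visibly breaks the budget. A minimal sketch, with all numbers assumed:

```python
import math

def spares_needed(lam, t, goal):
    """Smallest spare count s with P(Poisson(lam * t) <= s) >= goal."""
    mu = lam * t
    p_term = math.exp(-mu)      # P(N = 0)
    cdf, s = p_term, 0
    while cdf < goal:
        s += 1
        p_term *= mu / s        # P(N = s) from P(N = s - 1)
        cdf += p_term
    return s

# Assumed numbers: 0.5 failures/year, 2.5-year mission, 0.999 confidence.
s = spares_needed(0.5, 2.5, 0.999)
print("spares needed:", s)

# If the true rate is double the estimate, the same spares fall short:
mu = 1.0 * 2.5
p_ok = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(s + 1))
print("actual mission reliability with those spares: %.4f" % p_ok)
```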
Review of battery powered embedded systems design for mission-critical low-power applications
NASA Astrophysics Data System (ADS)
Malewski, Matthew; Cowell, David M. J.; Freear, Steven
2018-06-01
The applications and uses of embedded systems are increasingly pervasive. Mission- and safety-critical systems relying on embedded systems pose specific challenges. Embedded systems design is a multi-disciplinary domain, involving both hardware and software. Systems need to be designed in a holistic manner so that they are able to provide the desired reliability and minimise unnecessary complexity. The large problem landscape means that there is no one solution that fits all applications of embedded systems. With the primary focus of these mission- and safety-critical systems being functionality and reliability, there can be conflicts with business needs, and this can introduce pressure to reduce cost at the expense of reliability and functionality. This paper examines the challenges faced by battery-powered systems, and then explores more general problems and several real-world embedded systems.
NASA Technical Reports Server (NTRS)
Steffen, Chris
1990-01-01
An overview is presented of the time-delay problem and the reliability problem that arise in trying to perform robotic construction operations at a remote space location. The effects of the time delay on the control system design are itemized. A high-level overview is given of a decentralized method of control that is expected to perform better than the centralized approach in solving the time-delay problem. The lower-level, decentralized, autonomous Troter Move-Bar algorithm is also presented (Troters are coordinated independent robots). The solution of the reliability problem is connected to adding redundancy to the system; one method of adding redundancy is given.
Increasing the reliability of labor of railroad engineers
NASA Technical Reports Server (NTRS)
Genes, V. S.; Madiyevskiy, Y. M.
1975-01-01
It has been shown that the group of problems related to temporary overloads still requires serious development with respect to further automating the basic control operation: programmed selection of speed and braking. The problem of systems for warning the engineer about the condition of unseen track segments remains a very serious one. Systems of hygienic support for the engineer also require constructive development. The problems of ensuring reliable work by engineers in periods of low information load, requiring motor acts, can basically be considered theoretically solved.
to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems, among others. They are required to be highly reliable so as to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
Malt, U F
1986-01-01
Experiences from teaching DSM-III to more than three hundred Norwegian psychiatrists and clinical psychologists suggest that reliable DSM-III diagnoses can be achieved within a few hours of training, with reference to the decision trees and the diagnostic criteria only. The diagnoses provided are more reliable than the corresponding ICD diagnoses with which the participants were more familiar. The three main sources of reduced reliability of DSM-III diagnoses are: poor knowledge of the criteria, often connected with failure to obtain key diagnostic information during the clinical interview; unfamiliar concepts; and vague or ambiguous criteria. The first two issues are related to the quality of the teaching of DSM-III. The third source of reduced reliability reflects unsolved validity issues. Using the classification of five affective case stories as examples, these sources of diagnostic pitfalls reducing reliability, and ways to overcome these problems when teaching the DSM-III system, are discussed. It is concluded that the DSM-III system of classification is easy to teach and that it is superior to other available classification systems from a reliability point of view. The current version of the DSM-III system, however, partly owes its high degree of reliability to broad and heterogeneous diagnostic categories, such as the concept of major depression, which may have questionable validity. Thus, future revisions of the DSM-III system should, above all, address the issue of validity.
Effects of computing time delay on real-time control systems
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Cui, Xianzhong
1988-01-01
The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms, because of the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and the effects of computing time delay on their control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
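A toy simulation makes the delay problem above tangible: the same proportional controller on a first-order plant, with and without one full sample of computing delay (plant and gain values are illustrative, not from the paper):

```python
def simulate(delay_steps, kp=1.2, a=0.9, b=0.5, setpoint=1.0, n=40):
    """Proportional control of x[k+1] = a*x[k] + b*u, with the control
    applied delay_steps samples after it is computed."""
    x = 0.0
    u_queue = [0.0] * delay_steps   # controls computed but not yet applied
    history = []
    for _ in range(n):
        u_queue.append(kp * (setpoint - x))  # computed from current state...
        u = u_queue.pop(0)                   # ...applied delay_steps later
        x = a * x + b * u
        history.append(x)
    return history

for name, h in (("no delay", simulate(0)), ("1-sample delay", simulate(1))):
    print("%-15s peak=%.3f final=%.3f" % (name, max(h), h[-1]))
```

The undelayed loop approaches its setpoint monotonically, while the delayed loop overshoots and oscillates, which is exactly the degradation the paper quantifies.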
Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.
Seo, Aria; Jeong, Junho; Kim, Yeichang
2017-08-13
As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users' situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Reliability analysis of a robotic system using hybridized technique
NASA Astrophysics Data System (ADS)
Kumar, Naveen; Komal; Lather, J. S.
2017-09-01
In this manuscript, the reliability of a robotic system is analyzed using the available data (containing vagueness, uncertainty, etc.). Quantification of the involved uncertainties is done through data fuzzification using triangular fuzzy numbers with known spreads, as suggested by system experts. With fuzzified data, if the existing fuzzy lambda-tau (FLT) technique is employed, the computed reliability parameters have a wide range of predictions; the decision-maker therefore cannot suggest any specific and influential managerial strategy to prevent unexpected failures and consequently improve complex system performance. To overcome this problem, the present study utilizes a hybridized technique in which fuzzy set theory quantifies uncertainties, a fault tree models the system, the lambda-tau method formulates mathematical expressions for the failure/repair rates of the system, and a genetic algorithm solves the established nonlinear programming problem. Different reliability parameters of a robotic system are computed, and the results are compared with the existing technique. The components of the robotic system are assumed to follow the exponential distribution, i.e., to have constant failure rates. Sensitivity analysis is also performed, and the impact on the system mean time between failures (MTBF) of varying the other reliability parameters is addressed. Based on the analysis, some influential suggestions are given to improve the system performance.
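The data-fuzzification step described above can be sketched with alpha-cut arithmetic on triangular fuzzy numbers; the example below propagates assumed fuzzy failure rates and repair times through the standard lambda-tau AND-gate formula (an illustration of the fuzzification idea, not the authors' full hybrid method):

```python
def tri_cut(lo, mode, hi, alpha):
    """Alpha-cut [left, right] of a triangular fuzzy number (lo, mode, hi)."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def and_gate_lambda(lam1, tau1, lam2, tau2, alpha):
    """Interval for the lambda-tau AND-gate rate l1*l2*(t1 + t2) at a given
    alpha-cut; all quantities are positive, so endpoints combine monotonically."""
    l1 = tri_cut(*lam1, alpha); l2 = tri_cut(*lam2, alpha)
    t1 = tri_cut(*tau1, alpha); t2 = tri_cut(*tau2, alpha)
    return (l1[0] * l2[0] * (t1[0] + t2[0]),
            l1[1] * l2[1] * (t1[1] + t2[1]))

# Assumed fuzzy data: failure rates (per hour) and repair times (hours)
# with symmetric spreads of the kind suggested by system experts.
lam1, lam2 = (0.8e-3, 1.0e-3, 1.2e-3), (1.6e-3, 2.0e-3, 2.4e-3)
tau1, tau2 = (4.0, 5.0, 6.0), (2.0, 2.5, 3.0)
for alpha in (0.0, 0.5, 1.0):
    print(alpha, and_gate_lambda(lam1, tau1, lam2, tau2, alpha))
```

At alpha = 1 the interval collapses to the crisp lambda-tau value; the widening intervals at lower alpha are the "wide range of predictions" the abstract refers to.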
A study on reliability of power customer in distribution network
NASA Astrophysics Data System (ADS)
Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin
2017-05-01
The existing power supply reliability index system is oriented to the power system without considering the actual availability of electricity on the customer side. In addition, it cannot reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of the power customer. By comparison with power supply reliability, reliability of the power customer is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of the power customer in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, an evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has proved that the proposed reliability index system and evaluation method for the power customer are reasonable and effective.
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, hybridizing the well-known NSGA-II algorithm with an adaptive population-based simulated annealing (APBSA) method, is developed to solve systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate its appropriate parameters. Moreover, to examine the performance of the proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies show that the proposed algorithm is an effective approach for systems reliability and risk management.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, and TRON. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS, and we compare the proposed hazard rate model with typical conventional hazard rate models using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch
NASA Astrophysics Data System (ADS)
Luo, Wenjin
In this dissertation, several studies of electric power system reliability and economy assessment methods are presented; more precisely, several algorithms for evaluating power system reliability and economy are studied, and two novel algorithms are applied to this field with their simulation results compared against conventional results. As electrical power systems develop toward extra-high voltage, long distances, large capacity, and regional networking, many new types of equipment have been applied and the electricity market system has gradually been established, so the consequences of power cuts have become more and more serious. The electrical power system needs the highest possible reliability due to its complexity and its security requirements. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined while remaining as readable as conventional methods. The method has been applied to evaluate the IEEE reliability test system, and the simulation results obtained are close to the IEEE experimental data, which means it could be used for future studies of system reliability. Besides reliability, a modern power system is expected to be more economic. This dissertation therefore also presents a novel evolutionary algorithm, the quantum evolutionary membrane algorithm (QEPS), which combines the concepts and theory of quantum-inspired evolutionary algorithms and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests, and a conventional evolutionary algorithm is used to solve the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, i.e., the minimum cost of electricity supplied by the wind farm system.
The Challenge of Wireless Reliability and Coexistence.
Berger, H Stephen
2016-09-01
Wireless communication plays an increasingly important role in healthcare delivery. This further heightens the importance of wireless reliability, but quantifying wireless reliability is a complex and difficult challenge. Understanding the risks that accompany the many benefits of wireless communication should be a component of overall risk management. The trend of using sensors and other device-to-device communications, as part of the emerging Internet of Things concept, is evident in healthcare delivery and increases both the importance and the complexity of this challenge. As with most system problems, finding a solution requires breaking the problem down into manageable steps. Understanding the operational reliability of a new wireless device and its supporting system requires developing solid, quantified answers to three questions: 1) How well can this new device and its system operate in a spectral environment where many other wireless devices are also operating? 2) What is the spectral environment in which this device and its system are expected to operate, and are the risks and reliability in that operating environment acceptable? 3) How might the new device and its system affect other devices and systems already in use? When operated under an insightful risk management process, wireless technology can be safely implemented, resulting in improved delivery of care.
NASA Astrophysics Data System (ADS)
Yu, Zheng
2002-08-01
Facing the new demands of the optical fiber communications market, almost all the performance and reliability of an optical network system depend on the quality of its fiber optic components. Therefore, complying with system requirements through Telcordia/Bellcore reliability and high-power testing has become the key issue for fiber optic component manufacturers. Qualification under Telcordia/Bellcore reliability or high-power testing is crucial for manufacturers: it determines who stands out in an intensely competitive market. This testing also needs maintenance and optimization. Work on reliability and high-power testing has become a new market demand, and a way is needed to reach the 'Triple-Win' goal expected by the component makers, the reliability testers, and the system users. For those who face practical problems in testing, the following seven topics address how to avoid common mistakes and perform qualified reliability and high-power testing: qualification maintenance requirements for reliability testing; lot control when preparing for reliability testing; sample selection for reliability testing; interim measurements during reliability testing; basic reference factors relating to high-power testing; the necessity of re-qualification testing when production changes; and understanding similarity within a product family by definition.
Distribution System Reliability Analysis for Smart Grid Applications
NASA Astrophysics Data System (ADS)
Aljohani, Tawfiq Masad
Reliability of power systems is a key aspect of modern power system planning, design, and operation. The ascendance of the smart grid concept has raised high hopes of developing an intelligent, self-healing network capable of overcoming the interruption problems that face utilities and cost them tens of millions in repairs and losses. To address reliability concerns, power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection point between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications on improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system, measured as changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and to measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power-distribution package developed by General Reliability Co.
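For reference, the reliability indices named above are simple aggregates of outage records; the sketch below (customer counts and outage data assumed, not from the IEEE 34-node study) computes SAIFI, SAIDI, and the derived CAIDI:

```python
N_CUSTOMERS = 1000   # customers served by the feeder (assumed)

# (customers interrupted, outage duration in hours) per sustained event
outages = [(200, 1.5), (50, 4.0), (600, 0.5)]

saifi = sum(n for n, _ in outages) / N_CUSTOMERS      # interruptions/customer
saidi = sum(n * d for n, d in outages) / N_CUSTOMERS  # outage hours/customer
caidi = saidi / saifi                                 # hours per interruption
print(f"SAIFI={saifi:.2f}  SAIDI={saidi:.2f} h  CAIDI={caidi:.2f} h")
```

EUE additionally weights each event by the unserved load, so it needs demand data as well as the outage records.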
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
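The unequal-probability k-out-of-n computation at the heart of RELAV can be reproduced with a short dynamic program in the spirit of the cited Barlow & Heidtmann algorithm; the sketch below (an illustrative reimplementation, not RELAV's code) also shows the "folding" idea of treating a group result as a single component at the next level:

```python
def k_out_of_n(k, probs):
    """P(at least k of n independent items succeed), for unequal
    success probabilities, via the exact success-count distribution."""
    dist = [1.0]                       # dist[j] = P(j successes so far)
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for j, pj in enumerate(dist):
            new[j] += pj * (1.0 - p)   # this item fails
            new[j + 1] += pj * p       # this item works
        dist = new
    return sum(dist[k:])

# Example group: 2-out-of-4 with unequal item reliabilities.
group = k_out_of_n(2, [0.95, 0.90, 0.85, 0.80])
print(group)

# Folding: the group result becomes a single "component" one level up,
# here 1-out-of-2 (parallel) with another item.
print(k_out_of_n(1, [group, 0.99]))
```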
Improving the Cost Efficiency and Readiness of MC-130 Aircrew Training: A Case Study
2015-01-01
Jiang, Changbing, "A Reliable Solver of Euclidean Traveling Salesman Problems with Microsoft Excel Add-in Tools for Small-size Systems." Figure 4.5: Training Resources Locations Traveling Salesperson Problem. In order to participate in training, aircrews must fly to the
Photovoltaic power system reliability considerations
NASA Technical Reports Server (NTRS)
Lalli, V. R.
1980-01-01
An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.
Structural system reliability calculation using a probabilistic fault tree analysis method
NASA Technical Reports Server (NTRS)
Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.
1992-01-01
The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computation-intensive calculations. A computer program has been developed to implement the PFTA method.
Photovoltaic power system reliability considerations
NASA Technical Reports Server (NTRS)
Lalli, V. R.
1980-01-01
This paper describes an example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems. This particular application was for a solar cell power system demonstration project in Tangaye, Upper Volta, Africa. The techniques involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of a fail-safe and planned spare parts engineering philosophy.
Design of an integrated airframe/propulsion control system architecture
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.
1990-01-01
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that used both reliability and performance tools. An account is given of the motivation for the final design and problems associated with both reliability and performance modeling. The appendices contain a listing of the code for both the reliability and performance model used in the design.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained simply as functioning or failed, whereas in many real situations failures may arise from many causes, depending upon the age and environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases, because Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing-risk systems, limited to models with independent causes of failure and using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in one true parameter value relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods are better than maximum likelihood. The sensitivity analyses show some sensitivity to shifts in the prior locations, and they also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
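A simplified numerical illustration of the Bayesian-versus-maximum-likelihood comparison above, restricted to the special case of a Weibull with known shape beta = 1 (the exponential), where a Gamma prior on the failure rate is conjugate (prior parameters and the true rate are assumed for the demo):

```python
import random

random.seed(1)
true_rate = 0.5
a0, b0 = 2.0, 4.0   # Gamma prior with mean a0/b0 = 0.5: good prior info

for n in (5, 20, 100):
    data = [random.expovariate(true_rate) for _ in range(n)]
    total_time = sum(data)               # total time on test
    mle = n / total_time                 # maximum likelihood estimate
    post_mean = (a0 + n) / (b0 + total_time)  # Gamma(a0+n, b0+T) posterior mean
    print(n, round(mle, 3), round(post_mean, 3))
```

With few failures the posterior mean stays near the truth while the MLE swings widely, which is the small-sample advantage the abstract describes.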
ERIC Educational Resources Information Center
Humpherys, Sean LaMarc
2010-01-01
Given the increasing problem of fraud, crime, and national security threats, assessing credibility is a recurring research topic in Information Systems and in other disciplines. Decision support systems can help. But the success of the system depends on reliable cues that can distinguish deceptive/truthful behavior and on a proven classification…
The Modified Cognitive Constructions Coding System: Reliability and Validity Assessments
ERIC Educational Resources Information Center
Moran, Galia S.; Diamond, Gary M.
2006-01-01
The cognitive constructions coding system (CCCS) was designed for coding clients' expressed problem constructions on four dimensions: intrapersonal-interpersonal, internal-external, responsible-not responsible, and linear-circular. This study introduces, and examines the reliability and validity of, a modified version of the CCCS--a version that…
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bucknor, Matthew; Grabaskas, David; Brunett, Acacia
2015-04-26
Advanced small modular reactor designs include many advantageous design features, such as passively driven safety systems that are arguably more reliable and cost-effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.
Reliability analysis of multicellular system architectures for low-cost satellites
NASA Astrophysics Data System (ADS)
Erlank, A. O.; Bridges, C. P.
2018-06-01
Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
Couples' Reports of Relationship Problems in a Naturalistic Therapy Setting
ERIC Educational Resources Information Center
Boisvert, Marie-Michele; Wright, John; Tremblay, Nadine; McDuff, Pierre
2011-01-01
Understanding couples' relationship problems is fundamental to couple therapy. Although research has documented common relationship problems, no study has used open-ended questions to explore problems in couples seeking therapy in naturalistic settings. The present study used a reliable coding system to explore the relationship problems reported…
Reliability of Fault Tolerant Control Systems. Part 2
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2000-01-01
This paper reports Part II of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with redundancy-management decisions due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a less stringent control performance requirement.
Mechanical system reliability for long life space systems
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1994-01-01
The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.
Basic Principles of Electrical Network Reliability Optimization in Liberalised Electricity Market
NASA Astrophysics Data System (ADS)
Oleinikova, I.; Krishans, Z.; Mutule, A.
2008-01-01
The authors propose to select long-term solutions to the reliability problems of electrical networks at the development planning stage. The guidelines, or basic principles, of such optimization are: 1) its dynamic nature; 2) development sustainability; 3) integrated solution of the problems of network development and electricity supply reliability; 4) consideration of information uncertainty; 5) concurrent consideration of network and generation development problems; 6) application of specialized information technologies; and 7) definition of requirements for independent electricity producers. The article reviews the major aspects of the liberalized electricity market and its functions and tasks, with emphasis placed on the optimization of electrical network development as a significant component of sustainable power system management.
An overview of fatigue failures at the Rocky Flats Wind System Test Center
NASA Technical Reports Server (NTRS)
Waldon, C. A.
1981-01-01
Potential design problems in small wind energy conversion systems (SWECS) were identified to improve product quality and reliability. Mass-produced components such as gearboxes, generators, and bearings are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, at the interfaces of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.
Designing Fault-Injection Experiments for the Reliability of Embedded Systems
NASA Technical Reports Server (NTRS)
White, Allan L.
2012-01-01
This paper considers the long-standing problem of conducting fault-injection experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of them, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field data, arguments-from-design, and fault injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily observed parameters. The paper includes an example of a system subject to both permanent and transient faults, and a discussion of integrating fault injection with field data and arguments-from-design.
Some Solved Problems with the SLAC PEP-II B-Factory Beam-Position Monitor System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Ronald G.
2000-05-05
The Beam-Position Monitor (BPM) system for the SLAC PEP-II B-Factory has been in operation for over two years. Although the BPM system has met all of its specifications, several problems with the system have been identified and solved. The problems include errors and limitations in both the hardware and software. Solutions of such problems have led to improved performance and reliability. In this paper the authors report on this experience. The process of identifying problems is not at an end and they expect continued improvement of the BPM system.
Heroic Reliability Improvement in Manned Space Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. The reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, an expected time of 2/lambda. Cutting the failure rate in half thus requires doubling the test and redesign time spent finding and eliminating failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
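The abstract's arithmetic implies a simple test-time schedule: each successively rarer cause takes an expected 1/lambda hours of operation to observe once, so the budget roughly doubles with each halving of the residual rate. A minimal back-of-the-envelope sketch, assuming independent causes with made-up rates that are removed as soon as they are seen:

```python
# Back-of-the-envelope reliability-growth schedule: observing (and then fixing)
# a failure cause with rate lam takes an expected 1/lam test hours, so each
# halving of the residual failure rate roughly doubles the test time.

def growth_schedule(cause_rates):
    """Expected cumulative test time as failure causes are found and removed.

    cause_rates: per-hour rates of the individual correctable failure causes,
    assumed independent and removable once observed (the abstract's strong
    assumption). Causes are found roughly in order of decreasing rate.
    """
    schedule = []
    elapsed = 0.0
    remaining = sorted(cause_rates, reverse=True)
    while remaining:
        lam = remaining.pop(0)          # most frequent remaining cause
        elapsed += 1.0 / lam            # expected time to observe it once
        residual = sum(remaining)       # system failure rate after the fix
        schedule.append((elapsed, residual))
    return schedule

for t, rate in growth_schedule([1e-2, 5e-3, 2.5e-3, 1.25e-3]):   # made-up rates
    print(f"after {t:8.0f} test hours, residual rate = {rate:.2e}/hour")
```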
NASA Astrophysics Data System (ADS)
Chaitusaney, Surachai; Yokoyama, Akihiko
In a distribution system, distributed generation (DG) is expected to improve system reliability by serving as backup generation. However, DG contribution to fault current may cause the loss of existing protection coordination, e.g. recloser-fuse coordination and breaker-breaker coordination. This problem can drastically deteriorate system reliability, and it is more serious and complicated when there are several DG sources in the system. Hence, this conflict between reliability benefits and protection risks needs detailed investigation before DG is installed or enhanced. A model of composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied to another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared on the distribution reliability test system RBTS Bus 2.
Design and Research of the Sewage Treatment Control System
NASA Astrophysics Data System (ADS)
Chu, J.; Hu, W. W.
Due to the rapid development of China's economy, water pollution has become a problem that must be faced; in particular, the treatment of industrial wastewater has become a top priority. In wastewater treatment, PLC-based control systems have met the design requirements for real-time response, reliability, precision, and so on. The integration of sequence control and process control in a PLC offers high reliability, a simple network, and convenient, flexible use, making the PLC a powerful tool for small and medium-sized industrial automation. Therefore, a sewage treatment control system that takes a PLC as its core can, to a certain extent, effectively solve the industrial wastewater problem.
Reliability modeling of fault-tolerant computer based systems
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1987-01-01
Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes (in the Markov sense) that result from replicated redundant hardware, and the modeling of factors that can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
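As a toy instance of the small Markov models discussed above, the sketch below computes the transient reliability of a duplex system with repair directly from its generator matrix; the failure and repair rates are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Minimal Markov reliability model of a duplex (one-spare) system with repair.
lam = 1e-3   # per-hour failure rate of each unit (assumed)
mu = 1e-1    # per-hour repair rate (assumed)

# States: 0 = both units up, 1 = one unit up (repair underway), 2 = system failed.
Q = np.array([
    [-2 * lam,      2 * lam,  0.0],   # either unit can fail
    [      mu, -(mu + lam),   lam],   # repair back to duplex, or second failure
    [     0.0,          0.0,  0.0],   # absorbing failure state
])

p0 = np.array([1.0, 0.0, 0.0])        # start with both units up
for t in (10.0, 100.0, 1000.0):
    p_t = p0 @ expm(Q * t)            # transient state probabilities at time t
    print(f"t = {t:6.0f} h   reliability = {1.0 - p_t[2]:.6f}")
```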
A Delay-Aware and Reliable Data Aggregation for Cyber-Physical Sensing
Zhang, Jinhuan; Long, Jun; Zhang, Chengyuan; Zhao, Guihu
2017-01-01
Physical information sensed by various sensors in a cyber-physical system should be collected for further operation. In many applications, data aggregation should take reliability and delay into consideration. To address these problems, a novel Tiered Structure Routing-based Delay-Aware and Reliable Data Aggregation scheme named TSR-DARDA for spherical physical objects is proposed. By dividing the spherical network constructed by dispersed sensor nodes into circular tiers with specifically designed widths and cells, TSR-DARDA tries to enable as many nodes as possible to transmit data simultaneously. In order to ensure transmission reliability, lost packets are retransmitted. Moreover, to minimize the latency while maintaining reliability for data collection, in-network aggregation and broadcast techniques are adopted to deal with the transmission between data collecting nodes in the outer layer and their parent data collecting nodes in the inner layer. Thus, the optimization problem is transformed to minimize the delay under reliability constraints by controlling the system parameters. To demonstrate the effectiveness of the proposed scheme, we have conducted extensive theoretical analysis and comparisons to evaluate the performance of TSR-DARDA. The analysis and simulations show that TSR-DARDA leads to lower delay with reliability satisfaction. PMID:28218668
Method of Testing and Predicting Failures of Electronic Mechanical Systems
NASA Technical Reports Server (NTRS)
Iverson, David L.; Patterson-Hine, Frances A.
1996-01-01
A method is disclosed that employs a knowledge base of human expertise, comprising a reliability model analysis, to implement diagnostic routines. The reliability analysis comprises digraph models that determine target events created by hardware failures, human actions, and other factors affecting system operation. The reliability analysis contains a wealth of human expertise that is used to build automatic diagnostic routines and provides a knowledge base that can be used to solve other artificial intelligence problems.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even though a deterministic optimization may yield a cost-effective structure, the design becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite-element 2D truss problems and a planar beam problem are presented and discussed.
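A minimal sketch of the Monte Carlo step described above: sample a probabilistic material strength, push it through a deliberately trivial limit state standing in for the LS-DYNA containment model, and count failures. The lognormal strength and normal stress parameters are invented placeholders, not the study's Kevlar 49 data.

```python
import numpy as np

# Monte Carlo estimate of failure probability with a probabilistic material
# property. Distribution parameters are illustrative, not experimental data.
rng = np.random.default_rng(0)

n = 200_000
strength = rng.lognormal(mean=np.log(3.0), sigma=0.12, size=n)  # GPa, assumed
stress = rng.normal(loc=2.2, scale=0.15, size=n)                # GPa, assumed

g = strength - stress          # limit state: g < 0 means failure
pf = np.mean(g < 0.0)
se = np.sqrt(pf * (1.0 - pf) / n)
print(f"P_f ~= {pf:.4e} +/- {2 * se:.1e} (95% MC error)")
```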
Autonomous navigation system based on GPS and magnetometer data
NASA Technical Reports Server (NTRS)
Thienel, Julie K. (Inventor); Harman, Richard R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)
2004-01-01
This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time of the navigation estimate and improving overall accuracy. The magnetometer-GPS configuration also enables the system to avoid a costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention provides for black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input, with high accuracy and reliability.
NASA trend analysis procedures
NASA Technical Reports Server (NTRS)
1993-01-01
This publication is primarily intended for use by NASA personnel engaged in managing or implementing trend analysis programs. 'Trend analysis' refers to the observation of current activity in the context of the past in order to infer the expected level of future activity. NASA trend analysis was divided into 5 categories: problem, performance, supportability, programmatic, and reliability. Problem trend analysis uncovers multiple occurrences of historical hardware or software problems or failures in order to focus future corrective action. Performance trend analysis observes changing levels of real-time or historical flight vehicle performance parameters such as temperatures, pressures, and flow rates as compared to specification or 'safe' limits. Supportability trend analysis assesses the adequacy of the spaceflight logistics system; example indicators are repair turnaround time and parts stockage levels. Programmatic trend analysis uses quantitative indicators to evaluate the 'health' of NASA programs of all types. Finally, reliability trend analysis attempts to evaluate the growth of system reliability based on a decreasing rate of occurrence of hardware problems over time. Procedures for conducting all five types of trend analysis are provided in this publication, prepared through the joint efforts of the NASA Trend Analysis Working Group.
NASA Astrophysics Data System (ADS)
Frommer, Joshua B.
This work develops and implements a solution framework that allows for an integrated solution to a resource allocation system-of-systems problem associated with designing vehicles for integration into an existing fleet to extend that fleet's capability while improving efficiency. Typically, aircraft design focuses on a specific design mission, while a fleet perspective would provide a broader capability. Aspects of design for both the vehicles and missions may be, for simplicity, deterministic in nature or, in a model that reflects actual conditions, uncertain. Toward this end, the set of tasks or goals for the to-be-planned system-of-systems will be modeled more accurately with non-deterministic values, and the designed platforms will be evaluated using reliability analysis. The reliability, defined as the probability of a platform or set of platforms to complete possible missions, will contribute to the fitness of the overall system. The framework includes building surrogate models for metrics such as capability and cost, and includes the ideas of reliability in the overall system-level design space. The concurrent design and allocation system-of-systems problem is a multi-objective mixed-integer nonlinear programming (MINLP) problem. This study considered two system-of-systems problems that seek to simultaneously design new aircraft and allocate these aircraft into a fleet to provide a desired capability. The Coast Guard's Integrated Deepwater System program inspired the first problem, which consists of a suite of search-and-find missions for aircraft based on descriptions from the National Search and Rescue Manual. The second represents suppression of enemy air defense (SEAD) operations similar to those carried out by the U.S. Air Force, proposed as part of the Department of Defense Network Centric Warfare structure, and depicted in MIL-STD-3013. The two problems seem similar, with long surveillance segments, but because of the complex nature of aircraft design, the analysis of a vehicle for high-speed attack combined with a long loiter period is considerably different from that for quick cruise to an area combined with a low-speed search. However, the framework developed to solve this class of system-of-systems problem handles both scenarios and leads to a solution type for this kind of problem. On the vehicle level of the problem, different technology can have an impact on the fleet level. One such technology is morphing, the ability to change shape, which is an ideal candidate technology for missions with dissimilar segments, such as the aforementioned two. A framework using surrogate models based on optimally sized aircraft, and using probabilistic parameters to define a concept of operations, is investigated; this has provided insight into the setup of the optimization problem, the use of the reliability metric, and the measurement of fleet-level impacts of morphing aircraft. The research consisted of four phases. The two initial phases built and defined the framework to solve the system-of-systems problem; these investigations used the search-and-find scenario as the example application. The first phase included the design of fixed-geometry and morphing aircraft for a range of missions and evaluated the aircraft capability using non-deterministic mission parameters. The second phase introduced the idea of multiple aircraft in a fleet, but only considered a fleet consisting of one aircraft type.
The third phase incorporated the simultaneous design of a new vehicle and allocation into a fleet for the search-and-find scenario; in this phase, multiple types of aircraft are considered. The fourth phase repeated the simultaneous new aircraft design and fleet allocation for the SEAD scenario to show that the approach is not specific to the search-and-find scenario. The framework presented in this work appears to be a viable approach for concurrently designing and allocating constituents in a system, specifically aircraft in a fleet. The research also shows that new technology impact can be assessed at the fleet level using conceptual design principles.
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
Measures of Reliability in Behavioral Observation: The Advantage of "Real Time" Data Acquisition.
ERIC Educational Resources Information Center
Hollenbeck, Albert R.; Slaby, Ronald G.
Two observers who were using an electronic digital data acquisition system were spot checked for reliability at random times over a four month period. Between-and within-observer reliability was assessed for frequency, duration, and duration-per-event measures of four infant behaviors. The results confirmed the problem of observer drift--the…
Reliability of the social skills rating system in a group of Iranian children.
Shahim, S
2001-12-01
The purpose of this study was to investigate reliability of the Social Skills Rating Systems of Gresham and Elliott for use in Iran. The sample consisted of 304 students aged 6 to 12 years, selected from the elementary schools in Shiraz, Iran. Parents' and teachers' ratings of social skills and behavioural problems and self-rating of social skills were applied in this study. Pearson correlations between parents' and teachers' ratings were low to moderate. Correlations between social skills subdomains and behavioural problems subdomains were low to high. Cronbach coefficients alpha were satisfactory for the two subdomains.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
Many models have been proposed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A number of techniques and methodologies may be used for reliability prediction, and parameter selection deserves attention when estimating reliability: the reliability of a system may increase or decrease depending on the parameters used, so the factors that most heavily affect system reliability must be identified. At present, reusability is used in many areas of research; it is the basis of Component-Based Systems (CBS). Cost, time, and human skill can be saved using Component-Based Software Engineering (CBSE) concepts, and CBSE metrics may be used to assess which techniques are more suitable for estimating system reliability. Soft computing is used for small- as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities exist for applying soft computing techniques to problems in medicine: clinical medical science uses fuzzy logic and neural-network methodology significantly, while basic medical science most frequently and preferably uses neural-network-genetic-algorithm hybrids, and medical scientists have shown considerable interest in applying soft computing methodologies in the genetics, physiology, radiology, cardiology and neurology disciplines. CBSE encourages users to reuse past and existing software in making new products, providing quality with savings of time, memory space, and money. This paper focuses on the assessment of commonly used soft computing techniques: the Genetic Algorithm (GA), Neural Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents the working of these soft computing techniques and assesses them for reliability prediction; the parameters considered while estimating and predicting reliability are also discussed. This study can be used in estimating and predicting the reliability of various instruments used in medical systems, as well as in software engineering, computer engineering and mechanical engineering, and the concepts can be applied to both software and hardware to predict reliability using CBSE.
Kumar, Mohit; Yadav, Shiv Prasad
2012-03-01
This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set; in practical problems, however, such a situation rarely occurs. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system having components that follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of fuzzy reliability are constructed for a series system and a parallel system. Our study generalizes various works in the literature, and numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
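For the plain-fuzzy special case, the construction can be illustrated with alpha-cuts: at each cut a triangular fuzzy failure rate becomes an interval, and monotonicity of R = exp(-lambda*t) maps it to an interval of reliabilities. The paper's intuitionistic setting adds non-membership functions and nonlinear programming; the sketch below, with invented rates, covers only the triangular membership side.

```python
import numpy as np

# Alpha-cut sketch of fuzzy reliability for a series system: each component
# has a triangular fuzzy failure rate (a, m, b); at every alpha-cut the rate
# is an interval, and because R = exp(-lam * t) is monotone decreasing in lam,
# the cut of the fuzzy reliability follows directly. Rates are made up.

def tri_cut(a, m, b, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    return a + alpha * (m - a), b - alpha * (b - m)

def series_reliability_cut(rates, t, alpha):
    """Alpha-cut of the fuzzy reliability of a series system at time t."""
    lo_sum = sum(tri_cut(*r, alpha)[0] for r in rates)
    hi_sum = sum(tri_cut(*r, alpha)[1] for r in rates)
    # A larger total rate gives a smaller reliability, so the bounds swap.
    return np.exp(-hi_sum * t), np.exp(-lo_sum * t)

rates = [(1e-4, 2e-4, 4e-4), (5e-5, 1e-4, 2e-4)]   # per hour, illustrative
for alpha in (0.0, 0.5, 1.0):
    lo, hi = series_reliability_cut(rates, t=1000.0, alpha=alpha)
    print(f"alpha={alpha:.1f}: R in [{lo:.4f}, {hi:.4f}]")
```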
The scientific data acquisition system of the GAMMA-400 space project
NASA Astrophysics Data System (ADS)
Bobkov, S. G.; Serdin, O. V.; Gorbunov, M. S.; Arkhangelskiy, A. I.; Topchiev, N. P.
2016-02-01
The scientific data acquisition system (SDAS) designed by SRISA for the GAMMA-400 space project is described. We consider the problem of unifying electronics at different levels: a set of reliable fault-tolerant integrated circuits fabricated in 0.25 µm Silicon-on-Insulator CMOS technology, and the high-speed interfaces and reliable modules used in the space instruments. The characteristics of the reliable fault-tolerant very large scale integration (VLSI) technology designed by SRISA for developing computation systems for space applications are considered. The scalable network structure of the SDAS, based on the Serial RapidIO interface and including the real-time operating system BAGET, is also described.
On Space Exploration and Human Error: A Paper on Reliability and Safety
NASA Technical Reports Server (NTRS)
Bell, David G.; Maluf, David A.; Gawdiak, Yuri
2005-01-01
NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk, and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will experience some of the known or unknown risks embedded in the Apollo, Shuttle, or Station missions unless something alters how NASA perceives and manages safety and reliability.
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for onboard satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the functional division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields a system reliability matrix, and the reliability of the network system can be deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
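A toy version of the task-based analysis: a task succeeds if at least one of its routes through the network is fully up. The sketch assumes link-disjoint, independent paths (a simplification of the paper's reliability-matrix method) and invented component reliabilities, but it reproduces the qualitative finding that the redundant architecture outperforms the basic one.

```python
from math import prod

# Task reliability over redundant routes: a path works only if every component
# (node or link) on it works; with link-disjoint independent paths, the task
# fails only if all of its paths fail. Reliabilities are illustrative.

def path_reliability(component_reliabilities):
    return prod(component_reliabilities)

def task_reliability(paths):
    p_all_fail = prod(1.0 - path_reliability(p) for p in paths)
    return 1.0 - p_all_fail

# Nominal and dual-redundant routings for one task.
basic = [[0.999, 0.998, 0.999]]
redundant = basic + [[0.997, 0.999, 0.998]]
print(f"basic architecture:     {task_reliability(basic):.6f}")
print(f"redundant architecture: {task_reliability(redundant):.6f}")
```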
NASA Technical Reports Server (NTRS)
1973-01-01
The ALERT program, a system for communicating common problems with parts, materials, and processes, is condensed and catalogued. Expanded information on selected topics is provided by relating the problem area (failure) to the cause, the investigations and findings, the suggestions for avoidance (inspections, screening tests, proper part applications), and failure analysis procedures. The basic objective of ALERT is the avoidance of the recurrence of parts, materials, and processes problems, thus improving the reliability of equipment produced for and used by the government.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
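The discipline equivalence both studies above exploit is easy to demonstrate outside NESSUS: a steady 1D heat-conduction problem assembles and solves with exactly the same two-node element machinery as a chain of axial springs, with conductance kA/L playing stiffness EA/L, temperature playing displacement, and heat flux playing force. A minimal sketch with illustrative values:

```python
import numpy as np

# The same 2-node element assembly serves both disciplines: swap stiffness
# EA/L for conductance kA/L and displacements become temperatures.

def assemble(coeffs):
    """Assemble the global matrix from per-element coefficients."""
    n = len(coeffs) + 1
    K = np.zeros((n, n))
    for e, c in enumerate(coeffs):
        K[e:e + 2, e:e + 2] += c * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

cond = [5.0, 5.0, 2.0]           # element conductances kA/L in W/K, assumed
K = assemble(cond)
T_left = 400.0                   # fixed temperature at node 0 (K)
q_right = 50.0                   # heat inflow at the last node (W)

# Apply the essential boundary condition by partitioning, exactly as a
# structural solver would for a prescribed displacement.
free = slice(1, None)
rhs = np.zeros(K.shape[0]); rhs[-1] = q_right
rhs_free = rhs[free] - K[free, 0] * T_left
T = np.concatenate([[T_left], np.linalg.solve(K[free, free], rhs_free)])
print("nodal temperatures:", np.round(T, 2))   # expect 400, 410, 420, 445
```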
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xiangqi; Zhang, Yingchen
This paper presents an optimal voltage control methodology with coordination among different voltage-regulating resources, including controllable loads, distributed energy resources such as energy storage and photovoltaics (PV), and utility voltage-regulating devices such as voltage regulators and capacitors. The proposed methodology could effectively tackle the overvoltage and voltage regulation device distortion problems brought by high penetrations of PV to improve grid operation reliability. A voltage-load sensitivity matrix and voltage-regulator sensitivity matrix are used to deploy the resources along the feeder to achieve the control objectives. Mixed-integer nonlinear programming is used to solve the formulated optimization control problem. The methodology has been tested on the IEEE 123-feeder test system, and the results demonstrate that the proposed approach could actively tackle the voltage problem brought about by high penetrations of PV and improve the reliability of distribution system operation.
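The sensitivity-matrix idea can be shown in isolation: with delta_V = S * delta_P linearizing the feeder, the controller looks for power adjustments that cancel the overvoltage. The 3-node S matrix and voltages below are invented, and a plain least-squares solve stands in for the paper's mixed-integer nonlinear program:

```python
import numpy as np

# Voltage-load sensitivity sketch: delta_V = S @ delta_P. Given overvoltages
# from PV injections, find real-power adjustments that pull node voltages back
# inside the limit. Matrix and voltages are invented for illustration.
S = np.array([[0.012, 0.006, 0.003],
              [0.006, 0.015, 0.007],
              [0.003, 0.007, 0.018]])   # p.u. volts per MW, assumed
v = np.array([1.058, 1.062, 1.049])     # current node voltages, p.u.
v_max = 1.05

excess = np.maximum(v - v_max, 0.0)     # violation at each node
# Least-squares adjustment (negative = curtail PV / charge storage) that
# cancels the excess; a real controller adds bounds and integer devices.
dp = np.linalg.lstsq(S, -excess, rcond=None)[0]
print("power adjustments (MW):", np.round(dp, 2))
print("predicted voltages:", np.round(v + S @ dp, 4))
```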
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In order to rectify the problems that, in traditional reliability evaluation of machine center components, the component reliability model exhibits deviation and the evaluation result is low because failure propagation is overlooked, a new reliability evaluation method based on cascading failure analysis and failure-influenced-degree assessment is proposed. A directed graph model of cascading failures among components is established according to cascading failure mechanism analysis and graph theory. The failure-influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influenced degree, which provides a theoretical basis for reliability allocation of machine center systems.
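The influence-ranking step can be sketched generically: build the directed cascading-failure graph, with an edge from i to j meaning that failure of i can induce failure of j, and run PageRank so that components into which many cascades feed (large propagated, as opposed to inherent, failure probability) score high. The four components and their adjacency matrix are hypothetical:

```python
import numpy as np

# Cascading-failure graph: A[i, j] = 1 means failure of i can induce failure
# of j. PageRank then scores components by how strongly cascades feed into
# them. Component names and links are assumed for illustration.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def pagerank(adj, damping=0.85, iters=100):
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Rows with no outgoing edges (dangling nodes) spread uniformly.
    M = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M.T @ r)
    return r / r.sum()

scores = pagerank(A)
for name, s in zip(["spindle", "toolholder", "feed axis", "table"], scores):
    print(f"{name:10s} failure-influenced-degree score: {s:.3f}")
```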
Recent advances in computational structural reliability analysis methods
NASA Astrophysics Data System (ADS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
Recent advances in computational structural reliability analysis methods
NASA Technical Reports Server (NTRS)
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-01-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single-mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
NASA Technical Reports Server (NTRS)
Feldstein, J. F.
1977-01-01
Failure data from 16 commercial spacecraft were analyzed to evaluate failure trends, reliability growth, and effectiveness of tests. It was shown that the test programs were highly effective in ensuring a high level of in-orbit reliability. There was only a single catastrophic problem in 44 years of in-orbit operation on 12 spacecraft. The results also indicate that in-orbit failure rates are highly correlated with unit and systems test failure rates. The data suggest that test effectiveness estimates can be used to guide the content of a test program to ensure that in-orbit reliability goals are achieved.
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor location should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires the minimization of undetectability and false alarm probability due to random factors in sensor readings, which is not only related to the sensor readings themselves but also affected by fault propagation. This paper introduces reliability criteria expressions based on the missed/false alarm probability of each sensor and the system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, the proposed method is illustrated on a boiler system. PMID:22291524
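The paper's heuristic procedure is its own; as a generic stand-in, the greedy sketch below places sensors on a small directed fault-propagation graph, trading missed-detection probability against a false-alarm penalty per added sensor. The graph, probabilities, and weights are all invented:

```python
# Greedy sensor placement on a directed fault-propagation graph: a fault is
# observable at every node it propagates to (reachability), each sensor has a
# miss probability and a false-alarm probability, and we pick locations that
# minimize missed faults plus a false-alarm penalty. All numbers are invented.

edges = {"f1": ["m1"], "f2": ["m1", "m2"], "m1": ["out"], "m2": ["out"]}
nodes = ["f1", "f2", "m1", "m2", "out"]
faults = ["f1", "f2"]
p_miss = 0.05      # per-sensor miss probability, assumed identical
p_false = 0.01     # per-sensor false-alarm probability
w_false = 2.0      # relative weight of false alarms

def reach(src):
    seen, stack = {src}, [src]
    while stack:
        for nxt in edges.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt); stack.append(nxt)
    return seen

cover = {f: reach(f) for f in faults}   # nodes at which each fault is visible

def cost(sensors):
    # A fault is missed only if every covering sensor misses it.
    missed = sum(p_miss ** sum(s in cover[f] for s in sensors) for f in faults)
    return missed + w_false * p_false * len(sensors)

chosen = []
while len(chosen) < 3:
    best = min((n for n in nodes if n not in chosen),
               key=lambda n: cost(chosen + [n]))
    if cost(chosen + [best]) >= cost(chosen):
        break                      # adding more sensors no longer pays off
    chosen.append(best)
print("selected sensor locations:", chosen, " cost:", round(cost(chosen), 4))
```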
An abstract specification language for Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, R. W.
1985-01-01
Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.
An abstract language for specifying Markov reliability models
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
1986-01-01
Markov models can be used to compute the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in a model of a complex system can be devastatingly tedious and error-prone. An approach to this problem is presented utilizing an abstract model definition language. This high-level language is described in a nonformal manner and illustrated by example.
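What such an abstract specification language buys can be sketched in a few lines: instead of enumerating states by hand, one states transition rules and lets the tool unfold the reachable chain. The toy below, not the paper's actual language, does this for a processor triad with two cold spares and illustrative rates:

```python
from collections import deque

# Rule-based Markov state-space generation: states are (active, spares) pairs,
# and each rule yields (rate, successor) for the transitions allowed from a
# state. Rates and the failure condition are illustrative.
LAM = 1e-4   # active-processor failure rate per hour, assumed
DELTA = 1e2  # spare-activation (recovery) rate per hour, assumed

def rules(state):
    active, spares = state
    if active >= 2 and spares > 0:
        yield (active * LAM, (active - 1, spares))      # a fault hits a unit
        if active < 3:
            yield (DELTA, (active + 1, spares - 1))     # spare swaps in
    elif active >= 1:                                   # no spares: attrition
        yield (active * LAM, (active - 1, spares))

start = (3, 2)
failed = lambda s: s[0] < 2          # majority voting lost below 2 active

# Breadth-first unfolding of the reachable state space.
states, transitions, queue = {start}, [], deque([start])
while queue:
    s = queue.popleft()
    if failed(s):
        continue                     # absorbing failure states
    for rate, t in rules(s):
        transitions.append((s, t, rate))
        if t not in states:
            states.add(t); queue.append(t)

print(f"{len(states)} states, {len(transitions)} transitions")
for s, t, r in transitions:
    print(f"  {s} -> {t}  rate {r:g}")
```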
Future Directions in Navy Electronic System Reliability and Survivability.
1981-06-01
NAVAL OCEAN SYSTEMS CENTER, SAN DIEGO, CA 92152. ...maintenance policy is proposed as one remedy to these problems. To implement this policy, electronic systems which are very reliable and which include health ...distribute vital data, data-processing capability, and communication capability through the use of intraship and intership networks. The capability to...
Optimal reconfiguration strategy for a degradable multimodule computing system
NASA Technical Reports Server (NTRS)
Lee, Yann-Hang; Shin, Kang G.
1987-01-01
The present quantitative approach to the problem of reconfiguring a degradable multimodule system assigns some modules to computation and arranges others for reliability. By using expected total reward as the optimality criterion, there emerges an active reconfiguration strategy based not only on the occurrence of failures but also on the progression of the given mission. This reconfiguration strategy requires specification of the times at which the system should undergo reconfiguration and the configurations to which the system should change. The optimal reconfiguration problem is converted to integer nonlinear knapsack and fractional programming problems.
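To make the module-assignment trade concrete, the sketch below enumerates how many of N modules to keep active, with the rest as spares, under a crude reward model of our own construction (reward rate grows sublinearly with active modules; the system dies when fewer than k remain); it estimates expected total reward by simulation rather than the paper's knapsack and fractional programming formulation:

```python
import numpy as np

# Enumerate k = number of active modules (N - k spares). Failures consume one
# module at a time at total rate k*LAM while the system is alive; the system
# dies at the (N-k+1)-th failure. Reward accrues at rate k**0.8 (assumed
# diminishing returns from parallelism). All numbers are illustrative.
rng = np.random.default_rng(1)
N, LAM, T = 8, 0.02, 50.0     # modules, per-module failure rate, mission time

def expected_reward(k, runs=20_000):
    gaps = rng.exponential(1.0 / (k * LAM), size=(runs, N - k + 1))
    life = np.minimum(gaps.sum(axis=1), T)   # operational time within mission
    return (k ** 0.8) * life.mean()

rewards = {k: expected_reward(k) for k in range(1, N + 1)}
for k, r in rewards.items():
    print(f"k={k}: expected total reward ~ {r:7.2f}")
print("best number of active modules:", max(rewards, key=rewards.get))
```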
Parallelized reliability estimation of reconfigurable computer networks
NASA Technical Reports Server (NTRS)
Nicol, David M.; Das, Subhendu; Palumbo, Dan
1990-01-01
A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-17
.../ or clarification of Order No. 773: NERC, American Public Power Association (APPA); American Wind... developed a list of facilities that have the potential to cause cascading problems on the system as well as... with particular tests and outlined general problems with the material impact tests used to determine...
NASA Technical Reports Server (NTRS)
Montoya, R. J. (Compiler); Howell, W. E. (Compiler); Bundick, W. T. (Compiler); Ostroff, A. J. (Compiler); Hueschen, R. M. (Compiler); Belcastro, C. M. (Compiler)
1983-01-01
Restructurable control system theory, robust reconfiguration for high reliability and survivability for advanced aircraft, restructurable controls problem definition and research, experimentation, system identification methods applied to aircraft, a self-repairing digital flight control system, and state-of-the-art theory application are addressed.
Assurance of reliability and safety in liquid hydrocarbons marine transportation and storing
NASA Astrophysics Data System (ADS)
Korshunov, G. I.; Polyakov, S. L.; Shunmin, Li
2017-10-01
The problems of assuring safety and reliability in the marine transportation and storage of liquid hydrocarbons are described. The requirements of standard IEC 61511 have to be fulfilled for the tanker load/unload system under dynamic loads on the pipeline system. The safety zones for fires of the 'fireball' type and for spillage have to be determined when storing liquid hydrocarbons. An example of achieving the necessary safety level of a duplicated loading system, the conditions for reliable operation of pipelines under dynamic loads, and the principles of a method for determining the safety zones of liquid hydrocarbon storage under possible accident conditions are presented.
Performance and reliability of the NASA biomass production chamber
NASA Technical Reports Server (NTRS)
Fortson, R. E.; Sager, J. C.; Chetirkin, P. V.
1994-01-01
The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of CELSS, are discussed.
Magnetic suspension and balance systems (MSBSs)
NASA Technical Reports Server (NTRS)
Britcher, Colin P.; Kilgore, Robert A.
1987-01-01
The problems of wind tunnel testing are outlined, with attention given to the problems caused by mechanical support systems, such as support interference, dynamic-testing restrictions, and low productivity. The basic principles of magnetic suspension are highlighted, along with the history of magnetic suspension and balance systems. Roll control, size limitations, high angle of attack, reliability, position sensing, and calibration are discussed among the problems and limitations of the existing magnetic suspension and balance systems. Examples of the existing systems are presented, and design studies for future systems are outlined. Problems specific to large-scale magnetic suspension and balance systems, such as high model loads, requirements for high-power electromagnets, high-capacity power supplies, highly sophisticated control systems and position sensors, and high costs are assessed.
Choosing the optimal wind turbine variant using the “ELECTRE” method
NASA Astrophysics Data System (ADS)
Ţişcă, I. A.; Anuşca, D.; Dumitrescu, C. D.
2017-08-01
This paper presents a method of choosing the “optimal” alternative, both under certainty and under uncertainty, based on relevant analysis criteria. Taking into account that a product can be assimilated to a system and that the reliability of the system depends on the reliability of its components, the choice of product (the appropriate system decision) can be made using the “ELECTRE” method, depending on the reliability level of each product. In the paper, the “ELECTRE” method is used to choose the optimal variant of a wind turbine required to equip a wind farm in western Romania; the problems to be solved relate to the reliability issues of currently available wind turbines. A set of criteria has been proposed to compare two or more products from a range of available products: operating conditions, environmental conditions during operation, and time requirements. Using the hierarchical ELECTRE method, the optimal wind turbine variant and the order of preference of the variants are determined on the basis of the obtained concordance coefficients, the threshold values being chosen arbitrarily.
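The core ELECTRE computation is the concordance matrix: for each ordered pair of variants, the total weight of the criteria on which the first is at least as good as the second. The sketch below (scores, weights, and threshold invented; the discordance test of full ELECTRE omitted) shows how one variant can emerge as the outranking choice:

```python
import numpy as np

# ELECTRE-I-style concordance: C[a, b] is the total weight of criteria on
# which variant a is at least as good as variant b. Rows = variants, columns =
# criteria (operating conditions, environmental conditions, time requirements).
# Scores, weights, and the 0.7 threshold are illustrative, not from the paper.
scores = np.array([[7.0, 6.0, 8.0],    # variant A
                   [8.0, 5.0, 6.0],    # variant B
                   [6.0, 7.0, 7.0]])   # variant C
weights = np.array([0.5, 0.3, 0.2])    # criteria importance, sums to 1
higher_is_better = np.array([True, True, False])  # time requirement: lower wins

adj = np.where(higher_is_better, scores, -scores)  # flip minimizing criteria
n = len(scores)
C = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            C[a, b] = weights[adj[a] >= adj[b]].sum()

print("concordance matrix:\n", C)
outranks = (C >= 0.7) & ~np.eye(n, dtype=bool)
print("a outranks b (C >= 0.7):\n", outranks.astype(int))   # B outranks A and C
```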
Time-Tagged Risk/Reliability Assessment Program for Development and Operation of Space System
NASA Astrophysics Data System (ADS)
Kubota, Yuki; Takegahara, Haruki; Aoyagi, Junichiro
We have investigated a new method of risk/reliability assessment for the development and operation of space systems. It is difficult to evaluate the risk of spacecraft because of long operation times, maintenance-free requirements, and the difficulty of testing under ground conditions. Conventional methods include FMECA, FTA, and ETA, among others; these are not sufficient to assess chronological anomalies, and sharing information during R&D is a problem. A new method of risk and reliability assessment, T-TRAP (Time-tagged Risk/Reliability Assessment Program), is proposed as a management tool for the development and operation of space systems. T-TRAP, consisting of time-resolved Fault Tree and Criticality Analyses, enables the responsible personnel, upon occurrence of an anomaly in the system, to quickly identify the failure cause and decide on corrective actions. This paper describes the T-TRAP method and its availability.
Data reliability in complex directed networks
NASA Astrophysics Data System (ADS)
Sanz, Joaquín; Cozzo, Emanuele; Moreno, Yamir
2013-12-01
The availability of data from many different sources and fields of science has made it possible to map out an increasing number of networks of contacts and interactions. However, quantifying how reliable these data are remains an open problem. From Biology to Sociology and Economics, the identification of false and missing positives has become a problem that calls for a solution. In this work we extend one of the newest, best performing models—due to Guimerá and Sales-Pardo in 2009—to directed networks. The new methodology is able to identify missing and spurious directed interactions with more precision than previous approaches, which renders it particularly useful for analyzing data reliability in systems like trophic webs, gene regulatory networks, communication patterns and several social systems. We also show, using real-world networks, how the method can be employed to help search for new interactions in an efficient way.
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
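A compact sketch of the BFORM idea: run the standard HL-RF iteration, but compute a finite-difference gradient only occasionally and keep it current in between with Broyden's rank-one secant update, so the expensive limit state is called far less often. The quadratic limit state is an illustrative stand-in for a black-box code, and the periodic exact refresh is our own safeguard (a pure rank-one update learns nothing about directions it never steps in), not a detail from the paper:

```python
import numpy as np
from scipy.stats import norm

calls = 0
def g(u):                                   # limit state in standard normal space
    global calls; calls += 1
    return 3.0 - u[0] - 2.0 * u[1] + 0.05 * u[1] ** 2   # illustrative stand-in

def fd_grad(u, h=1e-6):
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))]), g0

u = np.zeros(2)
grad, gu = fd_grad(u)                       # one full finite-difference pass
for it in range(50):
    u_new = grad * (grad @ u - gu) / (grad @ grad)    # HL-RF step
    du = u_new - u
    if np.linalg.norm(du) < 1e-9:
        break
    if (it + 1) % 5 == 0:
        grad, g_new = fd_grad(u_new)        # occasional exact refresh (safeguard)
    else:
        g_new = g(u_new)                    # single limit-state evaluation
        # Broyden rank-one update: make grad match the observed change in g.
        grad = grad + ((g_new - gu - grad @ du) / (du @ du)) * du
    u, gu = u_new, g_new

beta = np.linalg.norm(u)
print(f"beta = {beta:.4f}, P_f ~ {norm.cdf(-beta):.3e}, g-evaluations: {calls}")
```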
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focusing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended both for the systems biologist who wishes to learn more about the various optimization techniques available and for the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
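A tiny concrete instance of the calibration problem the review surveys, attacked with a bare-bones (1+1) evolution strategy, one of the simplest meta-heuristics: fit the growth rate and carrying capacity of a logistic model to synthetic noisy data. Everything here is illustrative:

```python
import numpy as np

# Parameter estimation (model calibration) by a (1+1) evolution strategy with
# a fixed mutation step. The "model" is logistic growth with x(0) = 1; the
# data are synthetic, so the true parameters are known for checking.
rng = np.random.default_rng(7)

def model(theta, t):
    r, K = theta
    return K / (1.0 + (K - 1.0) * np.exp(-r * t))

t = np.linspace(0.0, 10.0, 25)
true_theta = np.array([0.8, 50.0])
data = model(true_theta, t) + rng.normal(0.0, 1.0, t.size)

def sse(theta):                                        # objective to minimize
    return np.sum((model(theta, t) - data) ** 2)

theta, step = np.array([0.1, 10.0]), np.array([0.05, 2.0])
best = sse(theta)
for _ in range(5000):
    cand = np.abs(theta + step * rng.normal(size=2))   # keep parameters positive
    if (c := sse(cand)) < best:
        theta, best = cand, c
print(f"estimated r, K: {theta.round(3)}  (true: {true_theta})  SSE: {best:.1f}")
```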
NASA Astrophysics Data System (ADS)
Ishikawa, Kaoru; Nakamura, Taro; Osumi, Hisashi
A reliable control method is proposed for multiple-loop control systems. After a feedback loop failure, such as a sensor breakdown, the control system becomes unstable and exhibits large fluctuations even if it has a disturbance observer. To cope with this problem, the proposed method uses an equivalent transfer function (ETF) as active redundancy compensation after the loop failure. The ETF is designed so that the transfer function of the whole system does not change before and after the loop failure. In this paper, the characteristics of a reliable control system that uses an ETF and a disturbance observer are examined in an experiment using a DC servo motor, with a current feedback loop failure in the position servo system.
NASA Astrophysics Data System (ADS)
Witantyo; Rindiyah, Anita
2018-03-01
According to data from maintenance planning and control, the highest inventory value lies in non-routine components. Maintenance components are components procured based on maintenance activities, and the problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by re-evaluating the components required by maintenance activities. The roller mill system was chosen as the case because it has the highest recorded unscheduled downtime. The components required for each maintenance activity are determined from the components' failure distributions, so the number of components needed can be predicted; these components can then be reclassified from non-routine to routine components, so that their procurement can be carried out regularly. Based on the analysis, the failures addressed by almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components, and the reliability and demand for those components were calculated for a one-year operation period. Based on these findings, it is suggested to replace all of the relevant components during overhaul to increase the reliability of the roller mill system; in addition, the inventory system should follow the maintenance schedule and the number of components required by maintenance activities, so that procurement cost decreases and system reliability increases.
Mechanization of and experience with a triplex fly-by-wire backup control system
NASA Technical Reports Server (NTRS)
Lock, W. P.; Petersen, W. R.; Whitman, G. B.
1976-01-01
A redundant three-axis analog control system was designed and developed to back up a digital fly-by-wire control system for an F-8C airplane. The mechanization and operational experience with the backup control system, the problems involved in synchronizing it with the primary system, and the reliability of the system are discussed. The backup control system was dissimilar to the primary system, and it provided satisfactory handling through the flight envelope evaluated. Limited flight tests of a variety of control tasks showed that control was also satisfactory when the backup control system was controlled by a minimum-displacement (force) side stick. The operational reliability of the F-8 digital fly-by-wire control system was satisfactory, with no unintentional downmodes to the backup control system in flight. The ground and flight reliability of the system's components is discussed.
Development of modelling algorithm of technological systems by statistical tests
NASA Astrophysics Data System (ADS)
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economically assessing the design efficiency of various technological systems at the operation stage. A modelling algorithm for a technological system, built on statistical tests and taking the reliability index into account, allows the level of technical excellence of the machinery to be estimated and the efficiency of the design reliability to be evaluated against its performance. The economic feasibility of its application is determined on the basis of the service quality of the technological system, with subsequent forecasting of the volumes and range of spare-parts supply.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Proposed reliability cost model
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1973-01-01
The research investigations involved in the study include cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends on the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.
Apollo experience report: Power generation system
NASA Technical Reports Server (NTRS)
Bell, D., III; Plauche, F. M.
1973-01-01
A comprehensive review of the design philosophy and experience of the Apollo electrical power generation system is presented. The review of the system covers a period of 8 years, from conception through the Apollo 12 lunar-landing mission. The program progressed from the definition phase to hardware design, system development and qualification, and, ultimately, to the flight phase. Several problems were encountered; however, a technology evolved that enabled resolution of the problems and resulted in a fully man-rated power generation system. These problems are defined and examined, and the corrective action taken is discussed. Several recommendations are made to preclude similar occurrences and to provide a more reliable fuel-cell power system.
An approach to solving large reliability models
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.
1988-01-01
This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).
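As an illustration of the sparse matrix-based numerical side of such tools, here is a minimal sketch of the transient solution of a small Markov reliability model. The 3-state model and its rates are invented for illustration; this is not HARP's actual algorithm.

```python
# Sketch: transient solution of a small Markov reliability model with sparse
# matrices, in the spirit of the sparse numerical methods the paper describes.
# The model (2 good units -> 1 good unit -> system failed) and its rates
# are illustrative, not taken from the paper.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

lam = 1e-3   # per-unit failure rate (1/h), assumed
mu  = 1e-2   # repair rate (1/h), assumed

# Generator Q (rows sum to 0): states 0=both up, 1=one up, 2=failed (absorbing)
Q = csr_matrix(np.array([
    [-2*lam,      2*lam,  0.0],
    [    mu, -(mu+lam),   lam],
    [   0.0,       0.0,   0.0],
]))

p0 = np.array([1.0, 0.0, 0.0])          # start with both units up
t = 1000.0                               # mission time in hours
pt = expm_multiply(Q.T * t, p0)          # p(t) = p0 @ expm(Q t)
print(f"reliability R({t:.0f} h) = {1.0 - pt[2]:.6f}")
```

For realistic models the state space runs to millions of states, which is exactly why automatic generation from fault trees and sparse storage matter.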
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
The generalized mathematical model of decision-making in the problem of planning and selecting operating modes that provide the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into the levels of main and distribution heating networks with intermediate control stages. The effectiveness, reliability, and safety of such a complex system are evaluated against several indicators simultaneously, in particular pressure, flow rate, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. An agreed solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The optimum operational mode of a complex heat supply system is chosen on the basis of an iterative coordination process that converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments where necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the permissible deviation of the indoor air temperature from the required value, can also be changed interactively. This makes it possible to carry out adjustment activities in the best way and to improve the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required heads at sources and pumping stations.
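The decomposition-plus-coordination scheme can be illustrated in miniature with dual decomposition, one common way to coordinate local optimizers; the quadratic local costs, demand, and step size below are invented, and the abstract does not specify which coordination method the authors use.

```python
# Sketch: the coordination idea in miniature, using dual decomposition.
# Two "local" subsystems each choose their own heat output; a coordinator
# iteratively adjusts a price (Lagrange multiplier) until the shared
# load-balance constraint is met. All numbers are illustrative.
def local_problem(price, cost_coeff):
    # min cost_coeff*q^2 - price*q  =>  optimum at q = price / (2*cost_coeff)
    return price / (2.0 * cost_coeff)

demand = 10.0            # total required heat load (arbitrary units)
price = 0.0              # coordinator's starting multiplier
step = 0.2               # subgradient step size

for it in range(200):
    q1 = local_problem(price, cost_coeff=1.0)   # main network level
    q2 = local_problem(price, cost_coeff=2.0)   # distribution network level
    mismatch = demand - (q1 + q2)               # violated balance constraint
    if abs(mismatch) < 1e-6:
        break
    price += step * mismatch                    # coordination update

print(f"converged in {it} iterations: q1={q1:.3f}, q2={q2:.3f}, price={price:.3f}")
```

The iterative price update mirrors the paper's coordination process: local problems stay small, and only the coordinating variable is exchanged between levels.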
The Impact of Causality on Information-Theoretic Source and Channel Coding Problems
ERIC Educational Resources Information Center
Palaiyanur, Harikrishna R.
2011-01-01
This thesis studies several problems in information theory where the notion of causality comes into play. Causality in information theory refers to the timing of when information is available to parties in a coding system. The first part of the thesis studies the error exponent (or reliability function) for several communication problems over…
Fateen, Seif-Eddeen K.; Bonilla-Petriciolet, Adrian
2014-01-01
The search for reliable and efficient global optimization algorithms for solving phase stability and phase equilibrium problems in applied thermodynamics is an ongoing area of research. In this study, we evaluated and compared the reliability and efficiency of eight selected nature-inspired metaheuristic algorithms for solving difficult phase stability and phase equilibrium problems. These algorithms are the cuckoo search (CS), intelligent firefly (IFA), bat (BA), artificial bee colony (ABC), MAKHA, a hybrid between monkey algorithm and krill herd algorithm, covariance matrix adaptation evolution strategy (CMAES), magnetic charged system search (MCSS), and bare bones particle swarm optimization (BBPSO). The results clearly showed that CS is the most reliable of all methods as it successfully solved all thermodynamic problems tested in this study. CS proved to be a promising nature-inspired optimization method to perform applied thermodynamic calculations for process design. PMID:24967430
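For a flavor of how such a metaheuristic operates, here is a heavily simplified cuckoo-search-style sketch on a toy multimodal function standing in for a Gibbs-energy surface. Real cuckoo search uses Levy-flight steps; Gaussian steps, the test function, and all parameters here are invented for illustration.

```python
# Sketch: a heavily simplified cuckoo-search-style global minimizer on a toy
# multimodal test function. Real cuckoo search uses Levy-flight steps;
# Gaussian steps are used here for brevity.
import random, math

def objective(x):                      # illustrative multimodal function
    return x * x + 10.0 * math.sin(3.0 * x)

def cuckoo_search(n_nests=15, pa=0.25, iters=500, lo=-5.0, hi=5.0):
    nests = [random.uniform(lo, hi) for _ in range(n_nests)]
    for _ in range(iters):
        # 1) generate a new solution by a random step from a random nest
        i = random.randrange(n_nests)
        trial = min(hi, max(lo, nests[i] + random.gauss(0.0, 0.5)))
        j = random.randrange(n_nests)
        if objective(trial) < objective(nests[j]):
            nests[j] = trial
        # 2) abandon a fraction pa of the worst nests and rebuild them randomly
        nests.sort(key=objective)
        for k in range(int((1.0 - pa) * n_nests), n_nests):
            nests[k] = random.uniform(lo, hi)
    return min(nests, key=objective)

best = cuckoo_search()
print(f"best x = {best:.4f}, f(x) = {objective(best):.4f}")
```

The abandon-and-rebuild step is what gives the method its global character: it keeps re-seeding poor regions of the search space, which is why CS copes well with the many local minima of phase-stability objectives.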
Parallelizing Timed Petri Net simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1993-01-01
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPNs) was studied. It was recognized that complex system development tools often transform system descriptions into TPNs or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPNs be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of automatically parallelizing TPNs for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold: it was shown that Monte Carlo simulation with importance sampling offers promise of joint analysis in the context of a single tool, and methods were developed for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
NASA Technical Reports Server (NTRS)
Zanley, Nancy L.
1991-01-01
The NASA Science Internet (NSI) Network Operations Staff is responsible for providing reliable communication connectivity for the NASA science community. As the NSI user community expands, so does the demand for greater interoperability with users and resources on other networks (e.g., NSFnet, ESnet), both nationally and internationally. Coupled with the science community's demand for greater access to other resources is the demand for more reliable communication connectivity. Recognizing this, the NASA Science Internet Project Office (NSIPO) expanded its Operations activities. By January 1990, Network Operations was equipped with a telephone hotline, and its staff was expanded to six Network Operations Analysts. These six analysts provide 24-hour-a-day, 7-day-a-week coverage to assist site managers with problem determination and resolution. The NSI Operations staff monitors network circuits and their associated routers. In most instances, NSI Operations diagnoses and reports problems before users realize a problem exists. Monitoring of the NSI TCP/IP Network is currently done with Proteon's Overview monitoring system, which displays a map of the NSI network, using various colors to indicate the condition of the components being monitored. Each node or site is polled via the Simple Network Management Protocol (SNMP). If a circuit goes down, Overview alerts the Network Operations staff with an audible alarm and changes the color of the component. When an alert is received, Network Operations personnel immediately verify and diagnose the problem, coordinate repair with other networking service groups, track problems, and document the problem and its resolution in a trouble-ticket data base. NSI Operations offers the NSI science community reliable connectivity by exercising prompt assessment and resolution of network problems.
Reliable vision-guided grasping
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
Rail Transit Fare Collection : Policy and Technology Assessment
DOT National Transportation Integrated Search
1982-12-01
In an attempt to resolve reliability problems, lower operation and maintenance costs, and simplify fare collection, transit authorities are focusing on their fare collection systems. Many transit properties have existing fare collection systems which...
Problem Solving with Guided Repeated Oral Reading Instruction
ERIC Educational Resources Information Center
Conderman, Greg; Strobel, Debra
2006-01-01
Many students with disabilities require specialized instructional interventions and frequent progress monitoring in reading. The guided repeated oral reading technique promotes oral reading fluency while providing a reliable data-based monitoring system. This article emphasizes the importance of problem-solving when using this reading approach.
Inductive System for Reliable Magnesium Level Detection in a Titanium Reduction Reactor
NASA Astrophysics Data System (ADS)
Krauter, Nico; Eckert, Sven; Gundrum, Thomas; Stefani, Frank; Wondrak, Thomas; Frick, Peter; Khalilov, Ruslan; Teimurazov, Andrei
2018-05-01
The determination of the magnesium level in a titanium reduction retort by inductive methods is often hampered by the formation of titanium sponge rings, which disturb the propagation of electromagnetic signals between the excitation and receiver coils. We present a new method for the reliable identification of the magnesium level which explicitly takes into account the presence of sponge rings with unknown geometry and conductivity. The inverse problem is solved by a look-up-table method, based on the solution of the inductive forward problem for several tens of thousands of parameter combinations.
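A minimal sketch of the look-up-table inversion idea: precompute a forward model over a parameter grid and return the nearest match to a measurement. The linear "forward model," grid sizes, and noise level below are stand-ins for the real electromagnetic simulation.

```python
# Sketch: look-up-table solution of an inverse problem. A forward model maps
# (level, ring conductivity) to predicted coil signals; the table is
# precomputed on a parameter grid and the best match to a measurement is
# returned. The linear "forward model" is a placeholder for the real
# electromagnetic simulation.
import numpy as np

def forward_model(level, sigma_ring):
    # placeholder for the inductive forward solution: 3 receiver-coil signals
    return np.array([0.8 * level - 0.1 * sigma_ring,
                     0.5 * level + 0.05 * sigma_ring,
                     0.2 * level - 0.3 * sigma_ring])

# Precompute the table over the parameter grid (about 10^4 combinations here;
# the paper mentions tens of thousands).
levels = np.linspace(0.0, 1.0, 101)
sigmas = np.linspace(0.0, 2.0, 101)
table = [(lv, sg, forward_model(lv, sg)) for lv in levels for sg in sigmas]

def invert(measured):
    # nearest-neighbour match in signal space
    return min(table, key=lambda e: np.linalg.norm(e[2] - measured))[:2]

meas = forward_model(0.63, 1.2) + np.random.normal(0.0, 0.005, 3)  # noisy data
level_est, sigma_est = invert(meas)
print(f"estimated Mg level = {level_est:.2f}, ring conductivity = {sigma_est:.2f}")
```

Because the expensive forward solutions are all done offline, the online inversion reduces to a fast table search, which is what makes the approach robust to the unknown sponge-ring geometry.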
Silicon Nanophotonics for Many-Core On-Chip Networks
NASA Astrophysics Data System (ADS)
Mohamed, Moustafa
The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency, a trend that has become problematic for performance in many-core architectures. Moreover, power consumption increases with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits on the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switched flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower than that of their electrical counterparts for intermediate and long links. Finally, through wavelength division multiplexing, the high-bandwidth trend can be maintained without sacrificing throughput. This thesis focuses on realizing nanophotonics for communication in many-core architectures at different design levels, considering the reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other. The design involves the choice of devices and the design of the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability problems not only degrade performance but can block communication altogether. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow. In the proposed flow, reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance; in fact, our solution can scale to a thousand cores with low overhead.
Performance and reliability of the NASA Biomass Production Chamber
NASA Technical Reports Server (NTRS)
Sager, J. C.; Chetirkin, P. V.
1994-01-01
The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of a CELSS, are discussed.
How reliable are clinical systems in the UK NHS? A study of seven NHS organisations
Franklin, Bryony Dean; Moorthy, Krishna; Cooke, Matthew W; Vincent, Charles
2012-01-01
Background It is well known that many healthcare systems have poor reliability; however, the size and pervasiveness of this problem and its impact have not been systematically established in the UK. The authors studied four clinical systems: clinical information in surgical outpatient clinics, prescribing for hospital inpatients, equipment in theatres, and insertion of peripheral intravenous lines. The aim was to describe the nature, extent and variation in reliability of these four systems in a sample of UK hospitals, and to explore the reasons for poor reliability. Methods Seven UK hospital organisations were involved; each system was studied in three of these. The authors took the delivery of the systems' intended outputs to be a proxy for the reliability of the system as a whole. For example, for clinical information, 100% reliability was defined as all patients having an agreed list of clinical information available when needed during their appointment. Systems factors were explored using semi-structured interviews with key informants. Common themes across the systems were identified. Results Overall reliability was found to be between 81% and 87% for the systems studied, with significant variation between organisations for some systems: clinical information in outpatient clinics ranged from 73% to 96%; prescribing for hospital inpatients from 82% to 88%; equipment availability in theatres from 63% to 88%; and availability of equipment for insertion of peripheral intravenous lines from 80% to 88%. One in five reliability failures was associated with perceived threats to patient safety. Common factors causing poor reliability included lack of feedback, lack of standardisation, and issues such as access to information out of working hours. Conclusions Reported reliability was low for the four systems studied, with some common factors behind each. However, this hides significant variation between organisations for some processes, suggesting that some organisations have managed to create more reliable systems. Standardisation of processes would be expected to have significant benefit. PMID:22495099
NASA Technical Reports Server (NTRS)
Wong, P. K.
1975-01-01
The closely related problems of designing a reliable feedback stabilization strategy and coordinating decentralized feedbacks are considered. Two approaches are taken. First, a geometric characterization of the structure of control interaction (and its dual) was attempted, and a concept of structural homomorphism was developed based on the idea of 'similarity' of interaction patterns. Second, the idea of finding classes of individual feedback maps that do not 'interfere' with the stabilizing action of each other was developed by identifying the structural properties of nondestabilizing and LQ-optimal feedback maps. Some known stability properties of LQ feedback were generalized, and some partial solutions were provided to the reliable stabilization and decentralized feedback coordination problems. A concept of coordination parametrization was introduced, and a scheme was developed for classifying different modes of decentralization (information, control law computation, on-line control implementation) in control systems.
NDE research efforts at the FAA Center for Aviation Systems Reliability
NASA Technical Reports Server (NTRS)
Thompson, Donald O.; Brasche, Lisa J. H.
1992-01-01
The Federal Aviation Administration-Center for Aviation Systems Reliability (FAA-CASR), a part of the Institute for Physical Research and Technology at Iowa State University, began operation in the Fall of 1990 with funding from the FAA. The mission of the FAA-CASR is to develop quantitative nondestructive evaluation (NDE) methods for aircraft structures and materials including prototype instrumentation, software, techniques, and procedures and to develop and maintain comprehensive education and training programs in aviation specific inspection procedures and practices. To accomplish this mission, FAA-CASR brings together resources from universities, government, and industry to develop a comprehensive approach to problems specific to the aviation industry. The problem areas are targeted by the FAA, aviation manufacturers, the airline industry and other members of the aviation business community. This consortium approach ensures that the focus of the efforts is on relevant problems and also facilitates effective transfer of the results to industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divan, Deepak; Brumsickle, William; Eto, Joseph
2003-04-01
This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: linking measured power quality events to process interruption and downtime; demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and benchmarking power quality and reliability on a firm and regional basis.
Problem Solving in Biology: A Methodology
ERIC Educational Resources Information Center
Wisehart, Gary; Mandell, Mark
2008-01-01
A methodology is described that teaches science process by combining informal logic and a heuristic for rating factual reliability. This system facilitates student hypothesis formation, testing, and evaluation of results. After problem solving with this scheme, students are asked to examine and evaluate arguments for the underlying principles of…
A flight test of laminar flow control leading-edge systems
NASA Technical Reports Server (NTRS)
Fischer, M. C.; Wright, A. S., Jr.; Wagner, R. D.
1983-01-01
NASA's program for development of a laminar flow technology base for application to commercial transports has made significant progress since its inception in 1976. Current efforts are focused on development of practical reliable systems for the leading-edge region where the most difficult problems in applying laminar flow exist. Practical solutions to these problems will remove many concerns about the ultimate practicality of laminar flow. To address these issues, two contractors performed studies, conducted development tests, and designed and fabricated fully functional leading-edge test articles for installation on the NASA JetStar aircraft. Systems evaluation and performance testing will be conducted to thoroughly evaluate all system capabilities and characteristics. A simulated airline service flight test program will be performed to obtain the operational sensitivity, maintenance, and reliability data needed to establish that practical solutions exist for the difficult leading-edge area of a future commercial transport employing laminar flow control.
Description and status of NASA-LeRC/DOE photovoltaic applications systems
NASA Technical Reports Server (NTRS)
Ratajczak, A. F.
1978-01-01
Sixteen geographically dispersed photovoltaic systems were designed, fabricated, and installed. These systems are powering a refrigerator, a highway warning sign, forest lookout towers, remote weather stations, a water chiller at a visitor center, and insect survey traps. Each of these systems is described in terms of load requirements, solar array and battery size, and instrumentation and controls. Operational experience is described and the present status is given for each system. The PV power systems have proven to be highly reliable, with almost no problems with modules and very few problems overall.
Decision-theoretic methodology for reliability and risk allocation in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
1985-01-01
This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided and several outstanding issues such as generic allocation and preference assessment are discussed.
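The noninferior-set idea can be shown in a few lines: given candidate allocations scored by (risk, cost), keep only those not dominated in both objectives. The candidate values below are invented for illustration.

```python
# Sketch: extracting the noninferior (Pareto-optimal) set that the allocation
# methodology searches over. Each candidate allocation is scored by
# (risk, cost); both are to be minimized.
def noninferior(points):
    """Return points not dominated by any other (lower-or-equal risk AND cost)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

candidates = [(1e-4, 90.0), (5e-5, 140.0), (2e-4, 60.0), (1e-4, 120.0), (8e-5, 100.0)]
for risk, cost in sorted(noninferior(candidates)):
    print(f"risk={risk:.1e}, cost={cost:.0f}")
```

The decision maker's preference assessment then only has to rank the surviving trade-off points rather than the full candidate space.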
The Evaluation Method of the Lightning Strike on Transmission Lines Aiming at Power Grid Reliability
NASA Astrophysics Data System (ADS)
Wen, Jianfeng; Wu, Jianwei; Huang, Liandong; Geng, Yinan; Yu, zhanqing
2018-01-01
Lightning protection of power systems focuses on reducing the flashover rate, distinguishing lines only by voltage level, without considering the functional differences between transmission lines or analyzing the effect on the reliability of the power grid. As a result, lightning protection designed for general transmission lines is over-provisioned, yet insufficient for key lines. To solve this problem, an analysis method of lightning strikes on transmission lines oriented to power grid reliability is given. Full-wave process theory is used to analyze lightning back-striking; the leader propagation model is used to describe the shielding-failure process of transmission lines. An index of power grid reliability is introduced, and the effect of transmission line faults on the reliability of the power system is discussed in detail.
NASA Technical Reports Server (NTRS)
Bueno, R.; Chow, E.; Gershwin, S. B.; Willsky, A. S.
1975-01-01
Research on the problems of failure detection and reliable system design for digital aircraft control systems is reported. Failure modes, cross-detection probability, wrong-time detection, the application of performance tools, and the GLR computer package are discussed.
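A minimal sketch of the GLR idea behind such failure detection: test control-system residuals for a jump in mean by maximizing a likelihood-ratio statistic over candidate onset times. The noise level, fault size, and threshold remark are illustrative, not from the report.

```python
# Sketch: a generalized likelihood ratio (GLR) test for a jump in the mean of
# control-system residuals, the basic idea behind GLR failure detection.
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1
residuals = rng.normal(0.0, sigma, 200)
residuals[120:] += 0.25          # simulated actuator bias appearing at k=120

def glr_jump_in_mean(r, sigma):
    """For each candidate onset k, the GLR statistic for an unknown-size
    mean jump is (sum of residuals after k)^2 / (sigma^2 * window length)."""
    n = len(r)
    best_stat, best_k = 0.0, None
    for k in range(1, n):
        s = r[k:].sum()
        stat = s * s / (sigma * sigma * (n - k))
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_k, best_stat

k_hat, stat = glr_jump_in_mean(residuals, sigma)
print(f"estimated fault onset k={k_hat}, GLR statistic={stat:.1f}")
```

Comparing the maximized statistic against a chi-squared-style threshold trades detection delay against false-alarm rate, the central tension in the cross-detection and wrong-time detection issues the report discusses.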
A design for a new catalog manager and associated file management for the Land Analysis System (LAS)
NASA Technical Reports Server (NTRS)
Greenhagen, Cheryl
1986-01-01
Due to the large number of different types of files used in an image processing system, a mechanism for file management beyond the bounds of typical operating systems is necessary. The Transportable Applications Executive (TAE) Catalog Manager was written to meet this need. Land Analysis System (LAS) users at the EROS Data Center (EDC) encountered some problems in using the TAE catalog manager, including catalog corruption, networking difficulties, and lack of a reliable tape storage and retrieval capability. These problems, coupled with the complexity of the TAE catalog manager, led to the decision to design a new file management system for LAS, tailored to the needs of the EDC user community. This design effort, which addressed catalog management, label services, associated data management, and enhancements to LAS applications, is described. The new file management design will provide many benefits including improved system integration, increased flexibility, enhanced reliability, enhanced portability, improved performance, and improved maintainability.
Serial Back-Plane Technologies in Advanced Avionics Architectures
NASA Technical Reports Server (NTRS)
Varnavas, Kosta
2005-01-01
Current backplane technologies such as VME, and current personal computer backplanes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies: a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the backplane to acquire the bus. This seriously degrades the reliability of the system. Additionally, these parallel buses only have bandwidths in the hundreds of megahertz range, and EMI and noise effects worsen as the bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems that are more applicable to today's connected computing environment and better meet the needs of future requirements for advanced space instruments and vehicles, serial backplane technologies should be implemented in advanced avionics architectures. Serial backplane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or of one minor problem on the backplane bringing the whole system down. A serial bus also avoids many of the signal-integrity issues associated with parallel backplanes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.
Malt, U
1986-01-01
The reliability of the DSM-III is superior to that of other classification systems available in psychiatry. However, reliability depends on proper knowledge of the system. Some commonly overlooked pitfalls reducing the reliability of axis 1 diagnoses are discussed. Secondly, some problems of validity of axes 1 and 2 are considered. This is done by discussing the differential diagnosis of organic mental disorders and other psychiatric disorders with concomitant physical dysfunction, and the diagnoses of post-traumatic stress disorders and adjustment disorders, among others. The emphasis on health-care-seeking behaviour as a diagnostic criterion in the DSM-III system may cause a social, racial, and sexual bias in DSM-III diagnoses. The present discussion of the DSM-III system from a clinical point of view indicates the need for validation studies based on clinical experience with the DSM-III. These studies should include more out-patients and patients with psychopathology who do not seek psychiatric treatment. Such studies must also apply alternative diagnostic standards, like the ICD-9, and not rely only on structured psychiatric interviews constructed for DSM-III diagnoses. The discussion of axis 4 points to the problem of wanting to combine reliable rating with clinically meaningful information. It is concluded that the most important issue to be settled regarding axis 4 in future revisions is the aim of including this axis. The discussion of axis 5 concludes that axis 5 is biased toward poor functioning and thus may be less useful when applied to patients seen outside hospitals. Despite these problems with the DSM-III, our experiences indicate that the use of the DSM-III is fruitful for the patient, the clinician, and the researcher alike. Thus, the cost in time and effort needed to learn to use the DSM-III properly is small compared to the benefits achieved by using the system.
Electric system restructuring and system reliability
NASA Astrophysics Data System (ADS)
Horiuchi, Catherine Miller
In 1996 the California legislature passed AB 1890, explicitly defining economic benefits and detailing specific mechanisms for initiating a partial restructuring of the state's electric system. Critics have since sought re-regulation, and proponents have asked for patience as the new institutions and markets take shape. Other states' electric system restructuring activities have been tempered by real and perceived problems in the California model. This study examines the reduced regulatory controls and new constraints introduced in California's limited restructuring model, using utility and regulatory agency records from the 1990s to investigate the effects of the new institutions and practices on system reliability for the state's five largest public and private utilities. Logit and negative binomial regressions indicate a negative impact from the California model of restructuring on system reliability as measured by customer interruptions. Time series analysis of outage data could not predict the wholesale power market collapse and the subsequent rolling blackouts in early 2001; inclusion of near-outage reliability disturbances (load shedding and energy emergencies) provided a measure of forewarning. Analysis of system disruptions, generation capacity and demand, and the role of purchased power challenges conventional wisdom on the causality of California's power problems. The quantitative analysis was supplemented by a targeted survey of electric system restructuring participants. Findings suggest each utility and the organization controlling the state's electric grid provided protection from power outages comparable to pre-restructuring operations through 2000; however, this reliability has come at an inflated cost, resulting in reduced system purchases and decreased marginal protection. The historic margin of operating safety has fully eroded, increasing mandatory load shedding and emergency declarations for voluntary and mandatory conservation. Proposed remedies focused on state-funded contracts and government-managed power authorities may not help, as the findings suggest pricing models, market uncertainty, interjurisdictional conflict, and an inability to respond to market perturbations are more significant contributors to reduced regional generation availability than the particular contract mechanisms and funding sources used for power purchases.
Study of aircraft electrical power systems
NASA Technical Reports Server (NTRS)
1972-01-01
The formulation of a philosophy for devising a reliable, efficient, lightweight, and cost effective electrical power system for advanced, large transport aircraft in the 1980 to 1985 time period is discussed. The determination and recommendation for improvements in subsystems and components are also considered. All aspects of the aircraft electrical power system including generation, conversion, distribution, and utilization equipment were considered. Significant research and technology problem areas associated with the development of future power systems are identified. The design categories involved are: (1) safety-reliability, (2) power type, voltage, frequency, quality, and efficiency, (3) power control, and (4) selection of utilization equipment.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
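The reliability-metric estimation can be illustrated with plain Monte Carlo, the sampling half of the hybrid approach described; the scalar plant, gain, requirement, and sample size below are invented for illustration.

```python
# Sketch: estimating the probability of violating a design requirement by
# Monte Carlo sampling over the probabilistic parametric uncertainty.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
a = rng.normal(1.0, 0.3, N)      # uncertain plant pole, a ~ N(1, 0.3^2)
k = 2.0                          # fixed feedback gain

# Closed-loop pole of xdot = a*x - k*x is (a - k); require it below -0.5,
# i.e. a minimum stability-margin requirement.
violations = (a - k) > -0.5
p_fail = violations.mean()
half_width = 1.96 * np.sqrt(p_fail * (1 - p_fail) / N)
print(f"P(requirement violated) ~= {p_fail:.4f} +/- {half_width:.4f}")
```

Wrapping such an estimator inside an optimizer over the controller gain is the structure of the reliability-based design problem; the paper's asymptotic approximations exist precisely to cut down the sample counts this brute-force estimate needs.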
Wind farm optimization using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Ituarte-Villarreal, Carlos M.
In recent years, the wind power industry has focused its efforts on solving the Wind Farm Layout Optimization (WFLO) problem. Wind resource assessment is a pivotal step in optimizing wind-farm design and siting and in determining whether a project is economically feasible. In the present work, three different optimization methods are proposed for the solution of the WFLO: (i) a modified Viral System Algorithm applied to optimizing the placement of the components in a wind farm to maximize the energy output for a stated wind environment at the site. The optimization problem is formulated as the minimization of energy cost per unit produced and applies a penalty for lack of system reliability; the viral system algorithm utilized in this research solves three well-known problems in the wind-energy literature; (ii) a new multiple-objective evolutionary algorithm to obtain optimal placement of wind turbines while considering the power output, cost, and reliability of the system. The algorithm presented is based on evolutionary computation, and the objective functions considered are the maximization of power output, the minimization of wind farm cost, and the maximization of system reliability. The final solution to this multiple-objective problem is presented as a set of Pareto solutions; and (iii) a hybrid viral-based optimization algorithm adapted to find the proper component configuration for a wind farm, with the introduction of the universal generating function (UGF) analytical approach to discretize the different operating or mechanical levels of the wind turbines in addition to the various wind speed states. The proposed methodology considers the specific probability functions of the wind resource to describe its behavior and to account for the stochastic behavior of the renewable energy components, aiming to increase their power output and the reliability of these systems. The developed heuristic considers a variable number of system components and wind turbines with different operating characteristics and sizes, yielding a more heterogeneous model that can deal with changes in the layout and in the power generation requirements over time. Moreover, the approach evaluates the impact of the wake effect of the wind turbines upon one another to describe and evaluate the reduction in the power production capacity of the system depending on the layout distribution of the wind turbines.
NASA Astrophysics Data System (ADS)
Rezinskikh, V. F.; Grin', E. A.
2013-01-01
The problem of the safe and reliable operation of ageing heat-generating and mechanical equipment at thermal power stations is discussed. It is pointed out that the set of relevant regulatory documents serves as the basis for establishing an efficient equipment diagnostic system; accordingly, updating the existing regulatory documents and giving them the required status is one of the top-priority tasks. Carrying out goal-oriented scientific research is a necessary condition for solving this problem, as well as the other questions considered in the paper that are important for ensuring the reliable performance of equipment operating over long periods. In recent years the amount of such work has dropped dramatically, although the need for it is steadily growing. Unbiased assessment of the technical state of equipment that has been in operation for a long period is an important aspect of ensuring the reliable and safe operation of thermal power stations. Here, along with the quality of diagnostic activities, monitoring of the technical state on the basis of an analysis of statistical field data and the results of operational checks plays an important role. The need to concentrate efforts in the mentioned problem areas is pointed out, and it is indicated that successful implementation of the outlined measures requires proper organization and efficient operation of a system for managing safety in the electric power industry.
Evaluating reliability of WSN with sleep/wake-up interfering nodes
NASA Astrophysics Data System (ADS)
Distefano, Salvatore
2013-10-01
A wireless sensor network (WSN) is a distributed system composed of autonomous sensor nodes, wirelessly connected and randomly scattered over a geographical area, that cooperatively monitor physical or environmental conditions. Adequate techniques and strategies are required to manage a WSN so that it works properly, observing specific quantities and metrics to evaluate the WSN's operational condition. Among these, one of the most important is reliability. Considering a WSN as a system composed of sensor nodes, the system reliability approach can be applied, expressing the WSN reliability in terms of its nodes' reliability. More specifically, since standby power management policies are often applied at the node level and interferences among nodes may arise, a WSN can be considered a dynamic system. In this article we therefore consider the WSN reliability evaluation problem from the dynamic system reliability perspective. Static-structural interactions are specified by the WSN topology; sleep/wake-up standby policies and interferences due to wireless communications can instead be considered dynamic aspects. Thus, in order to represent and evaluate the WSN reliability, we use dynamic reliability block diagrams and Petri nets. The proposed technique makes it possible to overcome the limits of Markov models when considering non-linear discharge processes, since Markov models cannot adequately represent the aging processes. In order to demonstrate the effectiveness of the technique, we investigate some specific WSN network topologies, providing guidelines for their representation and evaluation.
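Before the dynamic sleep/wake and interference behaviour is added, the static part of such an analysis often reduces to a k-out-of-n computation; a minimal sketch follows, with the node reliability, duty cycle, and k/n values invented for illustration.

```python
# Sketch: a static k-out-of-n baseline for WSN reliability, before the
# dynamic sleep/wake and interference effects the paper models are layered
# on. The network is taken to work if at least k of n identical nodes are
# up, with each node's availability discounted by its duty cycle.
from math import comb

def k_of_n_reliability(n, k, p_node):
    return sum(comb(n, i) * p_node**i * (1 - p_node)**(n - i)
               for i in range(k, n + 1))

p_awake_and_working = 0.9 * 0.6    # node reliability x wake-time fraction
print(f"R(8-of-20) = {k_of_n_reliability(20, 8, p_awake_and_working):.4f}")
```

The dynamic RBD/Petri-net machinery in the paper is needed exactly where this baseline breaks down: when node states are correlated through interference or through non-exponential battery discharge.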
Path querying system on mobile devices
NASA Astrophysics Data System (ADS)
Lin, Xing; Wang, Yifei; Tian, Yuan; Wu, Lun
2006-01-01
Traditional approaches to path querying problems are neither efficient nor convenient in most circumstances, so a more convenient and reliable approach has to be found. This paper is devoted to a path querying solution on mobile devices. By using an improved Dijkstra's shortest-path algorithm and a natural language translating module, this system can help people find the shortest path between two places through their cell phones or other mobile devices. The chosen path is presented as text in natural language, as well as a map picture. This system would be useful in solving best-path querying problems and has the potential to be a profitable business system.
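A minimal sketch of the algorithmic core, a standard Dijkstra shortest-path search over an adjacency list (the paper's specific improvements and its natural-language module are not reproduced); the toy road graph and travel-time weights are invented.

```python
# Sketch: standard Dijkstra shortest path over an adjacency-list graph.
import heapq

def dijkstra(graph, source, target):
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target                # walk predecessors back to source
    while node != source:
        path.append(node)
        node = prev[node]
    return dist[target], [source] + path[::-1]

roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
cost, path = dijkstra(roads, "A", "D")
print(f"shortest path {' -> '.join(path)} takes {cost:.0f} min")
```

The recovered node sequence is what a translating module would then render as turn-by-turn natural-language directions alongside the map picture.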
Takasaki, Hiroshi; Okuyama, Kousuke; Rosedale, Richard
2017-02-01
Mechanical Diagnosis and Therapy (MDT) is used in the treatment of extremity problems. Classifying clinical problems is one method of providing effective treatment to a target population, and classification reliability is a key factor in determining the precise clinical problem and directing an appropriate intervention. The objective was to explore the inter-examiner reliability of the MDT classification for extremity problems in three reliability designs: 1) vignette reliability, using surveys with patient vignettes; 2) concurrent reliability, where multiple assessors decide a classification by observing someone's assessment; and 3) successive reliability, where multiple assessors independently assess the same patient at different times. The study is a systematic review with data synthesis in a quantitative format. Agreement on MDT subgroups was examined using the Kappa value, with the operational definition of acceptable reliability set at ≥ 0.6. The level of evidence was determined considering the methodological quality of the studies. Six studies were included, and all met the criteria for high quality. Kappa values for the vignette reliability design (five studies) were ≥ 0.7. There were data from two cohorts in one study for the concurrent reliability design, with Kappa values ranging from 0.45 to 1.0. Kappa values for the successive reliability design (data from three cohorts in one study) were < 0.6. The current review found strong evidence of acceptable inter-examiner reliability of MDT classification for extremity problems in the vignette reliability design, limited evidence of acceptable reliability in the concurrent reliability design, and unacceptable reliability in the successive reliability design.
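The agreement statistic used throughout is Cohen's kappa; a minimal sketch of its computation follows, with the 0.6 acceptability cut-off from the review. The two examiners' MDT classifications below are invented.

```python
# Sketch: Cohen's kappa, the chance-corrected agreement statistic used to
# judge inter-examiner reliability, with the 0.6 cut-off from the review.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2  # chance agreement
    return (po - pe) / (1.0 - pe)

a = ["Derangement", "Dysfunction", "Derangement", "Postural", "Other", "Derangement"]
b = ["Derangement", "Dysfunction", "Dysfunction", "Postural", "Other", "Derangement"]
kappa = cohens_kappa(a, b)
print(f"kappa = {kappa:.2f} ({'acceptable' if kappa >= 0.6 else 'unacceptable'})")
```

Correcting for chance agreement is what makes kappa stricter than raw percentage agreement, especially when one classification dominates the sample.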
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.; Kurien, James; Clancy, Daniel (Technical Monitor)
2001-01-01
We present some diagnosis and control problems that are difficult to solve with discrete or purely qualitative techniques. We analyze the nature of the problems, classify them, and explain why they are frequently encountered in systems with closed-loop control. This paper illustrates the problem with several examples drawn from industrial and aerospace applications and presents detailed information on one important application: In-Situ Resource Utilization (ISRU) on Mars. The model for an ISRU plant is analyzed, showing where qualitative techniques are inadequate to identify certain failure modes and to maintain control of the system in degraded environments. We show why the solution to the problem will result in significantly more robust and reliable control systems. Finally, we illustrate requirements for a solution to the problem by means of examples.
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
Cutting planes for the multistage stochastic unit commitment problem
Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul
2016-04-20
As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities, mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
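For reference, the turn on/off inequalities mentioned here are conventionally written as below (commitment status x, startup y, shutdown z, minimum up/down times UT and DT). This is the standard textbook form, reproduced as a reading aid rather than the paper's exact stochastic formulation.

```latex
% Minimum-up/down-time ("turn on/off") inequalities for unit commitment:
% x_t = on/off status, y_t = startup, z_t = shutdown indicators.
\begin{align*}
  y_t - z_t &= x_t - x_{t-1},            && t = 2,\dots,T \\
  \sum_{s=t-UT+1}^{t} y_s &\le x_t,      && t = UT,\dots,T \\
  \sum_{s=t-DT+1}^{t} z_s &\le 1 - x_t,  && t = DT,\dots,T \\
  x_t,\, y_t,\, z_t &\in \{0,1\}
\end{align*}
```

The second and third families say, respectively, that a unit started within the last UT periods must still be on, and a unit shut down within the last DT periods must still be off.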
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1985-01-01
The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for Path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.
Missile and Space Systems Reliability versus Cost Trade-Off Study
1983-01-01
(Report documentation page garbled in scanning; recoverable text follows.) A well planned and funded reliability effort can prevent or ferret out reliability problems, which has real bearing on program effectiveness ... failure analysis, and the incorporation and verification of design corrections to prevent recurrence of failures.
Metal band drives in spacecraft mechanisms
NASA Technical Reports Server (NTRS)
Maus, Daryl
1993-01-01
Transmitting and changing the characteristics of force and stroke is a requirement in nearly all mechanisms. Examples include changing linear to rotary motion, providing a 90 deg change in direction, and amplifying stroke or force. Requirements for size, weight, efficiency and reliability create unique problems in spacecraft mechanisms. Flexible metal band and cam drive systems provide powerful solutions to these problems. Band drives, rack and pinion gears, and bell cranks are compared for effectiveness. Band drive issues are discussed including materials, bend radius, fabrication, attachment and reliability. Numerous mechanisms are shown which illustrate practical applications of band drives.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
1988-09-01
(Text garbled in scanning; recoverable fragments follow.) ... applies to a one Air Transport Rack (ATR) volume LRU in an airborne, uninhabited, fighter environment. The goal is to have a 2000 hour mean time between ... benefits of applying reliability and maintainability improvements to these weapon systems or components. Examples will be given in this research of ... where the Pareto Principle applies. The Pareto analysis applies to field failure types as well as to shop defect types.
Status and Needs of Power Electronics for Photovoltaic Inverters
NASA Astrophysics Data System (ADS)
Qin, Y. C.; Mohan, N.; West, R.; Bonn, R.
2002-06-01
Photovoltaics is the utility-connected distributed energy resource (DER) in most widespread use today. It has one element, the inverter, which is common to all DER sources except rotating generators. The inverter is required to transfer dc energy to ac energy. With all the DER technologies (solar, wind, fuel cells, and microturbines), the inverter is still an immature product that results in reliability problems in fielded systems. Today, the PV inverter is a costly and complex component of PV systems that produce ac power. Inverter MTFF (mean time to first failure) is currently unacceptable. Low inverter reliability contributes to unreliable fielded systems and a loss of confidence in renewable technology. The low volume of PV inverters produced restricts manufacturing to small suppliers without sophisticated research and reliability programs or manufacturing methods. Thus, the present approach to PV inverter supply has a low probability of meeting DOE reliability goals.
Mechanization of and experience with a triplex fly-by-wire backup control system
NASA Technical Reports Server (NTRS)
Lock, W. P.; Petersen, W. R.; Whitman, G. B.
1975-01-01
A redundant three-axis analog control system was designed and developed to back up a digital fly-by-wire control system for an F-8C airplane. Forty-two flights, involving 58 hours of flight time, were flown by six pilots. The mechanization and operational experience with the backup control system, the problems involved in synchronizing it with the primary system, and the reliability of the system are discussed. The backup control system was dissimilar to the primary system, and it provided satisfactory handling through the flight envelope evaluated. Limited flight tests of a variety of control tasks showed that control was also satisfactory when the backup control system was controlled by a minimum-displacement (force) side stick. The operational reliability of the F-8 digital fly-by-wire control system was satisfactory, with no unintentional downmodes to the backup control system in flight. The ground and flight reliability of the system's components is discussed.
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, permitting an efficient exploration of the design space, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
Information for the user in design of intelligent systems
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.
1993-01-01
Recommendations are made for improving intelligent system reliability and usability based on the use of information requirements in system development. Information requirements define the task-relevant messages exchanged between the intelligent system and the user by means of the user interface medium. Thus, these requirements affect the design of both the intelligent system and its user interface. Many difficulties that users have in interacting with intelligent systems are caused by information problems. These information problems result from the following: (1) not providing the right information to support domain tasks; and (2) not recognizing that using an intelligent system introduces new user supervisory tasks that require new types of information. These problems are especially prevalent in intelligent systems used for real-time space operations, where data problems and unexpected situations are common. Information problems can be solved by deriving information requirements from a description of user tasks. Using information requirements embeds human-computer interaction design into intelligent system prototyping, resulting in intelligent systems that are more robust and easier to use.
Design of on-board Bluetooth wireless network system based on fault-tolerant technology
NASA Astrophysics Data System (ADS)
You, Zheng; Zhang, Xiangqi; Yu, Shijie; Tian, Hexiang
2007-11-01
In this paper, Bluetooth wireless data transmission technology is applied to an on-board computer system to realize wireless data transmission between peripherals of the micro-satellite integrated electronic system. In view of the high reliability demanded of a micro-satellite, a design of a Bluetooth wireless network based on fault-tolerant technology is introduced. The reliability of two fault-tolerant systems is first estimated using a Markov model; the structural design of this fault-tolerant system is then introduced. Several protocols are established to make the system operate correctly, and some related problems are listed and analyzed, with emphasis on the fault auto-diagnosis system, active-standby switching design, and data-integrity processing.
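The abstract above reflects a common pattern: estimating the reliability of a redundant pair with a continuous-time Markov model before committing to a design. As a hedged illustration only (the paper's actual states, failure rates, and coverage factors are not given), the following Python sketch computes the reliability of an active/standby pair from an assumed three-state generator matrix; the rate lam and coverage c are invented for the example.

```python
# Minimal sketch (assumed parameters, not the paper's): reliability of an
# active/standby pair via a 3-state continuous-time Markov model.
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-unit failure rate [1/h] -- assumed value
c = 0.95     # probability the standby switch-over succeeds -- assumed

# States: 0 = both units good, 1 = one unit good, 2 = system failed (absorbing)
Q = np.array([
    [-lam,  lam * c,  lam * (1 - c)],
    [ 0.0,  -lam,     lam          ],
    [ 0.0,   0.0,     0.0          ],
])

def reliability(t, p0=np.array([1.0, 0.0, 0.0])):
    """P(system has not reached the absorbing failed state by time t)."""
    p_t = p0 @ expm(Q * t)   # state distribution at time t
    return 1.0 - p_t[2]

for t in (1e3, 5e3, 1e4):  # hours
    print(f"R({t:g} h) = {reliability(t):.6f}")
```

The same pattern extends to larger state spaces: build the generator Q from the architecture, and system reliability follows from the matrix exponential as one minus the absorbing-state probability.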
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a new nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm: the first is limited to the binary series-parallel system, where the problem consists of a selection of elements and redundancy levels used to maximize the system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.
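To make the improvisation mechanics concrete, here is a minimal, hedged sketch of a harmony search for the first (binary series-parallel) problem class; the component reliabilities, costs, budget, and HSA parameters (harmony memory size HMS, memory considering rate HMCR, pitch adjusting rate PAR) are illustrative assumptions, not values from the article.

```python
# Hedged sketch of harmony search for series-parallel redundancy allocation;
# all component data and HSA parameters below are assumed for illustration.
import random

r = [0.80, 0.85, 0.90]      # component reliabilities per subsystem (assumed)
c = [3.0, 4.0, 5.0]         # component costs (assumed)
C_MAX, N_MAX = 40.0, 5      # budget and per-subsystem redundancy cap (assumed)
HMS, HMCR, PAR, ITERS = 10, 0.9, 0.3, 5000

def fitness(x):
    rel = 1.0
    for ri, ni in zip(r, x):
        rel *= 1.0 - (1.0 - ri) ** ni        # parallel subsystem reliability
    cost = sum(ci * ni for ci, ni in zip(c, x))
    return rel - max(0.0, cost - C_MAX)      # linear penalty on infeasibility

memory = [[random.randint(1, N_MAX) for _ in r] for _ in range(HMS)]
for _ in range(ITERS):
    new = []
    for j in range(len(r)):
        if random.random() < HMCR:           # draw from harmony memory
            v = random.choice(memory)[j]
            if random.random() < PAR:        # pitch adjustment: step +/-1
                v = min(N_MAX, max(1, v + random.choice((-1, 1))))
        else:                                # random improvisation
            v = random.randint(1, N_MAX)
        new.append(v)
    worst = min(range(HMS), key=lambda i: fitness(memory[i]))
    if fitness(new) > fitness(memory[worst]):
        memory[worst] = new                  # replace worst harmony

best = max(memory, key=fitness)
print("allocation:", best, "fitness:", round(fitness(best), 4))
```

The penalty-function treatment of the budget constraint mirrors the common practice of turning the constrained allocation into an unconstrained search.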
Human Factors in Financial Trading
Leaver, Meghan; Reader, Tom W.
2016-01-01
Objective This study tests the reliability of a system (FINANS) to collect and analyze incident reports in the financial trading domain and is guided by a human factors taxonomy used to describe error in the trading domain. Background Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents. Method In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data. Results Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) lead to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common. Conclusion We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents. Application This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy. PMID:27142394
Reliability Standards of Complex Engineering Systems
NASA Astrophysics Data System (ADS)
Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.
2017-11-01
Production and manufacturing play an important role in modern society. Industrial production is nowadays characterized by increased and complex communications between its parts, so the problem of preventing accidents in a large industrial enterprise becomes especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance: potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, can even cause a loss of human lives. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions, and the main characteristic here is the stability of its operation. This paper develops a reliability indicator combining set-theory methodology and a state-space method, both widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
A SVM framework for fault detection of the braking system in a high speed train
NASA Astrophysics Data System (ADS)
Liu, Jie; Li, Yan-Fu; Zio, Enrico
2017-03-01
By April 2015, the number of operating High Speed Trains (HSTs) in the world had reached 3603. An efficient, effective and very reliable braking system is critical for trains running at speeds around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders fault detection a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least squares SVM, in which a higher cost is assigned to the error of classifying faulty conditions than to the error of classifying normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets and then applied to the fault detection of braking systems in HSTs; in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
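The paper's modified least squares SVM is not reproduced here, but the cost-asymmetry idea it describes can be approximated in a few lines with a class-weighted standard SVM on synthetic unbalanced data; the dataset, weights, and kernel below are assumptions for illustration only.

```python
# Illustrative analogue (not the paper's modified LS-SVM): penalize
# misclassified faulty samples more heavily via class weights.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(2000, 5))   # abundant normal conditions
X_fault = rng.normal(2.0, 1.0, size=(40, 5))      # rare fault conditions
X = np.vstack([X_normal, X_fault])
y = np.array([0] * 2000 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight raises the cost of errors on the rare faulty class (weight assumed)
clf = SVC(kernel="rbf", class_weight={0: 1.0, 1: 50.0}).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

Without the weight, the classifier tends to collapse to the majority "normal" class, which is exactly the unbalanced-data failure mode the abstract describes.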
Implementation method of multi-terminal DC control system
NASA Astrophysics Data System (ADS)
Yi, Liu; Hao-Ran, Huang; Jun-Wen, Zhou; Hong-Guang, Guo; Yu-Yong, Zhou
2018-04-01
Multi-terminal DC (MTDC) systems currently comprise many converter stations, each of which needs operators to monitor and control its equipment. This entails heavy operation and maintenance effort, low efficiency, and reduced reliability; most importantly, a multi-terminal DC system has a complex control mode, so a problem at one station can compromise control of the whole system. Based on the characteristics of voltage-source-converter multi-terminal DC (VSC-MTDC) systems, this paper presents a robust implementation of a multi-terminal DC Supervisory Control and Data Acquisition (SCADA) system that is networked, integrated, and intelligent. A master control system is added at each station to communicate with the other stations and send current and DC voltage values to each station's pole control system. Based on practical application and information feedback in the China South Power Grid research center VSC-MTDC project, this system improves efficiency and saves converter station maintenance costs, raising the intelligence level and overall effectiveness. Because of the master control system, a multi-terminal hierarchical coordination control strategy is formed, making the control and protection system more efficient and reliable.
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the RMSL area have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to industry and government - case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for which problems. (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data, and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to industry and government - too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true: there are ways to do credible probabilistic analysis with little data. (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to industry and government - correct reliability computations at both the component and system level are needed so one can design an item based on its expected usage and life span. (4) Real World Applications of Probabilistic Methods (PM) - a draft of volume 1, comprising aerospace applications, has been released. Volume 2, a compilation of real-world applications of probabilistic methods with essential information demonstrating application type and time/cost savings from the use of probabilistic methods for generic applications, is in progress. Relevance to industry and government - too often, we say 'the proof is in the pudding'. With help from many contributors, we hope to produce such a document; the problem is that not many people are coming forward due to the proprietary nature of the material, so we ask contributors to document only minimum information, including problem description, method used, whether it resulted in any savings, and how much. (5) Software Reliability - software reliability concepts, programs, implementation, guidelines, and standards are being documented. Relevance to industry and government - software reliability is a complex issue that must be understood and addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability. (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to industry and government - any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance and life.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed in system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
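As a hedged sketch of the quantitative step the study describes, once minimal cut sets and basic-event probabilities are in hand, the top-event probability can be bounded with the first-order (rare-event) approximation and cross-checked by Monte Carlo; the event names and probabilities below are invented, not the Tehran plant's data.

```python
# Hedged sketch: top-event probability from minimal cut sets, via the
# rare-event upper bound and a Monte Carlo check. Events/probabilities assumed.
import random
from math import prod

p = {"operator_error": 0.05, "design_flaw": 0.01,
     "pump_damage": 0.02, "sensor_fault": 0.03}
cut_sets = [{"operator_error"},
            {"design_flaw", "pump_damage"},
            {"pump_damage", "sensor_fault"}]

# First-order (rare-event) approximation: sum of cut-set probabilities
upper = sum(prod(p[e] for e in cs) for cs in cut_sets)

def trial():
    """One Monte Carlo draw: top event occurs if any cut set is fully realized."""
    state = {e: random.random() < pe for e, pe in p.items()}
    return any(all(state[e] for e in cs) for cs in cut_sets)

N = 200_000
mc = sum(trial() for _ in range(N)) / N
print(f"rare-event bound: {upper:.5f}, Monte Carlo: {mc:.5f}")
```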
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, the study of maintenance optimization, known as the optimal maintenance problem, has gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, existing mathematical approaches for solving optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation proposes an alternative methodology for solving optimal maintenance problems that aims to achieve the same end goals as Reliability Centered Maintenance (RCM). RCM was first introduced in the aircraft industry in an attempt to bridge the gap between the empirically driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritization of functions based on criticality and influence are combined with mathematical modeling to obtain optimal maintenance policies. Where this thesis deviates from RCM is in its proposal to directly apply quantitative processes to model the reliability measures in the optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on combinations of the decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to representing the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of numerical optimization techniques to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability.
The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to Time Limited Dispatch (TLD) of a Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, an imperfect maintenance model from quasi-renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on goodness-of-fit measures to yield as parsimonious a model as possible for constructing the optimization problem. Under the assumption of a constant failure rate for lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common approach used in industry that leverages a Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., resulting in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of a strictly Increasing Failure Rate (IFR) for the lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
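A toy version of the thesis's core loop, hedged and with all models assumed rather than taken from the dissertation, looks like this: run Monte Carlo at DOE points over a decision variable (here a preventive maintenance interval), regress the outcomes into a quadratic response surface, and optimize the surrogate.

```python
# Toy sketch of the MC + DOE + RSE idea (all parameters and models assumed):
# simulate a cost-rate reliability measure at DOE points, then fit and
# optimize a quadratic response surface over the PM interval.
import numpy as np

rng = np.random.default_rng(1)

def simulate_cost(T, n=4000):
    """Stand-in MC simulator: cost rate for PM interval T under Weibull wear-out."""
    beta, eta, c_pm, c_cm = 2.5, 1000.0, 1.0, 10.0   # assumed parameters
    life = eta * rng.weibull(beta, size=n)
    cost = np.where(life < T, c_cm, c_pm)            # failure before PM -> corrective
    cycle = np.minimum(life, T)                      # renewal cycle length
    return cost.sum() / cycle.sum()                  # renewal-reward cost rate

T_doe = np.linspace(200.0, 1500.0, 9)                # DOE points (assumed)
y = np.array([simulate_cost(T) for T in T_doe])

A = np.vander(T_doe, 3)                              # quadratic RSE in T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

T_grid = np.linspace(200.0, 1500.0, 500)
T_star = T_grid[np.argmin(np.vander(T_grid, 3) @ coef)]
print(f"RSE-optimal PM interval ~ {T_star:.0f} h")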
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1984-01-01
This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of realtime processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.
A PC program to optimize system configuration for desired reliability at minimum cost
NASA Technical Reports Server (NTRS)
Hills, Steven W.; Siahpush, Ali S.
1994-01-01
High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities; these techniques are therefore approximations of the true optimum solution. This paper describes a computer program that determines the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
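The paper's pair-wise comparative progression algorithm is not reproduced here; as a point of contrast, the following hedged sketch shows a simpler greedy heuristic for the same discrete allocation problem, adding whichever spare yields the best reliability gain per unit cost (component data and budget are assumed).

```python
# Greedy baseline for redundancy allocation (NOT the paper's pair-wise
# progression algorithm): add the spare with best reliability gain per cost.
from math import prod

r = [0.90, 0.75, 0.85]          # subsystem component reliabilities (assumed)
c = [2.0, 3.0, 1.5]             # component costs (assumed)
budget = 12.0

def sys_rel(n):
    """Series system of parallel subsystems with n[i] components each."""
    return prod(1 - (1 - ri) ** ni for ri, ni in zip(r, n))

n = [1, 1, 1]
spent = sum(ci * ni for ci, ni in zip(c, n))
while True:
    best, best_ratio = None, 0.0
    for i in range(len(r)):
        if spent + c[i] > budget:
            continue
        trial = n[:]
        trial[i] += 1
        ratio = (sys_rel(trial) - sys_rel(n)) / c[i]   # marginal gain per cost
        if ratio > best_ratio:
            best, best_ratio = i, ratio
    if best is None:
        break
    n[best] += 1
    spent += c[best]

print("allocation:", n, "reliability:", round(sys_rel(n), 4))
```

Greedy heuristics like this are fast but, unlike the paper's method, carry no guarantee of reaching the true optimum.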
A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-08-01
Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition even if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to target recognition problems are possible only within the solution of a more generic image understanding problem. The brain reduces informational and computational complexity by using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. A biologically inspired Network-Symbolic representation, in which both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible basis for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computation of 3-dimensional models. Network-Symbolic transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground, and perceptual grouping are special kinds of network-symbolic transformations. Such image/video understanding systems will recognize targets reliably.
NASA Technical Reports Server (NTRS)
Wilson, Larry W.
1989-01-01
The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they had correctly simulated and were asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
Intelligent pump test system based on virtual instrument
NASA Astrophysics Data System (ADS)
Ma, Jungong; Wang, Shifu; Wang, Zhanlin
2003-09-01
The intelligent pump system is a key component of the aircraft hydraulic system and can solve problems such as sharp temperature increases. Because the performance of the intelligent pump directly determines that of the aircraft hydraulic system and seriously affects flight security and reliability, it is important to test all performance parameters of the intelligent pump during design and development, and advanced, reliable and complete test equipment is necessary for achieving this goal. In this paper, the application of virtual instrument and computer network technology to aircraft intelligent pump testing is presented. The composition of the hardware, software, and hydraulic circuit of this system is designed and implemented.
Coder Drift: A Reliability Problem for Teacher Observations.
ERIC Educational Resources Information Center
Marston, Paul T.; And Others
The results of two experiments support the hypothesis of "coder drift" which is defined as change that takes place while trained coders are using a system for a number of classroom observation sessions. The coding system used was a modification of the low-inference Flanders System of Interaction Analysis which calls for assigning…
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally sized periods, in each of which one of three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that minimizes total cost and maximizes overall system reliability simultaneously over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The solution approach obtains Pareto optimal solutions that provide good trade-offs between the total cost and the overall reliability of the system. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
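As a hedged, scalarized toy of the discrete decision structure described above (not the article's Pareto GA/SA machinery), the sketch below anneals over a plan of per-component, per-period actions with an assumed cost model and an assumed age-based reliability proxy.

```python
# Skeletal simulated annealing over a discrete maintenance plan
# (0 = do nothing, 1 = maintain, 2 = replace); scalarized toy objective,
# not the article's multi-objective formulation. All parameters assumed.
import math
import random

N_COMP, N_PER = 3, 10
COST = {0: 0.0, 1: 2.0, 2: 6.0}
ALPHA = 0.5                      # age-reduction factor of imperfect maintenance
W = 40.0                         # weight trading reliability against cost

def evaluate(plan):
    total_cost, log_rel = 0.0, 0.0
    for comp in range(N_COMP):
        age = 0.0
        for t in range(N_PER):
            a = plan[comp][t]
            total_cost += COST[a]
            if a == 2:
                age = 0.0        # replacement: good as new
            elif a == 1:
                age *= ALPHA     # imperfect maintenance: partial age reset
            log_rel += -0.01 * age   # toy per-period log-survival
            age += 1.0
    return total_cost - W * log_rel  # minimize cost plus unreliability penalty

plan = [[0] * N_PER for _ in range(N_COMP)]
obj, temp = evaluate(plan), 10.0
for _ in range(20000):
    i, t = random.randrange(N_COMP), random.randrange(N_PER)
    old = plan[i][t]
    plan[i][t] = random.choice([0, 1, 2])
    new_obj = evaluate(plan)
    if new_obj < obj or random.random() < math.exp((obj - new_obj) / temp):
        obj = new_obj            # accept move
    else:
        plan[i][t] = old         # revert
    temp *= 0.9997               # geometric cooling

print("objective:", round(obj, 2))
```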
Educational Data Mining and Problem-Based Learning
ERIC Educational Resources Information Center
Walldén, Sari; Mäkinen, Erkki
2014-01-01
This paper considers the use of log data provided by learning management systems when studying whether students obey the problem-based learning (PBL) method. Log analysis turns out to be a valuable tool in measuring the use of the learning material of interest. It gives reliable figures concerning not only the number of use sessions but also the…
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: first, it can be more reliable... does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition... problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse...
Phased-mission system analysis using Boolean algebraic methods
NASA Technical Reports Server (NTRS)
Somani, Arun K.; Trivedi, Kishor S.
1993-01-01
Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, the system configuration, and the success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state-space explosion that commonly plagues Markov chain-based analysis. A phase algebra was developed to account for the effects of variable configurations and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. The use of the technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on system reliability.
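The phase algebra itself is not reproduced here, but a small Monte Carlo sketch shows why phased-mission analysis needs it: component states persist across phases, so per-phase reliabilities cannot simply be multiplied. The two-phase structure, failure rates, and durations below are assumptions for illustration.

```python
# Monte Carlo sketch of a two-phase mission (not the paper's exact Boolean
# phase algebra): phase 1 needs A or B (parallel), phase 2 needs A and B
# (series). Components persist across phases. Rates/durations assumed.
import math
import random

LAM = {"A": 2e-4, "B": 5e-4}     # exponential failure rates [1/h] (assumed)
T1, T2 = 100.0, 50.0             # phase durations [h] (assumed)

def mission_success():
    life = {k: random.expovariate(l) for k, l in LAM.items()}
    ok1 = life["A"] > T1 or life["B"] > T1                 # phase 1: parallel
    ok2 = life["A"] > T1 + T2 and life["B"] > T1 + T2      # phase 2: series
    return ok1 and ok2

N = 200_000
print("mission reliability ~", sum(mission_success() for _ in range(N)) / N)

# Naive (incorrect) product of per-phase reliabilities, for comparison:
pA1, pB1 = math.exp(-LAM["A"] * T1), math.exp(-LAM["B"] * T1)
pA2, pB2 = math.exp(-LAM["A"] * T2), math.exp(-LAM["B"] * T2)
print("naive product:", (1 - (1 - pA1) * (1 - pB1)) * (pA2 * pB2))
```

The gap between the two numbers is exactly the cross-phase dependence that the paper's phase algebra accounts for analytically.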
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1987-01-01
Specific topics briefly addressed include: the consistent comparison problem in N-version systems; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
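The full Girsanov construction for randomly excited dynamical systems is beyond a short snippet, but the importance-sampling identity underlying it can be illustrated on a scalar tail probability, with the proposal density shifted to a FORM-like design point; everything in the sketch is an assumption chosen for illustration.

```python
# One-dimensional illustration of the importance-sampling identity behind
# Girsanov-based schemes (not the paper's dynamical-system construction):
# estimate P(X > a), X ~ N(0,1), with a proposal shifted to the design point.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
a, N = 4.0, 20_000
shift = a                          # FORM-like choice: centre proposal at design point

x = rng.normal(shift, 1.0, N)      # draws from the shifted density
w = norm.pdf(x) / norm.pdf(x, loc=shift)   # likelihood ratio (Radon-Nikodym weight)
est = np.mean((x > a) * w)
err = np.std((x > a) * w) / np.sqrt(N)

print(f"IS estimate: {est:.3e} +/- {err:.1e}; exact: {norm.sf(a):.3e}")
```

Crude Monte Carlo with the same sample size would see essentially no exceedances of a = 4; the variance reduction comes entirely from sampling where failures live and reweighting, which is the same mechanism the Girsanov transformation provides for stochastic dynamics.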
Li-Tsang, Cecilia W P; Wong, Agnes S K; Leung, Howard W H; Cheng, Joyce S; Chiu, Billy H W; Tse, Linda F L; Chung, Raymond C K
2013-09-01
More children have been diagnosed with specific learning difficulties in recent years as awareness of these conditions has grown. A diagnostic tool has been validated to screen for this condition in the population (the SpLD test for Hong Kong children). However, for specific assessment of handwriting problems, there has been a lack of a standardized and objective evaluation tool. The objective of this study was to validate the Chinese Handwriting Analysis System (CHAS), which is designed to measure both the process and the production of handwriting. The construct validity, convergent validity, internal consistency and test-retest reliability of the CHAS were analyzed using data from 734 grade 1-6 students from 6 primary schools in Hong Kong. Principal Component Analysis revealed that the measurements of the CHAS loaded onto 4 components which accounted for 77.73% of the variance. The correlation between the handwriting accuracy obtained from the CHAS and eyeballing was r=.73. Cronbach's alpha of all measurement items was .65. Except for the SD of writing time per character, all the measurement items regarding handwriting speed, handwriting accuracy and pen pressure showed good to excellent test-retest reliability (r=.72-.96), while the measurement of the number of characters exceeding the grid showed moderate reliability (r=.48). Although there are still ergonomic, biomechanical or unspecified aspects which may not be determined by the system, the CHAS can assist therapists in identifying primary school students with handwriting problems and implementing interventions accordingly.
Speech-driven environmental control systems--a qualitative analysis of users' perceptions.
Judge, Simon; Robertson, Zoë; Hawley, Mark; Enderby, Pam
2009-05-01
To explore users' experiences and perceptions of speech-driven environmental control systems (SPECS) as part of a larger project aiming to develop a new SPECS. The motivation for this part of the project was to add to the evidence base for the use of SPECS and to determine the key design specifications for a new speech-driven system from a user's perspective. Semi-structured interviews were conducted with 12 users of SPECS from around the United Kingdom. These interviews were transcribed and analysed using a qualitative method based on framework analysis. Reliability is the main influence on the use of SPECS. All the participants gave examples of occasions when their speech-driven system was unreliable; in some instances, this unreliability was reported as not being a problem (e.g., for changing television channels); however, it was perceived as a problem for more safety critical functions (e.g., opening a door). Reliability was cited by participants as the reason for using a switch-operated system as back up. Benefits of speech-driven systems focused on speech operation enabling access when other methods were not possible; quicker operation and better aesthetic considerations. Overall, there was a perception of increased independence from the use of speech-driven environmental control. In general, speech was considered a useful method of operating environmental controls by the participants interviewed; however, their perceptions regarding reliability often influenced their decision to have backup or alternative systems for certain functions.
Description and status of NASA-LeRC/DOE photovoltaic applications systems experiments
NASA Technical Reports Server (NTRS)
Ratajczak, A. F.
1978-01-01
In its role of supporting the DOE Photovoltaic Program, the NASA-Lewis Research Center has designed, fabricated, and installed 16 geographically dispersed photovoltaic systems. These systems are powering a refrigerator, a highway warning sign, forest lookout towers, remote weather stations, a water chiller at a visitor center, and insect survey traps. Each of these systems is described in terms of load requirements, solar array and battery size, and instrumentation and controls. Operational experience is described and present status is given for each system. The P/V power systems have proven to be highly reliable, with almost no problems with modules and very few problems overall.
Learning dominance relations in combinatorial search problems
NASA Technical Reports Server (NTRS)
Yu, Chee-Fen; Wah, Benjamin W.
1988-01-01
Dominance relations commonly are used to prune unnecessary nodes in search graphs, but they are problem-dependent and cannot be derived by a general procedure. The authors identify machine learning of dominance relations and the applicable learning mechanisms. A study of learning dominance relations using learning by experimentation is described. This system has been able to learn dominance relations for the 0/1-knapsack problem, an inventory problem, the reliability-by-replication problem, the two-machine flow shop problem, a number of single-machine scheduling problems, and a two-machine scheduling problem. It is considered that the same methodology can be extended to learn dominance relations in general.
High-reliability computing for the smarter planet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Graham, Paul; Manuzzato, Andrea
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows; already, critical infrastructure is failing too frequently. In this paper, we introduce the Cross-Layer Reliability concept for designing more reliable computer systems.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Novel High Integrity Bio-Inspired Systems with On-Line Self-Test and Self-Repair Properties
NASA Astrophysics Data System (ADS)
Samie, Mohammad; Dragffy, Gabriel; Pipe, Tony
2011-08-01
Since the beginning of life, nature has been developing remarkable solutions to the problem of creating reliable systems that can operate under difficult environmental and fault conditions. Yet, no matter how sophisticated our systems are, we are still unable to match the high degree of reliability that biological organisms possess. Since the early 1990s attempts have been made to adapt biological properties and processes to the design of electronic systems, but the results have always been unduly complex. This paper proposes a novel model using a radically new approach to construct highly reliable electronic systems with online fault-repair properties. It uses the characteristics and behaviour of unicellular bacteria and bacterial communities to achieve this. The result is a configurable bio-inspired cellular array architecture that, with built-in self-diagnostic and self-repair properties, can implement any application-specific electronic system but is particularly suited to safety-critical environments, such as space.
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool for constructing models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a modeling and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. Trimming is easy to implement and its error bound easy to compute; hence, the method lends itself to inclusion in an automatic model generator.
Coordinated Control of Wind Turbine and Energy Storage System for Reducing Wind Power Fluctuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muljadi, Eduard; Kim, Chunghun; Chung, Chung Choo
This paper proposes a coordinated control of a wind turbine and an energy storage system (ESS). Wind power (WP) is highly dependent on variable wind speed and can induce severe stability problems in the power system, especially when WP reaches high penetration levels. To address this problem, many power generation companies and grid operators now use ESSs. An ESS has a very quick response and good performance for reducing the impact of WP fluctuation, but its installation cost is high. Therefore, it is very important to design the control algorithm considering both ESS capacity and grid reliability. Thus, we propose a control algorithm that mitigates WP fluctuation through coordinated control between the wind turbine and the ESS, considering the ESS state of charge (SoC) and the WP fluctuation. From deloaded control according to WP fluctuation and ESS SoC management, we can expect extended ESS lifespan and improved grid reliability. The effectiveness of the proposed method is validated in MATLAB/Simulink on a power system including both a wind turbine generator and conventional generators which react to system frequency deviation.
3-Dimensional Root Cause Diagnosis via Co-analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Ziming; Lan, Zhiling; Yu, Li
2012-01-01
With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, a RAS log contains only limited diagnosis information, and the manual processing is time-consuming, error-prone, and not scalable. To address this problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis pinpoints the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.
Removing Barriers for Effective Deployment of Intermittent Renewable Generation
NASA Astrophysics Data System (ADS)
Arabali, Amirsaman
The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system, including the cost of renewable generation and storage, subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load-shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost is realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.
Research requirements to improve reliability of civil helicopters
NASA Technical Reports Server (NTRS)
Dougherty, J. J., III; Barrett, L. D.
1978-01-01
The major reliability problems of the civil helicopter fleet, as reported by helicopter operational and maintenance personnel, are documented. An assessment of each problem is made to determine whether the reliability can be improved by application of present technology or whether additional research and development is required. The reliability impact is measured in three ways: (1) the relative frequency of each problem in the fleet; (2) the relative on-aircraft man-hours to repair associated with each fleet problem; (3) the relative cost of repair materials or replacement parts associated with each fleet problem. The data reviewed covered the period 1971 through 1976 and included only turbine-engine aircraft.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu; Yeh, Cheng-Ta
2016-04-01
In supply chain management, satisfying customer demand is the manager's foremost concern. However, goods may rot or be spoilt during delivery owing to natural disasters, inclement weather, traffic accidents, collisions, and so on, such that the intact goods may not meet market demand. This paper concentrates on a stochastic-flow distribution network (SFDN), in which a node denotes a supplier, a transfer station, or a market, while a route denotes a carrier providing the delivery service for a pair of nodes. The available capacity of a carrier is stochastic because the capacity may be partially reserved by other customers. The addressed problem is to evaluate the system reliability: the probability that the SFDN can satisfy the market demand from multiple suppliers to the customer, accounting for the spoilage rate and a budget constraint. An algorithm is developed in terms of minimal paths to evaluate the system reliability, along with a numerical example to illustrate the solution procedure. A practical case of fruit distribution is presented to emphasise the management implications of the system reliability.
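The minimal-path algorithm itself is not reproduced in the abstract; the sketch below instead estimates the same quantity, the probability that intact deliveries meet demand, by direct Monte Carlo over stochastic carrier capacities and spoilage rates (all numbers are illustrative).

```python
# Monte Carlo sketch of SFDN system reliability: the probability that
# intact delivered goods meet demand given stochastic carrier capacities
# and a per-route spoilage rate. Capacities and rates are illustrative.
import random

def system_reliability(routes, demand, trials=100000):
    """routes: list of (capacity_pmf, spoilage_rate) pairs, where
    capacity_pmf maps available capacity -> probability."""
    success = 0
    for _ in range(trials):
        delivered = 0.0
        for pmf, spoilage in routes:
            caps, probs = zip(*pmf.items())
            cap = random.choices(caps, weights=probs)[0]
            delivered += cap * (1.0 - spoilage)   # intact goods only
        if delivered >= demand:
            success += 1
    return success / trials

# Example: two carriers whose capacity is partially reserved by others.
routes = [({0: 0.1, 5: 0.3, 10: 0.6}, 0.05),
          ({0: 0.2, 8: 0.8}, 0.10)]
print(system_reliability(routes, demand=12))
```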
Toward automatic finite element analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Perucchio, Renato; Voelcker, Herbert
1987-01-01
Two problems must be solved if the finite element method is to become a reliable and affordable black-box engineering tool. Finite element meshes must be generated automatically from computer-aided design databases, and mesh analysis must be made self-adaptive. The experimental system described solves both problems in 2-D through spatial and analytical substructuring techniques that are now being extended into 3-D.
Dokas, Ioannis M; Panagiotakopoulos, Demetrios C
2006-08-01
The available expertise on managing and operating solid waste management (SWM) facilities varies among countries and among types of facilities. Few experts are willing to record their experience, and few researchers systematically investigate the chains of events that could trigger operational failures in a facility; expertise acquisition and dissemination in SWM are neither popular nor easy, despite the great need for them. This paper presents a knowledge acquisition process aimed at capturing, codifying and expanding reliable expertise and propagating it to non-experts. The knowledge engineer (KE), the person performing the acquisition, must identify the events (or causes) that could trigger a failure, determine whether a specific event could trigger more than one failure, and establish how various events are related among themselves and how they are linked to specific operational problems. The proposed process, which utilizes logic diagrams (fault trees) widely used in system safety and reliability analyses, was used for the analysis of 24 common landfill operational problems. The acquired knowledge led to the development of a web-based expert system (Landfill Operation Management Advisor, http://loma.civil.duth.gr), which estimates the likelihood of operational problems, provides advice and suggests solutions.
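For readers unfamiliar with fault trees, the sketch below evaluates a two-level tree with independent basic events; the gates are standard, but the event names and probabilities are invented for illustration and are not taken from the landfill analyses.

```python
# Minimal sketch of the fault-tree logic behind such an expert system:
# the occurrence probability of a top event from independent basic
# events through AND/OR gates. Event names and probabilities are
# illustrative, not taken from the landfill analyses.
from math import prod

def and_gate(probs):   # all inputs must occur
    return prod(probs)

def or_gate(probs):    # at least one input occurs
    return 1.0 - prod(1.0 - p for p in probs)

# Top event "leachate overflow" triggered by (pump failure AND no standby)
# OR heavy-rain surge -- an assumed two-level tree.
p_top = or_gate([and_gate([0.05, 0.4]), 0.02])
print(f"occurrence probability: {p_top:.4f}")
```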
Airborne electronics for automated flight systems
NASA Technical Reports Server (NTRS)
Graves, G. B., Jr.
1975-01-01
The increasing importance of airborne electronics for use in automated flight systems is briefly reviewed with attention to both basic aircraft control functions and flight management systems for operational use. The requirements for high levels of systems reliability are recognized. Design techniques are discussed and the areas of control systems, computing and communications are considered in terms of key technical problems and trends for their solution.
Photovoltaic module reliability improvement through application testing and failure analysis
NASA Technical Reports Server (NTRS)
Dumas, L. N.; Shumka, A.
1982-01-01
During the first four years of the U.S. Department of Energy (DOE) National Photovoltaic Program, the Jet Propulsion Laboratory Low-Cost Solar Array (LSA) Project purchased about 400 kW of photovoltaic modules for tests and experiments. In order to identify, report, and analyze test and operational problems with the Block Procurement modules, a problem/failure reporting and analysis system was implemented by the LSA Project, with the main purpose of providing manufacturers with feedback from test and field experience needed for the improvement of product performance and reliability. A description of the more significant types of failures is presented, taking into account interconnects, cracked cells, dielectric breakdown, delamination, and corrosion. Current design practices and reliability evaluations are also discussed. The evaluation indicates that current module designs incorporate damage-resistant and fault-tolerant features which address the field failure mechanisms observed to date.
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of probability density functions and moments, optimization problems that pursue performance-, robustness- and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability; implementing such designs can indeed have catastrophic consequences.
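A minimal sketch of the underlying idea, propagating an input uncertainty to output moments and a failure probability for a given design variable, is given below; the response model and distributions are assumptions, not the paper's examples.

```python
# Sketch of propagating input uncertainty to output moments and a
# failure probability by Monte Carlo, in the spirit of optimizing over
# probabilistic descriptions. The response function and distributions
# are illustrative assumptions.
import random
import statistics

def propagate(design_x, n=50000, limit=1.0):
    """Return mean, std, and P(failure) of y = g(x, u) for a design
    variable x and an uncertain input u ~ N(0, 0.1)."""
    samples = []
    for _ in range(n):
        u = random.gauss(0.0, 0.1)
        y = design_x * (1.0 + u) ** 2          # assumed response model
        samples.append(y)
    p_fail = sum(y > limit for y in samples) / n
    return statistics.fmean(samples), statistics.stdev(samples), p_fail

print(propagate(design_x=0.9))
```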
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the reliability allocation problem for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
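The flavor of weight-based apportionment can be sketched as follows: each subsystem receives a share of the system reliability goal according to a transformed FMEA rating. The cubic form shown is only a stand-in for the paper's transformed function, and all ratings are illustrative.

```python
# Sketch of weight-based reliability apportionment: each subsystem
# receives a share of the system reliability goal in proportion to a
# FMEA-derived weight. The cubic transform shown is a stand-in for the
# paper's transformed function; weights and the goal are illustrative.
import math

def cubic_transform(score, s_max=10.0):
    """Map a raw FMEA rating to a weight; a hypothetical cubic form."""
    return (score / s_max) ** 3

def apportion(goal, scores):
    """Allocate R_i so that prod(R_i) == goal; higher-weight (worse)
    subsystems receive the laxer reliability targets."""
    w = [cubic_transform(s) for s in scores]
    total = sum(w)
    return [goal ** (wi / total) for wi in w]

targets = apportion(goal=0.95, scores=[8, 5, 3])
print(targets, math.prod(targets))  # product recovers the 0.95 goal
```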
Application of high performance asynchronous socket communication in power distribution automation
NASA Astrophysics Data System (ADS)
Wang, Ziyu
2017-05-01
With the development of information and Internet technology and the growing demand for electricity, the stable and reliable operation of the power system has been the goal of power grid operators. With the advent of the big data era, power data will gradually become an important means of guaranteeing the safe and reliable operation of the power grid. A central question in the electric power industry is therefore how to receive the data transmitted by data acquisition devices efficiently and robustly, so that the power distribution automation system can execute sound decisions quickly. In this paper, some existing problems in power system communication are analysed, and, with the help of network technology, a set of solutions based on asynchronous socket technology is proposed for network communication that meets the requirements of high concurrency and high throughput. The paper also looks forward to the development direction of power distribution automation in the era of big data and artificial intelligence.
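A minimal sketch of the asynchronous socket pattern the paper advocates, using Python's asyncio, is shown below; the port and the newline-delimited framing are assumptions for illustration.

```python
# Minimal sketch of an asynchronous socket server of the kind the paper
# advocates for high-concurrency data acquisition; the port and framing
# (newline-delimited measurements) are illustrative assumptions.
import asyncio

async def handle_device(reader, writer):
    # Each data acquisition device gets its own lightweight coroutine,
    # so thousands of connections can be served without thread-per-client.
    while data := await reader.readline():
        process_measurement(data)          # hand off to decision logic
    writer.close()
    await writer.wait_closed()

def process_measurement(raw: bytes) -> None:
    print("received:", raw.decode(errors="replace").strip())

async def main():
    server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```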
ERIC Educational Resources Information Center
Erford, Bradley T.; Alsamadi, Silvana C.
2012-01-01
Score reliability and validity of parent responses concerning their 10- to 17-year-old students were analyzed using the Screening Test for Emotional Problems-Parent Report (STEP-P), which assesses a variety of emotional problems classified under the Individuals with Disabilities Education Improvement Act. Score reliability, convergent, and…
The Role of Reliability, Vulnerability and Resilience in the Management of Water Quality Systems
NASA Astrophysics Data System (ADS)
Lence, B. J.; Maier, H. R.
2001-05-01
The risk-based performance indicators reliability, vulnerability and resilience provide measures of the frequency, magnitude and duration of the failure of water resources systems, respectively. They have been applied primarily to water supply problems, including the assessment of the performance of reservoirs and water distribution systems. Applications to water quality case studies have been limited, although the need to consider the length and magnitude of violations of a particular water quality standard has been recognized for some time. In this research, the role of reliability, vulnerability and resilience in water quality management applications is investigated by examining their significance as performance measures for water quality systems and assessing their potential for assisting in decision-making processes. The importance of each performance indicator is discussed, and a framework for classifying such systems, based on the relative significance of each of these indicators, is introduced and illustrated qualitatively with various case studies. Quantitative examples drawn from both lake and river water quality modeling exercises are then provided.
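The three indicators are easy to state concretely. The sketch below computes them for a concentration time series against a standard, following the usual frequency/magnitude/duration definitions; the series and standard are illustrative.

```python
# Sketch of the three indicators for a water quality series, following
# the standard frequency/magnitude/duration definitions (Hashimoto et
# al. style). The concentration series and standard are illustrative.
def performance_indicators(series, standard):
    """series: concentrations over time; failure when value > standard."""
    fails = [v > standard for v in series]
    reliability = 1.0 - sum(fails) / len(fails)
    excursions = [v - standard for v in series if v > standard]
    vulnerability = max(excursions, default=0.0)    # worst violation
    # resilience: probability a failure step is followed by recovery
    transitions = sum(1 for a, b in zip(fails, fails[1:]) if a and not b)
    resilience = transitions / sum(fails) if any(fails) else 1.0
    return reliability, vulnerability, resilience

print(performance_indicators([4, 6, 9, 7, 4, 3, 8, 5], standard=6))
```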
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation-based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Girsanov controls are established via the solution of a two-point boundary value problem. • Girsanov controls are obtained via a Volterra series representation of the transfer functions.
Monitoring robot actions for error detection and recovery
NASA Technical Reports Server (NTRS)
Gini, M.; Smith, R.
1987-01-01
Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. Preliminary experiments with a system they designed and constructed are described.
Inter-Vehicle Communication System Utilizing Autonomous Distributed Transmit Power Control
NASA Astrophysics Data System (ADS)
Hamada, Yuji; Sawa, Yoshitsugu; Goto, Yukio; Kumazawa, Hiroyuki
In ad-hoc networks such as inter-vehicle communication (IVC) systems, safety applications are considered in which vehicles periodically broadcast information such as velocity and position. In these applications, if many vehicles broadcast data in a communication area, congestion decreases communication reliability. We propose an autonomous distributed transmit power control method to maintain high communication reliability. In this method, each vehicle controls its transmit power using feedback control. Furthermore, we design a communication protocol to realize the proposed method and evaluate its effectiveness using computer simulation.
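The feedback idea can be sketched as a simple proportional controller that nudges transmit power toward a target channel load; the gain, bounds, and load values below are assumptions, not the paper's protocol parameters.

```python
# Sketch of the feedback idea: each vehicle nudges its transmit power
# toward a target channel load it measures locally. The gain, bounds,
# and load model are illustrative assumptions, not the paper's design.
def update_power(power_dbm, measured_load, target_load=0.6,
                 gain=5.0, p_min=-10.0, p_max=20.0):
    """Proportional feedback: reduce power when the channel is busier
    than the target, raise it when the channel is underused."""
    power_dbm += gain * (target_load - measured_load)
    return max(p_min, min(p_max, power_dbm))

p = 15.0
for load in [0.9, 0.8, 0.7, 0.65, 0.6]:   # congested channel easing off
    p = update_power(p, load)
    print(f"load={load:.2f} -> power={p:.1f} dBm")
```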
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2015-08-01
Since 1990, the National Renewable Energy Laboratory's (NREL) National Wind Technology Center (NWTC) has tested more than 150 wind turbine blades. NWTC researchers can test full-scale and subcomponent articles, conduct data analyses, and provide engineering expertise on best design practices. Structural testing of wind turbine blades enables designers, manufacturers, and owners to validate designs and assess structural performance under specific load conditions. Rigorous structural testing can reveal design and manufacturing problems at an early stage of development, leading to overall improvements in design and increased system reliability.
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
SPS issues: The need to look ahead
NASA Technical Reports Server (NTRS)
Dybdal, K. K.
1980-01-01
The need for a systemic examination of SPS to identify potential problem areas and the issues related to those areas is considered. The use of a systemic approach, a valuable perspective from which to evaluate SPS implementation as a reliable, safe, and cost-efficient energy supply of the future, is discussed.
NASA Technical Reports Server (NTRS)
Joseph, T. A.; Birman, Kenneth P.
1989-01-01
A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under the worst case situations. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We firstly present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then with a Game Theory analogy, we design an approximation algorithm to solve the scheduling problem of partitions, by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the approach proposed in terms of time consumption and acceptance ratio. PMID:27942013
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
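For reference, the common (diameter-insensitive) definition the paper extends can be sketched for a single node as the Shannon entropy of its outflow fractions; uniform flows maximize it, and the flows below are illustrative.

```python
# Simplified sketch of flow entropy for a network node: the Shannon
# entropy of the fractions of outflow carried by each pipe (uniform
# flows maximize it). This is the common definition the paper extends;
# the flows are illustrative.
import math

def node_flow_entropy(pipe_flows):
    total = sum(pipe_flows)
    fracs = [q / total for q in pipe_flows if q > 0]
    return -sum(f * math.log(f) for f in fracs)

print(node_flow_entropy([10, 10, 10]))  # uniform: ln 3 ~ 1.0986
print(node_flow_entropy([28, 1, 1]))    # concentrated: much lower
```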
NASA Astrophysics Data System (ADS)
Yegoshina, O. V.; Voronov, V. N.; Yarovoy, V. O.; Bolshakova, N. A.
2017-11-01
Many problems in the domestic energy sector currently require urgent solutions, one of which is the aging of main and auxiliary equipment. Equipment wear reduces the reliability and efficiency of power plants, and equipment reliability is closely associated with the introduction of cycle chemistry monitoring systems. According to a review of failure rates at power plants, the most frequently damaged equipment comprises boilers (52.2%), turbines (12.6%), and heating systems (12.3%), and heating surfaces account for most boiler damage (73.2%). According to Russian technical requirements, monitoring systems are expected to reduce damage to boiler heating surfaces and to increase equipment reliability. All power units with a capacity over 50 MW are equipped with cycle chemistry monitoring systems in order to maintain water chemistry within operating limits; the main purpose of these systems is to improve water chemistry at power plants. According to the guidelines, the cycle chemistry monitoring system of a single unit depends on its type (drum or once-through boiler) and consists of 20-50 parameters from on-line chemical analyzers, 20-30 daily 'grab' sample analyses, and about 15-20 on-line monitored operating parameters. The operator of a modern power plant must therefore work with a large amount of data from different points of the steam/water cycle. Operators cannot readily assess the quality of the cycle chemistry because of the large volume of daily and per-shift information, the dispersion of the data, and the lack of systematization. In this paper, an algorithm for calculating a quality index is developed to improve control of the water chemistry of the condensate and feed water and to prevent scaling and corrosion in the steam/water cycle.
Mass transit : many management successes at WMATA, but capital planning could be enhanced
DOT National Transportation Integrated Search
2001-07-01
In recent years, the Washington Metropolitan Area Transit Authority's (WMATA) public transit system has experienced problems related to the safety and reliability of its transit services, including equipment breakdowns, delays in scheduled service, u...
Procurement Without Problems: Preparing the RFP.
ERIC Educational Resources Information Center
Epstein, Susan Baerg
1983-01-01
Discussion of factors contributing to successful procurement of automated library system focuses on preparation of Request for Proposal (RFP) and elements included in the RFP--administrative requirements, functional requirements, performance requirements, reliability requirements, testing procedures, standardized response language, location table,…
Recycled Water Poses Disinfectant Problem
ERIC Educational Resources Information Center
Chemical and Engineering News, 1973
1973-01-01
Discusses the possible health hazards resulting from released nucleic acid of inactivated viruses, chlorinated nonliving organic molecules, and overestimated reliability of waste treatment standards. Suggests the recycle system use a dual disinfectant such as chlorine and ozone in water treatment. (CC)
Performance Analysis of Stirling Engine-Driven Vapor Compression Heat Pump System
NASA Astrophysics Data System (ADS)
Kagawa, Noboru
Stirling engine-driven vapor compression systems have many unique advantages, including higher thermal efficiencies, preferable exhaust gas characteristics, multi-fuel usage, and low noise and vibration, which can play an important role in alleviating environmental and energy problems. This paper introduces a design method for such systems based on reliable mathematical methods for the Stirling and Rankine cycles, using reliable thermophysical data for refrigerants. The model deals with a combination of a kinematic Stirling engine and a scroll compressor. Some experimental coefficients are used to formulate the model. The obtained results show the performance behavior in detail, and the measured performance of the actual system coincides with the calculated results. Furthermore, the calculations clarify the performance when alternative refrigerants to R-22 are used.
Improved inflatable landing systems for low cost planetary landers
NASA Astrophysics Data System (ADS)
Northey, Dave; Morgan, Chris
2006-10-01
Inflatable landing systems have been traditionally perceived as a cost-effective solution to the problem of landing a spacecraft on a planetary surface. To date, the systems used have all employed the approach of surrounding the lander with non-vented airbags where the lander on impact bounces a number of times until the impact energy is dissipated. However, the reliability record of such systems is not at all good. This paper examines the problems involved in the use of non-vented airbags, and how these problems have been overcome by the use of vented airbags in terrestrial systems. Using a specific case study, it is shown that even the basic passive type of venting can give significant mass reductions. It is also shown that actively controlling the venting based on the landing scenario can further enhance the performance of vented airbags.
Optimal Management of Redundant Control Authority for Fault Tolerance
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Ju, Jianhong
2000-01-01
This paper is intended to demonstrate the feasibility of a solution to a fault-tolerant control problem. It explains, through a numerical example, the design and operation of a novel scheme for fault-tolerant control. The fundamental principle of the scheme was formalized in [5] based on the notion of normalized nonspecificity. The novelty lies in the use of a reliability criterion for redundancy management, which leads to a high overall system reliability.
CERTS: Consortium for Electric Reliability Technology Solutions - Research Highlights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph
2003-07-30
Historically, the U.S. electric power industry was vertically integrated, and utilities were responsible for system planning, operations, and reliability management. As the nation moves to a competitive market structure, these functions have been disaggregated, and no single entity is responsible for reliability management. As a result, new tools, technologies, systems, and management processes are needed to manage the reliability of the electricity grid. However, a number of simultaneous trends prevent electricity market participants from pursuing development of these reliability tools: utilities are preoccupied with restructuring their businesses, research funding has declined, and the formation of Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs) to operate the grid means that control of transmission assets is separate from ownership of these assets; at the same time, business uncertainty and changing regulatory policies have created a climate in which needed investment in transmission infrastructure and tools for reliability management has dried up. To address the resulting emerging gaps in reliability R&D, CERTS has undertaken much-needed public interest research on reliability technologies for the electricity grid. CERTS' vision is to: (1) Transform the electricity grid into an intelligent network that can sense and respond automatically to changing flows of power and emerging problems; (2) Enhance reliability management through market mechanisms, including transparency of real-time information on the status of the grid; (3) Empower customers to manage their energy use and reliability needs in response to real-time market price signals; and (4) Seamlessly integrate distributed technologies--including those for generation, storage, controls, and communications--to support the reliability needs of both the grid and individual customers.
Foot Plantar Pressure Measurement System: A Review
Razak, Abdul Hadi Abdul; Zayegh, Aladin; Begg, Rezaul K.; Wahab, Yufridin
2012-01-01
Foot plantar pressure is the pressure field that acts between the foot and the support surface during everyday locomotor activities. Information derived from such pressure measures is important in gait and posture research for diagnosing lower limb problems, footwear design, sport biomechanics, injury prevention and other applications. This paper reviews foot plantar sensors characteristics as reported in the literature in addition to foot plantar pressure measurement systems applied to a variety of research problems. Strengths and limitations of current systems are discussed and a wireless foot plantar pressure system is proposed suitable for measuring high pressure distributions under the foot with high accuracy and reliability. The novel system is based on highly linear pressure sensors with no hysteresis. PMID:23012576
An empirical study of flight control software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a laboratory experiment in flight control software reliability are reported. The experiment tests a small sample of implementations of a pitch axis control law for a PA28 aircraft with over 14 million pitch commands, with varying levels of additive input and feedback noise. The testing, which uses the method of n-version programming for error detection, surfaced four software faults in one implementation of the control law. The small number of detected faults precluded the error burst analyses. The pitch axis problem provides data for constructing a model to predict the reliability of software in systems with feedback. The study was undertaken to find means of performing reliability evaluations of flight control software.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (Simulated Annealing and Genetic Algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
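The coupling of random search with a noisy Monte Carlo objective can be sketched as follows; the one-dimensional design space and failure model are invented for illustration and have nothing to do with GP-B itself.

```python
# Sketch of the evolutionary random-search idea: simulated annealing
# where each candidate design is scored by a Monte Carlo simulation.
# The design space and failure model are illustrative assumptions.
import math
import random

def mc_objective(x, n=2000):
    """Estimated probability the design survives; noisy by construction."""
    return sum(random.random() < 1 - abs(x - 0.7) for _ in range(n)) / n

def anneal(steps=200, temp0=0.1):
    x = random.random()
    best_x, best_f = x, mc_objective(x)
    for k in range(steps):
        temp = temp0 * (1 - k / steps)
        cand = min(1.0, max(0.0, x + random.gauss(0, 0.05)))
        delta = mc_objective(cand) - mc_objective(x)
        if delta > 0 or random.random() < math.exp(delta / max(temp, 1e-9)):
            x = cand
        if (f := mc_objective(x)) > best_f:
            best_x, best_f = x, f
    return best_x, best_f

print(anneal())   # should approach x ~ 0.7, the most reliable design
```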
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the shuttle era, NASA utilized a failure reporting system called Problem Reporting and Corrective Action (PRACA); its purpose was to identify and track system non-conformance. Over the years, the PRACA system evolved from a relatively simple way to identify system problems into a very complex tracking and report-generating database, and it became the primary method to categorize any and all anomalies, from corrosion to catastrophic failure. The systems documented in PRACA range from flight hardware to ground and facility support equipment. While the PRACA system is complex, it does contain the failure modes, times of occurrence, lengths of system delay, parts repaired or replaced, and corrective actions performed. The difficulty lies in mining the data and then utilizing them to estimate component, Line Replaceable Unit (LRU), and system reliability metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA database for common ground or facility support equipment. A heuristic developed for reviewing the PRACA data then determines which reports identify a credible failure. These data are used to determine inter-arrival times and thereby estimate repairable component or LRU reliability. This analysis is used to determine failure modes of the equipment, estimate the probability of each component failure mode, and support various quantitative techniques for repairable system analysis. The result is an effective and concise reliability estimate for components used in manned space flight operations. The advantage is that the components or LRUs are evaluated in the same environment and conditions that occur during the launch process.
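The inter-arrival-time step can be sketched directly: given cumulative operating hours at each credible failure, the mean gap estimates the MTBF (and, under an exponential model, a constant failure rate). The timestamps below are illustrative.

```python
# Sketch of the inter-arrival-time step: once credible failures are
# extracted from PRACA-like records, estimate a repairable item's MTBF
# (and an exponential-model failure rate). Timestamps are illustrative.
def mtbf_from_failures(failure_hours):
    """failure_hours: sorted cumulative operating hours at each failure."""
    gaps = [b - a for a, b in zip(failure_hours, failure_hours[1:])]
    mtbf = sum(gaps) / len(gaps)
    return mtbf, 1.0 / mtbf     # MTBF and constant failure rate lambda

failures = [120.0, 410.0, 655.0, 990.0, 1310.0]   # assumed LRU history
mtbf, lam = mtbf_from_failures(failures)
print(f"MTBF = {mtbf:.1f} h, lambda = {lam:.5f} /h")
```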
Numerical Optimization Algorithms and Software for Systems Biology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
Robust reliable sampled-data control for switched systems with application to flight control
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Joby, Maya; Shi, P.; Mathiyalagan, K.
2016-11-01
This paper addresses the robust reliable stabilisation problem for a class of uncertain switched systems with random delays and norm-bounded uncertainties. The main aim is to obtain a reliable robust sampled-data control design, involving random time delay and an appropriate gain control matrix, that achieves robust exponential stabilisation of the uncertain switched system against actuator failures. In particular, the involved delays are assumed to be randomly time-varying and to obey certain mutually uncorrelated Bernoulli-distributed white noise sequences. By constructing an appropriate Lyapunov-Krasovskii functional (LKF) and employing an average dwell-time approach, a new set of criteria is derived for ensuring the robust exponential stability of the closed-loop switched system. More precisely, the Schur complement and Jensen's integral inequality are used in the derivation of the stabilisation criteria. By considering the relationship among the random time-varying delay and its lower and upper bounds, a new set of sufficient conditions is established for the existence of the reliable robust sampled-data control in terms of solutions to linear matrix inequalities (LMIs). Finally, an illustrative example based on the F-18 aircraft model is provided to show the effectiveness of the proposed design procedures.
"Fly-by-Wireless" : A Revolution in Aerospace Architectures for Instrumentation and Control
NASA Technical Reports Server (NTRS)
Studor, George F.
2007-01-01
The conference presentation provides background information on Fly-by-Wireless technologies as well as reasons for implementation, CANEUS project goals, cost of change for instrumentation, reliability, focus areas, conceptual Hybrid SHMS architecture for future space habitats, real world problems that the technology can solve, evolution of Micro-WIS systems, and a WLEIDS system overview and end-to-end system design.
2011-01-01
Background With the increasing use of nanomaterials, the need for methods and assays to examine their immunosafety is becoming urgent, in particular for nanomaterials that are deliberately administered to human subjects (as in the case of nanomedicines). To obtain reliable results, standardised in vitro immunotoxicological tests should be used to determine the effects of engineered nanoparticles on human immune responses. However, before assays can be standardised, it is important that suitable methods are established and validated. Results In a collaborative work between European laboratories, existing immunological and toxicological in vitro assays were tested and compared for their suitability to test effects of nanoparticles on immune responses. The prototypical nanoparticles used were metal (oxide) particles, either custom-generated by wet synthesis or commercially available as powders. Several problems and challenges were encountered during assay validation, ranging from particle agglomeration in biological media and optical interference with assay systems, to chemical immunotoxicity of solvents and contamination with endotoxin. Conclusion The problems that were encountered in the immunological assay systems used in this study, such as chemical or endotoxin contamination and optical interference caused by the dense material, significantly affected the data obtained. These problems have to be solved to enable the development of reliable assays for the assessment of nano-immunosafety. PMID:21306632
Modeling and experimental verification of single event upsets
NASA Technical Reports Server (NTRS)
Fogarty, T. N.; Attia, J. O.; Kumar, A. A.; Tang, T. S.; Lindner, J. S.
1993-01-01
The research performed and the results obtained at the Laboratory for Radiation Studies, Prairie View A&M University and Texas A&I University, on the problem of Single Event Upsets, the various schemes employed to limit them, and the effects they have on reliability and fault tolerance at the system level, such as in robotic systems, are reviewed.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The reliability and performance optimization methods used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. To examine the feasibility of our algorithm, we evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
Pre-Proposal Assessment of Reliability for Spacecraft Docking with Limited Information
NASA Technical Reports Server (NTRS)
Brall, Aron
2013-01-01
This paper addresses the problem of estimating the reliability of a critical system function, as well as its impact on the system reliability, when limited information is available. The approach addresses the basic function reliability and then the impact of multiple attempts to accomplish the function; the dependence of subsequent attempts on prior failures is also addressed. The autonomous docking of two spacecraft was the specific example that generated the inquiry, and the resultant impact on total reliability generated substantial interest in presenting the results, owing to the relative insensitivity of overall performance to the basic function reliability and to moderate degradation, given sufficient attempts to accomplish the required goal. The application of the methodology allows proper emphasis on the characteristics that can be estimated with some knowledge, and insulates the integrity of the design from those characteristics that cannot be properly estimated with any rational value of uncertainty. The nature of NASA's missions contains a great deal of uncertainty due to the pursuit of new science or operations. This approach can be applied to any function where multiple attempts at success, with or without degradation, are allowed.
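The central calculation can be sketched in a few lines: the chance that at least one of n attempts succeeds when each failure degrades the next attempt's reliability. The numbers below are illustrative rather than the study's estimates, but they show the insensitivity to the first-attempt value.

```python
# Sketch of the paper's central quantity: overall success probability
# when up to n docking attempts are allowed and each failed attempt
# degrades the next attempt's reliability by a factor d. The numbers
# are illustrative, not the study's estimates.
def mission_reliability(r1, d, n):
    """r1: first-attempt reliability; d: degradation per prior failure."""
    p_all_fail = 1.0
    r = r1
    for _ in range(n):
        p_all_fail *= (1.0 - r)
        r *= d                    # next attempt is conditioned on failure
    return 1.0 - p_all_fail

# Overall reliability is strikingly insensitive to r1 given enough tries:
for r1 in (0.90, 0.75, 0.60):
    print(r1, round(mission_reliability(r1, d=0.95, n=4), 4))
```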
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Dr. Yanhua; McCandless, Andrew Bascom
The main objective of this project is to improve the performance and reliability of sensor networks in the smart grid through an active interference cancellation technique that can effectively eliminate broadband electromagnetic interference (EMI) and radio frequency interference (RFI). This noise cancellation provides real-time monitoring of the RF environment and automatic optimization of signal fidelity. To determine the feasibility of the proposed technique and quantify the level of improvement in key system parameters, such as data rate, signal bandwidth, and cost savings, the tasks carried out during Phase I were 1) defining the problem statement, 2) developing a design that will solve the sensors' reliability problem, 3) carrying out initial testing with a prototype, and 4) developing an integrated photonic chip version that could be built in a follow-on Phase II effort. The technology demonstration successfully proved the feasibility of a mission assured photonic sensor system (MAPSS) that will address a major interference problem in smart grid deployments. The significant results from bench-top testing show that the technology is capable of maintaining an error-free communication link in the presence of various types of interference. The technology's wideband (GHz) performance was also verified and would be suitable for sensors deployed throughout the smart grid system.
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
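The final stack-based step can be pictured as accumulating path probabilities over an acyclic dependency graph; the sketch below uses an explicit stack, with invented transition probabilities and component reliabilities (the paper's loop handling is not modeled).

```python
# Sketch of the last step: computing system reliability from a
# component probabilistic dependency graph by accumulating path
# probabilities with an explicit stack. The graph is assumed acyclic;
# transition probabilities and component reliabilities are illustrative.
def cpdg_reliability(graph, rel, start, end):
    """graph: node -> list of (next_node, transition_prob);
    rel: node -> component reliability. Returns the sum over paths of
    the product of component reliabilities and transition probs."""
    total = 0.0
    stack = [(start, rel[start])]   # (node, prob of reaching it intact)
    while stack:
        node, p = stack.pop()
        if node == end:
            total += p
            continue
        for nxt, t in graph.get(node, []):
            stack.append((nxt, p * t * rel[nxt]))
    return total

graph = {"A": [("B", 0.7), ("C", 0.3)], "B": [("D", 1.0)], "C": [("D", 1.0)]}
rel = {"A": 0.99, "B": 0.95, "C": 0.90, "D": 0.98}
print(cpdg_reliability(graph, rel, "A", "D"))
```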
Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies
NASA Astrophysics Data System (ADS)
Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa
2017-03-01
To increase the reliability of a specific system, using redundant components is a common method, and the associated design task is called the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems; however, all of these studies assumed a predetermined active or standby strategy for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Therefore, a k-out-of-n series-parallel system is considered in which the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model, the redundancy strategy is considered as an additional decision variable, and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of RAP belongs to the NP-hard class of problems, a modified version of a genetic algorithm (GA) is also developed. The exact method and the proposed GA are implemented on a well-known test problem, and the results demonstrate the efficiency of the new approach compared with previous studies.
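The reliability of a k-out-of-n subsystem under active redundancy with identical, independent components, the quantity such a RAP repeatedly evaluates, is the binomial tail shown below (the standby strategy requires a different model).

```python
# Sketch of the objective evaluated inside such a RAP: the reliability
# of a k-out-of-n subsystem with active redundancy and identical,
# independent components (the standby case needs a different model).
from math import comb

def k_out_of_n(k, n, r):
    """P(at least k of n components with reliability r are working)."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))

# Adding redundant components to a 2-out-of-n subsystem:
for n in range(2, 6):
    print(n, round(k_out_of_n(2, n, r=0.9), 5))
```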
NASA Astrophysics Data System (ADS)
Hosseini-Hashemi, Shahrokh; Sepahi-Boroujeni, Amin; Sepahi-Boroujeni, Saeid
2018-04-01
The normal impact performance of a system comprising a fullerene molecule and a single-layered graphene sheet is studied in the present paper. First, through a mathematical approach, a new contact law is derived to describe the overall non-bonding interaction forces of the "hollow indenter-target" system. Preliminary verifications show that the derived contact law gives a reliable picture of the force field of the system, in good agreement with the results of molecular dynamics (MD) simulations. Afterwards, the equation of transverse motion of the graphene sheet is formulated on the basis of both the nonlocal theory of elasticity and the assumptions of classical plate theory. Then, to derive the dynamic behavior of the system, a set comprising the proposed contact law and the equations of motion of both the graphene sheet and the fullerene molecule is solved numerically. To evaluate the outcomes of this method, the problem is also modeled by MD simulation. Despite intrinsic differences between the analytical and MD methods, as well as various errors arising from the transient nature of the problem, acceptable agreement is established between the analytical and MD outcomes. As a result, the proposed analytical method can be reliably used to address similar impact problems. Furthermore, it is found that a single-layered graphene sheet is capable of trapping fullerenes approaching with low velocities; otherwise, in the case of rebound, the sheet effectively absorbs the predominant portion of the fullerene's energy.
Development and validation of a physics problem-solving assessment rubric
NASA Astrophysics Data System (ADS)
Docktor, Jennifer Lynn
Problem solving is a complex process that is important for everyday life and crucial for learning physics. Although there is a great deal of effort to improve student problem solving throughout the educational system, there is no standard way to evaluate written problem solving that is valid, reliable, and easy to use. Most tests of problem solving performance given in the classroom focus on the correctness of the end result or partial results rather than the quality of the procedures and reasoning leading to the result, which gives an inadequate description of a student's skills. A more detailed and meaningful measure is necessary if different curricular materials or pedagogies are to be compared. This measurement tool could also allow instructors to diagnose student difficulties and focus their coaching. It is important that the instrument be applicable to any problem solving format used by a student and to a range of problem types and topics typically used by instructors. Typically complex processes such as problem solving are assessed by using a rubric, which divides a skill into multiple quasi-independent categories and defines criteria to attain a score in each. This dissertation describes the development of a problem solving rubric for the purpose of assessing written solutions to physics problems and presents evidence for the validity, reliability, and utility of score interpretations on the instrument.
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng
2012-07-01
The reliability apportionment of a spacecraft solar array is of significant importance for spacecraft designers in the early stage of design. However, it is difficult to use existing methods to resolve the reliability apportionment problem because of insufficient data and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method which combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may be apportioned the highest reliability value, while the solar panels and hinges may receive the lowest, before design and manufacturing. The developed method is of practical significance for the reliability apportionment of solar arrays where the design information has not been clearly identified, particularly in the early stage of design.
GEOMAGIA50: An archeointensity database with PHP and MySQL
NASA Astrophysics Data System (ADS)
Korhonen, K.; Donadini, F.; Riisager, P.; Pesonen, L. J.
2008-04-01
The GEOMAGIA50 database stores 3798 archeomagnetic and paleomagnetic intensity determinations dated to the past 50,000 years. It also stores details of the measurement setup for each determination, which are used for ranking the data according to prescribed reliability criteria. The ranking system aims to alleviate the data reliability problem inherent in this kind of data. GEOMAGIA50 is based on two popular open source technologies: the MySQL database management system is used for storing the data, whereas the functionality and user interface are provided by server-side PHP scripts. This brief gives a detailed description of GEOMAGIA50 from a technical viewpoint.
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1990-01-01
Although powerful computers have allowed complex physical and manmade hardware systems to be modeled successfully, we have encountered persistent problems with the reliability of computer models for systems involving human learning, human action, and human organizations. This is not a misfortune; unlike physical and manmade systems, human systems do not operate under a fixed set of laws. The rules governing the actions allowable in the system can be changed without warning at any moment, and can evolve over time. That the governing laws are inherently unpredictable raises serious questions about the reliability of models when applied to human situations. In these domains, computers are better used, not for prediction and planning, but for aiding humans. Examples are systems that help humans speculate about possible futures, offer advice about possible actions in a domain, systems that gather information from the networks, and systems that track and support work flows in organizations.
Halim, Isa; Arep, Hambali; Kamat, Seri Rahayu; Abdullah, Rohana; Omar, Abdul Rahman; Ismail, Ahmad Rasdan
2014-01-01
Background Prolonged standing has been hypothesized as a vital contributor to discomfort and muscle fatigue in the workplace. The objective of this study was to develop a decision support system that could provide systematic analysis and solutions to minimize the discomfort and muscle fatigue associated with prolonged standing. Methods The integration of object-oriented programming and a Model Oriented Simultaneous Engineering System were used to design the architecture of the decision support system. Results Validation of the decision support system was carried out in two manufacturing companies. The validation process showed that the decision support system produced reliable results. Conclusion The decision support system is a reliable advisory tool for providing analysis and solutions to problems related to the discomfort and muscle fatigue associated with prolonged standing. Further testing of the decision support system is suggested before it is used commercially. PMID:25180141
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, J.J.; Bouchard, A.M.; Osbourn, G.C.
Future generation automated human biometric identification and verification will require multiple features/sensors together with internal and external information sources to achieve high performance, accuracy, and reliability in uncontrolled environments. The primary objective of the proposed research is to develop a theoretical and practical basis for identifying and verifying people using standoff biometric features that can be obtained with minimal inconvenience during the verification process. The basic problem involves selecting sensors and discovering features that provide sufficient information to reliably verify a person's identity under the uncertainties caused by measurement errors and tactics of uncooperative subjects. A system was developed for discovering hand, face, ear, and voice features and fusing them to verify the identity of people. The system obtains its robustness and reliability by fusing many coarse and easily measured features into a near minimal probability of error decision algorithm.
Fuzzy probabilistic design of water distribution networks
NASA Astrophysics Data System (ADS)
Fu, Guangtao; Kapelan, Zoran
2011-05-01
The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real numbers but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The degree of satisfaction is represented by a necessity measure or belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability and is effectively combined with the nondominated sorting genetic algorithm II (NSGA-II) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and the Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.
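In the crisp (non-fuzzy) special case, the reliability computed inside the Monte Carlo procedure reduces to the fraction of demand realizations in which every node meets its head requirement. A minimal Python sketch of that inner step, with a toy sampler standing in for the hydraulic solver and the fuzzy satisfaction degrees omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def system_reliability(sample_heads, h_required, n_samples=10_000):
    # Fraction of random demand realizations in which every node meets
    # its head requirement simultaneously.
    hits = 0
    for _ in range(n_samples):
        if np.all(sample_heads(rng) >= h_required):
            hits += 1
    return hits / n_samples

# Toy stand-in for a hydraulic solver: three nodes whose heads vary
# around 32 m under random demand, against a 30 m requirement.
sample = lambda rng: rng.normal(loc=32.0, scale=2.0, size=3)
print(system_reliability(sample, h_required=np.full(3, 30.0)))
```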
The all-electric aircraft - A systems view and proposed NASA research Programs
NASA Technical Reports Server (NTRS)
Spitzer, C. R.
1984-01-01
It is expected that all-electric aircraft, whether military or commercial, will exhibit reduced weight, acquisition cost and fuel consumption, an expanded flight envelope and improved survivability and reliability, simpler maintenance, and reduced support equipment. Also noteworthy are dramatic improvements in mission adaptability, based on the degree to which control system performance relies on easily exchanged software. Flight-critical secondary power and control systems whose malfunction would mean loss of an aircraft pose failure detection and design methodology problems, however, that have only begun to be addressed. NASA-sponsored research activities concerned with these problems and prospective benefits are presently discussed.
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
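The two multiversion ideas reviewed here can be sketched in a few lines. Below is a minimal, hypothetical Python illustration of a majority voter (N-version programming) and a recovery block; real implementations add checkpointing, timeouts, and inexact-voting tolerances:

```python
from collections import Counter

def majority_vote(outputs):
    # N-version programming: accept the output produced by a majority of
    # independently developed versions; None signals a detected,
    # uncorrectable disagreement.
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

def recovery_block(variants, acceptance_test, x):
    # Recovery block: run the primary first and fall back to alternates
    # whenever the acceptance test rejects a result (state restoration
    # between attempts is omitted in this sketch).
    for variant in variants:
        result = variant(x)
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all variants failed the acceptance test")
```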
Don't Trust a Management Metric, Especially in Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2014-01-01
Goodhart's law states that metrics do not work. Metrics become distorted when used and they deflect effort away from more important goals. These well-known and unavoidable problems occurred when the closure and system mass metrics were used to manage life support research. The intent of life support research should be to develop flyable, operable, reliable systems, not merely to increase life support system closure or to reduce its total mass. It would be better to design life support systems to meet the anticipated mission requirements and user needs. Substituting the metrics of closure and total mass for these goals seems to have led life support research to solve the wrong problems.
NASA Technical Reports Server (NTRS)
Martin, Ken E.; Esztergalyos, J.
1992-01-01
The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.
NASA Astrophysics Data System (ADS)
Martin, Ken E.; Esztergalyos, J.
1992-07-01
The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.
Biology doesn't waste energy: that's really smart
NASA Astrophysics Data System (ADS)
Vincent, Julian F. V.; Bogatyreva, Olga; Bogatyrev, Nikolaj
2006-03-01
Biology presents us with answers to design problems that we suspect would be very useful if only we could implement them successfully. We use the Russian theory of problem solving - TRIZ - in a novel way to provide a system for analysis and technology transfer. The analysis shows that whereas technology uses energy as the main means of solving technical problems, biology uses information and structure. Biology is also strongly hierarchical. The suggestion is that smart technology in hierarchical structures can help us to design much more efficient technology. TRIZ also suggests that biological design is autonomous and can be defined by the prefix "self-" with any function. This autonomy extends to the control system, so that the sensor is commonly also the actuator, resulting in simpler systems and greater reliability.
Modal Analysis for Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
The MANGO software provides a solution for improving the small signal stability of power systems by adjusting operator-controllable variables using PMU measurements. System oscillation problems are one of the major threats to grid stability and reliability in California and the Western Interconnection. These problems result in power fluctuations and lower grid operation efficiency, and may even lead to large-scale grid breakup and outages. The software addresses this problem by automatically generating recommended operating procedures, termed Modal Analysis for Grid Operation (MANGO), to improve the damping of inter-area oscillation modes. The MANGO procedure includes three steps: recognizing small signal stability problems, implementing operating point adjustment using modal sensitivity, and evaluating the effectiveness of the adjustment. The MANGO software package is designed to help implement the MANGO procedure.
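The quantity MANGO tries to improve, the damping of an identified oscillation mode, follows directly from the mode's eigenvalue. A small sketch using the standard damping-ratio formula (the 0.3 Hz example mode is illustrative, not a measured Western Interconnection mode):

```python
import numpy as np

def damping_ratio(sigma, omega):
    # Damping ratio of an oscillatory mode s = sigma + j*omega,
    # with omega in rad/s; inter-area modes typically sit near 0.1-1 Hz.
    return -sigma / np.hypot(sigma, omega)

# A poorly damped 0.3 Hz inter-area mode:
print(f"{damping_ratio(-0.05, 2 * np.pi * 0.3):.1%}")  # ~2.7% damping
```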
Validation of the Behavioral Risk Factor Surveillance System Sleep Questions
Jungquist, Carla R.; Mund, Jaime; Aquilina, Alan T.; Klingman, Karen; Pender, John; Ochs-Balcom, Heather; van Wijngaarden, Edwin; Dickerson, Suzanne S.
2016-01-01
Study Objective: Sleep problems may constitute a risk for health problems, including cardiovascular disease, depression, diabetes, poor work performance, and motor vehicle accidents. The primary purpose of this study was to assess the validity of the current Behavioral Risk Factor Surveillance System (BRFSS) sleep questions by establishing their sensitivity and specificity for detection of sleep/wake disturbance. Methods: Repeated cross-sectional assessment of 300 community-dwelling adults over the age of 18 who did not wear CPAP or oxygen during sleep. Reliability and validity testing of the BRFSS sleep questions was performed by comparing BRFSS responses to data from home sleep studies, actigraphy for 14 days, the Insomnia Severity Index, the Epworth Sleepiness Scale, and the PROMIS-57. Results: Only two of the five BRFSS sleep questions were found valid and reliable in determining total sleep time and excessive daytime sleepiness. Conclusions: Refinement of the BRFSS questions is recommended. Citation: Jungquist CR, Mund J, Aquilina AT, Klingman K, Pender J, Ochs-Balcom H, van Wijngaarden E, Dickerson SS. Validation of the behavioral risk factor surveillance system sleep questions. J Clin Sleep Med 2016;12(3):301–310. PMID:26446246
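Sensitivity and specificity, the validity measures used here, come straight from a 2x2 table of questionnaire responses against the reference measurements. A sketch with made-up counts (not the study's data):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one BRFSS question against actigraphy:
sens, spec = sensitivity_specificity(tp=62, fp=21, fn=18, tn=199)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```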
Research program for experiment M133
NASA Technical Reports Server (NTRS)
Frost, J. D., Jr.
1972-01-01
The development of the automatic data-acquisition and sleep-analysis system is reported. The purpose was consultation and evaluation in the transition of the Skylab M133 Sleep-Monitoring Experiment equipment from prototype to flight status; review of problems associated with acquisition and on-line display of data in near-real time via spacecraft telemetry; and development of laboratory facilities and design of equipment to assure reliable playback and analysis of analog data. The existing prototype system was modified, and the changes improve the performance of the analysis circuitry and increase its reliability. These modifications are useful for pre- and postflight analysis, but are not now proposed for the inflight system. There were improvements in the EEG recording cap, some of which will be incorporated into the flight hardware.
Crezee, J; van der Koijk, J F; Kaatee, R S; Lagendijk, J J
1997-04-01
The 27 MHz Multi Electrode Current Source (MECS) interstitial hyperthermia system uses segmented electrodes, 10-20 mm long, to steer the 3D power deposition. This power control at a scale of 1-2 cm requires detailed and accurate temperature feedback data. To this end, seven-point thermocouples are integrated into the probes. The aim of this work was to evaluate the feasibility and reliability of integrated thermometry in the 27 MHz MECS system, with special attention to the interference between electrode and thermometry and its effect on system performance. We investigated the impact of a seven-sensor thermocouple probe (outer diameter 150 μm) on the apparent impedance and power output of a 20 mm dual electrode (outer diameter 1.5 mm) in a polyethylene catheter in a muscle-equivalent medium (σ = 0.6 S m⁻¹). The cross coupling between electrode and thermocouple was found to be small (1-2 pF) and to cause no problems in the dual-electrode mode, and only minimal problems in the single-electrode mode. Power loss into the thermometry system can be prevented using simple filters. The temperature readings are reliable and representative of the actual tissue temperature around the electrode. Self-heating effects, occurring in some catheter materials, are eliminated by sampling the temperature after a short power-off interval. We conclude that integrated thermocouple thermometry is compatible with 27 MHz capacitively coupled interstitial hyperthermia. The performance of the system is not affected and the temperatures measured are a reliable indication of the maximum tissue temperatures.
Expert system for UNIX system reliability and availability enhancement
NASA Astrophysics Data System (ADS)
Xu, Catherine Q.
1993-02-01
Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault-tolerant mechanisms; the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.
Expert System for UNIX System Reliability and Availability Enhancement
NASA Technical Reports Server (NTRS)
Xu, Catherine Q.
1993-01-01
Highly reliable and available systems are critical to the airline industry. However, most off-the-shelf computer operating systems and hardware do not have built-in fault-tolerant mechanisms; the UNIX workstation is one example. In this research effort, we have developed a rule-based Expert System (ES) to monitor, command, and control a UNIX workstation system with hot-standby redundancy. The ES on each workstation acts as an on-line system administrator to diagnose, report, correct, and prevent certain types of hardware and software failures. If a primary station is approaching failure, the ES coordinates the switch-over to a hot-standby secondary workstation. The goal is to discover and solve certain fatal problems early enough to prevent complete system failure from occurring and therefore to enhance system reliability and availability. Test results show that the ES can diagnose all targeted faulty scenarios and take desired actions in a consistent manner regardless of the sequence of the faults. The ES can perform designated system administration tasks about ten times faster than an experienced human operator. Compared with a single workstation system, our hot-standby redundancy system downtime is predicted to be reduced by more than 50 percent by using the ES to command and control the system.
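The switch-over logic at the heart of such a hot-standby pair can be reduced to a heartbeat watcher. A bare-bones sketch, with the rule-based diagnosis, repair actions, and split-brain protection of the actual ES omitted; `primary_alive` and `promote_standby` are placeholder callables:

```python
import time

def watch(primary_alive, promote_standby, max_missed=3, period_s=1.0):
    # Promote the hot-standby workstation once the primary misses
    # several consecutive liveness checks; the counter resets on any
    # successful heartbeat.
    missed = 0
    while missed < max_missed:
        missed = 0 if primary_alive() else missed + 1
        time.sleep(period_s)
    promote_standby()
```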
The Reliability and Construct Validity of Scores on the Attitudes toward Problem Solving Scale
ERIC Educational Resources Information Center
Zakaria, Effandi; Haron, Zolkepeli; Daud, Md Yusoff
2004-01-01
The Attitudes Toward Problem Solving Scale (ATPSS) has received limited attention concerning its reliability and validity with a Malaysian secondary education population. Developed by Charles, Lester & O'Daffer (1987), the instruments assessed attitudes toward problem solving in areas of Willingness to Engage in Problem Solving Activities,…
An architectural approach to create self organizing control systems for practical autonomous robots
NASA Technical Reports Server (NTRS)
Greiner, Helen
1991-01-01
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles towards the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
Müller, H; Naujoks, F; Dietz, S
2002-08-01
Problems encountered during the installation and introduction of an automated anaesthesia documentation system are discussed. Difficulties have to be expected in the area of staff training, because of heterogeneous experience in computer usage, and in the field of online documentation of vital signs. Moreover, the areas of network administration and hardware configuration, as well as general administrative issues, represent possible sources of drawbacks. System administration and reliable support provided by personnel of the department of anaesthesiology, which sustain staff motivation and reduce system downtime, require an adequately staffed department. Based on our own experience, we recommend that anaesthesiology departments considering the future installation and use of an automated anaesthesia documentation system verify sufficient personnel capacity prior to their decision.
NASA Technical Reports Server (NTRS)
Phillips, D. T.; Manseur, B.; Foster, J. W.
1982-01-01
Alternate definitions of system failure lead to complex analyses for which analytic solutions are available only in simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
Electric service reliability cost/worth assessment in a developing country
NASA Astrophysics Data System (ADS)
Pandey, Mohan Kumar
Considerable work has been done in developed countries to optimize the reliability of electric power systems on the basis of reliability cost versus reliability worth. This has yet to be considered in most developing countries, where development plans are still based on traditional deterministic measures. The difficulty with these criteria is that they cannot be used to evaluate the economic impacts of changing reliability levels on the utility and the customers, and therefore cannot lead to an optimum expansion plan for the system. The critical issue today faced by most developing countries is that the demand for electric power is high and growth in supply is constrained by technical, environmental, and most importantly by financial impediments. Many power projects are being canceled or postponed due to a lack of resources. The investment burden associated with the electric power sector has already led some developing countries into serious debt problems. This thesis focuses on power sector issues faced by developing countries and illustrates how a basic reliability cost/worth approach can be used in a developing country to determine appropriate planning criteria and justify future power projects by application to the Nepal Integrated Electric Power System (NPS). A reliability cost/worth based system evaluation framework is proposed in this thesis. Customer surveys conducted throughout Nepal using in-person interviews with approximately 2000 sample customers are presented. The survey results indicate that the interruption cost is dependent on both customer and interruption characteristics, and it varies from one location or region to another. Assessments at both the generation and composite system levels have been performed using the customer cost data and the developed NPS reliability database. The results clearly indicate the implications of service reliability for the electricity consumers of Nepal, and show that reliability cost/worth evaluation is both possible and practical in a developing country. The average customer interruption costs of Rs 35/kWh at Hierarchical Level I and Rs 26/kWh at Hierarchical Level II evaluated in this research led to an optimum reserve margin of 7.5%, which is considerably lower than the traditional reserve margin of 15% used in the NPS. A similar conclusion may result in other developing countries facing difficulties in power system expansion planning using the traditional approach. A new framework for system planning is therefore recommended for developing countries, which would permit an objective review of the traditional system planning approach and the evaluation of future power projects using a new approach based on fundamental principles of power system reliability and economics.
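The cost/worth trade-off behind the 7.5% figure can be illustrated numerically: utility cost rises with the reserve margin while customer interruption cost falls as expected energy not supplied shrinks, and the optimum sits at the minimum of the total. The curves below are illustrative stand-ins tuned so the optimum lands near the reported value, not the thesis's actual data:

```python
import numpy as np

m = np.linspace(0.0, 0.20, 2001)          # candidate reserve margins
utility_cost = 4.0e9 * m                   # Rs/yr to carry reserve capacity
eens = 3.5e7 * np.exp(-25.0 * m)           # expected energy not supplied, kWh/yr
outage_cost = 30.0 * eens                  # customer cost at Rs 30/kWh
total = utility_cost + outage_cost
print(f"optimum reserve margin ~ {m[np.argmin(total)]:.1%}")  # ~7.5%
```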
Xiao, Xiang; Zhu, Hao; Liu, Wei-Jie; Yu, Xiao-Ting; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe
2017-01-01
The International 10/20 system is an important head-surface-based positioning system for transcranial brain mapping techniques, e.g., fNIRS and TMS. As guidance for probe placement, the 10/20 system permits both proper ROI coverage and spatial consistency among multiple subjects and experiments in an MRI-free context. However, the traditional manual approach to the identification of 10/20 landmarks faces problems in reliability and time cost. In this study, we propose a semi-automatic method to address these problems. First, a novel head surface reconstruction algorithm reconstructs head geometry from a set of points uniformly and sparsely sampled on the subject's head. Second, virtual 10/20 landmarks are determined on the reconstructed head surface in computational space. Finally, a visually-guided real-time navigation system guides the experimenter to each of the identified 10/20 landmarks on the physical head of the subject. Compared with the traditional manual approach, our proposed method provides a significant improvement both in reliability and time cost and thus could contribute to improving both the effectiveness and efficiency of 10/20-guided MRI-free probe placement.
Application of redundancy in the Saturn 5 guidance and control system
NASA Technical Reports Server (NTRS)
Moore, F. B.; White, J. B.
1976-01-01
The Saturn launch vehicle's guidance and control system is so complex that the reliability of a simplex system is not adequate to fulfill mission requirements. Thus, to achieve the desired reliability, redundancy encompassing a wide range of types and levels was employed. At one extreme, the lowest level, basic components (resistors, capacitors, relays, etc.) are employed in series, parallel, or quadruplex arrangements to insure continued system operation in the presence of possible failure conditions. At the other extreme, the highest level, complete subsystem duplication is provided so that a backup subsystem can be employed in case the primary system malfunctions. In between these two extremes, many other redundancy schemes and techniques are employed at various levels. Basic redundancy concepts are covered to gain insight into the advantages obtained with various techniques. Points and methods of application of these techniques are included. The theoretical gain in reliability resulting from redundancy is assessed and compared to a simplex system. Problems and limitations encountered in the practical application of redundancy are discussed as well as techniques verifying proper operation of the redundant channels. As background for the redundancy application discussion, a basic description of the guidance and control system is included.
Xu, Wenying; Wang, Zidong; Ho, Daniel W C
2018-05-01
This paper is concerned with the finite-horizon consensus problem for a class of discrete time-varying multiagent systems with external disturbances and missing measurements. To improve the communication reliability, redundant channels are introduced and the corresponding protocol is constructed for the information transmission over redundant channels. An event-triggered scheme is adopted to determine whether the information of agents should be transmitted to their neighbors. Subsequently, an observer-type event-triggered control protocol is proposed based on the latest received neighbors' information. The purpose of the addressed problem is to design a time-varying controller based on the observed information to achieve the consensus performance in a finite horizon. By utilizing a constrained recursive Riccati difference equation approach, some sufficient conditions are obtained to guarantee the consensus performance, and the controller parameters are also designed. Finally, a numerical example is provided to demonstrate the desired reliability of redundant channels and the effectiveness of the event-triggered control protocol.
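Stripped of the observer, disturbances, and missing measurements, the event-triggered idea is that an agent rebroadcasts its state only when it drifts past a threshold from its last transmission. A toy average-consensus sketch under a static trigger (far simpler than the paper's finite-horizon, Riccati-based design):

```python
import numpy as np

def event_triggered_consensus(x0, A, delta=0.05, eps=0.2, steps=200):
    # x_hat holds each agent's last broadcast value; an agent
    # retransmits only when its true state drifts past delta.
    x = np.asarray(x0, dtype=float).copy()
    x_hat = x.copy()
    for _ in range(steps):
        stale = np.abs(x - x_hat) > delta
        x_hat[stale] = x[stale]                       # trigger: broadcast update
        x = x + eps * (A @ x_hat - A.sum(axis=1) * x_hat)
    return x

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(event_triggered_consensus([1.0, 5.0, 9.0], A))  # -> all approximately 5
```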
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
Design and implementation of reliability evaluation of SAS hard disk based on RAID card
NASA Astrophysics Data System (ADS)
Ren, Shaohua; Han, Sen
2015-10-01
Because of its huge advantages for storage, RAID technology has been widely used. A drawback, however, is that hard disks behind a RAID card cannot be queried directly by the operating system, so reading the self-information and log data of a disk has been a problem, even though these data are necessary for hard-disk reliability testing. Traditional methods can read this information only from SATA hard disks, not from SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's Application Program Interface to communicate with the RAID card and analyze the feedback data, thereby obtaining the information needed to assess SAS hard-disk reliability.
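In outline, such a tool talks to the controller's management interface and parses the returned structures. The sketch below is purely illustrative: `vendor_raid_cli` and the JSON output are hypothetical stand-ins, since the paper works against the LSI card's Application Program Interface directly rather than any command-line tool:

```python
import json
import subprocess

def read_drive_info(controller=0):
    # Hypothetical vendor CLI invocation standing in for the RAID
    # card's API; the returned structures would then be parsed for the
    # per-disk self-information and logs used in reliability testing.
    out = subprocess.run(
        ["vendor_raid_cli", f"/c{controller}", "show", "all"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)  # assumes the tool can emit JSON
```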
Berger, Aaron J; Momeni, Arash; Ladd, Amy L
2014-04-01
Trapeziometacarpal, or thumb carpometacarpal (CMC), arthritis is a common problem with a variety of treatment options. Although widely used, the Eaton radiographic staging system for CMC arthritis is of questionable clinical utility, as disease severity does not predictably correlate with symptoms or treatment recommendations. A possible reason for this is that the classification itself may not be reliable, but the literature on this has not, to our knowledge, been systematically reviewed. We therefore performed a systematic review to determine the intra- and interobserver reliability of the Eaton staging system. We systematically reviewed English-language studies published between 1973 and 2013 to assess the degree of intra- and interobserver reliability of the Eaton classification for determining the stage of trapeziometacarpal joint arthritis and pantrapezial arthritis based on plain radiographic imaging. Search engines included PubMed, Scopus®, and CINAHL. Four studies, which included a total of 163 patients, met our inclusion criteria and were evaluated. The level of evidence of the studies included in this analysis was determined using the Oxford Centre for Evidence Based Medicine Levels of Evidence Classification by two independent observers. A limited number of studies have been performed to assess intra- and interobserver reliability of the Eaton classification system. The four studies included were determined to be Level 3b. These studies collectively indicate that the Eaton classification demonstrates poor to fair interobserver reliability (kappa values: 0.11-0.56) and fair to moderate intraobserver reliability (kappa values: 0.54-0.657). Review of the literature demonstrates that radiographs assist in the assessment of CMC joint disease, but there is not a reliable system for classification of disease severity. Currently, diagnosis and treatment of thumb CMC arthritis are based on the surgeon's qualitative assessment combining history, physical examination, and radiographic evaluation. Inconsistent agreement using the current common radiographic classification system suggests a need for better radiographic tools to quantify disease severity.
DRS: Derivational Reasoning System
NASA Technical Reports Server (NTRS)
Bose, Bhaskar
1995-01-01
The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high level functional specifications. DRS incorporates an executable specification language, a set of correctness preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving, and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.
A study of fuzzy logic ensemble system performance on face recognition problem
NASA Astrophysics Data System (ADS)
Polyakova, A.; Lipinskiy, L.
2017-02-01
Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since it builds on the diversity of its components. A new method of intelligent information technology ensemble design is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine, and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal components analysis (PCA) is used for feature selection.
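A runnable approximation of the ensemble's structure using scikit-learn, with soft (probability-averaging) voting standing in for the paper's fuzzy-logic combiner and the bundled digits set standing in for face data:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# The bundled digits set keeps the example runnable; a real experiment
# would use PCA-reduced face features as in the paper.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("ann", MLPClassifier(max_iter=1000, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("tree", DecisionTreeClassifier(random_state=0)),
    ],
    voting="soft",  # probability averaging in place of the fuzzy combiner
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.3f}")
```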
Comprehensive clinical assessment in community setting: applicability of the MDS-HC.
Morris, J N; Fries, B E; Steel, K; Ikegami, N; Bernabei, R; Carpenter, G I; Gilgen, R; Hirdes, J P; Topinková, E
1997-08-01
To describe the results of an international trial of the home care version of the MDS assessment and problem identification system (the MDS-HC), including reliability estimates, a comparison of MDS-HC reliabilities with reliabilities of the same items in the MDS 2.0 nursing home assessment instrument, and an examination of the types of problems found in home care clients using the MDS-HC. Independent, dual assessment of clients of home-care agencies by trained clinicians using a draft of the MDS-HC, with additional descriptive data regarding problem profiles for home care clients. Reliability data from dual assessments of 241 randomly selected clients of home care agencies in five countries, all of whom volunteered to test the MDS-HC. Also included are an expanded sample of 780 home care assessments from these countries and 187 dually assessed residents from 21 nursing homes in the United States. The array of MDS-HC assessment items included measures in the following areas: personal items, cognitive patterns, communication/hearing, vision, mood and behavior, social functioning, informal support services, physical functioning, continence, disease diagnoses, health conditions and preventive health measures, nutrition/hydration, dental status, skin condition, environmental assessment, service utilization, and medications. Forty-seven percent of the functional, health status, social environment, and service items in the MDS-HC were taken from the MDS 2.0 for nursing homes. For this item set, it is estimated that the average weighted Kappa is .74 for the MDS-HC and .75 for the MDS 2.0. Similarly, high reliability values were found for items newly introduced in the MDS-HC (weighted Kappa = .70). Descriptive findings also characterize the problems of home care clients, with subanalyses within cognitive performance levels. Findings indicate that the core set of items in the MDS 2.0 work equally well in community and nursing home settings. New items are highly reliable. In tandem, these instruments can be used within the international community, assisting in planning care for older adults within a broad spectrum of service settings, including nursing homes and home care programs. With this community-based, second-generation problem and care plan-driven assessment instrument, disability assessment can be performed consistently across the world.
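The agreement statistic used throughout these trials is kappa. A compact sketch of the unweighted version for two raters (the study reports weighted kappa, which additionally credits near-miss ratings); the rater codes below are toy data:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    # Unweighted kappa: (observed agreement - chance agreement) /
    # (1 - chance agreement), from the raters' confusion matrix.
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    p_obs = np.trace(conf)
    p_exp = conf.sum(axis=1) @ conf.sum(axis=0)
    return (p_obs - p_exp) / (1 - p_exp)

r1 = [0, 1, 2, 2, 0, 1, 2, 0, 1, 2]
r2 = [0, 1, 2, 1, 0, 1, 2, 0, 2, 2]
print(f"kappa = {cohens_kappa(r1, r2, 3):.2f}")  # -> 0.70
```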
Robust penalty method for structural synthesis
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1983-01-01
The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
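For contrast with the singular-perturbation variant, a conventional exterior-penalty SUMT loop looks like the sketch below; the growing penalty parameter r is precisely what breeds the ill-conditioning discussed above. The toy constrained problem and its parameters are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def sumt(f, constraints, x0, r0=1.0, growth=10.0, outer_iters=6):
    # Minimize f(x) + r * sum(max(0, g(x))^2) for growing r, where each
    # g(x) <= 0 is an inequality constraint.
    x = np.asarray(x0, dtype=float)
    r = r0
    for _ in range(outer_iters):
        def penalized(x, r=r):
            return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
        x = minimize(penalized, x, method="BFGS").x
        r *= growth
    return x

# min (x1-2)^2 + (x2-1)^2 subject to x1 + x2 <= 2; optimum is (1.5, 0.5).
sol = sumt(lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2,
           [lambda x: x[0] + x[1] - 2.0], x0=[0.0, 0.0])
print(np.round(sol, 3))
```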
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. To improve the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models. This is done using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
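Once a fast surrogate for the maximum LTR is available, the rollover risk is a straightforward Monte Carlo probability. A sketch with a made-up surrogate and illustrative parameter distributions (the paper trains an SVM on the vehicle model instead):

```python
import numpy as np

rng = np.random.default_rng(7)

def rollover_risk(predict_max_ltr, threshold=0.9, n=20_000):
    # P(max LTR over the bend exceeds the critical threshold),
    # estimated by sampling the uncertain inputs.
    exceed = sum(predict_max_ltr(rng) > threshold for _ in range(n))
    return exceed / n

def toy_surrogate(rng):
    # Made-up response standing in for the SVM surrogate: max LTR grows
    # with bend entry speed and loaded centre-of-gravity height.
    speed = rng.normal(20.0, 1.5)      # m/s
    cg_height = rng.normal(1.8, 0.1)   # m
    return 0.0012 * speed**2 * cg_height

print(f"estimated rollover risk: {rollover_risk(toy_surrogate):.3f}")
```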
Assessing local instrument reliability and validity: a field-based example from northern Uganda.
Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul
2009-08-01
This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from alpha = 0.84-0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming
2013-04-03
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
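A minimal look-ahead dispatch of this kind can be posed as a linear program over the horizon, re-solved each step with only the first move applied, which is the receding-horizon essence of MPC. The sketch below uses illustrative limits and a linear fuel cost; the paper's formulation also weighs equipment life and wind utilization:

```python
import numpy as np
from scipy.optimize import linprog

def dispatch_horizon(net_load, soc0, dt=1.0, e_max=500.0, p_batt=100.0,
                     p_gen=400.0, fuel_cost=0.3):
    # Decision vector z = [g_0..g_{T-1}, b_0..b_{T-1}]: diesel power g_t
    # and battery discharge b_t (negative means charging).
    T = len(net_load)
    c = np.r_[np.full(T, fuel_cost * dt), np.zeros(T)]   # fuel cost only
    A_eq = np.hstack([np.eye(T), np.eye(T)])             # g_t + b_t = load_t
    b_eq = np.asarray(net_load, dtype=float)
    # State of charge soc_t = soc0 - dt * cumsum(b) must stay in [0, e_max].
    L = np.tril(np.ones((T, T))) * dt
    A_ub = np.vstack([np.hstack([np.zeros((T, T)), -L]),
                      np.hstack([np.zeros((T, T)),  L])])
    b_ub = np.r_[np.full(T, e_max - soc0), np.full(T, soc0)]
    bounds = [(0.0, p_gen)] * T + [(-p_batt, p_batt)] * T
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:T], res.x[T:]   # apply only the first step, then re-solve

gen, batt = dispatch_horizon(net_load=[120.0, 80.0, -40.0, 60.0], soc0=200.0)
print(np.round(gen, 1), np.round(batt, 1))
```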
Quality control in the year 2000.
Schade, B
1992-01-01
'Just-in-time' production is a prerequisite for a company to meet the challenges of competition. Manufacturing cycles have been so successfully optimized that release time now has become a significant factor. A vision for a major quality-control (QC) contribution to profitability in this decade seems to be the just-in-time release. Benefits will go beyond cost savings for lower inventory. The earlier detection of problems will reduce rejections and scrap. In addition, problem analysis and problem-solving will be easier. To achieve just-in-time release, advanced automated systems like robots will become the workhorses in QC for high volume pharmaceutical production. The requirements for these systems are extremely high in terms of quality, reliability and ruggedness. Crucial for the success might be advances in use of microelectronics for error checks, system recording, trouble shooting, etc. as well as creative new approaches (for example the use of redundant assay systems).
Quality control in the year 2000
Schade, Bernd
1992-01-01
‘Just-in-time’ production is a prerequisite for a company to meet the challenges of competition. Manufacturing cycles have been so successfully optimized that release time now has become a significant factor. A vision for a major quality-control (QC) contribution to profitability in this decade seems to be the just-in-time release. Benefits will go beyond cost savings for lower inventory. The earlier detection of problems will reduce rejections and scrap. In addition, problem analysis and problem-solving will be easier. To achieve just-in-time release, advanced automated systems like robots will become the workhorses in QC for high volume pharmaceutical production. The requirements for these systems are extremely high in terms of quality, reliability and ruggedness. Crucial for the success might be advances in use of microelectronics for error checks, system recording, trouble shooting, etc. as well as creative new approaches (for example the use of redundant assay systems). PMID:18924930
Simulation of hybrid wind/solar systems for typical areas of Brazil and Cuba
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villar Ale, J.A.; Garcia, F.H.
Brazil and Cuba share a common history of serious problems related to the electrification of their isolated communities. Both countries have renewable energy resources that allow, for instance, the use of photovoltaic and wind systems. These systems can be used in an integrated way, known as hybrid systems, achieving better reliability and economy. This work presents a simplified methodology for the design of such systems to be applied to the electrification of rural areas in both countries.
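A common figure of merit in such simplified sizing methodologies is the loss-of-power-supply probability (LPSP): the fraction of hours the hybrid system plus storage cannot serve the load. A sketch under the assumption of hourly series and a lossless battery (efficiencies and inverter losses omitted):

```python
def lpsp(wind_kw, solar_kw, load_kw, battery_kwh, soc0_frac=0.5):
    # Hourly energy balance with a lossless battery: count the hours in
    # which generation plus storage cannot serve the load.
    soc = soc0_frac * battery_kwh
    unmet_hours = 0
    for w, s, l in zip(wind_kw, solar_kw, load_kw):
        soc += w + s - l                 # kWh, since steps are one hour
        if soc < 0.0:                    # storage exhausted: unserved load
            unmet_hours += 1
            soc = 0.0
        soc = min(soc, battery_kwh)      # surplus beyond capacity is curtailed
    return unmet_hours / len(load_kw)

# Three hours of toy data (kW) with a 6 kWh battery:
print(lpsp([2, 0, 1], [3, 1, 0], [4, 4, 4], battery_kwh=6.0))  # -> 0.333...
```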
NASA Astrophysics Data System (ADS)
Zhizhimov, Oleg; Mazov, Nikolay; Skibin, Sergey
Questions concerning the construction and operation of distributed information systems based on the ANSI/NISO Z39.50 Information Retrieval Protocol are discussed in the paper, which draws on the authors' practice in developing the ZooPARK server. The architecture of distributed information systems, the reliability of such systems, minimization of search time, and administration are examined. Problems in developing distributed information systems are also described.
77 FR 16175 - Station Blackout
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-20
... not have access to ADAMS or if there are problems in accessing the documents located in ADAMS, contact... with turbine trip and unavailability of the onsite emergency ac power system). Station blackout does... powered, such as turbine- or diesel-driven pumps. Thus, the reliability of such components, dc battery...
Study on advanced information processing system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1992-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. In particular, the Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for our presentation. When the system is free of latent faults, the probability of system crash due to nearly-coincident channel faults is shown to be insignificant even when the outputs of computing channels are infrequently voted on. In particular, using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs, with a low hardware overhead, can be used to reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, we have developed two schemes, called Scheme 1 and Scheme 2, to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
Study on fault-tolerant processors for advanced launch system
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Liu, Jyh-Charn
1990-01-01
Issues related to the reliability of a redundant system with large main memory are addressed. The Fault-Tolerant Processor (FTP) for the Advanced Launch System (ALS) is used as a basis for the presentation. When the system is free of latent faults, the probability of system crash due to multiple channel faults is shown to be insignificant even when voting on the outputs of computing channels is infrequent. Using channel error maskers (CEMs) is shown to improve reliability more effectively than increasing redundancy or the number of channels for applications with long mission times. Even without using a voter, most memory errors can be immediately corrected by those CEMs implemented with conventional coding techniques. In addition to their ability to enhance system reliability, CEMs, with a very low hardware overhead, can be used to dramatically reduce not only the need for memory realignment, but also the time required to realign channel memories in the rare case that such a need arises. Using CEMs, two different schemes were developed to solve the memory realignment problem. In both schemes, most errors are corrected by CEMs, and the remaining errors are masked by a voter.
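The masking step itself can be as simple as a bitwise majority across the redundant channels, which corrects any single-channel memory error without a realignment pass; the CEMs described above achieve the same effect more cheaply with conventional error-correcting codes. A sketch of the three-channel voter:

```python
def bitwise_majority(a, b, c):
    # Each result bit is set when at least two channels agree, so any
    # single-channel upset is masked.
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
corrupted = word ^ 0b0000_1000            # single-bit upset in one channel
assert bitwise_majority(word, corrupted, word) == word
```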
Highly Survivable Avionics Systems for Long-Term Deep Space Exploration
NASA Technical Reports Server (NTRS)
Alkalai, L.; Chau, S.; Tai, A. T.
2001-01-01
The design of highly survivable avionics systems for long-term (>10 years) exploration of space is an essential technology for all current and future missions in the Outer Planets roadmap. Long-term exposure to extreme environmental conditions, such as high radiation and low temperatures, makes survivability in space a major challenge. Moreover, current and future missions increasingly use commercial technology, such as deep sub-micron (0.25 micron) fabrication processes with specialized circuit designs, commercial interfaces, processors, memory, and other commercial off-the-shelf components that were not designed for long-term survivability in space. Therefore, the design of highly reliable and available systems for the exploration of Europa, Pluto, and other deep-space destinations requires a comprehensive and fresh approach to the problem. This paper summarizes work in progress in three areas: a framework for the design of highly reliable and highly available space avionics systems, a distributed reliable computing architecture, and Guarded Software Upgrading (GSU) techniques for upgrading software during long-term missions. Additional information is contained in the original extended abstract.
A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components
NASA Technical Reports Server (NTRS)
Matthews, William T.; Neal, Donald M.
1997-01-01
This paper introduces a dual active load path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components for an active dual path system, enhanced by a stress level bias in one component. The design is applied to a baseline design: a safe life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and for weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient of variation levels. Results showed that the biased dual path system lifetimes, for both the first element failure and the residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999, and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.
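The simulation idea is easy to reproduce in miniature: draw fatigue lives for the two active paths, bias one, and read first-failure and residual lives off the order statistics. The sketch below uses illustrative Weibull parameters and ignores the load redistribution onto the surviving path after first failure:

```python
import numpy as np

rng = np.random.default_rng(1)

def dual_path_lives(n=100_000, eta=1.0, beta=4.0, bias=1.15):
    # Two active load paths with Weibull fatigue lives; the stress bias
    # shortens one path's life so the two failures are staggered.
    life_a = (eta / bias) * rng.weibull(beta, n)   # biased (hotter) path
    life_b = eta * rng.weibull(beta, n)            # nominal path
    first = np.minimum(life_a, life_b)
    residual = np.maximum(life_a, life_b) - first
    return first, residual

first, residual = dual_path_lives()
print(f"life at R = 0.999 (first failure): {np.quantile(first, 0.001):.3f}")
print(f"median residual life after first failure: {np.median(residual):.3f}")
```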
Semiconductor measurement technology: Microelectronic ultrasonic bonding
NASA Technical Reports Server (NTRS)
Harman, G. G. (Editor)
1974-01-01
Information for making high quality ultrasonic wire bonds is presented as well as data to provide a basic understanding of the ultrasonic systems used. The work emphasizes problems and methods of solving them. The required measurement equipment is first introduced. This is followed by procedures and techniques used in setting up a bonding machine, and then various machine- or operator-induced reliability problems are discussed. The characterization of the ultrasonic system and its problems are followed by in-process bonding studies and work on the ultrasonic bonding (welding) mechanism. The report concludes with a discussion of various effects of bond geometry and wire metallurgical characteristics. Where appropriate, the latest, most accurate value of a particular measurement has been substituted for an earlier reported one.
Reliability and validity analysis of the open-source Chinese Foot and Ankle Outcome Score (FAOS).
Ling, Samuel K K; Chan, Vincent; Ho, Karen; Ling, Fona; Lui, T H
2017-12-21
To develop the first reliable and validated open-source outcome scoring system in the Chinese language for foot and ankle problems, the English FAOS was translated into Chinese following standard protocols. First, two forward-translations were created separately; these were then combined into a preliminary version by an expert committee and subsequently back-translated into English. The process was repeated until the original and back translations were congruent. This version was then field tested on actual patients, who provided feedback for modification. The final Chinese FAOS version was then tested for reliability and validity. Reliability analysis was performed on 20 subjects, while validity analysis was performed on 50 subjects. Tools used to validate the Chinese FAOS were the SF36 and the Pain Numeric Rating Scale (NRS). Internal consistency between the FAOS subgroups was measured using Cronbach's alpha. Spearman's correlation was calculated between each subgroup in the FAOS, the SF36, and the NRS. The Chinese FAOS passed both reliability and validity testing, meaning it is reliable, internally consistent, and correlates positively with the SF36 and the NRS. The Chinese FAOS is a free, open-source scoring system that can be used to provide a relatively standardised outcome measure for foot and ankle studies.
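Cronbach's alpha, the internal-consistency statistic used here, is quick to compute from a subjects-by-items score matrix. A sketch with made-up data:

```python
import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    # for an (n_subjects, n_items) matrix.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Made-up data: 6 respondents by 4 items on a 5-point scale.
scores = [[4, 4, 5, 4], [2, 3, 2, 2], [5, 5, 4, 5],
          [3, 3, 3, 4], [1, 2, 1, 1], [4, 3, 4, 4]]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # -> about 0.96
```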
Tschirren, Lea; Bauer, Susanne; Hanser, Chiara; Marsico, Petra; Sellers, Diane; van Hedel, Hubertus J A
2018-06-01
As there is little evidence for the concurrent validity of the Eating and Drinking Ability Classification System (EDACS), this study aimed to determine its concurrent validity and reliability in children and adolescents with cerebral palsy (CP). After an extensive translation procedure, we applied the German language version to 52 participants with CP (30 males, 22 females, mean age 9y 7mo [SD 4y 2mo]). We correlated the EDACS levels with the Bogenhausener Dysphagiescore (BODS), and the EDACS level of assistance with the Manual Ability Classification System (MACS) and the 'eating' item of the Functional Independence Measure for Children (WeeFIM), using Kendall's tau (Kτ). We further quantified the interrater reliability between speech and language therapists (SaLTs) and between SaLTs and parents with kappa (κ). The EDACS levels correlated highly with the BODS (Kτ = 0.79), and the EDACS level of assistance correlated highly with the MACS (Kτ = 0.73) and the WeeFIM eating item (Kτ = -0.80). Interrater reliability proved almost perfect between SaLTs (EDACS: κ = 0.94; EDACS level of assistance: κ = 0.89) and between SaLTs and parents (EDACS: κ = 0.82; EDACS level of assistance: κ = 0.89). The EDACS levels and level of assistance appear valid and showed almost perfect interrater reliability when classifying eating and drinking problems in children and adolescents with CP. The Eating and Drinking Ability Classification System (EDACS) correlates well with a dysphagia score. The EDACS level of assistance proves valid. The German version of EDACS is highly reliable. EDACS correlates moderately to highly with other classification systems.
Adaptation of a military FTS to civilian air toxics measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, J.R.; Dorval, R.K.
1994-12-31
In many ways, the military problem of chemical agent detection is similar to the civilian problem of detecting toxic and related air pollutants. A recent program to design a next-generation Fourier transform spectrometer (FTS) based chemical agent detection system has been funded by the US Army. This program has resulted in an FTS system with a number of characteristics that make it suitable for the civilian measurement problem. Low power, low weight, and small size lead to low installation, operating, and maintenance costs. Innovative use of diode lasers in place of HeNe reference sources leads to long lifetimes and high reliability. Absolute scan position servos allow for highly efficient offset scanning. This paper relates the performance of this system to present air monitoring requirements.
Crowd Computing as a Cooperation Problem: An Evolutionary Approach
NASA Astrophysics Data System (ADS)
Christoforou, Evgenia; Fernández Anta, Antonio; Georgiou, Chryssis; Mosteiro, Miguel A.; Sánchez, Angel
2013-05-01
Cooperation is one of the socio-economic issues that has received the most attention from the physics community. The problem has mostly been considered by studying games such as the Prisoner's Dilemma or the Public Goods Game. Here, we take a step forward by studying cooperation in the context of crowd computing. We introduce a model loosely based on principal-agent theory in which people (workers) contribute to the solution of a distributed problem by computing answers and reporting to the problem proposer (master). To go beyond classical approaches involving the concept of Nash equilibrium, we work in an evolutionary framework in which both the master and the workers update their behavior through reinforcement learning. Using a Markov chain approach, we show theoretically that under certain (not very restrictive) conditions, the master can ensure the reliability of the answer resulting from the process. Then, we study the model by numerical simulations, finding that convergence, meaning that the system reaches a point at which it always produces reliable answers, may in general be much faster than the upper bounds given by the theoretical calculation. We also discuss the effects of the master's level of tolerance to defectors, about which the theory does not provide information. The discussion shows that the system works even with very large tolerances. We conclude with a discussion of our results and possible directions to carry this research further.
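To make the reinforcement dynamics concrete, here is a loose simulation sketch in Python of a master auditing a small crowd of workers; the payoff values, learning rule, and audit-adaptation rule are illustrative assumptions, not the parameterization used in the paper.

```python
import random

# Workers choose to compute honestly (paying a cost) or to cheat;
# the master audits with some probability and rewards or punishes.
N_WORKERS, ROUNDS = 9, 2000
REWARD, PUNISHMENT, COST = 1.0, -2.0, 0.3   # assumed payoffs
LEARN = 0.05                                 # assumed learning rate

p_honest = [0.5] * N_WORKERS   # each worker's probability of computing
p_audit = 0.5                  # master's probability of verifying

for t in range(ROUNDS):
    honest = [random.random() < p for p in p_honest]
    audited = random.random() < p_audit
    for i, h in enumerate(honest):
        if audited:
            payoff = (REWARD - COST) if h else PUNISHMENT
        else:
            # unaudited rounds: cheaters save the computing cost
            payoff = (REWARD - COST) if h else REWARD
        # simple reinforcement: positive payoff reinforces current action
        target = 1.0 if h else 0.0
        if payoff > 0:
            p_honest[i] += LEARN * (target - p_honest[i])
        else:
            p_honest[i] -= LEARN * (target - p_honest[i])
        p_honest[i] = min(1.0, max(0.0, p_honest[i]))
    # master relaxes auditing once the crowd looks trustworthy
    frac_honest = sum(honest) / N_WORKERS
    p_audit += LEARN * ((1.0 - frac_honest) - p_audit)
    p_audit = min(1.0, max(0.01, p_audit))

print("final honesty probabilities:", [round(p, 2) for p in p_honest])
print("final audit probability:", round(p_audit, 2))
```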
[Experience in the use of equipment for ECG system analysis in municipal polyclinics].
Bondarenko, A A
2006-01-01
Two electrocardiographs, an analog-digital electrocardiograph with preliminary analog filtering of signal and a smart cardiograph implemented as a PC-compatible device without preliminary analog filtering, are considered. Advantages and disadvantages of ECG systems based on artificial intelligence are discussed. ECG interpretation modes provided by the two electrocardiographs are considered. The reliability of automatic ECG interpretation is assessed. Problems of rational use of automated ECG processing systems are discussed.
NASA Technical Reports Server (NTRS)
Hou, Jean W.; Sheen, Jeen S.
1987-01-01
The aim of this study is to find a reliable numerical algorithm for calculating the thermal design sensitivities of a transient problem with discontinuous derivatives. The thermal system of interest is a transient heat conduction problem related to the curing process of a composite laminate. A logical function that can smoothly approximate the discontinuity is introduced to modify the system equation. Two commonly used methods, the adjoint variable method and the direct differentiation method, are then applied to find the design derivatives of the modified system. Comparisons of the numerical results obtained by these two methods demonstrate that the direct differentiation method is the better choice for calculating thermal design sensitivities.
Mousavian, Alireza; Ebrahimzadeh, Mohammad H; Birjandinejad, Ali; Omidi-Kashani, Farzad; Kachooei, Amir Reza
2015-12-01
In this study, we aimed to translate and test the validity and reliability of the Persian version of the Manchester-Oxford Foot Questionnaire (MOXFQ) in foot and ankle patients. We translated the Manchester-Oxford Foot Questionnaire into Persian according to the accepted guidelines, then assessed the psychometric properties, including validity and reliability, on 308 patients with long-standing foot and ankle problems. To test the reliability, we calculated the intra-class correlation coefficient (ICC) for test-retest reliability and measured Cronbach's alpha to test the internal consistency. To test the construct validity of the Manchester-Oxford Foot Questionnaire, we also administered the Short-Form 36 (SF36) to patients. Construct validity was supported by significant correlations with the SF36 subscales, except between the pain subscale of the Persian MOXFQ and the mental health subscale of the SF36 (r=0.207). The intraclass correlation coefficient was 0.79 for the total MOXFQ and ranged from 0.83 to 0.89 for the three subscales. Cronbach's alpha for pain, walking/standing, and social interaction was 0.86, 0.88, and 0.89, respectively, and was 0.79 for the total MOXFQ, showing good internal consistency in each domain. The Persian Manchester-Oxford Foot Questionnaire health scoring system is a valid and reliable patient-reported instrument for foot and ankle problems. Copyright © 2015. Published by Elsevier Ltd.
The Aftercare and School Observation System (ASOS): Reliability and Component Structure.
Ingoldsby, Erin M; Shelleby, Elizabeth C; Lane, Tonya; Shaw, Daniel S; Dishion, Thomas J; Wilson, Melvin N
2013-10-01
This study examines the psychometric properties and component structure of a newly developed observational system, the Aftercare and School Observation System (ASOS). Participants included 468 children drawn from a larger longitudinal intervention study. The system was used to assess participating children in school lunchrooms and recess and in various afterschool environments. Exploratory factor analyses examined whether a core set of component constructs would emerge, assessing qualities of children's relationships, caregiver involvement and monitoring, and experiences in school and aftercare contexts that have been linked to children's behavior problems. Construct validity was assessed by examining associations between ASOS constructs and questionnaire measures of children's behavior problems and relationship qualities in school and aftercare settings. Across both settings, two factors showed very similar empirical structures and item loadings, reflecting the constructs of a negative/aggressive context and caregiver positive involvement. One additional unique factor from the school setting reflected the extent to which the caregiver methods used resulted in less negative behavior, and two additional unique factors from the aftercare setting reflected positivity and negativity in the child's interactions and general environment. Modest correlations between ASOS factors and aftercare provider and teacher ratings of behavior problems, adult-child relationships, and a rating of school climate support our interpretation that the ASOS scores capture meaningful features of children's experiences in these settings. This study represents the first step in establishing that the ASOS reliably and validly captures risk and protective relationships and experiences in extra-familial settings.
Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment
NASA Technical Reports Server (NTRS)
Davis, M. R.; Kamins, M.; Mooz, W. E.
1978-01-01
A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980s. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.
Design of power cable grounding wire anti-theft monitoring system
NASA Astrophysics Data System (ADS)
An, Xisheng; Lu, Peng; Wei, Niansheng; Hong, Gang
2018-01-01
In order to prevent the serious consequences of power grid failure caused by theft of power cable grounding wires, this paper presents a GPRS-based anti-theft monitoring system for power cable grounding wires, which includes a camera module, a sensor module, a microprocessing system module, a data monitoring center module, and a mobile terminal module. Our design utilizes two complementary methods of detection and comprehensive image reporting. It can effectively solve the problem of grounding wire and grounding box theft, follow up cable theft events in a timely manner, prevent high-voltage transmission line faults, and improve the reliability and safe operation of the power grid.
Automatic visual monitoring of welding procedure in stainless steel kegs
NASA Astrophysics Data System (ADS)
Leo, Marco; Del Coco, Marco; Carcagnì, Pierluigi; Spagnolo, Paolo; Mazzeo, Pier Luigi; Distante, Cosimo; Zecca, Raffaele
2018-05-01
In this paper, a system for automatic visual monitoring of the welding process in dry stainless steel kegs for food storage is proposed. In the considered manufacturing process, the upper and lower skirts are welded to the vessel by means of Tungsten Inert Gas (TIG) welding. During the process several problems can arise: 1) residuals on the bottom, 2) a darker weld, 3) excessive/poor penetration, and 4) outgrowths. The proposed system deals with all four of the aforementioned problems, and its inspection performance has been evaluated on a large set of kegs, demonstrating both reliability in terms of defect detection and suitability for introduction into the manufacturing system in terms of computational cost.
Considering context: reliable entity networks through contextual relationship extraction
NASA Astrophysics Data System (ADS)
David, Peter; Hawes, Timothy; Hansen, Nichole; Nolan, James J.
2016-05-01
Existing information extraction techniques can only partially address the problem of exploiting unreadably large amounts of text. When discussion of events and relationships is limited to simple, past-tense, factual descriptions of events, current NLP-based systems can identify events and relationships and extract a limited amount of additional information. But the simple subset of available information that existing tools can extract from text is only useful to a small set of users and problems. Automated systems need to find and separate information based on what is threatened or planned to occur, has occurred in the past, or could potentially occur. We address the problem of advanced event and relationship extraction with our event and relationship attribute recognition system, which labels generic, planned, recurring, and potential events. The approach is based on a combination of new machine learning methods, novel linguistic features, and crowd-sourced labeling. The attribute labeler closes the gap between structured event and relationship models and the complicated and nuanced language that people use to describe them. Our operational-quality event and relationship attribute labeler enables Warfighters and analysts to more thoroughly exploit information in unstructured text. This is made possible through 1) more precise event and relationship interpretation, 2) more detailed information about extracted events and relationships, and 3) more reliable and informative entity networks that acknowledge the different attributes of entity-entity relationships.
NASA Astrophysics Data System (ADS)
Roussel, Marc R.
1999-10-01
One of the traditional obstacles to learning quantum mechanics is the relatively high level of mathematical proficiency required to solve even routine problems. Modern computer algebra systems are now sufficiently reliable that they can be used as mathematical assistants to alleviate this difficulty. In the quantum mechanics course at the University of Lethbridge, the traditional three lecture hours per week have been replaced by two lecture hours and a one-hour computer-aided problem solving session using a computer algebra system (Maple). While this somewhat reduces the number of topics that can be tackled during the term, students have a better opportunity to familiarize themselves with the underlying theory with this course design. Maple is also available to students during examinations. The use of a computer algebra system expands the class of feasible problems during a time-limited exercise such as a midterm or final examination. A modern computer algebra system is a complex piece of software, so some time needs to be devoted to teaching the students its proper use. However, the advantages to the teaching of quantum mechanics appear to outweigh the disadvantages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musyurka, A. V., E-mail: musyurkaav@burges.rushydro.ru
This article presents the design, hardware, and software solutions developed and placed in service for the automated system of diagnostic monitoring (ASDM) of hydraulic engineering installations at the Bureya HPP, assuring a reliable process for monitoring hydraulic engineering installations. Implementation of the project provides a timely solution to the problems addressed by the hydraulic engineering installation diagnostics section.
Clean access platform for orbiter
NASA Technical Reports Server (NTRS)
Morrison, H.; Harris, J.
1990-01-01
The design of the Clean Access Platform at the Kennedy Space Center, beginning with the design requirements and tracing the effort throughout development and manufacturing is described. Also examined are: (1) A system description; (2) Testing requirements and conclusions; (3) Safety and reliability features; (4) Major problems experienced during the project; and (5) Lessons learned, including features necessary for the effective design of mechanisms used in clean systems.
Czech results at criticality dosimetry intercomparison 2002.
Frantisek, Spurný; Jaroslav, Trousil
2004-01-01
Two criticality dosimetry systems were tested by Czech participants during the intercomparison held in Valduc, France, in June 2002. The first consisted of thermoluminescent detectors (TLDs) (Al-P glasses) and Si-diodes as passive neutron dosemeters. Second, the extent to which the individual dosemeters used in the Czech routine personal dosimetry service can give a reliable estimate of criticality accident exposure was studied. It was found that the first system furnishes quite reliable estimates of accidental doses. For the routine individual dosimetry system, no important problems were encountered in the case of photon dosemeters (TLDs, film badge). For etched track detectors in contact with the 232Th or 235U-Al alloy, the track density saturation for the spark counting method limits the upper dose to approximately 1 Gy for neutrons with energy >1 MeV.
ExSTraCS 2.0: Description and Evaluation of a Scalable Learning Classifier System.
Urbanowicz, Ryan J; Moore, Jason H
2015-09-01
Algorithmic scalability is a major concern for any machine learning strategy in this age of 'big data'. A large number of potentially predictive attributes is emblematic of problems in bioinformatics, genetic epidemiology, and many other fields. Previously, ExSTraCS was introduced as an extended Michigan-style supervised learning classifier system that combined a set of powerful heuristics to successfully tackle the challenges of classification, prediction, and knowledge discovery in complex, noisy, and heterogeneous problem domains. While Michigan-style learning classifier systems are powerful and flexible learners, they are not considered to be particularly scalable. For the first time, this paper presents a complete description of the ExSTraCS algorithm and introduces an effective strategy to dramatically improve learning classifier system scalability. ExSTraCS 2.0 addresses scalability with (1) a rule specificity limit, (2) new approaches to expert-knowledge-guided covering and mutation mechanisms, and (3) the implementation and utilization of the TuRF algorithm for improving the quality of expert knowledge discovery in larger datasets. Performance over a complex spectrum of simulated genetic datasets demonstrated that these new mechanisms dramatically improve nearly every performance metric on datasets with 20 attributes and made it possible for ExSTraCS to reliably scale up to perform on related 200- and 2000-attribute datasets. ExSTraCS 2.0 was also able to reliably solve the 6, 11, 20, 37, 70, and 135 multiplexer problems, and did so in similar or fewer learning iterations than previously reported, with smaller finite training sets, and without using building blocks discovered from simpler multiplexer problems. Furthermore, ExSTraCS usability was made simpler through the elimination of previously critical run parameters.
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach, together with the proposed accident development framework, is applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
Reliability and Validity of Prototype Diagnosis for Adolescent Psychopathology.
Haggerty, Greg; Zodan, Jennifer; Mehra, Ashwin; Zubair, Ayyan; Ghosh, Krishnendu; Siefert, Caleb J; Sinclair, Samuel J; DeFife, Jared
2016-04-01
The current study investigated the interrater reliability and validity of prototype ratings of 5 common adolescent psychiatric disorders: attention-deficit/hyperactivity disorder, conduct disorder, major depressive disorder, generalized anxiety disorder, and posttraumatic stress disorder. One hundred fifty-seven adolescent inpatient participants consented to participate in this study. To assess interrater reliability, we compared ratings from 2 inpatient clinicians, blinded to each other's ratings and patient measures, after their separate initial diagnostic interviews. Prototype ratings completed by clinicians after their initial diagnostic interview with adolescent inpatients and outpatients were compared with patient-reported behavior problems and parents' reports of their child's behavioral problems. Prototype ratings demonstrated good interrater reliability. Clinicians' prototype ratings showed the predicted relationships with patient-reported and parent-reported behavior problems. Prototype matching seems to be a possible alternative for psychiatric diagnosis. Prototype ratings showed good interrater reliability based on clinicians' unique experiences with the patient (as opposed to video-/audio-recorded material) with no training.
Three-dimensional implicit lambda methods
NASA Technical Reports Server (NTRS)
Napolitano, M.; Dadone, A.
1983-01-01
This paper derives the three-dimensional lambda-formulation equations for a general orthogonal curvilinear coordinate system and provides various block-explicit and block-implicit methods for solving them numerically. Three model problems, characterized by subsonic, supersonic, and transonic flow conditions, are used to assess the reliability and compare the efficiency of the proposed methods.
Highest integration in microelectronics: Development of digital ASICs for PARS3-LR
NASA Astrophysics Data System (ADS)
Scholler, Peter; Vonlutz, Rainer
Essential electronic system components of PARS3-LR impose high requirements on computing power, power consumption, and reliability, with steadily increasing integration densities. These problems are solved by using integrated circuits, developed by LSI LOGIC, that exploit the technical and economic advantages of this leading-edge technology.
Development of Critical Spatial Thinking through GIS Learning
ERIC Educational Resources Information Center
Kim, Minsung; Bednarz, Robert
2013-01-01
This study developed an interview-based critical spatial thinking oral test and used the test to investigate the effects of Geographic Information System (GIS) learning on three components of critical spatial thinking: evaluating data reliability, exercising spatial reasoning, and assessing problem-solving validity. Thirty-two students at a large…
Human reliability assessment: tools for law enforcement
NASA Astrophysics Data System (ADS)
Ryan, Thomas G.; Overlin, Trudy K.
1997-01-01
This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process for identifying, describing, quantifying, and interpreting the state of human performance, and for developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear trivial in other venues can make the difference between a successful and an unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and adjudication in criminal proceedings. This paper concludes that because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and significant payoff.
Software life cycle methodologies and environments
NASA Technical Reports Server (NTRS)
Fridge, Ernest
1991-01-01
Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for environments such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, an intelligent user interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development workflow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems. It also brings in methodologies such as a method for developing fault-tolerant, distributed systems and a method for developing systems for common-sense reasoning and for solving expert system problems when only approximate truths are known.
Making intelligent systems team players: Overview for designers
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schreckenghost, Debra L.
1992-01-01
This report is a guide and companion to the NASA Technical Memorandum 104738, 'Making Intelligent Systems Team Players,' Volumes 1 and 2. The first two volumes of this Technical Memorandum provide comprehensive guidance to designers of intelligent systems for real-time fault management of space systems, with the objective of achieving more effective human interaction. This report provides an analysis of the material discussed in the Technical Memorandum. It clarifies what it means for an intelligent system to be a team player, and how such systems are designed. It identifies significant intelligent system design problems and their impacts on reliability and usability. Where common design practice is not effective in solving these problems, we make recommendations for these situations. In this report, we summarize the main points in the Technical Memorandum and identify where to look for further information.
Performance issues for iterative solvers in device simulation
NASA Technical Reports Server (NTRS)
Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai
1994-01-01
Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.
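As one concrete illustration of the preconditioning strategy discussed above (not the paper's own code), the following Python sketch solves a nonsymmetric sparse system with GMRES accelerated by an incomplete-LU preconditioner via SciPy; the test matrix is an invented stand-in for a device-simulation Jacobian.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Invented stand-in for an ill-conditioned device-simulation system:
# a nonsymmetric convection-diffusion-like stencil on a 1-D grid.
n = 500
A = sp.diags([-1.4, 2.0, -0.6], offsets=[-1, 0, 1], shape=(n, n),
             format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

iters = {"n": 0}
def count(_):
    iters["n"] += 1

x, info = spla.gmres(A, b, M=M, callback=count)
print("info:", info, "iterations:", iters["n"],
      "residual norm:", np.linalg.norm(b - A @ x))
```

Comparing the iteration count with and without the `M` argument is a quick way to see the effect the abstract alludes to: on ill-conditioned systems, unpreconditioned GMRES may stagnate entirely.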
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
Automatic specification of reliability models for fault-tolerant computers
NASA Technical Reports Server (NTRS)
Liceaga, Carlos A.; Siewiorek, Daniel P.
1993-01-01
The calculation of reliability measures using Markov models is required for life-critical processor-memory-switch structures that have standby redundancy or that are subject to transient or intermittent faults or repair. The task of specifying these models is tedious and prone to human error because of the large number of states and transitions required in any reasonable system. Therefore, model specification is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model specification. Automation requires a general system description language (SDL). For practicality, this SDL should also provide a high level of abstraction and be easy to learn and use. The first attempt to define and implement an SDL with those characteristics is presented. A program named Automated Reliability Modeling (ARM) was constructed as a research vehicle. The ARM program uses a graphical interface as its SDL, and it outputs a Markov reliability model specification formulated for direct use by programs that generate and evaluate the model.
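For readers unfamiliar with the kind of model ARM generates, here is a minimal hand-built Markov reliability model in Python for an active unit with a standby spare and imperfect switchover coverage; the failure rate and coverage value are illustrative assumptions, and ARM itself is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

# State 0: both units good; state 1: one good; state 2: system failed
# (absorbing).  lam is the per-hour failure rate, c the probability
# that the standby spare successfully takes over (assumed values).
lam, c = 1e-3, 0.95

Q = np.array([
    [-lam,  c * lam,  (1 - c) * lam],   # switchover succeeds w.p. c
    [0.0,  -lam,      lam          ],
    [0.0,   0.0,      0.0          ],   # absorbing failure state
])

p0 = np.array([1.0, 0.0, 0.0])
for t in (10.0, 100.0, 1000.0):
    p = p0 @ expm(Q * t)                # transient state probabilities
    print(f"R({t:g} h) = {1.0 - p[2]:.6f}")
```

Even this three-state example shows why manual specification does not scale: each added component or repair path multiplies the states and transitions, which is precisely the bottleneck ARM's graphical SDL is meant to remove.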
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ying; Zhou, Zhi; Botterud, Audun
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the models into a mixed-integer linear program (MILP) without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.
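To make the flavor of such a scheduling formulation concrete, here is a heavily simplified two-settlement MILP sketch in Python with PuLP; the scenario data, prices, and single-generator structure are invented and far smaller than the paper's six-bus and Illinois test systems.

```python
import pulp

# Commit one generator (binary u), choose dispatch g and reserve r
# day-ahead, then cover each assumed wind scenario in real time with
# deployed reserve or load shedding.  All numbers are invented.
wind = {"low": 20.0, "mid": 50.0, "high": 80.0}        # scenario wind, MW
prob = {"low": 0.25, "mid": 0.50, "high": 0.25}
load, c_fix, c_gen, c_res, voll = 100.0, 500.0, 30.0, 5.0, 1000.0

m = pulp.LpProblem("wind_uncertainty_sketch", pulp.LpMinimize)
u = pulp.LpVariable("commit", cat="Binary")
g = pulp.LpVariable("dispatch", 0, 100)
r = pulp.LpVariable("reserve", 0, 50)
shed = {s: pulp.LpVariable(f"shed_{s}", 0) for s in wind}

m += g + r <= 100 * u                   # capacity only if committed
for s, w in wind.items():
    m += g + w + r + shed[s] >= load    # real-time power balance

m += (c_fix * u + c_gen * g + c_res * r
      + pulp.lpSum(prob[s] * voll * shed[s] for s in wind))

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("commit:", u.value(), "dispatch:", g.value(), "reserve:", r.value())
print("expected cost:", pulp.value(m.objective))
```

Raising the value of lost load (voll) pushes the solution toward hedging against the low-wind scenario, which is the economics-versus-reliability trade-off the abstract describes.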
Seeking high reliability in primary care: Leadership, tools, and organization.
Weaver, Robert R
2015-01-01
Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls, which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an organization. Progress toward a reliability-seeking, system-oriented approach to care remains ongoing, and movement in that direction requires deliberate and sustained effort by committed leaders in health care.
Faux-Pas Test: A Proposal of a Standardized Short Version.
Fernández-Modamio, Mar; Arrieta-Rodríguez, Marta; Bengochea-Seco, Rosario; Santacoloma-Cabero, Iciar; Gómez de Tojeiro-Roce, Juan; García-Polavieja, Bárbara; González-Fraile, Eduardo; Martín-Carrasco, Manuel; Griffin, Kim; Gil-Sanz, David
2018-06-26
Previous research on theory of mind suggests that people with schizophrenia have difficulties with complex mentalization tasks that involve the integration of cognition and affective mental states. One of the tools most commonly used to assess theory of mind is the Faux-Pas Test. However, it presents two main methodological problems: 1) the lack of a standard scoring system; 2) the different versions are not comparable due to a lack of information on the stories used. These methodological problems make it difficult to draw conclusions about performance on this test by people with schizophrenia. The aim of this study was to develop a reduced version of the Faux-Pas test with adequate psychometric properties. The test was administered to control and clinical groups. Interrater and test-retest reliability were analyzed for each story in order to select the set of 10 stories included in the final reduced version. The shortened version showed good psychometric properties for controls and patients: test-retest reliability of 0.97 and 0.78, inter-rater reliability of 0.95 and 0.87 and Cronbach's alpha of 0.82 and 0.72.
Mathematical Modelling-Based Energy System Operation Strategy Considering Energy Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryu, Jun-Hyung; Hodge, Bri-Mathias
2016-06-25
Renewable energy resources are widely recognized as an alternative to environmentally harmful fossil fuels. More renewable energy technologies will need to penetrate fossil-fuel-dominated energy systems to mitigate globally witnessed climate change and environmental pollution. It is necessary to prepare for the potential problems of increased proportions of renewable energy in the energy system, to prevent higher costs and decreased reliability. Motivated by this need, this paper addresses the operation of an energy system with an energy storage system in the context of developing a decision-supporting framework.
The Problem of Ensuring Reliability of Gas Turbine Engines
NASA Astrophysics Data System (ADS)
Nozhnitsky, Yu A.
2018-01-01
Requirements for advanced civil aviation engines are discussed. Some significant problems of ensuring the reliability of advanced gas turbine engines are mentioned. Special attention is paid to the successful utilization of new materials and critical technologies. The problem of preventing the failure of engine parts due to low-cycle or high-cycle fatigue is also discussed.
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
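The estimation step described above can be illustrated with a few lines of Python: count observed transitions between error states and attach normal-approximation confidence intervals; the traces and the 95% level below are placeholders, not the experiment's data.

```python
import math
from collections import Counter, defaultdict

# Fabricated placeholder traces of error states observed during
# simulated process-control runs of the multi-version software.
traces = [
    ["ok", "ok", "err1", "ok", "ok"],
    ["ok", "err1", "err1", "ok", "ok"],
    ["ok", "ok", "ok", "err1", "ok"],
]

# Count transitions between consecutive states in each trace.
counts = defaultdict(Counter)
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        counts[a][b] += 1

z = 1.96  # 95% confidence (normal approximation)
for a, outs in counts.items():
    n = sum(outs.values())
    for b, k in sorted(outs.items()):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)
        print(f"P({a} -> {b}) = {p:.2f} +/- {half:.2f}  (n={n})")
```

In the full method these interval estimates, rather than the point estimates alone, are what propagate into the reliability model, which is how the final reliability figure carries a confidence level.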
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
Testing and checkout experiences in the National Transonic Facility since becoming operational
NASA Technical Reports Server (NTRS)
Bruce, W. E., Jr.; Gloss, B. B.; Mckinney, L. W.
1988-01-01
The U.S. National Transonic Facility, constructed by NASA to meet the national needs for High Reynolds Number Testing, has been operational in a checkout and test mode since the operational readiness review (ORR) in late 1984. During this time, there have been problems centered around the effect of large temperature excursions on the mechanical movement of large components, the reliable performance of instrumentation systems, and an unexpected moisture problem with dry insulation. The more significant efforts since the ORR are reviewed and NTF status concerning hardware, instrumentation and process controls systems, operating constraints imposed by the cryogenic environment, and data quality and process controls is summarized.
NASA Astrophysics Data System (ADS)
Poddaeva, O.; Churin, P.; Fedosova, A.; Truhanov, S.
2018-03-01
Studies of the aerodynamics of bridge structures are a topical problem. The attention paid to wind effects on bridge structures is no accident: many cases are known in which such structures lost stability under wind action, up to and including their complete destruction. The development of non-contact measurement systems allows this problem to be solved with a high level of accuracy and reliability. This article presents the results of experimental studies of wind effects on a two-span bridge using a specialized measuring system based on high-precision laser displacement sensors.
NASA Formal Methods Workshop, 1990
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Compiler)
1990-01-01
The workshop brought together researchers involved in the NASA formal methods research effort for detailed technical interchange and provided a mechanism for interaction with representatives from the FAA and the aerospace industry. The workshop also included speakers from industry to debrief the formal methods researchers on the current state of practice in flight critical system design, verification, and certification. The goals were: define and characterize the verification problem for ultra-reliable life critical flight control systems and the current state of practice in industry today; determine the proper role of formal methods in addressing these problems, and assess the state of the art and recent progress toward applying formal methods to this area.
NASA Astrophysics Data System (ADS)
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2010-02-01
Near-infrared spectroscopy is a promising, rapidly developing, reliable and noninvasive technique, used extensively in biomedicine and in the pharmaceutical industry. With the introduction of acousto-optic tunable filters (AOTF) and highly sensitive InGaAs focal plane sensor arrays, real-time high-resolution hyper-spectral imaging has become feasible for a number of new biomedical in vivo applications. However, due to the specificity of the AOTF technology and the lack of spectral calibration standardization, maintaining long-term stability and compatibility of the acquired hyper-spectral images across different systems is still a challenging problem. Efficiently solving both is essential, as the majority of methods for the analysis of hyper-spectral images rely on a priori knowledge extracted from large spectral databases, serving as the basis for reliable qualitative or quantitative analysis of various biological samples. In this study, we propose and evaluate fast and reliable spectral calibration of hyper-spectral imaging systems in the short-wavelength infrared spectral region. The proposed spectral calibration method is based on light sources or materials exhibiting distinct spectral features, which enable robust non-rigid registration of the acquired spectra. The calibration accounts for all of the components of a typical hyper-spectral imaging system, such as the AOTF, light source, lens and optical fibers. The obtained results indicate that practical, fast and reliable spectral calibration of hyper-spectral imaging systems is possible, thereby assuring long-term stability and inter-system compatibility of the acquired hyper-spectral images.
Leong, Kah Huo; Abdul-Rahman, Hamzah; Wang, Chen; Onn, Chiu Chuen; Loo, Siaw-Chuing
2016-01-01
Railway and metro transport systems (RS) are becoming one of the popular choices of transportation among people, especially those who live in urban cities. Urbanization and increasing population due to rapid development of economy in many cities are leading to a bigger demand for urban rail transit. Despite being a popular variant of Traveling Salesman Problem (TSP), it appears that the universal formula or techniques to solve the problem are yet to be found. This paper aims to develop an optimization algorithm for optimum route selection to multiple destinations in RS before returning to the starting point. Bee foraging behaviour is examined to generate a reliable algorithm in railway TSP. The algorithm is then verified by comparing the results with the exact solutions in 10 test cases, and a numerical case study is designed to demonstrate the application with large size sample. It is tested to be efficient and effective in railway route planning as the tour can be completed within a certain period of time by using minimal resources. The findings further support the reliability of the algorithm and capability to solve the problems with different complexity. This algorithm can be used as a method to assist business practitioners making better decision in route planning.
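As a rough illustration of a bee-foraging heuristic for the TSP (a generic bees-algorithm sketch, not the authors' exact algorithm), the following Python code lets scout tours explore at random while recruited bees refine the best sites with 2-opt moves; all parameter values are illustrative assumptions.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_neighbor(tour):
    # Reverse a random segment of the tour (a standard 2-opt move).
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def bees_tsp(pts, scouts=20, elite=5, recruits=10, iters=200):
    sites = [random.sample(range(len(pts)), len(pts)) for _ in range(scouts)]
    for _ in range(iters):
        sites.sort(key=lambda t: tour_length(t, pts))
        new_sites = []
        for site in sites[:elite]:          # recruit bees to best sites
            best = site
            for _ in range(recruits):
                cand = two_opt_neighbor(site)
                if tour_length(cand, pts) < tour_length(best, pts):
                    best = cand
            new_sites.append(best)
        # remaining scouts keep exploring at random
        new_sites += [random.sample(range(len(pts)), len(pts))
                      for _ in range(scouts - elite)]
        sites = new_sites
    return min(sites, key=lambda t: tour_length(t, pts))

random.seed(1)
stations = [(random.random(), random.random()) for _ in range(12)]
best = bees_tsp(stations)
print("best tour:", best, "length:", round(tour_length(best, stations), 3))
```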
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is initially validated with the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
Reconfigurable Antenna and Cognitive Radio for Space Applications
NASA Technical Reports Server (NTRS)
Hwu, Shian U.
2012-01-01
This presentation briefly discusses a research effort on techniques for mitigating radio frequency interference (RFI) in communication systems for possible space applications. This problem is of considerable interest in the context of providing reliable communications to a space vehicle, which might suffer severe performance degradation due to RFI sources such as visiting spacecraft and various ground radar systems. This study proposes a communication system with a Reconfigurable Antenna (RA) and Cognitive Radio (CR) to mitigate the RFI impact. A cognitive radio is an intelligent radio that is able to learn from the environment and adapt to variations in its surroundings by adjusting the transmit power, carrier frequency, modulation strategy or transmission data rate. Therefore, the main objective of a cognitive radio system is to ensure highly reliable communication whenever and wherever needed. To match the intelligent adaptability of the cognitive radio, a reconfigurable antenna system is required to ensure the system performance. The technical challenges in designing such a system are discussed in this presentation.
Research and design of smart grid monitoring control via terminal based on iOS system
NASA Astrophysics Data System (ADS)
Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji
2017-06-01
Aiming at a series of problems in current smart grid monitoring and control terminals, such as high costs, poor portability, simplistic monitoring, poor software extensibility, low reliability of information transmission, a limited man-machine interface, and poor security, a smart grid remote monitoring system based on iOS has been designed. The system interacts with the smart grid server so that it can acquire grid data through WiFi/3G/4G networks and monitor the running status of each grid line as well as the operating conditions of power plant equipment. When an exception occurs in the power plant, incident information can be sent to the user's iOS terminal in a timely manner, providing troubleshooting information to help grid staff make the right decisions promptly and avoid further accidents. Field tests have shown that the system realizes integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it possesses practical applicable value.
NASA Astrophysics Data System (ADS)
Bobkov, S. G.; Serdin, O. V.; Arkhangelskiy, A. I.; Arkhangelskaja, I. V.; Suchkov, S. I.; Topchiev, N. P.
The problem of unifying electronic components at different levels (circuits, interfaces, hardware and software) used in the space industry is considered. The task of developing computer systems for space applications is discussed using the example of the scientific data acquisition system for the GAMMA-400 space project. The basic characteristics of the highly reliable and fault-tolerant chips developed by SRISA RAS for space computational systems are given. To reduce power consumption and enhance data reliability, the embedded system interconnect is made hierarchical: the upper level is Serial RapidIO 1x or 4x with a transfer rate of 1.25 Gbaud; the next level is SpaceWire with transfer rates up to 400 Mbaud; and the lower level comprises MIL-STD-1553B and RS232/RS485. Ethernet 10/100 serves as the technology interface and also provides connections with previously released modules. The system interconnection allows different redundancy schemes to be created, so designers can develop heterogeneous systems that employ the peer-to-peer networking performance of Serial RapidIO using multiprocessor clusters interconnected by SpaceWire.
NASA Technical Reports Server (NTRS)
Srivastava, Ashok, N.; Akella, Ram; Diev, Vesselin; Kumaresan, Sakthi Preethi; McIntosh, Dawn M.; Pontikakis, Emmanuel D.; Xu, Zuobing; Zhang, Yi
2006-01-01
This paper describes the results of a significant research and development effort conducted at NASA Ames Research Center to develop new text mining techniques to discover anomalies in free-text reports regarding the system health and safety of two aerospace systems. We discuss two problems of significant importance in the aviation industry. The first problem is that of automatic anomaly discovery about an aerospace system through the analysis of tens of thousands of free-text problem reports that are written about the system. The second problem that we address is that of automatic discovery of recurring anomalies, i.e., anomalies that may be described in different ways by different authors, at varying times and under varying conditions, but that are truly about the same part of the system. The intent of recurring anomaly identification is to determine project or system weaknesses or high-risk issues. The discovery of recurring anomalies is a key goal in building safe, reliable, and cost-effective aerospace systems. We address the anomaly discovery problem on thousands of free-text reports using two strategies: (1) as an unsupervised learning problem, where an algorithm takes free-text reports as input and automatically groups them into different bins, each bin corresponding to a different unknown anomaly category; and (2) as a supervised learning problem, where the algorithm classifies the free-text reports into one of a number of known anomaly categories. We then discuss the application of these methods to the problem of discovering recurring anomalies. In fact, the special nature of recurring anomalies (very small cluster sizes) requires incorporating new methods and measures to enhance the original approach for anomaly detection.
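Strategy (1) above can be sketched in a few lines with scikit-learn: vectorize the free-text reports with TF-IDF and group them with k-means; the toy reports and the choice of k are placeholders, not NASA's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented placeholder reports standing in for real problem reports.
reports = [
    "hydraulic pressure dropped during ascent valve anomaly",
    "valve stuck causing hydraulic pressure loss",
    "software watchdog timer reset flight computer",
    "unexpected reboot of flight computer after watchdog timeout",
    "fuel sensor reading intermittent during taxi",
    "intermittent fuel quantity indication on ground",
]

# TF-IDF features, then unsupervised grouping into candidate anomaly bins.
X = TfidfVectorizer(stop_words="english").fit_transform(reports)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label in range(3):
    print(f"bin {label}:")
    for text, lab in zip(reports, km.labels_):
        if lab == label:
            print("  -", text)
```

As the abstract notes, recurring anomalies form very small clusters, so a plain k-means pass like this would need additional measures before it could surface them reliably.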
Multibiometric Systems: Fusion Strategies and Template Security
2008-01-01
Biometric authentication, or simply biometrics, offers a natural and reliable solution to the problem of identity determination by establishing the identity...applications [99]. Therefore, there is no universally best biometric trait and the choice of biometric depends on the nature and requirements of the...result in a significant reduction in the GAR of a biometric system [72,204]. • Non-universality: If every individual in the target population is able
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribution has been shown to fit the life data for the electronic equipment on...these systems. This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
Experiments Toward the Application of Multi-Robot Systems to Disaster-Relief Scenarios
2015-09-01
responsibility is assessment, such as dislocated populations, degree of property damage, and remaining communications infrastructure. These are all...specific problems: evaluating damage to infrastructure in the environment, e.g., traversability of roads; and localizing particular targets of interest...regarding hardware and software infrastructure are driven by the need for these systems to "survive the field" and allow for reliable evaluation of autonomy
Zhang, Guangming; Chen, Guoqiang; Meng, Dawei; Liu, Yanwu; Chen, Jianwei; Shu, Lanmei; Liu, Wenbo
2017-06-01
This study aimed to introduce a new stereoelectroencephalography (SEEG) system based on the Leksell stereotactic frame (L-SEEG) together with the Neurotech operation planning software, and to investigate its safety, applicability, and reliability. L-SEEG, which works without the help of navigation, includes the SEEG operation planning software (Neurotech), the Leksell stereotactic frame, and corresponding surgical instruments. The Neurotech operation planning software can be used to display three-dimensional images of the cortex and cortical vessels and to plan the intracranial electrode implantation. In 44 refractory epilepsy patients, 364 intracranial electrodes were implanted through the L-SEEG system, and postoperative complications such as bleeding, cerebrospinal fluid (CSF) leakage, infection, and electrode-related problems were also investigated. All electrodes were implanted accurately as preoperatively planned, as shown by postoperative lamina computed tomography and preoperative lamina magnetic resonance imaging. There were no severe complications after intracranial electrode implantation through the L-SEEG system. There were no electrode-related problems, no CSF leakage and no infection after surgery. All the patients recovered favorably after SEEG electrode implantation, and only 1 patient had an asymptomatic frontal lateral ventricle hematoma (3 mL). The L-SEEG system with the Neurotech operation planning software can be used for safe, accurate, and reliable intracranial electrode implantation for SEEG.
How precise is the PRECICE compared to the ISKD in intramedullary limb lengthening?
Vogt, Björn; Tretow, Henning L; Schuhknecht, Britta; Gosheger, Georg; Horter, Melanie J; Rödl, Robert
2014-01-01
Background and purpose: The PRECICE intramedullary limb lengthening system uses a new technique with a magnetic rod and a motorized external remote controller (ERC) with a rotational magnetic field. We evaluated the reliability and safety of the PRECICE system. Methods: We compared our preliminary results with PRECICE in 24 patients (26 nails) with the known difficulties in the use of mechanical lengthening devices such as the ISKD. We used the Paley classification for the evaluation of problems, obstacles, and complications. Results: 2 nails were primarily without function, and 24 of 26 nails lengthened over the desired distance. The lengthening desired was 38 mm and the lengthening obtained was 37 mm. There were 2 nail breakages, 1 in the welding seam and 1 because of a fall that occurred during consolidation. ERC usage was problematic mostly in patients with femoral lengthening. Adjustment of the ERC was necessary in 10 of 24 cases. 15 cases had implant-associated problems, obstacles were seen in 5 cases, and complications were seen in each of 4 cases. Interpretation: The reliability of the PRECICE system is comparable to that of other intramedullary lengthening devices such as the ISKD. The motorized external remote controller and its application by the patients is a weak point of the system and needs strict supervision.
AMFESYS: Modelling and diagnosis functions for operations support
NASA Technical Reports Server (NTRS)
Wheadon, J.
1993-01-01
Packetized telemetry, combined with low station coverage for close-earth satellites, may introduce new problems in presenting to the operator a clear picture of what the spacecraft is doing. A recent ESOC study has gone some way to show, by means of a practical demonstration, how the use of subsystem models combined with artificial intelligence techniques, within a real-time spacecraft control system (SCS), can help to overcome these problems. A spin-off from using these techniques can be an improvement in the reliability of the telemetry (TM) limit-checking function, as well as the telecommand verification function, of the SCS. The problem and how it was addressed are described, including an overview of the 'AMF Expert System' prototype, and further work needed to prove the concept is proposed. The Automatic Mirror Furnace is part of the payload of the European Retrievable Carrier (EURECA) spacecraft, which was launched in July 1992.
Design and control strategy for a hybrid green energy system for mobile telecommunication sites
NASA Astrophysics Data System (ADS)
Okundamiya, Michael S.; Emagbetere, Joy O.; Ogujor, Emmanuel A.
2014-07-01
The rising energy costs and carbon footprint of operating mobile telecommunication sites in the emerging world have increased research interest in green technology. The intermittent nature of most green energy sources creates the problem of designing the optimum configuration for a given location. This study presents the design analysis and control strategy for cost-effective and reliable operation of a hybrid green energy system (HGES) for GSM base transceiver station (BTS) sites in isolated regions. The design constrains the generation and distribution of power to reliably satisfy the energy demand while ensuring safe operation of the system. The overall process control applies a genetic algorithm-based technique for optimal techno-economic sizing of the system's components. The process simulation utilized meteorological data for 3 locations (Abuja, Benin City and Sokoto) with varying climatic conditions in Nigeria. Simulation results presented for green GSM BTS sites are discussed and compared with existing approaches.
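For illustration only, a GA-based sizing loop of the kind described above might look like the following sketch. The decision variables (PV panels, wind turbines, batteries), unit costs, and the reliability proxy are hypothetical placeholders, not the authors' actual formulation.

    # Hypothetical sketch of GA-based techno-economic sizing for a hybrid
    # PV/wind/battery BTS site; bounds, costs, and the reliability proxy
    # are illustrative placeholders, not the authors' model.
    import random

    BOUNDS = [(1, 50), (0, 10), (1, 200)]   # PV panels, wind turbines, batteries

    def supplied_fraction(x):
        pv, wt, bat = x
        # placeholder proxy for the fraction of BTS load hours served
        return min(1.0, 0.02 * pv + 0.15 * wt + 0.005 * bat)

    def cost(x):
        pv, wt, bat = x
        capex = 300 * pv + 2500 * wt + 150 * bat              # notional unit costs
        penalty = 1e6 if supplied_fraction(x) < 0.99 else 0.0  # reliability constraint
        return capex + penalty

    def mutate(x):
        i = random.randrange(len(x))
        lo, hi = BOUNDS[i]
        y = list(x)
        y[i] = random.randint(lo, hi)
        return tuple(y)

    pop = [tuple(random.randint(lo, hi) for lo, hi in BOUNDS) for _ in range(40)]
    for _ in range(200):                     # generations
        pop.sort(key=cost)                   # rank by techno-economic cost
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(30)]
    best = min(pop, key=cost)
    print("best configuration:", best, "cost:", cost(best))

A real sizing run would replace supplied_fraction with an hourly energy-balance simulation driven by the meteorological data for each site.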
Geibel, Scott; Habtamu, Kassahun; Mekonnen, Gebeyehu; Jani, Nrupa; Kay, Lynnette; Shibru, Julyata; Bedilu, Lake; Kalibala, Samuel
2016-01-01
Evaluate the reliability and validity of the Youth Self-Report (YSR) as a screening tool for mental health problems among young people vulnerable to HIV in Ethiopia. A cross-sectional assessment of young people currently receiving social services. Young people age 15-18 participated in a study where a translated and adapted version of the YSR was administered by trained nurses, followed by an assessment by Ethiopian psychiatrists. Internal reliability of YSR syndrome scales were assessed using Chronbach's alpha. Test-retest reliability was assessed through repeating the YSR one month later. To assess validity, analysis of the sensitivity and specificity of the YSR compared to the psychiatrist assessment was conducted. Across the eight syndrome scales, the YSR best measured the diagnosis of anxiety/depression and social problems among young women, and attention problems among young men. Among individual YSR syndrome scales, internal reliability ranged from unacceptable (Chronback's alpha = 0.11, rule-breaking behavior among young women) to good (α≥0.71, anxiety/depression among young women). Anxiety/depression scores of ≥8.5 among young women also had good sensitivity (0.833) and specificity (0.754) to predict a true diagnosis. The YSR syndrome scales for social problems among young women and attention problems among young men also had fair consistency and validity measurements. Most YSR scores had significant positive correlations between baseline and post-one month administration. Measures of reliability and validity for most other YSR syndrome scales were fair to poor. The adapted, personally administered, Amharic version of the YSR has sufficient reliability and validity in identifying young vulnerable women with anxiety/depression and/or social problems, and young men with attention problems; which were the most common mental health disorders observed by psychiatrists among the migrant populations in this study. Further assessment of the applicability of the YSR among vulnerable young people for less common disorders in Ethiopia is needed.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
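The goal-seeking problem described here, finding the input that drives a noisy simulation output to a target value, is commonly handled with a Robbins-Monro stochastic approximation; a minimal sketch under that reading, with a stand-in simulator rather than the paper's response-surface model:

    # Robbins-Monro style goal seeking: find input v with E[J(v)] = tau
    # using only noisy simulation output. simulate() is a stand-in model.
    import random

    def simulate(v):                       # noisy performance measure J(v)
        return 2.0 * v + random.gauss(0.0, 0.1)

    tau, v = 10.0, 0.0                     # target value and starting input
    for n in range(1, 2001):
        a_n = 1.0 / n                      # steps with sum a_n = inf, sum a_n^2 < inf
        v -= a_n * (simulate(v) - tau)     # move the input against the observed error
    print(v)                               # ~5.0 for this stand-in model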
Traffic congestion and reliability : linking solutions to problems.
DOT National Transportation Integrated Search
2004-07-19
The Traffic Congestion and Reliability: Linking Solutions to Problems Report provides a snapshot of congestion in the United States by summarizing recent trends in congestion, highlighting the role of unreliable travel times in the effects of con...
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
Hobbs, Michael T.; Brehme, Cheryl S.
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
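The sparse recovery step amounts to an ℓ1-regularized least-squares (lasso) problem over the linear circuit model. A small self-contained sketch using iterative soft thresholding (ISTA); the model matrix and measurements are random stand-ins, not the ADAPT circuit model:

    # Sparse fault estimation as l1-regularized least squares, solved by
    # ISTA; A and y are random stand-ins for the linearized circuit model.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 100))            # measurements x fault states
    x_true = np.zeros(100)
    x_true[[3, 47]] = 1.0                         # two active faults
    y = A @ x_true + 0.01 * rng.standard_normal(30)

    lam = 0.1                                     # l1 weight
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(100)
    for _ in range(500):
        z = x - (A.T @ (A @ x - y)) / L           # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

    print(np.nonzero(np.abs(x) > 0.5)[0])         # should recover indices 3 and 47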
Jung, Yong-Gyun; Kim, Hyejin; Lee, Sangyeop; Kim, Suyeoun; Jo, EunJi; Kim, Eun-Geun; Choi, Jungil; Kim, Hyun Jung; Yoo, Jungheon; Lee, Hye-Jeong; Kim, Haeun; Jung, Hyunju; Ryoo, Sungweon; Kwon, Sunghoon
2018-06-05
The Disc Agarose Channel (DAC) system utilizes microfluidics and imaging technologies and is fully automated and capable of tracking single-cell growth to produce Mycobacterium tuberculosis (MTB) drug susceptibility testing (DST) results within 3-7 days. In particular, this system can be easily used to perform DSTs without the fastidious preparation of the inoculum of MTB cells. The inoculum effect is one of the major problems that causes DST errors. The DAC system was not influenced by the inoculum effect and produced reliable DST results. In this system, the minimum inhibitory concentration (MIC) values of the first-line drugs were consistent regardless of inoculum sizes ranging from ~10³ to ~10⁸ CFU/mL. The consistent MIC results enabled us to determine the critical concentrations for 12 anti-tuberculosis drugs. Based on the determined critical concentrations, further DSTs were performed with 254 MTB clinical isolates without measuring inoculum size. There were high agreement rates (96.3%) between the DAC system and the absolute concentration method using Löwenstein-Jensen medium. According to these results, the DAC system is the first DST system that is not affected by the inoculum effect. It can thus increase the reliability and convenience of DST for MTB. We expect that this system will be a potential substitute for conventional DST systems.
2000-01-01
To address important problems and needed changes in online and retrospective drug utilization review (DUR) programs. Emphasis is placed on reliability of DUR criteria and the shift of traditional retrospective DUR programs toward disease management and health care outcomes. Published literature evaluating the role of online and retrospective DUR programs. Particular attention was given to studies assessing DUR criteria reliability and new interventions with retrospective DUR programs. A literature review was conducted along with an expert summary from the U.S. Pharmacopeia Drug Utilization Review Advisory Panel. Studies have revealed variations in DUR criteria that could be affecting clinical practice and patient care. Appropriate formal methodologies and use of consistent procedures in developing online prospective DUR programs and systems could help resolve these problems. Traditional retrospective DUR is also shifting to incorporate disease management and methodologies from health outcomes and pharmacoeconomics studies. Refinements are needed to improve the reliability and validity of online DUR criteria and to minimize false positive messages. Databases created as a result of DUR efforts have been used in new and innovative ways to incorporate health outcomes data and disease management interventions. Additional outcomes data, combined with quality assurance efforts, should increase the utility of DUR/disease management efforts in evaluating health systems while improving the effectiveness and efficiency of pharmacists' health care interventions.
NASA Technical Reports Server (NTRS)
Guman, W. J. (Editor)
1971-01-01
Thermal vacuum tests supporting the thruster design indicate no problems under the worst-case conditions of sink temperature and spin rate. The reliability of the system was calculated to be 0.92 for a five-year mission; excluding the main energy storage capacitor, it is 0.98.
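Read under a simple series-reliability assumption (the abstract does not state the system structure), the two figures imply a five-year reliability of roughly $R_{\text{cap}} = 0.92/0.98 \approx 0.94$ for the main energy storage capacitor, i.e., that single component dominates the predicted system unreliability.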
Speech Recognition Technology for Disabilities Education
ERIC Educational Resources Information Center
Tang, K. Wendy; Kamoua, Ridha; Sutan, Victor; Farooq, Omer; Eng, Gilbert; Chu, Wei Chern; Hou, Guofeng
2005-01-01
Speech recognition is an alternative to traditional methods of interacting with a computer, such as textual input through a keyboard. An effective system can replace or reduce the reliance on standard keyboard and mouse input. This can especially assist dyslexic students who have problems with character or word use and manipulation in a textual…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-20
... "cut" from a sheet or roll of labels--is used. Persistent problems with drug product mislabeling and... believe that development and use of advanced code scanning equipment has made many current electronic... and other advanced scanning techniques have made current electronic systems reliable to the 100...
Transfer of space technology to industry
NASA Technical Reports Server (NTRS)
Hamilton, J. T.
1974-01-01
Some of the most significant applications of the NASA aerospace technology transfer to industry and other government agencies are briefly outlined. The technology utilization program encompasses computer programs for structural problems, life support systems, fuel cell development, and rechargeable cardiac pacemakers as well as reliability and quality research for oil recovery operations and pollution control.
Jacobson, Mark Z.; Delucchi, Mark A.; Cameron, Mary A.; Frew, Bethany A.
2015-01-01
This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050–2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide. PMID:26598655
Parental explanatory models of ADHD: gender and cultural variations.
Bussing, Regina; Gary, Faye A; Mills, Terry L; Garvan, Cynthia Wilson
2003-10-01
This study describes parents' explanatory models of Attention Deficit Hyperactivity Disorder (ADHD) and examines model variation by child characteristics. Children with ADHD (N = 182) were identified from a school district population of elementary school students. A reliable coding system was developed for parental responses obtained in ethnographic interviews, to convert qualitative data into numerical form for quantitative analysis. African-American parents were less likely to connect the school system to ADHD problem identification, expressed fewer worries about ADHD-related school problems, and voiced fewer preferences for school interventions than Caucasian parents, pointing to a potential disconnect with the school system. More African-American than Caucasian parents were unsure about potential causes of and treatments for ADHD, indicating a need for culturally appropriate parent education approaches.
An expert system executive for automated assembly of large space truss structures
NASA Technical Reports Server (NTRS)
Allen, Cheryl L.
1993-01-01
Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.
The Double-System Architecture for Trusted OS
NASA Astrophysics Data System (ADS)
Zhao, Yong; Li, Yu; Zhan, Jing
With the development of computer science and technology, current secure operating systems have failed to respond to many new security challenges. The trusted operating system (TOS) has been proposed to try to solve these problems. However, there are no mature, unified architectures for the TOS yet, since most of them cannot make clear the relationship between the security mechanism and the trusted mechanism. Therefore, this paper proposes a double-system architecture (DSA) for the TOS to solve the problem. The DSA is composed of the Trusted System (TS) and the Security System (SS). We constructed the TS by establishing a trusted environment and realized the related SS. Furthermore, we proposed the Trusted Information Channel (TIC) to protect the information flow between the TS and SS. In short, the double-system architecture we propose can provide reliable protection for the OS through the SS with the support provided by the TS.
Optimization of controlled processes in combined-cycle plant (new developments and researches)
NASA Astrophysics Data System (ADS)
Tverskoy, Yu S.; Muravev, I. K.
2017-11-01
All modern complex technical systems, including the power units of TPPs and nuclear power plants, operate within the system-forming structure of a multifunctional APCS. Advances in the mathematical support of modern APCS make it possible to extend automation to the solution of complex optimization problems for equipment heat- and mass-exchange processes in real time. The difficulty of efficiently managing a binary power unit stems from the need to solve at least three problems jointly. The first problem concerns the physics of combined-cycle technologies. The second is the sensitivity of CCGT operation to changes in regime and climatic factors. The third is the precise description of the vector of controlled coordinates of a complex technological object. To obtain a joint solution of this complex of interconnected problems, the methodology of generalized thermodynamic analysis, methods of the theory of automatic control, and mathematical modeling are used. The present report shows the results of new developments and studies. These results improve the principles of process control and the structural synthesis of automatic control systems for power units with combined-cycle plants, providing attainable technical and economic efficiency and operational reliability of equipment.
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
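The published interaction entropy formula computes the entropic term from an exponential average of interaction-energy fluctuations along the trajectory, −TΔS = kT ln⟨exp(ΔE_int/kT)⟩ with ΔE_int = E_int − ⟨E_int⟩. A numpy sketch on a synthetic energy series (not real MD data):

    # Interaction-entropy estimate -T*dS = kT * ln <exp(dE/kT)>, where dE
    # is the fluctuation of the protein-ligand interaction energy along an
    # MD trajectory; the energy series below is synthetic.
    import numpy as np

    kT = 0.593                                   # kcal/mol near 298 K
    rng = np.random.default_rng(1)
    E_int = -45.0 + 2.0 * rng.standard_normal(10_000)   # stand-in energy samples

    dE = E_int - E_int.mean()
    minus_TdS = kT * np.log(np.mean(np.exp(dE / kT)))
    print(f"-T*dS ~ {minus_TdS:.2f} kcal/mol")   # entropic penalty to binding

Because the average is taken over the same snapshots used for the enthalpic term, the estimate adds no simulation cost beyond the production run, which is the efficiency claim made above.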
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation for lower order systems, one that closely reflects the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
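A schematic reading of power law-based local search: perturb the incumbent with heavy-tailed step lengths, so most moves are small refinements with occasional long escapes. The operator below illustrates the idea only; it is not the exact PLSMO operator:

    # Schematic power-law local search around an incumbent solution:
    # step lengths follow s ~ u**(-1/alpha) (heavy-tailed), so most moves
    # are small with occasional long jumps. Illustrative, not the exact
    # operator used in PLSMO.
    import random

    def power_law_step(scale=0.1, alpha=2.0):
        u = random.random() + 1e-12
        return scale * u ** (-1.0 / alpha) * random.choice((-1, 1))

    def local_search(x, f, iters=100):
        fx = f(x)
        for _ in range(iters):
            y = [xi + power_law_step() for xi in x]
            fy = f(y)
            if fy < fx:                 # greedy acceptance
                x, fx = y, fy
        return x, fx

    sphere = lambda v: sum(t * t for t in v)
    print(local_search([2.0, -3.0], sphere))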
A prototype case-based reasoning human assistant for space crew assessment and mission management
NASA Technical Reports Server (NTRS)
Owen, Robert B.; Holland, Albert W.; Wood, Joanna
1993-01-01
We present a prototype human assistant system for space crew assessment and mission management. Our system is based on case episodes from American and Russian space missions and analog environments such as polar stations and undersea habitats. The general domain of small groups in isolated and confined environments represents a near ideal application area for case-based reasoning (CBR) - there are few reliable rules to follow, and most domain knowledge is in the form of cases. We define the problem domain and outline a unique knowledge representation system driven by conflict and communication triggers. The prototype system is able to represent, index, and retrieve case studies of human performance. We index by social, behavioral, and environmental factors. We present the problem domain, our current implementation, our research approach for an operational system, and prototype performance and results.
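Retrieval in such a system reduces to scoring stored case episodes against the current situation on the indexed factors; a toy sketch in which the features, weights, and cases are all hypothetical illustrations, not material from the actual case base:

    # Toy case retrieval: score stored episodes by weighted similarity on
    # social, behavioral, and environmental index features. All features,
    # weights, and cases are hypothetical.
    CASES = [
        {"id": "station-A", "crew_size": 3, "isolation_days": 120, "trigger": "conflict"},
        {"id": "habitat-B", "crew_size": 8, "isolation_days": 200, "trigger": "communication"},
    ]
    WEIGHTS = {"crew_size": 1.0, "isolation_days": 0.01, "trigger": 2.0}

    def score(case, query):
        s = 0.0
        for k, w in WEIGHTS.items():
            if isinstance(query[k], str):
                s += w * (case[k] == query[k])      # categorical match bonus
            else:
                s -= w * abs(case[k] - query[k])    # numeric distance penalty
        return s

    query = {"crew_size": 4, "isolation_days": 150, "trigger": "conflict"}
    best = max(CASES, key=lambda c: score(c, query))
    print(best["id"])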
SABRE: a bio-inspired fault-tolerant electronic architecture.
Bremner, P; Liu, Y; Samie, M; Dragffy, G; Pipe, A G; Tempesti, G; Timmis, J; Tyrrell, A M
2013-03-01
As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial-dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser-grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.
Minimize system cost by choosing optimal subsystem reliability and redundancy
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1993-01-01
The basic question which we address in this paper is how to choose among competing subsystems. This paper utilizes both reliabilities and costs to find the subsystems with the lowest overall expected cost. The paper begins by reviewing some of the concepts of expected value. We then address the problem of choosing among several competing subsystems. These concepts are then applied to k-out-of-n:G subsystems. We illustrate the use of the authors' basic program in viewing a range of possible solutions for several different examples. We then discuss the implications of various solutions in these examples.
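For a k-out-of-n:G subsystem with independent components of reliability p, subsystem reliability is a binomial tail sum, and the expected-cost comparison weighs purchase cost against the cost consequence of subsystem failure. A sketch with notional numbers (not the authors' program):

    # Expected-cost comparison for k-out-of-n:G candidate subsystems.
    # R = P(at least k of n components work); all costs are notional.
    from math import comb

    def k_out_of_n_reliability(k, n, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def expected_cost(k, n, p, unit_cost, failure_cost):
        R = k_out_of_n_reliability(k, n, p)
        return n * unit_cost + (1 - R) * failure_cost

    for n in (2, 3, 4, 5):               # candidate redundancy levels, k = 2
        print(n, round(expected_cost(2, n, 0.95, 10_000, 5_000_000), 1))

Under these notional costs the expected cost first falls and then rises with n, which is the kind of trade-off the paper's examples explore.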
NASA Technical Reports Server (NTRS)
Wild, Christian; Eckhardt, Dave
1987-01-01
The development of a methodology for the production of highly reliable software is one of the greatest challenges facing the computer industry. Meeting this challenge will undoubtedly involve the integration of many technologies. This paper describes the use of Artificial Intelligence technologies in the automated analysis of the formal algebraic specifications of abstract data types. These technologies include symbolic execution of specifications using techniques of automated deduction, and machine learning through the use of examples. On-going research into the role of knowledge representation and problem solving in the process of developing software is also discussed.
Automatic Bone Drilling - More Precise, Reliable and Safe Manipulation in the Orthopaedic Surgery
NASA Astrophysics Data System (ADS)
Boiadjiev, George; Kastelov, Rumen; Boiadjiev, Tony; Delchev, Kamen; Zagurski, Kazimir
2016-06-01
Bone drilling manipulation often occurs in orthopaedic surgery. Statistically, about one million people in Europe alone now need such an operation every year for the insertion of bone implants. Almost always, the drilling is performed by hand, which cannot avoid the influence of subjective factors. The question of how to reduce the subjective factor has its answer - automatic bone drilling. The specific features and problems of the orthopaedic drilling manipulation are considered in this work. Automatic drilling is presented through the capabilities of the robotized system Orthopaedic Drilling Robot (ODRO) for assuring the accuracy, precision, reliability, and safety of the manipulation.
Montes, Guillermo; Lotyczewski, Bohdan S; Halterman, Jill S; Hightower, Alan D
2012-03-01
The impact of behavior problems on kindergarten readiness is not known. Our objective was to estimate the association between behavior problems and kindergarten readiness in a US national sample. In the US educational system, kindergarten is a natural point of entry into formal schooling at age 5 because fewer than half of the children enter kindergarten with prior formal preschool education. Parents of 1,200 children who were scheduled to enter kindergarten for the first time and were members of the Harris Interactive online national panel were surveyed. We defined behavior problems as an affirmative response to the question, "Has your child ever had behavior problems?" We validated this against attention deficit hyperactivity disorder diagnosis, scores on a reliable socioemotional scale, and the child's receipt of early intervention services. We used linear, tobit, and logistic regression analyses to estimate the association between having behavior problems and scores in reliable scales of motor, play, speech and language, and school skills and an overall kindergarten readiness indicator. The sample included 176 children with behavior problems for a national prevalence of 14% (confidence interval, 11.5-17.5). Children with behavior problems were more likely to be male and live in households with lower income and parental education. We found that children with behavior problems entered kindergarten with lower speech and language, motor, play, and school skills, even after controlling for demographics and region. Delays were 0.6-1 SD below scores of comparable children without behavior problems. Parents of children with behavior problems were 5.2 times more likely to report their child was not ready for kindergarten. Childhood behavior problems are associated with substantial delays in motor, language, play, school, and socioemotional skills before entrance into kindergarten. Early screening and intervention are recommended.
2004-03-01
developed while the HH-65 was still in the developmental phase and a Full Authority Digital Engine Control (FADEC) system (Chisom, 1984:189). In 1982... Lucas Aerospace developed a FADEC system for the HH-65. While test flights of this system were successful in demonstrating the feasibility of the... Lucas FADEC for the HH-65, there were problems associated with a lack of redundancy of the Engine Control Computer software and lack of cockpit
Chen, I-Min A; Markowitz, Victor M; Palaniappan, Krishna; Szeto, Ernest; Chu, Ken; Huang, Jinghua; Ratner, Anna; Pillay, Manoj; Hadjithomas, Michalis; Huntemann, Marcel; Mikhailova, Natalia; Ovchinnikova, Galina; Ivanova, Natalia N; Kyrpides, Nikos C
2016-04-26
The exponential growth of genomic data from next generation technologies renders traditional manual expert curation effort unsustainable. Many genomic systems have included community annotation tools to address the problem. Most of these systems adopted a "Wiki-based" approach to take advantage of existing wiki technologies, but encountered obstacles in issues such as usability, authorship recognition, information reliability and incentive for community participation. Here, we present a different approach, relying on a tightly integrated method rather than a "Wiki-based" one, to support community annotation and user collaboration in the Integrated Microbial Genomes (IMG) system. The IMG approach allows users to use the existing IMG data warehouse and analysis tools to add gene, pathway and biosynthetic cluster annotations, to analyze/reorganize contigs, genes and functions using workspace datasets, and to share private user annotations and workspace datasets with collaborators. We show that the annotation effort using IMG can be part of the research process to overcome the user incentive and authorship recognition problems, thus fostering collaboration among domain experts. The usability and reliability issues are addressed by the integration of curated information and analysis tools in IMG, together with DOE Joint Genome Institute (JGI) expert review. By incorporating annotation operations into IMG, we provide an integrated environment for users to perform deeper and extended data analysis and annotation in a single system that can lead to publications and community knowledge sharing, as shown in the case studies.
Clarke, John R
2009-01-01
Surgical errors with minimally invasive surgery differ from those in open surgery. Perforations are typically the result of trocar introduction or electrosurgery. Infections include bioburdens, notably enteric viruses, on complex instruments. Retained foreign objects are primarily unretrieved device fragments and lost gallstones or other specimens. Fires and burns come from illuminated ends of fiber-optic cables and from electrosurgery. Pressure ischemia is more likely with longer endoscopic surgical procedures. Gas emboli can occur. Minimally invasive surgery is more dependent on complex equipment, with high likelihood of failures. Standardization, checklists, and problem reporting are solutions for minimizing failures. The necessity of electrosurgery makes education about best electrosurgical practices important. The recording of minimally invasive surgical procedures is an opportunity to debrief in a way that improves the reliability of future procedures. Safety depends on reliability, designing systems to withstand inevitable human errors. Safe systems are characterized by a commitment to safety, formal protocols for communications, teamwork, standardization around best practice, and reporting of problems for improvement of the system. Teamwork requires shared goals, mental models, and situational awareness in order to facilitate mutual monitoring and backup. An effective team has a flat hierarchy; team members are empowered to speak up if they are concerned about problems. Effective teams plan, rehearse, distribute the workload, and debrief. Surgeons doing minimally invasive surgery have a unique opportunity to incorporate the principles of safety into the development of their discipline.
NASA Astrophysics Data System (ADS)
Nagata, Keitro; Nishimura, Jun; Shimasaki, Shinji
2018-03-01
We study QCD at finite density and low temperature by using the complex Langevin method. We employ gauge cooling to control the unitarity norm and introduce a deformation parameter in the Dirac operator to avoid the singular-drift problem. The reliability of the obtained results is judged by the probability distribution of the magnitude of the drift term. By making extrapolations with respect to the deformation parameter using only the reliable results, we obtain results for the original system. We perform simulations on a 4³ × 8 lattice and show that our method works well even in the region where the reweighting method fails due to the severe sign problem. As a result we observe a delayed onset of the baryon number density as compared with the phase-quenched model, which is a clear sign of the Silver Blaze phenomenon.
A hybrid Jaya algorithm for reliability-redundancy allocation problems
NASA Astrophysics Data System (ADS)
Ghavidel, Sahand; Azizivahed, Ali; Li, Li
2018-04-01
This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
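For context, the base Jaya move pulls each candidate toward the current best solution and away from the current worst with random coefficients. A minimal sketch of that base update on a test function, omitting the TVAC and TLBO learning-phase extensions that define LJaya-TVAC:

    # Minimal Jaya update on a continuous test function; the TVAC and
    # learning-phase extensions of LJaya-TVAC are omitted.
    import random

    def jaya_step(x, best, worst):
        r1, r2 = random.random(), random.random()
        return [xi + r1 * (bi - abs(xi)) - r2 * (wi - abs(xi))
                for xi, bi, wi in zip(x, best, worst)]

    sphere = lambda v: sum(t * t for t in v)
    pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
    for _ in range(300):
        pop.sort(key=sphere)
        best, worst = pop[0], pop[-1]
        # greedy acceptance: keep the better of old and moved solutions
        pop = [min((jaya_step(x, best, worst), x), key=sphere) for x in pop]
    print(sphere(min(pop, key=sphere)))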
Cleaning of printed circuit assemblies with surface-mounted components
NASA Astrophysics Data System (ADS)
Arzigian, J. S.
The need for ever-increasing miniaturization of airborne instrumentation through the use of surface mounted components closely placed on printed circuit boards highlights problems with traditional board cleaning methods. The reliability of assemblies which have been cleaned with vapor degreasing and spray cleaning can be seriously compromised by residual contaminants leading to solder joint failure, board corrosion, and even electrical failure of the mounted parts. In addition, recent government actions to eliminate fully halogenated chlorofluorocarbons (CFC) and chlorinated hydrocarbons from the industrial environment require the development of new cleaning materials and techniques. This paper discusses alternative cleaning materials and techniques and results that can be expected with them. Particular emphasis is placed on problems related to surface-mounted parts. These new techniques may lead to improved circuit reliability and, at the same time, be less expensive and less environmentally hazardous than the traditional systems.
The influence of utility-interactive PV system characteristics to ac power networks
NASA Astrophysics Data System (ADS)
Takeda, Y.; Takigawa, K.; Kaminosono, H.
Two basic experimental photovoltaic (PV) systems have been built for the study of variation of power quality, aspects of safety, and technical problems. One system uses a line-commutated inverter, while the other system uses a self-commutated inverter. A description is presented of the operating and generating characteristics of the two systems. The systems were connected to an ac simulated network which simulates an actual power distribution system. Attention is given to power generation characteristics, the control characteristics, the harmonics characteristics, aspects of coordination with the power network, and questions regarding the reliability of photovoltaic modules.
Fatigue Reliability of Gas Turbine Engine Structures
NASA Technical Reports Server (NTRS)
Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.
1997-01-01
The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
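For reference, FORM approximates the failure probability through the reliability index $\beta$, the minimum distance from the origin to the limit-state surface in standard normal space:

    \beta = \min_{u} \|u\| \quad \text{subject to } g(T(u)) = 0, \qquad P_f \approx \Phi(-\beta)

where $g$ is the fatigue performance function built from the response surface, $T$ maps the standard normal variables $u$ to the physical engine parameters, and $\Phi$ is the standard normal CDF.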
Li, Guanghui; Wei, Jianhua; Wang, Xi; Wu, Guofeng; Ma, Dandan; Wang, Bo; Liu, Yanpu; Feng, Xinghua
2013-08-01
Cleft lip in the presence or absence of a cleft palate is a major public health problem. However, few studies have been published concerning the soft-tissue morphology of cleft lip infants. Currently, obtaining reliable three-dimensional (3D) surface models of infants remains a challenge. The aim of this study was to investigate a new way of capturing 3D images of cleft lip infants using a structured light scanning system. In addition, the accuracy and precision of the acquired facial 3D data were validated and compared with direct measurements. Ten unilateral cleft lip patients were enrolled in the study. Briefly, 3D facial images of the patients were acquired using a 3D scanner device before and after the surgery. Fourteen items were measured by direct anthropometry and 3D image software. The accuracy and precision of the 3D system were assessed by comparative analysis. The anthropometric data obtained using the 3D method were in agreement with the direct anthropometry measurements. All data calculated by the software were 'highly reliable' or 'reliable', as defined in the literature. The localisation of four landmarks was not consistent in repeated experiments of inter-observer reliability in preoperative images (P<0.05), while the intra-observer reliability in both pre- and postoperative images was good (P>0.05). The structured light scanning system is proven to be a non-invasive, accurate and precise method in cleft lip anthropometry.
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
NASA Astrophysics Data System (ADS)
Reinert, K. A.
The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of, and optimize, a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in the model's ability to predict reliability and to account for serial correlations are noted; the model is nonetheless concluded to be useful for narrowing WECS design options.
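The chance-constrained step referred to above is the standard deterministic-equivalent conversion: a requirement that supply $S$ meet the random demand $D$ with reliability $\alpha$ becomes

    P(S \ge D) \ge \alpha \iff S \ge F_D^{-1}(\alpha)

where $F_D$ is the demand distribution function; the "appropriate value from an inverse cumulative distribution function" mentioned in the abstract corresponds to $F_D^{-1}(\alpha)$.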
ERIC Educational Resources Information Center
Danseco, Evangeline R.; Marques, Paul R.
2002-01-01
The Problem-Oriented Screening Instrument for Teenagers (POSIT) screens for multiple problems among adolescents at risk for substance use. A shortened version of the POSIT was developed, using factor analysis, and correlational and reliability analyses. The POSIT-SF shows potential for a reliable and cost-efficient screen for youth with substance…
ERIC Educational Resources Information Center
Albanese, Mark A.; Jacobs, Richard M.
1990-01-01
The reliability and validity of a procedure to measure diagnostic-reasoning and problem-solving skills taught in predoctoral orthodontic education were studied using 68 second year dental students. The procedure includes stimulus material and 33 multiple-choice items. It is a feasible way of assessing problem-solving skills in dentistry education…
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
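The core numerical task behind PAWS/STEM - propagating a pure Markov reliability model to obtain failure-state probabilities - is a matrix exponential of the model's generator matrix. A tiny stand-in model (not SURE-language input, and far smaller than the stiff models these tools target):

    # Tiny pure Markov reliability model solved with a matrix exponential,
    # the same computation PAWS/STEM automate for large, stiff models.
    # States: 0 = both units up, 1 = one up, 2 = system failed (absorbing).
    import numpy as np
    from scipy.linalg import expm

    lam, mu = 1e-4, 1e-2          # failure and repair rates per hour (notional)
    Q = np.array([[-2 * lam, 2 * lam, 0.0],
                  [mu, -(mu + lam), lam],
                  [0.0, 0.0, 0.0]])          # generator matrix, rows sum to zero

    p0 = np.array([1.0, 0.0, 0.0])           # start with both units working
    p_t = p0 @ expm(Q * 10_000.0)            # state probabilities at t = 10,000 h
    print(f"P(system failed) = {p_t[2]:.3e}")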
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (UNIX VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Sonar Transducer Reliability Improvement Program (STRIP)
1981-01-01
Fair *[51] EPDM NORDOL 1370 - Poor *[51] NATURAL 1155 - Poor *[51] NITRILE 6100 - Good *[51] VITON CTBN (BF635075) - Poor *[51] CORK-RUBBER ... aging problems have been found. A report entitled "Reliability and Service Life Concepts for Sonar Transducer Applications" has been completed. - A draft...or aging problems have been found. See Section 9. * A report entitled "Reliability and Service Life Concepts for Sonar Transducer Applications" has
Business Case Analysis: Reconfiguration of the Frederick Memorial Healthcare System Courier Service
2008-05-13
from each specimen. This figure alone clearly supports the existence of the FMH courier service. The problem, rather, lies in the efficiency and...investigated, to include the Hyundai Accent, Chevrolet Aveo, and the Honda Fit. Each vehicle was evaluated on cost, fuel efficiency, predicted reliability...P175/65R14 Tires Temporary Spare Tire SAFETY Driver Front Airbag and Front Passenger Airbag with Advanced Airbag System 3 Point Driver & Fr Pass
1977-01-01
principles apply; however, special attention has to be given early in analysis to the number and kinds of discriminations required of the human observer...demands, to store, or to output desired information. Typically, these are not insurmountable problems, but they have to receive their due attention ... attention to calibration, data identification, noise, drift, and measurement start/stop logic. Manual systems require special attention to the reliability of
The advantages of the high voltage solar array for electric propulsion
NASA Technical Reports Server (NTRS)
Sater, B. L.
1973-01-01
The high voltage solar array offers improvements in efficiency, weight, and reliability for the electric propulsion power system. Conventional power processing and the problems associated with ion thruster operation are discussed in light of SERT 2 experience, and the advantages of the HVSA concept for electric propulsion are presented. Tests conducted operating the SERT 2 thruster system in conjunction with the HVSA are reported. Thruster operation was observed to be normal and in some respects improved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
JSD: Parallel Job Accounting on the IBM SP2
NASA Technical Reports Server (NTRS)
Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)
1995-01-01
The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.
Measurement system with high accuracy for laser beam quality.
Ke, Yi; Zeng, Ciling; Xie, Peiyuan; Jiang, Qingshan; Liang, Ke; Yang, Zhenyu; Zhao, Ming
2015-05-20
Presently, most laser beam quality measurement systems collimate the optical path manually, with low efficiency and low repeatability. To solve these problems, this paper proposes a new collimation method to improve the reliability and accuracy of the measurement results. The system accurately controls the position of the mirror to change the laser beam propagation direction, so that the beam is perpendicularly incident on the photosurface of the camera. The experimental results show that the proposed system has good repeatability and that the measured deviation of the M2 factor is less than 0.6%.
Cape, John; Morris, Elena; Burd, Mary; Buszewicz, Marta
2008-01-01
Background How GPs understand mental health problems determines their treatment choices; however, measures describing GPs' thinking about such problems are not currently available. Aim To develop a measure of the complexity of GP explanations of common mental health problems and to pilot its reliability and validity. Design of study A qualitative development of the measure, followed by inter-rater reliability and validation pilot studies. Setting General practices in North London. Method Vignettes of simulated consultations with patients with mental health problems were videotaped, and an anchored measure of complexity of psychosocial explanation in response to these vignettes was developed. Six GPs, four psychologists, and two lay people viewed the vignettes. Their responses were rated for complexity, both using the anchored measure and independently by two experts in primary care mental health. In a second reliability and revalidation study, responses of 50 GPs to two vignettes were rated for complexity. The GPs also completed a questionnaire to determine their interest and training in mental health, and they completed the Depression Attitudes Questionnaire. Results Inter-rater reliability of the measure of complexity of explanation in both pilot studies was satisfactory (intraclass correlation coefficient = 0.78 and 0.72). The measure correlated with expert opinion as to what constitutes a complex explanation, and the responses of psychologists, GPs, and lay people differed in measured complexity. GPs with higher complexity scores had greater interest, more training in mental health, and more positive attitudes to depression. Conclusion Results suggest that the complexity of GPs' psychosocial explanations about common mental health problems can be reliably and validly assessed by this new standardised measure. PMID:18505616
NASA Astrophysics Data System (ADS)
Sinha, Pampa; Nath, Sudipta
2010-10-01
The main aspects of power system delivery are reliability and quality. If all the customers of a power system receive uninterrupted power throughout the year, the system is considered to be reliable. The term power quality refers to maintaining a near-sinusoidal voltage at rated frequency at the consumer's end. The power components are defined according to IEEE Standard 1459-2000, both for single-phase and three-phase unbalanced systems, based on the Fourier transform (FFT). In the presence of nonstationary power quality (PQ) disturbances, however, these FFT-based definitions yield inaccurate values owing to their sensitivity to the spectral leakage problem. To overcome these limitations, the power quality components are calculated using the Discrete Wavelet Transform (DWT). Fuzzy logic has been incorporated in this paper to handle the uncertainties associated with electric power system operations. A new power quality index is introduced which can assess power quality under nonstationary disturbances.
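As a rough illustration of the DWT side of this comparison (the wavelet choice, decomposition level, and test signal below are assumed, not taken from the paper), a per-band RMS estimate can be read directly off the wavelet coefficients of a distorted voltage waveform:

```python
import numpy as np
import pywt

fs = 6400                                   # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)    # 50 Hz fundamental
v += 20 * np.sin(2 * np.pi * 250 * t) * (t > 0.1)    # 5th harmonic switching on

coeffs = pywt.wavedec(v, 'db4', level=5)    # [a5, d5, d4, d3, d2, d1]
n = len(v)
for name, c in zip(['a5', 'd5', 'd4', 'd3', 'd2', 'd1'], coeffs):
    # For an orthogonal wavelet, band energies add up (Parseval), so each
    # band's contribution to the total RMS follows from its coefficients.
    print(f"{name}: band RMS ~ {np.sqrt(np.sum(c**2) / n):.2f} V")
```

Unlike a fixed-window FFT, the band energies remain meaningful when the harmonic content switches on mid-record.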
NASA Astrophysics Data System (ADS)
LI, Y.; Yang, S. H.
2017-05-01
The Antarctic astronomical telescopes operate year-round at the unattended South Pole, with only one maintenance opportunity each year. Due to the complexity of the optical, mechanical, and electrical systems, the telescopes are hard to maintain and require multi-skilled expedition teams, so heightened attention to reliability is essential for the Antarctic telescopes. Based on the fault mechanisms and fault modes of the main-axis control system of the equatorial Antarctic astronomical telescope AST3-3 (Antarctic Schmidt Telescopes 3-3), the method of fault tree analysis is introduced in this article, and the importance degree of the top event is obtained from the importance degrees of the bottom-event structure. From these results, hidden problems and weak links can be effectively found, indicating directions for improving the stability of the system and optimizing its design.
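The following toy sketch (the telescope's actual fault tree is not reproduced here; the gates and probabilities are invented) shows the two computations such an analysis rests on: the top-event probability for a tree of independent basic events, and the Birnbaum importance used to rank weak links.

```python
# Illustrative two-level fault tree for a hypothetical main-axis drive.
def p_or(ps):
    """P(at least one event occurs) for independent events."""
    prod = 1.0
    for p in ps:
        prod *= (1.0 - p)
    return 1.0 - prod

def p_and(ps):
    """P(all events occur) for independent events."""
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

basic = {'motor': 2e-3, 'encoder': 1e-3, 'driver_a': 5e-3, 'driver_b': 5e-3}

def top(b):
    # TOP = motor OR encoder OR (redundant driver pair failing together)
    return p_or([b['motor'], b['encoder'], p_and([b['driver_a'], b['driver_b']])])

print(f"top-event probability = {top(basic):.3e}")
for name in basic:
    hi = dict(basic, **{name: 1.0})
    lo = dict(basic, **{name: 0.0})
    birnbaum = top(hi) - top(lo)    # sensitivity of the top event to this event
    print(f"{name}: Birnbaum importance = {birnbaum:.3e}")
```

Here the single-point items (motor, encoder) dominate the importance ranking, which is the kind of result used to direct design hardening.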
Space Station man-machine automation trade-off analysis
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Bard, J.; Feinberg, A.
1985-01-01
The man-machine automation tradeoff methodology presented is one of four research tasks comprising the Autonomous Spacecraft System Technology (ASST) project. ASST was established to identify and study system-level design problems for autonomous spacecraft. Using the Space Station as an example of a spacecraft system requiring a certain level of autonomous control, a system-level man-machine automation tradeoff methodology is presented that: (1) optimizes man-machine mixes for different ground and on-orbit crew functions subject to cost, safety, weight, power, and reliability constraints, and (2) plots the best incorporation plan for new, emerging technologies by weighing cost, relative availability, reliability, safety, importance to out-year missions, and ease of retrofit. Although the methodology takes a fairly straightforward approach to valuing human productivity, it is still sensitive to the important subtleties associated with designing a well-integrated man-machine system. These subtleties include considerations such as crew preference to retain certain spacecraft control functions, or valuing human integration/decision capabilities over equivalent hardware/software where appropriate.
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm for solving subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column-pivoting strategy based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme can be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
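A simplified greedy analogue of the residual-driven pivoting idea (not the exact MRQR rule) can be sketched in a few lines: at each step, admit the column that most reduces the least-squares residual.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8))                       # candidate regressors
y = A[:, [1, 4]] @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal(50)

selected = []
for _ in range(3):                                     # build a 3-variable subset
    best, best_norm = None, np.inf
    for j in range(A.shape[1]):
        if j in selected:
            continue
        cols = selected + [j]
        # residual of the least-squares fit on the trial subset
        r = y - A[:, cols] @ np.linalg.lstsq(A[:, cols], y, rcond=None)[0]
        if np.linalg.norm(r) < best_norm:
            best, best_norm = j, np.linalg.norm(r)
    selected.append(best)
    print(f"added column {best}, residual norm {best_norm:.4f}")
```

The standard pivoted factorization itself is available as scipy.linalg.qr(A, pivoting=True); the paper's contribution is the residual-based pivot choice rather than the factorization machinery.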
Specific Features in Measuring Particle Size Distributions in Highly Disperse Aerosol Systems
NASA Astrophysics Data System (ADS)
Zagaynov, V. A.; Vasyanovich, M. E.; Maksimenko, V. V.; Lushnikov, A. A.; Biryukov, Yu. G.; Agranovskii, I. E.
2018-06-01
The size distribution of highly dispersed aerosols is studied. Particular attention is given to the diffusion dynamic approach, as it is the best way to determine particle size distribution. It is shown that the problem can be divided into two steps: directly measuring particle penetration through diffusion batteries, and solving the inverse problem (obtaining a size distribution from the measured penetrations). No fully reliable way of solving the so-called inverse problem in general is found, but it can be handled by introducing a parametrized size distribution (e.g., a gamma distribution). The integral equation is thereby reduced to a system of nonlinear equations that can be solved by elementary mathematical means. Further development of the method requires an increase in sensitivity (i.e., measuring the dimensions of molecular clusters with radioactive sources, along with the activity of diffusion battery screens).
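A sketch of the parametrized inversion (the diffusion-loss kernel and stage constants below are assumed for illustration only; real diffusion-battery kernels come from penetration theory): represent the size distribution as a two-parameter gamma density and fit those parameters to the measured penetrations by nonlinear least squares.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import gamma as gamma_dist

d = np.logspace(0, 2, 400)                 # particle diameter grid, nm
dx = np.gradient(d)                        # quadrature weights on the log grid
k = np.array([0.5, 1.0, 2.0, 4.0, 8.0])    # assumed constants for 5 battery stages

def penetration(params, k_stage):
    shape, scale = params
    w = gamma_dist.pdf(d, shape, scale=scale)
    w = w / np.sum(w * dx)                         # normalized size distribution
    pen_d = np.exp(-k_stage * (10.0 / d) ** 1.5)   # assumed diffusion-loss kernel
    return np.sum(w * pen_d * dx)                  # battery-averaged penetration

rng = np.random.default_rng(1)
meas = np.array([penetration((4.0, 5.0), ki) for ki in k])
meas *= 1 + 0.01 * rng.standard_normal(k.size)     # synthetic noisy measurements

fit = least_squares(
    lambda p: np.array([penetration(p, ki) for ki in k]) - meas,
    x0=[2.0, 10.0], bounds=([0.1, 0.1], [50.0, 100.0]))
print("recovered gamma (shape, scale):", fit.x)
```

The parametrization is what regularizes the otherwise ill-posed inversion: only two unknowns are fit to the handful of stage penetrations.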
Evaluation of DVD-R for Archival Applications
NASA Technical Reports Server (NTRS)
Martin, Michael D.; Hyon, Jason J.
2000-01-01
For more than a decade, CD-ROM and CD-R have provided an unprecedented level of reliability, low cost and cross-platform compatibility to support federal data archiving and distribution efforts. However, it should be remembered that years of effort were required to achieve the standardization that has supported the growth of the CD industry. Incompatibilities in the interpretation of the ISO-9660 standard on different operating systems had to be dealt with, and the imprecise specifications in the Orange Book Part II and Part III led to incompatibilities between CD-R media and CD-R recorders. Some of these issues were presented by the authors at Optical Data Storage '95. The major current problem with the use of CD technology is the growing volume of digital data that needs to be stored. CD-ROM collections of hundreds of volumes and CD-R collections of several thousand volumes are becoming almost too cumbersome to be useful. The emergence of Digital Video Disk-Recordable (DVD-R) technology promises to reduce the number of discs required for archive applications by a factor of seven while providing improved reliability. It is important to identify problem areas for DVD-R media and provide guidelines to manufacturers, file system developers and users in order to provide reliable data storage and interchange. The Data Distribution Laboratory (DDL) at NASA's Jet Propulsion Laboratory began its evaluation of DVD-R technology in early 1998. The initial plan was to obtain a DVD recorder for preliminary testing, deploy reader hardware to user sites for compatibility testing, evaluate the quality and longevity of DVD-R media, and develop proof-of-concept archive collections to test the reliability and usability of DVD-R media and jukebox hardware.
ERIC Educational Resources Information Center
Hardy, Precious; Aruguete, Mara
2014-01-01
Retention is a major problem in most colleges and universities. High dropout rates, especially in the STEM disciplines (science, technology, engineering and mathematics), have proved intractable despite the offering of supplemental instruction. A broad model of support systems that includes psychological factors is needed to address retention in…
AFRL/Cornell Information Assurance Institute
2007-03-01
collaborations involving Cornell and AFRL researchers, with AFRL researchers able to participate in Cornell research projects, facilitating technology ...approach to developing a science base and technology for supporting large-scale reliable distributed systems. First, solutions to core problems were
ERIC Educational Resources Information Center
Blanchard, Alexia; Kraif, Olivier; Ponton, Claude
2009-01-01
This paper presents a "didactic triangulation" strategy to cope with the problem of reliability of NLP applications for computer-assisted language learning (CALL) systems. It is based on the implementation of basic but well mastered NLP techniques and puts the emphasis on an adapted gearing between computable linguistic clues and didactic features…
CESAR robotics and intelligent systems research for nuclear environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1992-07-01
The Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) encompasses expertise and facilities to perform basic and applied research in robotics and intelligent systems in order to address a broad spectrum of problems related to nuclear and other environments. For nuclear environments, research focus is derived from applications in advanced nuclear power stations, and in environmental restoration and waste management. Several programs at CESAR emphasize the cross-cutting technology issues, and are executed in appropriate cooperation with projects that address specific problem areas. Although the main thrust of the CESAR long-term research is on developing highly automated systems that can cooperate and function reliably in complex environments, the development of advanced human-machine interfaces represents a significant part of our research. 11 refs.
Data Acquisition System for Russian Arctic Magnetometer Network
NASA Astrophysics Data System (ADS)
Janzhura, A.; Troshichev, O. A.; Takahashi, K.
2010-12-01
Monitoring of magnetic activity in the auroral zone is essential for space weather studies. A large part of the northern auroral zone lies in the Russian sector of the Arctic. The Russian auroral zone stations are located far from proper infrastructure and communications, so getting data from the stations is a complicated and nontrivial task. To resolve this problem, a new acquisition system for the magnetometers was developed and deployed over the last few years, with magnetic data transmission in real time, which is important for many forecasting purposes. The system, based on microprocessor modules, is very reliable in harsh climatic conditions. The information from the magnetic sensors is transmitted to the AARI data center by a satellite communication system and is presented on the AARI web pages. This upgrade of the Russian polar magnetometer network equipment is supported by the international RapidMag program.
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potential of the system in terms of speed and reliability far exceeds any problems associated with hardware and software development.
An inverse problem for a mathematical model of aquaponic agriculture
NASA Astrophysics Data System (ADS)
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist which aim to study the system's processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
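The paper's equations are not reproduced here, but a three-compartment toy system in the same spirit (all rates and carrying capacities assumed) illustrates the kind of ODE model meant: fish excrete nutrients, nutrients feed plants, and plant growth is nutrient-limited.

```python
from scipy.integrate import solve_ivp

def aquaponics(t, y, r_f=0.1, K_f=50.0, e=0.05, u=0.08, r_p=0.12, K_p=200.0):
    F, N, P = y                     # fish biomass, nutrient conc., plant biomass
    dF = r_f * F * (1 - F / K_f)    # logistic fish growth
    dN = e * F - u * N * P          # excretion minus plant uptake
    dP = r_p * P * (N / (1 + N)) * (1 - P / K_p)   # nutrient-limited plant growth
    return [dF, dN, dP]

sol = solve_ivp(aquaponics, (0, 120), [5.0, 1.0, 10.0])
print("state after 120 days (F, N, P):", sol.y[:, -1])
```

An inverse problem of the kind the paper describes would then ask: given noisy observations of F and P, recover rates such as e and u.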
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G
2017-11-03
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates
2017-01-01
Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533
Improved model for detection of homogeneous production batches of electronic components
NASA Astrophysics Data System (ADS)
Kazakovtsev, L. A.; Orlov, V. I.; Stashkov, D. V.; Antamoshkin, A. N.; Masich, I. S.
2017-10-01
Supplying the electronic units of complex technical systems with electronic devices of the proper quality is one of the most important problems in increasing whole-system reliability. Moreover, to reach the highest reliability of an electronic unit, the electronic devices of the same type must have equal characteristics which assure their coherent operation. The highest homogeneity of the characteristics is reached if the electronic devices are manufactured as a single production batch. Moreover, each production batch must be made from homogeneous raw materials. In this paper, we propose an improved model for detecting the homogeneous production batches within a shipped lot of electronic components, based on applying the kurtosis criterion to the results of non-destructive testing performed on each lot of electronic devices used in the space industry.
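A minimal sketch of the kurtosis idea (the threshold and data below are assumed): for a test parameter measured on every device in a shipped lot, a mixture of two or more production batches pushes the excess kurtosis away from the normal value of zero, while a single homogeneous batch stays close to it.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(7)
one_batch = rng.normal(10.0, 0.2, 500)                      # homogeneous lot
two_batches = np.concatenate([rng.normal(9.8, 0.2, 250),
                              rng.normal(10.4, 0.2, 250)])  # mixed lot

for name, x in [("single batch", one_batch), ("mixed batches", two_batches)]:
    g2 = kurtosis(x, fisher=True)        # excess kurtosis, 0 for a normal law
    flag = "inhomogeneous?" if abs(g2) > 0.5 else "looks homogeneous"
    print(f"{name}: excess kurtosis = {g2:+.2f} -> {flag}")
```

A balanced mixture of two well-separated normals is platykurtic (negative excess kurtosis), which is why the sign matters less than the deviation from zero.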
Macaulay, Margaret; van den Heuvel, Eleanor; Jowitt, Felicity; Clarke-O'Neill, Sinead; Kardas, Przemyslaw; Blijham, Nienke; Leander, Hakan; Xu, Yu; Fader, Mandy; Cottenden, Alan
2007-01-01
This paper describes a project to develop and clinically evaluate a novel toileting device for women called the Non-Invasive Continence Management System (NICMS). The NICMS device is designed to provide an alternative toileting facility that overcomes problems some women experience when using conventional female urinals. A single product evaluation was completed; participants used the same device with 1 or 2 interface variants. Eighty women from 6 countries who were either mobile or wheelchair dependent evaluated the product over a 15-month period. The device was found to be useful in some circumstances for women and their caregivers. Significant further development is required for it to work reliably and to provide an acceptable device in terms of reliability, size, weight, noise, and aesthetics.
Burns, C
1991-01-01
Pediatric nurse practitioners (PNPs) need an integrated, comprehensive classification that includes nursing, disease, and developmental diagnoses to effectively describe their practice. No such classification exists. Further, methodologic studies to help evaluate the content validity of any nursing taxonomy are unavailable. A conceptual framework was derived. Then 178 diagnoses from the North American Nursing Diagnosis Association (NANDA) 1986 list, selected diagnoses from the International Classification of Diseases, the Diagnostic and Statistical Manual, Third Revision, and others were selected. This framework identified and listed, with definitions, three domains of diagnoses: Developmental Problems, Diseases, and Daily Living Problems. The diagnoses were ranked using a 4-point scale (4 = highly related to 1 = not related) and were placed into the three domains. The rating scale was assigned by a panel of eight expert pediatric nurses. Diagnoses that were assigned to the Daily Living Problems domain were then sorted into the 11 Functional Health patterns described by Gordon (1987). Reliability was measured using proportions of agreement and Kappas. Content validity of the groups created was measured using indices of content validity and average congruency percentages. The experts used a new method to sort the diagnoses in a new way that decreased overlaps among the domains. The Developmental and Disease domains were judged reliable and valid. The Daily Living domain of nursing diagnoses showed marginally acceptable validity with acceptable reliability. Six Functional Health Patterns were judged reliable and valid, mixed results were determined for four categories, and the Coping/Stress Tolerance category was judged reliable but not valid using either test. There were considerable differences between the panel's, Gordon's (1987), and NANDA's clustering of NANDA diagnoses. This study defines the diagnostic practice of nurses from a holistic, patient-centered perspective. It is the first study to use quantitative methods to test a diagnostic classification system for nursing. The classification model could also be adapted for other nurse specialties.
Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) Tutorial
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. L. Smith; S. T. Beck; S. T. Wood
2008-08-01
The Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE) refers to a set of computer programs that were developed to create and analyze probabilistic risk assessments (PRAs). This volume is the tutorial manual for the SAPHIRE system. In this document, a series of lessons is provided that guides the user through the basic steps common to most analyses performed with SAPHIRE. The tutorial is divided into two major sections covering both basic and advanced features. The section covering basic topics contains lessons that lead the reader through development of a probabilistic hypothetical problem involving a vehicle accident, highlighting the program's most fundamental features. The advanced features section contains additional lessons that expand on fundamental analysis features of SAPHIRE and provide insights into more complex analysis techniques. Together, these two elements provide an overview of the operation and capabilities of the SAPHIRE software.
The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story
NASA Technical Reports Server (NTRS)
Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS, and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.
Face Liveness Detection Using Defocus
Kim, Sooyeon; Ban, Yuseok; Lee, Sangyoun
2015-01-01
In order to develop security systems for identity authentication, face recognition (FR) technology has been applied. One of the main problems of applying FR technology is that the systems are especially vulnerable to attacks with spoofing faces (e.g., 2D pictures). To defend against these attacks and to enhance the reliability of FR systems, many anti-spoofing approaches have been developed recently. In this paper, we propose a method for face liveness detection using the effect of defocus. From two images sequentially taken at different focuses, three features, focus, power histogram and gradient location and orientation histogram (GLOH), are extracted. Afterwards, we detect forged faces through a feature-level fusion approach. For reliable performance verification, we develop two databases with a handheld digital camera and a webcam. The proposed method achieves a 3.29% half total error rate (HTER) at a given depth of field (DoF) and can be extended to camera-equipped devices, like smartphones. PMID:25594594
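A crude sketch of the focus feature alone (one of the paper's three features; the sharpness measure, synthetic images, and threshold here are assumed, and this global heuristic is far simpler than the paper's classifier): variance of the Laplacian as a sharpness measure, compared between the two differently focused shots.

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(img):
    # img: 2-D grayscale array; variance of the Laplacian as sharpness
    return laplace(img.astype(float)).var()

def liveness_hint(img_a, img_b, ratio_thresh=1.5):
    # Assumed heuristic: a 3-D face changes sharpness between the two focus
    # settings much more than a flat printed photo does.
    m1, m2 = focus_measure(img_a), focus_measure(img_b)
    ratio = max(m1, m2) / max(min(m1, m2), 1e-12)
    return "likely live (3-D)" if ratio > ratio_thresh else "possibly spoofed (flat)"

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))                                  # stand-in "in focus" shot
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3  # crude defocus
print(liveness_hint(sharp, blurred))
```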
Reliability-Productivity Curve, a Tool for Adaptation Measures Identification
NASA Astrophysics Data System (ADS)
Chávez-Jiménez, A.; Granados, A.; Garrote, L. M.
2015-12-01
Due to climate change effects, water scarcity problems are expected to intensify in several regions. These problems will impact negatively on low-priority water demands, since these will be reduced in favor of high-priority ones. An example would be the reduction of agricultural water resources in favor of urban ones. It is therefore important to evaluate adaptation measures for better water resources management. An important tool to face this challenge is the economic valuation of the water demands' impact within a water resources system. In agriculture, this valuation is usually performed through the evaluation of water productivity. The water productivity evaluation requires detailed information regarding the different crops, such as the applied technology, the management of agricultural inputs, and the water availability. This is a restriction for an evaluation at basin scale due to the difficulty of gathering this level of detailed information. Besides, only water availability is usually taken into account, and not the timing of water delivery (i.e., water resources reliability). Water resources reliability is one of the most important variables in water resources management. This research proposes a methodology to determine agricultural water productivity at basin scale, using as variables the crop information, crop prices, water resources availability, and water resources reliability. This methodology would allow identifying general water resources adaptation measures, providing the basis for further detailed studies in critical regions.
NASA Astrophysics Data System (ADS)
Long, Kim Chenming
Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
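None of the cited algorithms is reproduced here, but the primitive they all share is the Pareto-dominance filter used to maintain an archive of non-dominated designs; a minimal version for a bicriteria minimization such as (retrofit cost, unreliability):

```python
import numpy as np

def non_dominated(points):
    """Return the rows not dominated by any other row (both objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is no worse in every objective
        # and strictly better in at least one
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

designs = [(10.0, 0.020), (12.0, 0.012), (11.0, 0.025), (15.0, 0.011)]
print(non_dominated(designs))   # (11.0, 0.025) is dominated by (10.0, 0.020)
```

Algorithms like SPEA, NSGA II, and the proposed TSEA differ in how they generate candidates and preserve diversity along the front, not in this dominance test itself.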
National Space Transportation System (NSTS) technology needs
NASA Technical Reports Server (NTRS)
Winterhalter, David L.; Ulrich, Kimberly K.
1990-01-01
The National Space Transportation System (NSTS) is one of the Nation's most valuable resources, providing manned transportation to and from space in support of payloads and scientific research. The NSTS program is currently faced with the problem of hardware obsolescence, which could result in unacceptable schedule and cost impacts to the flight program. Obsolescence problems occur because certain components are no longer being manufactured or repair turnaround time is excessive. In order to achieve a long-term, reliable transportation system that can support manned access to space through 2010 and beyond, NASA must develop a strategic plan for a phased implementation of enhancements which will satisfy this long-term goal. The NSTS program has initiated the Assured Shuttle Availability (ASA) project with the following objectives: eliminate hardware obsolescence in critical areas, increase reliability and safety of the vehicle, decrease operational costs and turnaround time, and improve operational capability. The strategy for ASA will be to first meet the mandatory needs - keep the Shuttle flying. Non-mandatory changes that will improve operational capability and enhance performance will then be considered if funding is adequate. Upgrade packages should be developed to install within designated inspection periods, grouped in a systematic approach to reduce cost and schedule impacts, and allow the capability to provide a Block 2 Shuttle (Phase 3).
Simulation of floods caused by overloaded sewer systems: extensions of shallow-water equations
NASA Astrophysics Data System (ADS)
Hilden, Michael
2005-03-01
The outflow of water from a manhole onto a street is a typical flow problem within the simulation of floods in urban areas that are caused by overloaded sewer systems in the event of heavy rains. The reliable assessment of the flood risk for the connected houses requires accurate simulations of the water flow processes in the sewer system and in the street. The Navier-Stokes equations (NSEs) describe the free surface flow of the fluid water accurately, but since their numerical solution requires high CPU times and much memory, their application is not practical. However, their solutions for selected flow problems are applied as reference states to assess the results of other model approaches. The classical shallow-water equations (SWEs) require only fractions (factor 1/100) of the NSEs' computational effort. They assume hydrostatic pressure distribution, depth-averaged horizontal velocities and neglect vertical velocities. These shallow-water assumptions are not fulfilled for the outflow of water from a manhole onto the street. Accordingly, calculations show differences between NSEs and SWEs solutions. The SWEs are extended in order to assess the flood risks in urban areas reliably within applicable computational efforts. Separating vortex regions from the main flow and approximating vertical velocities to involve their contributions into a pressure correction yield suitable results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Touati, Said; Chennai, Salim; Souli, Aissa
The increased requirements on supervision, control, and performance in modern power systems make power quality monitoring a common practice for utilities. Large databases are created, and automatic processing of the data is required for fast and effective use of the available information. The aim of the work presented in this paper is the development of tools for the analysis of power quality monitoring data, in particular measurements of voltages and currents at various levels of electrical power distribution. The study is extended to evaluate the reliability of the electrical system in a nuclear plant. Power quality is a measure of how well a system supports reliable operation of its loads. A power disturbance or event can involve voltage, current, or frequency. Power disturbances can originate in consumer power systems, consumer loads, or the utility. The effect of power quality problems is the loss of power supply, leading to severe damage to equipment. We therefore try to track and improve system reliability. The assessment focuses on the impact of short circuits on the system, harmonic distortion, power factor improvement, and the effects of transient disturbances on the electrical system during motor starting and power system fault conditions. We focus also on the review of the electrical system design against the Nuclear Directorate Safety Assessment principles, including those extended after the Fukushima nuclear accident. The simplified configuration of the required system can be extended from this simple scheme. To achieve these studies, we have used a demo ETAP power station software for several simulations. (authors)
Second Conference on NDE for Aerospace Requirements
NASA Technical Reports Server (NTRS)
Woodis, Kenneth W. (Compiler); Bryson, Craig C. (Compiler); Workman, Gary L. (Compiler)
1990-01-01
Nondestructive evaluation and inspection procedures must improve rapidly in order to keep pace with corresponding advances being made in aerospace materials and systems. In response to this need, the 1989 conference was organized to provide a forum for discussion between the materials scientists, systems designers, and NDE engineers who produce current and future aerospace systems. It is anticipated that problems in current systems can be resolved more quickly and that new materials and structures can be designed and manufactured in such a way as to be more easily inspected and to perform reliably over the life cycle of the system.
A Scalable and Robust Multi-Agent Approach to Distributed Optimization
NASA Technical Reports Server (NTRS)
Tumer, Kagan
2005-01-01
Modularizing a large optimization problem so that the solutions to the subproblems provide a good overall solution is a challenging problem. In this paper we present a multi-agent approach to this problem based on aligning the agent objectives with the system objectives, obviating the need to impose external mechanisms to achieve collaboration among the agents. This approach naturally addresses scaling and robustness issues by ensuring that the agents do not rely on the reliable operation of other agents. We test this approach on the difficult distributed optimization problem of imperfect device subset selection [Challet and Johnson, 2002]. In this problem, there are n devices, each of which has a "distortion", and the task is to find the subset of those n devices that minimizes the average distortion. Our results show that in large systems (1000 agents) the proposed approach provides improvements of over an order of magnitude over both traditional optimization methods and traditional multi-agent methods. Furthermore, the results show that even in extreme cases of agent failures (i.e., half the agents fail midway through the simulation) the system remains coordinated and still outperforms a failure-free and centralized optimization algorithm.
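For scale, the underlying combinatorial problem can be stated in a few lines (this follows the usual reading of the Challet-Johnson setup, in which a subset's distortion is the magnitude of the mean of its devices' signed errors, so that errors can cancel); exhaustive search is feasible only for small n, which is what motivates distributed approaches:

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)
eps = rng.uniform(-1.0, 1.0, 12)            # device errors (assumed distribution)

best_subset, best_dist = None, np.inf
for r in range(1, len(eps) + 1):
    for s in itertools.combinations(range(len(eps)), r):
        dist = abs(np.mean(eps[list(s)]))   # distortion of the combined subset
        if dist < best_dist:
            best_subset, best_dist = s, dist

print(f"best single device distortion: {np.min(np.abs(eps)):.4f}")
print(f"best subset {best_subset}: distortion {best_dist:.2e}")
```

With 12 devices there are already 4095 subsets; at the paper's scale of 1000 devices the search space is astronomically large, which is the regime where agent-based decomposition pays off.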
2012-09-01
project. Thanks to Arijit Das for advice and assistance over the course of my program. Thanks to Brian Steckler and Carl Prince for their assistance in...reliable. Wireless architectural and interoperability problems can include (Bell, Jung, & Krishnakumar, 2010; Nguyen, Waeselnck, & Riviere, 2008...usually when a new mission was created and then a user tried to join the mission shortly after. This appears to be a synchronization issue between the
2010-06-11
capable of two-dimensional position information; they only provided latitude and longitude. This was not a significant problem for surface vessels...reliable three-dimensional navigation capable of providing continuous latitude, longitude and altitude information. Additionally, the Air Force's system...upgrade initiatives for both AWACS and JSTARS airframes, consider the DRAGON program a model to modernize other Triad aircraft to comply with CNS/ATM
Hammar, Tora; Ohlson, Mats; Hanson, Elizabeth; Petersson, Göran
2015-01-01
When the Swedish pharmacy market was re-regulated in 2009, Sweden moved from one state-owned pharmacy chain to several private pharmacy companies, and four new dispensing systems emerged to replace the one system that had previously been used at all Swedish pharmacies for more than 20 years. The aim of this case study was to explore the implementation of the new information systems for dispensing at pharmacies. The vendors of the four dispensing systems in Sweden were interviewed, and a questionnaire was sent to the managers of the pharmacy companies. In addition, a questionnaire was sent to 350 pharmacists who used the systems for dispensing prescriptions. The implementation of the four new dispensing systems followed a strict time frame set by political decisions, involved actors completely new to the market, lacked clear regulation and standards for functionality and quality assurance, was complex, and resulted in variations in quality. More than half of the pharmacists (58%) perceived their current dispensing system as supporting safe dispensing of medications, 26% were neutral, and 15% did not perceive it to support safe dispensing. Most pharmacists (80%) had experienced problems with their dispensing system during the previous month. The problems the pharmacists experienced included reliability issues, usability issues, and missing functionality. In this case study exploring the implementation of new information systems for dispensing prescriptions at pharmacies in Sweden, weaknesses related to reliability, functionality and usability were identified and could affect patient safety. The weaknesses of the systems seem to result from the limited time for development and implementation, the lack of comprehensive and evidence-based requirements for dispensing systems, and the unclear distribution of quality assurance responsibilities among involved stakeholders. Copyright © 2015 Elsevier Inc. All rights reserved.
Gunnarsson, U; Johansson, M; Strigård, K
2011-08-01
The decrease in recurrence rates in ventral hernia surgery has led to a redirection of focus towards other important patient-related endpoints. One such endpoint is abdominal wall function. The aim of the present study was to evaluate the reliability and external validity of abdominal wall strength measurement using the Biodex System-4 with a back-abdomen unit. Ten healthy volunteers and ten patients with ventral hernias exceeding 10 cm were recruited. Test-retest reliability, both with and without a girdle, was evaluated by comparison of measurements at two test occasions 1 week apart. Reliability was calculated by the intraclass correlation coefficient (ICC) method. Validity was evaluated by correlation with the well-established International Physical Activity Questionnaire (IPAQ) and a self-assessment of abdominal wall strength. One person in the healthy group was excluded after the first test due to neck problems following minor trauma. The reliability was excellent (>0.75), with ICC values between 0.92 and 0.97 for the different modalities tested. No differences were seen between testing with and without a girdle. Validity was also excellent, both when calculated as correlation to self-assessment of abdominal wall strength and to IPAQ, giving Kendall tau values of 0.51 and 0.47, respectively, with corresponding P values of 0.002 and 0.004. Measurement of abdominal muscle function using the Biodex System-4 is a reliable and valid method to assess this important patient-related endpoint. Further investigations will be made to explore the potential of this technique in the evaluation of the results of ventral hernia surgery, and to compare muscle function after different abdominal wall reconstruction techniques.
Shen, Minxue; Hu, Ming; Sun, Zhenqiu
2017-01-01
Objectives To develop and validate brief scales to measure common emotional and behavioural problems among adolescents in the examination-oriented education system and collectivistic culture of China. Setting Middle schools in Hunan province. Participants 5442 middle school students aged 11–19 years were sampled. 4727 valid questionnaires were collected and used for validation of the scales. The final sample included 2408 boys and 2319 girls. Primary and secondary outcome measures The tools were assessed by the item response theory, classical test theory (reliability and construct validity) and differential item functioning. Results Four scales to measure anxiety, depression, study problem and sociality problem were established. Exploratory factor analysis showed that each scale had two solutions. Confirmatory factor analysis showed acceptable to good model fit for each scale. Internal consistency and test–retest reliability of all scales were above 0.7. Item response theory showed that all items had acceptable discrimination parameters and most items had appropriate difficulty parameters. 10 items demonstrated differential item functioning with respect to gender. Conclusions Four brief scales were developed and validated among adolescents in middle schools of China. The scales have good psychometric properties with minor differential item functioning. They can be used in middle school settings, and will help school officials to assess the students’ emotional/behavioural problems. PMID:28062469
Optimal Control of Distributed Energy Resources using Model Predictive Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage and demand response can be used to complement fossil fueled generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
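A compact sketch of the receding-horizon mechanics (the costs, limits, and forecasts below are invented; the paper's formulation additionally carries frequency and battery-life terms): solve a look-ahead dispatch LP for diesel and storage against a wind forecast, apply only the first decision, and roll forward.

```python
import numpy as np
from scipy.optimize import linprog

H, dt = 6, 1.0                        # horizon steps, step length (h)
g_max, b_max, E_max = 80.0, 30.0, 60.0
c_fuel, c_batt = 1.0, 0.05            # diesel fuel cost, battery-use penalty

def mpc_step(E0, load_fc, wind_fc):
    # decision vector x = [diesel g (H), discharge (H), charge (H)]
    c = np.r_[c_fuel * np.ones(H), c_batt * np.ones(2 * H)]
    A_eq = np.c_[np.eye(H), np.eye(H), -np.eye(H)]   # g + dis - ch = load - wind
    b_eq = load_fc - wind_fc                         # surplus wind charges the battery
    L = np.tril(np.ones((H, H))) * dt                # cumulative energy drawn
    A_ub = np.r_[np.c_[np.zeros((H, H)),  L, -L],    # keep E0 - cum >= 0
                 np.c_[np.zeros((H, H)), -L,  L]]    # keep E0 - cum <= E_max
    b_ub = np.r_[E0 * np.ones(H), (E_max - E0) * np.ones(H)]
    bounds = [(0, g_max)] * H + [(0, b_max)] * (2 * H)
    sol = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert sol.success
    g, dis, ch = sol.x[:H], sol.x[H:2*H], sol.x[2*H:]
    return g[0], dis[0] - ch[0]                      # apply the first move only

E = 30.0
load = np.array([60, 65, 70, 68, 66, 64, 62, 60.0])
wind = np.array([20, 85, 10,  5, 90, 40, 25, 15.0])  # surplus at two steps
for t in range(len(load) - H):
    g0, b0 = mpc_step(E, load[t:t+H], wind[t:t+H])
    E -= b0 * dt                                     # battery energy bookkeeping
    print(f"t={t}: diesel {g0:5.1f} kW, battery {b0:+5.1f} kW, E {E:5.1f} kWh")
```

The closed loop comes from re-solving at every step with updated state and forecasts, which is what lets MPC absorb forecast errors that an open-loop look-ahead dispatch cannot.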
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable for handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are considered.
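For orientation, the classical quantity such models build on is the mean (average) inactivity time of a lifetime X with distribution function F (the paper's mixture construction is its own extension of this):

```latex
\mathrm{MIT}(t) \;=\; \mathbb{E}\bigl[\, t - X \,\big|\, X \le t \,\bigr]
\;=\; \frac{\int_{0}^{t} F(u)\,\mathrm{d}u}{F(t)}, \qquad t > 0,\ F(t) > 0,
```

i.e., the expected time elapsed since failure, given that the unit is found failed at inspection time t.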
Hand assessment in older adults with musculoskeletal hand problems: a reliability study.
Myers, Helen L; Thomas, Elaine; Hay, Elaine M; Dziedzic, Krysia S
2011-01-07
Musculoskeletal hand pain is common in the general population. This study aims to investigate the inter- and intra-observer reliability of two trained observers conducting a simple clinical interview and physical examination for hand problems in older adults. The reliability of applying the American College of Rheumatology (ACR) criteria for hand osteoarthritis to community-dwelling older adults will also be investigated. Fifty-five participants aged 50 years and over with a current self-reported hand problem and registered with one general practice were recruited from a previous health questionnaire study. Participants underwent a standardised, structured clinical interview and physical examination by two independent trained observers and again by one of these observers a month later. Agreement beyond chance was summarised using Kappa statistics and intra-class correlation coefficients. Median values for inter- and intra-observer reliability for clinical interview questions were found to be "substantial" and "moderate" respectively [median agreement beyond chance (Kappa) was 0.75 (range: -0.03, 0.93) for inter-observer ratings and 0.57 (range: -0.02, 1.00) for intra-observer ratings]. Inter- and intra-observer reliability for physical examination items was variable, with good reliability observed for some items, such as grip and pinch strength, and poor reliability observed for others, notably assessment of altered sensation, pain on resisted movement and judgements based on observation and palpation of individual features at single joints, such as bony enlargement, nodes and swelling. Moderate agreement was observed both between and within observers when applying the ACR criteria for hand osteoarthritis. Standardised, structured clinical interview is reliable for taking a history in community-dwelling older adults with self reported hand problems. Agreement between and within observers for physical examination items is variable. Low Kappa values may have resulted, in part, from a low prevalence of clinical signs and symptoms in the study participants. The decision to use clinical interview and hand assessment variables in clinical practice or further research in primary care should include consideration of clinical applicability and training alongside reliability. Further investigation is required to determine the relationship between these clinical questions and assessments and the clinical course of hand pain and hand problems in community-dwelling older adults.
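For reference, the chance-corrected agreement statistic quoted throughout (Kappa) can be computed in a few lines; the ratings below are synthetic, not study data.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

obs1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # e.g., observer 1: node present/absent
obs2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]   # observer 2, same ten joints
print(f"kappa = {cohen_kappa(obs1, obs2):.2f}")
```

The low-prevalence caveat in the abstract is visible in this formula: when one category dominates, chance agreement pe is high and kappa is depressed even for raters who rarely disagree.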
NASA Astrophysics Data System (ADS)
Goh, A. T. C.; Kulhawy, F. H.
2005-05-01
In urban environments, one major concern with deep excavations in soft clay is the potentially large ground deformations in and around the excavation. Excessive movements can damage adjacent buildings and utilities. There are many uncertainties associated with the calculation of the ultimate or serviceability performance of a braced excavation system. These include the variabilities of the loadings, geotechnical soil properties, and engineering and geometrical properties of the wall. A risk-based approach to serviceability performance failure is necessary to incorporate systematically the uncertainties associated with the various design parameters. This paper demonstrates the use of an integrated neural network-reliability method to assess the risk of serviceability failure through the calculation of the reliability index. By first performing a series of parametric studies using the finite element method and then approximating the non-linear limit state surface (the boundary separating the safe and failure domains) through a neural network model, the reliability index can be determined with the aid of a spreadsheet. Two illustrative examples are presented to show how the serviceability performance for braced excavation problems can be assessed using the reliability index.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2001-01-01
This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
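As a sketch of the Monte Carlo side of such an analysis: sample the random inputs, evaluate a response surface for buckling strength, and count failures. All distributions and coefficients below are illustrative placeholders, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
load = rng.normal(1.00, 0.15, n)   # normalized axial load, CoV = 15%
e1   = rng.normal(1.00, 0.05, n)   # normalized fiber-direction modulus

# Toy second-order response surface standing in for the fitted one
strength = 1.3 + 0.8 * (e1 - 1.0) - 0.2 * (e1 - 1.0) ** 2

pf = np.mean(load > strength)      # estimated probability of buckling failure
print(pf)
```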
NASA Astrophysics Data System (ADS)
Dehbozorgi, Mohammad Reza
2000-10-01
Improvements in power system reliability have always been of interest to both power companies and customers. Since there are no sizable electrical energy storage elements in electrical power systems, the generated power should match the load demand at any given time. Failure to meet this balance may cause severe system problems, including loss of generation and system blackouts. This thesis proposes a methodology which can respond to either loss of generation or loss of load. It is based on switching of electric water heaters using power system frequency as the controlling signal. The proposed methodology raises, and the thesis has addressed, the following associated problems. The controller must be interfaced with the existing thermostat control. When loads must be switched on, the water in the tank should not be overheated. Rapid switching of blocks of load, or chattering, has been considered. The contributions of the thesis are: (A) A system has been proposed which makes a significant portion of the distributed loads connected to a power system behave in a predetermined manner to improve the power system response during disturbances. (B) The action of the proposed system is transparent to the customers. (C) The thesis proposes a simple analysis for determining the amount of such loads which might be switched and relates this amount to the size of the disturbances which can occur in the utility. (D) The proposed system acts without any formal communication links, solely using the embedded information present system-wide. (E) The methodology of the thesis proposes switching of water heater loads based on a simple, localized frequency set-point controller. The thesis has identified the consequent problem of rapid switching of distributed loads, which is referred to as chattering. (F) Two approaches have been proposed to reduce chattering to tolerable levels. (G) A frequency controller has been designed and built according to the specifications required to switch electric water heater loads in response to power system disturbances. (H) A cost analysis for building and installing the distributed frequency controller has been carried out. (I) The proposed equipment and methodology have been implemented and tested successfully. (Abstract shortened by UMI.)
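A minimal sketch of a localized frequency set-point rule of the kind described, with a deadband acting as hysteresis so the heater is not rapidly toggled; the thresholds and the 60 Hz system are assumptions:

```python
def water_heater_command(freq_hz, heating, f_low=59.90, f_high=60.05):
    """Return the next heater state given grid frequency and current state.
    Below f_low: shed load (under-frequency implies a generation deficit).
    Above f_high: add load (absorb surplus generation).
    Inside the deadband: hold the current state, which limits chattering."""
    if freq_hz < f_low:
        return False
    if freq_hz > f_high:
        return True
    return heating
```

In the scheme described, the thermostat remains the outer authority: the frequency command would be combined with the thermostat signal so the tank is never overheated.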
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. Improving these characteristics decreases the likelihood of code unreliability and of user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
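For a flavor of the Von Neumann analysis used: each Fourier mode is multiplied per step by an amplification factor G(theta), and the scheme is stable when |G| <= 1 for all modes. A sketch for the explicit FTCS discretization of 1-D diffusion, a textbook case rather than one of the paper's systems:

```python
import numpy as np

def max_amplification(r, n=1000):
    """Max |G| over all Fourier modes for FTCS applied to u_t = nu * u_xx,
    where r = nu * dt / dx**2 and G(theta) = 1 - 4 r sin^2(theta / 2)."""
    theta = np.linspace(0.0, np.pi, n)
    return np.abs(1.0 - 4.0 * r * np.sin(theta / 2.0) ** 2).max()

print(max_amplification(0.4))  # <= 1: stable (r <= 1/2)
print(max_amplification(0.6))  # > 1: an unstable mode exists
```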
Berman, Rebecca L; Iris, Madelyn; Conrad, Kendon J; Robinson, Carrie
2018-01-01
Older adults taking multiple prescription and nonprescription drugs are at risk for medication use problems, yet there are few brief, self-administered screening tools designed specifically for them. The study objective was to develop and validate a patient-centered screener for community-dwelling older adults. In phase 1, a convenience sample of 57 stakeholders (older adults, pharmacists, nurses, and physicians) participated in concept mapping, using Concept System® Global MAX™, to identify items for a questionnaire. In phase 2, a 40-item questionnaire was tested with a convenience sample of 377 adults and a 24-item version was tested with 306 older adults, aged 55 and older, using Rasch methodology. In phase 3, stakeholder focus groups provided feedback on the format of questionnaire materials and recommended strategies for addressing problems. The concept map contained 72 statements organized into 6 conceptual clusters or domains. The 24-item screener was unidimensional. Cronbach's alpha was .87, person reliability was acceptable (.74), and item reliability was high (.96). The MedUseQ is a validated, patient-centered tool targeting older adults that can be used to assess a wide range of medication use problems in clinical and community settings and to identify areas for education, intervention, or further assessment.
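The internal-consistency figure reported (Cronbach's alpha of .87) is computable directly from the respondent-by-item score matrix; a minimal sketch on synthetic data of the same shape, not the MedUseQ data:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(306, 1))                 # latent medication-use risk
items = trait + 0.8 * rng.normal(size=(306, 24))  # 24 noisy items
print(cronbach_alpha(items))
```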
Reliability Evaluation for Clustered WSNs under Malware Propagation
Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying
2016-01-01
We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934
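The parallel-serial-parallel evaluation reduces to multiplying survival (series) and failure (parallel) probabilities; a minimal sketch with invented node counts and an exponential MTTF-based node reliability, not the paper's parameters:

```python
import numpy as np

def series(rs):    # every stage must work
    return float(np.prod(rs))

def parallel(rs):  # at least one branch must work
    return 1.0 - float(np.prod([1.0 - r for r in rs]))

mttf, mission = 5.0, 1.0                 # arbitrary time units
r_node = np.exp(-mission / mttf)         # node reliability at mission time

cluster = parallel([r_node] * 4)         # 4 redundant sensor nodes per cluster
route   = series([cluster] * 3)          # 3 clusters in series along a route
wsn     = parallel([route] * 2)          # 2 redundant routes to the sink
print(wsn)
```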
Methods of increasing efficiency and maintainability of pipeline systems
NASA Astrophysics Data System (ADS)
Ivanov, V. A.; Sokolov, S. M.; Ogudova, E. V.
2018-05-01
This study addresses the maintenance of pipeline transportation systems. The article identifies two classes of technical-and-economic indices used to select an optimal pipeline transportation system structure, then defines various system maintenance strategies and the criteria for selecting among them. These strategies prove insufficiently effective when maintenance intervals are set to non-optimal values. This problem can be solved by running an adaptive maintenance system, which includes a pipeline transportation system reliability improvement algorithm and, in particular, a computer model of equipment degradation. In conclusion, three model-building approaches for determining the optimal duration of verification inspections of technical systems are considered.
Maddali Bongi, S; Del Rosso, A; Miniati, I; Galluccio, F; Landi, G; Tai, G; Matucci-Cerinic, M
2012-09-01
In systemic sclerosis (SSc), mouth and face involvement leads to problems in oral health-related quality of life (OHRQoL). The Mouth Handicap in Systemic Sclerosis scale (MHISS) is a 12-item questionnaire specifically quantifying mouth disability in SSc, organized in 3 subscales. Our aim was to validate the Italian version of the MHISS by assessing its test-retest reliability and internal and external consistency in Italian SSc patients. Forty SSc patients (7 dSSc, 33 lSSc; age and disease duration: 57.27 ± 11.41 and 9.4 ± 4.4 years; 22 with sicca syndrome) were evaluated with the MHISS. The MHISS was translated following a forward-backward translation procedure, with independent translations and counter-translation. Test-retest reliability was evaluated by comparing the results of two administrations with the intraclass correlation coefficient (ICC). Internal consistency was assessed by Cronbach's α and external consistency by comparison with mouth opening. The MHISS has good test-retest reliability (ICC: 0.93) and internal consistency (Cronbach's α: 0.99). Good external consistency was confirmed by correlation with mouth opening (rho: -0.3869, p: 0.0137). The total MHISS score was 17.65 ± 5.20, with a subscale 1 (reduced mouth opening) score of 6.60 ± 2.85 and subscale 2 (sicca syndrome) and subscale 3 (aesthetic concerns) scores of 7.82 ± 2.59 and 3.22 ± 1.14, respectively. Total and subscale 2 scores are higher in dSSc than in lSSc. This result may be due to the higher prevalence of sicca syndrome in dSSc than in lSSc (p = 0.0109). Our results support the validity and reliability of the MHISS, which specifically measures SSc OHRQoL, in Italian SSc patients.
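Test-retest reliability of this kind is typically summarized with ICC(2,1), derived from a two-way ANOVA decomposition of the subject-by-administration score matrix; a minimal sketch, not tied to the study's data:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    y: (subjects x administrations) matrix of scores."""
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
true_score = rng.normal(17.65, 5.2, size=(40, 1))        # invented MHISS-like scores
ratings = true_score + rng.normal(0, 1.2, size=(40, 2))  # two administrations
print(icc_2_1(ratings))
```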
CRYOGENIC UPPER STAGE SYSTEM SAFETY
NASA Technical Reports Server (NTRS)
Smith, R. Kenneth; French, James V.; LaRue, Peter F.; Taylor, James L.; Pollard, Kathy (Technical Monitor)
2005-01-01
NASA's Exploration Initiative will require development of many new systems or systems of systems. One specific example is that safe, affordable, and reliable upper stage systems to place cargo and crew in stable low earth orbit are urgently required. In this paper, we examine the failure history of previous upper stages with liquid oxygen (LOX)/liquid hydrogen (LH2) propulsion systems. Launch data from 1964 until midyear 2005 are analyzed and presented. This data analysis covers upper stage systems from the Ariane, Centaur, H-IIA, Saturn, and Atlas in addition to other vehicles. Upper stage propulsion system elements have the highest impact on reliability. This paper discusses failure occurrence in all aspects of the operational phases (i.e., initial burn, coast, restarts) and trends in failure rates over time. In an effort to understand the likelihood of future failures in flight, we present timelines of engine system failures relevant to initial flight histories. Some evidence suggests that propulsion system failures as a result of design problems occur shortly after initial development of the propulsion system, whereas failures because of manufacturing or assembly processing errors may occur during any phase of the system build process. This paper also explores the detectability of historical failures. Observations from this review are used to ascertain the potential for increased upper stage reliability given investments in integrated system health management. Based on a clear understanding of the failure and success history of previous efforts by multiple space hardware development groups, the paper will investigate potential improvements that can be realized through application of system safety principles.
Reduction of Dynamic Loads in Mine Lifting Installations
NASA Astrophysics Data System (ADS)
Kuznetsov, N. K.; Eliseev, S. V.; Perelygina, A. Yu
2018-01-01
The article addresses the problem of reducing the dynamic loads that arise in transitional operating modes of mine lifting installations, which lead to heavy oscillating motions of lifting vessels and decrease the efficiency and reliability of operation. Known methods and means of reducing dynamic loads and oscillating motions in similar equipment are analysed. It is shown that an approach based on the concept of the inverse problems of dynamics can be an effective method for solving this problem. The article describes the design model of a one-ended lifting installation in the form of a two-mass oscillation system, in which the inertial elements are the mass of the lifting vessel and the reduced mass of the engine, reducer, drum and pulley. A simplified mathematical model of this system is given, together with the results of a study of the efficiency of an active method of reducing the dynamic loads of the lifting installation based on the concept of the inverse problems of dynamics.
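To make the two-mass design model concrete, the sketch below integrates such a model under a step in drive force; masses, stiffness, damping and the force level are illustrative values, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 500.0, 200.0     # reduced drive mass; lifting vessel mass [kg]
k, c = 4.0e4, 300.0       # rope stiffness [N/m] and damping [N*s/m]
g = 9.81

def rhs(t, y):
    x1, v1, x2, v2 = y
    f_rope = k * (x1 - x2) + c * (v1 - v2)
    f_drive = 2.5e3 if t > 0.5 else 0.0    # force step: a crude stand-in
    return [v1, (f_drive - f_rope) / m1,   # for a transitional operating mode
            v2, (f_rope - m2 * g) / m2]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
vessel_motion = sol.y[2]   # oscillating motion of the lifting vessel
```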
NASA Technical Reports Server (NTRS)
Cohn, S. E.
1982-01-01
Numerical weather prediction (NWP) is an initial-value problem for a system of nonlinear differential equations, in which initial values are known incompletely and inaccurately. Observational data available at the initial time must therefore be supplemented by data available prior to the initial time, a problem known as meteorological data assimilation. A further complication in NWP is that solutions of the governing equations evolve on two different time scales, a fast one and a slow one, whereas fast-scale motions in the atmosphere are not reliably observed. This leads to the so-called initialization problem: initial values must be constrained to result in a slowly evolving forecast. The theory of estimation of stochastic dynamic systems provides a natural approach to such problems. For linear stochastic dynamic models, the Kalman-Bucy (KB) sequential filter is the optimal data assimilation method, and the optimal combined data assimilation-initialization method is a modified version of the KB filter.
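A minimal sketch of one forecast/analysis cycle of the discrete Kalman filter referred to above, with generic linear forecast and observation models and no NWP specifics:

```python
import numpy as np

def kalman_cycle(x, P, z, F, H, Q, R):
    """One forecast/analysis cycle: propagate the state estimate and its
    error covariance, then blend in observation z by its relative accuracy."""
    x = F @ x                        # forecast state
    P = F @ P @ F.T + Q              # forecast error covariance
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)          # analysis state
    P = (np.eye(len(x)) - K @ H) @ P # analysis error covariance
    return x, P
```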
Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.
2003-01-01
The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability-based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program on statistical fast fracture reliability and monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to insure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
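The fast-fracture evaluation in a weakest-link code of this kind reduces, for a two-parameter Weibull volume-flaw model, to summing risk-of-rupture contributions over subelements; a minimal sketch with invented stresses, volumes and Weibull parameters:

```python
import numpy as np

def weakest_link_pf(stress, volume, sigma0, m):
    """Component failure probability: subelement survival probabilities
    multiply, so risks of rupture add. sigma0: Weibull scale parameter
    (per unit volume), m: Weibull modulus."""
    risk = np.sum(volume * (np.maximum(stress, 0.0) / sigma0) ** m)
    return 1.0 - np.exp(-risk)

stress = np.array([180.0, 220.0, 150.0])   # subelement stresses [MPa]
volume = np.array([2.0, 1.0, 3.0])         # subelement volumes [mm^3]
print(weakest_link_pf(stress, volume, sigma0=300.0, m=10.0))
```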
NASA Technical Reports Server (NTRS)
White, Mark; Cooper, Mark; Johnston, Allan
2011-01-01
Reliability of advanced CMOS technology is a complex problem that is usually addressed from the standpoint of specific failure mechanisms rather than overall reliability of a finished microcircuit. A detailed treatment of CMOS reliability in scaled devices can be found in Ref. 1; it should be consulted for a more thorough discussion. The present document provides a more concise treatment of the scaled CMOS reliability problem, emphasizing differences in the recommended approach for these advanced devices compared to that of less aggressively scaled devices. It includes specific recommendations that can be used by flight projects that use advanced CMOS. The primary emphasis is on conventional memories, microprocessors, and related devices.
A novel symbiotic organisms search algorithm for congestion management in deregulated environment
NASA Astrophysics Data System (ADS)
Verma, Sumit; Saha, Subhodip; Mukherjee, V.
2017-01-01
In today's competitive electricity market, managing transmission congestion in deregulated power systems has created challenges for independent system operators to operate the transmission lines reliably within their limits. This paper proposes a new meta-heuristic algorithm, called the symbiotic organisms search (SOS) algorithm, for the congestion management (CM) problem in a pool-based electricity market by real power rescheduling of generators. Inspired by interactions among organisms in an ecosystem, the SOS algorithm is a recent population-based algorithm which does not require any algorithm-specific control parameters, unlike other algorithms. Various security constraints such as load bus voltage and line loading are taken into account while dealing with the CM problem. In this paper, the proposed SOS algorithm is applied to modified IEEE 30- and 57-bus test power systems for the solution of the CM problem. The results thus obtained are compared to those reported in the recent state-of-the-art literature. The efficacy of the proposed SOS algorithm for obtaining higher quality solutions is also established.
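For orientation, a sketch of the mutualism phase at the heart of SOS, in minimization form; the population size, benefit factors and toy objective are illustrative, and the real CM problem would add the security constraints via penalties:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutualism_phase(pop, fitness, best):
    """One SOS mutualism pass: paired organisms move toward the best
    organism relative to their 'mutual vector', keeping improvements only."""
    n, d = pop.shape
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        mutual = (pop[i] + pop[j]) / 2.0
        bf1, bf2 = rng.integers(1, 3), rng.integers(1, 3)  # benefit factors: 1 or 2
        cand_i = pop[i] + rng.random(d) * (best - mutual * bf1)
        cand_j = pop[j] + rng.random(d) * (best - mutual * bf2)
        if fitness(cand_i) < fitness(pop[i]):
            pop[i] = cand_i
        if fitness(cand_j) < fitness(pop[j]):
            pop[j] = cand_j
    return pop

sphere = lambda x: float(np.sum(x ** 2))   # toy objective
pop = rng.uniform(-5, 5, size=(20, 4))
for _ in range(100):
    best = pop[np.argmin([sphere(x) for x in pop])]
    pop = mutualism_phase(pop, sphere, best)
print(min(sphere(x) for x in pop))
```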
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
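Estimating such a parameter density is commonly done by Markov chain Monte Carlo; a minimal Metropolis sketch for a single Michaelis-Menten rate law with Gaussian noise and flat positivity priors, far simpler than the compartmentalized models discussed:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta, s, v):
    """Log-posterior for v = Vmax * s / (Km + s) with sigma = 0.05 noise."""
    vmax, km = theta
    if vmax <= 0.0 or km <= 0.0:
        return -np.inf
    return -0.5 * np.sum((v - vmax * s / (km + s)) ** 2) / 0.05 ** 2

def metropolis(theta0, s, v, n=20_000, step=0.05):
    theta, lp = np.asarray(theta0, float), log_post(theta0, s, v)
    chain = np.empty((n, 2))
    for i in range(n):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, s, v)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain   # samples approximate the parameter density

s = np.linspace(0.1, 5.0, 20)                     # substrate concentrations
v = 1.0 * s / (0.5 + s) + 0.05 * rng.standard_normal(20)
samples = metropolis([0.5, 0.5], s, v)
print(samples[5000:].mean(axis=0))                # posterior means after burn-in
```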
Reliable High Performance Peta- and Exa-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G
2012-04-02
As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
Engineering Infrastructures: Problems of Safety and Security in the Russian Federation
NASA Astrophysics Data System (ADS)
Makhutov, Nikolay A.; Reznikov, Dmitry O.; Petrov, Vitaly P.
Modern society cannot exist without stable and reliable engineering infrastructures (EI), whose operation is vital for any national economy. These infrastructures include energy, transportation, water and gas supply systems, telecommunication and cyber systems, etc. Their operation involves storing and processing huge amounts of information, energy and hazardous substances. Ageing infrastructures are deteriorating, with operating conditions declining from normal to emergency and catastrophic. The complexity of engineering infrastructures and their interdependence with other technical systems makes them vulnerable to emergency situations triggered by natural and manmade catastrophes or terrorist attacks.
Reliable in their failure: an analysis of healthcare reform policies in public systems.
Contandriopoulos, Damien; Brousselle, Astrid
2010-05-01
In this paper, we analyze recommendations of past governmental commissions and their implementation in Quebec as a case to discuss the obstacles that litter the road to healthcare system reform. Our analysis shows that the obstacles to tackling the healthcare system's main problems may have less to do with programmatic (what to do) than with political and governance (how to do it) questions. We then draw on neo-institutional theory to discuss the causes and effects of this situation.
NASA Technical Reports Server (NTRS)
Title, A. M.; Gillespie, B. A.; Mosher, J. W.
1982-01-01
A compact magnetograph system based on solid Fabry-Perot interferometers as the spectral isolation elements was studied. The theory of operation of several Fabry-Perot systems, the suitability of various magnetic lines, signal levels expected for different modes of operation, and the optimal detector systems were investigated. The requirements that the lack of a polarization modulator placed upon the electronic signal chain were emphasized. The PLZT modulator was chosen as a satisfactory component with both high reliability and relatively low voltage requirements. Thermal control, line centering and velocity offset problems were solved by a Fabry-Perot configuration.
Okundamiya, Michael S.; Emagbetere, Joy O.; Ogujor, Emmanuel A.
2014-01-01
The rapid growth of the mobile telecommunication sectors of many emerging countries creates a number of problems such as network congestion and poor service delivery for network operators. This results primarily from the lack of a reliable and cost-effective power solution within such regions. This study presents a comprehensive review of the underlying principles of the renewable energy technology (RET) with the objective of ensuring a reliable and cost-effective energy solution for a sustainable development in the emerging world. The grid-connected hybrid renewable energy system incorporating a power conversion and battery storage unit has been proposed based on the availability, dynamism, and technoeconomic viability of energy resources within the region. The proposed system's performance validation applied a simulation model developed in MATLAB, using a practical load data for different locations with varying climatic conditions in Nigeria. Results indicate that, apart from being environmentally friendly, the increase in the overall energy throughput of about 4 kWh/$ of the proposed system would not only improve the quality of mobile services, by making the operations of GSM base stations more reliable and cost effective, but also better the living standards of the host communities. PMID:24578673
Memorial Hermann: high reliability from board to bedside.
Shabot, M Michael; Monroe, Douglas; Inurria, Juan; Garbade, Debbi; France, Anne-Claire
2013-06-01
In 2006 the Memorial Hermann Health System (MHHS), which includes 12 hospitals, began applying principles embraced by high reliability organizations (HROs). Three factors support its HRO journey: (1) aligned organizational structure with transparent management systems and compressed reporting processes; (2) Robust Process Improvement (RPI) with high-reliability interventions; and (3) cultural establishment, sustainment, and evolution. The Quality and Safety strategic plan contains three domains, each with a specific set of measures that provide goals for performance: (1) "Clinical Excellence;" (2) "Do No Harm;" and (3) "Saving Lives," as measured by the Serious Safety Event rate. MHHS uses a uniform approach to performance improvement--RPI, which includes Six Sigma, Lean, and change management, to solve difficult safety and quality problems. The 9 acute care hospitals provide multiple opportunities to integrate high-reliability interventions and best practices across MHHS. For example, MHHS partnered with the Joint Commission Center for Transforming Healthcare in its inaugural project to establish reliable hand hygiene behaviors, which improved MHHS's average hand hygiene compliance rate from 44% to 92% currently. Soon after compliance exceeded 85% at all 12 hospitals, the average rates of central line-associated bloodstream infections and ventilator-associated pneumonias decreased to essentially zero. MHHS's size and diversity require a disciplined approach to performance improvement and systemwide achievement of measurable success. The most significant cultural change at MHHS has been the expectation for 100% compliance with evidence-based quality measures and 0% incidence of patient harm.
NASA Astrophysics Data System (ADS)
Teves, André da Costa; Lima, Cícero Ribeiro de; Passaro, Angelo; Silva, Emílio Carlos Nelli
2017-03-01
Electrostatic or capacitive accelerometers are among the highest volume microelectromechanical systems (MEMS) products nowadays. The design of such devices is a complex task, since they depend on many performance requirements, which are often conflicting. Therefore, optimization techniques are often used in the design stage of these MEMS devices. Because of problems with reliability, the technology of MEMS is not yet well established. Thus, in this work, size optimization is combined with the reliability-based design optimization (RBDO) method to improve the performance of accelerometers. To account for uncertainties in the dimensions and material properties of these devices, the first order reliability method is applied to calculate the probabilities involved in the RBDO formulation. Practical examples of bulk-type capacitive accelerometer designs are presented and discussed to evaluate the potential of the implemented RBDO solver.
Sànchez-Marrè, Miquel; Gilbert, Karina; Sojda, Rick S.; Steyer, Jean Philippe; Struss, Peter; Rodríguez-Roda, Ignasi; Voinov, A.A.; Jakeman, A.J.; Rizzoli, A.E.
2006-01-01
There are inherent open problems that arise when developing and running Intelligent Environmental Decision Support Systems (IEDSS), and several open challenges appear during their daily operation. The uncertainty of the data being processed is intrinsic to the environmental system, which is monitored by several on-line sensors and off-line data. Thus, anomalous data values at the data-gathering level, or uncertain reasoning at later levels such as diagnosis, decision support or planning, can lead the environmental process into unsafe critical operation states. At the diagnosis, decision support or planning level, spatial reasoning, temporal reasoning, or both can influence the reasoning processes undertaken by the IEDSS. Most environmental systems must take into account the spatial relationships between the target environmental area and nearby environmental areas, and the temporal relationships between the current state and past states of the environmental system, to produce accurate and reliable assertions for use in the diagnosis, decision support or planning process. Finally, a related issue is crucial: are the decisions proposed by the IEDSS really reliable and safe? How confident can we be in the quality and performance of the proposed solutions? How can we ensure a correct evaluation of the IEDSS? The main goal of this paper is to analyse these four issues, review possible approaches and techniques to cope with them, and study new trends for future research within the IEDSS field.
Chen, I-Min A.; Markowitz, Victor M.; Palaniappan, Krishna; ...
2016-04-26
Background: The exponential growth of genomic data from next-generation technologies renders traditional manual expert curation unsustainable. Many genomic systems have included community annotation tools to address the problem. Most of these systems adopted a "Wiki-based" approach to take advantage of existing wiki technologies, but encountered obstacles in issues such as usability, authorship recognition, information reliability and incentive for community participation. Results: Here, we present a different approach, relying on a tightly integrated method rather than a "Wiki-based" method, to support community annotation and user collaboration in the Integrated Microbial Genomes (IMG) system. The IMG approach allows users to use the existing IMG data warehouse and analysis tools to add gene, pathway and biosynthetic cluster annotations, to analyze/reorganize contigs, genes and functions using workspace datasets, and to share private user annotations and workspace datasets with collaborators. We show that the annotation effort using IMG can be part of the research process to overcome the user incentive and authorship recognition problems, thus fostering collaboration among domain experts. The usability and reliability issues are addressed by the integration of curated information and analysis tools in IMG, together with DOE Joint Genome Institute (JGI) expert review. Conclusion: By incorporating annotation operations into IMG, we provide an integrated environment for users to perform deeper and extended data analysis and annotation in a single system that can lead to publications and community knowledge sharing, as shown in the case studies.
NASA Astrophysics Data System (ADS)
Jin, Shan
This dissertation concerns power system expansion planning under different market mechanisms. The thesis follows a three paper format, in which each paper emphasizes a different perspective. The first paper investigates the impact of market uncertainties on a long term centralized generation expansion planning problem. The problem is modeled as a two-stage stochastic program with uncertain fuel prices and demands, which are represented as probabilistic scenario paths in a multi-period tree. Two measurements, expected cost (EC) and Conditional Value-at-Risk (CVaR), are used to minimize, respectively, the total expected cost among scenarios and the risk of incurring high costs in unfavorable scenarios. We sample paths from the scenario tree to reduce the problem scale and determine the sufficient number of scenarios by computing confidence intervals on the objective values. The second paper studies an integrated electricity supply system including generation, transmission and fuel transportation with a restructured wholesale electricity market. This integrated system expansion problem is modeled as a bi-level program in which a centralized system expansion decision is made in the upper level and the operational decisions of multiple market participants are made in the lower level. The difficulty of solving a bi-level programming problem to global optimality is discussed and three problem relaxations obtained by reformulation are explored. The third paper solves a more realistic market-based generation and transmission expansion problem. It focuses on interactions among a centralized transmission expansion decision and decentralized generation expansion decisions. It allows each generator to make its own strategic investment and operational decisions both in response to a transmission expansion decision and in anticipation of a market price settled by an Independent System Operator (ISO) market clearing problem. The model poses a complicated tri-level structure including an equilibrium problem with equilibrium constraints (EPEC) sub-problem. A hybrid iterative algorithm is proposed to solve the problem efficiently and reliably.
Developing a Continuous Improvement System
2016-09-16
disagree that continuous improvement is critical to an organization's success, since conducting business using a status quo philosophy will not work...for implementing one of these processes include: better operational efficiency, increased customer satisfaction, improved employee morale ...when a problem in reliability or maintenance may become the greatest opportunity. As described in the January-February 2011 issue of Defense AT&L
The Aviation Paradox: Why We Can 'Know' Jetliners But Not Reactors.
Downer, John
2017-01-01
Publics and policymakers increasingly have to contend with the risks of complex, safety-critical technologies, such as airframes and reactors. As such, 'technological risk' has become an important object of modern governance, with state regulators as core agents, and 'reliability assessment' as the most essential metric. The Science and Technology Studies (STS) literature casts doubt on whether or not we should place our faith in these assessments because predictively calculating the ultra-high reliability required of such systems poses seemingly insurmountable epistemological problems. This paper argues that these misgivings are warranted in the nuclear sphere, despite evidence from the aviation sphere suggesting that such calculations can be accurate. It explains why regulatory calculations that predict the reliability of new airframes cannot work in principle, and then it explains why those calculations work in practice. It then builds on this explanation to argue that the means by which engineers manage reliability in aviation is highly domain-specific, and to suggest how a more nuanced understanding of jetliners could inform debates about nuclear energy.
A Reserve-based Method for Mitigating the Impact of Renewable Energy
NASA Astrophysics Data System (ADS)
Krad, Ibrahim
The fundamental operating paradigm of today's power systems is undergoing a significant shift. This is partially motivated by the increased desire for incorporating variable renewable energy resources into generation portfolios. While these generating technologies offer clean energy at zero marginal cost, i.e. no fuel costs, they also offer unique operating challenges for system operators. Perhaps the biggest operating challenge these resources introduce is accommodating their intermittent fuel source availability. For this reason, these generators increase the system-wide variability and uncertainty. As a result, system operators are revisiting traditional operating strategies to more efficiently incorporate these generation resources to maximize the benefit they provide while minimizing the challenges they introduce. One way system operators have accounted for system variability and uncertainty is through the use of operating reserves. Operating reserves can be simplified as excess capacity kept online during real time operations to help accommodate unforeseen fluctuations in demand. With new generation resources, a new class of operating reserves has emerged that is generally known as flexibility, or ramping, reserves. This new reserve class is meant to better position systems to mitigate severe ramping in the net load profile. The best way to define this new requirement is still under investigation. Typical requirement definitions focus on the additional uncertainty introduced by variable generation and there is room for improvement regarding explicit consideration for the variability they introduce. An exogenous reserve modification method is introduced in this report that can improve system reliability with minimal impacts on total system wide production costs. Another potential solution to this problem is to formulate the problem as a stochastic programming problem. The unit commitment and economic dispatch problems are typically formulated as deterministic problems due to fast solution times and the solutions being sufficient for operations. Improvements in technical computing hardware have reignited interest in stochastic modeling. The variability of wind and solar naturally lends itself to stochastic modeling. The use of explicit reserve requirements in stochastic models is an area of interest for power system researchers. This report introduces a new reserve modification implementation based on previous results to be used in a stochastic modeling framework. With technological improvements in distributed generation technologies, microgrids are currently being researched and implemented. Microgrids are small power systems that have the ability to serve their demand with their own generation resources and may have a connection to a larger power system. As battery technologies improve, they are becoming a more viable option in these distributed power systems and research is necessary to determine the most efficient way to utilize them. This report will investigate several unique operating strategies for batteries in small power systems and analyze their benefits. These new operating strategies will help reduce operating costs and improve system reliability.
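One simple way to phrase a flexibility (ramping) reserve requirement of the kind discussed is as a high percentile of historical net-load ramps over the scheduling horizon, held separately in the up and down directions; a hypothetical sketch, not the report's exact reserve modification:

```python
import numpy as np

def ramp_reserve(net_load, horizon=4, q=0.95):
    """net_load: historical net load (load minus wind/solar) per interval.
    Returns (up, down) reserve levels covering the q-quantile of observed
    ramps over `horizon` intervals in each direction."""
    net_load = np.asarray(net_load, dtype=float)
    deltas = net_load[horizon:] - net_load[:-horizon]
    up = np.quantile(np.maximum(deltas, 0.0), q)
    down = np.quantile(np.maximum(-deltas, 0.0), q)
    return up, down

rng = np.random.default_rng(3)
load = 1000 + 150 * np.sin(np.linspace(0, 20 * np.pi, 2000))
net = load - 200 * rng.random(2000)   # invented variable-generation profile
print(ramp_reserve(net))
```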
The system of technical diagnostics of the industrial safety information network
NASA Astrophysics Data System (ADS)
Repp, P. V.
2017-01-01
This research is devoted to problems of safety of the industrial information network. The basic sub-networks that ensure reliable operation of the elements of the industrial Automatic Process Control System were identified, and the core tasks of technical diagnostics of industrial information safety were presented. The proposed structure of the technical diagnostics system for information safety includes two parts: a generator of cyber-attacks and a virtual model of the enterprise information network, obtained by scanning a real enterprise network. A new classification of cyber-attacks was proposed; this classification enables one to design an efficient generator of cyber-attack sets for testing the virtual models of the industrial information network. Monte Carlo numerical methods (with Sobol LPτ sequences) and Markov chains were considered as the design basis for the cyber-attack generation algorithm. The proposed system also includes a diagnostic analyzer performing expert functions. The stability factor (Kstab) was selected as an integrative quantitative indicator of network reliability. This factor is determined by the weight of the sets of cyber-attacks that identify vulnerabilities of the network; the weight depends on the frequency and complexity of the cyber-attacks, the degree of damage, and the complexity of remediation. The proposed Kstab is an effective integral quantitative measure of information network reliability.
Low Temperature Regenerators for Zero Boil-Off Liquid Hydrogen Pulse Tube Cryocoolers
NASA Technical Reports Server (NTRS)
Salerno, Louis J.; Kashani, Ali; Helvensteijn, Ben; Kittel, Peter; Arnold, James O. (Technical Monitor)
2002-01-01
Recently, a great deal of attention has been focused on zero boil-off (ZBO) propellant storage as a means of minimizing the launch mass required for long-term exploration missions. A key component of ZBO systems is the cooler. Pulse tube coolers offer the advantage of zero moving mass at the cold head, and recent advances in lightweight, high efficiency cooler technology have paved the way for reliable liquid oxygen (LOx) temperature coolers to be developed which are suitable for flight ZBO systems. Liquid hydrogen (LH2) systems, however, are another matter. For ZBO liquid hydrogen systems, cooling powers of 1-5 watts are required at 20 K. The final development frontier for these coolers is to achieve high efficiency and reliability at lower operating temperatures. Most of the life-limiting issues of flight Stirling and pulse tube coolers are associated with contamination, drive mechanisms, and drive electronics. These problems are well in hand in the present generation coolers. The remaining efficiency and reliability issues reside with the low temperature regenerators. This paper will discuss advances to be made in regenerators for pulse tube LH2 ZBO coolers, present some historical background, and discuss recent progress in regenerator technology development using alloys of erbium.
Reliability and Validity Evidence of Multiple Balance Assessments in Athletes With a Concussion
Murray, Nicholas; Salvatore, Anthony; Powell, Douglas; Reed-Jones, Rebecca
2014-01-01
Context: An estimated 300 000 sport-related concussion injuries occur in the United States annually. Approximately 30% of individuals with concussions experience balance disturbances. Common methods of balance assessment include the Clinical Test of Sensory Organization and Balance (CTSIB), the Sensory Organization Test (SOT), the Balance Error Scoring System (BESS), and the Romberg test; however, the National Collegiate Athletic Association recommended the Wii Fit as an alternative measure of balance in athletes with a concussion. A central concern regarding the implementation of the Wii Fit is whether it is reliable and valid for measuring balance disturbance in athletes with concussion. Objective: To examine the reliability and validity evidence for the CTSIB, SOT, BESS, Romberg test, and Wii Fit for detecting balance disturbance in athletes with a concussion. Data Sources: Literature considered for review included publications with reliability and validity data for the assessments of balance (CTSIB, SOT, BESS, Romberg test, and Wii Fit) from PubMed, PsycINFO, and CINAHL. Data Extraction: We identified 63 relevant articles for consideration in the review. Of the 63 articles, 28 were considered appropriate for inclusion and 35 were excluded. Data Synthesis: No current reliability or validity information supports the use of the CTSIB, SOT, Romberg test, or Wii Fit for balance assessment in athletes with a concussion. The BESS demonstrated moderate to high reliability (interclass correlation coefficient = 0.87) and low to moderate validity (sensitivity = 34%, specificity = 87%). However, the Romberg test and Wii Fit have been shown to be reliable tools in the assessment of balance in Parkinson patients. Conclusions: The BESS can evaluate balance problems after a concussion. However, it lacks the ability to detect balance problems after the third day of recovery. Further investigation is needed to establish the use of the CTSIB, SOT, Romberg test, and Wii Fit for assessing balance in athletes with concussions. PMID:24933431
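The accuracy figures quoted for the BESS come from a standard 2x2 screening table; a minimal sketch of the four measures (the counts below are hypothetical, chosen only to reproduce sensitivity of 34% and specificity of 87%):

```python
def screening_stats(tp, fp, fn, tn):
    """Diagnostic accuracy measures from a 2x2 table of test vs. true status."""
    return {
        "sensitivity": tp / (tp + fn),   # impaired athletes correctly flagged
        "specificity": tn / (tn + fp),   # healthy athletes correctly cleared
        "ppv": tp / (tp + fp),           # flagged who are truly impaired
        "npv": tn / (tn + fn),           # cleared who are truly healthy
    }

print(screening_stats(tp=17, fp=13, fn=33, tn=87))
```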
Energy-efficient fault tolerance in multiprocessor real-time systems
NASA Astrophysics Data System (ADS)
Guo, Yifeng
The recent progress in the multiprocessor/multicore systems has important implications for real-time system design and operation. From vehicle navigation to space applications as well as industrial control systems, the trend is to deploy multiple processors in real-time systems: systems with 4 -- 8 processors are common, and it is expected that many-core systems with dozens of processing cores will be available in near future. For such systems, in addition to general temporal requirement common for all real-time systems, two additional operational objectives are seen as critical: energy efficiency and fault tolerance. An intriguing dimension of the problem is that energy efficiency and fault tolerance are typically conflicting objectives, due to the fact that tolerating faults (e.g., permanent/transient) often requires extra resources with high energy consumption potential. In this dissertation, various techniques for energy-efficient fault tolerance in multiprocessor real-time systems have been investigated. First, the Reliability-Aware Power Management (RAPM) framework, which can preserve the system reliability with respect to transient faults when Dynamic Voltage Scaling (DVS) is applied for energy savings, is extended to support parallel real-time applications with precedence constraints. Next, the traditional Standby-Sparing (SS) technique for dual processor systems, which takes both transient and permanent faults into consideration while saving energy, is generalized to support multiprocessor systems with arbitrary number of identical processors. Observing the inefficient usage of slack time in the SS technique, a Preference-Oriented Scheduling Framework is designed to address the problem where tasks are given preferences for being executed as soon as possible (ASAP) or as late as possible (ALAP). A preference-oriented earliest deadline (POED) scheduler is proposed and its application in multiprocessor systems for energy-efficient fault tolerance is investigated, where tasks' main copies are executed ASAP while backup copies ALAP to reduce the overlapped execution of main and backup copies of the same task and thus reduce energy consumption. All proposed techniques are evaluated through extensive simulations and compared with other state-of-the-art approaches. The simulation results confirm that the proposed schemes can preserve the system reliability while still achieving substantial energy savings. Finally, for both SS and POED based Energy-Efficient Fault-Tolerant (EEFT) schemes, a series of recovery strategies are designed when more than one (transient and permanent) faults need to be tolerated.
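The energy/reliability conflict noted above is usually captured with an exponential fault-rate model: scaling frequency and voltage down saves dynamic energy but raises the transient fault rate and stretches execution time. A sketch of this commonly used model, with illustrative parameter values:

```python
import numpy as np

def fault_rate(f, lam0=1e-6, d=2.0, f_min=0.4):
    """Transient fault rate at normalized frequency f in [f_min, 1]:
    lam0 * 10 ** (d * (1 - f) / (1 - f_min)); lower f means more faults."""
    return lam0 * 10.0 ** (d * (1.0 - f) / (1.0 - f_min))

def task_reliability(f, cycles=1.0):
    t = cycles / f                    # execution time stretches as f drops
    return np.exp(-fault_rate(f) * t)

def dynamic_energy(f, cycles=1.0, ceff=1.0):
    return ceff * f ** 2 * cycles     # P ~ C*V^2*f with V ~ f gives E ~ f^2

for f in (1.0, 0.7, 0.4):
    print(f, dynamic_energy(f), task_reliability(f))
```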
Mathematics applied to the climate system: outstanding challenges and recent progress
Williams, Paul D.; Cullen, Michael J. P.; Davey, Michael K.; Huthnance, John M.
2013-01-01
The societal need for reliable climate predictions and a proper assessment of their uncertainties is pressing. Uncertainties arise not only from initial conditions and forcing scenarios, but also from model formulation. Here, we identify and document three broad classes of problems, each representing what we regard to be an outstanding challenge in the area of mathematics applied to the climate system. First, there is the problem of the development and evaluation of simple physically based models of the global climate. Second, there is the problem of the development and evaluation of the components of complex models such as general circulation models. Third, there is the problem of the development and evaluation of appropriate statistical frameworks. We discuss these problems in turn, emphasizing the recent progress made by the papers presented in this Theme Issue. Many pressing challenges in climate science require closer collaboration between climate scientists, mathematicians and statisticians. We hope the papers contained in this Theme Issue will act as inspiration for such collaborations and for setting future research directions. PMID:23588054
Reliability and concurrent validity of the computer workstation checklist.
Baker, Nancy A; Livengood, Heather; Jacobs, Karen
2013-01-01
Self-report checklists are used to assess computer workstation setup, typically by workers not trained in ergonomic assessment or checklist interpretation. Though many checklists exist, few have been evaluated for reliability and validity. This study examined the reliability and validity of the Computer Workstation Checklist (CWC) in identifying mismatches in workers' self-reported workstation problems. The CWC was completed at baseline and at 1 month to establish reliability. Validity was determined by comparing CWC baseline data with an onsite workstation evaluation conducted by an expert in computer workstation assessment. Reliability ranged from fair to near perfect (prevalence-adjusted bias-adjusted kappa, 0.38-0.93); items with the strongest agreement were related to the input device, monitor, computer table, and document holder. The CWC had greater specificity (11 of 16 items) than sensitivity (3 of 16 items). The positive predictive value was greater than the negative predictive value for all questions. The CWC has strong reliability. Sensitivity and specificity suggested workers often indicated no problems with workstation setup when problems existed. The evidence suggests that while the CWC may not be valid when used alone, it may be a suitable adjunct to an ergonomic assessment completed by professionals.
Designing for Reliability and Robustness
NASA Technical Reports Server (NTRS)
Svetlik, Randall G.; Moore, Cherice; Williams, Antony
2017-01-01
Long-duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on board the International Space Station (ISS) to combat these effects by minimizing bone and muscle loss. Given the importance of these hardware systems to the health of the crew, the equipment must remain readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven challenging throughout the time crewmembers have been living on the ISS, beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best practices have been leveraged in seeking to maximize availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of the hardware systems and therefore have contributed to keeping crewmembers healthy upon return to Earth. This paper reviews the failure history of three spaceflight exercise countermeasure systems, identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), the Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) are reviewed and analyzed, and conclusions are identified so as to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path toward reduced system downtime.
Distributed Computing for the Pierre Auger Observatory
NASA Astrophysics Data System (ADS)
Chudoba, J.
2015-12-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high-energy cosmic ray measurements. Comparing theoretical interaction models with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available cloud resources. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report on the experience of migrating to the new system.
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds to define the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as to analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of responses, as well as their cumulative distribution functions, are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
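A minimal sketch of the contrast drawn above, assuming a toy second-order system rather than the paper's missile model: sampling a probabilistic parameter yields likelihood information (mean, variance, exceedance probability) that an interval bound alone cannot provide.

```python
# Monte Carlo propagation of parameter uncertainty through a classical
# response metric. The damping-ratio distribution is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
zeta = rng.normal(0.5, 0.05, n)          # damping ratio: mean 0.5, sd 0.05
zeta = np.clip(zeta, 1e-3, 0.999)        # keep samples underdamped

# Fractional overshoot of a second-order step response:
# exp(-zeta*pi / sqrt(1 - zeta^2))
overshoot = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))

print(f"mean overshoot      = {overshoot.mean():.3f}")
print(f"std  overshoot      = {overshoot.std():.3f}")
print(f"P(overshoot > 0.25) = {(overshoot > 0.25).mean():.3f}")
```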
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework adopts a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. PIs are then identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. To test the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
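For orientation, a generic differential evolution loop of the kind invoked above (continuous-coded, with a placeholder objective; the paper's MVL covering formulation is not reproduced here).

```python
# Standard DE/rand/1/bin: mutate with a scaled difference vector, apply
# binomial crossover, keep the trial vector when it improves the objective.
import numpy as np

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)        # mutation
            mask = rng.random(dim) < CR                      # crossover mask
            mask[rng.integers(dim)] = True                   # force one gene
            trial = np.where(mask, mutant, pop[i])
            if (tc := f(trial)) < cost[i]:                   # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

best, val = de_minimize(lambda x: np.sum(x**2), [(-5, 5)] * 3)
print(best, val)   # converges toward the origin
```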
A Model-Driven Development Method for Management Information Systems
NASA Astrophysics Data System (ADS)
Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki
Traditionally, a Management Information System (MIS) has been developed without using formal methods. With informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. An experiment has shown that the method reduces development effort by more than 30%.
Robustness and cognition in stabilization problem of dynamical systems based on asymptotic methods
NASA Astrophysics Data System (ADS)
Dubovik, S. A.; Kabanov, A. A.
2017-01-01
The problem of synthesizing stabilizing systems based on principles of cognitive (logical-dynamic) control for mobile objects operated under uncertain conditions is considered. This direction in control theory builds on the principles of guaranteeing robust synthesis focused on worst-case scenarios of the controlled process. The guaranteeing approach can provide functioning of the system with the required quality and reliability only under sufficiently small disturbances and in the absence of large deviations from some regular features of the controlled process. The main tool for the analysis of large deviations and the prediction of critical states here is the action functional. After the forecast is built, the choice of anti-crisis control is a supervisory control problem that optimizes the control system in normal mode and prevents escape of the controlled process into critical states. An essential aspect of the approach presented here is the presence of two-level (logical-dynamic) control: the input data are used not only to generate the synthesized feedback (local robust synthesis) in advance (off-line), but also to make decisions about the current (on-line) quality of stabilization in the global sense. An example of using the presented approach for the development of a ship tilting prediction system is considered.
Evaluation of engineered foods for Closed Ecological Life Support System (CELSS)
NASA Technical Reports Server (NTRS)
Karel, M.
1981-01-01
A system of conversion of locally regenerated raw materials and of resupplied freeze-dried foods and ingredients into acceptable, safe and nutritious engineered foods is proposed. The first phase of the proposed research has the following objectives: (1) evaluation of feasibility of developing acceptable and reliable engineered foods from a limited selection of plants, supplemented by microbially produced nutrients and a minimum of dehydrated nutrient sources (especially those of animal origin); (2) evaluation of research tasks and specifications of research projects to adapt present technology and food science to expected space conditions (in particular, problems arising from unusual gravity conditions, problems of limited size and the isolation of the food production system, and the opportunities of space conditions are considered); (3) development of scenarios of agricultural production of plant and microbial systems, including the specifications of processing wastes to be recycled.
Multiagent Flight Control in Dynamic Environments with Cooperative Coevolutionary Algorithms
NASA Technical Reports Server (NTRS)
Colby, Mitchell; Knudson, Matthew D.; Tumer, Kagan
2014-01-01
Dynamic environments, in which objectives and environmental features change over time, pose a difficult problem for planning optimal paths. Path-planning methods are typically computationally expensive and are often difficult to run in real time when system objectives change. This computational problem is compounded when multiple agents are present, as the state and action space grows exponentially with the number of agents in the system. In this work, we use cooperative coevolutionary algorithms to develop policies that control agent motion in a dynamic multiagent unmanned aerial system environment, in which goals and perceptions change, while ensuring safety constraints are not violated. Rather than replanning new paths when the environment changes, we develop a policy that maps the new environmental features to a trajectory for the agent, ensuring safe and reliable operation while providing 92% of the theoretically optimal performance.
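The cooperative coevolutionary structure can be sketched as follows (assumed details, with a placeholder joint-fitness function standing in for the flight simulation rollout).

```python
# Generic CCEA sketch: each agent keeps its own population of policies;
# candidates are scored by evaluating them jointly with partners drawn from
# the other agents' populations, then each population evolves independently.
import random

def evaluate_team(policies):
    # Placeholder joint fitness: reward agreement between the two policies
    # (stands in for a multiagent flight simulation).
    return -sum((policies[0][i] - policies[1][i]) ** 2
                for i in range(len(policies[0])))

def ccea(n_agents=2, pop_size=10, dim=3, generations=50):
    pops = [[[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(n_agents)]
    for _ in range(generations):
        for a in range(n_agents):
            scored = []
            for cand in pops[a]:
                partners = [random.choice(pops[b]) for b in range(n_agents)]
                partners[a] = cand                  # evaluate cand in a team
                scored.append((evaluate_team(partners), cand))
            scored.sort(key=lambda s: s[0], reverse=True)
            elite = [c for _, c in scored[:pop_size // 2]]
            # Refill the population with mutated copies of the elites.
            pops[a] = elite + [[g + random.gauss(0, 0.1)
                                for g in random.choice(elite)]
                               for _ in range(pop_size - len(elite))]
    return pops

pops = ccea()
```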
NASA Astrophysics Data System (ADS)
Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian
2010-03-01
In order to improve the security and reliability of autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in the state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMIs) to solve for the H∞ controller for the system. Different actuator failures were then also expressed mathematically, allowing the H∞ robust controller to account for these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.
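A minimal sketch of the LMI machinery mentioned above, assuming the cvxpy modeling library and a toy system matrix rather than the AUV model: feasibility of the Lyapunov LMI A'P + PA < 0 with P > 0 certifies stability.

```python
# Solve a simple Lyapunov LMI feasibility problem as a semidefinite program.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])             # a stable toy system matrix
n = A.shape[0]
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, np.linalg.eigvals(P.value))          # feasible, P > 0
```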
Laser Spot Detection Based on Reaction Diffusion.
Vázquez-Otero, Alejandro; Khikhlukha, Danila; Solano-Altamirano, J M; Dormido, Raquel; Duro, Natividad
2016-03-01
Center-location of a laser spot is a problem of interest when the laser is used for processing and performing measurements. Measurement quality depends on correctly determining the location of the laser spot. Hence, improving and proposing algorithms for the correct location of the spots are fundamental issues in laser-based measurements. In this paper we introduce a Reaction Diffusion (RD) system as the main computational framework for robustly finding laser spot centers. The method presented is compared with a conventional approach for locating laser spots, and the experimental results indicate that RD-based computation generates reliable and precise solutions. These results confirm the flexibility of the new computational paradigm based on RD systems for addressing problems that can be reduced to a set of geometric operations.
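A loose sketch of the idea, using a bistable Nagumo-type reaction-diffusion equation rather than the paper's specific RD system: sub-threshold image noise decays while the spot region stays active, and the centroid of the activated region estimates the spot center.

```python
# Bistable RD dynamics: u_t = D*lap(u) + u(1-u)(u-theta). Background values
# below theta relax to 0; the seeded spot stays near 1 and spreads
# symmetrically, so its centroid is preserved.
import numpy as np

rng = np.random.default_rng(1)
N = 64
u = np.clip(rng.normal(0.0, 0.05, (N, N)), 0, 1)   # noisy background "image"
u[28:36, 30:38] = 1.0                              # spot centered at (31.5, 33.5)

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

D, dt, theta = 0.2, 0.2, 0.3
for _ in range(300):
    u += dt * (D * laplacian(u) + u * (1 - u) * (u - theta))

ys, xs = np.nonzero(u > 0.5)                       # activated region
print("estimated spot center:", ys.mean(), xs.mean())   # ~ (31.5, 33.5)
```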
Management. A continuing bibliography with indexes. [March 1980
NASA Technical Reports Server (NTRS)
1980-01-01
This bibliography cites 604 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1979 covering the management of research and development, contracts, production, logistics, personnel, safety, reliability and quality control. Program, project, and systems management; management policy, philosophy, tools, and techniques; decision making processes for managers; technology assessment; management of urban problems; and information for managers on Federal resources, expenditures, financing, and budgeting are also covered. Abstracts are provided as well as subject, personal author, and corporate source indexes.
PCB-level Electro thermal Coupling Simulation Analysis
NASA Astrophysics Data System (ADS)
Zhou, Runjing; Shao, Xuchen
2017-10-01
Power transmission networks need to carry more current as power density increases, making temperature rise and reliability increasingly serious problems. In order to design the power supply system accurately, the influences on it, including Joule heating, air convection, and other factors, must be considered. Therefore, this paper analyzes the relationship between the electric circuit and the thermal circuit on the basis of electric-circuit and thermal-circuit theory.
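The electro-thermal coupling loop can be sketched with generic copper-trace numbers (assumed values, not from the paper): Joule heating raises the temperature, the temperature raises the copper resistance, and the coupled steady state is found by fixed-point iteration.

```python
# Coupled electro-thermal steady state for a single trace.
I = 5.0          # trace current, A
R0 = 0.010       # trace resistance at T0, ohm
T0 = 25.0        # reference/ambient temperature, deg C
alpha = 0.0039   # temperature coefficient of copper, 1/K
R_th = 40.0      # trace-to-ambient thermal resistance, K/W

T = T0
for _ in range(50):                      # fixed-point iteration
    R = R0 * (1 + alpha * (T - T0))      # electrical: R depends on T
    P = I**2 * R                         # Joule heat
    T_new = T0 + R_th * P                # thermal: T depends on P
    if abs(T_new - T) < 1e-9:
        break
    T = T_new

print(f"steady state: R={R*1e3:.2f} mOhm, P={P:.3f} W, T={T:.1f} C")
```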
The evolution of automated launch processing
NASA Technical Reports Server (NTRS)
Tomayko, James E.
1988-01-01
The NASA Launch Processing System (LPS) considered here has arrived at satisfactory solutions to the distributed-computing, user-interface, dissimilar-hardware-interface, and automation-related problems that emerge in the specific arena of spacecraft launch preparations. An aggressive effort was made to apply to the LPS the lessons learned in the 1960s, during the first attempts at automatic launch vehicle checkout. As the Space Shuttle System continues to evolve, the primary contributor to safety and reliability will be the LPS.
Land mobile satellite propagation results
NASA Technical Reports Server (NTRS)
Nicholas, David C.
1988-01-01
During the Fall of 1987, a land mobile satellite demonstration using the MARECS B2 satellite at 26 degrees W was performed. While not all the data have been digested, some observations are in order. First, the system worked remarkably well for the margins indicated. Second, when the system worked poorly, the experimenters could almost always identify terrain or other obstacles causing blockage. Third, the forward link seems relatively more reliable than the return link, and occasional return link problems occurred which have not been entirely explained.
Abusive behavior is barrier to high-reliability health care systems, culture of patient safety.
Cassirer, C; Anderson, D; Hanson, S; Fraser, H
2000-11-01
Addressing abusive behavior in the medical workplace presents an important opportunity to deliver on the national commitment to improve patient safety. Fundamentally, the issue of patient safety and the issue of abusive behavior in the workplace are both about harm. Undiagnosed and untreated, abusive behavior is a barrier to creating high reliability service delivery systems that ensure patient safety. Health care managers and clinicians need to improve their awareness, knowledge, and understanding of the issue of workplace abuse. The available research suggests there is a high prevalence of workplace abuse in medicine. Both administrators at the blunt end and clinicians at the sharp end should consider learning new approaches to defining and treating the problem of workplace abuse. Eliminating abusive behavior has positive implications for preventing and controlling medical injury and improving organizational performance.
Methods for reliability evaluation of trust and reputation systems
NASA Astrophysics Data System (ADS)
Janiszewski, Marek B.
2016-09-01
Trust and reputation systems are a systematic approach to building security on the basis of observations of nodes' behaviour. Exchanging nodes' opinions about other nodes is very useful for identifying nodes that act selfishly or maliciously. The idea behind trust and reputation systems has gained significance because conventional security measures (based on cryptography) are often not sufficient. Trust and reputation systems can be used in various types of networks, such as WSN, MANET, and P2P, and also in e-commerce applications. Trust and reputation systems not only provide benefits but can also be a threat themselves. Many attacks aimed at trust and reputation systems exist, but they have not yet gained enough attention from research teams. Moreover, the joint effects of many known attacks remain a very interesting field of research. The lack of an acknowledged methodology for evaluating trust and reputation systems is a serious problem. This paper presents various approaches to evaluating such systems. This work also describes a generalization of many trust and reputation systems that can be used to evaluate their reliability in the context of preventing various attacks.
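As one concrete example of the kind of system being generalized, a beta-reputation estimate in the style of Jøsang (a common formulation, not necessarily one of the systems evaluated in the paper):

```python
# With r positive and s negative observations of a node's behaviour, the
# expected beta reputation is (r+1)/(r+s+2), i.e. the mean of Beta(r+1, s+1).
def beta_reputation(r, s):
    return (r + 1) / (r + s + 2)

print(beta_reputation(0, 0))    # 0.50 -> no evidence, neutral prior
print(beta_reputation(9, 1))    # 0.83 -> mostly cooperative node
print(beta_reputation(1, 9))    # 0.17 -> selfish/malicious node
```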
Huang, X N; Zhang, Y; Feng, W W; Wang, H S; Cao, B; Zhang, B; Yang, Y F; Wang, H M; Zheng, Y; Jin, X M; Jia, M X; Zou, X B; Zhao, C X; Robert, J; Jing, Jin
2017-06-02
Objective: To evaluate the reliability and validity of the warning signs checklist developed by the National Health and Family Planning Commission of the People's Republic of China (NHFPC), so as to determine the screening effectiveness of warning signs for developmental problems in early childhood. Method: Stratified random sampling was used to assess the reliability and validity of the warning signs checklist; 2,110 children 0 to 6 years of age (1,513 low-risk subjects and 597 high-risk subjects) were recruited from 11 provinces of China. The reliability evaluation for the warning signs included test-retest reliability and interrater reliability. With the Age and Stage Questionnaire (ASQ) and the Gesell Development Diagnosis Scale (GESELL) as criterion scales, criterion validity was assessed by determining the correlation and consistency between the screening results of the warning signs and the criterion scales. Result: For the warning signs, the screening positive rates at different ages ranged from 10.8% (21/141) to 26.2% (51/137). The median (interquartile) testing time for each subject was 1 (0.6) minute. Both the test-retest reliability and interrater reliability of the warning signs reached 0.7 or above, indicating good stability. In terms of validity, there was remarkable consistency between the ASQ and the warning signs, with a Kappa value of 0.63. With GESELL as criterion, the sensitivity of the warning signs in children with suspected developmental delay was 82.2%, and the specificity was 77.7%. The overall Youden index was 0.6. Conclusion: The reliability and validity of the warning signs checklist for screening early childhood developmental problems meet the basic requirements of psychological screening scales, and the checklist is quick to administer and easy to use. Thus, it can be used for screening psychological and behavioral problems in early childhood, especially in community settings.
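A quick sketch of the screening metrics reported above (the confusion-matrix counts are hypothetical, chosen only to be consistent with the reported rates):

```python
# Sensitivity, specificity, and Youden index from a 2x2 confusion matrix.
def screening_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)          # flagged / all truly delayed
    specificity = tn / (tn + fp)          # cleared / all truly typical
    youden = sensitivity + specificity - 1
    return sensitivity, specificity, youden

# e.g. 37/45 delayed children flagged, 155/200 typical children not flagged
sens, spec, j = screening_metrics(tp=37, fn=8, fp=45, tn=155)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} Youden={j:.2f}")
# ~82% / ~78% / ~0.60, in line with the reported performance
```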
Gillespie, Alex; Reader, Tom W
2016-01-01
Background Letters of complaint written by patients and their advocates reporting poor healthcare experiences represent an under-used data source. The lack of a method for extracting reliable data from these heterogeneous letters hinders their use for monitoring and learning. To address this gap, we report on the development and reliability testing of the Healthcare Complaints Analysis Tool (HCAT). Methods HCAT was developed from a taxonomy of healthcare complaints reported in a previously published systematic review. It introduces the novel idea that complaints should be analysed in terms of severity. Recruiting three groups of educated lay participants (n=58, n=58, n=55), we refined the taxonomy through three iterations of discriminant content validity testing. We then supplemented this refined taxonomy with explicit coding procedures for seven problem categories (each with four levels of severity), stage of care and harm. These combined elements were further refined through iterative coding of a UK national sample of healthcare complaints (n=25, n=80, n=137, n=839). To assess reliability and accuracy for the resultant tool, 14 educated lay participants coded a referent sample of 125 healthcare complaints. Results The seven HCAT problem categories (quality, safety, environment, institutional processes, listening, communication, and respect and patient rights) were found to be conceptually distinct. On average, raters identified 1.94 problems (SD=0.26) per complaint letter. Coders exhibited substantial reliability in identifying problems at four levels of severity; moderate and substantial reliability in identifying stages of care (except for ‘discharge/transfer’, which was only fairly reliable); and substantial reliability in identifying overall harm. Conclusions HCAT is not only the first reliable tool for coding complaints, it is the first tool to measure the severity of complaints. It facilitates service monitoring and organisational learning, and it enables future research examining whether healthcare complaints are a leading indicator of poor service outcomes. HCAT is freely available to download and use. PMID:26740496
NASA Astrophysics Data System (ADS)
Sun, Xiaoqiang; Yuan, Chaochun; Cai, Yingfeng; Wang, Shaohua; Chen, Long
2017-09-01
This paper presents the hybrid modeling and model predictive control of an air suspension system with a damping multi-mode switching damper. Unlike a traditional damper with continuously adjustable damping, in this study a new damper with four discrete damping modes is applied to a vehicle semi-active air suspension. The new damper achieves different damping modes simply by controlling the on-off statuses of two solenoid valves, which makes its damping adjustment more efficient and more reliable. However, since damping mode switching induces different modes of operation, the air suspension system with the new damper poses a challenging hybrid control problem. To model both the continuous/discrete dynamics and the switching between damping modes, the framework of mixed logical dynamical (MLD) systems is used to establish the hybrid system model. Based on the resulting hybrid dynamical model, the control problem is recast as a model predictive control (MPC) problem, which allows the switching sequences of the damping modes to be optimized while taking the suspension performance requirements into account. Numerical simulation results demonstrate the efficacy of the proposed control method.
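Because there are only four damping modes, the flavor of predictive mode selection can be sketched by exhaustive search over a short horizon (toy quarter-car numbers, assumed values; the paper's full MLD/MPC formulation is not reproduced here).

```python
# Enumerate all mode sequences over a short horizon and pick the one that
# minimizes squared body acceleration for a simple sprung-mass model.
import itertools
import numpy as np

m, k = 300.0, 15000.0                     # sprung mass [kg], spring rate [N/m]
modes = [800.0, 1500.0, 2500.0, 4000.0]   # discrete damping coefficients [N s/m]
dt, horizon = 0.01, 4                     # 4-step lookahead -> 4**4 sequences

def step(x, c):
    z, zdot = x
    zddot = -(k * z + c * zdot) / m       # body acceleration
    return np.array([z + dt * zdot, zdot + dt * zddot]), zddot

def best_sequence(x0):
    best_cost, best_seq = np.inf, None
    for seq in itertools.product(modes, repeat=horizon):
        x, cost = x0, 0.0
        for c in seq:
            x, acc = step(x, c)
            cost += acc**2                # comfort: penalize acceleration
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

x = np.array([0.05, 0.0])                 # 5 cm initial deflection
print(best_sequence(x)[0])                # mode applied at the current step
```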
Trust-Based Cooperative Social System Applied to a Carpooling Platform for Smartphones.
Caballero-Gil, Cándido; Caballero-Gil, Pino; Molina-Gil, Jezabel; Martín-Fernández, Francisco; Loia, Vincenzo
2017-01-27
One of the worst traffic problems today is the existence of huge traffic jams in almost any big city, produced by the large number of commuters using private cars. This problem has led to increased research on optimizing vehicle occupancy in urban areas, since most cars are occupied by single passengers. The solution of sharing the available seats in cars, known as carpooling, is already available in major cities around the world. However, carpooling is still not considered a safe and reliable solution by many users. With the widespread use of mobile technology and social networks, it is possible to create a trust-based platform to promote carpooling through a convenient, fast and secure system. The main objective of this work is the design and implementation of a carpooling system that improves some important aspects of previous systems, focusing on trust between users and on the security of the system. The proposed system guarantees user privacy and measures trust levels through a new reputation algorithm. In addition, the proposal has been developed as a mobile application for devices using the Android Open Source Project.
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
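A minimal sketch of fitting a second-order response surface by least squares, the kind of model the global technique uses (random toy data, not the cylinder buckling model):

```python
# Fit y ~ b0 + sum(bi*xi) + sum(bij*xi*xj) by ordinary least squares.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
n_samples, n_vars = 60, 3
X = rng.uniform(-1, 1, (n_samples, n_vars))
y = (2.0 + X @ [1.0, -0.5, 0.3]
     + 0.8 * X[:, 0] * X[:, 1]
     + rng.normal(0, 0.01, n_samples))

def quad_features(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print(np.round(beta, 3))   # recovers the intercept, linear, and x0*x1 terms
```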
Fully automated laser ray tracing system to measure changes in the crystalline lens GRIN profile.
Qiu, Chen; Maceo Heilman, Bianca; Kaipio, Jari; Donaldson, Paul; Vaghefi, Ehsan
2017-11-01
Measuring the lens gradient refractive index (GRIN) accurately and reliably has proven an extremely challenging technical problem. A fully automated laser ray tracing (LRT) system was built to address this issue. The LRT system captures images of multiple laser projections before and after traversing through an ex vivo lens. These LRT images, combined with accurate measurements of the lens geometry, are used to calculate the lens GRIN profile. Mathematically, this is an ill-conditioned problem; hence, it is essential to apply biologically relevant constraints to produce a feasible solution. The lens GRIN measurements were compared with previously published data. Our GRIN retrieval algorithm produces fast and accurate measurements of the lens GRIN profile. Experiments to study the optics of physiologically perturbed lenses are the future direction of this research.
NASA Astrophysics Data System (ADS)
Pattke, Marco; Martin, Manuel; Voit, Michael
2017-05-01
Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes harder and harder to review the data manually. In safety-critical areas especially, automatic image exploitation could help solve this problem. Setting up such a system can, however, be difficult because of its increased complexity. Sensor placement is critical to ensure that people are detected and tracked reliably. We address this problem with a simulation framework that can simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable people-tracking system to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Sankararaman, Shankar
2013-01-01
Prognostics is centered on predicting the time of and time until adverse events in components, subsystems, and systems. It typically involves both a state estimation phase, in which the current health state of a system is identified, and a prediction phase, in which the state is projected forward in time. Since prognostics is mainly a prediction problem, prognostic approaches cannot avoid uncertainty, which arises due to several sources. Prognostics algorithms must both characterize this uncertainty and incorporate it into the predictions so that informed decisions can be made about the system. In this paper, we describe three methods to solve these problems, including Monte Carlo-, unscented transform-, and first-order reliability-based methods. Using a planetary rover as a case study, we demonstrate and compare the different methods in simulation for battery end-of-discharge prediction.
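For reference, a sketch of the unscented transform in its standard formulation (assumed tuning parameters, and a toy charge/current nonlinearity rather than the rover battery model):

```python
# Propagate a mean and covariance through a nonlinearity via 2n+1
# deterministically chosen sigma points.
import numpy as np

def unscented_transform(mu, P, f, alpha=1e-1, beta=2.0, kappa=0.0):
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # columns are sqrt directions
    sigma = np.vstack([mu, mu + S.T, mu - S.T])    # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(x) for x in sigma])            # propagated points
    mu_y = wm @ Y
    Py = (wc * (Y - mu_y).T) @ (Y - mu_y)
    return mu_y, Py

# Toy battery-style nonlinearity: remaining time = charge / load current.
f = lambda x: np.array([x[0] / x[1]])
mu_y, Py = unscented_transform(np.array([10.0, 2.0]), np.diag([0.5, 0.05]), f)
print(mu_y, Py)
```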
Integrated Safety Risk Reduction Approach to Enhancing Human-Rated Spaceflight Safety
NASA Astrophysics Data System (ADS)
Mikula, J. F. Kip
2005-12-01
This paper explores and defines the currently accepted concept and philosophy of safety improvement based on reliability enhancement (called here Reliability Enhancement Based Safety Theory [REBST]). In this theory, a reliability calculation is used as a measure of the safety achieved on the program. This calculation may be based on a math model or a Fault Tree Analysis (FTA) of the system, or on an Event Tree Analysis (ETA) of the system's operational mission sequence. In each case, the numbers used in this calculation are hardware failure rates gleaned from past similar programs. As part of this paper, a fictional but representative case study is provided that helps to illustrate the problems and inaccuracies of this approach to safety determination. Then a safety determination and enhancement approach based on hazard analysis, worst-case analysis, and safety risk determination (called here Worst Case Based Safety Theory [WCBST]) is presented; this approach is defined and detailed using the same example case study as the REBST case study. In the end, it is concluded that an approach combining the two theories works best to reduce safety risk.
Virtual and flexible digital signal processing system based on software PnP and component works
NASA Astrophysics Data System (ADS)
He, Tao; Wu, Qinghua; Zhong, Fei; Li, Wei
2005-05-01
The concept of software PnP (Plug & Play), analogous to hardware PnP, is put forward, and based on this idea a flexible virtual digital signal processing system (FVDSPS) is implemented. FVDSPS is composed of a main control center, many sub-function modules, and other hardware I/O modules. The main control center sends commands to the sub-function modules and manages the running order, parameters, and results of the sub-functions. The software kernel of FVDSPS is the DSP (Digital Signal Processing) module, which communicates with the main control center through defined protocols to accept commands or send requests. Data sharing and exchange between the main control center and the DSP modules are managed through the file system of the Windows operating system via this communication. FVDSPS is oriented to objects, to engineers, and to engineering problems. With FVDSPS, users can freely plug and play, and quickly reconfigure a signal processing system according to the engineering problem at hand without programming: what you see is what you get. Thus, an engineer can address engineering problems directly, pay more attention to the problems themselves, and improve the flexibility, reliability, and accuracy of the testing system. Because FVDSPS is built on the TCP/IP protocol, testing engineers and technology experts can be connected freely over the Internet regardless of location, and engineering problems can be resolved quickly and effectively. FVDSPS can be used in many fields such as instrumentation, fault diagnosis, device maintenance, and quality control.
Knight, Danica K; Becan, Jennifer E; Landrum, Brittany; Joe, George W; Flynn, Patrick M
2014-06-01
The purpose of this study is to establish the psychometric properties of a noncommercial, publicly available, modular screening and assessment system for adolescents in substance abuse treatment. Data were collected in 2011-2012 from 1,189 adolescents admitted to eight residential treatment programs in urban and rural locations in the United States. Results from three sets of analyses documented the instruments to be reliable. Females reported more problems than males, and younger adolescents reported more problems than older youth. Implications and limitations are discussed, and suggestions for future research are provided.
ZERO: probabilistic routing for deploy and forget Wireless Sensor Networks.
Vilajosana, Xavier; Llosa, Jordi; Pacho, Jose Carlos; Vilajosana, Ignasi; Juan, Angel A; Vicario, Jose Lopez; Morell, Antoni
2010-01-01
As Wireless Sensor Networks are being adopted by industry and agriculture for large-scale and unattended deployments, the need for reliable and energy-conserving protocols becomes critical. Physical- and link-layer efforts for energy conservation are mostly not considered by routing protocols, which concentrate on maintaining reliability and throughput. Gradient-based routing protocols route data through the most reliable links, aiming to ensure 99% packet delivery. However, they suffer from the so-called "hot spot" problem: the most reliable routes exhaust their energy quickly, partitioning the network and reducing the monitored area. To cope with this "hot spot" problem we propose ZERO, a combined approach at the network and link layers that increases network lifespan while conserving reliability levels by means of probabilistic load balancing techniques.
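The probabilistic load-balancing idea can be sketched as follows (illustrative weighting only, not ZERO's actual protocol): instead of always taking the most reliable next hop, a node picks among candidates with probability shaped by link reliability and residual energy, spreading load away from "hot spot" nodes.

```python
# Weighted random next-hop selection.
import random

def pick_next_hop(neighbors, energy_weight=1.0):
    # neighbors: list of (node_id, link_reliability 0..1, residual_energy 0..1)
    weights = [rel * (energy ** energy_weight) for _, rel, energy in neighbors]
    return random.choices([n for n, _, _ in neighbors], weights=weights)[0]

neighbors = [("A", 0.99, 0.2),   # very reliable but nearly drained
             ("B", 0.90, 0.9),   # slightly lossier, plenty of energy
             ("C", 0.85, 0.8)]
picks = [pick_next_hop(neighbors) for _ in range(10_000)]
print({n: picks.count(n) / len(picks) for n in "ABC"})  # B and C absorb most load
```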
Striped tertiary storage arrays
NASA Technical Reports Server (NTRS)
Drapeau, Ann L.
1993-01-01
Data striping is a technique for increasing the throughput and reducing the response time of large accesses to a storage system. In striped magnetic or optical disk arrays, a single file is striped, or interleaved, across several disks; in a striped tape system, files are interleaved across tape cartridges. Because a striped file can be accessed by several disk drives or tape recorders in parallel, the sustained bandwidth to the file is greater than in non-striped systems, where accesses to the file are restricted to a single device. It is argued that applying striping to tertiary storage systems will provide needed performance and reliability benefits. The performance benefits of striping for applications using large tertiary storage systems are discussed. Commonly available tape drives and libraries are introduced, along with their performance limitations, especially the long latency of tape accesses; an event-driven tertiary storage array simulator that is being used to understand the best ways of configuring these storage arrays is also described. The reliability problems of magnetic tape devices are discussed, and plans are described for modeling the overall reliability of striped tertiary storage arrays to identify the amount of error correction required. Finally, work by other members of the Sequoia group on access latency, on optimizing tertiary storage arrays that perform mostly writes, and on compression is discussed.
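A minimal sketch of a striped layout (assumed round-robin mapping, analogous to RAID-0 striping): a byte offset maps to a device and a local offset, so a large access spans several devices in parallel.

```python
# Map a file offset to (device, offset-on-device) in a striped array.
def locate(offset, stripe_unit=1 << 20, width=4):
    """Return (device index, offset on that device) for a byte offset."""
    stripe_no = offset // stripe_unit        # which stripe unit overall
    device = stripe_no % width               # round-robin across devices
    local = (stripe_no // width) * stripe_unit + offset % stripe_unit
    return device, local

# A 4 MiB read touches all four devices, so they can stream in parallel.
for off in range(0, 4 << 20, 1 << 20):
    print(off >> 20, "MiB ->", locate(off))
```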
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
Pipeline scada upgrade uses satellite terminal system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conrad, W.; Skovrinski, J.R.
In the recent automation of its supervisory control and data acquisition (scada) system, Transwestern Pipeline Co. has become the first to use very small aperture satellite terminals (VSAT's) for scada. A subsidiary of Enron Interstate Pipeline, Houston, Transwestern moves natural gas through a 4,400-mile system from West Texas, New Mexico, and Oklahoma to southern California markets. Transwestern's modernization, begun in November 1985, addressed problems associated with its aging control equipment which had been installed when the compressor stations were built in 1960. Over the years a combination of three different systems had been added. All were cumbersome to maintain and utilized outdated technology. Problems with reliability, high maintenance time, and difficulty in getting new parts were determining factors in Transwestern's decision to modernize its scada system. In addition, the pipeline was anticipating moving its control center from Roswell, N.M., to Houston and believed it would be impossible to marry the old system with the new computer equipment in Houston.
Using Multimodal Input for Autonomous Decision Making for Unmanned Systems
NASA Technical Reports Server (NTRS)
Neilan, James H.; Cross, Charles; Rothhaar, Paul; Tran, Loc; Motter, Mark; Qualls, Garry; Trujillo, Anna; Allen, B. Danette
2016-01-01
Autonomous decision making in the presence of uncertainty is a deeply studied problem space, particularly in the area of autonomous systems operations for land, air, sea, and space vehicles. Various techniques, ranging from single-algorithm solutions to complex ensemble classifier systems, have been utilized in a research context to solve mission-critical flight decisions. Realizing such systems on actual autonomous hardware, however, is a difficult systems integration problem, constituting a majority of applied robotics development timelines. The ability to reliably and repeatedly classify objects during a vehicle's mission execution is vital if the vehicle is to mitigate both static and dynamic environmental concerns, complete the mission successfully, and operate and return safely. In this paper, the Autonomy Incubator proposes and discusses an ensemble learning and recognition system planned for our autonomous framework, AEON, in selected domains, which fuses decision criteria using prior experience at both the individual classifier layer and the ensemble layer to mitigate environmental uncertainty during operation.
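A minimal sketch of ensemble decision fusion by experience-weighted voting (assumed details; AEON's actual fusion logic is more elaborate): each classifier votes for a label, and votes are weighted by that classifier's prior accuracy, so past experience shapes the fused decision.

```python
# Weighted-vote fusion across an ensemble of classifiers.
from collections import defaultdict

def fuse(predictions):
    # predictions: list of (label, prior_accuracy) pairs, one per classifier
    scores = defaultdict(float)
    for label, prior in predictions:
        scores[label] += prior
    return max(scores, key=scores.get)

votes = [("tree", 0.92),      # vision classifier, historically strong
         ("pole", 0.55),      # lidar classifier, weaker on this class
         ("tree", 0.78)]      # radar classifier
print(fuse(votes))            # "tree"
```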
The Next Generation of Interoperability Agents in Healthcare
Cardoso, Luciana; Marins, Fernando; Portela, Filipe; Santos, Manuel; Abelha, António; Machado, José
2014-01-01
Interoperability in health information systems is increasingly a requirement rather than an option. Standards and technologies, such as multi-agent systems, have proven to be powerful tools in interoperability issues. In the last few years, the authors have worked on developing the Agency for Integration, Diffusion and Archive of Medical Information (AIDA), which is an intelligent, agent-based platform to ensure interoperability in healthcare units. It is increasingly important to ensure the high availability and reliability of systems. The functions provided by the systems that treat interoperability cannot fail. This paper shows the importance of monitoring and controlling intelligent agents as a tool to anticipate problems in health information systems. The interaction between humans and agents through an interface that allows the user to create new agents easily and to monitor their activities in real time is also an important feature, as health systems evolve by adopting more features and solving new problems. A module was installed in Centro Hospitalar do Porto, increasing the functionality and the overall usability of AIDA. PMID:24840351
Documentation of pharmaceutical care: Validation of an intervention oriented classification system.
Maes, Karen A; Studer, Helene; Berger, Jérôme; Hersberger, Kurt E; Lampert, Markus L
2017-12-01
During the dispensing process, pharmacists may come across technical and clinical issues requiring a pharmaceutical intervention (PI). An intervention-oriented classification system is a helpful tool for documenting these PIs in a structured manner. We therefore developed the PharmDISC classification system (Pharmacists' Documentation of Interventions in Seamless Care). The aim of this study was to evaluate the PharmDISC system in the daily practice environment (in terms of interrater reliability, appropriateness, interpretability, acceptability, feasibility, and validity); to assess user satisfaction, the descriptive manual, and the online training; and to explore first implementation aspects. Twenty-one pharmacists from different community pharmacies each classified 30 prescriptions requiring a PI with the PharmDISC system on 5 selected days within 5 weeks. Interrater reliability was determined using model PIs, and Fleiss's kappa coefficients (κ) were calculated. User satisfaction was assessed by questionnaire with a 4-point Likert scale. The main outcome measures were interrater reliability (κ); appropriateness, interpretability, and validity (ratio of completely classified PIs to all PIs); and feasibility and acceptability (user satisfaction and suggestions). The PharmDISC system reached average substantial agreement (κ = 0.66). Of 519 documented PIs, 430 (82.9%) were completely classified. Most users found the system comprehensive (median user agreement 3 [2/3.25 quartiles]) and practical (3 [2.75/3]). The PharmDISC system raised awareness of drug-related problems for most users (n = 16). To facilitate its implementation, an electronic version that automatically connects to the prescription, together with a task manager for PIs needing follow-up, was suggested. Barriers could be time expenditure and a lack of understanding of the benefits. Substantial interrater reliability and acceptable user satisfaction indicate that the PharmDISC system is a valid system for documenting PIs in daily community pharmacy practice. © 2017 John Wiley & Sons, Ltd.
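For reference, a sketch of Fleiss' kappa in its standard multi-rater form (the ratings below are hypothetical; the study additionally used model PIs in its procedure and reports an average kappa of 0.66).

```python
# Fleiss' kappa from an item-by-category count matrix.
import numpy as np

def fleiss_kappa(counts):
    """counts[i][j] = number of raters assigning item i to category j."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # raters per item (constant)
    p_j = counts.sum(axis=0) / counts.sum()        # category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j**2)
    return (P_bar - P_e) / (1 - P_e)

# 5 interventions, 3 raters, 3 categories (e.g., drug choice / dose / other)
ratings = [[3, 0, 0],
           [2, 1, 0],
           [0, 3, 0],
           [0, 2, 1],
           [0, 0, 3]]
print(round(fleiss_kappa(ratings), 2))   # ~0.59
```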
Galway, Lindsay P.
2016-01-01
Access to safe and reliable drinking water is commonplace for most Canadians. However, the right to safe and reliable drinking water is denied to many First Nations peoples across the country, highlighting a priority public health and environmental justice issue in Canada. This paper describes trends and characteristics of drinking water advisories, used as a proxy for reliable access to safe drinking water, among First Nations communities in the province of Ontario. Visual and statistical tools were used to summarize the advisory data in general, temporal trends, and characteristics of the drinking water systems in which advisories were issued. Overall, 402 advisories were issued during the study period. The number of advisories increased from 25 in 2004 to 75 in 2013. The average advisory duration was 294 days. Most advisories were reported in summer months, and equipment malfunction was the most commonly reported reason for issuing an advisory. Nearly half of all advisories occurred in drinking water systems where additional operator training was needed. These findings underscore that the prevalence of drinking water advisories in First Nations communities is a problem that must be addressed. Concerted and multi-faceted efforts are called for to improve the provision of safe and reliable drinking water in First Nations communities. PMID:27196919
Synthesizing cognition in neuromorphic electronic systems
Neftci, Emre; Binas, Jonathan; Rutishauser, Ueli; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney J.
2013-01-01
The quest to implement intelligent processing in electronic neuromorphic systems lacks methods for achieving reliable behavioral dynamics on substrates of inherently imprecise and noisy neurons. Here we report a solution to this problem that involves first mapping an unreliable hardware layer of spiking silicon neurons into an abstract computational layer composed of generic reliable subnetworks of model neurons and then composing the target behavioral dynamics as a “soft state machine” running on these reliable subnets. In the first step, the neural networks of the abstract layer are realized on the hardware substrate by mapping the neuron circuit bias voltages to the model parameters. This mapping is obtained by an automatic method in which the electronic circuit biases are calibrated against the model parameters by a series of population activity measurements. The abstract computational layer is formed by configuring neural networks as generic soft winner-take-all subnetworks that provide reliable processing by virtue of their active gain, signal restoration, and multistability. The necessary states and transitions of the desired high-level behavior are then easily embedded in the computational layer by introducing only sparse connections between some neurons of the various subnets. We demonstrate this synthesis method for a neuromorphic sensory agent that performs real-time context-dependent classification of motion patterns observed by a silicon retina. PMID:23878215
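A rate-based caricature of the soft winner-take-all behavior described above (not the silicon-neuron implementation): self-excitation plus a shared inhibitory signal selects the strongest input and restores its activity to a high level, giving the signal restoration and multistability the synthesis method relies on.

```python
# Soft winner-take-all dynamics with global inhibition.
import numpy as np

def soft_wta(inputs, alpha=1.2, beta=2.0, steps=200, dt=0.05):
    x = np.zeros_like(inputs)
    for _ in range(steps):
        inhibition = beta * x.sum()                   # global inhibitory unit
        drive = inputs + alpha * x - inhibition       # self-excitation - inhibition
        x += dt * (-x + np.maximum(drive, 0.0))       # rectified rate dynamics
    return x

print(np.round(soft_wta(np.array([1.0, 1.1, 0.9])), 2))  # only unit 1 stays active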
Simpson, V; Hughes, M; Wilkinson, J; Herrick, A L; Dinsdale, G
2018-03-01
Digital ulcers are a major problem in patients with systemic sclerosis (SSc), causing severe pain and impairment of hand function. In addition, digital ulcers heal slowly and sometimes become infected, which can lead to gangrene and necessitate amputation if appropriate intervention is not taken. A reliable, objective method for assessing digital ulcer healing or progression is needed in both the clinical and research arenas. This study was undertaken to compare 2 computer-assisted planimetry methods of measurement of digital ulcer area on photographs (ellipse and freehand regions of interest [ROIs]), and to assess the reliability of photographic calibration and the 2 methods of area measurement. Photographs were taken of 107 digital ulcers in 36 patients with SSc spectrum disease. Three raters assessed the photographs. Custom software allowed raters to calibrate photograph dimensions and draw ellipse or freehand ROIs. The shapes and dimensions of the ROIs were saved for further analysis. Calibration (by a single rater performing 5 repeats per image) produced an intraclass correlation coefficient (intrarater reliability) of 0.99. The mean ± SD areas of digital ulcers assessed using ellipse and freehand ROIs were 18.7 ± 20.2 mm² and 17.6 ± 19.3 mm², respectively. Intrarater and interrater reliability of the ellipse ROI were 0.97 and 0.77, respectively. For the freehand ROI, the intrarater and interrater reliability were 0.98 and 0.76, respectively. Our findings indicate that computer-assisted planimetry methods applied to SSc-related digital ulcers can be extremely reliable. Further work is needed to move toward applying these methods as outcome measures for clinical trials and in clinical settings. © 2017, American College of Rheumatology.
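A minimal sketch of the ellipse-ROI computation (hypothetical calibration values, not the study's software): the pixel-space ellipse area is scaled to physical units by the photograph's calibration factor.

```python
# Convert an ellipse ROI drawn in pixels to an ulcer area in mm^2.
import math

def ellipse_area_mm2(semi_major_px, semi_minor_px, mm_per_px):
    area_px = math.pi * semi_major_px * semi_minor_px
    return area_px * mm_per_px ** 2

# Ellipse with 60 px and 40 px semi-axes on a photo calibrated at 0.05 mm/px
print(round(ellipse_area_mm2(60, 40, 0.05), 1))   # ~18.8 mm^2
```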
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds on the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions, and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands, such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized in different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation: the SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923); the PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE interpret the same SURE language but utilize different solution methods. ASSIST is a preprocessor that generates the SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR-13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format; it is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR-14921) is written in ANSI C-language and PASCAL.
An ANSI-compliant C compiler is required to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
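SURE itself derives algebraic upper and lower bounds rather than simulating, but the semi-Markov structure it models (slow exponential fault arrivals racing fast, generally distributed recoveries, with failure when a second fault lands inside a recovery window) can be illustrated with a small Monte Carlo sketch. Everything below is hypothetical: the rates are unrealistically large so the simulation resolves, the model is a generic triplex, and the syntax is ordinary Python, not the SURE input language.

    import random

    # Hypothetical triplex system: a fault in one unit triggers a fast
    # reconfiguration; a second fault arriving before reconfiguration
    # completes (a "critical pair") drives the system to a death state.
    LAMBDA = 1e-2          # per-unit fault arrival rate (per hour); slow
    RECOVERY_MEAN = 5e-2   # mean reconfiguration time (hours); fast
    RECOVERY_SD = 1e-2
    MISSION_TIME = 10.0    # hours
    N_UNITS = 3

    def one_mission(rng: random.Random) -> bool:
        """Simulate one mission; return True if a death state is reached."""
        t, units = 0.0, N_UNITS
        while True:
            t += rng.expovariate(units * LAMBDA)   # slow: next fault arrival
            if t > MISSION_TIME:
                return False                       # mission completed safely
            if units == 1:
                return True                        # last working unit failed
            recovery = max(0.0, rng.gauss(RECOVERY_MEAN, RECOVERY_SD))
            # Fast recovery raced against the next fault among the others.
            if rng.expovariate((units - 1) * LAMBDA) < recovery:
                return True                        # critical-pair failure
            units -= 1                             # faulty unit removed

    rng = random.Random(42)
    n = 200_000
    deaths = sum(one_mission(rng) for _ in range(n))
    print(f"estimated death-state probability: {deaths / n:.2e}")

A simulation like this needs enormous sample sizes to resolve the very small probabilities typical of ultra-reliable systems, which is precisely why SURE's algebraic bounding theorems are attractive.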
Arunakul, Marut; Arunakul, Preeyaphan; Suesiritumrong, Chakhrist; Angthong, Chayanin; Chernchujit, Bancha
2015-06-01
Self-administered questionnaires have become an important tool for the clinical outcome assessment of foot and ankle-related problems. The Foot and Ankle Ability Measure (FAAM) subjective form is a region-specific questionnaire that is widely used and has shown sufficient validity and reliability in previous studies. The aims of this study were to translate the original English version of the FAAM into Thai and to evaluate the validity and reliability of the Thai FAAM in patients with foot and ankle-related problems. The FAAM subjective form was translated into Thai using a forward-backward translation protocol, after which reliability and validity were tested using the responses of 60 consecutive patients to two questionnaires: the Thai FAAM subjective form and the short form (SF)-36. Validity was tested by correlating the scores from the two questionnaires; reliability was assessed by measuring test-retest reliability and internal consistency. The Thai FAAM scores, comprising the activities of daily living (ADL) and Sports subscales, demonstrated sufficient correlations with the physical functioning (PF) and physical composite score (PCS) domains of the SF-36 (statistically significant at the p < 0.001 level, with values ≥ 0.5). The test-retest study revealed high intraclass correlation coefficients of 0.8 and 0.77 for the two subscales, respectively, and internal consistency was strong (Cronbach alpha = 0.94 and 0.88, respectively). The Thai version of the FAAM subjective form retains the characteristics of the original version and proved to be a reliable evaluation instrument for patients with foot and ankle-related problems.
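Internal consistency as reported above (Cronbach alpha) is straightforward to compute from a respondents-by-items score matrix. The following minimal sketch uses synthetic Likert-style data purely for illustration; it is not the study's data or analysis code.

    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        n_items = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)        # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
        return n_items / (n_items - 1) * (1 - item_vars.sum() / total_var)

    # Synthetic responses: 60 patients x 8 items on a 0-4 scale, driven by
    # a shared latent trait so the items correlate (hypothetical data).
    rng = np.random.default_rng(0)
    latent = rng.normal(2, 1, size=(60, 1))
    items = np.clip(np.round(latent + rng.normal(0, 0.5, (60, 8))), 0, 4)
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")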
Monitoring Energy Balance in Breast Cancer Survivors Using a Mobile App: Reliability Study
Lozano-Lozano, Mario; Galiano-Castillo, Noelia; Martín-Martín, Lydia; Pace-Bedetti, Nicolás; Fernández-Lao, Carolina; Cantarero-Villanueva, Irene
2018-01-01
Background: The majority of breast cancer survivors do not meet diet and physical activity recommendations. To address this problem, we developed a mobile health (mHealth) app for assessing and monitoring healthy lifestyles in breast cancer survivors, called the Energy Balance on Cancer (BENECA) mHealth system. BENECA is a novel, interactive mHealth app that allows breast cancer survivors to engage in monitoring their own energy balance, and it was designed to facilitate adherence to healthy lifestyles in an easy and intuitive way. Objective: The objective of the study was to assess the concurrent validity and test-retest reliability of the BENECA mHealth system against gold standard assessment methods for diet and physical activity. Methods: A reliability study was conducted with 20 breast cancer survivors. Triaxial accelerometers (ActiGraph GT3X+) were used as the gold standard for 8 consecutive days, in addition to two 24-hour dietary recalls, four dietary records, and sociodemographic questionnaires. Two-way random-effects intraclass correlation coefficients, a linear regression analysis, and a Passing-Bablok regression were calculated. Results: The reliability estimates were very high for all variables (alpha≥.90); the lowest reliability was found for fruit and vegetable intake (alpha=.94). Reliability of the BENECA system against the accelerometer and the dietary assessment instruments was also very high (intraclass correlation coefficient=.90). We found a mean match rate of 93.51% between instruments and a mean phantom rate of 3.35%. The Passing-Bablok regression analysis did not show considerable bias in fat percentage, portions of fruits and vegetables, or minutes of moderate to vigorous physical activity. Conclusions: The BENECA mHealth app could be a new tool to measure energy balance in breast cancer survivors in a reliable and simple way. Our results support the use of this technology not only to encourage changes in breast cancer survivors' lifestyles, but also to remotely monitor energy balance. Trial Registration: ClinicalTrials.gov NCT02817724; https://clinicaltrials.gov/ct2/show/NCT02817724 (Archived by WebCite at http://www.webcitation.org/6xVY1buCc) PMID:29588273
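Passing-Bablok regression, used above to check for systematic bias between the app and the reference instruments, is a nonparametric method-comparison fit whose slope is a shifted median of all pairwise slopes, making it robust to outliers and symmetric in the two methods. The sketch below (synthetic data; not the study's code) shows the basic computation.

    import numpy as np

    def passing_bablok(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
        """Passing-Bablok regression: returns (slope, intercept)."""
        n = len(x)
        slopes = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                dx, dy = x[j] - x[i], y[j] - y[i]
                if dx == 0 and dy == 0:
                    continue                 # identical points carry no slope
                s = np.inf if dx == 0 else dy / dx
                if s != -1:                  # slopes of exactly -1 are excluded
                    slopes.append(s)
        slopes = np.sort(slopes)
        n_s = len(slopes)
        k = np.sum(slopes < -1)              # offset that debiases the median
        if n_s % 2:
            slope = slopes[n_s // 2 + k]
        else:
            slope = 0.5 * (slopes[n_s // 2 + k - 1] + slopes[n_s // 2 + k])
        intercept = np.median(y - slope * x)
        return slope, intercept

    # Hypothetical comparison: app vs. accelerometer minutes of MVPA.
    rng = np.random.default_rng(1)
    ref = rng.uniform(10, 60, 20)
    app = ref * 1.02 + rng.normal(0, 2, 20)
    print(passing_bablok(ref, app))

A slope near 1 and an intercept near 0, with confidence intervals covering those values, is the usual "no considerable bias" conclusion drawn from this analysis.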
Reliability Problems of the Datum: Solutions for Questionnaire Responses.
ERIC Educational Resources Information Center
Bastick, Tony
Questionnaires often ask for estimates, and these estimates are given with different reliabilities. It is difficult to know the different reliabilities of single estimates and to take these into account in subsequent analyses. This paper contains a practical example to show that not taking the reliability of different responses into account can…
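The record above is truncated, but one standard way to incorporate responses of unequal reliability into subsequent analyses (a generic technique, not necessarily the paper's proposal) is inverse-variance weighting, sketched below with made-up numbers.

    import numpy as np

    # Hypothetical questionnaire estimates with per-response reliability
    # expressed as a standard error (smaller = more reliable).
    estimates = np.array([4.0, 5.5, 3.8, 6.0])
    std_errs = np.array([0.5, 2.0, 0.6, 3.0])

    weights = 1.0 / std_errs**2                   # inverse-variance weights
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))    # SE of the pooled estimate

    print(f"unweighted mean: {estimates.mean():.2f}")
    print(f"reliability-weighted mean: {pooled:.2f} +/- {pooled_se:.2f}")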
Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization
NASA Astrophysics Data System (ADS)
Golari, Mehdi
Electric energy is crucial to almost every aspect of modern life, and modern electric power systems face challenges of efficiency, economics, sustainability, and reliability. Growing electrical energy demand, distributed generation, the integration of uncertain renewable energy resources, and demand-side management are among the main sources of this growing complexity. Additionally, the elements of power systems are vulnerable to failures for many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainty in the demands, supplies, and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty, enabling a decision maker to incorporate knowledge of the intrinsic uncertainty into the decision-making process. In this dissertation, we focus on the application of two-stage and multistage stochastic programming to the modeling and optimization of electric energy systems; in particular, we develop models and algorithms addressing sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts through so-called islanding operations. We present a two-stage stochastic mixed-integer model for finding optimal islanding operations as a powerful preventive action against cascading failures in extreme contingencies, study the properties of this problem, and propose efficient solution methods for large-scale power systems. Numerical results demonstrate the effectiveness of the model and the performance of the solution methods. Next, we address sustainability by considering the integration of renewable energy resources into the production planning of energy-intensive manufacturing industries. A growing number of manufacturing companies are considering renewable energies to meet their energy requirements, both to move toward green manufacturing and to decrease their energy costs; however, the intermittent nature of renewables makes long-term planning for their efficient use difficult. We propose a scheme in which manufacturing companies satisfy their energy requirements using onsite and grid renewable energy, provided by their own investments and by energy utilities, alongside conventional grid energy. We formulate a multistage stochastic programming model, develop an efficient solution method, and examine the proposed framework on a test case simulated from a real-world semiconductor company. Finally, we evaluate the long-term profitability of this scheme via the so-called value of multistage stochastic programming.
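To make the two-stage structure concrete, here is a minimal extensive-form (deterministic-equivalent) sketch using the open-source PuLP modeler: a binary build decision and a capacity level are fixed before demand is known, while per-scenario dispatch variables provide recourse. The scenario data, costs, and the 200-unit capacity bound are all invented for illustration; this is not the dissertation's islanding or production-planning model.

    from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

    # Hypothetical scenario data: (probability, demand) per scenario.
    scenarios = {"low": (0.3, 80.0), "mid": (0.5, 100.0), "high": (0.2, 130.0)}
    FIXED_COST, CAP_COST = 50.0, 2.0     # first-stage (build) costs, assumed
    GEN_COST, GRID_COST = 0.5, 4.0       # second-stage unit costs, assumed

    prob = LpProblem("two_stage_capacity", LpMinimize)

    # First-stage (here-and-now) decisions, shared by all scenarios.
    build = LpVariable("build", cat="Binary")
    cap = LpVariable("capacity", lowBound=0)

    # Second-stage (recourse) decisions, one copy per scenario.
    gen = {s: LpVariable(f"gen_{s}", lowBound=0) for s in scenarios}
    grid = {s: LpVariable(f"grid_{s}", lowBound=0) for s in scenarios}

    # Objective: first-stage cost plus expected second-stage cost.
    prob += FIXED_COST * build + CAP_COST * cap + lpSum(
        p * (GEN_COST * gen[s] + GRID_COST * grid[s])
        for s, (p, _) in scenarios.items()
    )

    prob += cap <= 200 * build                   # no capacity unless built
    for s, (_, demand) in scenarios.items():
        prob += gen[s] <= cap                    # dispatch limited by capacity
        prob += gen[s] + grid[s] >= demand       # meet demand in every scenario

    prob.solve()
    print("build:", value(build), "capacity:", value(cap))
    for s in scenarios:
        print(s, "gen:", value(gen[s]), "grid:", value(grid[s]))

In a multistage model the same pattern repeats along a scenario tree, with nonanticipativity tying together decisions at nodes that share a history; the extensive form above is the two-stage special case.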