NASA Astrophysics Data System (ADS)
Taylor, John R.; Stolz, Christopher J.
1993-08-01
Laser system performance and reliability depend on the related performance and reliability of the optical components that define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.
NASA Astrophysics Data System (ADS)
Taylor, J. R.; Stolz, C. J.
1992-12-01
Laser system performance and reliability depend on the related performance and reliability of the optical components that define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.
NASA Astrophysics Data System (ADS)
Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang
2018-03-01
A reliability mathematical model for a high-temperature, high-pressure multi-stage decompression control valve (HMDCV) is established on the basis of stress-strength interference theory, and a temperature correction coefficient is introduced to revise the material fatigue limit at high temperature. The reliability of the key high-risk components and the fatigue sensitivity curve of each component are calculated and analyzed by combining a fatigue-life analysis of the control valve with the reliability model, and the proportional contribution of each component to fatigue failure of the control valve system is obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life of the main pressure-bearing parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; the stress concentration in key parts of the control valve can be reduced in the design process by improving the structure.
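For illustration, here is a minimal sketch of the stress-strength interference calculation described above, assuming independent, normally distributed stress and strength and a scalar temperature correction coefficient that derates the mean fatigue limit; the function name and all parameter values are illustrative assumptions, not taken from the paper.

```python
from math import sqrt
from statistics import NormalDist

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress, k_temp=1.0):
    """Reliability R = P(strength > stress) for independent normal stress and strength.

    k_temp is a temperature correction coefficient applied to the mean fatigue
    limit (k_temp < 1 derates the room-temperature strength at high temperature).
    """
    mu_s = k_temp * mu_strength
    beta = (mu_s - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)  # reliability index
    return NormalDist().cdf(beta)

# Illustrative numbers only: a 400 MPa fatigue limit derated by 0.85 at temperature,
# against a 250 +/- 30 MPa operating stress in a valve component.
print(interference_reliability(400.0, 40.0, 250.0, 30.0, k_temp=0.85))
```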
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, directly supply water or oxygen, or, if necessary, bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
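The spares-sizing logic discussed above can be illustrated with a constant-failure-rate (Poisson) model: the probability that n spares cover all failures of one component over the mission is the cumulative Poisson probability. A sketch under that assumption; the failure rate, mission duration, and reliability goal are made-up illustrative numbers.

```python
from math import exp, factorial

def prob_spares_sufficient(failure_rate_per_hr, mission_hours, n_spares):
    """P(number of failures <= n_spares) for a constant-failure-rate component."""
    lam = failure_rate_per_hr * mission_hours
    return sum(lam**k * exp(-lam) / factorial(k) for k in range(n_spares + 1))

def spares_needed(failure_rate_per_hr, mission_hours, goal):
    """Smallest spare count meeting a reliability goal for this one component."""
    n = 0
    while prob_spares_sufficient(failure_rate_per_hr, mission_hours, n) < goal:
        n += 1
    return n

# Illustrative: a 1e-4 per-hour component over a ~2.5-year (22,000 h) Mars mission.
n_spares = spares_needed(1e-4, 22_000, goal=0.999)
print(n_spares)
# An underestimated failure rate (e.g., 2e-4 actual) makes the same spare count insufficient:
print(prob_spares_sufficient(2e-4, 22_000, n_spares))
```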
NASA Astrophysics Data System (ADS)
Yu, Zheng
2002-08-01
Facing the new demands of the optical fiber communications market, almost all the performance and reliability of optical network system are dependent on the qualification of the fiber optics components. So, how to comply with the system requirements, the Telcordia / Bellcore reliability and high-power testing has become the key issue for the fiber optics components manufacturers. The qualification of Telcordia / Bellcore reliability or high-power testing is a crucial issue for the manufacturers. It is relating to who is the outstanding one in the intense competition market. These testing also need maintenances and optimizations. Now, work on the reliability and high-power testing have become the new demands in the market. The way is needed to get the 'Triple-Win' goal expected by the component-makers, the reliability-testers and the system-users. To those who are meeting practical problems for the testing, there are following seven topics that deal with how to shoot the common mistakes to perform qualify reliability and high-power testing: ¸ Qualification maintenance requirements for the reliability testing ¸ Lots control for preparing the reliability testing ¸ Sampling select per the reliability testing ¸ Interim measurements during the reliability testing ¸ Basic referencing factors relating to the high-power testing ¸ Necessity of re-qualification testing for the changing of producing ¸ Understanding the similarity for product family by the definitions
Test-retest reliability of cognitive EEG
NASA Technical Reports Server (NTRS)
McEvoy, L. K.; Smith, M. E.; Gevins, A.
2000-01-01
OBJECTIVE: Task-related EEG is sensitive to changes in cognitive state produced by increased task difficulty and by transient impairment. If task-related EEG has high test-retest reliability, it could be used as part of a clinical test to assess changes in cognitive function. The aim of this study was to determine the reliability of the EEG recorded during the performance of a working memory (WM) task and a psychomotor vigilance task (PVT). METHODS: EEG was recorded while subjects rested quietly and while they performed the tasks. Within-session (test-retest interval of approximately 1 h) and between-session (test-retest interval of approximately 7 days) reliability was calculated for four EEG components: frontal midline theta at Fz, posterior theta at Pz, and slow and fast alpha at Pz. RESULTS: Task-related EEG was highly reliable within and between sessions (r > 0.9 for all components in the WM task, and r > 0.8 for all components in the PVT). Resting EEG also showed high reliability, although the magnitude of the correlation was somewhat smaller than that of the task-related EEG (r > 0.7 for all 4 components). CONCLUSIONS: These results suggest that under appropriate conditions, task-related EEG has sufficient retest reliability for use in assessing clinical changes in cognitive status.
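For context, the test-retest correlations reported here are ordinary Pearson correlations between repeated measurements of the same spectral feature. A minimal sketch on synthetic data (the variable names and noise levels are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "frontal midline theta power" for 20 subjects, two sessions:
# a stable subject-specific level plus session-to-session measurement noise.
true_level = rng.normal(5.0, 1.0, size=20)
session1 = true_level + rng.normal(0.0, 0.3, size=20)
session2 = true_level + rng.normal(0.0, 0.3, size=20)

r = np.corrcoef(session1, session2)[0, 1]  # test-retest reliability
print(f"test-retest r = {r:.2f}")
```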
Reliability Assessment for COTS Components in Space Flight Applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Mazzuchi, Thomas A.
2001-01-01
Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system designs. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and the quantity of systems designed and developed for space applications is much lower than that produced for commercial applications. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.
Teamwork as an Essential Component of High-Reliability Organizations
Baker, David P; Day, Rachel; Salas, Eduardo
2006-01-01
Organizations are increasingly becoming dynamic and unstable. This evolution has given rise to greater reliance on teams and increased complexity in terms of team composition, skills required, and degree of risk involved. High-reliability organizations (HROs) are those that exist in such hazardous environments where the consequences of errors are high, but the occurrence of error is extremely low. In this article, we argue that teamwork is an essential component of achieving high reliability particularly in health care organizations. We describe the fundamental characteristics of teams, review strategies in team training, demonstrate the criticality of teamwork in HROs and finally, identify specific challenges the health care community must address to improve teamwork and enhance reliability. PMID:16898980
Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K
2016-09-01
The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs-Leadership, Safety Culture, and Robust Process Improvement ®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model-teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from monitoring, telemetry, and control systems, etc. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of manufacturing such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task completion and eliminates common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used to solve optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
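As a toy illustration of the type-variety idea, the sketch below brute-forces how many copies of each of several functionally equivalent component types to use in a single subsystem under a cost budget; the real formulation in the paper is a two-level, large-dimension discrete optimization, and the reliability/cost figures here are invented.

```python
from itertools import product

# Candidate component types performing the same detection function.
# (reliability, cost) pairs are illustrative, not from the paper.
TYPES = {"A": (0.90, 4.0), "B": (0.85, 2.5), "C": (0.80, 1.5)}
BUDGET = 10.0
MAX_PER_TYPE = 3

def subsystem_reliability(counts):
    """P(at least one redundant component works), assuming independent failures
    across copies and types; mixing types also mitigates common-cause failure,
    which this simple independence model does not capture explicitly."""
    p_all_fail = 1.0
    for name, n in counts.items():
        r, _ = TYPES[name]
        p_all_fail *= (1.0 - r) ** n
    return 1.0 - p_all_fail

best = None
for combo in product(range(MAX_PER_TYPE + 1), repeat=len(TYPES)):
    counts = dict(zip(TYPES, combo))
    cost = sum(TYPES[name][1] * n for name, n in counts.items())
    if cost <= BUDGET and sum(combo) >= 1:
        rel = subsystem_reliability(counts)
        if best is None or rel > best[0]:
            best = (rel, counts, cost)

print(best)  # (best reliability, chosen counts per type, total cost)
```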
NASA Astrophysics Data System (ADS)
Sembiring, N.; Panjaitan, N.; Angelita, S.
2018-02-01
PT. XYZ is a privately owned company engaged in processing rubber into crumb rubber. Production is supported by a number of interacting machines and pieces of equipment intended to achieve optimal productivity. The machines used in the production process are the Conveyor Breaker, Breaker, Rolling Pin, Hammer Mill, Mill Roll, Conveyor, Shredder Crumb, and Dryer. The maintenance system at PT. XYZ is corrective maintenance, i.e., repairing or replacing engine components after a breakdown occurs. Replacing components under corrective maintenance causes the machine to stop operating while production is in progress, so production time is lost while the operator replaces the damaged components. This lost production time means production targets are not reached and leads to high loss costs; the cost for all components is Rp. 4.088.514.505, which is very high just for maintaining a Mill Roll machine. PT. XYZ therefore needs preventive maintenance, i.e., scheduling the replacement of engine components and improving maintenance efficiency. The methods used are Reliability Engineering and Maintenance Value Stream Mapping (MVSM). The data needed in this research are the time intervals between component failures, opportunity cost, labor cost, component cost, corrective repair time, preventive repair time, Mean Time To Opportunity (MTTO), Mean Time To Repair (MTTR), and Mean Time To Yield (MTTY). In this research, the critical components of the Mill Roll machine are the Spier, Bushing, Bearing, Coupling, and Roll. The failure distribution, reliability, MTTF, cost of failure, cost of prevention, current state map, and future state map are determined so that the replacement time with the lowest maintenance cost can be found for each critical component and a Standard Operating Procedure (SOP) can be developed. For the critical components identified, the replacement interval for the Spier component is 228 days with a reliability of 0.503171, the Bushing 240 days with a reliability of 0.36861, the Bearing 202 days with a reliability of 0.503058, the Coupling 247 days with a reliability of 0.50108, and the Roll 301 days with a reliability of 0.373525. The results show that cost decreases from Rp 300,688,114 to Rp 244,384,371 when moving from corrective to preventive maintenance, while maintenance efficiency increases with the application of preventive maintenance: for the Spier component from 54.0540541% to 74.07407%, the Bushing from 52.3809524% to 68.75%, the Bearing from 40% to 52.63158%, the Coupling from 60.9756098% to 71.42857%, and the Roll from 64.516129% to 74.7663551%.
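A preventive replacement interval of the kind reported above is commonly chosen by minimizing the expected cost per unit time of an age-replacement policy under a Weibull failure model; a sketch follows, with Weibull parameters and costs that are illustrative assumptions rather than the study's fitted values.

```python
import numpy as np
from scipy.integrate import quad

def weibull_reliability(t, beta, eta):
    return np.exp(-(t / eta) ** beta)

def cost_rate(t_p, beta, eta, c_preventive, c_failure):
    """Expected maintenance cost per day for an age-replacement policy at age t_p."""
    R = lambda t: weibull_reliability(t, beta, eta)
    expected_cycle_cost = c_preventive * R(t_p) + c_failure * (1 - R(t_p))
    expected_cycle_length, _ = quad(R, 0.0, t_p)  # mean time to replacement in one cycle
    return expected_cycle_cost / expected_cycle_length

# Illustrative Weibull parameters (shape, characteristic life in days) and costs
# for one critical component.
beta, eta = 2.0, 300.0
c_p, c_f = 5_000_000, 20_000_000
candidates = np.arange(30, 400, 1)
rates = [cost_rate(t, beta, eta, c_p, c_f) for t in candidates]
t_best = candidates[int(np.argmin(rates))]
print(t_best, weibull_reliability(t_best, beta, eta))
```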
Reliability prediction of large fuel cell stack based on structure stress analysis
NASA Astrophysics Data System (ADS)
Liu, L. F.; Liu, B.; Wu, C. W.
2017-09-01
The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and the contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We have investigated the influence of the parameter variation coefficient on the probability distribution of the contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained from the stress-strength interference model. To obtain the optimal contact stress between the contacting components, the component thickness and the stack clamping force are optimized. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for high fuel cell stack reliability.
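Here is a minimal first-order second-moment sketch of the kind of reliability calculation described, assuming normal stress and strength characterized by their means and coefficients of variation; the numbers are illustrative, not the paper's.

```python
from math import sqrt
from statistics import NormalDist

def fosm_reliability(mu_strength, cov_strength, mu_stress, cov_stress):
    """First-order second-moment estimate of P(strength > contact stress).

    cov_* are coefficients of variation (std/mean), the kind of parameter
    variation coefficient the abstract studies for the production/clamping process.
    """
    sd_strength = cov_strength * mu_strength
    sd_stress = cov_stress * mu_stress
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return NormalDist().cdf(beta)

# Illustrative: 1.2 MPa mean contact stress against a 2.0 MPa limit with 5% scatter;
# raising the stress variation coefficient lowers the component reliability.
for cov in (0.05, 0.10, 0.20):
    print(cov, fosm_reliability(2.0, 0.05, 1.2, cov))
```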
Space Radiation Effects and Reliability Consideration for the Proposed Jupiter Europa Orbiter
NASA Technical Reports Server (NTRS)
Johnston, Allan
2011-01-01
The proposed Jupiter Europa Orbiter (JEO) mission to explore the Jovian moon Europa poses a number of challenges. The spacecraft must operate for about seven years during the transit time to the vicinity of Jupiter, and then endure unusually high radiation levels during the exploration and orbiting phases. The ability to withstand unusually high total dose levels is critical for the mission, along with meeting the high reliability standards for flagship NASA missions. Reliability of new microelectronic components must be sufficiently understood to meet overall mission requirements.
The reliability of the pass/fail decision for assessments comprised of multiple components
Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana
2015-01-01
Objective: The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When “conjunctively” combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg’s Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Results: Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. Conclusion: The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached – for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements. PMID:26483855
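The following Monte Carlo sketch is not the Douglas and Mislevy procedure itself, but it illustrates the paper's central point: conjunctively combining three individually reliable components can still yield only moderate decision consistency (kappa). The per-component reliability, cutoff, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_students, rel, cut = 100_000, 0.80, -1.0   # per-component reliability; cutoff in z-units

# Common latent ability; per-component measurement error sized so that the
# observed-score reliability of each component is roughly `rel`.
true_ability = rng.normal(size=(n_students, 1))
noise_sd = np.sqrt(1.0 / rel - 1.0)

def administer():
    """One administration of three components; the overall decision is conjunctive:
    a student passes only if all three observed scores exceed the cutoff."""
    observed = true_ability + rng.normal(scale=noise_sd, size=(n_students, 3))
    return (observed > cut).all(axis=1)

pass1, pass2 = administer(), administer()
p1, p2 = pass1.mean(), pass2.mean()
p_o = (pass1 == pass2).mean()                   # raw agreement of the overall decision
p_e = p1 * p2 + (1 - p1) * (1 - p2)             # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(f"pass rates {p1:.2f}/{p2:.2f}, agreement {p_o:.2f}, kappa {kappa:.2f}")
```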
NASA Astrophysics Data System (ADS)
Ren, Xusheng; Qian, Longsheng; Zhang, Guiyan
2005-12-01
According to the Generic Reliability Assurance Requirements for Passive Optical Components, GR-1221-CORE (Issue 2, January 1999), reliability determination testing was carried out on different kinds of passive optical components used in uncontrolled environments. The test conditions for the High Temperature Storage (Dry Heat) Test and the Damp Heat Test are identical except for the humidity condition. In order to save test time and cost, a series of comparative tests was run to examine whether damp heat testing can replace dry heat testing. By controlling the failure mechanisms of dry heat and damp heat for passive optical components, comparative dry heat and damp heat tests were performed on passive optical components (including DWDM, CWDM, couplers, isolators, and mini isolators), and the test results for the isolator are reported. Telcordia testing tests not only the reliability of the passive optical components but also the patience of the experimenter: its cost in money, manpower, and material resources, and especially in time, is a heavy burden for the company. After a series of tests, we find that damp heat testing can adequately assess the reliability of passive optical components, and that an equipment manufacturer, in agreement with the component manufacturer, could omit the dry heat test if the damp heat test is performed first and passed.
Fracture mechanics concepts in reliability analysis of monolithic ceramics
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.; Gyekenyesi, John P.
1987-01-01
Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation requires that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
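The failure-probability form referred to above, a Weibull distribution with an optional threshold (third) parameter, can be sketched as below; the modulus, characteristic strength, and threshold values are illustrative, and the sketch omits the integration over stressed volume and multiaxial stress states used in a full component reliability analysis.

```python
import numpy as np

def weibull_failure_probability(stress, m, sigma0, sigma_u=0.0):
    """P_f = 1 - exp(-((sigma - sigma_u)/sigma_0)^m) for sigma > sigma_u.

    m: Weibull modulus, sigma0: scale (characteristic strength),
    sigma_u: threshold strength (the third Weibull parameter, often taken as zero).
    """
    s = np.asarray(stress, dtype=float)
    z = np.clip(s - sigma_u, 0.0, None) / sigma0
    return 1.0 - np.exp(-(z ** m))

# Illustrative: modulus 10, characteristic strength 400 MPa, zero vs 100 MPa threshold.
for sigma_u in (0.0, 100.0):
    print(sigma_u, weibull_failure_probability([200, 300, 400], 10, 400, sigma_u))
```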
NASA-DoD Lead-Free Electronics Project
NASA Technical Reports Server (NTRS)
Kessel, Kurt R.
2009-01-01
In response to concerns about risks from lead-free induced faults to high reliability products, NASA has initiated a multi-year project to provide manufacturers and users with data to clarify the risks of lead-free materials in their products. The project will also be of interest to component manufacturers supplying to high reliability markets. The project was launched in November 2006. The primary technical objective of the project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of lead-free alloys - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of mixed alloys, lead component finish/lead-free solder and lead-free component finish/SnPb solder.
Accounting for Proof Test Data in a Reliability Based Design Optimization Framework
NASA Technical Reports Server (NTRS)
Ventor, Gerharad; Scotti, Stephen J.
2012-01-01
This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
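The basic probabilistic effect of a proof test, conditioning the strength distribution on survival of the proof load, can be sketched with a simple Monte Carlo model; this is not the paper's reliability-based design optimization formulation, and the normal distributions and load levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def reliability_after_proof(mu_R, sd_R, mu_S, sd_S, proof_load, n=1_000_000):
    """Monte Carlo estimate of P(strength > service load | strength > proof load)."""
    strength = rng.normal(mu_R, sd_R, n)
    load = rng.normal(mu_S, sd_S, n)
    passed = strength > proof_load                    # units that survive the proof test
    return (strength[passed] > load[passed]).mean(), passed.mean()

# Illustrative: strength 100 +/- 10 vs load 80 +/- 8, without and with a 95-unit proof test.
# The second return value is the fraction of components that pass the proof test,
# i.e., the quantity the proposed procedure lets the designer control directly.
print(reliability_after_proof(100, 10, 80, 8, proof_load=0.0))
print(reliability_after_proof(100, 10, 80, 8, proof_load=95.0))
```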
System reliability analysis through corona testing
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.
1975-01-01
A corona vacuum test facility for nondestructive testing of power system components was built in the Reliability and Quality Engineering Test Laboratories at the NASA Lewis Research Center. The facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. The facility is being used to test various high-voltage power system components.
Commercialized VCSEL components fabricated at TrueLight Corporation
NASA Astrophysics Data System (ADS)
Pan, Jin-Shan; Lin, Yung-Sen; Li, Chao-Fang A.; Chang, C. H.; Wu, Jack; Lee, Bor-Lin; Chuang, Y. H.; Tu, S. L.; Wu, Calvin; Huang, Kai-Feng
2001-05-01
TrueLight Corporation was founded in 1997 and is a pioneering VCSEL component supplier in Taiwan. We specialize in the production and distribution of VCSELs (Vertical Cavity Surface Emitting Lasers) and other high-speed PIN-detector devices and components. Our core technology was developed to meet the booming demand for fiber optic transmission, and our intention is to extend device applications into the data communication, telecommunication, and industrial markets. One mission is to provide high-performance, highly reliable, and low-cost VCSEL components for data communication and sensing applications. Over the past three years, TrueLight Corporation has successfully entered the Gigabit Ethernet and Fiber Channel data communication areas. In this paper, we focus on the fabrication of VCSEL components and present the evolution of the implanted and oxide-confined VCSEL processes, device characterization, performance in Gigabit data communication, and, most importantly, reliability issues.
Mission Reliability Estimation for Repairable Robot Teams
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen
2010-01-01
A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
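A toy version of the redundancy-versus-repairability comparison described above, treating each robot as a series system of modules with constant failure rates and spares modeled as Poisson coverage; the module names, rates, and costs are invented for illustration.

```python
from math import exp, factorial, prod

# Illustrative robot modules (failure rate per hour, module cost); not from the study.
MODULES = {"drive": (2e-5, 30.0), "arm": (5e-5, 20.0), "computer": (1e-5, 40.0)}
MISSION_HOURS = 10_000

def p_module_ok(rate, hours, spares=0):
    """P(module function sustained) when up to `spares` failed units can be swapped out."""
    lam = rate * hours
    return sum(lam**k * exp(-lam) / factorial(k) for k in range(spares + 1))

def robot_reliability(spares_per_module=0):
    """Series system: the robot works only if every module function is sustained."""
    return prod(p_module_ok(r, MISSION_HOURS, spares_per_module) for r, _ in MODULES.values())

module_set_cost = sum(c for _, c in MODULES.values())

# Redundancy: two complete robots, mission succeeds if at least one survives (no spares).
r_single = robot_reliability(0)
redundancy = (1 - (1 - r_single) ** 2, 2 * module_set_cost)
# Repairability: one robot plus one spare of every module (same added module cost).
repairability = (robot_reliability(1), module_set_cost + module_set_cost)
print("two robots (redundancy):        ", redundancy)
print("one robot + spares (repairable):", repairability)
```

With these made-up numbers the spares option yields higher mission reliability for the same added hardware cost, which is consistent with the trade the abstract describes; the actual conclusion depends on the real reliability-cost characteristics of the modules.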
Maximally reliable spatial filtering of steady state visual evoked potentials.
Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M
2015-04-01
Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
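Below is a simplified sketch in the spirit of the reliability-maximizing spatial filtering described above: it builds within-trial and symmetrized cross-trial covariance matrices and solves a generalized eigenproblem. The published method operates on arrays of Fourier coefficients and includes details (e.g., regularization choices) not reproduced here; the synthetic data and dimensions are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def reliable_components(data, n_components=4):
    """Spatial filters maximizing trial-to-trial covariance relative to within-trial
    covariance, via a generalized eigendecomposition.

    data: array of shape (trials, channels, samples).
    Returns (filters, scores): channel weights and their reliability eigenvalues.
    """
    trials, channels, samples = data.shape
    data = data - data.mean(axis=2, keepdims=True)      # remove per-trial channel means

    r_within = np.zeros((channels, channels))
    r_across = np.zeros((channels, channels))
    for i in range(trials):
        xi = data[i]
        r_within += xi @ xi.T
        for j in range(i + 1, trials):
            c = xi @ data[j].T
            r_across += c + c.T                          # symmetrized cross-trial covariance

    r_within /= trials * samples
    r_across /= trials * (trials - 1) * samples

    # Generalized eigenvectors: maximize w' R_across w / w' R_within w.
    eigvals, eigvecs = eigh(r_across, r_within + 1e-9 * np.eye(channels))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]], eigvals[order[:n_components]]

# Example on synthetic data: 30 trials, 32 channels, 200 samples, one reproducible
# evoked component mixed into the channels plus trial-varying noise.
rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * 7.5 * np.arange(200) / 100.0)
mixing = rng.normal(size=32)
trials = mixing[None, :, None] * signal[None, None, :] + rng.normal(scale=2.0, size=(30, 32, 200))
filters, scores = reliable_components(trials)
print(scores)
```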
Design for Verification: Using Design Patterns to Build Reliable Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)
2003-01-01
Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.
Suter, Basil; Testa, Enrique; Stämpfli, Patrick; Konala, Praveen; Rasch, Helmut; Friederich, Niklaus F; Hirschmann, Michael T
2015-03-20
The introduction of a standardized SPECT/CT algorithm including a localization scheme, which allows accurate identification of specific patterns and thresholds of SPECT/CT tracer uptake, could lead to a better understanding of the bone remodeling and specific failure modes of unicondylar knee arthroplasty (UKA). The purpose of the present study was to introduce a novel standardized SPECT/CT algorithm for patients after UKA and evaluate its clinical applicability, usefulness and inter- and intra-observer reliability. Tc-HDP-SPECT/CT images of consecutive patients (median age 65, range 48-84 years) with 21 knees after UKA were prospectively evaluated. The tracer activity on SPECT/CT was localized using a specific standardized UKA localization scheme. For tracer uptake analysis (intensity and anatomical distribution pattern) a 3D volumetric quantification method was used. The maximum intensity values were recorded for each anatomical area. In addition, ratios between the respective value in the measured area and the background tracer activity were calculated. The femoral and tibial component position (varus-valgus, flexion-extension, internal and external rotation) was determined in 3D-CT. The inter- and intraobserver reliability of the localization scheme, grading of the tracer activity and component measurements were determined by calculating the intraclass correlation coefficients (ICC). The localization scheme, grading of the tracer activity and component measurements showed high inter- and intra-observer reliabilities for all regions (tibia, femur and patella). For measurement of component position there was strong agreement between the readings of the two observers; the ICC for the orientation of the femoral component was 0.73-1.00 (intra-observer reliability) and 0.91-1.00 (inter-observer reliability). The ICC for the orientation of the tibial component was 0.75-1.00 (intra-observer reliability) and 0.77-1.00 (inter-observer reliability). The SPECT/CT algorithm presented combining the mechanical information on UKA component position, alignment and metabolic data is highly reliable and proved to be a valuable, consistent and useful tool for analysing postoperative knees after UKA. Using this standardized approach in clinical studies might be helpful in establishing the diagnosis in patients with pain after UKA.
System reliability analysis through corona testing
NASA Technical Reports Server (NTRS)
Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.
1975-01-01
In the Reliability and Quality Engineering Test Laboratory at the NASA Lewis Research Center a nondestructive, corona-vacuum test facility for testing power system components was developed using commercially available hardware. The test facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. This facility is being used to test various high voltage power system components.
NASA-DoD Lead-Free Electronics Project
NASA Technical Reports Server (NTRS)
Kessel, Kurt
2009-01-01
In response to concerns about risks from lead-free induced faults to high reliability products, NASA has initiated a multi-year project to provide manufacturers and users with data to clarify the risks of lead-free materials in their products. The project will also be of interest to component manufacturers supplying to high reliability markets. The project was launched in November 2006. The primary technical objective of the project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: (1) Packages (e.g., Thin Small Outline Package [TSOP], Ball Grid Array [BGA], Plastic Dual In-line Package [PDIP]) assembled and reworked with solder interconnects consisting of lead-free alloys (2) Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of mixed alloys, lead component finish/lead-free solder and lead-free component finish/SnPb solder
Multiprocessor switch with selective pairing
Gara, Alan; Gschwind, Michael K; Salapura, Valentina
2014-03-11
System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores providing one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus.
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. The analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. The simulative component is the Gate Logic Software Simulator capability, or GLOSS. This paper discusses the numerous factors that potentially have a degrading effect on system reliability and the ways in which these factors, which are peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
Effect of Individual Component Life Distribution on Engine Life Prediction
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.
2003-01-01
The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each component's cumulative life distribution as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the lowest-lived component in the system at the same reliability (probability of survival). Where the Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L0.1 life prediction. However, at a probability of survival of 95 percent (L5 life), life decreased with increasing Weibull slope.
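The two relationships the analysis relies on, the L-life of a Weibull component and the series-system reliability as a product of independent component reliabilities, can be sketched as follows; the Weibull slopes and characteristic lives are illustrative, not the engine study's values.

```python
import numpy as np

def weibull_life_at_reliability(reliability, beta, eta):
    """Life at which a Weibull component still has the given probability of survival.

    For R(t) = exp(-(t/eta)^beta), the L_n life (e.g., L5 at R = 0.95) is
    t = eta * (-ln R)^(1/beta).
    """
    return eta * (-np.log(reliability)) ** (1.0 / beta)

def system_life_at_reliability(reliability, betas, etas, t_max=200_000.0, n=200_000):
    """Life at which a series system of independent Weibull components drops to R."""
    t = np.linspace(1.0, t_max, n)
    r_sys = np.ones_like(t)
    for beta, eta in zip(betas, etas):
        r_sys *= np.exp(-(t / eta) ** beta)
    return t[np.searchsorted(-r_sys, -reliability)]   # first t where R_sys <= R

# Illustrative components with equal characteristic lives but different Weibull slopes.
betas = [1.5, 3.0, 6.0]
etas = [60_000.0, 60_000.0, 60_000.0]
print([weibull_life_at_reliability(0.95, b, e) for b, e in zip(betas, etas)])
print(system_life_at_reliability(0.95, betas, etas))   # below the lowest component L5 life
```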
Ultra Reliable Closed Loop Life Support for Long Space Missions
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Ewert, Michael K.
2010-01-01
Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
The Interplay of Surface Mount Solder Joint Quality and Reliability of Low Volume SMAs
NASA Technical Reports Server (NTRS)
Ghaffarian, R.
1997-01-01
Spacecraft electronics including those used at the Jet Propulsion Laboratory (JPL), demand production of highly reliable assemblies. JPL has recently completed an extensive study, funded by NASA's code Q, of the interplay between manufacturing defects and reliability of ball grid array (BGA) and surface mount electronic components.
Reliability systems for implantable cardiac defibrillator batteries
NASA Astrophysics Data System (ADS)
Takeuchi, Esther S.
The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.
The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
Methods have been developed for the evaluation of cost effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan along with calculations of optimum test levels and expected costs. The tests have been ranked according to both minimizing expected project costs and vibroacoustic reliability. It was found that optimum costs may vary up to $6 million with the lowest plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.
NASA Technical Reports Server (NTRS)
White, Mark
2012-01-01
New space missions will increasingly rely on more advanced technologies because of system requirements for higher performance, particularly in instruments and high-speed processing. Component-level reliability challenges with scaled CMOS in spacecraft systems from a bottom-up perspective have been presented. Fundamental front-end and back-end processing reliability issues with more aggressively scaled parts have been discussed. Effective thermal management from the system level to the component level (top-down) is a key element in the overall design of reliable systems. Thermal management in space systems must consider a wide range of issues, including thermal loading of many different components, and frequent temperature cycling of some systems. Both perspectives (top-down and bottom-up) play a large role in robust, reliable spacecraft system design.
High resolution time interval meter
Martin, A.D.
1986-05-09
Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiation START pulse is input to generate first high resolution data. A termination STOP pulse is input to generate second high resolution data. Internal counters count at the low-frequency internal clock pulse rate between the START and STOP pulses. The first and second high resolution data are logically combined to directly provide high resolution data to one counter and correct the count in the low resolution counter to obtain a high resolution time interval measurement.
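Here is a toy numerical model of the coarse-counter-plus-delay-line idea described in the patent abstract: the low-frequency counter supplies whole clock periods and the delay-line taps resolve each event within its clock period. This illustrates only the interpolation arithmetic, not the patent's circuit; the clock period and tap delay are arbitrary.

```python
def measure_interval(t_start, t_stop, clock_period=10.0, tap_delay=0.5):
    """Toy model of coarse-counter plus delay-line interpolation time measurement.

    The coarse counter counts whole clock periods between START and STOP; the
    delay-line taps resolve where each event fell within its clock period, to
    tap_delay resolution. Units are arbitrary (e.g., nanoseconds).
    """
    # Fine data: how many delay taps elapsed between each event and the next clock edge.
    start_fine = int((clock_period - (t_start % clock_period)) // tap_delay)
    stop_fine = int((clock_period - (t_stop % clock_period)) // tap_delay)

    # Coarse data: number of full clock edges between the two events.
    coarse_counts = int(t_stop // clock_period) - int(t_start // clock_period)

    # Combine: correct the coarse count with the two fine readings.
    return coarse_counts * clock_period + start_fine * tap_delay - stop_fine * tap_delay

print(measure_interval(3.2, 47.9))   # true interval 44.7; result is within one tap delay
```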
Effectiveness of back-to-back testing
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.
1987-01-01
Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. The implication of this finding is that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
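A simple bivariate-Bernoulli sketch (not the paper's three models) of how intercomponent failure correlation erodes, without eliminating, the gain from back-to-back testing: undetected failures are those where both versions fail coincidentally on the same input.

```python
def residual_failure_probability(p_fail, correlation):
    """P(both versions fail on the same input, so the back-to-back comparison misses it).

    Bivariate-Bernoulli model: two functionally equivalent versions each fail an
    input with probability p_fail; `correlation` is the correlation of their
    failure indicators. Coincident failures are assumed to produce identical
    (hence undetected) outputs, which is the pessimistic case.
    """
    return p_fail**2 + correlation * p_fail * (1.0 - p_fail)

p = 1e-3
for rho in (0.0, 0.1, 0.5, 1.0):
    undetected = residual_failure_probability(p, rho)
    print(rho, undetected, "gain factor:", p / undetected)
```

Even at moderate correlation the residual (undetected) failure probability stays well below the single-version failure probability, which is the qualitative behavior the abstract reports.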
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of a reliability database (RDB) methodology to determine applicable reliability data for inclusion in the quantification of the PRA. The RDB method developed during this project seeks to satisfy the requirements of the Data Analysis element of the ASME/ANS Non-LWR PRA standard. The RDB methodology utilizes a relevancy test to examine reliability data and determine whether it is appropriate to include as part of the reliability database for the PRA. The relevancy test compares three component properties to establish the level of similarity to components examined as part of the PRA. These properties include the component function, the component failure modes, and the environment/boundary conditions of the component. The relevancy test is used to gauge the quality of data found in a variety of sources, such as advanced reactor-specific databases, non-advanced reactor nuclear databases, and non-nuclear databases. The RDB also establishes the integration of expert judgment or separate reliability analysis with past reliability data. This paper provides details on the RDB methodology, and includes an example application of the RDB methodology for determining the reliability of the intermediate heat exchanger of a sodium fast reactor. The example explores a variety of reliability data sources, and assesses their applicability for the PRA of interest through the use of the relevancy test.
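Purely as a hypothetical illustration of the three-property relevancy comparison named above (function, failure modes, environment/boundary conditions), the sketch below scores a candidate data-source component against the PRA target component; the class, field names, and equal weighting are invented and do not reflect the project's actual RDB implementation.

```python
from dataclasses import dataclass

@dataclass
class ComponentRecord:
    """Hypothetical description of a component in a candidate reliability data source."""
    function: str
    failure_modes: set
    environment: set       # e.g., {"sodium", "high_temperature"}

def relevancy(candidate: ComponentRecord, target: ComponentRecord) -> float:
    """Toy relevancy score in [0, 1] over the three properties named in the abstract.

    Illustrative only; the actual RDB relevancy test and any weighting it uses
    are defined by the project, not reproduced here.
    """
    f_match = 1.0 if candidate.function == target.function else 0.0
    fm_match = (len(candidate.failure_modes & target.failure_modes)
                / max(len(target.failure_modes), 1))
    env_match = (len(candidate.environment & target.environment)
                 / max(len(target.environment), 1))
    return (f_match + fm_match + env_match) / 3.0

target = ComponentRecord("intermediate_heat_exchanger",
                         {"tube_leak", "tube_rupture"}, {"sodium", "high_temperature"})
candidate = ComponentRecord("intermediate_heat_exchanger",
                            {"tube_leak"}, {"water", "high_temperature"})
print(relevancy(candidate, target))
```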
Component Reliability Testing of Long-Life Sorption Cryocoolers
NASA Technical Reports Server (NTRS)
Bard, S.; Wu, J.; Karlmann, P.; Mirate, C.; Wade, L.
1994-01-01
This paper summarizes ongoing experiments characterizing the ability of critical sorption cryocooler components to achieve highly reliable operation for long-life space missions. Test data obtained over the past several years at JPL are entirely consistent with achieving ten year life for sorption compressors, electrical heaters, container materials, valves, and various sorbent materials suitable for driving 8 to 180 K refrigeration stages. Test results for various compressor systems are reported. Planned future tests necessary to gain a detailed understanding of the sensitivity of cooler performance and component life to operating constraints, design configurations, and fabrication, assembly and handling techniques, are also discussed.
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judicially selecting and narrowing the follow-on packages for evaluation and testing, as well as for the low risk insertion in high-reliability applications.
Esposito, Fabio; Cè, Emiliano; Rampichini, Susanna; Limonta, Eloisa; Venturelli, Massimo; Monti, Elena; Bet, Luciano; Fossati, Barbara; Meola, Giovanni
2016-01-01
The electromechanical delay during muscle contraction and relaxation can be partitioned into mainly electrochemical and mainly mechanical components by an EMG, mechanomyographic, and force combined approach. Component duration and measurement reliability were investigated during contraction and relaxation in a group of patients with myotonic dystrophy type 1 (DM1, n = 13) and in healthy controls (n = 13). EMG, mechanomyogram, and force were recorded in DM1 and in age- and body-matched controls from tibialis anterior (distal muscle) and vastus lateralis (proximal muscle) muscles during maximum voluntary and electrically-evoked isometric contractions. The electrochemical and mechanical components of the electromechanical delay during muscle contraction and relaxation were calculated off-line. Maximum strength was significantly lower in DM1 than in controls under both experimental conditions. All electrochemical and mechanical components were significantly longer in DM1 in both muscles. Measurement reliability was very high in both DM1 and controls. The high reliability of the measurements and the differences between DM1 patients and controls suggest that the EMG, mechanomyographic, and force combined approach could be utilized as a valid tool to assess the level of neuromuscular dysfunction in this pathology, and to follow the efficacy of pharmacological or non-pharmacological interventions. Copyright © 2015 Elsevier B.V. All rights reserved.
Retest reliability of individual alpha ERD topography assessed by human electroencephalography.
Vázquez-Marrufo, Manuel; Galvao-Carmona, Alejandro; Benítez Lugo, María Luisa; Ruíz-Peña, Juan Luis; Borges Guerra, Mónica; Izquierdo Ayuso, Guillermo
2017-01-01
Despite the immense literature related to diverse human electroencephalographic (EEG) parameters, very few studies have focused on the reliability of these measures. Some of the most studied components (i.e., P3 or MMN) have received more attention regarding the stability of their main parameters, such as latency, amplitude or topography. However, spectral modulations have not been as extensively evaluated considering that different analysis methods are available. The main aim of the present study is to assess the reliability of the latency, amplitude and topography of event-related desynchronization (ERD) for the alpha band (10-14 Hz) observed in a cognitive task (visual oddball). Topography reliability was analysed at different levels (for the group, within-subjects individually and between-subjects individually). The latency for alpha ERD showed stable behaviour between two sessions, and the amplitude exhibited an increment (more negative) in the second session. Alpha ERD topography exhibited a high correlation score between sessions at the group level (r = 0.903, p<0.001). The mean value for within-subject correlations was 0.750 (with a range from 0.391 to 0.954). Regarding between-subject topography comparisons, some subjects showed a highly specific topography, whereas other subjects showed topographies that were more similar to those of other subjects. ERD was mainly stable between the two sessions with the exception of amplitude, which exhibited an increment in the second session. Topography exhibits excellent reliability at the group level; however, it exhibits highly heterogeneous behaviour at the individual level. Considering that the P3 was previously evaluated for this group of subjects, a direct comparison of the correlation scores was possible, and it showed that the ERD component is less reliable in individual topography than in the ERP component (P3).
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion about some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability is also discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency are maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.
HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.
1976-12-01
The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.
Strategies for Increasing the Market Share of Recycled Products—A Games Theory Approach
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Pollalis, Yannis A.
2009-08-01
A methodological framework (including 28 activity stages and 10 decision nodes) has been designed in the form of an algorithmic procedure for developing strategies to increase the market share of recycled products within a game theory context. A case example is presented referring to a paper market, where a recycling company (RC) is in competition with a virgin-raw-material-using company (VC). The strategies of the VC for increasing its market share are strengthening of (and advertisement based on) high quality (VC1); high reliability (VC2); the combination of quality and reliability, putting emphasis on the first component (VC3); and the combination of quality and reliability, putting emphasis on the second component (VC4). The strategies of the RC for increasing its market share are proper advertisement based on the low price of the recycled paper produced, satisfying minimum quality requirements (RC1); the combination of low price with sensitization of the public as regards environmental and materials-saving issues, putting emphasis on the first component (RC2); and the same combination, putting emphasis on the second component (RC3). Analysis of all possible situations for the case example under examination is also presented.
Space Station Freedom power supply commonality via modular design
NASA Technical Reports Server (NTRS)
Krauthamer, S.; Gangal, M. D.; Das, R.
1990-01-01
At mature operations, Space Station Freedom will need more than 2000 power supplies to feed housekeeping and user loads. Advanced technology power supplies from 20 to 250 W have been hybridized for terrestrial, aerospace, and industry applications in compact, efficient, reliable, lightweight packages compatible with electromagnetic interference requirements. The use of these hybridized packages as modules, either singly or in parallel, to satisfy the wide range of user power supply needs for all elements of the station is proposed. Proposed characteristics for the power supplies include common mechanical packaging, digital control, self-protection, high efficiency at full and partial loads, synchronization capability to reduce electromagnetic interference, redundancy, and soft-start capability. The inherent reliability is improved compared with conventional discrete component power supplies because the hybrid circuits use high-reliability components such as ceramic capacitors. Reliability is further improved over conventional supplies because the hybrid packages, which may be treated as a single part, reduce the parts count in the power supply.
1992-09-01
demonstrating the producibility of optoelectronic components for high-density/high-data-rate processors and accelerating the insertion of this technology...technology development stage, OETC will advance the development of optical components, produce links for a multiboard processor testbed demonstration, and...components that are affordable, initially at <$100 per line, and reliable, with a line BER of 10^-15 and MTTF >10^6 hours. Under the OETC program, Honeywell will
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
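To make the stress-strength computation concrete, the following sketch estimates R = P(strength > stress) by Monte Carlo and shows how the estimate can shift when a normal strength distribution is swapped for a Weibull with a nearly identical mean and spread. The parameters, sample size, and distributions are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Stress-strength model: R = P(strength > stress).
# Compare a normal strength assumption against a Weibull with a similar mean and
# standard deviation (all parameters invented for illustration).
stress = rng.normal(loc=400.0, scale=20.0, size=N)           # e.g., MPa
strength_normal = rng.normal(loc=500.0, scale=25.0, size=N)
strength_weibull = 510.0 * rng.weibull(25.0, size=N)          # mean ~499, sd ~25

R_normal = np.mean(strength_normal > stress)
R_weibull = np.mean(strength_weibull > stress)

print(f"R (normal strength):  {R_normal:.6f}")
print(f"R (Weibull strength): {R_weibull:.6f}")
# The two estimates agree in the bulk of the data but can differ in the high-reliability
# tail, which is the sensitivity the paper warns about.
```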
Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset
NASA Astrophysics Data System (ADS)
Eyre, T.; Van der Baan, M.
2016-12-01
Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data are usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms including those with high tensile components. The abilities of each geometry to constrain the true source mechanisms are therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore, more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and mechanisms are calculated and the results discussed, including the biases caused by the array geometry. The majority of the events are located within the target reservoir; however, a small, seemingly disconnected cluster of events appears 100 m above the reservoir.
High-reliability gas-turbine combined-cycle development program: Phase II. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. This volume presents information on the reliability, availability, and maintainability (RAM) analysis of a representative plant and the preliminary design of the gas turbine, the gas turbine ancillaries, and the balance of plant including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean time between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of components with inherent high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failure of nearly twice the 3000-hour EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.
Reliability and validity of a Swedish language version of the Resilience Scale.
Nygren, Björn; Randström, Kerstin Björkman; Lejonklou, Anna K; Lundman, Beril
2004-01-01
The purpose of this study was to test the reliability and validity of the Swedish language version of the Resilience Scale (RS). Participants were 142 adults between 19 and 85 years of age. Internal consistency reliability, stability over time, and construct validity were evaluated using Cronbach's alpha, principal components analysis with varimax rotation, and correlations with scores on the Sense of Coherence Scale (SOC) and the Rosenberg Self-Esteem Scale (RSE). The mean score on the RS was 142 (SD = 15). The possible scores on the RS range from 25 to 175, and scores higher than 146 are considered high. The test-retest correlation was .78. Correlations with the SOC and the RSE were .41 (p < 0.01) and .37 (p < 0.01), respectively. Personal Assurance and Acceptance of Self and Life emerged as components from the principal components analysis. These findings provide evidence for the reliability and validity of the Swedish language version of the RS.
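For readers unfamiliar with the internal consistency statistic used here, the snippet below computes Cronbach's alpha from an item-score matrix. It is a generic sketch; the toy data and item count are made up and unrelated to the RS sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of scale item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Toy data: 6 respondents answering a hypothetical 4-item scale (not the RS data).
rng = np.random.default_rng(1)
latent = rng.normal(size=(6, 1))
items = latent + 0.5 * rng.normal(size=(6, 4))   # correlated items -> high alpha
print(round(cronbach_alpha(items), 2))
```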
Enhanced Component Performance Study: Air-Operated Valves 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure-to-open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. One statistically significant trend was observed in the AOV data: the frequency of demands per reactor year for valves recording the fail-to-open or fail-to-close failure modes, for high-demand valves (those with greater than twenty demands per year), was found to be decreasing. The decrease was about three percent over the ten-year period trended.
Design Evaluation of High Reliability Lithium Batteries
NASA Technical Reports Server (NTRS)
Buchman, R. C.; Helgeson, W. D.; Istephanous, N. S.
1985-01-01
Within one year, a lithium battery design can be qualified for device use through the application of accelerated discharge testing, calorimetry measurements, real time tests and other supplemental testing. Materials and corrosion testing verify that the battery components remain functional during expected battery life. By combining these various methods, a high reliability lithium battery can be manufactured for applications which require zero defect battery performance.
Wang, X; Jiao, Y; Tang, T; Wang, H; Lu, Z
2013-12-19
Intrinsic connectivity networks (ICNs) are composed of spatial components and time courses. The spatial components of ICNs have been found with moderate-to-high reliability. So far as we know, few studies have focused on the reliability of the temporal patterns of ICNs based on their individual time courses. The goals of this study were twofold: to investigate the test-retest reliability of temporal patterns for ICNs, and to analyze these informative univariate metrics. Additionally, a correlation analysis was performed to enhance interpretability. Our study included three datasets: (a) short- and long-term scans, (b) multi-band echo-planar imaging (mEPI), and (c) eyes open or closed. Using dual regression, we obtained the time courses of ICNs for each subject. To produce temporal patterns for ICNs, we applied two categories of univariate metrics: network-wise complexity and network-wise low-frequency oscillation. Furthermore, we validated the test-retest reliability for each metric. The network-wise temporal patterns for most ICNs (especially for the default mode network, DMN) exhibited moderate-to-high reliability and reproducibility under different scan conditions. Network-wise complexity for the DMN exhibited fair reliability (ICC<0.5) based on eyes-closed sessions. Notably, our results support that mEPI can be a useful method with high reliability and reproducibility. In addition, these temporal patterns carry physiological meaning, and certain temporal patterns were correlated with the node strength of the corresponding ICN. Overall, network-wise temporal patterns of ICNs were reliable and informative and could be complementary to spatial patterns of ICNs for further study. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Kurtz, Nolan Scot
2014-09-01
The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.
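The sketch below contrasts crude Monte Carlo with a basic (non-adaptive) importance sampling estimate of a small failure probability, the kind of computation the adaptive scheme referenced above accelerates. The limit-state function, distributions, and sampling center are illustrative assumptions, not the report's models.

```python
import numpy as np
from scipy import stats

# Failure when g(X) = R - S < 0, with capacity R and demand S independent normals.
rng = np.random.default_rng(2)
N = 100_000

mu = np.array([5.0, 2.0])        # means of (R, S)
sigma = np.array([0.5, 0.4])

def g(x):                        # limit-state function: failure if g < 0
    return x[:, 0] - x[:, 1]

# Crude Monte Carlo: will typically report 0 failures at this sample size,
# since the true probability is on the order of 1e-6.
x_mc = rng.normal(mu, sigma, size=(N, 2))
p_mc = np.mean(g(x_mc) < 0)

# Importance sampling: sample around the most probable failure point
# (about (3.2, 3.2) for these toy numbers), then reweight by the density ratio.
mu_is = np.array([3.2, 3.2])
x_is = rng.normal(mu_is, sigma, size=(N, 2))
w = (stats.norm.pdf(x_is, mu, sigma).prod(axis=1)
     / stats.norm.pdf(x_is, mu_is, sigma).prod(axis=1))
p_is = np.mean((g(x_is) < 0) * w)

print(f"crude MC:            {p_mc:.2e}")
print(f"importance sampling: {p_is:.2e}")
```

An adaptive scheme would move the sampling density toward the failure region automatically rather than relying on a hand-picked center as done here.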
NASA Astrophysics Data System (ADS)
Tan, Samantha H.; Chen, Ning; Liu, Shi; Wang, Kefei
2003-09-01
As part of the semiconductor industry "contamination-free manufacturing" effort, significant emphasis has been placed on reducing potential sources of contamination from process equipment and process equipment components. Process tools contain process chambers and components that are exposed to the process environment or process chemistry and in some cases are in direct contact with production wafers. Any contamination from these sources must be controlled or eliminated in order to maintain high process yields, device performance, and device reliability. This paper discusses new nondestructive analytical methods for quantitative measurement of the cleanliness of metal, quartz, polysilicon and ceramic components that are used in process equipment tools. The goal of these new procedures is to measure the effectiveness of cleaning procedures and to verify whether a tool component part is sufficiently clean for installation and subsequent routine use in the manufacturing line. These procedures provide a reliable "qualification method" for tool component certification and also provide a routine quality control method for reliable operation of cleaning facilities. Cost advantages to wafer manufacturing include higher yields due to improved process cleanliness and elimination of yield loss and downtime resulting from the installation of "bad" components in process tools. We also discuss a representative example of wafer contamination having been linked to a specific process tool component.
Hybrid Power Management-Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges, and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy, are rugged, reliable, maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety as they can be left indefinitely in a safe, discharged state whereas batteries cannot.
A PC program to optimize system configuration for desired reliability at minimum cost
NASA Technical Reports Server (NTRS)
Hills, Steven W.; Siahpush, Ali S.
1994-01-01
High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system of multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
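The paper's pair-wise comparative progression algorithm is not reproduced here, but the following greedy sketch illustrates the underlying redundancy-allocation idea: repeatedly add the spare with the best marginal reliability gain per unit cost until the budget is spent. Component names, reliabilities, costs, and the budget are invented for illustration.

```python
import math

components = [                     # (name, single-unit reliability, unit cost)
    ("pump", 0.90, 5.0),
    ("valve", 0.95, 2.0),
    ("controller", 0.98, 8.0),
]
budget = 20.0

def subsystem_rel(r: float, n: int) -> float:
    """Reliability of n parallel identical units, each with reliability r."""
    return 1.0 - (1.0 - r) ** n

counts = {name: 1 for name, _, _ in components}
spent = 0.0

while True:
    best = None
    for name, r, cost in components:
        if spent + cost > budget:
            continue
        gain = subsystem_rel(r, counts[name] + 1) - subsystem_rel(r, counts[name])
        # System reliability is the product over subsystems, so weight the gain
        # by the reliability of everything else.
        others = math.prod(subsystem_rel(r2, counts[n2]) for n2, r2, _ in components if n2 != name)
        score = (gain * others) / cost
        if best is None or score > best[0]:
            best = (score, name, cost)
    if best is None:
        break
    _, name, cost = best
    counts[name] += 1
    spent += cost

system_rel = math.prod(subsystem_rel(r, counts[n]) for n, r, _ in components)
print(counts, f"cost={spent}", f"R={system_rel:.4f}")
```

A greedy heuristic like this is not guaranteed to reach the true optimum, which is the gap an exact, discrete technique such as the one in the paper addresses.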
Developing Ultra Reliable Life Support for the Moon and Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2009-01-01
Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units, However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
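As a back-of-the-envelope illustration of sizing spares against a failure rate, the snippet below computes the probability that k spares cover all failures of a single unit under a constant-rate (Poisson) assumption. The rate and mission duration are placeholders, not values from the paper.

```python
from math import exp, factorial

def p_covered(lam: float, hours: float, k: int) -> float:
    """P(no more than k failures in the mission), i.e., k spares suffice for one unit."""
    mu = lam * hours
    return sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

lam = 1e-4            # assumed failure rate per hour
hours = 2.5 * 8760    # assumed ~2.5-year mission
for k in range(5):
    print(k, round(p_covered(lam, hours, k), 4))
```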
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inversely proportional to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
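A simple beta-factor calculation shows why common cause failures defeat identical redundancy beyond roughly two units. The unit failure probability and beta value below are assumptions for illustration, not figures from the paper, and the formula is a simplified sketch of the beta-factor model.

```python
# A fraction beta of unit failures are common cause and take out every copy at once.
def redundant_failure_prob(p_unit: float, n: int, beta: float) -> float:
    p_indep = (1.0 - beta) * p_unit       # independent part, defeated by redundancy
    p_ccf = beta * p_unit                 # common-cause part, shared by all copies
    return p_indep ** n + p_ccf

p_unit, beta = 0.10, 0.05                 # 1-in-10 unit failure, 5% common cause
for n in (1, 2, 3, 4):
    print(n, f"{redundant_failure_prob(p_unit, n, beta):.4f}")
# The result levels off near beta * p_unit = 0.005, so a third or fourth identical
# copy barely helps -- the motivation for diverse redundancy in the abstract.
```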
Caruso, J C
2001-06-01
The unreliability of difference scores is a well documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adult and Adolescent Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and unpowerful significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.
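The mechanism Caruso describes can be seen in the standard formula for the reliability of a difference score between two equally weighted scales with equal variances; the correlation values below are illustrative, not KAIT statistics.

```python
# rho_D = (r_xx + r_yy - 2*r_xy) / (2 * (1 - r_xy)) for D = X - Y with equal variances.
def diff_score_reliability(r_xx: float, r_yy: float, r_xy: float) -> float:
    return (r_xx + r_yy - 2.0 * r_xy) / (2.0 * (1.0 - r_xy))

print(diff_score_reliability(0.95, 0.95, 0.85))  # 0.667: reliable scales, unreliable difference
print(diff_score_reliability(0.95, 0.95, 0.60))  # 0.875: a lower scale correlation helps
```

This is exactly the trade-off RCA exploits: uncorrelated component scores keep the denominator large, so differences between them retain more reliable variance.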
High-reliability gas-turbine combined-cycle development program: Phase II, Volume 3. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. The power plant was addressed in three areas: (1) the gas turbine, (2) the gas turbine ancillaries, and (3) the balance of plant including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean time between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of components with inherent high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failure of nearly twice the 3000-h EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.
High Power Klystrons for Efficient Reliable High Power Amplifiers.
1980-11-01
techniques to obtain high overall efficiency. One is second harmonic space charge bunching. This is a process whereby the fundamental and second harmonic...components of the space charge waves in the electron beam of a microwave tube are combined to produce more highly concentrated electron bunches raising the...the drift lengths to enhance the 2nd harmonic component in the space charge waves. The latter method was utilized in the VKC-7790. Computer
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Steinetz, B. M.; Zaretsky, E. V.; Athavale, M. M.; Przekwas, A. J.
2004-01-01
The issues and components supporting the engine power stream are reviewed. It is essential that companies pay close attention to engine sealing issues, particularly on the high-pressure spool or high-pressure pumps. Small changes in these systems are reflected throughout the entire engine. Although cavity, platform, and tip sealing are complex and have a significant effect on component and engine performance, computational tools (e.g., NASA-developed INDSEAL, SCISEAL, and ADPAC) are available to help guide the designer and the experimenter. Gas turbine engine and rocket engine externals must all function efficiently with a high degree of reliability in order for the engine to run but often receive little attention until they malfunction. Within the open literature statistically significant data for critical engine components are virtually nonexistent; the classic approach is deterministic. Studies show that variations with loading can have a significant effect on component performance and life. Without validation data they are just studies. These variations and deficits in statistical databases require immediate attention.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
To rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component presents a positive correlation with the failure influence degree of the system component, which provides a theoretical basis for reliability allocation of a machine center system.
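As a rough illustration of the adjacency-matrix-plus-PageRank step, the sketch below ranks four hypothetical components of a failure-propagation graph. The graph, damping factor, and interpretation of the scores as a stand-in for the paper's failure influence degree are assumptions, not the paper's model or data.

```python
import numpy as np

# A[i, j] = 1 means a failure of component i can propagate to component j.
# Hypothetical components: 0 spindle, 1 tool changer, 2 hydraulic unit, 3 control cabinet.
A = np.array([
    [0, 1, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
], dtype=float)

def pagerank(adj: np.ndarray, d: float = 0.85, iters: int = 100) -> np.ndarray:
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    M = np.zeros_like(adj)
    for i in range(n):                       # row-stochastic transition matrix
        M[i] = adj[i] / out_deg[i] if out_deg[i] > 0 else 1.0 / n
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (M.T @ r)
    return r

print("most affected by propagation:", np.round(pagerank(A), 3))    # rank on A
print("strongest failure propagators:", np.round(pagerank(A.T), 3)) # rank on A^T
```

Ranking on A highlights components that many failures flow into, while ranking on the transposed matrix highlights components whose failures reach many others, which is why the paper uses both the adjacency matrix and its transposition.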
The work and social adjustment scale: reliability, sensitivity and value.
Zahra, Daniel; Qureshi, Adam; Henley, William; Taylor, Rod; Quinn, Cath; Pooler, Jill; Hardy, Gillian; Newbold, Alexandra; Byng, Richard
2014-06-01
To investigate the psychometric properties of the Work and Social Adjustment Scale (WSAS) as an outcome measure for the Improving Access to Psychological Therapy programme, assessing its value as an addition to the Patient Health (PHQ-9) and Generalised Anxiety Disorder questionnaires (GAD-7). Little research has investigated these properties to date. Reliability and responsiveness to change were assessed using data from 4,835 patients. Principal components analysis was used to determine whether the WSAS measures a factor distinct from the PHQ-9 and GAD-7. The WSAS measures a distinct social functioning factor, has high internal reliability, and is sensitive to treatment effects. The WSAS, PHQ-9 and GAD-7 perform comparably on measures of reliability and sensitivity. The WSAS also measures a distinct social functioning component suggesting it has potential as an additional outcome measure.
Learning high-quality soldering
NASA Technical Reports Server (NTRS)
Read, W. S.
1981-01-01
Soldering techniques for high-reliability electronic equipment are taught in a 5-day course at NASA's Jet Propulsion Laboratory. Topics covered include new circuit assembly, printed-wiring board reworking, circuit changes, wire routing, and component installation.
Development and Testing of a USM High Altitude Balloon
NASA Astrophysics Data System (ADS)
Thaheer, A. S. Mohamed; Ismail, N. A.; Yusoff, S. H. Md.; Nasirudin, M. A.
2018-04-01
This paper discusses tests conducted at the component and subsystem level during development of the USM High Altitude Balloon (HAB). Initial components were selected and tested individually through several case studies covering reliability, camera viewing, power consumption, thermal capability, and parachute performance. The components were then integrated at the sub-system level for integration and functionality testing. The preliminary results were used to tune the components and sub-systems, and a trial launch was conducted in which sample images were recorded and atmospheric data were successfully collected.
NASA Technical Reports Server (NTRS)
Kiser, James D.; Levine, Stanley R.; Dicarlo, James A.
1987-01-01
Structural ceramics have been under nearly continuous development for various heat engine applications since the early 1970s. These efforts have been sustained by the properties that ceramics offer in the areas of high-temperature strength, environmental resistance, and low density, and by the large benefits in system efficiency and performance that can result. The promise of ceramics has not been realized because their brittle nature results in high sensitivity to microscopic flaws and catastrophic fracture behavior. This translates into low reliability for ceramic components and thus limits their application in engines. For structural ceramics to successfully make inroads into the terrestrial heat engine market, further advances are required in low-cost, net-shape fabrication of high-reliability components and in properties such as toughness and strength. These advances will lead to very limited use of ceramics in noncritical applications in aerospace engines. For critical aerospace applications, an additional requirement is that the components display markedly improved toughness and noncatastrophic or graceful fracture. Thus the major emphasis is on fiber-reinforced ceramics.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and therefore ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
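A minimal parts-count style calculation of the kind such standards-based predictions rest on is sketched below, assuming a series system with constant failure rates. The part list and rates are placeholders, not handbook values.

```python
import math

# Series-system failure rate is the sum of the part failure rates, and
# R(t) = exp(-lambda_system * t) under the constant-rate assumption.
parts = {                       # failures per 10^6 hours (illustrative only)
    "microcontroller": 0.12,
    "dc_dc_converter": 0.35,
    "relay": 0.80,
    "connector_set": 0.05,
}

lam_total = sum(parts.values()) * 1e-6      # convert to failures/hour
mission_hours = 5 * 8760                     # assumed 5-year mission

reliability = math.exp(-lam_total * mission_hours)
mtbf_hours = 1.0 / lam_total
print(f"system failure rate: {lam_total:.2e} /h")
print(f"MTBF: {mtbf_hours:,.0f} h, R(5 yr) = {reliability:.4f}")
```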
Reliable, Low-Cost, Low-Weight, Non-Hermetic Coating for MCM Applications
NASA Technical Reports Server (NTRS)
Jones, Eric W.; Licari, James J.
2000-01-01
Through an Air Force Research Laboratory sponsored STM program, reliable, low-cost, low-weight, non-hermetic coatings for multi-chip-module (MCM) applications were developed. Using the combination of Sandia Laboratory ATC-01 test chips, AvanTeco's moisture sensor chips (MSCs), and silicon slices, we have shown that organic and organic/inorganic overcoatings are reliable and practical non-hermetic moisture and oxidation barriers. The use of the MSC and unpassivated ATC-01 test chips provided rapid test results and comparison of the moisture barrier quality of the overcoatings. The organic coatings studied were Parylene and Cyclotene. The inorganic coatings were Al2O3 and SiO2. The choice of coating(s) is dependent on the environment that the device(s) will be exposed to. We have defined four (4) classes of environments: Class I (moderate temperature/moderate humidity), Class II (high temperature/moderate humidity), Class III (moderate temperature/high humidity), and Class IV (high temperature/high humidity). By subjecting the components to adhesion, FTIR, temperature-humidity (TH), pressure cooker (PCT), and electrical tests, we have determined that it is possible to reduce failures 50-70% for organic/inorganic coated components compared to organic coated components. All materials and equipment used are readily available commercially or are standard in most semiconductor fabrication lines. It is estimated that production cost for the developed technology would range from $1-10/module, compared to $20-200 for hermetically sealed packages.
Training less-experienced faculty improves reliability of skills assessment in cardiac surgery.
Lou, Xiaoying; Lee, Richard; Feins, Richard H; Enter, Daniel; Hicks, George L; Verrier, Edward D; Fann, James I
2014-12-01
Previous work has demonstrated high inter-rater reliability in the objective assessment of simulated anastomoses among experienced educators. We evaluated the inter-rater reliability of less-experienced educators and the impact of focused training with a video-embedded coronary anastomosis assessment tool. Nine less-experienced cardiothoracic surgery faculty members from different institutions evaluated 2 videos of simulated coronary anastomoses (1 by a medical student and 1 by a resident) at the Thoracic Surgery Directors Association Boot Camp. They then underwent a 30-minute training session using an assessment tool with embedded videos to anchor rating scores for 10 components of coronary artery anastomosis. Afterward, they evaluated 2 videos of a different student and resident performing the task. Components were scored on a 1 to 5 Likert scale, yielding an average composite score. Inter-rater reliabilities of component and composite scores were assessed using intraclass correlation coefficients (ICCs) and overall pass/fail ratings with kappa. All components of the assessment tool exhibited improvement in reliability, with 4 (bite, needle holder use, needle angles, and hand mechanics) improving the most from poor (ICC range, 0.09-0.48) to strong (ICC range, 0.80-0.90) agreement. After training, inter-rater reliabilities for composite scores improved from moderate (ICC, 0.76) to strong (ICC, 0.90) agreement, and for overall pass/fail ratings, from poor (kappa = 0.20) to moderate (kappa = 0.78) agreement. Focused, video-based anchor training facilitates greater inter-rater reliability in the objective assessment of simulated coronary anastomoses. Among raters with less teaching experience, such training may be needed before objective evaluation of technical skills. Published by Elsevier Inc.
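For reference, the intraclass correlation used in this kind of inter-rater study can be computed directly from the ANOVA mean squares. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) on a made-up targets-by-raters matrix, not the Boot Camp ratings.

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1) from a (targets x raters) ratings matrix via ANOVA mean squares."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between targets
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical 1-5 Likert ratings: 4 performances scored by 3 raters.
ratings = np.array([
    [4, 4, 5],
    [2, 3, 2],
    [5, 5, 5],
    [3, 3, 4],
], dtype=float)
print(round(icc_2_1(ratings), 2))   # ~0.84 for this toy matrix
```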
Application of reliability-centered-maintenance to BWR ECCS motor operator valve performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Choi, Y.A.
1993-01-01
This paper describes the application of reliability-centered maintenance (RCM) methods to plant probabilistic risk assessment (PRA) and safety analyses for four boiling water reactor emergency core cooling systems (ECCSs): (1) high-pressure coolant injection (HPCI); (2) reactor core isolation cooling (RCIC); (3) residual heat removal (RHR); and (4) core spray systems. Reliability-centered maintenance is a system function-based technique for improving a preventive maintenance program that is applied on a component basis. Those components that truly affect plant function are identified, and maintenance tasks are focused on preventing their failures. The RCM evaluation establishes the relevant criteria that preserve system function so that an RCM-focused approach can be flexible and dynamic.
Reliability issues of free-space communications systems and networks
NASA Astrophysics Data System (ADS)
Willebrand, Heinz A.
2003-04-01
Free space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. However, more recently some wireline and wireless carriers have started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market when compared to the requirements in the enterprise market segment. This paper tries to outline some of the aspects that are important to ensure carrier-class system reliability.
Characterization of High-power Quasi-cw Laser Diode Arrays
NASA Technical Reports Server (NTRS)
Stephen, Mark A.; Vasilyev, Aleksey; Troupaki, Elisavet; Allan, Graham R.; Kashem, Nasir B.
2005-01-01
NASA's requirements for high-reliability, high-performance satellite laser instruments have driven the investigation of many critical components; specifically, 808 nm laser diode array (LDA) pump devices. Performance and comprehensive characterization data of quasi-CW, high-power laser diode arrays are presented.
Reliability of Radioisotope Stirling Convertor Linear Alternator
NASA Technical Reports Server (NTRS)
Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.
2006-01-01
Onboard radioisotope power systems being developed and planned for NASA's deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of reliable performance for the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. The fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.
High Temperature Perforating System for Geothermal Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smart, Moises E.
The objective of this project is to develop a perforating system, consisting of all the explosive components and hardware, capable of reliable performance in high-temperature geothermal wells (>200 ºC). To this end, we focus on the engineering development of these components, characterization of the explosive raw powder, and development of the internal infrastructure to increase production of the explosive from laboratory scale to industrial scale.
Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.
2018-01-01
Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
Relating design and environmental variables to reliability
NASA Astrophysics Data System (ADS)
Kolarik, William J.; Landers, Thomas L.
The combination of space application and nuclear power source demands high reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experiences on the ground have led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.
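The Weibull failure model mentioned above reduces to two standard formulas; the sketch below evaluates the reliability and hazard functions for a two-parameter Weibull model with illustrative parameters, since the study's fitted values are not reported here.

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta) for a two-parameter Weibull model."""
    return np.exp(-(t / eta) ** beta)

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Illustrative parameters only; the study's fitted values are not given here
beta, eta = 1.5, 5000.0   # shape, characteristic life in hours
for t in (100.0, 1000.0, 5000.0):
    print(f"t = {t:6.0f} h  R = {weibull_reliability(t, beta, eta):.4f}  "
          f"h = {weibull_hazard(t, beta, eta):.2e} /h")
```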
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
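To make the modeling idea concrete, the sketch below sets up a small continuous-time Markov model of a triplex system with reconfiguration and evaluates its failure probability numerically with a matrix exponential. It is not the approximation formula derived in the paper, and the failure and recovery rates are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.linalg import expm

lam = 1e-4   # per-hour component failure rate (assumed)
mu  = 3600.0 # per-hour recovery (reconfiguration) rate (assumed)

# States: 0 = three good units, 1 = failure detected (recovering),
#         2 = two good units after reconfiguration, 3 = system failure (absorbing)
Q = np.array([
    [-3*lam,  3*lam,       0.0,    0.0],
    [   0.0, -(mu+2*lam),   mu,   2*lam],  # a second failure before recovery is fatal
    [   0.0,   0.0,      -2*lam,  2*lam],  # duplex: next failure is fatal in this sketch
    [   0.0,   0.0,        0.0,    0.0],
])

p0 = np.array([1.0, 0.0, 0.0, 0.0])
t = 10.0  # mission time, hours
p_t = p0 @ expm(Q * t)          # state probabilities at time t
print("P(system failure by t) =", p_t[3])
```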
High-reliability computing for the smarter planet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather M; Graham, Paul; Manuzzato, Andrea
2010-01-01
The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows. Already, critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.
2002-06-01
projects are converted into bricks and mortar, as Figure 5 illustrates. Making major changes in LCC after projects are turned over to production is... matter experts (SMEs) in the parts, materials, and processes functional area. Data gathering and analysis were conducted through structured interviews... The analysis synthesized feedback and searched for collective issues from the various SMEs on managing PM&P Program requirements, the
Highly-reliable laser diodes and modules for spaceborne applications
NASA Astrophysics Data System (ADS)
Deichsel, E.
2017-11-01
Laser applications are becoming increasingly important in contemporary missions such as Earth observation or optical communication in space. One of these applications is light detection and ranging (LIDAR), which holds great scientific potential for future missions. The Nd:YAG solid-state laser of such a LIDAR system is optically pumped using 808 nm emitting pump sources based on semiconductor laser diodes in quasi-continuous-wave (qcw) operation. Reliable and efficient laser diodes with increased output power are therefore an important requirement for a spaceborne LIDAR system. In the past, many tests were performed regarding the performance and lifetime of such laser diodes. There have also been studies for spaceborne applications, but a test with long operation times at high powers and statistical relevance is still pending. Other applications, such as science packages (e.g., Raman spectroscopy) on planetary rovers, also require reliable high-power light sources. Typically, fiber-coupled laser diode modules are used for such applications. Besides high reliability and lifetime, designs compatible with the harsh environmental conditions must be taken into account. Mechanical loads, such as shock or strong vibration, are expected during take-off or landing. Many temperature cycles with high change rates and large temperature differences must be accommodated due to sun-shadow effects in planetary orbits. Cosmic radiation has a strong impact on optical components and must also be taken into account. Lastly, hermetic sealing must be considered, since vacuum can have adverse effects on optoelectronic components.
Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.
2005-01-01
An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
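The Weibull fast-fracture part of this methodology can be illustrated with a few lines of code. The sketch below applies the simplest uniaxial, volume-flaw form of the Weibull failure probability to a handful of hypothetical finite-element results; it is not the CARES/Life implementation, and the stresses, volumes, and Weibull parameters are assumed values.

```python
import numpy as np

def fast_fracture_pof(stresses, volumes, m, sigma_0):
    """Weibull fast-fracture probability of failure for volume-distributed flaws.

    P_f = 1 - exp(-sum_i V_i * (sigma_i / sigma_0)**m), where sigma_0 is the
    Weibull scale parameter (unit-volume basis) and m the Weibull modulus.
    Compressive (negative) stresses are ignored in this simple sketch.
    """
    s = np.clip(np.asarray(stresses, float), 0.0, None)
    risk = np.sum(np.asarray(volumes, float) * (s / sigma_0) ** m)
    return 1.0 - np.exp(-risk)

# Illustrative element stresses (MPa) and volumes (mm^3) from a hypothetical FE model
stresses = [120.0, 180.0, 240.0, 90.0]
volumes  = [2.0, 1.5, 0.8, 3.0]
print(fast_fracture_pof(stresses, volumes, m=10.0, sigma_0=500.0))
```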
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
The state-of-the-art of dc power distribution systems/components for space applications
NASA Technical Reports Server (NTRS)
Krauthamer, S.
1988-01-01
This report is a survey of the state of the art of high voltage dc systems and components. This information can be used for consideration of an alternative secondary distribution (120 Vdc) system for the Space Station. All HVdc components have been prototyped or developed for terrestrial, aircraft, and spacecraft applications, and are applicable for general space application with appropriate modification and qualification. HVdc systems offer a safe, reliable, low mass, high efficiency and low EMI alternative for Space Station secondary distribution.
Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario
2016-04-01
Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries; however, only a few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis; thus, the results obtained in this research reflect its originality. We also suggest that these results are of practical use for power systems, both for power system maintenance models and for preventive maintenance models.
Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and in diverse industries; however, only a few research papers exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimates for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue and its analysis; thus, the results obtained in this research reflect its originality. We also suggest that these results are of practical use for power systems, both for power system maintenance models and for preventive maintenance models. PMID:23936346
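For readers who want to reproduce the style of calculation, the sketch below evaluates component reliability R(t) = 1 - F(t) from a three-parameter Dagum CDF and combines independent components in series; the parameter values and component names are hypothetical, since the fitted values appear only as formula placeholders in the record above.

```python
import numpy as np

def dagum_cdf(t, a, b, p):
    """Three-parameter Dagum CDF: F(t) = (1 + (t/b)**(-a))**(-p), t > 0."""
    t = np.asarray(t, float)
    return (1.0 + (t / b) ** (-a)) ** (-p)

def component_reliability(t, a, b, p):
    return 1.0 - dagum_cdf(t, a, b, p)

# Hypothetical fitted parameters per component (the paper's values are not reproduced here)
components = [
    dict(a=2.8, b=900.0,  p=0.6),   # e.g. transformer
    dict(a=2.2, b=700.0,  p=0.8),   # e.g. circuit breaker
    dict(a=3.0, b=1200.0, p=0.5),   # e.g. busbar
]

t = 30 * 24.0  # 30 days expressed in hours
r = [component_reliability(t, **c) for c in components]
print("component reliabilities:", np.round(r, 4))
print("series system reliability:", np.prod(r))  # assumes independent components in series
```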
High-Reliability Health Care: Getting There from Here
Chassin, Mark R; Loeb, Jerod M
2013-01-01
Context: Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods: We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings: We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions: Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research and practical experience will be necessary to determine the validity and effectiveness of this framework for high-reliability health care. PMID:24028696
High-reliability health care: getting there from here.
Chassin, Mark R; Loeb, Jerod M
2013-09-01
Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer "project fatigue" because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals' readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Hospitals can make substantial progress toward high reliability by undertaking several specific organizational change initiatives. Further research and practical experience will be necessary to determine the validity and effectiveness of this framework for high-reliability health care. © 2013 The Authors. The Milbank Quarterly published by Wiley Periodicals Inc. on behalf of Milbank Memorial Fund.
Reliability analysis of component-level redundant topologies for solid-state fault current limiter
NASA Astrophysics Data System (ADS)
Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam
2018-04-01
Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to solve this reliability challenge is component-level redundant design. There are four possible configurations for redundant design at the component level. This article presents a comparative reliability analysis between different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. The junction temperature is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that under different conditions, different configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
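A simplified version of the comparison described above can be sketched as a Monte Carlo competing-risk model: each switch fails in either an open-circuit or a short-circuit mode, one mode is masked by the redundant partner depending on whether the pair is connected in series or in parallel, and the failure rate scales with junction temperature. The rate values, the open-circuit fraction, and the doubling-per-10 °C temperature rule are assumptions for illustration, not the article's thermal model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def switch_rate(t_junction_c, lam_ref=2e-6, t_ref_c=75.0, factor_per_10c=2.0):
    """Assumed temperature scaling of the per-hour switch failure rate:
    the rate doubles for every 10 C rise above the reference junction temperature."""
    return lam_ref * factor_per_10c ** ((t_junction_c - t_ref_c) / 10.0)

def mttf_pair(t_junction_c, p_open=0.4, series=True):
    """Monte Carlo MTTF (hours) of a two-switch redundant pair with open/short modes.

    Series pair: a short in one switch is masked by the other; an open fails the pair.
    Parallel pair: an open is masked; a short fails the pair.
    (Simplified competing-risk model; ignores load sharing after the first failure.)
    """
    lam = switch_rate(t_junction_c)
    times = rng.exponential(1.0 / lam, size=(N, 2))   # failure time of each switch
    opens = rng.random((N, 2)) < p_open               # failure mode of each switch
    fatal = opens if series else ~opens               # which mode is fatal for the pair
    t_fail = np.where(fatal.any(axis=1),
                      np.where(fatal, times, np.inf).min(axis=1),  # first fatal-mode failure
                      times.max(axis=1))              # both benign-mode: pair fails at second
    return t_fail.mean()

for tj in (75.0, 100.0, 125.0):
    print(tj, "C  series:", round(mttf_pair(tj, series=True)),
          "h  parallel:", round(mttf_pair(tj, series=False)), "h")
```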
Chen, Lin-Wei; Wang, Qin; Qin, Kun-Ming; Wang, Xiao-Li; Wang, Bin; Chen, Dan-Ni; Cai, Bao-Chang; Cai, Ting
2016-02-01
The present study was designed to develop and validate a sensitive and reliable ultra-high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-QTOF/MS) method to separate and identify the chemical constituents of Qixue Shuangbu Tincture (QXSBT), a classic traditional Chinese medicine (TCM) prescription. Under the optimized UPLC and QTOF/MS conditions, 56 components in QXSBT, including chalcones, triterpenoids, protopanaxatriol, flavones, and flavanones, were identified and tentatively characterized within a running time of 42 min. The components were identified by comparing the retention times, accurate masses, and characteristic mass spectrometric fragmentation ions, and by matching empirical molecular formulas with those of published compounds. In conclusion, the established UPLC-QTOF/MS method was reliable for rapid identification of complicated components in TCM prescriptions. Copyright © 2016 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Witantyo; Rindiyah, Anita
2018-03-01
According to data from maintenance planning and control, the highest inventory value lies in non-routine components. Maintenance components are components procured on the basis of maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by re-evaluating the components required by maintenance activities. The roller mill system was chosen as the case study because it has the highest record of unscheduled downtime. The components required for each maintenance activity are determined from their failure distributions, so the number of components needed can be predicted. Moreover, these components can be reclassified from non-routine to routine components, so that procurement can be carried out regularly. Based on the analysis, the failures occurring in almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 components were reclassified from non-routine to routine components. The reliability of and demand for those components were then calculated for a one-year operation period. Based on these findings, it is suggested that all of the components be replaced during overhaul activity to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required for each maintenance activity, so that procurement cost decreases and system reliability increases.
Computing Reliabilities Of Ceramic Components Subject To Fracture
NASA Technical Reports Server (NTRS)
Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.
1992-01-01
CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis program (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs, obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
NASA Astrophysics Data System (ADS)
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
In a company that produces refined sugar, the production floor has not reached the required level of critical machine availability because the machines often suffer damage (breakdown). This results in sudden losses of production time and production opportunities. The problem can be addressed with Reliability Engineering methods, in which a statistical approach is applied to historical failure data to identify the pattern of the distribution. The method provides the reliability, failure rate, and availability level of a machine over the scheduled maintenance interval. The distribution test of the time-between-failures (MTTF) data shows that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. The distribution test of the time-to-repair (MTTR) data shows that the flexible hose component follows an exponential distribution, while the teflon cone lifting component follows a Weibull distribution. For the flexible hose component on a 720-hour replacement schedule, the calculated reliability is 0.2451 and the availability is 0.9960, while for the critical teflon cone lifting component on a 1944-hour replacement schedule, the reliability is 0.4083 and the availability is 0.9927.
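The reliability and availability figures quoted above come from standard formulas once a lifetime distribution has been fitted; the sketch below shows the calculation pattern with scipy.stats, using hypothetical lognormal and exponential parameters rather than the study's fitted values.

```python
from scipy import stats

# Hypothetical fitted models (the study's fitted parameters are not reproduced here)
ttf_flexible_hose = stats.lognorm(s=0.9, scale=600.0)    # time-to-failure, hours
ttr_flexible_hose = stats.expon(scale=6.0)               # time-to-repair, hours

t_replace = 720.0  # scheduled replacement interval, hours
reliability = ttf_flexible_hose.sf(t_replace)            # R(t) = P(T > t)

mttf = ttf_flexible_hose.mean()
mttr = ttr_flexible_hose.mean()
availability = mttf / (mttf + mttr)                      # inherent (steady-state) availability

print(f"R({t_replace:.0f} h) = {reliability:.4f}, A = {availability:.4f}")
```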
NASA Technical Reports Server (NTRS)
Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.
2000-01-01
Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art, devices has become a significant portion of the job for the parts engineer. Assembling a reliable high-performance electronic system that includes COTS components requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk involved to the total system. Currently, there are no industry standard procedures for accomplishing this risk mitigation. This paper will present the approaches for reducing the risk of using PEM devices in space flight systems as developed by two independent laboratories. The JPL procedure primarily involves tailored screening with an accelerated stress philosophy, while the APL procedure is primarily a lot qualification procedure. Both laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.
Design of fuel cell powered data centers for sufficient reliability and availability
NASA Astrophysics Data System (ADS)
Ritchie, Alexa J.; Brouwer, Jacob
2018-04-01
It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
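A minimal sketch of the redundancy calculation described above (written in Python rather than the MATLAB mentioned) finds the smallest number of independent parallel units whose combined availability meets a six-nines target; the single-unit availability values are assumptions for illustration.

```python
def parallel_units_needed(unit_availability: float, target: float = 0.999999) -> int:
    """Smallest n such that 1 - (1 - a)**n >= target, assuming independent parallel units."""
    n, combined = 0, 0.0
    while combined < target:
        n += 1
        combined = 1.0 - (1.0 - unit_availability) ** n
    return n

# Assumed single fuel-cell-system availabilities (illustrative only)
for a in (0.98, 0.995, 0.999):
    print(f"unit availability {a}: need {parallel_units_needed(a)} units in parallel")
```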
Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco
2015-01-01
The most commonly used cell disruption procedures may present a lack of reproducibility, which introduces significant errors in the quantification of intracellular components. In this work, an approach consisting in the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important widening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling and cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative, non-complex approach developed to evaluate the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest.
Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco
2015-01-01
The most commonly used cell disruption procedures may present a lack of reproducibility, which introduces significant errors in the quantification of intracellular components. In this work, an approach consisting in the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important widening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling and cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative, non-complex approach developed to evaluate the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest. PMID:26284241
Kenny, Sarah J; Palacios-Derflingher, Luz; Owoeye, Oluwatoyosi B A; Whittaker, Jackie L; Emery, Carolyn A
2018-03-15
Critical appraisal of research investigating risk factors for musculoskeletal injury in dancers suggests that high-quality reliability studies are lacking. The purpose of this study was to determine between-day reliability of pre-participation screening (PPS) components in pre-professional ballet and contemporary dancers. Thirty-eight dancers (35 female, 3 male; median age: 18 years; range: 11 to 30 years) participated. Screening components (Athletic Coping Skills Inventory-28, body mass index, percent total body fat, total bone mineral density, Foot Posture Index-6, hip and ankle range of motion, three lumbopelvic control tasks, unipedal dynamic balance, and the Y-Balance Test) were conducted one week apart. Intra-class correlation coefficients (ICCs: 95% confidence intervals), standard error of measurement, minimal detectable change (MDC), Bland-Altman methods of agreement [95% limits of agreement (LOA)], Cohen's kappa coefficients, standard error, and percent agreements were calculated. Depending on the screening component, ICC estimates ranged from 0.51 to 0.98, kappa coefficients varied between -0.09 and 0.47, and percent agreement spanned 71% to 95%. Wide 95% LOA were demonstrated by Foot Posture Index-6 (right: -6.06, 7.31), passive hip external rotation (right: -9.89, 16.54), and passive supine turnout (left: -15.36, 17.58). The PPS components examined demonstrated moderate to excellent relative reliability with mean between-day differences less than MDC, or sufficient percent agreement, across all assessments. However, due to wide 95% limits of agreement, the Foot Posture Index-6 and passive hip range of motion are not recommended for screening injury risk in pre-professional dancers.
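The agreement statistics used above follow standard formulas; the sketch below computes the standard error of measurement, the 95% minimal detectable change, and Bland-Altman limits of agreement from hypothetical two-day measurements (the numbers are invented for illustration and are not the study's data).

```python
import numpy as np

def sem_mdc(sd: float, icc: float):
    """Standard error of measurement and 95% minimal detectable change."""
    sem = sd * np.sqrt(1.0 - icc)
    mdc95 = 1.96 * np.sqrt(2.0) * sem
    return sem, mdc95

def bland_altman_loa(day1, day2):
    """Mean difference (bias) and 95% limits of agreement between two test days."""
    diff = np.asarray(day1, float) - np.asarray(day2, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical example: passive hip external rotation (degrees) measured on two days
day1 = [42.0, 38.5, 45.0, 40.0, 36.5, 44.0]
day2 = [44.5, 37.0, 47.5, 38.0, 39.0, 42.5]
print(sem_mdc(sd=np.std(day1, ddof=1), icc=0.75))
print(bland_altman_loa(day1, day2))
```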
Thermal Management and Reliability of Automotive Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E
Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence development of components that meet aggressive performance and cost targets through development and characterization of cooling technologies, and thermal characterization and improvements of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost- and time-effective design.
One-year test-retest reliability of intrinsic connectivity network fMRI in older adults
Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B; Kramer, Joel H.; Seeley, William W.
2014-01-01
“Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free, independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remains limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
SSME component assembly and life management expert system
NASA Technical Reports Server (NTRS)
Ali, M.; Dietz, W. E.; Ferber, H. J.
1989-01-01
The space shuttle utilizes several rocket engine systems, all of which must function with a high degree of reliability for successful mission completion. The space shuttle main engine (SSME) is by far the most complex of the rocket engine systems and is designed to be reusable. The reusability of spacecraft systems introduces many problems related to testing, reliability, and logistics. Components must be assembled from parts inventories in a manner which will most effectively utilize the available parts. Assembly must be scheduled to efficiently utilize available assembly benches while still maintaining flight schedules. Assembled components must be assigned to as many contiguous flights as possible, to minimize component changes. Each component must undergo a rigorous testing program prior to flight. In addition, testing and assembly of flight engines and components must be done in conjunction with the assembly and testing of developmental engines and components. The development, testing, manufacture, and flight assignments of the engine fleet involve the satisfaction of many logistical and operational requirements, subject to many constraints. The purpose of the SSME Component Assembly and Life Management Expert System (CALMES) is to assist the engine assembly and scheduling process, and to ensure that these activities utilize available resources as efficiently as possible.
Laser beam soldering of micro-optical components
NASA Astrophysics Data System (ADS)
Eberhardt, R.
2003-05-01
Ongoing miniaturisation, higher requirements within optical assemblies, and the processing of temperature-sensitive components demand innovative selective joining techniques. So far, adhesive bonding has primarily been used to assemble and adjust hybrid micro-optical systems. However, the properties of the organic polymers used for the adhesives limit the application of these systems. In the fields of telecommunication and lithography, an enhancement of existing joining techniques is necessary to improve properties like humidity resistance, laser stability, UV stability, thermal-cycle reliability and lifetime reliability. Against this background, laser beam soldering of optical components is a reasonable joining technology alternative. Properties such as time- and area-restricted energy input, energy input that can be controlled via the process temperature, the possibility of direct and indirect heating of the components, and the absence of mechanical contact between the joining tool and the components provide good conditions to meet the requirements on a joining technology for sensitive optical components. In addition to the laser soldering head, the assembly of optical components requires positioning units to adjust the position of the components with high accuracy before joining. Furthermore, suitable measurement methods to characterize the soldered assemblies (for instance in terms of position tolerances) need to be developed.
Power Electronics Thermal Management Research: Annual Progress Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, Gilberto
The objective for this project is to develop thermal management strategies to enable efficient and high-temperature wide-bandgap (WBG)-based power electronic systems (e.g., emerging inverter and DC-DC converter). Reliable WBG devices are capable of operating at elevated temperatures (≥175 °C). However, packaging WBG devices within an automotive inverter and operating them at higher junction temperatures will expose other system components (e.g., capacitors and electrical boards) to temperatures that may exceed their safe operating limits. This creates challenges for thermal management and reliability. In this project, system-level thermal analyses are conducted to determine the effect of elevated device temperatures on inverter components. Thermal modeling work is then conducted to evaluate various thermal management strategies that will enable the use of highly efficient WBG devices with automotive power electronic systems.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field Programmable Gate Array (FPGA) integrated circuits (ICs) are among the key electronic components in the complex avionic systems of today's sophisticated launch and space vehicles, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
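One way to read the "total failure rate" idea above is as a simple series roll-up of hardware, residual design/HDL, and radiation-induced contributions. The sketch below shows such a roll-up in FIT units; every rate, the configuration-memory size, and the mitigation factor are assumptions for illustration, not values from the paper or any handbook.

```python
import math

# All rates in FIT (failures per 10^9 device-hours); values are illustrative only
lambda_hardware = 25.0        # physical die/package failure rate (assumed)
lambda_hdl      = 10.0        # residual design/HDL defect rate after reliability growth (assumed)
seu_per_bit_fit = 1e-4        # assumed upset rate per configuration bit for the environment
config_bits     = 30e6        # assumed configuration memory size, bits
unmasked_frac   = 0.02        # fraction of upsets not masked by TMR/scrubbing (assumed)

lambda_seu   = seu_per_bit_fit * config_bits * unmasked_frac
lambda_total = lambda_hardware + lambda_hdl + lambda_seu   # simple series roll-up

mission_hours = 500.0
p_fail = 1.0 - math.exp(-(lambda_total * 1e-9) * mission_hours)
print(f"total rate = {lambda_total:.1f} FIT, "
      f"P(failure over {mission_hours:.0f} h) = {p_fail:.2e}")
```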
Limitations of Reliability for Long-Endurance Human Spaceflight
NASA Technical Reports Server (NTRS)
Owens, Andrew C.; de Weck, Olivier L.
2016-01-01
Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
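The diminishing returns described above can be seen with two textbook relations: the exponential probability that a unit survives the mission, and a Poisson estimate of the spares needed for a given confidence. The sketch below uses an assumed three-year mission and illustrative MTBF values; doubling the MTBF roughly halves the expected failure count but reduces the spare count sublinearly.

```python
import math

def p_no_failure(mtbf_hours: float, mission_hours: float) -> float:
    """Exponential model: probability a single unit survives the mission without failure."""
    return math.exp(-mission_hours / mtbf_hours)

def spares_needed(mtbf_hours: float, mission_hours: float, confidence: float = 0.99) -> int:
    """Smallest spare count k such that P(failures <= k) >= confidence (Poisson demand model)."""
    lam = mission_hours / mtbf_hours
    k, term = 0, math.exp(-lam)
    cdf = term
    while cdf < confidence:
        k += 1
        term *= lam / k
        cdf += term
    return k

mission = 3 * 365 * 24.0  # assumed ~3-year Mars mission, hours
for mtbf in (5_000.0, 10_000.0, 20_000.0, 40_000.0):
    print(f"MTBF {mtbf:>8.0f} h: R = {p_no_failure(mtbf, mission):.3f}, "
          f"spares for 99% confidence = {spares_needed(mtbf, mission)}")
```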
NASA Technical Reports Server (NTRS)
Singh, M.
1999-01-01
Ceramic matrix composite (CMC) components are being designed, fabricated, and tested for a number of high temperature, high performance applications in aerospace and ground based systems. The critical need for and the role of reliable and robust databases for the design and manufacturing of ceramic matrix composites are presented. A number of issues related to engineering design, manufacturing technologies, joining, and attachment technologies are also discussed. Examples of various ongoing activities in the area of composite databases, designing to codes and standards, and design for manufacturing are given.
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
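Behind a reliability-based design constraint of this kind is a stress-strength calculation; the sketch below shows the normal-normal case, where increasing a sizing variable shifts the mean strength and raises the reliability p. It is only a stand-in for the MSC/Nastran, NESSUS/FPI, and CometBoards tool chain described above, and the numbers are illustrative.

```python
from math import sqrt
from statistics import NormalDist

def failure_probability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength < stress) for independent, normally distributed strength and stress."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return NormalDist().cdf(-z)

# Illustrative numbers only (MPa); increasing a sizing variable shifts mean strength
for mu_strength in (400.0, 450.0, 500.0, 550.0):
    pf = failure_probability(mu_strength, 30.0, 300.0, 40.0)
    print(f"mean strength {mu_strength:.0f} MPa -> reliability p = {1.0 - pf:.6f}")
```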
Malo Cerrato, Sara; Bataller Sallent, Sílvia; Casas Aznar, Ferran; Gras Pérez, Ma Eugenia; González Carrasco, Mònica
2011-11-01
The aim of this study is to carry out a psychometric study of the AF5 scale in a sample of 4,825 Catalan subjects from 11 to 63 years old. They are students from compulsory secondary education (ESO), high school, middle-level vocational training (CFGM), and university. Using a principal component analysis (PCA), the theoretical validity of the components is established and the reliability of the instrument is also analyzed. Differential analyses are performed by gender and normative group using a 2 x 6 factorial design. The normative group variable includes the different levels classified into 6 sub-groups: university, post-compulsory secondary education (high school and CFGM), 4th of ESO, 3rd of ESO, 2nd of ESO and 1st of ESO. The results indicate that the reliability of the Catalan version of the scale is similar to that of the original scale. The factorial structure also fits the original model established beforehand. Significant differences by normative group are observed in the four components of self-concept explored (social, family, academic/occupational and physical). By gender, significant differences appear in the physical, academic and social self-concept components, but not in the family component.
Space Transportation Main Engine
NASA Technical Reports Server (NTRS)
Monk, Jan C.
1992-01-01
The topics are presented in viewgraph form and include the following: Space Transportation Main Engine (STME) definition, design philosophy, robust design, maximum design condition, casting vs. machined and welded forgings, operability considerations, high reliability design philosophy, engine reliability enhancement, low cost design philosophy, engine systems requirements, STME schematic, fuel turbopump, liquid oxygen turbopump, main injector, and gas generator. The major engine components of the STME and the Space Shuttle Main Engine are compared.
Critical issues in assuring long lifetime and fail-safe operation of optical communications network
NASA Astrophysics Data System (ADS)
Paul, Dilip K.
1993-09-01
Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force to set the goals as well as priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in space environment is presented.
Design-for-reliability (DfR) of aerospace electronics: Attributes and challenges
NASA Astrophysics Data System (ADS)
Bensoussan, A.; Suhir, E.
The next generation of multi-beam satellite systems that would be able to provide effective interactive communication services will have to operate within a highly flexible architecture. One option to develop such flexibility is to employ microwaves and/or optoelectronic components and to make them reliable. The use of optoelectronic devices, equipment and systems will indeed result in significant improvement in the state of the art, but only provided that the new designs suggest a novel and effective architecture that combines the merits of good functional performance, satisfactory mechanical (structural) reliability and high cost effectiveness. The obvious challenge is the ability to design and fabricate equipment based on EEE components that would be able to successfully withstand harsh space environments for the entire duration of the mission. It is imperative that the major players in the space industry, such as manufacturers, industrial users, and space agencies, understand the importance and the limits of the achievable quality and reliability of optoelectronic devices operated in harsh environments. It is equally imperative that the physics of possible failures is well understood and, if necessary, minimized, and that adequate Quality Standards are developed and employed. The space community has to identify and to develop the strategic approach for validating optoelectronic products. This should be done with consideration of numerous intrinsic and extrinsic requirements for the systems' performance. When considering a particular next generation optoelectronic space system, the space community needs to address the following major issues: proof of concept for this system, proof of reliability and proof of performance. This should be done taking into account the specifics of the anticipated application. High operational reliability cannot be left to the prognostics and health monitoring/management (PHM) effort and stage, no matter how important and effective such an effort might be. Reliability should be pursued at all the stages of the equipment lifetime: design, product development, manufacturing, burn-in testing and, of course, subsequent PHM after the space apparatus is launched and operated.
High voltage requirements and issues for the 1990's [for spacecraft power supplies]
NASA Technical Reports Server (NTRS)
Dunbar, W. G.; Faymon, K. A.
1984-01-01
The development of high-power, high-voltage space systems will require advances in power generation and processing. The systems must be reliable, adaptable, and durable for space mission success. The issues that must be resolved in order to produce a high-power system are the reduction of component and module weight and volume and the creation of a reliable, high-repetition-rate pulse power processor. Capacitor energy density must be increased to twice its present value, and packaging must be reduced by a factor of 10 to 20. The packaging must also protect the system from interaction with the natural space environment and with the induced environment produced by the interaction between spacecraft systems and the environment.
Oliveira, Tássia Boeno de; Azevedo Peixoto, Leonardo de; Teodoro, Paulo Eduardo; Alvarenga, Amauri Alves de; Bhering, Leonardo Lopes; Campo, Clara Beatriz Hoffmann
2018-01-01
Asian rust affects the physiology of soybean plants and causes losses in yield. Repeatability coefficients may help breeders to know how many measurements are needed to obtain a suitable reliability for a target trait. Therefore, the objectives of this study were to determine the repeatability coefficients of 14 traits in soybean plants inoculated with Phakopsora pachyrhizi and to establish the minimum number of measurements needed to predict the breeding value with high accuracy. Experiments were performed in a 3x2 factorial arrangement with three treatments and two inoculations in a random block design. Repeatability coefficients, coefficients of determination and number of measurements needed to obtain a certain reliability were estimated using ANOVA, principal component analysis based on the covariance matrix and the correlation matrix, structural analysis and mixed model. It was observed that the principal component analysis based on the covariance matrix out-performed other methods for almost all traits. Significant differences were observed for all traits except internal CO2 concentration for the treatment effects. For the measurement effects, all traits were significantly different. In addition, significant differences were found for all Treatment x Measurement interaction traits except coumestrol, chitinase and chlorophyll content. Six measurements were suitable to obtain a coefficient of determination higher than 0.7 for all traits based on principal component analysis. The information obtained from this research will help breeders and physiologists determine exactly how many measurements are needed to evaluate each trait in soybean plants infected by P. pachyrhizi with a desirable reliability.
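To make the measurement-number reasoning above concrete, the following is a minimal sketch, under simplified assumptions, of estimating a repeatability coefficient from a one-way ANOVA of simulated genotype-by-measurement data and deriving how many measurements a target coefficient of determination would require; the genotype count, variance components, and target value are hypothetical, and the sketch does not reproduce the study's principal component, structural analysis, or mixed model estimators.

```python
import numpy as np

# Hypothetical data: 20 genotypes (rows) measured m = 6 times (columns) for one trait.
rng = np.random.default_rng(1)
g, m = 20, 6
genotype_effects = rng.normal(50.0, 8.0, size=g)
data = genotype_effects[:, None] + rng.normal(0.0, 6.0, size=(g, m))

# One-way ANOVA mean squares.
grand = data.mean()
msg = m * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)            # among genotypes
mse = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))

var_g = (msg - mse) / m
r = var_g / (var_g + mse)                     # repeatability coefficient
R2 = m * r / (1 + (m - 1) * r)                # determination achieved with m measurements

target = 0.7                                  # desired coefficient of determination
m_needed = target * (1 - r) / ((1 - target) * r)
print(f"r = {r:.2f}, R2 with {m} measurements = {R2:.2f}, "
      f"measurements needed for R2 >= {target}: {int(np.ceil(m_needed))}")
```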
Understanding software faults and their role in software reliability modeling
NASA Technical Reports Server (NTRS)
Munson, John C.
1994-01-01
This study is a direct result of an ongoing project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied to model its reliability behavior. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses, such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to use to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis.
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
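As an illustration of the principal components reasoning above, the sketch below uses synthetic metric data (simulated LOC, statement count, and cyclomatic complexity, not measurements from the avionics system) to show how standardized, highly correlated metrics map into orthogonal domain scores and how the variance explained by each component is read from the eigenvalues of the correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw metrics for 200 modules; LOC and Stmts are generated to be nearly collinear.
loc = rng.lognormal(5.0, 0.6, 200)
stmts = 0.8 * loc + rng.normal(0.0, 5.0, 200)
vg = 0.02 * loc + rng.normal(0.0, 2.0, 200)
metrics = np.column_stack([loc, stmts, vg])

# Standardize, then diagonalize the correlation matrix: eigenvectors are the
# principal components, eigenvalues the variance each orthogonal domain explains.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0, ddof=1)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()

scores = z @ eigvecs[:, order]        # orthogonal (uncorrelated) domain scores
print("variance explained by each component:", np.round(explained, 3))
print("correlation of first two domain scores:",
      np.round(np.corrcoef(scores[:, 0], scores[:, 1])[0, 1], 6))
```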
Toward a virtual platform for materials processing
NASA Astrophysics Data System (ADS)
Schmitz, G. J.; Prahl, U.
2009-05-01
Any production is based on materials that eventually become components of a final product. Material properties, which are determined by the microstructure of the material, are thus of utmost importance both for the productivity and reliability of processing during production and for the application and reliability of the product components. A sound prediction of materials properties is therefore highly important. Such a prediction requires tracking of microstructure and property evolution along the entire component life cycle, starting from a homogeneous, isotropic and stress-free melt and eventually ending in failure under operational load. This article outlines ongoing activities at RWTH Aachen University aimed at establishing a virtual platform for materials processing comprising a virtual, integrative numerical description of processes and of the microstructure evolution along the entire production chain, extending further toward microstructure and properties evolution under operational conditions.
CCARES: A computer algorithm for the reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1993-01-01
Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.
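The fast-fracture idea behind CARES-type reliability codes can be illustrated with a minimal two-parameter Weibull weakest-link sketch; the element volumes, stresses, and Weibull parameters below are hypothetical, and the actual CCARES algorithm couples multiaxial reliability models to finite element output rather than this single-stress, unit-volume simplification.

```python
import numpy as np

# Hypothetical element data from a finite-element run: volumes (mm^3) and
# first principal stresses (MPa) for a ceramic component.
volumes = np.array([1.2, 0.8, 1.5, 1.1, 0.9])
stresses = np.array([180.0, 220.0, 160.0, 240.0, 200.0])

# Assumed Weibull parameters (modulus m, scale sigma_0 referenced to unit volume);
# only tensile stresses contribute to the risk of rupture.
m, sigma_0 = 8.0, 400.0

risk = np.sum(volumes * np.clip(stresses, 0.0, None) ** m) / sigma_0 ** m
reliability = np.exp(-risk)     # weakest-link survival probability
print(f"Weakest-link survival probability: {reliability:.4f}")
```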
Artifact removal in the context of group ICA: a comparison of single-subject and group approaches
Du, Yuhui; Allen, Elena A.; He, Hao; Sui, Jing; Wu, Lei; Calhoun, Vince D.
2018-01-01
Independent component analysis (ICA) has been widely applied to identify intrinsic brain networks from fMRI data. Group ICA computes group-level components from all data and subsequently estimates individual-level components to recapture inter-subject variability. However, the best approach to handle artifacts, which may vary widely among subjects, is not yet clear. In this work, we study and compare two ICA approaches for artifacts removal. One approach, recommended in recent work by the Human Connectome Project, first performs ICA on individual subject data to remove artifacts, and then applies a group ICA on the cleaned data from all subjects. We refer to this approach as Individual ICA based artifacts Removal Plus Group ICA (IRPG). A second proposed approach, called Group Information Guided ICA (GIG-ICA), performs ICA on group data, then removes the group-level artifact components, and finally performs subject-specific ICAs using the group-level non-artifact components as spatial references. We used simulations to evaluate the two approaches with respect to the effects of data quality, data quantity, variable number of sources among subjects, and spatially unique artifacts. Resting-state test-retest datasets were also employed to investigate the reliability of functional networks. Results from simulations demonstrate GIG-ICA has greater performance compared to IRPG, even in the case when single-subject artifacts removal is perfect and when individual subjects have spatially unique artifacts. Experiments using test-retest data suggest that GIG-ICA provides more reliable functional networks. Based on high estimation accuracy, ease of implementation, and high reliability of functional networks, we find GIG-ICA to be a promising approach. PMID:26859308
A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2003-01-01
The mechanical design of microelectromechanical systems (MEMS), particularly for micropower generation applications, requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center's Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
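To make the architecture-based idea concrete, here is a minimal Cheung-style sketch in which each tool component is a state of a discrete-time Markov chain; the transition probabilities and component reliabilities are hypothetical, and the paper's own formulation additionally uses COSMIC-FFP functional size, which is not modeled here.

```python
import numpy as np

# Hypothetical control-flow transition probabilities between three tool
# components (row i: where control goes after component i); the last
# component hands control to the absorbing "correct output" state.
P = np.array([
    [0.0, 0.7, 0.3],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],
])
R = np.array([0.99, 0.98, 0.995])   # assumed per-component reliabilities

# Cheung-style architecture-based model: attenuate each transition by the
# reliability of the component executing it, then sum expected visits.
Q = np.diag(R) @ P
S = np.linalg.inv(np.eye(len(R)) - Q)
system_reliability = S[0, -1] * R[-1]   # start in component 0, finish after the last
print(f"Estimated tool reliability: {system_reliability:.4f}")
```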
Lersilp, Suchitporn; Suchart, Sumana
2017-01-01
The purpose of this study was to improve upon the first version of the basic work skills assessment tool for adolescents with autism spectrum disorder (ASD) and to examine interrater and intrarater reliability using the intraclass correlation coefficient (ICC). The modified tool includes 2 components: (1) three tasks measuring work abilities and work attitudes and (2) a form to record the number of verbal and nonverbal prompts. 26 participants were selected by purposive sampling and divided into 3 groups: group 1 (10 subjects, aged 11–13 years), group 2 (10, aged 14–16 years), and group 3 (6, aged 17–19 years). The results show that interrater reliabilities of work abilities and work attitudes were high in all groups, except that work attitude in group 1 was moderate. Intrarater reliabilities of work abilities were high in groups 1 and 2 and moderate in group 3. Intrarater reliabilities of work attitudes were high in groups 1 and 3 but moderate in group 2. Nevertheless, interrater and intrarater reliabilities for the total scores of all groups were high, which implies that this tool is applicable for adolescents aged 11–19 years, with consideration of relevance for each group. PMID:28280769
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin
2015-03-01
Reliability allocation for computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the reliability allocation problem for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
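For context on the conventional baselines mentioned above, the sketch below shows a simple weighting-factor allocation for a series system; the component names, weights, and system target are hypothetical, and the paper's cubic FMEA-transformed method is not reproduced here.

```python
import math

# Conventional weighting-factor (ARINC-style) allocation for a series system.
system_target = 0.95          # required system reliability over one period (hypothetical)
weights = {                   # assumed relative failure contributions
    "spindle system": 0.35,
    "feed system": 0.25,
    "tool changer": 0.20,
    "electrical system": 0.20,
}

total = sum(weights.values())
allocated = {name: system_target ** (w / total) for name, w in weights.items()}

for name, r in allocated.items():
    print(f"{name:>18}: allocated reliability {r:.4f}")

# The product of the allocated reliabilities recovers the series-system target.
assert abs(math.prod(allocated.values()) - system_target) < 1e-9
```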
Ramírez-Vélez, Robinson; Rodrigues-Bezerra, Diogo; Correa-Bautista, Jorge Enrique; Izquierdo, Mikel; Lobelo, Felipe
2015-01-01
Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in the Latin American youth population. This study's aim was to examine the reliability of health-related physical fitness tests that were used in the Colombian health promotion “Fuprecol study”. Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: 1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; 2) musculoskeletal component: handgrip and standing long jump test; 3) motor component: speed/agility test (4x10 m shuttle run); 4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); 5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed two times, 1 week apart on the same day of the week, except for the SRT which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed in the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland–Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% in all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic error (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit-and-reach (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the “Fuprecol study” health-related physical fitness battery, administered by physical education teachers, was reliable for measuring health-related components of fitness in children and adolescents aged 9–17.9 years old in a school setting in Colombia. PMID:26474474
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
Space reliability technology - A historical perspective
NASA Technical Reports Server (NTRS)
Cohen, H.
1984-01-01
The progressive improvement in the reliability of launch vehicles is traced from the Vanguard rocket to the STS. The Vanguard, built with minimal redundancy and a high mass ratio, was used as an operational vehicle midway through its test program in an attempt to meet the perceived challenge represented by Sputnik. The fourth Vanguard failed due to inadequate contamination prevention and a lack of inspection ports. Automatic firing sequences were adopted for the Titan rockets, which were an order of magnitude larger than the Vanguard and therefore had room for interior inspections. Qualification testing and reporting were introduced for components, along with X-ray inspection of fuel tank welds. Dual systems were added for flight-critical components when the Titan became man-rated for the Gemini program. Designs incorporated full failure mode, effects, and criticality analyses for the Apollo program, which exposed the limits of applicability of numerical reliability models. Fault tree analyses and program milestone reviews were initiated. The worth of man-in-the-loop in space activities for reliability was demonstrated with the rescue of Skylab after solar panel and meteoroid shield failures. It is now the reliability of the payload, rather than the vehicle, that is questioned for Shuttle launches.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
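A minimal simulation in the spirit of the study can be written in a few lines; the sketch below computes a two-way random, single-measure ICC(2,1) from simulated ratings in which subjects are drawn from a uniform distribution, with the subject count, number of raters, and rater error variance chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

def icc2_1(ratings):
    """Two-way random, single-measure ICC(2,1) from an n_subjects x n_raters matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-subject mean square
    msc = ss_cols / (k - 1)               # between-rater mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 80 subjects, 3 raters, subjects drawn from a uniform
# "true severity" distribution; the rater error variance controls disagreement.
true_scores = rng.uniform(0, 10, size=80)
ratings = true_scores[:, None] + rng.normal(0, 1.5, size=(80, 3))
print(f"ICC(2,1) = {icc2_1(ratings):.2f}")
```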
The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.
Bagley, C
1980-03-01
The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.
State recovery and lockstep execution restart in a system with multiprocessor pairing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gara, Alan; Gschwind, Michael K; Salapura, Valentina
System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread for high reliability connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus. Each selectively paired processor core includes a transactional execution facility, wherein the system is configured to enable processor rollback to a previous state and reinitialize lockstep execution in order to recover from an incorrect execution when an incorrect execution has been detected by the selective pairing facility.
Cross-cultural adaptation and validation of the Korean version of the neck disability index.
Song, Kyung-Jin; Choi, Byung-Wan; Choi, Byung-Ryeul; Seo, Gyeu-Beom
2010-09-15
Validation of a translated, culturally adapted questionnaire. The purpose of this study is to translate and culturally adapt the Neck Disability Index (NDI) and to validate the use of the derived version in Korean patients. Although several valid measures exist for the measurement of neck pain and functional impairment, these measures have not yet been validated in a Korean version. The NDI was linguistically translated into Korean, and the prefinal version was assessed and modified in a pilot study. The reliability and validity of the derived Korean version were examined in 78 patients with degenerative cervical spine disease. Test-retest reliability, internal consistency, and construct validity were investigated by comparing Visual Analogue Scale (VAS) and Short Form Health Survey (SF-36) scores. Factor analysis of the Korean NDI extracted 2 factors with eigenvalues >1. The intraclass correlation coefficient for test-retest reliability was 0.93. Reliability, estimated by internal consistency, had a Cronbach alpha value of 0.82. The correlation between NDI and VAS scores was r = 0.49, and the correlation between NDI and SF-36 scores was r = -0.44. The physical health component score of the SF-36 was highly correlated with the NDI, and the correlation between VAS scores and the mental health component score of the SF-36 was high. The derived Korean version of the NDI was found to be a reliable and valid instrument for measuring disability in Korean patients with cervical problems. The authors recommend its use in future Korean clinical studies.
Fundamentals of endoscopic surgery: creation and validation of the hands-on test.
Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M
2014-03-01
The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination, and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. They were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing their scores. Passing scores were determined by a contrasting groups methodology and use of receiver operating characteristic curves. A total of 160 participants (17 % GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73, with participants' level of endoscopic experience providing evidence of their validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets rigorous standards of reliability and validity required for high-stakes examinations, and, together with the knowledge component, may help contribute to the definition and determination of competence in endoscopy.
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.
1992-01-01
An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The new interface program for the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
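In its simplest static form, the modularization idea of computing module reliabilities separately and then combining them reduces to series/parallel combination rules; the sketch below shows only that simplest form, with hypothetical module names and values, and does not reproduce RML's message passing or failure-mode effects simulation.

```python
# Combine separately computed module reliabilities into a system figure.
def series(*rs):
    out = 1.0
    for r in rs:
        out *= r                 # all modules must survive
    return out

def parallel(*rs):
    out = 1.0
    for r in rs:
        out *= (1.0 - r)         # system fails only if every redundant module fails
    return 1.0 - out

sensor = parallel(0.95, 0.95, 0.95)     # hypothetical triple-redundant sensor module
processor = series(0.999, 0.997)        # hypothetical CPU plus memory in series
bus = 0.9995
system = series(sensor, processor, bus)
print(f"System reliability: {system:.6f}")
```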
Zhao, Leihong; Qu, Xiaolu; Zhang, Meijia; Lin, Hongjun; Zhou, Xiaoling; Liao, Bao-Qiang; Mei, Rongwu; Hong, Huachang
2016-08-01
The failure of membrane hydrophobicity to predict membrane fouling calls for a more reliable indicator. In this study, the influence of membrane acid-base (AB) properties on interfacial interactions in two different interaction scenarios in a submerged membrane bioreactor (MBR) was studied using thermodynamic approaches. It was found that both the polyvinylidene fluoride (PVDF) membrane and the foulant samples in the MBR had a relatively high electron donor (γ(-)) component and a low electron acceptor (γ(+)) component. For both interaction scenarios, the AB interaction was the major component of the total interaction. The results showed that the total interaction monotonically decreased with membrane γ(-), while it was marginally affected by membrane γ(+), suggesting that γ(-) could act as a reliable indicator for membrane fouling prediction. This study suggests that membrane modification for fouling mitigation should be oriented toward improving the membrane surface γ(-) component rather than hydrophilicity. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.
2014-04-01
Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In contribution to these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology, previously developed by the authors, for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead either to an extension of the RFL (with a consequent economic gain and without compromising the minimum safety requirements) or to an increase in safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
Reliability of heart rate measures during walking before and after running maximal efforts.
Boullosa, D A; Barros, E S; del Rosso, S; Nakamura, F Y; Leicht, A S
2014-11-01
Previous studies on HR recovery (HRR) measures have utilized the supine and the seated postures. However, the most common recovery mode in sport and clinical settings after running exercise is active walking. The aim of the current study was to examine the reliability of HR measures during walking (4 km · h(-1)) before and following a maximal test. Twelve endurance athletes performed an incremental running test on 2 days separated by 48 h. Absolute [coefficient of variation (CV), %] and relative [intraclass correlation coefficient (ICC)] reliability of time domain and non-linear measures of HR variability (HRV) from 3 min recordings, and HRR parameters over 5 min, were assessed. Moderate to very high reliability was identified for most HRV indices, with short-term components of time domain and non-linear HRV measures demonstrating the greatest reliability before (CV: 12-22%; ICC: 0.73-0.92) and after exercise (CV: 14-32%; ICC: 0.78-0.91). Most HRR indices and parameters of HRR kinetics demonstrated high to very high reliability, with HR values at a given point and the asymptotic value of HR being the most reliable (CV: 2.5-10.6%; ICC: 0.81-0.97). These findings demonstrate these measures as reliable tools for the assessment of autonomic control of HR during walking before and after maximal efforts. © Georg Thieme Verlag KG Stuttgart · New York.
Enhanced ultrasonic inspection of steel bridge pin components.
DOT National Transportation Integrated Search
1998-01-01
This report describes the development of a technique for obtaining a reliable assessment of the condition of steel bridge pins already determined by ultrasound to contain imperfections. The details of a technique for performing high-definition ultras...
NASA Astrophysics Data System (ADS)
Weick, Clément; De Betelu, Romain; Tauzin, Aurélie; Baudrit, Mathieu
2017-09-01
Concentrator photovoltaic (CPV) modules are composed of many components and interfaces, which require complex assembling processes, resulting in fabrication complexity and often a lack of reliability. The present work addresses these issues by proposing an innovative low concentration photovoltaic (LCPV) concept. In particular, the purpose here is to develop a module with a high level of integration by lowering the number of components and interfaces. The mirror used as the concentrator optic is multifunctional, as it combines thermal, structural and optical functions. Moreover, the proposed design aims to demonstrate the applicability of reliable flat PV processes (such as lamination and cell interconnection) to the manufacturing of this LCPV module. The paper describes both indoor and outdoor characterization of a new prototype. Performance, assessed by tracing I-V curves, is discussed with regard to the distribution of losses within the optical chain.
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system while simultaneously minimizing the total cost and maximizing overall system reliability over the planning horizon. Because of the complex, combinatorial and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with the problem of developing recommended maintenance plans for complex systems of components.
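To suggest what scoring one candidate schedule against the two objectives might look like, here is a minimal sketch; the Weibull parameters, action costs, age-reduction factor, and series-system assumption are illustrative guesses rather than the authors' model, and the metaheuristic search itself (genetic algorithm or simulated annealing) is omitted.

```python
import math

WEIBULL = {"pump": (2.0, 10.0), "motor": (1.8, 14.0)}   # assumed (shape, scale) per component
COSTS = {"maintain": 2.0, "replace": 8.0, "do_nothing": 0.0}
ALPHA = 0.6   # assumed fraction of effective age removed by maintenance

def period_reliability(component, age):
    # Conditional probability of surviving one more period given survival to `age`.
    beta, eta = WEIBULL[component]
    return math.exp(-(((age + 1) / eta) ** beta) + ((age / eta) ** beta))

def evaluate(plan):
    """plan[t][component] in {'do_nothing', 'maintain', 'replace'} for each period t."""
    ages = {c: 0.0 for c in WEIBULL}
    cost, reliability = 0.0, 1.0
    for actions in plan:
        for c, action in actions.items():
            cost += COSTS[action]
            if action == "replace":
                ages[c] = 0.0                    # good-as-new
            elif action == "maintain":
                ages[c] *= (1.0 - ALPHA)         # partial age reduction
            reliability *= period_reliability(c, ages[c])   # series system
            ages[c] += 1.0
    return cost, reliability

plan = [{"pump": "do_nothing", "motor": "do_nothing"},
        {"pump": "maintain", "motor": "do_nothing"},
        {"pump": "do_nothing", "motor": "replace"}]
print(evaluate(plan))   # (total cost, overall reliability) for this candidate schedule
```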
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
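A numerical sketch of the PME step is given below: it recovers a quartic-exponential maximum-entropy PDF of the standardized state function from its first four moments and integrates its lower tail for the failure probability. The target moments and reliability index are hypothetical, and the paper's analytical treatment of the specific components is not reproduced.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import least_squares

# Hypothetical raw moments of the standardized state function g (so the third and
# fourth entries are its skewness and kurtosis) and an assumed reliability index.
target = np.array([1.0, 0.0, 1.0, -0.3, 3.2])
beta = 3.0

z = np.linspace(-8.0, 8.0, 4001)

def pdf(lmbda):
    # Maximum-entropy form: p(z) = exp(l0 + l1*z + l2*z^2 + l3*z^3 + l4*z^4)
    expo = np.polyval(lmbda[::-1], z)
    return np.exp(np.clip(expo, -700.0, 50.0))   # clip to keep the solver numerically stable

def residuals(lmbda):
    p = pdf(lmbda)
    return np.array([trapezoid(p * z**k, z) for k in range(5)]) - target

# Start from the standard normal (l0 = -ln sqrt(2*pi), l2 = -1/2).
lmb0 = np.array([-0.5 * np.log(2 * np.pi), 0.0, -0.5, 0.0, 0.0])
sol = least_squares(residuals, lmb0)

p = pdf(sol.x)
mask = z <= -beta
pf = trapezoid(p[mask], z[mask])      # failure probability P(g < 0)
print(f"Estimated failure probability: {pf:.2e}, reliability: {1 - pf:.6f}")
```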
An automatic chip structure optical inspection system for electronic components
NASA Astrophysics Data System (ADS)
Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe
2018-01-01
An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system, and a high-performance computer. An auto-focusing technique is presented to solve the problem that the chip surface does not lie in a single focal plane under the high magnification of the microscope. A panoramic high-resolution image-stitching algorithm is adopted to deal with the contradiction between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall appropriate parameters to ensure the consistency of chip images of electronic components of the same model. We use image change detection technology to realize the inspection of chip images of electronic components. The system can achieve high-resolution imaging of chips of electronic components of various sizes, clear imaging of chip surfaces lying at different heights, and standardized imaging for components of the same model, and it can recognize chip defects.
Cheong, A T; Tong, S F; Sazlina, S G
2015-01-01
The Hill-Bone Compliance to High Blood Pressure Therapy Scale (HBTS) is one of the useful scales in primary care settings. It has been tested in America, Africa and Turkey with variable validity and reliability. The aim of this paper was to determine the validity and reliability of the Malay version of the HBTS (HBTS-M) for the Malaysian population. The HBTS comprises three subscales assessing compliance with medication, appointments and salt intake. The content validity of the HBTS for the local population was agreed through consensus of an expert panel. The 14 items used in the HBTS were adapted to reflect local situations. The scale was translated into Malay and then back-translated into English. The translated version was piloted in 30 participants. This was followed by structural and predictive validity and internal consistency testing in 262 patients with hypertension who had been on antihypertensive agent(s) for at least 1 year in two primary healthcare clinics in Kuala Lumpur, Malaysia. Exploratory factor analyses and the correlation between the HBTS-M total score and blood pressure were performed. Cronbach's alpha was calculated accordingly. Factor analysis revealed a three-component structure represented by two components on medication adherence and one on salt intake adherence. The Kaiser-Meyer-Olkin statistic was 0.764. The variance explained by each factor was 23.6%, 10.4% and 9.8%, respectively. However, the internal consistency of each component was suboptimal, with Cronbach's alpha values of 0.64, 0.55 and 0.29, respectively. Although there were two components representing medication adherence, the theoretical concepts underlying each component could not be differentiated. In addition, there was no correlation between the HBTS-M total score and blood pressure. The HBTS-M did not conform to the structural and predictive validity of the original scale. Its reliability for assessing medication and salt intake adherence would most probably be suboptimal in the Malaysian primary care setting.
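Since the internal-consistency figures above are Cronbach's alpha values, a minimal sketch of that computation may help; the item scores below are fabricated for illustration and have no connection to the HBTS-M data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Hypothetical 5-point responses from 8 patients to a 4-item subscale.
subscale = np.array([
    [4, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 3, 4, 4],
    [3, 3, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(subscale):.2f}")
```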
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
Reliability analysis of single crystal NiAl turbine blades
NASA Technical Reports Server (NTRS)
Salem, Jonathan; Noebe, Ronald; Wheeler, Donald R.; Holland, Fred; Palko, Joseph; Duffy, Stephen; Wright, P. Kennard
1995-01-01
As part of a co-operative agreement with General Electric Aircraft Engines (GEAE), NASA LeRC is modifying and validating the Ceramic Analysis and Reliability Evaluation of Structures algorithm for use in design of components made of high strength NiAl based intermetallic materials. NiAl single crystal alloys are being actively investigated by GEAE as a replacement for Ni-based single crystal superalloys for use in high pressure turbine blades and vanes. The driving force for this research lies in the numerous property advantages offered by NiAl alloys over their superalloy counterparts. These include a reduction of density by as much as a third without significantly sacrificing strength, higher melting point, greater thermal conductivity, better oxidation resistance, and a better response to thermal barrier coatings. The current drawback to high strength NiAl single crystals is their limited ductility. Consequently, significant efforts including the work agreement with GEAE are underway to develop testing and design methodologies for these materials. The approach to validation and component analysis involves the following steps: determination of the statistical nature and source of fracture in a high strength, NiAl single crystal turbine blade material; measurement of the failure strength envelope of the material; coding of statistically based reliability models; verification of the code and model; and modeling of turbine blades and vanes for rig testing.
Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System
NASA Astrophysics Data System (ADS)
Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei
2018-01-01
The propulsion system is one of the important subsystems of a satellite, and its performance directly affects the service life, attitude control and reliability of the satellite. The paper analyzes the sealing principle of the bolted joint ball head component of the satellite propulsion system and discusses the compatibility of anhydrous hydrazine with the bolted joint ball head component, the influence of the ground environment on the sealing performance of bolted joint ball heads, and material failure caused by the environment. The analysis shows that the sealing reliability of the bolted joint ball head component is good and that the influence of the above three aspects on the sealing of the bolted joint ball head component can be ignored.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assessing the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply on expert judgment.
The Validity of Examination Essays in Higher Education: Issues and Responses
ERIC Educational Resources Information Center
Brown, Gavin T. L.
2010-01-01
The use of timed, essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring is highly problematic and it appears that essay examination grades are highly dependent on language and organisational components of writing. Computer-assisted scoring of essays makes use of language…
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. Comparison with Monte Carlo simulation shows that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in finite-element-based engineering practice.
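The moment-based reliability and sensitivity idea described above can be illustrated with a minimal mean-value first-order second-moment (FOSM) sketch. The limit state, distribution parameters, and sensitivity outputs below are illustrative placeholders, not the authors' CMC response-surface or Edgeworth-series formulation.

```python
import numpy as np
from scipy.stats import norm

# Placeholder limit state: g(x) > 0 means "safe". A real RSBDO run would wrap a
# finite-element response of the CMC component instead of this toy function.
def g(x):
    strength, load = x
    return strength - load

mu    = np.array([400.0, 250.0])   # means of the random inputs (illustrative, MPa)
sigma = np.array([ 40.0,  30.0])   # standard deviations (illustrative, MPa)

# Mean-value FOSM: linearize g about the mean point with central differences.
eps  = 1e-6
grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2.0 * eps)
                 for e in np.eye(len(mu))])

mu_g    = g(mu)                                   # mean of the limit state
sigma_g = np.sqrt(np.sum((grad * sigma) ** 2))    # std. dev. (independent inputs)
beta    = mu_g / sigma_g                          # reliability index

# Reliability sensitivities: change in beta per unit change in each input's
# mean and standard deviation (valid for the linearized limit state).
dbeta_dmu    = grad / sigma_g
dbeta_dsigma = -beta * (grad ** 2) * sigma / sigma_g ** 2

print("beta =", beta, " Pf ~", norm.cdf(-beta))
print("d(beta)/d(mean)   =", dbeta_dmu)
print("d(beta)/d(stddev) =", dbeta_dsigma)
```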
Advanced Rankine and Brayton cycle power systems - Materials needs and opportunities
NASA Technical Reports Server (NTRS)
Grisaffe, S. J.; Guentert, D. C.
1974-01-01
Conceptual advanced potassium Rankine and closed Brayton power conversion cycles offer the potential for improved efficiency over steam systems through higher operating temperatures. However, for utility service of at least 100,000 hours, materials technology advances will be needed for such high temperature systems. Improved alloys and surface protection must be developed and demonstrated to resist coal combustion gases as well as potassium corrosion or helium surface degradation at high temperatures. Extensions in fabrication technology are necessary to produce large components of high temperature alloys. Long-time property data must be obtained under environments of interest to assure high component reliability.
Advanced Rankine and Brayton cycle power systems: Materials needs and opportunities
NASA Technical Reports Server (NTRS)
Grisaffe, S. J.; Guentert, D. C.
1974-01-01
Conceptual advanced potassium Rankine and closed Brayton power conversion cycles offer the potential for improved efficiency over steam systems through higher operating temperatures. However, for utility service of at least 100,000 hours, materials technology advances will be needed for such high temperature systems. Improved alloys and surface protection must be developed and demonstrated to resist coal combustion gases as well as potassium corrosion or helium surface degradation at high temperatures. Extensions in fabrication technology are necessary to produce large components of high temperature alloys. Long time property data must be obtained under environments of interest to assure high component reliability.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
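As a rough illustration of the series-system roll-up described above (not SRFYDO's actual covariate model or priors), the sketch below combines per-component Beta posteriors from hypothetical pass/fail test data into a system reliability estimate with uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pass/fail test data for three series components:
# (successes, trials). A Jeffreys Beta(0.5, 0.5) prior stands in for
# engineering judgment.
tests = {"component_A": (48, 50), "component_B": (29, 30), "component_C": (95, 100)}

n_draws = 100_000
system_draws = np.ones(n_draws)
for name, (s, n) in tests.items():
    post = rng.beta(0.5 + s, 0.5 + (n - s), n_draws)  # posterior reliability draws
    system_draws *= post                              # series system: product rule

print("posterior mean system reliability:", system_draws.mean())
print("90% credible interval:", np.percentile(system_draws, [5, 95]))
```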
Engineering Design Handbook. Development Guide for Reliability. Part Two. Design for Reliability
1976-01-01
Component failure rates, however, have been recorded by many sources as a function of use and environment. Some of these sources are listed in Refs. 13-17. ... other systems capable of creating an explosive reaction. The second category is fairly obvious and includes many variations on methods for providing ... about them. 4. Ability to detect signals (including patterns) in high-noise environments. 5. Ability to store large amounts of information for long ...
NASA Technical Reports Server (NTRS)
Schwarze, Gene E.; Niedra, Janis M.; Frasca, Albert J.; Wieserman, William R.
1993-01-01
The effects of nuclear radiation and high temperature environments must be fully known and understood for the electronic components and materials used in both the Power Conditioning and Control subsystem and the reactor Instrumentation and Control subsystem of future high capacity nuclear space power systems. This knowledge is required by the designer of these subsystems in order to develop highly reliable, long-life power systems for future NASA missions. A review and summary of the experimental results obtained for the electronic components and materials investigated under the power management element of the Civilian Space Technology Initiative (CSTI) high capacity power project are presented: (1) neutron, gamma ray, and temperature effects on power semiconductor switches; (2) temperature and frequency effects on soft magnetic materials; and (3) temperature effects on rare earth permanent magnets.
Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems
NASA Technical Reports Server (NTRS)
White, Mark; MacNeal, Kristen; Cooper, Mark
2012-01-01
NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but it increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. Higher performance, and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.
Development and evaluation of the nurse quality of communication with patient questionnaire.
Vuković, Mira; Gvozdenović, Branislav S; Stamatović-Gajić, Branka; Ilić, Miodrag; Gajić, Tomislav
2010-01-01
The nurse/patient relationship, as a complex interrelation or interaction between the patient and the nurse, has been the subject of a number of studies over the past ten years. Nurse/patient communication is a distinct entity, usually observed within the framework of the wider nurse/patient relationship. We therefore wanted to develop a standardized questionnaire that could reliably measure the quality of communication between nurse and patient and be used by nurses. The main goal of this study was to develop and evaluate the construct validity of the Nurse Quality of Communication with Patient Questionnaire (NQCPQ), as well as to evaluate its reliability. A further goal was to establish a measure of inter-rater reliability, using two repeated measurements of the NQCPQ items and scores on the same observed units by two assessors. The initial NQCPQ, consisting of 25 items, was completed by two groups of nurses. Each nurse was questioned during morning and afternoon shifts in order to evaluate their communication with hospitalized patients, using marks from 1 to 6. Construct validity was evaluated with principal component analysis, while reliability was assessed using the intraclass correlation coefficient and Cronbach's alpha coefficient. Inter-rater reliability was evaluated with the Pearson correlation coefficient. In a group of 118 patients, a single component comprising 6 questionnaire items explained 86% of the variance in the investigated phenomenon (nurse/patient communication). Inter-item consistency (alpha) for this component was 0.96. Pearson correlation coefficients were highly significant, about 0.7 per item, and the correlation coefficient for total scores on repeated measurements was 0.84. The NQCPQ is a 6-item instrument with high construct validity. It can be used to measure the quality of nurse/patient communication in a simple, fast and reliable way. It could contribute to more adequate research into and definition of this problem, and as such could be used in studies of the interaction of psychometric, clinical, biochemical, socio-cultural, demographic and other parameters.
Hologram interferometry in automotive component vibration testing
NASA Astrophysics Data System (ADS)
Brown, Gordon M.; Forbes, Jamie W.; Marchi, Mitchell M.; Wales, Raymond R.
1993-02-01
An ever increasing variety of automotive component vibration testing is being pursued at Ford Motor Company, U.S.A. The driving force for use of hologram interferometry in these tests is the continuing need to design component structures to meet more stringent functional performance criteria. Parameters such as noise and vibration, sound quality, and reliability must be optimized for the lightest weight component possible. Continually increasing customer expectations and regulatory pressures on fuel economy and safety mandate that vehicles be built from highly optimized components. This paper includes applications of holographic interferometry for powertrain support structure tuning, body panel noise reduction, wiper system noise and vibration path analysis, and other vehicle component studies.
Casting of weldable graphite/magnesium metal matrix composites with built-in metallic inserts
NASA Technical Reports Server (NTRS)
Lee, Jonathan A.; Kashalikar, Uday; Majkowski, Patricia
1994-01-01
Technology innovations directed at the advanced development of a potentially low cost and weldable graphite/magnesium metal matrix composites (MMC) through near net shape pressure casting are described. These MMC components uniquely have built-in metallic inserts to provide an innovative approach for joining or connecting other MMC components through conventional joining techniques such as welding, brazing, mechanical fasteners, etc. Moreover, the metallic inserts trapped within the MMC components can be made to transfer the imposed load efficiently to the continuous graphite fiber reinforcement thus producing stronger, stiffer, and more reliable MMC components. The use of low pressure near net shape casting is economical compared to other MMC fabrication processes. These castable and potentially weldable MMC components can provide great payoffs in terms of high strength, high stiffness, low thermal expansion, lightweight, and easily joinable MMC components for several future NASA space structural, industrial, and commercial applications.
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
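A hedged sketch of the parameter-screening idea in the framework above: ordinary canonical correlation analysis from scikit-learn stands in for the study's Approximate Canonical Correlation Analysis, and the input/response data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)

# Synthetic stand-in data: rows are sampled response analyses of a component,
# X holds local input parameters, Y holds the vector of failure responses.
n, p = 200, 6
X = rng.normal(size=(n, p))
Y = np.column_stack([
    2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n),
    1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=n),
    0.7 * X[:, 0] + rng.normal(scale=0.3, size=n),
])

def leading_canonical_corr(X, Y):
    """Leading canonical correlation between the inputs and the response vector."""
    cca = CCA(n_components=1).fit(X, Y)
    u, v = cca.transform(X, Y)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# Rank each parameter by the drop in the leading canonical correlation when
# that parameter is removed: a crude multi-response screening score.
base = leading_canonical_corr(X, Y)
scores = {j: base - leading_canonical_corr(np.delete(X, j, axis=1), Y)
          for j in range(p)}
for j, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"parameter {j}: importance {s:.3f}")
```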
Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan
2013-06-01
There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reheis, N.; Zabernig, A.; Ploechl, L.
1994-12-31
Actively cooled in-vessel components like divertors or limiters require high quality and reliability to ensure safe operation during long term use. Such components are subjected to very severe thermal and mechanical cyclic loads and high power densities. Key requirements for the materials in question are e.g. high melting point and thermal conductivity and low atomic mass number. Since no single material can simultaneously meet all of these requirements, the selection of materials to be combined in composite components, as well as of manufacturing and non-destructive inspection (NDI) methods, is a particularly challenging task. Armour materials like graphite, intended to face the plasma and help to maintain its desired properties, are bonded to metallic substrates like copper, molybdenum or stainless steel providing cooling and mechanical support. Several techniques such as brazing and active metal casting have been developed and successfully applied for joining materials with different thermophysical properties, pursuing the objective of sufficient heat dissipation from the hot, plasma facing surface to the coolant. NDI methods are an integral part of the manufacturing schedule of these components, starting in the design phase and ending in the final inspection. They apply to all divertor types (monobloc and flat-tile concepts). Particular focus is put on the feasibility of detecting small flaws and defects in complex interfaces and on the limits of these techniques. Special test pieces with defined defects acting as standards were inspected. Accompanying metallographic investigations were carried out to compare actual defects with results recorded during NDI.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
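A minimal stand-in for the kind of Bayesian updating DATMAN automates, assuming a Gamma prior on a constant failure rate and exponential times to failure; DATMAN's own distribution menu, fitting routines, and interface are not reproduced here, and all numbers are illustrative.

```python
import math
from scipy.stats import gamma

# Gamma prior on the failure rate (per hour); shape a0 and rate b0 are illustrative.
a0, b0 = 2.0, 20_000.0
# Hypothetical new evidence: n_fail failures over T cumulative operating hours.
n_fail, T = 3, 45_000.0

# Conjugate update for exponential/Poisson failure data.
a_post, b_post = a0 + n_fail, b0 + T
posterior = gamma(a_post, scale=1.0 / b_post)

rate = posterior.mean()
t_mission = 1_000.0
print("posterior mean failure rate:", rate, "per hour")
print("mission reliability at posterior mean rate:", math.exp(-rate * t_mission))
```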
NASA Technical Reports Server (NTRS)
Briand, Lionel C.; Basili, Victor R.; Hetmanski, Christopher J.
1993-01-01
Applying equal testing and verification effort to all parts of a software system is not very efficient, especially when resources are limited and scheduling is tight. Therefore, one needs to be able to differentiate low/high fault frequency components so that testing/verification effort can be concentrated where needed. Such a strategy is expected to detect more faults and thus improve the resulting reliability of the overall system. This paper presents the Optimized Set Reduction approach for constructing such models, intended to fulfill specific software engineering needs. Our approach to classification is to measure the software system and build multivariate stochastic models for predicting high risk system components. We present experimental results obtained by classifying Ada components into two classes: is or is not likely to generate faults during system and acceptance test. Also, we evaluate the accuracy of the model and the insights it provides into the error making process.
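The classification idea can be sketched with a generic logistic-regression classifier over synthetic structural metrics; this is a stand-in illustration only, not the paper's Optimized Set Reduction procedure, and the metrics and labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Made-up structural metrics for 300 components (e.g. size, complexity,
# fan-out, change count), standardized for simplicity.
n = 300
metrics = rng.normal(size=(n, 4))

# Synthetic "fault-prone" labels loosely driven by complexity and churn.
logit = 1.2 * metrics[:, 1] + 0.8 * metrics[:, 3] - 0.5
faulty = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000)
print("cross-validated classification accuracy:",
      cross_val_score(model, metrics, faulty, cv=5).mean())
```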
Issues and Methods for Assessing COTS Reliability, Maintainability, and Availability
NASA Technical Reports Server (NTRS)
Schneidewind, Norman F.; Nikora, Allen P.
1998-01-01
Many vendors produce products that are not domain specific (e.g., network server) and have limited functionality (e.g., mobile phone). In contrast, many customers of COTS develop systems that are domain specific (e.g., target tracking system) and have great variability in functionality (e.g., corporate information system). This discussion takes the viewpoint of how the customer can ensure the quality of COTS components. In evaluating the benefits and costs of using COTS, we must consider the environment in which COTS will operate. Thus we must distinguish between using a non-mission critical application like a spreadsheet program to produce a budget and a mission critical application like military strategic and tactical operations. Whereas customers will tolerate an occasional bug in the former, zero tolerance is the rule in the latter. We emphasize the latter because this is the arena where there are major unresolved problems in the application of COTS. Furthermore, COTS components may be embedded in the larger customer system. We refer to these as embedded systems. These components must be reliable, maintainable, and available, and must be compatible with the larger system in order for the customer to benefit from the advertised advantages of lower development and maintenance costs. Interestingly, when the claims of COTS advantages are closely examined, one finds that to a great extent these COTS components consist of hardware and office products, not mission critical software [1]. Obviously, COTS components are different from custom components with respect to one or more of the following attributes: source, development paradigm, safety, reliability, maintainability, availability, security, and other attributes. However, the important question is whether they should be treated differently when deciding to deploy them for operational use; we suggest the answer is no. We use reliability as an example to justify our answer. In order to demonstrate its reliability, a COTS component must pass the same reliability evaluations as the custom components; otherwise the COTS components will be the weakest link in the chain of components and will be the determinant of software system reliability. The challenge is that there will be less information available for evaluating COTS components than for custom components, but this does not mean we should despair and do nothing. Actually, there is a lot we can do even in the absence of documentation on COTS components because the customer will have information about how COTS components are to be used in the larger system. To illustrate our approach, we will consider the reliability, maintainability, and availability (RMA) of COTS components as used in larger systems. Finally, COTS suppliers might consider increasing visibility into their products to assist customers in determining the components' fitness for use in a particular application. We offer ideas of information that would be useful to customers, and what vendors might do to provide it.
Reliability approach to rotating-component design. [fatigue life and stress concentration
NASA Technical Reports Server (NTRS)
Kececioglu, D. B.; Lalli, V. R.
1975-01-01
A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.
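The stress-strength relation at the core of a design-by-reliability approach reduces, for normally distributed stress and strength, to a one-line calculation; the numbers below are illustrative, not the paper's shaft test data.

```python
from math import sqrt
from scipy.stats import norm

# Reliability as the probability that strength exceeds stress, assuming both
# are normally distributed and independent (illustrative values, MPa).
mu_S, sd_S = 620.0, 45.0   # strength mean / std. dev.
mu_s, sd_s = 480.0, 60.0   # stress mean / std. dev.

z = (mu_S - mu_s) / sqrt(sd_S**2 + sd_s**2)
print("reliability R = P(strength > stress) =", norm.cdf(z))
```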
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
Services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. To examine the feasibility of our algorithm, we have evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
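A minimal sketch of the Universal Generating Function idea for a two-service series composition; the performance states, probabilities, and demand level are invented for illustration, and the GA optimization layer described in the abstract is omitted.

```python
from collections import defaultdict
from itertools import product

# Each UGF maps a performance level (e.g. throughput) to its probability.
u1 = {0: 0.05, 50: 0.15, 100: 0.80}   # service A states (illustrative)
u2 = {0: 0.02, 80: 0.18, 100: 0.80}   # service B states (illustrative)

def compose_series(ua, ub):
    """Series composition: overall performance is the minimum of the parts."""
    out = defaultdict(float)
    for (ga, pa), (gb, pb) in product(ua.items(), ub.items()):
        out[min(ga, gb)] += pa * pb
    return dict(out)

u_sys = compose_series(u1, u2)
demand = 80
reliability = sum(p for g, p in u_sys.items() if g >= demand)
print("system UGF:", u_sys)
print(f"P(performance >= {demand}) =", reliability)
```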
Parametric Mass Reliability Study
NASA Technical Reports Server (NTRS)
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we have considered the problem of constrained redundancy allocation for a series system with interval-valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval-valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed values of component reliability have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of the developed GA with respect to the different GA parameters.
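A brute-force sketch of the interval-valued redundancy allocation problem for a small series system; exhaustive enumeration with a hard cost budget stands in for the paper's penalty-function GA, and all reliability intervals and costs are illustrative.

```python
from itertools import product

# Three-stage series system; component reliabilities known only as intervals.
r_lo = [0.80, 0.85, 0.90]     # lower bounds of component reliability (illustrative)
r_hi = [0.90, 0.92, 0.97]     # upper bounds (illustrative)
cost = [3.0, 4.0, 2.0]        # cost per redundant unit (illustrative)
budget = 25.0

def system_interval(n):
    """System reliability interval for allocation n (parallel units per stage)."""
    lo, hi = 1.0, 1.0
    for ni, rl, rh in zip(n, r_lo, r_hi):
        lo *= 1.0 - (1.0 - rl) ** ni
        hi *= 1.0 - (1.0 - rh) ** ni
    return lo, hi

best = None
for n in product(range(1, 5), repeat=3):                  # 1..4 units per stage
    if sum(ni * ci for ni, ci in zip(n, cost)) > budget:  # constraint check
        continue                                          # (the GA uses a penalty instead)
    lo, hi = system_interval(n)
    if best is None or lo > best[1]:                      # maximize worst-case reliability
        best = (n, lo, hi)

print("best allocation:", best[0], "system reliability interval:", (best[1], best[2]))
```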
Applicability and Limitations of Reliability Allocation Methods
NASA Technical Reports Server (NTRS)
Cruz, Jose A.
2016-01-01
The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, often beginning at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors and optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
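One of the classical weighting-factor schemes, allocation in proportion to predicted failure rates, can be sketched as follows; the system goal and component failure rates are illustrative values, not data from the report.

```python
import math

# Failure-rate weighting for a series system: each component's allocated
# reliability is the system goal raised to its share of the total failure rate.
R_system_goal = 0.999                                        # required system reliability
lam = {"power": 4e-6, "avionics": 9e-6, "actuation": 7e-6}   # failures/hour (illustrative)

total = sum(lam.values())
allocation = {name: R_system_goal ** (l / total) for name, l in lam.items()}

for name, r in allocation.items():
    print(f"{name}: allocated reliability {r:.6f}")
# The product of the allocations recovers the system goal for a series system.
print("check (product):", math.prod(allocation.values()))
```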
Highly Survivable Avionics Systems for Long-Term Deep Space Exploration
NASA Technical Reports Server (NTRS)
Alkalai, L.; Chau, S.; Tai, A. T.
2001-01-01
The design of highly survivable avionics systems for long-term (> 10 years) exploration of space is an essential technology for all current and future missions in the Outer Planets roadmap. Long-term exposure to extreme environmental conditions such as high radiation and low-temperatures make survivability in space a major challenge. Moreover, current and future missions are increasingly using commercial technology such as deep sub-micron (0.25 microns) fabrication processes with specialized circuit designs, commercial interfaces, processors, memory, and other commercial off the shelf components that were not designed for long-term survivability in space. Therefore, the design of highly reliable, and available systems for the exploration of Europa, Pluto and other destinations in deep-space require a comprehensive and fresh approach to this problem. This paper summarizes work in progress in three different areas: a framework for the design of highly reliable and highly available space avionics systems, distributed reliable computing architecture, and Guarded Software Upgrading (GSU) techniques for software upgrading during long-term missions. Additional information is contained in the original extended abstract.
Fan, Wei; Li, Rong; Li, Sifan; Ping, Wenli; Li, Shujun; Naumova, Alexandra; Peelen, Tamara; Yuan, Zheng; Zhang, Dabing
2016-01-01
Reliable methods are needed to detect the presence of tobacco components in tobacco products in order to control smuggling, support tariff and excise classification in the tobacco industry, and combat illegal tobacco trade. In this study, two sensitive and specific DNA-based methods, a quantitative real-time PCR (qPCR) assay and a loop-mediated isothermal amplification (LAMP) assay, were developed for the reliable and efficient detection of the presence of tobacco (Nicotiana tabacum) in various tobacco samples and commodities. Both assays targeted the same sequence of uridine 5′-monophosphate synthase (UMPS), and their specificities and sensitivities were determined with various plant materials. Both the qPCR and LAMP methods were reliable and accurate in the rapid detection of tobacco components in various practical samples, including customs samples, reconstituted tobacco samples, and locally purchased cigarettes, showing high potential for application in tobacco identification, particularly in special cases where the morphology or chemical composition of tobacco has been disrupted. Therefore, combining both methods would facilitate not only the control of tobacco smuggling but also tariff and excise classification. PMID:27635142
Kordi Yoosefinejad, Amin; Motealleh, Alireza; Babakhani, Mohammad
2017-05-01
The Functional index of hand osteoarthritis (FIHOA) is a commonly used patient-reported outcome questionnaire designed to measure function in patients with hand osteoarthritis. The objective of this study was to evaluate the validity and reliability of the Persian version of the FIHOA. The Persian-translated version of FIHOA was administered to 72 native Persian-speaking patients in Iran with hand osteoarthritis. Thirty-six of the patients completed the questionnaire on two occasions 1 week apart. The physical component of the SF-36 and a numerical rating scale were used to evaluate the construct validity of the Persian version of FIHOA. Internal consistency was high (Cronbach's alpha = 0.89). Test-retest reliability for the total score was excellent (weighted kappa = 0.89, 95% CI 0.79-0.94). A significant positive correlation between total FIHOA score and numerical rating scale (r = 0.70) and a significant negative correlation between total FIHOA score and the physical component scale of the SF-36 (r = -0.76) were observed. The Persian version of the FIHOA showed adequate validity and reliability to evaluate functional disability in Persian-speaking patients with hand osteoarthritis.
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2002-01-01
Brittle materials are being used, or considered, for a wide variety of high-tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The CARES/Life code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. For this presentation an overview of the CARES/Life program will be provided. Emphasis will be placed on describing the latest enhancements to the code for reliability analysis with time-varying loads and temperatures (fully transient reliability analysis). Also described are early efforts to investigate the validity of using Weibull statistics, the basis of the CARES/Life program, to characterize the strength of MEMS structures, as well as the version of CARES/Life for MEMS (CARES/MEMS) being prepared, which incorporates single crystal and edge flaw reliability analysis capability. It is hoped this talk will open a dialog for potential collaboration in the area of MEMS testing and life prediction.
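The Weibull basis of this kind of analysis can be illustrated with a two-parameter weakest-link sketch for a uniformly stressed volume; the modulus, characteristic strength, and stresses below are illustrative, and CARES/Life's actual multiaxial and transient treatments are far more elaborate.

```python
import math

# Two-parameter Weibull strength model (weakest-link, uniform stress).
m       = 10.0     # Weibull modulus (scatter in strength) -- illustrative
sigma_0 = 350.0    # characteristic strength (MPa) for the reference volume -- illustrative
V_ref   = 1.0      # reference volume (mm^3)

def probability_of_failure(stress_mpa, volume_mm3):
    """Probability of failure of a uniformly stressed brittle volume."""
    return 1.0 - math.exp(-(volume_mm3 / V_ref) * (stress_mpa / sigma_0) ** m)

for s in (200.0, 250.0, 300.0):
    print(f"stress {s} MPa -> Pf = {probability_of_failure(s, 2.0):.4f}")
```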
Alanis-Lobato, Gregorio
2015-01-01
High-throughput detection of protein interactions has had a major impact in our understanding of the intricate molecular machinery underlying the living cell, and has permitted the construction of very large protein interactomes. The protein networks that are currently available are incomplete and a significant percentage of their interactions are false positives. Fortunately, the structural properties observed in good quality social or technological networks are also present in biological systems. This has encouraged the development of tools, to improve the reliability of protein networks and predict new interactions based merely on the topological characteristics of their components. Since diseases are rarely caused by the malfunction of a single protein, having a more complete and reliable interactome is crucial in order to identify groups of inter-related proteins involved in disease etiology. These system components can then be targeted with minimal collateral damage. In this article, an important number of network mining tools is reviewed, together with resources from which reliable protein interactomes can be constructed. In addition to the review, a few representative examples of how molecular and clinical data can be integrated to deepen our understanding of pathogenesis are discussed.
Reliability of Laterality Effects in a Dichotic Listening Task with Words and Syllables
ERIC Educational Resources Information Center
Russell, Nancy L.; Voyer, Daniel
2004-01-01
Large and reliable laterality effects have been found using a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations…
NASA Astrophysics Data System (ADS)
Sokoloski, Martin M.
1988-09-01
The objective of the Communications Technology Program is to enable data transmission to and from low Earth orbit, geostationary orbit, and solar and deep space missions. This can be achieved by maintaining an effective, balanced effort in basic, applied, and demonstration prototype communications technology through work in theory, experimentation, and components. The program consists of three major research and development discipline areas: microwave and millimeter wave tube components; solid state monolithic integrated circuits; and free space laser communications components and devices. The research ranges from basic research in surface physics (to study the mechanisms of surface degradation under high temperature and voltage operating conditions, which impact cathode tube reliability and lifetime) to generic research on the dynamics of electron beams and circuits (for exploitation in various micro- and millimeter wave tube devices). Work is also performed on advanced III-V semiconductor materials and devices for use in monolithic integrated analog circuits (used in adaptive, programmable phased arrays for microwave antenna feeds and receivers), on the use of electromagnetic theory in antennas, and on technology necessary for eventual employment of lasers in free space communications for future low Earth, geostationary, and deep space missions requiring high data rates with corresponding directivity and reliability.
NASA Technical Reports Server (NTRS)
Sokoloski, Martin M.
1988-01-01
The objective of the Communications Technology Program is to enable data transmission to and from low Earth orbit, geostationary orbit, and solar and deep space missions. This can be achieved by maintaining an effective, balanced effort in basic, applied, and demonstration prototype communications technology through work in theory, experimentation, and components. The program consists of three major research and development discipline areas: microwave and millimeter wave tube components; solid state monolithic integrated circuits; and free space laser communications components and devices. The research ranges from basic research in surface physics (to study the mechanisms of surface degradation under high temperature and voltage operating conditions, which impact cathode tube reliability and lifetime) to generic research on the dynamics of electron beams and circuits (for exploitation in various micro- and millimeter wave tube devices). Work is also performed on advanced III-V semiconductor materials and devices for use in monolithic integrated analog circuits (used in adaptive, programmable phased arrays for microwave antenna feeds and receivers), on the use of electromagnetic theory in antennas, and on technology necessary for eventual employment of lasers in free space communications for future low Earth, geostationary, and deep space missions requiring high data rates with corresponding directivity and reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotlin, J.J.; Dunteman, N.R.; Scott, D.I.
1983-01-01
The current Electro-Motive Division 645 Series turbocharged engines are the Model FB and EC. The FB engine combines the highest thermal efficiency with the highest specific output of any EMD engine to date. The FB Series incorporates 16:1 compression ratio with a fire ring piston and an improved turbocharger design. Engine components included in the FB engine provide very high output levels with exceptional reliability. This paper also describes the performance of the lower rated Model EC engine series which feature high thermal efficiency and utilize many engine components well proven in service and basic to the Model FB Series.
AGT 100 automotive gas turbine system development
NASA Technical Reports Server (NTRS)
Helms, H. E. G.
1982-01-01
General Motors is developing an automotive gas turbine system that can be an alternate powerplant for future automobiles. Work sponsored by DOE and administered by NASA Lewis Research Center is emphasizing small component aerodynamics and high-temperature structural ceramics. Reliability requirements of the AGT 100 turbine system include chemical and structural stability of ceramic components in the gas turbine environment. The power train system, its configuration, and its development schedule are presented, and its performance is tested. The aerodynamic component development is reviewed with discussions of the compressor, turbine, regenerator, interturbine duct and scroll, and combustor. Ceramic component development is also reviewed, and production cost and required capital investment are taken into consideration.
Orbit Transfer Vehicle (OTV) advanced expander cycle engine point design study, volume 2
NASA Technical Reports Server (NTRS)
1981-01-01
The engine requirements are emphasized and include: high specific impulse within a restricted installed length constraint, long life, multiple starts, different thrust levels, and man-rated reliability. The engine operating characteristics and the major component analytical design are summarized.
Some Characteristics of One Type of High Reliability Organization.
ERIC Educational Resources Information Center
Roberts, Karlene H.
1990-01-01
Attempts to define organizational processes necessary to operate safely technologically complex organizations. Identifies nuclear powered aircraft carriers as examples of potentially hazardous organizations with histories of excellent operations. Discusses how carriers deal with components of risk and antecedents to catastrophe cited by Perrow and…
Electrochemical carbon dioxide concentrator subsystem development
NASA Technical Reports Server (NTRS)
Koszenski, E. P.; Heppner, D. B.; Bunnell, C. T.
1986-01-01
The most promising concept for a regenerative CO2 removal system for long duration manned space flight is the Electrochemical CO2 Concentrator (EDC), which allows the continuous, efficient removal of CO2 from the spacecraft cabin. This study addresses the advancement of the EDC system by generating subsystem and ancillary component reliability data through extensive endurance testing and by developing related hardware components such as electrochemical module lightweight end plates, electrochemical module improved isolation valves, an improved air/liquid heat exchanger and a triple redundant relative humidity sensor. Efforts included fabricating and testing the EDC with a Sabatier CO2 Reduction Reactor and generating the data necessary for integration of the EDC into a space station air revitalization system. The results verified the high level of performance, reliability and durability of the EDC subsystem and ancillary hardware, verified the high efficiency of the Sabatier CO2 Reduction Reactor, and increased the overall EDC technology engineering data base. The study concluded that the EDC system is approaching the hardware maturity levels required for space station deployment.
ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance.
Chen, Xinfeng; Li, Haohong
2017-01-01
Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) as an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made the ArControl a genuine, real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research.
ArControl: An Arduino-Based Comprehensive Behavioral Platform with Real-Time Performance
Chen, Xinfeng; Li, Haohong
2017-01-01
Studying animal behavior in the lab requires reliably delivering stimuli and monitoring responses. We constructed a comprehensive behavioral platform (ArControl: Arduino Control Platform) as an affordable, easy-to-use, high-performance solution combining software and hardware components. The hardware component consisted of an Arduino UNO board and a simple drive circuit. As for software, the ArControl provided a stand-alone and intuitive GUI (graphical user interface) application that did not require users to master scripts. The experiment data were automatically recorded with the built-in DAQ (data acquisition) function. The ArControl also allowed the behavioral schedule to be stored entirely in, and operated on, the Arduino chip. This made the ArControl a genuine, real-time system with high temporal resolution (<1 ms). We tested the ArControl based on strict performance measurements and two mouse behavioral experiments. The results showed that the ArControl was an adaptive and reliable system suitable for behavioral research. PMID:29321735
Approach to developing reliable space reactor power systems
NASA Technical Reports Server (NTRS)
Mondt, Jack F.; Shinbrot, Charles H.
1991-01-01
During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.
Arshad, Muzamil; Stanley, Jeffrey A.; Raz, Naftali
2016-01-01
In an age-heterogeneous sample of healthy adults, we examined test-retest reliability (with and without participant re-positioning) of two popular MRI methods of estimating myelin content: modeling the short spin-spin (T2) relaxation component of multi-echo imaging data and computing the ratio of T1-weighted and T2-weighted images (T1w/T2w). Taking the myelin water fraction (MWF) index of myelin content derived from the multi-component T2 relaxation data as a standard, we evaluate the concurrent and differential validity of T1w/T2w ratio images. The results revealed high reliability of MWF and the T1w/T2w ratio. However, we found significant correlations of low to moderate magnitude between MWF and the T1w/T2w ratio in only two of six examined regions of the cerebral white matter. Notably, significant correlations of the same or greater magnitude were observed for the T1w/T2w ratio and the intermediate T2 relaxation time constant, which is believed to reflect differences in the mobility of water between the intracellular and extracellular compartments. We conclude that although both methods are highly reliable and thus well-suited for longitudinal studies, the T1w/T2w ratio has low criterion validity and may not be an optimal index of subcortical myelin content. PMID:28009069
NASA Technical Reports Server (NTRS)
Coleman, Anthony S.; Hansen, Irving G.
1994-01-01
NASA is pursuing a program in Advanced Subsonic Transport (AST) to develop the technology for a highly reliable Fly-By-Light/Power-By-Wire aircraft. One of the primary objectives of the program is to develop the technology base for confident application of integrated PBW components and systems to transport aircraft to improve operating reliability and efficiency. Technology will be developed so that the present hydraulic and pneumatic systems of the aircraft can be systematically eliminated and replaced by electrical systems. These motor-driven actuators would move the aircraft wing surfaces as well as the rudder to provide steering controls for the pilot. Existing aircraft electrical systems are not flight critical and are prone to failure due to Electromagnetic Interference (EMI) (1), ground faults and component failures. In order to successfully implement electromechanical flight control actuation, a Power Management and Distribution (PMAD) system must be designed having a reliability of 1 failure in 10^9 hours, EMI hardening, and a fault tolerance architecture to ensure uninterrupted power to all aircraft flight critical systems. The focus of this paper is to analyze, define, and describe technically challenging areas associated with the development of a Power-By-Wire aircraft and typical requirements to be established at the box level. The authors will attempt to propose areas of investigation, citing specific military standards and requirements that need to be revised to accommodate the 'More Electric Aircraft Systems'.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
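A generic sketch of how fault rate, recovery rate, and operating time enter such an architecture-level calculation: a small continuous-time Markov chain with an illustrative state structure (not the paper's specific architectures or algebraic formulas) is propagated with a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = all units good, 1 = one fault awaiting recovery,
#         2 = reconfigured (one unit removed), 3 = system failed (absorbing).
lam = 1e-4   # per-unit fault rate (1/hour) -- illustrative value
mu  = 3.6e3  # recovery rate (1/hour), i.e. ~1 s mean recovery -- illustrative
n   = 6      # active components in the initial configuration
t   = 10.0   # operating time (hours)

Q = np.zeros((4, 4))
Q[0, 1] = n * lam                 # first fault occurs
Q[1, 2] = mu                      # fault handled, system reconfigures
Q[1, 3] = (n - 1) * lam           # near-coincident second fault -> failure
Q[2, 3] = (n - 1) * lam           # coarse model of further degradation to failure
for i in range(4):
    Q[i, i] = -Q[i].sum()         # generator rows sum to zero

p0 = np.array([1.0, 0.0, 0.0, 0.0])
pt = expm(Q.T * t) @ p0           # state probabilities at time t
print("P(system failure by t) =", pt[3])
```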
Environmental education curriculum evaluation questionnaire: A reliability and validity study
NASA Astrophysics Data System (ADS)
Minner, Daphne Diane
The intention of this research project was to bridge the gap between social science research and application to the environmental domain through the development of a theoretically derived instrument designed to give educators a template by which to evaluate environmental education curricula. The theoretical base for instrument development was provided by several developmental theories such as Piaget's theory of cognitive development, Developmental Systems Theory, Life-span Perspective, as well as curriculum research within the area of environmental education. This theoretical base fueled the generation of a list of components which were then translated into a questionnaire with specific questions relevant to the environmental education domain. The specific research question for this project is: Can a valid assessment instrument based largely on human development and education theory be developed that reliably discriminates high, moderate, and low quality in environmental education curricula? The types of analyses conducted to answer this question were interrater reliability (percent agreement, Cohen's Kappa coefficient, Pearson's Product-Moment correlation coefficient), test-retest reliability (percent agreement, correlation), and criterion-related validity (correlation). Face validity and content validity were also assessed through thorough reviews. Overall results indicate that 29% of the questions on the questionnaire demonstrated a high level of interrater reliability and 43% of the questions demonstrated a moderate level of interrater reliability. Seventy-one percent of the questions demonstrated high test-retest reliability and 5% a moderate level. Fifty-five percent of the questions on the questionnaire were reliable (high or moderate) both across time and raters. Only eight questions (8%) did not show either interrater or test-retest reliability. The global overall rating of high, medium, or low quality was reliable across both coders and time, indicating that the questionnaire can discriminate differences in quality of environmental education curricula. Of the 35 curricula evaluated, 6 were high quality, 14 were medium quality and 15 were low quality. The criterion-related validity of the instrument cannot at present be established due to the lack of comparable measures or a concretely usable set of multidisciplinary standards. Face and content validity were sufficiently demonstrated.
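The interrater statistics named above can be computed directly; the sketch below uses hypothetical ratings from two coders and standard SciPy/scikit-learn routines, not the study's actual data.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings from two coders on the same 12 questionnaire items
# (1 = criterion absent, 2 = partially present, 3 = fully present).
rater_a = np.array([3, 2, 2, 1, 3, 3, 2, 1, 2, 3, 1, 2])
rater_b = np.array([3, 2, 1, 1, 3, 2, 2, 1, 2, 3, 1, 3])

agreement = np.mean(rater_a == rater_b)            # percent agreement
kappa = cohen_kappa_score(rater_a, rater_b)        # chance-corrected agreement
r, _ = pearsonr(rater_a, rater_b)                  # product-moment correlation

print(f"percent agreement: {agreement:.2f}, kappa: {kappa:.2f}, r: {r:.2f}")
```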
The Liebowitz Social Anxiety Scale for Children and Adolescents.
Olivares, José; Sánchez-García, Raquel; López-Pina, José Antonio
2009-08-01
The purpose of this study was to analyze the component structure and reliability of the Liebowitz Social Anxiety Scale for Children and Adolescents, self-report version (LSAS-CA-SR), in a Spanish community population. The sample was made up of 422 students from elementary and high schools, aged between 10 and 17 years. Exploratory factor analysis isolated one component for the Anxiety subscale and one component for the Avoidance subscale. Medium-strong associations were found between the total score and subscale scores. LSAS-CA-SR scores had stronger associations with instruments of social anxiety. Internal consistency for the Fear subscale was .91, and for the Avoidance subscale, it was .89. Gender and age effects were assessed for LSAS-CA-SR scores. Effect sizes for age and gender and interaction of age and gender were very low on both the Fear and the Avoidance subscales. There were significant differences between female and male means on the Fear subscale. The findings suggest that the LSAS-CA-SR is reliable and valid.
Post-Test Analysis of a 10-Year Sodium Heat Pipe Life Test
NASA Technical Reports Server (NTRS)
Rosenfeld, John H.; Locci, Ivan E.; Sanzi, James L.; Hull, David R.; Geng, Steven M.
2011-01-01
High-temperature heat pipes are being evaluated for use in energy conversion applications such as fuel cells, gas turbine re-combustors, and Stirling cycle heat sources, and, with the resurgence of space nuclear power, as both reactor heat removal elements and radiator elements. Long operating life and reliable performance are critical requirements for these applications. Accordingly, long-term materials compatibility is being evaluated through the use of high-temperature life test heat pipes. Thermacore, Inc., has carried out a sodium heat pipe 10-year life test to establish long-term operating reliability. Sodium heat pipes have demonstrated favorable materials compatibility and heat transport characteristics at high operating temperatures in air over long time periods. A representative one-tenth segment Stirling Space Power Converter heat pipe with an Inconel 718 envelope and a stainless steel screen wick has operated for over 87,000 hr (10 years) at nearly 700 C. These life test results have demonstrated the potential for high-temperature heat pipes to serve as reliable energy conversion system components for power applications that require long operating lifetime with high reliability. Detailed design specifications, operating history, and post-test analysis of the heat pipe and sodium working fluid are described. Lessons learned and future life test plans are also discussed.
NASA Technical Reports Server (NTRS)
Rosenfeld, John H.; Minnerly, Kenneth G.; Dyson, Christopher M.
2012-01-01
High-temperature heat pipes are being evaluated for use in energy conversion applications such as fuel cells, gas turbine re-combustors, Stirling cycle heat sources; and with the resurgence of space nuclear power both as reactor heat removal elements and as radiator elements. Long operating life and reliable performance are critical requirements for these applications. Accordingly, long-term materials compatibility is being evaluated through the use of high-temperature life test heat pipes. Thermacore, Inc., has carried out a sodium heat pipe 10-year life test to establish long-term operating reliability. Sodium heat pipes have demonstrated favorable materials compatibility and heat transport characteristics at high operating temperatures in air over long time periods. A representative one-tenth segment Stirling Space Power Converter heat pipe with an Inconel 718 envelope and a stainless steel screen wick has operated for over 87,000 hr (10 yr) at nearly 700 C. These life test results have demonstrated the potential for high-temperature heat pipes to serve as reliable energy conversion system components for power applications that require long operating lifetime with high reliability. Detailed design specifications, operating history, and post-test analysis of the heat pipe and sodium working fluid are described.
A probabilistic based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderators for high temperature nuclear reactors are fabricated from graphite. During reactor operations, graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and for graphite that can be characterized as transversely isotropic are another set of design tools for the next generation of very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data. Aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and, moreover, was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method.
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, and a brief comparison is made to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
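To make the last computational step concrete, the sketch below estimates a failure probability by Monte Carlo simulation with importance sampling. The limit state is a deliberately simplified max-stress-style criterion with normally distributed strength parameters and invented numbers; it stands in for, and is not, the invariant-based failure function or the strength distributions developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# One fixed candidate stress state [MPa]; purely illustrative values.
applied_tension, applied_compression = 18.0, 45.0

def failed(st, sc):
    """Simplified max-stress-style criterion: failure if either random strength
    (tensile st or compressive sc) is exceeded by the applied stress."""
    return (st <= applied_tension) | (sc <= applied_compression)

# Assumed nominal strength distributions (not from the thesis).
mu_t, sd_t, mu_c, sd_c = 25.0, 2.5, 70.0, 7.0
# Importance densities shifted toward the failure region.
mu_t_is, mu_c_is = 19.0, 50.0

n = 200_000
st = rng.normal(mu_t_is, sd_t, n)
sc = rng.normal(mu_c_is, sd_c, n)
weight = (norm_pdf(st, mu_t, sd_t) * norm_pdf(sc, mu_c, sd_c)) / (
          norm_pdf(st, mu_t_is, sd_t) * norm_pdf(sc, mu_c_is, sd_c))

p_fail = np.mean(weight * failed(st, sc))      # importance-sampling estimate
print(f"P_f ~ {p_fail:.2e}, reliability ~ {1.0 - p_fail:.5f}")
```

Shifting the sampling densities toward the failure region is what lets a modest sample size resolve a small failure probability; the likelihood-ratio weights keep the estimate unbiased.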
First impressions: gait cues drive reliable trait judgements.
Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P
2012-09-01
Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.
Metal Injection Molding for Superalloy Jet Engine Components
2006-05-01
single vanes. The vanes are subject to high vibration stresses and thus require reliable fatigue strength. Therefore the quality of the material must meet... (MTU Aero Engines GmbH, Munich, Germany; RTO-MP-AVT-139, May 2006.)
Reliability Prediction for Combustors and Turbines. Volume I.
1977-06-01
comprised of many sophisticated components utilizing the latest in high-strength materials and technology. This is especially true in the turbine component...JT9D engine. This inspection technique makes use of a borescope probe to look into the engine hot section while the engine remains installed in the...engine can now be removed based on results observed with the borescope. This type of failure can be caused by any of the three primary turbine airfoil
Hayes, Corey J.; Bhandari, Naleen Raj; Kathe, Niranjan; Payakachat, Nalin
2017-01-01
Limited evidence exists on how non-cancer pain (NCP) affects an individual’s health-related quality of life (HRQoL). This study aimed to validate the Medical Outcomes Study Short Form-12 Version 2 (SF-12v2), a generic measure of HRQoL, in a NCP cohort using the Medical Expenditure Panel Survey Longitudinal Files. The SF Mental Component Summary (MCS12) and SF Physical Component Summary (PCS12) were tested for reliability (internal consistency and test-retest reliability) and validity (construct: convergent and discriminant; criterion: concurrent and predictive). A total of 15,716 patients with NCP were included in the final analysis. The MCS12 and PCS12 demonstrated high internal consistency (Cronbach’s alpha and Mosier’s alpha > 0.8), and moderate and high test-retest reliability, respectively (MCS12 intraclass correlation coefficient (ICC): 0.64; PCS12 ICC: 0.73). Both scales were significantly associated with a number of chronic conditions (p < 0.05). The PCS12 was strongly correlated with perceived health (r = 0.52) but weakly correlated with perceived mental health (r = 0.25). The MCS12 was moderately correlated with perceived mental health (r = 0.42) and perceived health (r = 0.33). Increasing PCS12 and MCS12 scores were significantly associated with lower odds of reporting future physical and cognitive limitations (PCS12: OR = 0.90 95%CI: 0.89–0.90, MCS12: OR = 0.94 95%CI: 0.93–0.94). In summary, the SF-12v2 is a reliable and valid measure of HRQoL for patients with NCP. PMID:28445438
NASA Astrophysics Data System (ADS)
Sembiring, N.; Nasution, A. H.
2018-02-01
Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is routinely performed in manufacturing companies. It forces the production process to be stopped: production time decreases while the maintenance team replaces or repairs the damaged machine component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine in a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability Engineering and Maintenance Value Stream Mapping are used as the method and tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
On modeling human reliability in space flights - Redundancy and recovery operations
NASA Astrophysics Data System (ADS)
Aarset, M.; Wright, J. F.
The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
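A minimal numerical sketch of the standby-redundancy idea discussed above follows. It uses the classical two-unit cold-standby formula with a constant failure rate and an imperfect switchover probability (all numbers invented); it is not the psychological model the authors propose, but it shows why the quality of the "switch", here the detection of the active operator's failure and the take-over by the back-up, dominates the benefit of adding a second operator.

```python
import math

def standby_reliability(lam, t, p_switch=1.0):
    """Two-unit cold-standby system: each unit has constant failure rate `lam`,
    and the standby is successfully brought on line with probability `p_switch`
    when the primary fails (perfect switching: p_switch = 1)."""
    return math.exp(-lam * t) * (1.0 + p_switch * lam * t)

lam = 1.0 / 1000.0      # one expected failure per 1000 hours (illustrative)
t = 500.0               # mission time [hours]

print(f"single operator:           R = {math.exp(-lam * t):.3f}")
for p in (1.0, 0.8, 0.4, 0.0):
    print(f"standby, switch prob {p:.1f}: R = {standby_reliability(lam, t, p):.3f}")
```

Note that this classical formula can never fall below the single-unit value; capturing the degradation the authors warn about requires the kind of dependence and common-cause treatment they argue human redundancy needs.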
Implementation of Integrated System Fault Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark
2008-01-01
Fault Management to support rocket engine test mission with highly reliable and accurate measurements; while improving availability and lifecycle costs. CORE ELEMENTS: Architecture, taxonomy, and ontology (ATO) for DIaK management. Intelligent Sensor Processes; Intelligent Element Processes; Intelligent Controllers; Intelligent Subsystem Processes; Intelligent System Processes; Intelligent Component Processes.
Reliability Analysis of a Glacier Lake Warning System Using a Bayesian Net
NASA Astrophysics Data System (ADS)
Sturny, Rouven A.; Bründl, Michael
2013-04-01
Besides structural mitigation measures like avalanche defense structures, dams and galleries, warning and alarm systems have become important measures for dealing with Alpine natural hazards. Integrating them into risk mitigation strategies and comparing their effectiveness with structural measures requires quantification of the reliability of these systems. However, little is known about how the reliability of warning systems can be quantified and which methods are suitable for comparing their contribution to risk reduction with that of structural mitigation measures. We present a reliability analysis of a warning system located in Grindelwald, Switzerland. The warning system was built for warning and protecting residents and tourists from glacier outburst floods as a consequence of a rapid drainage of the glacier lake. We have set up a Bayesian Net (BN, BPN) that allowed for a qualitative and quantitative reliability analysis. The Conditional Probability Tables (CPT) of the BN were determined according to manufacturer's reliability data for each component of the system as well as by assigning weights for specific BN nodes accounting for information flows and decision-making processes of the local safety service. The presented results focus on the two alerting units 'visual acoustic signal' (VAS) and 'alerting of the intervention entities' (AIE). For the summer of 2009, the reliability was determined to be 94 % for the VAS and 83 % for the AIE. The probability of occurrence of a major event was calculated as 0.55 % per day, resulting in an overall reliability of 99.967 % for the VAS and 99.906 % for the AIE. We concluded that a failure of the VAS alerting unit would be the consequence of a simultaneous failure of the four probes located in the lake and the gorge. Similarly, we deduced that the AIE would fail either if there were a simultaneous connectivity loss of the mobile and fixed network in Grindelwald, an Internet access loss or a failure of the regional operations centre. However, the probability of a common failure of these components was assumed to be low. Overall it can be stated that due to numerous redundancies, the investigated warning system is highly reliable and its influence on risk reduction is very high. Comparable studies in the future are needed to put these results into context and to gain more experience in how the reliability of warning systems could be determined in practice.
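The overall figures quoted above follow from combining the per-day event probability with the conditional reliability of each alerting unit; the short sketch below reproduces that arithmetic using only the numbers stated in the abstract.

```python
p_event = 0.0055                      # probability of a major outburst event per day
units = {"VAS": 0.94, "AIE": 0.83}    # conditional reliability given an event

for name, r_conditional in units.items():
    # Probability that a given day does NOT end with a missed alert.
    overall = 1.0 - p_event * (1.0 - r_conditional)
    print(f"{name}: overall daily reliability = {overall:.4%}")
# VAS -> 99.9670 %, AIE -> 99.9065 % (the abstract reports 99.967 % and 99.906 %)
```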
Interrater reliability assessment using the Test of Gross Motor Development-2.
Barnett, Lisa M; Minto, Christine; Lander, Natalie; Hardy, Louise L
2014-11-01
The aim was to examine interrater reliability of the object control subtest from the Test of Gross Motor Development-2 by live observation in a school field setting. Reliability study, cross-sectional design. Raters were rated on their ability to agree on (1) the raw total for the six object control skills; (2) each skill performance and (3) the skill components. Agreement for the object control subtest and the individual skills was assessed by an intraclass correlation (ICC), and a kappa statistic assessed skill component agreement. A total of 37 children (65% girls) aged 4-8 years (M = 6.2, SD = 0.8) were assessed in six skills by two raters, equating to 222 skill tests. Interrater reliability was excellent for the object control subtest (ICC = 0.93), and for individual skills, highest for the dribble (ICC = 0.94) followed by strike (ICC = 0.85), overhand throw (ICC = 0.84), underhand roll (ICC = 0.82), kick (ICC = 0.80) and the catch (ICC = 0.71). The strike and the throw had more components with less agreement. Even though the overall subtest score and individual skill agreement was good, some skill components had lower agreement, suggesting these may be more problematic to assess. This may mean some skill components need to be specified differently in order to improve component reliability. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
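For the ICC statistic used above, a minimal sketch of the two-way random-effects, absolute-agreement, single-rater form (Shrout and Fleiss's ICC(2,1)) is shown below; the exact ICC variant used by the authors is not stated in the abstract, and the raw totals here are invented for illustration.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `scores` has shape (n_subjects, n_raters)."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects (rows)
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters (columns)
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                              # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical raw object-control totals from two raters for eight children.
scores = [[34, 33], [28, 30], [41, 40], [25, 27], [37, 37], [30, 29], [22, 24], [36, 35]]
print(round(icc_2_1(scores), 2))
```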
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
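The response/resistance combination described above is often introduced through the classical stress-strength interference result for independent normal variables. The sketch below is that textbook special case with illustrative numbers; it is not the NESSUS algorithm, which supports general user-selected response and resistance models.

```python
import math

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability_normal(mu_resistance, sd_resistance, mu_response, sd_response):
    """P(resistance > response) when both are independent normal variables;
    also returns the reliability (safety) index beta."""
    beta = (mu_resistance - mu_response) / math.hypot(sd_resistance, sd_response)
    return normal_cdf(beta), beta

# Illustrative values in MPa (assumed, not from the report).
rel, beta = reliability_normal(mu_resistance=900.0, sd_resistance=60.0,
                               mu_response=650.0, sd_response=80.0)
print(f"beta = {beta:.2f}, component reliability = {rel:.5f}")   # beta = 2.50
```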
Integrated Design Methodology for Highly Reliable Liquid Rocket Engine
NASA Astrophysics Data System (ADS)
Kuratani, Naoshi; Aoki, Hiroshi; Yasui, Masaaki; Kure, Hirotaka; Masuya, Goro
An integrated design methodology is strongly required at the conceptual design phase to achieve highly reliable space transportation systems, especially propulsion systems, not only in Japan but all over the world. In the past, catastrophic failures caused losses of mission and vehicle (LOM/LOV) during the operational phase and, moreover, severely affected schedules and caused cost overruns during the later development phases. A design methodology for a highly reliable liquid rocket engine is preliminarily established and investigated in this study. A sensitivity analysis is systematically performed to demonstrate the effectiveness of this methodology and, in particular, to clarify the correlation between the combustion chamber, turbopump, and main valve as main components. This study describes the essential issues in understanding the stated correlations, the need to apply this methodology to the remaining critical failure modes in the whole engine system, and the perspective on engine development in the future.
Shuttle payload vibroacoustic test plan evaluation
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
Statistical decision theory is used to evaluate seven alternate vibro-acoustic test plans for Space Shuttle payloads; test plans include component, subassembly and payload testing and combinations of component and assembly testing. The optimum test levels and the expected cost are determined for each test plan. By including all of the direct cost associated with each test plan and the probabilistic costs due to ground test and flight failures, the test plans which minimize project cost are determined. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level.
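The decision-theoretic comparison above amounts to summing, for each test plan, its direct cost and the probabilistic costs of ground-test and flight failures, then picking the plan with the lowest expectation. The sketch below illustrates that bookkeeping only; every cost figure and probability is an invented placeholder, not a value from the NASA study.

```python
# Hypothetical figures in $M; placeholders only.
test_plans = {
    "component + assembly":  {"direct": 3.2, "p_ground_fail": 0.10, "p_flight_fail": 0.010},
    "subassembly only":      {"direct": 1.9, "p_ground_fail": 0.15, "p_flight_fail": 0.012},
    "payload acoustic only": {"direct": 1.1, "p_ground_fail": 0.20, "p_flight_fail": 0.020},
}
COST_GROUND_FAILURE = 0.8    # rework / retest cost per ground failure
COST_FLIGHT_FAILURE = 60.0   # loss associated with a vibroacoustic flight failure

def expected_cost(plan):
    return (plan["direct"]
            + plan["p_ground_fail"] * COST_GROUND_FAILURE
            + plan["p_flight_fail"] * COST_FLIGHT_FAILURE)

for name, plan in test_plans.items():
    print(f"{name:24s} expected cost = {expected_cost(plan):5.2f} $M")
best = min(test_plans, key=lambda name: expected_cost(test_plans[name]))
print("minimum expected-cost plan:", best)
```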
Elastohydrodynamic principles applied to the design of helicopter components.
NASA Technical Reports Server (NTRS)
Townsend, D. P.
1973-01-01
Elastohydrodynamic principles affecting the lubrication of transmission components are presented and discussed. Surface temperatures of the transmission bearings and gears affect elastohydrodynamic film thickness. Traction forces and sliding as well as the inlet temperature determine surface temperatures. High contact ratio gears cause increased sliding and may run at higher surface temperatures. Component life is a function of the ratio of elastohydrodynamic film thickness to composite surface roughness. Lubricant starvation reduces elastohydrodynamic film thickness and increases surface temperatures. Methods are presented which allow for the application of elastohydrodynamic principles to transmission design in order to increase system life and reliability.
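The film-thickness-to-roughness ratio mentioned above (often called the lambda ratio) is straightforward to compute. The sketch below uses the common composite-roughness definition with illustrative numbers; the regime thresholds in the comment are general rules of thumb, not values from this paper.

```python
import math

def lambda_ratio(film_thickness_um, rms_roughness_1_um, rms_roughness_2_um):
    """Ratio of EHD film thickness to the composite RMS roughness of the two
    contacting surfaces. Roughly: > 3 suggests full-film lubrication,
    < 1 suggests boundary lubrication (rule of thumb)."""
    composite = math.hypot(rms_roughness_1_um, rms_roughness_2_um)
    return film_thickness_um / composite

# Illustrative gear-mesh numbers: 0.4 um film, 0.20 um and 0.25 um RMS roughness.
print(round(lambda_ratio(0.4, 0.20, 0.25), 2))   # ~1.25, i.e. mixed lubrication
```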
Elastohydrodynamic principles applied to the design of helicopter components
NASA Technical Reports Server (NTRS)
Townsend, D. P.
1973-01-01
Elastohydrodynamic principles affecting the lubrication of transmission components are presented and discussed. Surface temperatures of the transmission bearings and gears affect elastohydrodynamic film thickness. Traction forces and sliding as well as the inlet temperature determine surface temperatures. High contact ratio gears cause increased sliding and may run at higher surface temperatures. Component life is a function of the ratio of elastohydrodynamic film thickness to composite surface roughness. Lubricant starvation reduces elastohydrodynamic film thickness and increases surface temperatures. Methods are presented which allow for the application of elastohydrodynamic principles to transmission design in order to increase system life and reliability.
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, parallel, or a combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of the reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
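For identical components with a common reliability p, a k-out-of-n subsystem model of this kind reduces to a binomial sum. The sketch below uses the convention that the subsystem fails once k components have failed (so k = 1 gives a series system and k = n a parallel system; the paper's indexing convention may differ), and the time dependence enters through p, e.g. p = exp(-integral of the failure rate).

```python
from math import comb

def subsystem_reliability(n, k, p):
    """Reliability of a subsystem of n identical components that fails when k
    (or more) components have failed; p is the per-component reliability."""
    q = 1.0 - p
    return sum(comb(n, j) * q**j * p**(n - j) for j in range(k))

p = 0.98                                  # illustrative per-component reliability
n = 20                                    # e.g. 20 identical panel assemblies
print(subsystem_reliability(n, 1, p))     # series-like:   p**n        ~ 0.668
print(subsystem_reliability(n, n, p))     # parallel-like: 1 - (1-p)**n ~ 1.000
print(subsystem_reliability(n, 3, p))     # tolerates up to two failed components
```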
PCA leverage: outlier detection for high-dimensional functional magnetic resonance imaging data.
Mejia, Amanda F; Nebel, Mary Beth; Eloyan, Ani; Caffo, Brian; Lindquist, Martin A
2017-07-01
Outlier detection for high-dimensional (HD) data is a popular topic in modern statistical research. However, one source of HD data that has received relatively little attention is functional magnetic resonance images (fMRI), which consist of hundreds of thousands of measurements sampled at hundreds of time points. At a time when the availability of fMRI data is rapidly growing (primarily through large, publicly available grassroots datasets), automated quality control and outlier detection methods are greatly needed. We propose principal components analysis (PCA) leverage and demonstrate how it can be used to identify outlying time points in an fMRI run. Furthermore, PCA leverage is a measure of the influence of each observation on the estimation of principal components, which are often of interest in fMRI data. We also propose an alternative measure, PCA robust distance, which is less sensitive to outliers and has controllable statistical properties. The proposed methods are validated through simulation studies and are shown to be highly accurate. We also conduct a reliability study using resting-state fMRI data from the Autism Brain Imaging Data Exchange and find that removal of outliers using the proposed methods results in more reliable estimation of subject-level resting-state networks using independent components analysis. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
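PCA leverage for an observation is the corresponding diagonal entry of the hat matrix formed from the retained principal-component scores. The sketch below computes it via the SVD on simulated data with one injected spike; the 3-times-median cut-off is a placeholder of my own, not necessarily the thresholding rule proposed in the paper.

```python
import numpy as np

def pca_leverage(data, n_components):
    """Leverage of each observation (e.g. fMRI time point) with respect to the
    first `n_components` principal components. `data` has shape (T, V)."""
    x = data - data.mean(axis=0)                  # center each variable
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    u_r = u[:, :n_components]
    return np.sum(u_r ** 2, axis=1)               # diag of the hat matrix U U^T

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 500))              # 200 time points, 500 "voxels"
ts[117] += 4.0                                    # inject an artificial spike
lev = pca_leverage(ts, n_components=10)
threshold = 3 * np.median(lev)                    # placeholder cut-off
print(np.where(lev > threshold)[0])               # the spiked time point stands out
```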
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Rongli; Freyberger, Arne P.; Legg, Robert A.
Several new accelerator projects are adopting superconducting accelerator technology. When accelerating cavities maintain high RF gradients, field emission, the emission of electrons from cavity walls, can occur and may impact operational cavity gradient, radiological environment via activated components, and reliability. In this talk, we will discuss instrumented measurements of field emission from the two 1.1 GeV superconducting continuous wave (CW) linacs in CEBAF. The goal is to improve the understanding of field emission sources originating from cryomodule production, installation and operation. Such basic knowledge is needed in guiding field emission control, mitigation, and reduction toward high gradient and reliable operation of superconducting accelerators.
Electric-Field Instrument With Ac-Biased Corona Point
NASA Technical Reports Server (NTRS)
Markson, R.; Anderson, B.; Govaert, J.
1993-01-01
Measurements indicative of incipient lightning yield additional information. New instrument gives reliable readings. High-voltage ac bias applied to needle point through high-resistance capacitance network provides corona discharge at all times, enabling more-slowly-varying component of electrostatic potential of needle to come to equilibrium with surrounding air. High resistance of high-voltage coupling makes instrument insensitive to wind. Improved corona-point instrument expected to yield additional information assisting in safety-oriented forecasting of lightning.
Paddock, L E; Veloski, J; Chatterton, M L; Gevirtz, F O; Nash, D B
2000-07-01
To develop a reliable and valid questionnaire to measure patient satisfaction with diabetes disease management programs. Questions related to structure, process, and outcomes were categorized into 14 domains defining the essential elements of diabetes disease management. Health professionals confirmed the content validity. Face validity was established by a patient focus group. The questionnaire was mailed to 711 patients with diabetes who participated in a disease management program. To reduce the number of questionnaire items, a principal components analysis was performed using a varimax rotation. The Scree test was used to select significant components. To further assess reliability and validity, Cronbach's alpha and product-moment correlations were calculated for components having ≥ 3 items with loadings > 0.50. The validated 73-item mailed satisfaction survey had a 34.1% response rate. Principal components analysis yielded 13 components with eigenvalues > 1.0. The Scree test proposed a 6-component solution (39 items), which explained 59% of the total variation. Internal consistency reliabilities computed for the first 6 components (alpha = 0.79-0.95) were acceptable. The final questionnaire, the Diabetes Management Evaluation Tool (DMET), was designed to assess patient satisfaction with diabetes disease management programs. Although more extensive testing of the questionnaire is appropriate, preliminary reliability and validity of the DMET has been demonstrated.
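The internal-consistency figures reported above are Cronbach's alpha values computed per component. A minimal sketch of that calculation is shown below; the 1-to-5 ratings are invented for illustration and are not the DMET data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)          # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical satisfaction ratings (1-5) from six respondents on four items
# belonging to one component of a questionnaire.
ratings = [[4, 4, 5, 4],
           [2, 3, 2, 2],
           [5, 5, 4, 5],
           [3, 3, 3, 4],
           [4, 5, 5, 4],
           [1, 2, 2, 1]]
print(round(cronbach_alpha(ratings), 2))
```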
Highest integration in microelectronics: Development of digital ASICs for PARS3-LR
NASA Astrophysics Data System (ADS)
Scholler, Peter; Vonlutz, Rainer
Essential electronic system components of PARS3-LR impose high requirements on computing power, power consumption, and reliability, with steadily increasing integration densities. These problems are solved by using integrated circuits, developed by LSI LOGIC, that exploit the technical and economic advantages of this leading-edge technology.
PHOBOS Exploration using Two Small Solar Electric Propulsion (SEP) Spacecraft
NASA Technical Reports Server (NTRS)
Lang, J. J.; Baker, J. D.; McElrath, T. P.; Piacentine, J. S.; Snyder, J. S.
2012-01-01
The Phobos Surveyor Mission concept provides an innovative, low-cost, highly reliable approach to exploring the inner solar system: a dual-manifest launch; use of only flight-proven, well-characterized commercial off-the-shelf components; and a flexible mission architecture that allows for a slew of unique measurements.
Active parallel redundancy for electronic integrator-type control circuits
NASA Technical Reports Server (NTRS)
Peterson, R. A.
1971-01-01
Circuit extends concept of redundant feedback control from type-0 to type-1 control systems. Inactive channels are slaves to the active channel; if the latter fails, it is rejected and a slave channel is activated. High reliability and elimination of single-component catastrophic failure are important in closed-loop control systems.
Innovative on board payload optical architecture for high throughput satellites
NASA Astrophysics Data System (ADS)
Baudet, D.; Braux, B.; Prieur, O.; Hughes, R.; Wilkinson, M.; Latunde-Dada, K.; Jahns, J.; Lohmann, U.; Fey, D.; Karafolas, N.
2017-11-01
For the next generation of HighThroughPut (HTP) Telecommunications Satellites, space end users' needs will result in higher link speeds and an increase in the number of channels; up to 512 channels running at 10Gbits/s. By keeping electrical interconnections based on copper, the constraints in term of power dissipation, number of electrical wires and signal integrity will become too demanding. The replacement of the electrical links by optical links is the most adapted solution as it provides high speed links with low power consumption and no EMC/EMI. But replacing all electrical links by optical links of an On Board Payload (OBP) is challenging. It is not simply a matter of replacing electrical components with optical but rather the whole concept and architecture have to be rethought to achieve a high reliability and high performance optical solution. In this context, this paper will present the concept of an Innovative OBP Optical Architecture. The optical architecture was defined to meet the critical requirements of the application: signal speed, number of channels, space reliability, power dissipation, optical signals crossing and components availability. The resulting architecture is challenging and the need for new developments is highlighted. But this innovative optically interconnected architecture will substantially outperform standard electrical ones.
Tutorial: Performance and reliability in redundant disk arrays
NASA Technical Reports Server (NTRS)
Gibson, Garth A.
1993-01-01
A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
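N+1 parity as described above stores, alongside the N data blocks, one parity block equal to their bitwise XOR; any single lost block can then be rebuilt by XOR-ing the survivors. A minimal sketch (with made-up block contents, and ignoring striping and block placement) follows.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bitwise XOR of equally sized byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data_blocks = [b"DISK-0, payload.", b"DISK-1, payload.", b"DISK-2, payload."]
parity = xor_blocks(data_blocks)                  # stored on the (N+1)-th disk

# Simulate losing disk 1 and recovering it from the remaining disks plus parity.
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_blocks[1]
print(rebuilt)
```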
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Pérez de los Cobos, José; Trujols, Joan; Siñol, Núria; Vasconcelos e Rego, Lisiane; Iraurgi, Ioseba; Batlle, Francesca
2014-09-01
Reliable and valid assessment of cocaine withdrawal is relevant for treating cocaine-dependent patients. This study examined the psychometric properties of the Spanish version of the Cocaine Selective Severity Assessment (CSSA), an instrument that measures cocaine withdrawal. Participants were 170 cocaine-dependent inpatients receiving detoxification treatment. Principal component analysis revealed a 4-factor structure for the CSSA that included the following components: 'Cocaine Craving and Psychological Distress', 'Lethargy', 'Carbohydrate Craving and Irritability', and 'Somatic Depressive Symptoms'. These 4 components accounted for 56.0% of total variance. Internal reliability for these components ranged from unacceptable to good (Cronbach's alpha: 0.87, 0.65, 0.55, and 0.22, respectively). All components except Somatic Depressive Symptoms presented concurrent validity with cocaine use. In summary, while some properties of the Spanish version of the CSSA are satisfactory, such as interpretability of factor structure and test-retest reliability, other properties, such as internal reliability and concurrent validity of some factors, are inadequate. Copyright © 2014 Elsevier Inc. All rights reserved.
[Authentic leadership. Concept and validation of the ALQ in Spain].
Moriano, Juan A; Molero, Fernando; Lévy Mangin, Jean-Pierre
2011-04-01
This study presents the validation of the Authentic Leadership Questionnaire (ALQ) in a sample of more than 600 Spanish employees. This questionnaire measures four distinct but related substantive components of authentic leadership. These components are: self-awareness, relational transparency, balanced processing, and internalized moral perspective. Structural equation modeling confirmed that the Spanish version of ALQ has high reliability and predictive validity for important leadership outputs such as perceived effectiveness of leadership, followers' extra effort and satisfaction with the leader.
Tribological Properties of Structural Ceramics
NASA Technical Reports Server (NTRS)
Buckley, Donald H.; Miyoshi, Kazuhisa
1987-01-01
Paper discusses tribological properties of structural ceramics. Function of tribological research is to bring about reduction in adhesion, friction, and wear of mechanical components; to prevent failures; and to provide long, reliable component life, through judicious selection of materials, operating parameters, and lubricants. Paper reviews adhesion, friction, wear, and lubrication of ceramics; anisotropic friction and wear behavior; and effects of surface films and interactions between ceramics and metals. Analogies with metals are made. Both oxide and nonoxide ceramics, including ceramics used as high temperature lubricants, are discussed.
Investigation of discrete component chip mounting technology for hybrid microelectronic circuits
NASA Technical Reports Server (NTRS)
Caruso, S. V.; Honeycutt, J. O.
1975-01-01
The use of polymer adhesives for high reliability microcircuit applications is a radical deviation from past practices in electronic packaging. Bonding studies were performed using two gold-filled conductive adhesives, 10/90 tin/lead solder and Indalloy no. 7 solder. Various types of discrete components were mounted on ceramic substrates using both thick-film and thin-film metallization. Electrical and mechanical testing were performed on the samples before and after environmental exposure to MIL-STD-883 screening tests.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system subject to satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration, and interruption duration per year at load points. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method in which one component is selected for FOR allocation and, in the next iteration, another component is selected for FOR allocation based on the magnitude of the CIP. The developed algorithm is implemented on a sample radial distribution system.
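The monovariable approach described above can be pictured as a greedy loop: in each iteration, compute the CIP of every section, improve the one with the largest CIP, and stop once the load-point constraints are met. The sketch below shows only that control flow; the CIP definition and the constraint check are simple placeholders of my own, not the formulations in the paper.

```python
# Current FOR values per section (illustrative only).
sections = {"S1": 0.12, "S2": 0.09, "S3": 0.15, "S4": 0.07}

def credible_improvement_potential(name, for_value):
    # Placeholder: treat the current FOR itself as the improvement potential.
    return for_value

def constraints_satisfied(for_values, threshold=0.08):
    # Placeholder load-point constraint: every section FOR at or below a threshold.
    return all(v <= threshold for v in for_values.values())

step = 0.01
while not constraints_satisfied(sections):
    worst = max(sections, key=lambda s: credible_improvement_potential(s, sections[s]))
    sections[worst] = max(0.0, sections[worst] - step)   # allocate improvement
    print(f"improve {worst}: FOR -> {sections[worst]:.2f}")
print("final allocation:", sections)
```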
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisner, R.; Melin, A.; Burress, T.
The objective of this project is to demonstrate improved reliability and increased performance made possible by deeply embedding instrumentation and controls (I&C) in nuclear power plant (NPP) components and systems. The project is employing a highly instrumented canned rotor, magnetic bearing, fluoride salt pump as its I&C technology demonstration platform. I&C is intimately part of the basic millisecond-by-millisecond functioning of the system; treating I&C as an integral part of the system design is innovative and will allow significant improvement in capabilities and performance. As systems become more complex and greater performance is required, traditional I&C design techniques become inadequate and more advanced I&C needs to be applied. New I&C techniques enable optimal and reliable performance and tolerance of noise and uncertainties in the system rather than merely monitoring quasistable performance. Traditionally, I&C has been incorporated in NPP components after the design is nearly complete; adequate performance was obtained through over-design. By incorporating I&C at the beginning of the design phase, the control system can provide superior performance and reliability and enable designs that are otherwise impossible. This report describes the progress and status of the project and provides a conceptual design overview for the platform to demonstrate the performance and reliability improvements enabled by advanced embedded I&C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reass, W.A.
1994-07-01
This paper describes the electrical design and operation of a high power modulator system implemented for the Los Alamos Plasma Source Ion Implantation (PSII) facility. To test the viability of the PSII process for various automotive components, the modulator must accept wide variations of load impedance. Components have varying area and composition which must be processed with different plasmas. Additionally, the load impedance may change by large factors during the typical 20 µs pulse, due to plasma displacement currents and sheath growth. As a preliminary design to test the system viability for automotive component implantation, suitable for a manufacturing environment, the circuit topology must be able to scale directly to higher power versions for increased component throughput. We have chosen an evolutionary design approach with component families of characterized performance, which should result in a reliable modulator system with long component lifetimes. The modulator utilizes a pair of Litton L-3408 hollow beam amplifier tubes as switching elements in a "hot-deck" configuration. Internal to the main hot deck, an additional pair of planar triode decks, configured in a totem pole circuit, provides input drive to the L-3408 mod-anodes. The modulator can output over 2 A of average current (at 100 kV) with 1 kW of mod-anode drive. Diagnostic electronics monitor the load and stop pulses for 100 ms when a load arc occurs. This paper, in addition to providing detailed engineering design information, provides operational characteristics and reliability data that direct the design toward the higher power, mass-production-capable modulators.
PVD TBC experience on GE aircraft engines
NASA Technical Reports Server (NTRS)
Bartz, A.; Mariocchi, A.; Wortman, D. J.
1995-01-01
The higher performance levels of modern gas turbine engines present significant challenges in the reliability of materials in the turbine. The increased engine temperatures required to achieve the higher performance levels reduce the strength of the materials used in the turbine sections of the engine. Various forms of Thermal Barrier Coatings (TBC's) have been used for many years to increase the reliability of gas turbine engine components. Recent experience with the Physical Vapor Deposition (PVD) process using ceramic material has demonstrated success in extending the service life of turbine blades and nozzles. Engine test results of turbine components with a 125 micrometer (0.005 in) PVD TBC have demonstrated component operating temperatures of 56-83 C (100-150 F) lower than uncoated components. Engine testing has also revealed the TBC is susceptible to high angle particle impact damage. Sand particles and other engine debris impact the TBC surface at the leading edge of airfoils and fracture the PVD columns. As the impacting continues the TBC erodes away in local areas. Analysis of the eroded areas has shown a slight increase in temperature over a fully coated area, however, a significant temperature reduction was realized over an airfoil without any TBC.
PVD TBC experience on GE aircraft engines
NASA Technical Reports Server (NTRS)
Maricocchi, Antonio; Bartz, Andi; Wortman, David
1995-01-01
The higher performance levels of modern gas turbine engines present significant challenges in the reliability of materials in the turbine. The increased engine temperatures required to achieve the higher performance levels reduce the strength of the materials used in the turbine sections of the engine. Various forms of thermal barrier coatings (TBC's) have been used for many years to increase the reliability of gas turbine engine components. Recent experience with the physical vapor deposition (PVD) process using ceramic material has demonstrated success in extending the service life of turbine blades and nozzles. Engine test results of turbine components with a 125 micron (0.005 in) PVD TBC have demonstrated component operating temperatures of 56-83 C (100-150 F) lower than non-PVD TBC components. Engine testing has also revealed the TBC is susceptible to high angle particle impact damage. Sand particles and other engine debris impact the TBC surface at the leading edge of airfoils and fracture the PVD columns. As the impacting continues, the TBC erodes away in local areas. Analysis of the eroded areas has shown a slight increase in temperature over a fully coated area, however a significant temperature reduction was realized over an airfoil without TBC.
PVD TBC experience on GE aircraft engines
NASA Astrophysics Data System (ADS)
Maricocchi, A.; Bartz, A.; Wortman, D.
1997-06-01
The higher performance levels of modern gas turbine engines present significant challenges in the reliability of materials in the turbine. The increased engine temperatures required to achieve the higher performance levels reduce the strength of the materials used in the turbine sections of the engine. Various forms of thermal barrier coatings have been used for many years to increase the reliability of gas turbine engine components. Recent experience with the physical vapor deposition process using ceramic material has demonstrated success in extending the service life of turbine blades and nozzles. Engine test results of turbine components with a 125 μm (0.005 in.) PVD TBC have demonstrated component operating temperatures of 56 to 83 °C (100 to 150 °F) lower than non-PVD TBC components. Engine testing has also revealed that TBCs are susceptible to high angle particle impact damage. Sand particles and other engine debris impact the TBC surface at the leading edge of airfoils and fracture the PVD columns. As the impacting continues, the TBC erodes in local areas. Analysis of the eroded areas has shown a slight increase in temperature over a fully coated area; however, a significant temperature reduction was realized over an airfoil without TBC.
Retest reliability of individual p3 topography assessed by high density electroencephalography.
Vázquez-Marrufo, Manuel; González-Rosa, Javier J; Galvao-Carmona, Alejandro; Hidalgo-Muñoz, Antonio; Borges, Mónica; Peña, Juan Luis Ruiz; Izquierdo, Guillermo
2013-01-01
Some controversy remains about the potential applicability of cognitive potentials for evaluating the cerebral activity associated with cognitive capacity. A fundamental requirement is that these neurophysiological parameters show a high level of stability over time. Previous studies have shown that the reliability of diverse parameters of the P3 component (latency and amplitude) ranges between moderate and high. However, few studies have paid attention to the retest reliability of the P3 topography in groups or individuals. Considering that changes in P3 topography have been related to different pathologies and healthy aging, the main objective of this article was to evaluate in a longitudinal study (two sessions) the reliability of P3 topography in a group and at the individual level. The correlation between sessions for P3 topography in the grand average of groups was high (r = 0.977, p < 0.001). The within-subject correlation values ranged from 0.626 to 0.981 (mean: 0.888). In the between-subjects topography comparisons, the correlation was always lower for comparisons between different subjects than for within-subjects correlations in the first session but not in the second session. The present study shows that P3 topography is highly reliable for group analysis (comprising the same subjects) in different sessions. The results also confirmed that retest reliability for individual P3 maps is suitable for follow-up studies for a particular subject. Moreover, P3 topography appears to be a specific marker considering that the between-subjects correlations were lower than the within-subject correlations. However, P3 topography appears more similar between subjects in the second session, demonstrating that it is modulated by experience. Possible clinical applications of all these results are discussed.
Reliable High Performance Peta- and Exa-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G
2012-04-02
As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work is making it possible to automatically understand the behavior of arbitrary real-world systems to enable them to tolerate a wide range of system faults. My project is following a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems. Section II.A discusses resilience in the face of soft faults and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan
2014-12-02
The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers' high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printing Wiring Board (PWB). The reliability of the solder joint between electronic components and PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test with 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine, which affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for the solder volume is squeegee pressure (2.0 mm), and for the solder height it is the table speed of the SP machine (2.5 mm/s).
Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan
2014-01-01
The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers' high-quality expectations. This paper considers the problem of most electronic industries in reducing costly materials, by introducing a solder paste with alloy composition tin 98.3%, silver 0.3%, and copper 0.7%, used for the construction of the surface mount fine-pitch component on a Printing Wiring Board (PWB). The reliability of the solder joint between electronic components and PWB is evaluated through the dynamic characteristic test, thermal shock test, and Taguchi method after the printing process. After experimenting with the dynamic characteristic test and thermal shock test with 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine, which affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The result shows that the most significant parameter for the solder volume is squeegee pressure (2.0 mm), and for the solder height it is the table speed of the SP machine (2.5 mm/s). PMID:28788270
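The Taguchi analysis referred to in the two preceding records ranks parameter settings by a signal-to-noise (S/N) ratio. A minimal sketch of the larger-is-better form of that ratio is shown below; the replicated readings, the parameter levels, and even the choice of quality characteristic are invented for illustration and are not the study's data (the study may well use a different S/N form, e.g. nominal-is-best).

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio (dB) when larger responses are better."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicated solder-volume readings (%) at two squeegee-pressure levels.
volume_at_low_pressure  = [78.0, 81.0, 76.0]
volume_at_high_pressure = [92.0, 90.0, 93.0]
for label, y in (("low", volume_at_low_pressure), ("high", volume_at_high_pressure)):
    print(f"squeegee pressure {label}: S/N = {sn_larger_is_better(y):.2f} dB")
# The level with the higher S/N ratio would be selected as the more robust setting.
```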
Buchan, Jena; Janda, Monika; Box, Robyn; Rogers, Laura; Hayes, Sandi
2015-03-18
No tool exists to measure self-efficacy for overcoming lymphedema-related exercise barriers in individuals with cancer-related lymphedema. However, an existing scale measures confidence to overcome general exercise barriers in cancer survivors. Therefore, the purpose of this study was to develop, validate and assess the reliability of a subscale, to be used in conjunction with the general barriers scale, for determining exercise barriers self-efficacy in individuals facing lymphedema-related exercise barriers. A lymphedema-specific exercise barriers self-efficacy subscale was developed and validated using a cohort of 106 cancer survivors with cancer-related lymphedema, from Brisbane, Australia. An initial ten-item lymphedema-specific barrier subscale was developed and tested, with participant feedback and principal components analysis results used to guide development of the final version. Validity and test-retest reliability analyses were conducted on the final subscale. The final lymphedema-specific subscale contained five items. Principal components analysis revealed these items loaded highly (>0.75) on a separate factor when tested with a well-established nine-item general barriers scale. The final five-item subscale demonstrated good construct and criterion validity, high internal consistency (Cronbach's alpha = 0.93) and test-retest reliability (ICC = 0.67, p < 0.01). A valid and reliable lymphedema-specific subscale has been developed to assess exercise barriers self-efficacy in individuals with cancer-related lymphedema. This scale can be used in conjunction with an existing general exercise barriers scale to enhance exercise adherence in this understudied patient group.
Arshad, Muzamil; Stanley, Jeffrey A; Raz, Naftali
2017-04-01
In an age-heterogeneous sample of healthy adults, we examined test-retest reliability (with and without participant repositioning) of two popular MRI methods of estimating myelin content: modeling the short spin-spin (T2) relaxation component of multi-echo imaging data and computing the ratio of T1-weighted and T2-weighted images (T1w/T2w). Taking the myelin water fraction (MWF) index of myelin content derived from the multi-component T2 relaxation data as a standard, we evaluate the concurrent and differential validity of T1w/T2w ratio images. The results revealed high reliability of MWF and T1w/T2w ratio. However, we found significant correlations of low to moderate magnitude between MWF and the T1w/T2w ratio in only two of six examined regions of the cerebral white matter. Notably, significant correlations of the same or greater magnitude were observed for T1w/T2w ratio and the intermediate T2 relaxation time constant, which is believed to reflect differences in the mobility of water between the intracellular and extracellular compartments. We conclude that although both methods are highly reliable and thus well-suited for longitudinal studies, T1w/T2w ratio has low criterion validity and may not be an optimal index of subcortical myelin content. Hum Brain Mapp 38:1780-1790, 2017. © 2017 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
de Montbrun, Sandra; Roberts, Patricia L; Satterthwaite, Lisa; MacRae, Helen
2016-07-01
To implement the Colorectal Objective Structured Assessment of Technical Skill (COSATS) into American Board of Colon and Rectal Surgery (ABCRS) certification and build evidence of validity for the interpretation of the scores of this high-stakes assessment tool. Currently, technical skill assessment is not a formal component of board certification. Given the technical demands of surgical specialties, documenting competence in technical skill at the time of certification with a valid tool is ideal. In September 2014, the COSATS became a mandatory component of ABCRS certification. Seventy candidates took the examination, with their performance evaluated by expert colorectal surgeons using a task-specific checklist, global rating scale, and overall performance scale. Passing scores were set and compared using 2 standard-setting methodologies, using a compensatory and a conjunctive model. Inter-rater reliability and the reliability of the pass/fail decision were calculated using Cronbach alpha and Subkoviak methodology, respectively. Overall COSATS scores and pass/fail status were compared with results on the ABCRS oral examination. The pass rate ranged from 85.7% to 90%. Inter-rater reliability (0.85) and reliability of the pass/fail decision (0.87 and 0.84) were high. A low positive correlation (r = 0.25) was seen between the COSATS and the oral examination. All individuals who failed the COSATS passed the ABCRS oral examination. COSATS is the first technical skill examination used in national surgical board certification. This study suggests that the current certification process may be failing to identify individuals who have demonstrated technical deficiencies on this standardized assessment tool.
Reliability analysis of laminated CMC components through shell subelement techniques
NASA Technical Reports Server (NTRS)
Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.
1992-01-01
An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.
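The weakest-link evaluation described here can be pictured as a two-parameter Weibull risk-of-rupture integral accumulated at the Gaussian integration points of each element, with element survival probabilities multiplied into a component survival probability. The sketch below uses a simple uniaxial (first-principal-stress) form with invented parameters and a two-element mesh; it illustrates the idea only and is not the C/CARES implementation.

```python
import numpy as np

def element_survival(stresses, weights, det_jacobian, m, sigma0):
    """Two-parameter Weibull survival probability of one element, accumulated
    over its Gaussian integration points.

    stresses      : first principal stress at each Gauss point (MPa)
    weights       : quadrature weights
    det_jacobian  : Jacobian determinants, scaled so weights*det_jacobian sums
                    to the element size in units of the reference volume
    m, sigma0     : Weibull modulus and characteristic strength of the
                    reference volume (MPa)
    """
    stresses = np.clip(np.asarray(stresses, dtype=float), 0.0, None)  # ignore compression
    risk = np.sum(weights * det_jacobian * (stresses / sigma0) ** m)  # risk-of-rupture sum
    return np.exp(-risk)

def component_failure_probability(elements, m, sigma0):
    """Weakest-link (series) combination of element survival probabilities."""
    survival = 1.0
    for stresses, weights, detj in elements:
        survival *= element_survival(stresses, weights, detj, m, sigma0)
    return 1.0 - survival

# Hypothetical two-element model with 4 Gauss points per element.
elements = [
    (np.array([120.0, 135.0, 110.0, 128.0]), np.full(4, 0.25), np.full(4, 1.0)),
    (np.array([95.0, 102.0, 88.0, 99.0]), np.full(4, 0.25), np.full(4, 1.0)),
]
print(round(component_failure_probability(elements, m=10.0, sigma0=200.0), 4))
```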
NASA Astrophysics Data System (ADS)
Pattalwar, Shrikant; Jones, Thomas; Strachan, John; Bate, Robert; Davies, Phil; McIntosh, Peter
2012-06-01
Through an international cryomodule collaboration, ASTeC at Daresbury Laboratory has taken the primary responsibility for leading the development of an optimised Superconducting RF (SRF) cryomodule operating in CW mode for energy recovery facilities and other high duty cycle accelerators. For high beam current operation, Higher Order Mode (HOM) absorbers are critical components of the SRF cryomodule, ensuring that excessive heating of the accelerating structures and beam instabilities are effectively managed. This paper describes some of the cold tests conducted on the HOM absorbers and other critical components during the construction phase to ensure that quality and reliable cryomodule performance are maintained.
Irvine, Karen-Amanda; Ferguson, Adam R.; Mitchell, Kathleen D.; Beattie, Stephanie B.; Lin, Amity; Stuck, Ellen D.; Huie, J. Russell; Nielson, Jessica L.; Talbott, Jason F.; Inoue, Tomoo; Beattie, Michael S.; Bresnahan, Jacqueline C.
2014-01-01
The IBB scale is a recently developed forelimb scale for the assessment of fine control of the forelimb and digits after cervical spinal cord injury [SCI; (1)]. The present paper describes the assessment of inter-rater reliability and face, concurrent and construct validity of this scale following SCI. It demonstrates that the IBB is a reliable and valid scale that is sensitive to severity of SCI and to recovery over time. In addition, the IBB correlates with other outcome measures and is highly predictive of biological measures of tissue pathology. Multivariate analysis using principal component analysis (PCA) demonstrates that the IBB is highly predictive of the syndromic outcome after SCI (2), and is among the best predictors of bio-behavioral function, based on strong construct validity. Altogether, the data suggest that the IBB, especially in concert with other measures, is a reliable and valid tool for assessing neurological deficits in fine motor control of the distal forelimb, and represents a powerful addition to multivariate outcome batteries aimed at documenting recovery of function after cervical SCI in rats. PMID:25071704
NASA Technical Reports Server (NTRS)
Harkney, R. D.
1980-01-01
Increased system requirements and functional integration with the aircraft have placed an increased demand on control system capability and reliability. To provide these at an affordable cost and weight and because of the rapid advances in electronic technology, hydromechanical systems are being phased out in favor of digital electronic systems. The transition is expected to be orderly from electronic trimming of hydromechanical controls to full authority digital electronic control. Future propulsion system controls will be highly reliable full authority digital electronic with selected component and circuit redundancy to provide the required safety and reliability. Redundancy may include a complete backup control of a different technology for single engine applications. The propulsion control will be required to communicate rapidly with the various flight and fire control avionics as part of an integrated control concept.
Skinner, Ian W; Hübscher, Markus; Moseley, G Lorimer; Lee, Hopin; Wand, Benedict M; Traeger, Adrian C; Gustin, Sylvia M; McAuley, James H
2017-08-15
Eyetracking is commonly used to investigate attentional bias. Although some studies have investigated the internal consistency of eyetracking, data are scarce on the test-retest reliability and agreement of eyetracking to investigate attentional bias. This study reports the test-retest reliability, measurement error, and internal consistency of 12 commonly used outcome measures thought to reflect the different components of attentional bias: overall attention, early attention, and late attention. Healthy participants completed a preferential-looking eyetracking task that involved the presentation of threatening (sensory words, general threat words, and affective words) and nonthreatening words. We used intraclass correlation coefficients (ICCs) to measure test-retest reliability (ICC > .70 indicates adequate reliability). The ICCs(2, 1) ranged from -.31 to .71. Reliability varied according to the outcome measure and threat word category. Sensory words had a lower mean ICC (.08) than either affective words (.32) or general threat words (.29). A longer exposure time was associated with higher test-retest reliability. All of the outcome measures, except second-run dwell time, demonstrated low measurement error (<6%). Most of the outcome measures reported high internal consistency (α > .93). Recommendations are discussed for improving the reliability of eyetracking tasks in future research.
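As a reference for the reliability statistic used here, a minimal ICC(2,1) computation (two-way random effects, absolute agreement, single measure, following Shrout and Fleiss, 1979) on a subjects-by-sessions matrix might look like the sketch below; the scores are invented and do not come from the study.

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects x k_sessions) matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # subjects
    col_means = x.mean(axis=0)   # sessions
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical dwell-time bias scores (ms) for 6 participants at two sessions.
scores = np.array([[120, 110], [80, 95], [150, 140], [60, 70], [100, 105], [130, 120]])
print(round(icc_2_1(scores), 2))
```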
Kumar, Mohit; Yadav, Shiv Prasad
2012-03-01
This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. However, in practical problems, such a situation rarely occurs. Therefore, in the present paper, a new algorithm has been introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
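A much simpler stand-in for the series/parallel step is sketched below using ordinary triangular fuzzy failure rates evaluated by alpha-cut interval arithmetic; this is not the paper's intuitionistic-fuzzy, nonlinear-programming construction, and the rates, mission time, and alpha level are illustrative assumptions.

```python
import numpy as np

def alpha_cut_triangular(a, b, c, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha."""
    return a + alpha * (b - a), c - alpha * (c - b)

def reliability_interval(lam_interval, t):
    """R(t) = exp(-lambda*t) is decreasing in lambda, so the bounds swap."""
    lo, hi = lam_interval
    return np.exp(-hi * t), np.exp(-lo * t)

def series(intervals):
    return (float(np.prod([r[0] for r in intervals])),
            float(np.prod([r[1] for r in intervals])))

def parallel(intervals):
    return (1.0 - float(np.prod([1.0 - r[0] for r in intervals])),
            1.0 - float(np.prod([1.0 - r[1] for r in intervals])))

# Hypothetical triangular fuzzy failure rates (per hour) for three components.
rates = [(1e-4, 2e-4, 3e-4), (2e-4, 3e-4, 5e-4), (5e-5, 1e-4, 2e-4)]
t, alpha = 1000.0, 0.5
comps = [reliability_interval(alpha_cut_triangular(*r, alpha), t) for r in rates]
print("series  :", series(comps))
print("parallel:", parallel(comps))
```

Repeating this over a grid of alpha levels traces out the membership function of the system reliability for the simplified fuzzy case.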
Test-retest reliability of infant event related potentials evoked by faces.
Munsters, N M; van Ravenswaaij, H; van den Boomen, C; Kemner, C
2017-04-05
Reliable measures are required to draw meaningful conclusions regarding developmental changes in longitudinal studies. Little is known, however, about the test-retest reliability of face-sensitive event related potentials (ERPs), a frequently used neural measure in infants. The aim of the current study is to investigate the test-retest reliability of ERPs typically evoked by faces in 9-10-month-old infants. The infants (N = 31) were presented with neutral, fearful and happy faces that contained only the lower or higher spatial frequency information. They were tested twice within two weeks. The present results show that the test-retest reliability of the face-sensitive ERP components is moderate (P400 and Nc) to substantial (N290). However, there is low test-retest reliability for the effects of the specific experimental manipulations (i.e. emotion and spatial frequency) on the face-sensitive ERPs. To conclude, in infants the face-sensitive ERP components (i.e. N290, P400 and Nc) show adequate test-retest reliability, but not the effects of emotion and spatial frequency on these ERP components. We propose that further research focuses on investigating elements that might increase the test-retest reliability, as adequate test-retest reliability is necessary to draw meaningful conclusions on individual developmental trajectories of the face-sensitive ERPs in infants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Space flight requirements for fiber optic components: qualification testing and lessons learned
NASA Astrophysics Data System (ADS)
Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam
2006-04-01
"Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology. Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations. This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer. Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments. This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry.
Space Flight Requirements for Fiber Optic Components; Qualification Testing and Lessons Learned
NASA Technical Reports Server (NTRS)
Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam
2007-01-01
"Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology. Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations. This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer. Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments. This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry.
Cè, Emiliano; Rampichini, Susanna; Monti, Elena; Venturelli, Massimo; Limonta, Eloisa; Esposito, Fabio
2017-01-01
Peripheral fatigue involves electrochemical and mechanical mechanisms. An electromyographic, mechanomyographic and force combined approach may permit a kinetic evaluation of the changes at the synaptic, skeletal muscle fiber, and muscle-tendon unit level during a fatiguing stimulation. Surface electromyogram, mechanomyogram, force and stimulation current were detected from the gastrocnemius medialis muscle in twenty male participants during a fatiguing stimulation (twelve blocks of 35 Hz stimulations, duty cycle 9 s on/1 s off, duration 120 s). The total electromechanical delay and its three components (between stimulation current and electromyogram, synaptic component; between electromyogram and mechanomyogram signal onset, muscle fiber electrochemical component, and between mechanomyogram and force signal onset, mechanical component) were calculated. Interday reliability and sensitivity were determined. After fatigue, peak force decreased by 48% (P < 0.05) and the total electromechanical delay and its synaptic, electrochemical and mechanical components lengthened from 25.8 ± 0.9, 1.47 ± 0.04, 11.2 ± 0.6, and 13.1 ± 1.3 ms to 29.0 ± 1.6, 1.56 ± 0.05, 12.4 ± 0.9, and 17.2 ± 0.6 ms, respectively (P < 0.05). During fatigue, the total electromechanical delay and the mechanical component increased significantly after the 40th second, and then remained stable. The synaptic and electrochemical components lengthened significantly after the 20th and 30th second, respectively. Interday reliability was high to very high, with an adequate level of sensitivity. The kinetic evaluation of the delays during the fatiguing stimulation highlighted different onsets and kinetics, with the events at synaptic level being the first to reveal a significant elongation, followed by those at the intra-fiber level. The mechanical events, which were the most affected by fatigue, were the last to lengthen.
NASA Astrophysics Data System (ADS)
Thylén, Lars
2006-07-01
The design and manufacture of components and systems underpin the European and indeed worldwide photonics industry. Optical materials and photonic components serve as the basis for systems building at different levels of complexity. In most cases, they perform a key function and dictate the performance of these systems. New products and processes will generate economic activity for the European photonics industry into the 21st century. However, progress will rely on Europe's ability to develop new and better materials, components and systems. To achieve success, photonic components and systems must: be reliable and inexpensive; be generic and adaptable; offer superior functionality; be innovative and protected by Intellectual Property; and be aligned to market opportunities. The challenge in the short, medium, and long term is to put a coordinating framework in place which will make the European activity in this technology area competitive with those in the US and Asia. In the short term the aim should be to facilitate the vibrant and profitable European photonics industry to further develop its ability to commercialize advances in photonics-related technologies. In the medium and longer terms the objective must be to place renewed emphasis on materials research and the design and manufacture of key components and systems to form the critical link between scientific endeavour and commercial success. All these general issues are highly relevant for the component-intensive broadband communications industry. Also relevant for this development is the convergence of data and telecom, where the low cost of datacom meets the high reliability requirements of telecom. The text below is to a degree taken from the Strategic Research Agenda of the Technology Platform Photonics 21 [1], as it contains a concerted effort to iron out a strategy for the EU in the area of photonics components and systems.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and slow crack growth (SCG, or fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
Advanced Electrical Materials and Components Being Developed
NASA Technical Reports Server (NTRS)
Schwarze, Gene E.
2004-01-01
All aerospace systems require power management and distribution (PMAD) between the energy and power source and the loads. The PMAD subsystem can be broadly described as the conditioning and control of unregulated power from the energy source and its transmission to a power bus for distribution to the intended loads. All power and control circuits for PMAD require electrical components for switching, energy storage, voltage-to-current transformation, filtering, regulation, protection, and isolation. Advanced electrical materials and component development technology is a key technology to increasing the power density, efficiency, reliability, and operating temperature of the PMAD. The primary means to develop advanced electrical components is to develop new and/or significantly improved electronic materials for capacitors, magnetic components, and semiconductor switches and diodes. The next important step is to develop the processing techniques to fabricate electrical and electronic components that exceed the specifications of presently available state-of-the-art components. The NASA Glenn Research Center's advanced electrical materials and component development technology task is focused on the following three areas: 1) New and/or improved dielectric materials for the development of power capacitors with increased capacitance volumetric efficiency, energy density, and operating temperature; 2) New and/or improved high-frequency, high-temperature soft magnetic materials for the development of transformers and inductors with increased power density, energy density, electrical efficiency, and operating temperature; 3) Packaged high-temperature, high-power density, high-voltage, and low-loss SiC diodes and switches.
Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akerstedt, Henrik; Muschter, Steffen; Drake, Gary
The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability, the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation-tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies, along with their integration in the demonstrator, are presented in the context of high-reliability protection against hardware malfunction and radiation-induced errors.
Scale for positive aspects of caregiving experience: development, reliability, and factor structure.
Kate, N; Grover, S; Kulhara, P; Nehra, R
2012-06-01
OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.
Blouin, Danielle; Day, Andrew G.; Pavlov, Andrey
2011-01-01
Background: Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. Methods: In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Results: Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. Conclusions: A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains established as essential to succeed in this residency program. PMID:23205201
Blouin, Danielle; Day, Andrew G; Pavlov, Andrey
2011-12-01
Although never directly compared, structured interviews are reported as being more reliable than unstructured interviews. This study compared the reliability of both types of interview when applied to a common pool of applicants for positions in an emergency medicine residency program. In 2008, one structured interview was added to the two unstructured interviews traditionally used in our resident selection process. A formal job analysis using the critical incident technique guided the development of the structured interview tool. This tool consisted of 7 scenarios assessing 4 of the domains deemed essential for success as a resident in this program. The traditional interview tool assessed 5 general criteria. In addition to these criteria, the unstructured panel members were asked to rate each candidate on the same 4 essential domains rated by the structured panel members. All 3 panels interviewed all candidates. Main outcomes were the overall, interitem, and interrater reliabilities, the correlations between interview panels, and the dimensionality of each interview tool. Thirty candidates were interviewed. The overall reliability reached 0.43 for the structured interview, and 0.81 and 0.71 for the unstructured interviews. Analyses of the variance components showed a high interrater, low interitem reliability for the structured interview, and a high interrater, high interitem reliability for the unstructured interviews. The summary measures from the 2 unstructured interviews were significantly correlated, but neither was correlated with the structured interview. Only the structured interview was multidimensional. A structured interview did not yield a higher overall reliability than both unstructured interviews. The lower reliability is explained by a lower interitem reliability, which in turn is due to the multidimensionality of the interview tool. Both unstructured panels consistently rated a single dimension, even when prompted to assess the 4 specific domains established as essential to succeed in this residency program.
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Nemeth, Noel N.
2017-01-01
Advanced environmental barrier coatings will play an increasingly important role in future gas turbine engines because of their ability to protect emerging light-weight SiC/SiC ceramic matrix composite (CMC) engine components, further raising engine operating temperatures and performance. Because the environmental barrier coating systems are critical to the performance, reliability and durability of these hot-section ceramic engine components, a prime-reliant coating system along with an established life design methodology are required for the hot-section ceramic component insertion into engine service. In this paper, we first summarize some observations of high temperature, high-heat-flux environmental degradation and failure mechanisms of environmental barrier coating systems in laboratory simulated engine environment tests. In particular, the coating surface cracking morphologies and associated subsequent delamination mechanisms under the engine level high-heat-flux, combustion steam, and mechanical creep and fatigue loading conditions will be discussed. The EBC composition and architecture improvements based on advanced high-heat-flux environmental testing, and the modeling advances based on the integrated Finite Element Analysis Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program, will also be highlighted. The stochastic progressive damage simulation successfully predicts the mud-flat damage pattern in EBCs on coated 3-D specimens and in a 2-D model of a through-the-thickness cross-section. A 2-parameter Weibull distribution was assumed in characterizing the coating layer stochastic strength response, and the formation of damage was therefore modeled. The damage initiation and coalescence into progressively smaller mud-flat crack cells were demonstrated. A coating life prediction framework may be realized by examining the surface crack initiation and delamination propagation in conjunction with environmental degradation under high-heat-flux and environmental load test conditions.
Qualification of Laser Diode Arrays for Mercury Laser Altimeter
NASA Technical Reports Server (NTRS)
Stephen, Mark; Vasilyev, Aleksey; Schafer, John; Allan, Graham R.
2004-01-01
NASA's requirements for high-reliability, high-performance satellite laser instruments have driven the investigation of many critical components, specifically 808 nm laser diode array (LDA) pump devices. The performance of quasi-CW, high-power laser diode arrays under extended use is presented. We report the optical power over several hundred million pulses of operation and the effect of power cycling and temperature cycling of the laser diode arrays. Data on the initial characterization of the devices are also presented.
Transmission overhaul estimates for partial and full replacement at repair
NASA Technical Reports Server (NTRS)
Savage, M.; Lewicki, D. G.
1991-01-01
Timely transmission overhauls increase in-flight service reliability beyond the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predictions of a transmission's maintenance needs at the design stage should enable the development of more cost-effective and reliable transmissions in the future. The frequency of overhaul is estimated, along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two-parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules which repair the transmission with either a complete system replacement or replacement of only the failed components. An example illustrates the methods.
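One way to picture the overhaul estimate is to treat the transmission as a series system of two-parameter Weibull components and solve for the operating time at which system reliability falls to a target value. The sketch below assumes SciPy is available; the (beta, eta) pairs and the 0.95 target are invented, and the calculation ignores the repair-policy distinctions analyzed in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def system_reliability(t, components):
    """Series-system reliability at time t; each component is (beta, eta)
    of a two-parameter Weibull life distribution."""
    return float(np.prod([np.exp(-(t / eta) ** beta) for beta, eta in components]))

def time_to_reliability(target, components, t_max=1e6):
    """Operating time at which the series-system reliability drops to `target`."""
    return brentq(lambda t: system_reliability(t, components) - target, 1e-6, t_max)

# Hypothetical gear, bearing, and shaft lives (beta, eta in flight hours).
components = [(2.5, 9000.0), (1.8, 15000.0), (3.0, 20000.0)]
print(round(time_to_reliability(0.95, components), 1), "hours between overhauls")
```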
Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid
2016-10-01
In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.
Ceramic Technology For Advanced Heat Engines Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1990-12-01
Significant accomplishments in fabricating ceramic components for the Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and Department of Defense (DoD) advanced heat engine programs have provided evidence that the operation of ceramic parts in high-temperature engine environments is feasible. However, these programs have also demonstrated that additional research is needed in materials and processing development, design methodology, and data base and life prediction before industry will have a sufficient technology base from which to produce reliable, cost-effective ceramic engine components commercially. The objective of the project is to develop the industrial technology base required for reliable ceramics for application in advanced automotive heat engines. The project approach includes determining the mechanisms controlling reliability, improving processes for fabricating existing ceramics, developing new materials with increased reliability, and testing these materials in simulated engine environments to confirm reliability. Although this is a generic materials project, the focus is on the structural ceramics for advanced gas turbine and diesel engines, ceramic bearings and attachments, and ceramic coatings for thermal barrier and wear applications in these engines. This advanced materials technology is being developed in parallel and close coordination with the ongoing DOE and industry proof of concept engine development programs. To facilitate the rapid transfer of this technology to U.S. industry, the major portion of the work is being done in the ceramic industry, with technological support from government laboratories, other industrial laboratories, and universities. Abstracts prepared for appropriate papers.
NASA Astrophysics Data System (ADS)
Nemeth, Noel N.; Jadaan, Osama M.; Palfi, Tamas; Baker, Eric H.
Brittle materials today are being used, or considered, for a wide variety of high tech applications that operate in harsh environments, including static and rotating turbine parts, thermal protection systems, dental prosthetics, fuel cells, oxygen transport membranes, radomes, and MEMS. Designing brittle material components to sustain repeated load without fracturing while using the minimum amount of material requires the use of a probabilistic design methodology. The NASA CARES/Life (Ceramics Analysis and Reliability Evaluation of Structures/Life) code provides a general-purpose analysis tool that predicts the probability of failure of a ceramic component as a function of its time in service. This capability includes predicting the time-dependent failure probability of ceramic components against catastrophic rupture when subjected to transient thermomechanical loads (including cyclic loads). The developed methodology allows for changes in material response that can occur with temperature or time (i.e. changing fatigue and Weibull parameters with temperature or time). This article provides an overview of the transient reliability methodology and describes how it is extended to account for proof testing. The CARES/Life code has been modified to have the ability to interface with commercially available finite element analysis (FEA) codes executed for transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
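For orientation, the fast-fracture failure probability and the proof-test conditioning used in this kind of methodology can be written compactly. The uniaxial two-parameter Weibull form below is a textbook simplification, not the CARES/Life transient formulation; sigma_0 and m are the characteristic strength and Weibull modulus.

```latex
% Two-parameter Weibull failure probability under an equivalent stress \sigma over volume V
P_f \;=\; 1 - \exp\!\left[-\int_V \left(\frac{\sigma}{\sigma_0}\right)^{m} \, dV\right]

% Conditional (attenuated) failure probability for components that survived a proof test,
% where R(\text{proof} \cap t) is the probability of surviving both the proof test and
% service to time t
P_f(t \mid \text{proof}) \;=\; 1 - \frac{R(\text{proof} \cap t)}{R(\text{proof})}
```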
Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun
2015-11-04
There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance (1H NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
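The reproducibility idea can be sketched with an ordinary nonnegative matrix factorization standing in for MCR-ALS: factorize repeatedly from random starts, pool the resolved component spectra, cluster them, and keep only the clusters that recur across most runs. The 80% occupancy rule, the cosine/average-linkage clustering, and the synthetic data below are assumptions for illustration, not the published cluster-aided MCR-ALS procedure.

```python
import numpy as np
from sklearn.decomposition import NMF
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def reproducible_components(X, n_components, n_runs=20, dist_cut=0.2):
    """Run a nonnegative factorization repeatedly with random starts, pool the
    resolved spectra, and keep the clusters that appear in most runs."""
    spectra, run_id = [], []
    for seed in range(n_runs):
        model = NMF(n_components=n_components, init="random",
                    random_state=seed, max_iter=500)
        model.fit(X)
        for comp in model.components_:          # rows = resolved "spectra"
            spectra.append(comp / (np.linalg.norm(comp) + 1e-12))
            run_id.append(seed)
    spectra, run_id = np.array(spectra), np.array(run_id)
    labels = fcluster(linkage(pdist(spectra, metric="cosine"), method="average"),
                      t=dist_cut, criterion="distance")
    reliable = []
    for lab in np.unique(labels):
        runs_hit = len(set(run_id[labels == lab]))
        if runs_hit >= 0.8 * n_runs:            # recurs in at least 80% of the runs
            reliable.append(spectra[labels == lab].mean(axis=0))
    return np.array(reliable)

# Hypothetical "NMR-like" data: 3 true components mixed into 40 spectra plus noise.
rng = np.random.default_rng(1)
true = np.abs(rng.normal(size=(3, 200)))
X = np.abs(rng.uniform(size=(40, 3)) @ true + 0.01 * rng.normal(size=(40, 200)))
print(reproducible_components(X, n_components=4).shape)
```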
High-power VCSEL systems and applications
NASA Astrophysics Data System (ADS)
Moench, Holger; Conrads, Ralf; Deppe, Carsten; Derra, Guenther; Gronenborn, Stephan; Gu, Xi; Heusler, Gero; Kolb, Johanna; Miller, Michael; Pekarski, Pavel; Pollmann-Retsch, Jens; Pruijmboom, Armand; Weichmann, Ulrich
2015-03-01
Easy system design, compactness and a uniform power distribution define the basic advantages of high power VCSEL systems. Full addressability in space and time add new dimensions for optimization and enable "digital photonic production". Many thermal processes benefit from the improved control i.e. heat is applied exactly where and when it is needed. The compact VCSEL systems can be integrated into most manufacturing equipment, replacing batch processes using large furnaces and reducing energy consumption. This paper will present how recent technological development of high power VCSEL systems will extend efficiency and flexibility of thermal processes and replace not only laser systems, lamps and furnaces but enable new ways of production. High power VCSEL systems are made from many VCSEL chips, each comprising thousands of low power VCSELs. Systems scalable in power from watts to multiple ten kilowatts and with various form factors utilize a common modular building block concept. Designs for reliable high power VCSEL arrays and systems can be developed and tested on each building block level and benefit from the low power density and excellent reliability of the VCSELs. Furthermore advanced assembly concepts aim to reduce the number of individual processes and components and make the whole system even more simple and reliable.
Fine phenotyping of pod and seed traits in Arachis germplasm accessions using digital image analysis
USDA-ARS?s Scientific Manuscript database
Reliable and objective phenotyping of peanut pod and seed traits is important for cultivar selection and genetic mapping of yield components. To develop useful and efficient methods to quantitatively define peanut pod and seed traits, a group of peanut germplasm with high levels of phenotypic varia...
USDA-ARS?s Scientific Manuscript database
The thermal-based Two Source Energy Balance (TSEB) model partitions the water and energy fluxes from vegetation and soil components providing thus the ability for estimating soil evaporation (E) and canopy transpiration (T) separately. However, it is crucial for ET partitioning to retrieve reliable ...
Biomarkers: background, classification and guidelines for applications in nutritional epidemiology
USDA-ARS?s Scientific Manuscript database
One of the main problems in nutritional epidemiology is to assess food intake as well as nutrient/food component intake to a high level of validity and reliability. To help in this process, the need to have good biomarkers that more objectively allow us to evaluate the diet consumed in a more standa...
The 20 GHz solid state transmitter design, impatt diode development and reliability assessment
NASA Technical Reports Server (NTRS)
Picone, S.; Cho, Y.; Asmus, J. R.
1984-01-01
A single drift gallium arsenide (GaAs) Schottky barrier IMPATT diode and related components were developed. The IMPATT diode reliability was assessed. A proof of concept solid state transmitter design and a technology assessment study were performed. The transmitter design utilizes technology which, upon implementation, will demonstrate readiness for development of a POC model within the 1982 time frame and will provide an information base for flight hardware capable of deployment in a 1985 to 1990 demonstrational 30/20 GHz satellite communication system. Life test data for Schottky barrier GaAs diodes and grown junction GaAs diodes are described. The results demonstrate the viability of GaAs IMPATTs as high performance, reliable RF power sources which, based on the recommendation made herein, will surpass device reliability requirements consistent with a ten year spaceborne solid state power amplifier mission.
Added Value of Reliability to a Microgrid: Simulations of Three California Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Lai, Judy; Stadler, Michael
The Distributed Energy Resources Customer Adoption Model is used to estimate the value an Oakland nursing home, a Riverside high school, and a Sunnyvale data center would need to put on higher electricity service reliability for them to adopt a Consortium for Electric Reliability Technology Solutions Microgrid (CM) based on economics alone. A fraction of each building's load is deemed critical based on its mission, and the added cost of CM capability to meet it is added to on-site generation options. The three sites are analyzed with various resources available as microgrid components. Results show that the value placed on higher reliability often does not have to be significant for CM to appear attractive, about 25 $/kW·a and up, but the carbon footprint consequences are mixed because storage is often used to shift cheaper off-peak electricity to use during afternoon hours in competition with the solar sources.
Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments
2016-01-07
Two crucial components of the highly-efficient, general-purpose wave simulator we envision are reliable, low-cost methods for truncating ...
Scaling Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin
2016-01-01
For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focusses on minimizing launch mass, which may be enabling for deep-space missions.
A new method for computing the reliability of consecutive k-out-of-n:F systems
NASA Astrophysics Data System (ADS)
Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak
2016-01-01
Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as those encountered in telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized as logical connections among the components of the systems placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly or circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code based on the proposed method to compute the reliability of linear and circular systems with a large number of components.
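For the linear case, the failure probability can be obtained with a short dynamic program over the length of the current run of failed components. This sketch is a generic textbook recursion, not the paper's method, and the pipeline example is invented; the circular case would additionally require conditioning on the components around the "seam".

```python
import numpy as np

def consecutive_k_out_of_n_F(p, k):
    """Reliability of a linear consecutive k-out-of-n:F system.
    The system fails iff at least k consecutive components fail.
    p : reliabilities p_1..p_n of independent components."""
    state = np.zeros(k)      # state[j] = P(no k-run yet, current failure run length = j)
    state[0] = 1.0
    for pi in p:
        nxt = np.zeros(k)
        nxt[0] = pi * state.sum()           # component works: the failure run resets
        nxt[1:] = (1.0 - pi) * state[:-1]   # component fails: run grows; reaching k means system failure
        state = nxt
    return float(state.sum())

# Hypothetical pipeline with 10 pump stations, each 0.95 reliable; it fails if 2 in a row fail.
print(round(consecutive_k_out_of_n_F([0.95] * 10, k=2), 4))
```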
1980-05-01
... be developed which can withstand high voltages, high current densities, and pass large energies per pulse with high repetition rates and high reliability ... 1) Ceramics - high voltage hold-off; 2) Dielectrics - hold-off recovery after breakdown; 3) Metals - low erosion rates, higher j and e saturation; 4) Degradation ...
A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current
NASA Astrophysics Data System (ADS)
Kitayama, Masashi
Recently, customers who need electricity of higher quality have been installing co-generation facilities. They can avoid voltage sags and other distribution-system-related disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize these systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides the ability to detect discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components arising from the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
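A minimal sketch of the discontinuity-detection idea using the PyWavelets package: take level-1 detail coefficients of the sampled current and flag coefficients that exceed a robust threshold. The db4 wavelet, the MAD-based threshold, and the synthetic waveform are assumptions for illustration, not the author's parameter choices.

```python
import numpy as np
import pywt

def detect_waveform_discontinuities(current, wavelet="db4", k=5.0):
    """Flag samples where the level-1 DWT detail coefficients spike, a simple
    stand-in for a wavelet-thresholding discontinuity detector."""
    approx, detail = pywt.dwt(current, wavelet)
    sigma = np.median(np.abs(detail)) / 0.6745      # robust noise estimate (MAD)
    spikes = np.flatnonzero(np.abs(detail) > k * sigma)
    return 2 * spikes  # map detail-coefficient indices back to (approximate) sample indices

# Hypothetical current waveform: a 50 Hz sine with an abrupt, distorted burst (inrush-like).
fs, f = 5000, 50
t = np.arange(0, 0.2, 1 / fs)
cur = np.sin(2 * np.pi * f * t)
cur[500:900] = np.clip(3 * np.sin(2 * np.pi * f * t[500:900]), -1, None)
print(detect_waveform_discontinuities(cur)[:5])
```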
Psychometric Evaluation of Kingston Caregiver Stress Scale.
Sadak, Tatiana; Korpak, Anna; Wright, Jacob D; Lee, Mee Kyung; Noel, Margaret; Buckwalter, Kathleen; Borson, Soo
2017-01-01
Standardized measurement of caregiver stress is a component of Medicare's new health care benefit supporting care planning for people with dementia. In this article we identify existing measures of caregiver stress, strain, and burden and propose specific criteria for choosing tools that may be suitable for wide use in primary care settings. We reviewed 22 measures and identified one, the Kingston Caregiver Stress Scale (KCSS), which met all the proposed criteria but had not been studied in a U.S. sample. We conducted a psychometric evaluation of the KCSS to determine its potential usefulness as a care planning tool with a U.S. sample. We examined the internal consistency, test-retest reliability, component structure, and relationship to depression and anxiety in 227 dementia caregivers at two U.S. sites. The KCSS has high internal consistency and test-retest reliability, a strong factor structure, and moderate to high correlations with caregiver depression and anxiety. The KCSS is a good candidate for use as part of comprehensive care planning for people with dementia and their caregivers. Routine assessment of caregiver stress in clinical care may facilitate timely intervention and potentially improve both patient and caregiver outcomes.
Application of IUS equipment and experience to orbit transfer vehicles of the 90's
NASA Astrophysics Data System (ADS)
Bangsund, E.; Keeney, J.; Cowgill, E.
1985-10-01
This paper relates experiences with the IUS program and the application of that experience to Future Orbit Transfer Vehicles. More specifically it includes the implementation of the U.S. Air Force Space Division high reliability parts standard (SMASO STD 73-2C) and the component/system test standard (MIL-STD-1540A). Test results from the parts and component level testing and the resulting system level test program for fourteen IUS flight vehicles are discussed. The IUS program has had the highest compliance with these standards and thus offers a benchmark of experience for future programs demanding extreme reliability. In summary, application of the stringent parts standard has resulted in fewer failures during testing and the stringent test standard has eliminated design problems in the hardware. Both have been expensive in costs and schedules, and should be applied with flexibility.
Mansberger, Steven L; Sheppler, Christina R; McClure, Tina M; Vanalstine, Cory L; Swanson, Ingrid L; Stoumbos, Zoey; Lambert, William E
2013-09-01
To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R²) of .40. Test-retest reliability was 90%. The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.
Dettlaff, Alan J; Christopher Graham, J; Holzman, Jesse; Baumann, Donald J; Fluke, John D
2015-11-01
When children come to the attention of the child welfare system, they become involved in a decision-making process in which decisions are made that have a significant effect on their future and well-being. The decision to remove children from their families is particularly complex; yet surprisingly little is understood about this decision-making process. This paper presents the results of a study to develop an instrument to explore, at the caseworker level, the context of the removal decision, with the objective of understanding the influence of the individual and organizational factors on this decision, drawing from the Decision Making Ecology as the underlying rationale for obtaining the measures. The instrument was based on the development of decision-making scales used in prior decision-making studies and administered to child protection caseworkers in several states. Analyses included reliability analyses, principal components analyses, and inter-correlations among the resulting scales. For one scale regarding removal decisions, a principal components analysis resulted in the extraction of two components, jointly identified as caseworkers' decision-making orientation, described as (1) an internal reference to decision-making and (2) an external reference to decision-making. Reliability analyses demonstrated acceptable to high internal consistency for 9 of the 11 scales. Full details of the reliability analyses, principal components analyses, and inter-correlations among the seven scales are discussed, along with implications for practice and the utility of this instrument to support the understanding of decision-making in child welfare. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Single-Block TRL Test Fixture for the Cryogenic Characterization of Planar Microwave Components
NASA Technical Reports Server (NTRS)
Mejia, M.; Creason, A. S.; Toncich, S. S.; Ebihara, B. T.; Miranda, F. A.
1996-01-01
The High-Temperature-Superconductivity (HTS) group of the RF Technology Branch, Space Electronics Division, is actively involved in the fabrication and cryogenic characterization of planar microwave components for space applications. This process requires fast, reliable, and accurate measurement techniques not readily available. A new calibration standard/test fixture that enhances the integrity and reliability of the component characterization process has been developed. The fixture consists of 50 Ω thru, reflect, delay, and device under test gold lines etched onto a 254 microns (0.010 in) thick alumina substrate. The Thru-Reflect-Line (TRL) fixture was tested at room temperature using a 30 Ω, 7.62 mm (300 mil) long, gold line as a known standard. Good agreement between the experimental data and the data modelled using Sonnet's em software was obtained for both the return (S11) and insertion (S21) losses. A gold two-pole bandpass filter with a 7.3 GHz center frequency was used as our Device Under Test (DUT), and the results compared with those obtained using a Short-Open-Load-Thru (SOLT) calibration technique.
Gear systems for advanced turboprops
NASA Technical Reports Server (NTRS)
Wagner, Douglas A.
1987-01-01
A new generation of transport aircraft will be powered by efficient, advanced turboprop propulsion systems. Systems that develop 5,000 to 15,000 horsepower have been studied. Reduction gearing for these advanced propulsion systems is discussed. Allison Gas Turbine Division's experience with the 5,000 horsepower reduction gearing for the T56 engine is reviewed and the impact of that experience on advanced gear systems is considered. The reliability needs for component design and development are also considered. Allison's experience and their research serve as a basis on which to characterize future gear systems that emphasize low cost and high reliability.
Fast gas spectroscopy using pulsed quantum cascade lasers
NASA Astrophysics Data System (ADS)
Beyer, T.; Braun, M.; Lambrecht, A.
2003-03-01
Laser spectroscopy has found many industrial applications, e.g., control of automotive exhaust and process monitoring. The mid-infrared region is of special interest because it has stronger absorption lines compared to the near infrared (NIR). However, in the NIR, high quality, reliable laser sources, detectors, and passive optical components are available. A quantum cascade laser could change this situation if its fundamental advantages can be exploited in compact and reliable systems. It will be shown that, using pulsed lasers and available fast detectors, lower residual sensitivity levels than in corresponding NIR systems can be achieved. The stability is sufficient for industrial applications.
Vimalchand, Pannalal; Liu, Guohai; Peng, Wan Wang
2015-02-24
The improvements proposed in this invention provide a reliable apparatus and method to gasify low rank coals in a class of pressurized circulating fluidized bed reactors termed "transport gasifier." The embodiments overcome a number of operability and reliability problems with existing gasifiers. The systems and methods address issues related to distribution of the gasification agent without the use of internals, management of heat release to avoid any agglomeration and clinker formation, specific design of bends to withstand the highly erosive environment due to high solid particle circulation rates, design of a standpipe cyclone to withstand the high temperature gasification environment, compact design of a seal-leg that can handle high mass solids flux, design of nozzles that eliminate plugging, uniform aeration of the large diameter standpipe, oxidant injection at the cyclone exits to effectively modulate gasifier exit temperature, and reduction in the overall height of the gasifier with a modified non-mechanical valve.
Advanced Turbine Technology Applications Project (ATTAP)
NASA Technical Reports Server (NTRS)
1994-01-01
Reports technical effort by AlliedSignal Engines in sixth year of DOE/NASA funded project. Topics include: gas turbine engine design modifications of production APU to incorporate ceramic components; fabrication and processing of silicon nitride blades and nozzles; component and engine testing; and refinement and development of critical ceramics technologies, including: hot corrosion testing and environmental life predictive model; advanced NDE methods for internal flaws in ceramic components; and improved carbon pulverization modeling during impact. ATTAP project is oriented toward developing high-risk technology of ceramic structural component design and fabrication to carry forward to commercial production by 'bridging the gap' between structural ceramics in the laboratory and near-term commercial heat engine application. Current ATTAP project goal is to support accelerated commercialization of advanced, high-temperature engines for hybrid vehicles and other applications. Project objectives are to provide essential and substantial early field experience demonstrating ceramic component reliability and durability in modified, available, gas turbine engine applications; and to scale-up and improve manufacturing processes of ceramic turbine engine components and demonstrate application of these processes in the production environment.
Mass and Reliability Source (MaRS) Database
NASA Technical Reports Server (NTRS)
Valdenegro, Wladimir
2017-01-01
The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured part lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.
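To illustrate the kind of consolidation MaRS performs, the sketch below joins hypothetical operating-hours, failure, and indentured-parts tables at the component level; the table and column names are invented stand-ins, since the abstract does not expose the MADS/VMDB/ISS PART schemas.

```python
import pandas as pd

# Hypothetical stand-ins for the MaRS source tables; the real MADS, VMDB, and
# ISS PART schemas are not shown in the abstract.
operating_hours = pd.DataFrame({
    "component_id": ["C2.1", "C2.2"],
    "hours": [41000.0, 38000.0],
})
failures = pd.DataFrame({
    "component_id": ["C2.1", "C2.1", "C2.2"],
    "failure_date": ["2015-03-01", "2016-07-12", "2017-01-30"],
})
parts_list = pd.DataFrame({
    "component_id": ["C2.1", "C2.2"],
    "oru": ["ORU-A", "ORU-A"],
    "mass_kg": [1.8, 0.6],
})

# Roll failures up to the component level, then join hours and mass so a crude
# failure rate can be compared across components of the same ORU.
counts = failures.groupby("component_id").size().rename("n_failures")
mars = parts_list.merge(operating_hours, on="component_id").join(counts, on="component_id")
mars["failures_per_khr"] = 1000.0 * mars["n_failures"] / mars["hours"]
print(mars)
```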
High efficiency pump combiner fabricated by CO2 laser splicing system
NASA Astrophysics Data System (ADS)
Zhu, Gongwen
2018-02-01
High power combiners are of great interest for high power fiber lasers and fiber amplifiers. With the advent of the CO2 laser splicing system, power combiners can be made with low manufacturing cost, low loss, high reliability, and high performance. Traditionally, fiber optic components are fabricated with a flame torch, electrode arc discharge, or filament heater. However, these methods can easily leave contamination on the fiber, resulting in inconsistent performance or even catching fire in high power operations. The electrodes or filaments also degrade rapidly during the combiner manufacturing process. This rapid degradation leads to extensive maintenance, making these methods impractical or uneconomical for volume production. By contrast, the CO2 laser is the cleanest heating source and provides a reliable and repeatable process for fabricating fiber optic components, including high power combiners. In this paper we present an all-fiber, end-pumped 7x1 pump combiner fabricated by a CO2 laser splicing system. The input pump fibers are 105/125 (core/clad diameters in μm) fibers with a core NA of 0.22. The output fiber is a 300/320 fiber with a core NA of 0.22. The average efficiency is 99.4%, with all 7 ports above 99%. The process is contamination-free and highly repeatable. To the best of our knowledge, this is the first report in the literature of power combiners fabricated by a CO2 laser splicing system. It also has the highest reported efficiency of its kind.
A complex network-based importance measure for mechatronics systems
NASA Astrophysics Data System (ADS)
Wang, Yanhui; Bi, Lifeng; Lin, Shuai; Li, Man; Shi, Hao
2017-01-01
In view of the negative impact of functional dependency, this paper proposes an alternative importance measure, called Improved-PageRank (IPR), for measuring the importance of components in mechatronic systems. IPR is a meaningful extension of the centrality measures used in complex networks, which incorporates the usage reliability of components and the functional dependencies between components to make the importance measure more useful. Our work makes two important contributions. First, this paper integrates the literature on mechatronic architecture with complex network theory to define a component network. Second, based on the notion of the component network, IPR is applied to identify important components. In addition, the IPR importance measure, together with an algorithm for stochastic ordering of components that accounts for the time-varying nature of component usage reliability and functional dependency, is illustrated with a component network of a bogie system consisting of 27 components.
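As a rough illustration of a PageRank-style importance score on a component dependency network, the sketch below folds component usage reliability into the teleport (personalization) vector; this is a generic construction for orientation only, not the authors' exact IPR formulation, and the graph and values are invented.

```python
import networkx as nx

# Toy directed component network: an edge u -> v means component u's function
# depends on v; edge weights stand in for dependency strength (illustrative).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("frame", "axle", 0.9),
    ("axle", "bearing", 0.8),
    ("axle", "gearbox", 0.6),
    ("gearbox", "bearing", 0.7),
    ("brake", "axle", 0.5),
])

# Illustrative usage reliabilities; here, less reliable components receive more
# of the random-jump probability, one generic way to fold reliability into a
# PageRank-style score (not the paper's IPR definition).
reliability = {"frame": 0.99, "axle": 0.95, "bearing": 0.90, "gearbox": 0.92, "brake": 0.97}
unrel = {c: 1.0 - r for c, r in reliability.items()}
total = sum(unrel.values())
personalization = {c: u / total for c, u in unrel.items()}

scores = nx.pagerank(G, alpha=0.85, personalization=personalization, weight="weight")
for comp, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{comp:8s} importance = {s:.3f}")
```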
Validity and reliability of the Utrecht Work Engagement Scale-Student Version in Sri Lanka.
Wickramasinghe, Nuwan Darshana; Dissanayake, Devani Sakunthala; Abeywardena, Gihan Sajiwa
2018-05-04
The present study was aimed at assessing the validity and the reliability of the Sinhala version of the Utrecht Work Engagement Scale-Student Version (UWES-S) among collegiate cycle students in Sri Lanka. The 17-item UWES-S was translated into Sinhala and the judgmental validity was assessed by a multi-disciplinary panel of experts. Construct validity of the UWES-S was appraised by using multi-trait scaling analysis and exploratory factor analysis (EFA) on data obtained from a sample of 194 grade thirteen students in the Kurunegala district, Sri Lanka. Reliability of the UWES-S was assessed by using internal consistency and test-retest reliability. Except for item 13, all other items showed good psychometric properties in judgmental validity, item-convergent validity, and item-discriminant validity. EFA using principal component analysis with Oblimin rotation suggested a three-factor solution (including vigor, dedication, and absorption subscales) explaining 65.4% of the total variance for the 16-item UWES-S (with item 13 deleted). All three subscales showed high internal consistency, with Cronbach's α coefficient values of 0.867, 0.819, and 0.903, and test-retest reliability was high (p < 0.001). Hence, the Sinhala version of the 16-item UWES-S is a valid and reliable instrument to assess work engagement among collegiate cycle students in Sri Lanka.
Creating High Reliability in Health Care Organizations
Pronovost, Peter J; Berenholtz, Sean M; Goeschel, Christine A; Needham, Dale M; Sexton, J Bryan; Thompson, David A; Lubomski, Lisa H; Marsteller, Jill A; Makary, Martin A; Hunt, Elizabeth
2006-01-01
Objective The objective of this paper was to present a comprehensive approach to help health care organizations reliably deliver effective interventions. Context Reliability in healthcare translates into using valid rate-based measures. Yet high reliability organizations have proven that the context in which care is delivered, called organizational culture, also has important influences on patient safety. Model for Improvement Our model to improve reliability, which also includes interventions to improve culture, focuses on valid rate-based measures. This model includes (1) identifying evidence-based interventions that improve the outcome, (2) selecting interventions with the most impact on outcomes and converting to behaviors, (3) developing measures to evaluate reliability, (4) measuring baseline performance, and (5) ensuring patients receive the evidence-based interventions. The comprehensive unit-based safety program (CUSP) is used to improve culture and guide organizations in learning from mistakes that are important, but cannot be measured as rates. Conclusions We present how this model was used in over 100 intensive care units in Michigan to improve culture and eliminate catheter-related blood stream infections—both were accomplished. Our model differs from existing models in that it incorporates efforts to improve a vital component for system redesign—culture, it targets 3 important groups—senior leaders, team leaders, and front line staff, and facilitates change management—engage, educate, execute, and evaluate for planned interventions. PMID:16898981
An overview of fatigue failures at the Rocky Flats Wind System Test Center
NASA Technical Reports Server (NTRS)
Waldon, C. A.
1981-01-01
Potential small wind energy conversion system (SWECS) design problems were identified to improve product quality and reliability. Mass-produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.
Advancement of High Power Quasi-CW Laser Diode Arrays For Space-based Laser Instruments
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin; Meadows, Byron L.; Baker, Nathaniel R.; Baggott, Renee S.; Singh, Upendra N.; Kavaya, Michael J.
2004-01-01
Space-based laser and lidar instruments play an important role in NASA's plans for meeting its objectives in both the Earth Science and Space Exploration areas. Almost all the lidar instrument concepts being considered by NASA scientists utilize moderate to high power diode-pumped solid state lasers as their transmitter source. Perhaps the most critical component of any solid state laser system is its pump laser diode array, which essentially dictates instrument efficiency, reliability, and lifetime. For this reason, premature failures and rapid degradation of high power laser diode arrays that have been experienced by laser system designers are of major concern to NASA. This work addresses these reliability and lifetime issues by attempting to eliminate the causes of failures and by developing methods for screening laser diode arrays and qualifying them for operation in space.
Column Grid Array Rework for High Reliability
NASA Technical Reports Server (NTRS)
Mehta, Atul C.; Bodie, Charles C.
2008-01-01
Due to requirements for reduced size and weight, use of grid array packages in space applications has become commonplace. To meet the requirement of high reliability and a high number of I/Os, ceramic column grid array (CCGA) packages were selected for major electronic components used in the next Mars Rover mission (specifically, high-density Field Programmable Gate Arrays). The probability of removal and replacement of these devices on the actual flight printed wiring board assemblies is deemed to be very high because of last-minute discoveries in final test that dictate changes in the firmware. The questions and challenges presented to the manufacturing organizations engaged in the production of high reliability electronic assemblies are: "Is the reliability of the PWBA adversely affected by rework (removal and replacement) of the CGA package?" and "How many times can we rework the same board without destroying a pad or degrading the lifetime of the assembly?" To answer these questions, the most complex printed wiring board assembly used by the project was chosen as the test vehicle, the PWB was modified to provide a daisy chain pattern, and a number of bare PWBs were acquired to this modified design. Non-functional 624-pin CGA packages with internal daisy chains matching the pattern on the PWB were procured. The combination of the modified PWB and the daisy chained packages enables continuity measurements of every soldered contact during subsequent testing and thermal cycling. Several test vehicle boards were assembled, reworked, and then thermal cycled to assess the reliability of the solder joints and board material, including pads and traces near the CGA. The details of the rework process and the results of thermal cycling are presented in this paper.
Ceramic component reliability with the restructured NASA/CARES computer program
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.
1992-01-01
The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program for the statistical fast fracture reliability of monolithic ceramic components is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
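For orientation, the volume-flaw fast-fracture probability used in Weibull-based codes of this kind is commonly written as below; this is the standard two-parameter form shown for illustration, not the exact CARES implementation (which also supports PIA, normal stress averaging, and Batdorf multiaxial options).

```latex
P_f = 1 - \exp\!\left[ -\int_V \left( \frac{\sigma(\mathbf{x})}{\sigma_{0V}} \right)^{m_V} \mathrm{d}V \right]
```

where σ(x) is the local tensile stress field taken from the finite element solution, m_V is the Weibull modulus, and σ_0V is the scale parameter; in the subelement scheme described above, the integral is evaluated as a sum over the Gaussian integration points.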
Transit ridership, reliability, and retention.
DOT National Transportation Integrated Search
2008-10-01
This project explores two major components that affect transit ridership: travel time reliability and rider retention. It has been recognized that transit travel time reliability may have a significant impact on attractiveness of transit to many ...
Yuzhnoye SDO Technologies, Proposed for Using in International Programs on Moon Exploration
NASA Astrophysics Data System (ADS)
Konyukhov, S.; Degtyarev, A.; Kushnarev, A.; Berdnyk, A.; Lyzikova, N.
Yuzhnoye SDO possesses many technologies and has gained extensive experience in the development of space transportation systems which can be used in international programs on Moon exploration. (1) A liquid-propellant booster made on the basis of the first stage of the Zenit LV possesses high specific parameters and is convenient in operation, together with high reliability, which has been confirmed in two launches of the Energia LV and in more than 50 launches of the Zenit LV. Ecologically clean fuel components minimize negative influence on the environment. Because of the identity of the booster construction with the regular first stage of the Zenit LV, it retains the high reliability of the latter and can be developed with minimum costs and in short terms. It is proposed to use such a booster as the first stage in heavy and super-heavy launch vehicles. Thanks to the decisions which are put into its construction, it could be part of an LV for manned launches and has real potential for multiple usage. (2) The rocket module Block E of the Soviet lunar vehicle is designed for the astronaut's soft landing on the Moon surface and further return to circumlunar orbit. Block E consists of the major and backup main engines, fuel tanks with support facilities for the entirety and heat conditions of the fuel components, as well as interfaces with the lunar vehicle cabin and landing device. The high reliability of Block E is proved by a great volume of ground testing and successful testing in space during three launches to near-Earth orbit. Block E even now can be used for
High reliability megawatt transformer/rectifier
NASA Technical Reports Server (NTRS)
Zwass, Samuel; Ashe, Harry; Peters, John W.
1991-01-01
The goal of the two-phase program is to develop the technology and to design and fabricate ultralightweight, high reliability DC to DC converters for space power applications. The converters will operate from a 5000 V dc source and deliver 1 MW of power at 100 kV dc. The power weight density goal is 0.1 kg/kW. The cycle-to-cycle voltage stability goal is ±1 percent RMS. The converter is to operate at an ambient temperature of -40 C with 16-minute power pulses and one-hour off time. The uniqueness of the design in Phase 1 resided in the dc switching array, which operates the converter at 20 kHz using Hollotron plasma switches, along with a specially designed low-loss, low-leakage-inductance, lightweight high-voltage transformer. This approach considerably reduced the number of components in the converter, thereby increasing the system reliability. To achieve an optimum transformer for this application, the design uses four 25 kV secondary windings to produce the 100 kV dc output, thus reducing the transformer leakage inductance and the ac voltage stresses. A specially designed insulation system improves the high-voltage dielectric withstanding ability and reduces the insulation path thickness, thereby reducing the component weight. Tradeoff studies and tests conducted on scaled-down model circuits and representative coil insulation paths have verified the calculated transformer wave shape parameters and the insulation system safety. In Phase 1 of the program, a converter design approach was developed and a preliminary transformer design was completed. A fault control circuit was designed and a thermal profile of the converter was also developed.
Discrete component bonding and thick film materials study
NASA Technical Reports Server (NTRS)
Kinser, D. L.
1975-01-01
The results of an investigation of discrete component bonding reliability and a fundamental study of new thick film resistor materials are summarized. The component bonding study examined several types of solder bonded components, with some processing variable studies to determine their influence upon bonding reliability. The bonding reliability was assessed using the thermal cycle: 15 minutes at room temperature, 15 minutes at +125 C, 15 minutes at room temperature, and 15 minutes at -55 C. The thick film resistor materials examined were of the transition metal oxide-phosphate glass family with several elemental metal additions of the same transition metal. These studies were conducted by preparing a paste of the subject composition, then printing, drying, and firing using both air and reducing atmospheres. The resulting resistors were examined for adherence, resistance, thermal coefficient of resistance, and voltage coefficient of resistance.
Use of ceramics in point-focus solar receivers
NASA Technical Reports Server (NTRS)
Smoak, R. H.; Kudirka, A. A.
1981-01-01
One of the research and development efforts in the Solar Thermal Energy Systems Project at the Jet Propulsion Laboratory has been focused on application of ceramic components for advanced point-focus solar receivers. The impetus for this effort is a need for high efficiency, low cost solar receivers which operate in a temperature regime where use of metal components is impractical. The current status of the work on evaluation of ceramic components at JPL and elsewhere is outlined and areas where lack of knowledge is currently slowing application of ceramics are discussed. Future developments of ceramic processing technology and reliability assurance methodology should open up applications for the point-focus solar concentrator system in fuels and chemicals production, in thermochemical energy transport and storage, in detoxification of hazardous materials and in high temperature process heat as well as for electric power generation.
State-of-the-Art for Small Satellite Propulsion Systems
NASA Technical Reports Server (NTRS)
Parker, Khary I.
2016-01-01
SmallSats provide low-cost access to space, with an increasing need for propulsion systems. NASA and other organizations will be using SmallSats that require propulsion systems to (a) conduct high quality near- and far-reaching on-orbit research and (b) perform technology demonstrations. There is an increasing call for high-reliability, high-performing SmallSat components. Many SmallSat propulsion technologies are currently under development, with systems at various levels of maturity and a wide variety of systems for many mission applications.
Life and reliability models for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Knorr, R. J.; Coy, J. J.
1982-01-01
Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission, the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
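As a sketch of the kind of model described, the relations below show a two-parameter Weibull component survival, a strict-series system assumption, and a power-law load-life relation; these standard forms are given for orientation only and are not claimed to be the report's exact formulation.

```latex
R_i(L) = \exp\!\left[ -\left( \frac{L}{\theta_i} \right)^{\beta_i} \right], \qquad
R_{\mathrm{sys}}(L) = \prod_i R_i(L), \qquad
\frac{L_1}{L_2} = \left( \frac{P_2}{P_1} \right)^{p}
```

where L is life, θ_i and β_i are the component Weibull scale and shape parameters, P is the transmitted load, and p is the load-life exponent developed from the component capacities.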
Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.
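The power-law option mentioned above is conventionally written as a crack-velocity relation; the form below is the standard one and is given for orientation, not as the program's exact internal formulation.

```latex
\frac{\mathrm{d}a}{\mathrm{d}t} = A \left( \frac{K_{\mathrm{I}}}{K_{\mathrm{Ic}}} \right)^{N}
```

where a is the crack length, K_I the mode-I stress intensity factor, K_Ic the fracture toughness, and A and N the subcritical crack growth (fatigue) parameters estimated from the static, dynamic, or cyclic rupture data.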
Reliability and availability analysis of a 10 kW@20 K helium refrigerator
NASA Astrophysics Data System (ADS)
Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.
2017-02-01
A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected, and the failure rate distributions are fitted using the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed, and the critical components with higher risks are pointed out. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, turbine, and vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance, and operation are proposed to decrease the major and medium risks.
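For orientation, the sketch below shows the elementary series-system availability calculation that tools such as BlockSim automate; the MTBF/MTTR values are illustrative placeholders, not the refrigerator's fitted data.

```python
# Steady-state availability of a series system from per-component MTBF/MTTR.
# The numbers below are illustrative placeholders, not the refrigerator's data.
components = {
    "compressor":  {"mtbf_h": 20000.0, "mttr_h": 48.0},
    "turbine":     {"mtbf_h": 30000.0, "mttr_h": 72.0},
    "vacuum_pump": {"mtbf_h": 15000.0, "mttr_h": 24.0},
}

system_availability = 1.0
for name, c in components.items():
    a = c["mtbf_h"] / (c["mtbf_h"] + c["mttr_h"])   # steady-state availability
    system_availability *= a                        # series logic: all units must be up
    print(f"{name:12s} A = {a:.4f}")

print(f"series-system A = {system_availability:.4f}")
```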
Reliability Modeling of Microelectromechanical Systems Using Neural Networks
NASA Technical Reports Server (NTRS)
Perera, J. Sebastian
2000-01-01
Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a new proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
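A minimal sketch of the approach described, using a small multilayer-perceptron regressor on synthetic attribute data; the attribute encoding, network size, and data are all invented here, since the microengine dataset itself is not given in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for MEMS attribute data (e.g. encoded process, geometry,
# environment); the real microengine dataset is not public in this abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 5 attributes per device
cycles_to_failure = np.exp(10 + X @ [0.5, -0.3, 0.2, 0.1, -0.4]
                           + rng.normal(scale=0.2, size=200))
y = np.log(cycles_to_failure)                       # regress on log-life

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(X_train, y_train)                         # train on the majority split
print(f"validation R^2 = {model.score(X_val, y_val):.2f}")  # check on held-out data
```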
Perez, Concepcion; Galvez, Rafael; Huelbes, Silvia; Insausti, Joaquin; Bouhassira, Didier; Diaz, Silvia; Rejas, Javier
2007-01-01
Background This study assesses the validity and reliability of the Spanish version of the DN4 questionnaire as a tool for the differential diagnosis of pain syndromes associated with a neuropathic (NP) or somatic component (non-neuropathic pain, NNP). Methods A study was conducted consisting of two phases: cultural adaptation into the Spanish language by means of conceptual equivalence, including forward and backward translations in duplicate and cognitive debriefing, and testing of psychometric properties in patients with NP (peripheral, central and mixed) and NNP. The analysis of psychometric properties included reliability (internal consistency, inter-rater agreement and test-retest reliability) and validity (ROC curve analysis, agreement with the reference diagnosis and determination of sensitivity, specificity, and positive and negative predictive values in different subsamples according to type of NP). Results A sample of 164 subjects (99 women, 60.4%; age: 60.4 ± 16.0 years), 94 (57.3%) with NP (36 with peripheral, 32 with central, and 26 with mixed pain) and 70 with NNP was enrolled. The questionnaire was reliable [Cronbach's alpha coefficient: 0.71, inter-rater agreement coefficient: 0.80 (0.71–0.89), and test-retest intra-class correlation coefficient: 0.95 (0.92–0.97)] and valid for a cut-off value ≥ 4 points, which was the best value to discriminate between NP and NNP subjects. Discussion This study, representing the first validation of the DN4 questionnaire in a language other than the original, not only supported its high discriminatory value for the identification of neuropathic pain, but also provided supplemental psychometric validation (i.e. test-retest reliability, influence of educational level and pain intensity) and showed its validity in mixed pain syndromes. PMID:18053212
[Validation of the German version of the Singing Voice Handicap Index].
Lorenz, A; Kleber, B; Büttner, M; Fuchs, M; Mürbe, D; Richter, B; Sandel, M; Nawka, T
2013-08-01
The Singing Voice Handicap Index (SVHI) was developed in the United States for the self-assessment of patients with singing problems. It has been translated into German and its reliability and validity have been assessed. In total, 54 (35 female, 19 male) dysphonic singers and 130 (74 female, 56 male) non-dysphonic professional singers were included in the study. Reliability rested on high test-retest reliability (r = 0.960, p ≤ 0.001, Pearson correlation) and a Cronbach's α of 0.975. A principal component analysis using the Varimax method and the results of the scree plot suggest that the SVHI should be scored as a single scale. Validity rested on a highly significant correlation between the severity of the self-rated voice impairment by the patient and the total SVHI score. Dysphonic singers have significantly higher SVHI scores than healthy singers. The SVHI is thus suited for implementation as a diagnostic tool in German-speaking countries.
Reliability and Maintainability Data for Lead Lithium Cooling Systems
Cadwallader, Lee
2016-11-16
This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.
Component Structure, Reliability, and Stability of Lawrence's Self-Esteem Questionnaire (LAWSEQ)
ERIC Educational Resources Information Center
Rae, Gordon; Dalto, Georgia; Loughrey, Dolores; Woods, Caroline
2011-01-01
Lawrence's Self-Esteem Questionnaire (LAWSEQ) was administered to 120 Year 1 pupils in six schools in Belfast, Northern Ireland. A principal components analysis indicated that the scale items were unidimensional and that the reliability of the scores, as estimated by Cronbach's alpha, was satisfactory (α = 0.73). There were no differences…
1984-03-01
An engineering initiative to develop an orderly plan and procedure to assure that the USAF acquires reliable, high quality, supportable avionics with a higher avail... susceptibility tests (radiated and conducted), and emission of radio frequency energy tests. Other electrical stresses can include over/under voltage... joints, poor welds, and dielectric defects. Also, instruments with components unable to endure very high temperatures can be safely tested.
A unique high heat flux facility for testing hypersonic engine components
NASA Technical Reports Server (NTRS)
Melis, Matthew E.; Gladden, Herbert J.
1990-01-01
This paper describes the Hot Gas Facility, a unique, reliable, and cost-effective high-heat-flux facility for testing hypersonic engine components developed at the NASA Lewis Research Center. The Hot Gas Facility is capable of providing heat fluxes ranging from 200 Btu/sq ft per sec on flat surfaces up to 8000 Btu/sq ft per sec at a leading edge stagnation point. The usefulness of the Hot Gas Facility for the NASP community was demonstrated by testing hydrogen-cooled structures over a range of temperatures and pressures. Ranges of the Reynolds numbers, Prandtl numbers, enthalpy, and heat fluxes similar to those expected during hypersonic flights were achieved.
Stirling engine - Approach for long-term durability assessment
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Bartolotta, Paul A.; Halford, Gary R.; Freed, Alan D.
1992-01-01
The approach employed by NASA Lewis for the long-term durability assessment of the Stirling engine hot-section components is summarized. The approach consists of: preliminary structural assessment; development of a viscoplastic constitutive model to accurately determine material behavior under high-temperature thermomechanical loads; an experimental program to characterize material constants for the viscoplastic constitutive model; finite-element thermal analysis and structural analysis using a viscoplastic constitutive model to obtain stress/strain/temperature at the critical location of the hot-section components for life assessment; and development of a life prediction model applicable for long-term durability assessment at high temperatures. The approach should aid in the provision of long-term structural durability and reliability of Stirling engines.
Digital Processing Of Young's Fringes In Speckle Photography
NASA Astrophysics Data System (ADS)
Chen, D. J.; Chiang, F. P.
1989-01-01
A new technique for fully automatic diffraction fringe measurement in point-wise speckle photograph analysis is presented in this paper. The fringe orientation and spacing are initially estimated with the help of a 1-D FFT. A 2-D convolution filter is then applied to enhance the estimated image. A high signal-to-noise ratio (SNR) fringe pattern is achieved, which makes precise determination of the displacement components feasible. The halo effect is also optimally eliminated in a new way. The computation time compares favorably with those of the 2-D autocorrelation method and the iterative 2-D FFT method. High reliability and accurate determination of the displacement components are achieved over a wide range of fringe densities.
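As an illustration of the initial 1-D FFT estimation step, the sketch below recovers the spacing of a synthetic fringe profile from the dominant spectral peak; the profile and parameters are invented for demonstration, and the 2-D convolution enhancement stage is not reproduced.

```python
import numpy as np

# Synthetic 1-D intensity profile across a Young's fringe pattern (illustrative).
n, pitch_px = 512, 32.0                      # samples and true fringe spacing in pixels
x = np.arange(n)
intensity = (1.0 + np.cos(2 * np.pi * x / pitch_px)
             + 0.2 * np.random.default_rng(1).normal(size=n))

# Coarse spacing estimate from the dominant non-DC peak of the 1-D FFT,
# in the spirit of the initial estimation step described above.
spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(n, d=1.0)            # spatial frequency in cycles per pixel
k = np.argmax(spectrum[1:]) + 1              # skip the DC bin
print(f"estimated fringe spacing ~ {1.0 / freqs[k]:.1f} px (true {pitch_px} px)")
```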
NASA Astrophysics Data System (ADS)
Haghani Hassan Abadi, Reza; Fakhari, Abbas; Rahimian, Mohammad Hassan
2018-03-01
In this paper, we propose a multiphase lattice Boltzmann model for the numerical simulation of ternary flows at high density and viscosity ratios that is free from spurious velocities. The proposed scheme, which is based on phase-field modeling, employs the Cahn-Hilliard theory to track the interfaces among three different fluid components. Several benchmarks, such as the spreading of a liquid lens, binary droplets, and the head-on collision of two droplets in binary- and ternary-fluid systems, are conducted to assess the reliability and accuracy of the model. The proposed model can successfully simulate both partial and total spreading while reducing the parasitic currents to machine precision.
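For reference, a generic conservative phase-field (Cahn-Hilliard) transport equation for each order parameter has the form below; the paper's ternary formulation will differ in detail (e.g., in the free-energy functional and mobilities), so this is only an orientation sketch.

```latex
\frac{\partial \phi_i}{\partial t} + \nabla \cdot (\phi_i \mathbf{u})
  = \nabla \cdot \left( M_i \, \nabla \mu_i \right), \qquad
\mu_i = \frac{\delta \mathcal{F}}{\delta \phi_i}
```

where φ_i is the order parameter of fluid i, u the velocity field, M_i a mobility, and μ_i the chemical potential obtained from the free-energy functional F.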
Emery, John M.; Field, Richard V.; Foulk, James W.; ...
2015-05-26
Laser welds are prevalent in complex engineering systems and they frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a surrogate model based on stochastic reduced-order models is proposed to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking because, owing to the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation, with rather remarkable results.
[Balanced scorecard for performance measurement of a nursing organization in a Korean hospital].
Hong, Yoonmi; Hwang, Kyung Ja; Kim, Mi Ja; Park, Chang Gi
2008-02-01
The purpose of this study was to develop a balanced scorecard (BSC) for performance measurement of a Korean hospital nursing organization and to evaluate the validity and reliability of the performance measurement indicators. Two hundred fifty-nine nurses in a Korean hospital participated in a survey questionnaire that included 29 performance evaluation indicators developed by the investigators of this study based on Kaplan and Norton's BSC (1992). Cronbach's alpha was used to test the reliability of the BSC. Exploratory and confirmatory factor analysis with a structural equation model (SEM) was applied to assess the construct validity of the BSC. Cronbach's alpha for the 29 items was .948. Factor analysis of the BSC showed 5 principal components (eigenvalue > 1.0) which explained 62.7% of the total variance, and it included a new component, community service. The SEM analysis results showed that the 5 components were significant for the hospital BSC tool. The high degree of reliability and validity of this BSC suggests that it may be used for performance measurement of a Korean hospital nursing organization. Future studies may consider including a balanced number of nurse managers and staff nurses. Further data analysis on the relationships among factors is recommended.
NASA Technical Reports Server (NTRS)
Aruljothi, Arunvenkatesh
2016-01-01
The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis Branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuing effort of this branch is MaRS, or the Mass and Reliability System, a tool that was the focus of this internship. Future long duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low level space-grade components. MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.
Analytical models for coupling reliability in identical two-magnet systems during slow reversals
NASA Astrophysics Data System (ADS)
Kani, Nickvash; Naeemi, Azad
2017-12-01
This paper follows previous works which investigated the strength of dipolar coupling in two-magnet systems. While those works focused on qualitative analyses, this manuscript elucidates reversal through dipolar coupling, culminating in analytical expressions for reversal reliability in identical two-magnet systems. The dipolar field generated by a mono-domain magnetic body can be represented by a tensor containing both longitudinal and perpendicular field components; this field changes orientation and magnitude based on the magnetization of neighboring nanomagnets. While the dipolar field does reduce to its longitudinal component at short time-scales, for slow magnetization reversals, the simple longitudinal field representation greatly underestimates the scope of parameters that ensure reliable coupling. For the first time, analytical models that map the geometric and material parameters required for reliable coupling in two-magnet systems are developed. It is shown that in biaxial nanomagnets, the x̂ and ŷ components of the dipolar field contribute to the coupling, while all three dimensions contribute to the coupling between a pair of uniaxial magnets. Additionally, the ratio of the longitudinal and perpendicular components of the dipolar field is also very important. If the perpendicular components in the dipolar tensor are too large, the nanomagnet pair may come to rest in an undesirable meta-stable state away from the free axis. The analytical models formulated in this manuscript map the minimum and maximum parameters for reliable coupling. Using these models, it is shown that there is a very small range of material parameters which can facilitate reliable coupling between perpendicular-magnetic-anisotropy nanomagnets; hence, in-plane nanomagnets are more suitable for coupled systems.
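For context, the field of a mono-domain magnet is often approximated by the point-dipole expression, which makes the longitudinal and perpendicular components referred to above explicit; this standard form is shown only for illustration and is not the manuscript's full tensor model.

```latex
\mathbf{H}_{\mathrm{dip}}(\mathbf{r}) = \frac{1}{4\pi}\,
\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}}{|\mathbf{r}|^{3}}
```

where m is the magnetic moment of the neighboring magnet and r̂ is the unit vector between the two magnets; the component of H_dip along the neighbor's easy axis is the longitudinal term and the remainder is the perpendicular term.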
Child Care and Other Support Programs
ERIC Educational Resources Information Center
Floyd, Latosha; Phillips, Deborah A.
2013-01-01
The U.S. military has come to realize that providing reliable, high-quality child care for service members' children is a key component of combat readiness. As a result, the Department of Defense (DoD) has invested heavily in child care. The DoD now runs what is by far the nation's largest employer-sponsored child-care system, a sprawling network…
Anne E. Black; Brooke Baldauf McBride
2013-01-01
This study examined the effects of organisational, environmental, group and individual characteristics on five components of safety climate (High Reliability Organising Practices, Leadership, Group Culture, Learning Orientation and Mission Clarity) in the US federal wildland fire management community. Of particular interest were differences between perceptions based on...
ERIC Educational Resources Information Center
Holloway, Justin
2017-01-01
Business schools have transformed from organizations that solely provide a business education to organizations that train future business leaders, perform extensive research, and serve as major revenue generators for the university systems in which they belong. Organizational mindfulness, a concept created from high-reliability organizations, to…
Phase-locked telemetry system for rotary instrumentation of turbomachinery, phase 1
NASA Technical Reports Server (NTRS)
Adler, A.; Hoeks, B.
1978-01-01
A telemetry system for use in making strain and temperature measurements on the rotating components of high speed turbomachines employs phase locked transmitters, which offer greater measurement channel capacity and reliability than existing systems which employ L-C carrier oscillators. A prototype transmitter module was tested at 175 C combined with 40,000 g's acceleration.
Metsavaht, Leonardo; Leporace, Gustavo; Riberto, Marcelo; Sposito, Maria Matilde M; Del Castillo, Letícia N C; Oliveira, Liszt P; Batista, Luiz Alberto
2012-11-01
Clinical measurement. To translate and culturally adapt the Lower Extremity Functional Scale (LEFS) into a Brazilian Portuguese version, and to test the construct and content validity and reliability of this version in patients with knee injuries. There is no Brazilian Portuguese version of an instrument to assess the function of the lower extremity after orthopaedic injury. The translation of the original English version of the LEFS into a Brazilian Portuguese version was accomplished using standard guidelines and tested in 31 patients with knee injuries. Subsequently, 87 patients with a variety of knee disorders completed the Brazilian Portuguese LEFS, the Medical Outcomes Study 36-Item Short-Form Health Survey, the Western Ontario and McMaster Universities Osteoarthritis Index, and the International Knee Documentation Committee Subjective Knee Evaluation Form and a visual analog scale for pain. All patients were retested within 2 days to determine reliability of these measures. Validation was assessed by determining the level of association between the Brazilian Portuguese LEFS and the other outcome measures. Reliability was documented by calculating internal consistency, test-retest reliability, and standard error of measurement. The Brazilian Portuguese LEFS had a high level of association with the physical component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.82), the Western Ontario and McMaster Universities Osteoarthritis Index (r = 0.87), the International Knee Documentation Committee Subjective Knee Evaluation Form (r = 0.82), and the pain visual analog scale (r = -0.60) (all, P<.05). The Brazilian Portuguese LEFS had a low level of association with the mental component of the Medical Outcomes Study 36-Item Short-Form Health Survey (r = 0.38, P<.05). The internal consistency (Cronbach α = .952) and test-retest reliability (intraclass correlation coefficient = 0.957) of the Brazilian Portuguese version of the LEFS were high. The standard error of measurement was low (3.6) and the agreement was considered high, demonstrated by the small differences between test and retest and the narrow limit of agreement, as observed in Bland-Altman and survival-agreement plots. The translation of the LEFS into a Brazilian Portuguese version was successful in preserving the semantic and measurement properties of the original version and was shown to be valid and reliable in a Brazilian population with knee injuries.
Schweitzer, Karl M; Vaccaro, Alexander R; Harrop, James S; Hurlbert, John; Carrino, John A; Rechtine, Glenn R; Schwartz, David G; Alanay, Ahmet; Sharma, Dinesh K; Anderson, D Greg; Lee, Joon Y; Arnold, Paul M
2007-09-01
The Spine Trauma Study Group (STSG) has proposed a novel thoracolumbar injury classification system and score (TLICS) in an attempt to define traumatic spinal injuries and direct appropriate management schemes objectively. The TLICS assigns specific point values based on three variables to generate a final severity score that guides potential treatment options. Within this algorithm, significant emphasis has been placed on posterior ligamentous complex (PLC) integrity. The purpose of this study was to determine the interrater reliability of indicators surgeons use when assessing PLC disruption on imaging studies, including computed tomography (CT) and magnetic resonance imaging (MRI). Orthopedic surgeons and neurosurgeons retrospectively reviewed a series of thoracolumbar injury case studies. Thirteen case studies, including images, were distributed to STSG members for individual, independent evaluation of the following three criteria: (1) diastasis of the facet joints on CT; (2) posterior edema-like signal in the region of PLC components on sagittal T2-weighted fat saturation (FAT SAT) MRI; and (3) disrupted PLC components on sagittal T1-weighted MRI. Interrater agreement on the presence or absence of each of the three criteria in each of the 13 cases was assessed. Absolute interrater percent agreement on diastasis of the facet joints on CT and posterior edema-like signal in the region of PLC components on sagittal T2-weighted FAT SAT MRI was similar (agreement 70.5%). Interrater agreement on disrupted PLC components on sagittal T1-weighted MRI was 48.9%. Facet joint diastasis on CT was the most reliable indicator of PLC disruption as assessed by both Cohen's kappa (kappa = 0.395) and intraclass correlation coefficient (ICC 0.430). The interrater reliability of assessing diastasis of the facet joints on CT had fair to moderate agreement. The reliability of assessing the posterior edema-like signal in the region of PLC components was lower but also fair, whereas the reliability of identifying disrupted PLC components was poor.
Fuel Cell Balance-of-Plant Reliability Testbed Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sproat, Vern; LaHurd, Debbie
Reliability of the fuel cell system balance-of-plant (BoP) components is a critical factor that needs to be addressed prior to fuel cells becoming fully commercialized. Failure or performance degradation of BoP components has been identified as a life-limiting factor in fuel cell systems [1]. The goal of this project is to develop a series of test beds that will test system components such as pumps, valves, sensors, fittings, etc., under operating conditions anticipated in real Polymer Electrolyte Membrane (PEM) fuel cell systems. Results will be made generally available to begin removing reliability as a roadblock to the growth of the PEM fuel cell industry. Stark State College students participating in the project, in conjunction with their coursework, have been exposed to technical knowledge and training in the handling and maintenance of hydrogen, fuel cells, and system components, as well as component failure modes and mechanisms. Three test beds were constructed. Testing was completed on gas flow pumps, tubing, pressure and temperature sensors, and valves.
Bessette, Katie L; Jenkins, Lisanne M; Skerrett, Kristy A; Gowins, Jennifer R; DelDonno, Sophie R; Zubieta, Jon-Kar; McInnis, Melvin G; Jacobs, Rachel H; Ajilore, Olusola; Langenecker, Scott A
2018-01-01
There is substantial variability across studies of default mode network (DMN) connectivity in major depressive disorder, and reliability and time-invariance are not reported. This study evaluates whether DMN dysconnectivity in remitted depression (rMDD) is reliable over time and symptom-independent, and explores convergent relationships with cognitive features of depression. A longitudinal study was conducted with 82 young adults free of psychotropic medications (47 rMDD, 35 healthy controls) who completed clinical structured interviews, neuropsychological assessments, and 2 resting-state fMRI scans across 2 study sites. Functional connectivity analyses from bilateral posterior cingulate and anterior hippocampal formation seeds in DMN were conducted at both time points within a repeated-measures analysis of variance to compare groups and evaluate reliability of group-level connectivity findings. Eleven hyper- (from posterior cingulate) and 6 hypo- (from hippocampal formation) connectivity clusters in rMDD were obtained with moderate to adequate reliability in all but one cluster (ICCs range from 0.50 to 0.76 for 16 of 17). The significant clusters were reduced with a principal component analysis (5 components obtained) to explore these connectivity components, and were then correlated with cognitive features (rumination, cognitive control, learning and memory, and explicit emotion identification). At the exploratory level, for convergent validity, components consisting of posterior cingulate with cognitive control network hyperconnectivity in rMDD were related to cognitive control (inverse) and rumination (positive). Components consisting of anterior hippocampal formation with social emotional network and DMN hypoconnectivity were related to memory (inverse) and happy emotion identification (positive). Thus, time-invariant DMN connectivity differences exist early in the lifespan course of depression and are reliable. The nuanced results suggest a ventral within-network hypoconnectivity associated with poor memory and a dorsal cross-network hyperconnectivity linked to poorer cognitive control and elevated rumination. Study of early course remitted depression with attention to reliability and symptom independence could lead to more readily translatable clinical assessment tools for biomarkers.
NASA Glenn Research Center Support of the ASRG Project
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Wong, Wayne A.
2014-01-01
A high efficiency radioisotope power system is being developed for long-duration NASA space science missions. The U.S. Department of Energy (DOE) managed a flight contract with Lockheed Martin Space Systems Company (LMSSC) to build Advanced Stirling Radioisotope Generators (ASRGs), with support from NASA Glenn Research Center (GRC). Sunpower Inc. held two parallel contracts to produce Advanced Stirling Convertors (ASCs), one with DOE/Lockheed Martin to produce ASC-F flight units, and one with GRC for the production of ASC-E3 engineering unit pathfinders that are built to the flight design. In support of those contracts, GRC provided testing, materials expertise, government furnished equipment, inspections, and related data products to DOE/Lockheed Martin and Sunpower. The technical support includes material evaluations, component tests, convertor characterization, and technology transfer. Material evaluations and component tests have been performed on various ASC components in order to assess potential life-limiting mechanisms and provide data for reliability models. Convertor-level tests have been used to characterize performance under operating conditions that are representative of various mission conditions. Technology transfers enhanced contractor capabilities for specialized production processes and tests. Despite termination of the flight ASRG contract, NASA continues to develop the high efficiency ASC conversion technology under the ASC-E3 contract. This paper describes key government furnished services performed for ASRG and future tests used to provide data for ongoing reliability assessments.
NASA Astrophysics Data System (ADS)
Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc
2004-09-01
High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both strong test time reduction and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is a growing complexity in calculating average lifetimes and failure rates (FITs) from ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be around 10⁶ hours when aged under typical conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged for 10,000 hours, mixing different temperature and drive current conditions and leading to acceleration factors above 300-400. These conditions are high-cost and time-consuming and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, Distributed Feedback single mode laser diodes (DFB-LD) used for 1550 nm telecommunication networks working at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters have been measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted to experimental measurements.
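As an illustration of this statistical extrapolation idea, the sketch below assumes a power-law drift model for the monitored parameter, draws the fitted drift amplitudes from a lognormal distribution, and simulates the resulting time-to-failure distribution and an approximate failure rate. The drift exponent, amplitude distribution, and failure criterion are invented for illustration and are not the paper's fitted values.

```python
# Illustrative sketch of the statistical extrapolation idea: a power-law drift
# model dP(t) = A * t**m is fitted per device during ageing, the fitted A values
# are described by a lognormal distribution, and times to reach the maximum
# acceptable shift are simulated. Parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
m = 0.5                       # assumed drift exponent (shared by all devices)
A = rng.lognormal(mean=np.log(1e-4), sigma=0.4, size=100_000)  # fitted amplitudes
max_shift = 0.10              # failure criterion: 10 % parameter drift

t_fail = (max_shift / A) ** (1.0 / m)      # hours, from dP(t_fail) = max_shift
print(f"median time to failure: {np.median(t_fail):.2e} h")

mission = 1e5                 # hours of operation considered
frac_failed = np.mean(t_fail < mission)
fit = frac_failed / mission * 1e9          # rough average failure rate in FIT
print(f"failures within {mission:.0e} h: {frac_failed:.2%}  (~{fit:.0f} FIT)")
```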
Reliability Issues in Stirling Radioisotope Power Systems
NASA Technical Reports Server (NTRS)
Schreiber, Jeffrey; Shah, Ashwin
2005-01-01
Stirling power conversion is a potential candidate for use in a Radioisotope Power System (RPS) for space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced requirement of radioactive material. Reliability of an RPS that utilizes Stirling power conversion technology is important in order to ascertain long-term successful performance. Owing to the long lifetime requirement (14 years), it is difficult to perform long-term tests that encompass all the uncertainties involved in the design variables of components and subsystems comprising the RPS. The requirement for uninterrupted performance reliability and related issues are discussed, and some of the critical areas of concern are identified. An overview of the current on-going efforts to understand component life, design variables at the component and system levels, and related sources and nature of uncertainties is also discussed. Current status of the 110 watt Stirling Radioisotope Generator (SRG110) reliability efforts is described. Additionally, an approach showing the use of past experience on other successfully used power systems to develop a reliability plan for the SRG110 design is outlined.
Reliability Issues in Stirling Radioisotope Power Systems
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.
2004-01-01
Stirling power conversion is a potential candidate for use in a Radioisotope Power System (RPS) for space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced requirement of radioactive material. Reliability of an RPS that utilizes Stirling power conversion technology is important in order to ascertain long-term successful performance. Owing to the long lifetime requirement (14 years), it is difficult to perform long-term tests that encompass all the uncertainties involved in the design variables of components and subsystems comprising the RPS. The requirement for uninterrupted performance reliability and related issues are discussed, and some of the critical areas of concern are identified. An overview of the current on-going efforts to understand component life, design variables at the component and system levels, and related sources and nature of uncertainties is also discussed. Current status of the 110 watt Stirling Radioisotope Generator (SRG110) reliability efforts is described. Additionally, an approach showing the use of past experience on other successfully used power systems to develop a reliability plan for the SRG110 design is outlined.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Development of a Novel Brayton-Cycle Cryocooler and Key Component Technologies
NASA Astrophysics Data System (ADS)
Nieczkoski, S. J.; Mohling, R. A.
2004-06-01
Brayton-cycle cryocoolers are being developed to provide efficient cooling in the 6 K to 70 K temperature range. The cryocoolers are being developed for use in space and in terrestrial applications where combinations of long lifetime, high efficiency, compactness, low mass, low vibration, flexible interfacing, load variability, and reliability are essential. The key enabling technologies for these systems are a mesoscale expander and an advanced oil-free scroll compressor. Both these components are nearing completion of their prototype development phase. The emphasis in the component and system development has been on invoking fabrication processes and techniques that can be evolved toward further reductions in scale, tending toward cryocooler miniaturization.
Shuttle payload minimum cost vibroacoustic tests
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.
High variable mixture ratio oxygen/hydrogen engine
NASA Technical Reports Server (NTRS)
Erickson, C. M.; Tu, W. H.; Weiss, A. H.
1988-01-01
The ability of an O2/H2 engine to operate over a range of high-propellant mixture ratios was previously shown to be advantageous in single stage to orbit (SSTO) vehicles. The results are presented for the analysis of high-performance engine power cycles operating over propellant mixture ratio ranges of 12 to 6 and 9 to 6. A requirement to throttle up to 60 percent of nominal thrust was superimposed as a typical throttle range to limit vehicle acceleration as propellant is expended. The object of the analysis was to determine areas of concern relative to component and engine operability or potential hazards resulting from the operating requirements and ranges of conditions that derive from the overall engine requirements. The SSTO mission necessitates a high-performance, lightweight engine. Therefore, staged combustion power cycles employing either dual fuel-rich preburners or dual mixed (fuel-rich and oxygen-rich) preburners were examined. Engine mass flow and power balances were made and major component operating ranges were defined. Component size and arrangement were determined through engine layouts for one of the configurations evaluated. Each component is being examined to determine if there are areas of concern with respect to component efficiency, operability, reliability, or hazard. The effects of reducing the maximum chamber pressure were investigated for one of the cycles.
Deterministic Ethernet for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Wolff, B.
2015-09-01
Typical spacecraft systems are distributed to be able to achieve the required reliability and availability targets of the mission. However, the requirements on these systems are different for launchers, satellites, human space flight and exploration missions. Launchers typically require high reliability with very short mission times, whereas satellites or space exploration missions require very high availability over very long mission times. Comparing a distributed system of a launcher with that of a satellite shows very fast reaction times in launchers versus much slower ones in satellite applications. Human space flight missions are perhaps the most challenging concerning reliability and availability, since human lives are involved and the mission times can be very long, e.g. on the ISS. The reaction times of these vehicles can also become challenging during mission scenarios such as landing or re-entry, leading to very fast control loops. In these different applications, more and more autonomous functions are required to fulfill the needs of current and future missions. This autonomy leads to new requirements with respect to increased performance, determinism, reliability and availability. On the other hand, the pressure to reduce the cost of electronic components in space applications is increasing, leading to the use of more and more COTS components, especially for launchers and LEO satellites. This requires a technology which is able to provide a cost-competitive solution for both the highly reliable and available deep-space market and the low-cost “new space” market. Future spacecraft communication standards therefore have to be much more flexible, scalable and modular to be able to deal with these upcoming challenges. The only way to fulfill these requirements is to base them on open standards which are used across industries, leading to a reduction of lifecycle costs and an increase in performance. The use of a communication network that fulfills these requirements will be essential for such spacecraft to allow use in launcher, satellite, human space flight and exploration missions. Using one technology and the related infrastructure for these different applications will lead to a significant reduction of complexity and would moreover lead to significant savings in size, weight and power while increasing the performance of the overall system. The paper focuses on the use of the TTEthernet technology for launchers, satellites and human spaceflight and will demonstrate the scalability of the technology for the different applications. The data used is derived from the ESA TRP 7594 on “Reliable High-Speed Data Bus/Network for Safety-Oriented Missions”.
Optical Amplifier Based Space Solar Power
NASA Technical Reports Server (NTRS)
Fork, Richard L.
2001-01-01
The objective was to design a safe optical power beaming system for use in space. Research was focused on the identification of strategies and structures that would enable the achievement of near diffraction-limited optical beam quality, highly efficient electrical-to-optical conversion, and high average power in combination in a single system. Efforts centered on producing high efficiency, low mass of the overall system, low operating temperature, precision pointing and tracking capability, compatibility with useful satellite orbits, component and system reliability, and long component and system life in space. A system based on increasing the power handled by each individual module to an optimum, and then increasing the number of modules in the complete structure, was planned. We were concerned with identifying the most economical and rapid path to commercially viable safe space solar power.
Yoshimoto, Shusuke; Uemura, Takafumi; Akiyama, Mihoko; Ihara, Yoshihiro; Otake, Satoshi; Fujii, Tomoharu; Araki, Teppei; Sekitani, Tsuyoshi
2017-07-01
This paper presents a flexible organic thin-film transistor (OTFT) amplifier for bio-signal monitoring and describes the chip component assembly process. Using a conductive adhesive and a chip mounter, the chip components are mounted on a flexible film substrate, which has OTFT circuits. This study first investigates the reliability of the assembly technique for chip components on the flexible substrate. This study also specifically examines heart pulse wave monitoring conducted using the proposed flexible amplifier circuit and a flexible piezoelectric film. We connected the amplifier to a Bluetooth device for a wearable device demonstration.
Correlation study between vibrational environmental and failure rates of civil helicopter components
NASA Technical Reports Server (NTRS)
Alaniz, O.
1979-01-01
An investigation of two selected helicopter types, namely, the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components located in five different areas of the helicopter and consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.
49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs
Code of Federal Regulations, 2011 CFR
2011-10-01
... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...
49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs
Code of Federal Regulations, 2013 CFR
2013-10-01
... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...
49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs
Code of Federal Regulations, 2012 CFR
2012-10-01
... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...
49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs
Code of Federal Regulations, 2014 CFR
2014-10-01
... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...
2008-10-01
provide adequate means for thermal heat dissipation and cooling. Thus electronic packaging has four main functions [1]: • Signal distribution which... dissipation, involving structural and materials consideration. • Mechanical, chemical and electromagnetic protection of components and... nature when compared to phenomenological models. Microelectronic packaging industry spends typically several months building and reliability
ERIC Educational Resources Information Center
Usher, Wayne
2009-01-01
This study was undertaken to determine the level of understanding of Gold Coast general practitioners (GPs) pertaining to such criteria as reliability, interactive and usability components associated with health websites. These are important considerations due to the increased levels of computer and World Wide Web (WWW)/Internet use and health…
ERIC Educational Resources Information Center
Caruso, John C.; Witkiewitz, Katie
2002-01-01
As an alternative to equally weighted difference scores, examined an orthogonal reliable component analysis (RCA) solution and an oblique principal components analysis (PCA) solution for the standardization sample of the Kaufman Assessment Battery for Children (KABC; A. Kaufman and N. Kaufman, 1983). Discusses the practical implications of the…
On-orbit spacecraft reliability
NASA Technical Reports Server (NTRS)
Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.
1978-01-01
Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.
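For a constant-failure-rate assumption, pooled on-orbit experience can be turned into a point estimate and a two-sided confidence interval using the standard chi-square bounds, as sketched below; the failure count and cumulative hours are invented placeholders, not figures from the study.

```python
# Minimal sketch of estimating an on-orbit failure rate with a two-sided
# confidence interval from pooled operating experience, assuming a constant
# failure rate and using the standard chi-square bounds. Numbers are invented.
from scipy.stats import chi2

failures = 4            # observed component failures (hypothetical)
hours = 2.5e6           # cumulative on-orbit operating hours (hypothetical)
conf = 0.90

lam_hat = failures / hours
lower = chi2.ppf((1 - conf) / 2, 2 * failures) / (2 * hours)
upper = chi2.ppf((1 + conf) / 2, 2 * failures + 2) / (2 * hours)

print(f"point estimate: {lam_hat:.2e} /h ({lam_hat * 1e9:.0f} FIT)")
print(f"{conf:.0%} interval: [{lower:.2e}, {upper:.2e}] /h")
```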
NASA Astrophysics Data System (ADS)
Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng
2012-07-01
The reliability apportionment of the spacecraft solar array is of significant importance for spacecraft designers in the early stage of design. However, it is difficult to use the existing methods to resolve the reliability apportionment problem because of data insufficiency and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method which combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends the previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may obtain the highest reliability value and the solar panels and hinges may get the lowest reliability before design and manufacturing. Our developed method is of practical significance for the reliability apportionment of the solar array where the design information has not been clearly identified, particularly in the early stage of design.
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
Probabilistic transfer matrix (PTM) is a widely used model in the reliability research of circuits. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not completely conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. After that, the binary decision diagram (BDD) is used to quantitatively investigate the reliability of two QCA XOR gates depending on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be found precisely based on the importance values (IVs) of components. This method thus contributes to the construction of reliable QCA circuits.
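The sketch below is a toy version of the fault-tree idea: the top-event failure probability of a small circuit is written as a function of independent component failure probabilities, and a Birnbaum-style importance value is computed for each component by evaluating the tree with that component forced failed and forced working. The structure and numbers are invented and are not the paper's QCA device models.

```python
# Toy fault-tree evaluation and Birnbaum importance values. The gate structure
# and probabilities are invented placeholders, not the paper's QCA models.
def top_event(p):
    """Failure if (c0 AND c1 fail) OR c2 fails, assuming independent components."""
    q01 = p["c0"] * p["c1"]
    return q01 + p["c2"] - q01 * p["c2"]

p = {"c0": 0.02, "c1": 0.05, "c2": 0.01}
print(f"top-event probability: {top_event(p):.4f}")

# Birnbaum importance: dQ/dp_i, obtained by setting p_i to 1 and to 0.
for name in p:
    hi = top_event({**p, name: 1.0})
    lo = top_event({**p, name: 0.0})
    print(f"importance of {name}: {hi - lo:.4f}")
```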
Component technology for stirling power converters
NASA Technical Reports Server (NTRS)
Thieme, Lanny G.
1991-01-01
NASA Lewis Research Center has organized a component technology program as part of the efforts to develop Stirling converter technology for space power applications. The Stirling Space Power Program is part of the NASA High Capacity Power Project of the Civil Space Technology Initiative (CSTI). NASA Lewis is also providing technical management for the DOE/Sandia program to develop Stirling converters for solar terrestrial power producing electricity for the utility grid. The primary contractors for the space power and solar terrestrial programs develop component technologies directly related to their goals. This Lewis component technology effort, while coordinated with the main programs, aims at longer term issues, advanced technologies, and independent assessments. An overview of work on linear alternators, engine/alternator/load interactions and controls, heat exchangers, materials, life and reliability, and bearings is presented.
Mansberger, Steven L.; Sheppler, Christina R.; McClure, Tina M.; VanAlstine, Cory L.; Swanson, Ingrid L.; Stoumbos, Zoey; Lambert, William E.
2013-01-01
Purpose: To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. Methods: We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Results: Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (> 0.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < 0.05) with adherence and a correlation coefficient (R²) of 0.40. Test-retest reliability was 90%. Conclusion: The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence. PMID:24072942
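The internal-consistency figure reported above (Cronbach's alpha) can be computed directly from an item-by-respondent response matrix, as in the minimal sketch below; the simulated responses and the subscale size are placeholders, not GTCAT data.

```python
# Minimal sketch of Cronbach's alpha for one subscale, computed from a
# respondent-by-item matrix. The data below are random placeholders.
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(58, 1))                     # shared construct
responses = latent + 0.8 * rng.normal(size=(58, 6))   # 6 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```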
Identifying reliable independent components via split-half comparisons
Groppe, David M.; Makeig, Scott; Kutas, Marta
2011-01-01
Independent component analysis (ICA) is a family of unsupervised learning algorithms that have proven useful for the analysis of the electroencephalogram (EEG) and magnetoencephalogram (MEG). ICA decomposes an EEG/MEG data set into a basis of maximally temporally independent components (ICs) that are learned from the data. As with any statistic, a concern with using ICA is the degree to which the estimated ICs are reliable. An IC may not be reliable if ICA was trained on insufficient data, if ICA training was stopped prematurely or at a local minimum (for some algorithms), or if multiple global minima were present. Consequently, evidence of ICA reliability is critical for the credibility of ICA results. In this paper, we present a new algorithm for assessing the reliability of ICs based on applying ICA separately to split-halves of a data set. This algorithm improves upon existing methods in that it considers both IC scalp topographies and activations, uses a probabilistically interpretable threshold for accepting ICs as reliable, and requires applying ICA only three times per data set. As evidence of the method’s validity, we show that the method can perform comparably to more time intensive bootstrap resampling and depends in a reasonable manner on the amount of training data. Finally, using the method we illustrate the importance of checking the reliability of ICs by demonstrating that IC reliability is dramatically increased by removing the mean EEG at each channel for each epoch of data rather than the mean EEG in a prestimulus baseline. PMID:19162199
International classification of reliability for implanted cochlear implant receiver stimulators.
Battmer, Rolf-Dieter; Backous, Douglas D; Balkany, Thomas J; Briggs, Robert J S; Gantz, Bruce J; van Hasselt, Andrew; Kim, Chong Sun; Kubo, Takeshi; Lenarz, Thomas; Pillsbury, Harold C; O'Donoghue, Gerard M
2010-10-01
To design an international standard to be used when reporting reliability of the implanted components of cochlear implant systems to appropriate governmental authorities, cochlear implant (CI) centers, and for journal editors in evaluating manuscripts involving cochlear implant reliability. The International Consensus Group for Cochlear Implant Reliability Reporting was assembled to unify ongoing efforts in the United States, Europe, Asia, and Australia to create a consistent and comprehensive classification system for the implanted components of CI systems across manufacturers. All members of the consensus group are from tertiary referral cochlear implant centers. None. A clinically relevant classification scheme adapted from principles of ISO standard 5841-2:2000 originally designed for reporting reliability of cardiac pacemakers, pulse generators, or leads. Standard definitions for device failure, survival time, clinical benefit, reduced clinical benefit, and specification were generated. Time intervals for reporting back to implant centers for devices tested to be "out of specification," categorization of explanted devices, the method of cumulative survival reporting, and the content of reliability reports to be issued by manufacturers were agreed upon by all members. The methodology for calculating cumulative survival was adapted from ISO standard 5841-2:2000. The International Consensus Group on Cochlear Implant Device Reliability Reporting recommends compliance with this new standard in reporting reliability of implanted CI components by all manufacturers of CIs and the adoption of this standard as a minimal reporting guideline for editors of journals publishing cochlear implant research results.
van der Put, Robert M F; de Haan, Alex; van den IJssel, Jan G M; Hamidi, Ahd; Beurret, Michel
2015-11-27
Due to the rapidly increasing introduction of Haemophilus influenzae type b (Hib) and other conjugate vaccines worldwide during the last decade, reliable and robust analytical methods are needed for the quantitative monitoring of intermediate samples generated during fermentation (upstream processing, USP) and purification (downstream processing, DSP) of polysaccharide vaccine components. This study describes the quantitative characterization of in-process control (IPC) samples generated during the fermentation and purification of the capsular polysaccharide (CPS), polyribosyl-ribitol-phosphate (PRP), derived from Hib. Reliable quantitative methods are necessary for all stages of production; otherwise accurate process monitoring and validation is not possible. Prior to the availability of high performance anion exchange chromatography methods, this polysaccharide was predominantly quantified either with immunochemical methods, or with the colorimetric orcinol method, which shows interference from fermentation medium components and reagents used during purification. Next to an improved high performance anion exchange chromatography-pulsed amperometric detection (HPAEC-PAD) method, using a modified gradient elution, both the orcinol assay and high performance size exclusion chromatography (HPSEC) analyses were evaluated. For DSP samples, it was found that the correlation between the results obtained by HPAEC-PAD specific quantification of the PRP monomeric repeat unit released by alkaline hydrolysis, and those from the orcinol method was high (R² = 0.8762), and that it was lower between HPAEC-PAD and HPSEC results. Additionally, HPSEC analysis of USP samples yielded surprisingly comparable results to those obtained by HPAEC-PAD. In the early part of the fermentation, medium components interfered with the different types of analysis, but quantitative HPSEC data could still be obtained, although lacking the specificity of the HPAEC-PAD method. Thus, the HPAEC-PAD method has the advantage of giving a specific response compared to the orcinol assay and HPSEC, and does not show interference from various components that can be present in intermediate and purified PRP samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
2nd Generation Reusable Launch Vehicle (2G RLV). Revised
NASA Technical Reports Server (NTRS)
Matlock, Steve; Sides, Steve; Kmiec, Tom; Arbogast, Tim; Mayers, Tom; Doehnert, Bill
2001-01-01
This is a revised final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (Space Shuttle Main Engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of three cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendations for future studies.
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
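One simple way to express the applicability concern is to widen the spread of the prior as the generic source becomes less representative of the new component, for example by assigning a larger lognormal error factor. The sketch below builds such priors from a generic median failure rate; the error-factor values are arbitrary placeholders and are not the heuristic guidelines discussed in the presentation.

```python
# Sketch: turn a generic point estimate of a failure rate into a lognormal prior
# whose spread (error factor, EF = 95th percentile / median) grows as the generic
# source becomes less applicable. EF values are illustrative placeholders.
import numpy as np
from scipy.stats import lognorm

generic_median = 1e-6          # failures per hour, from a generic database
error_factor = {"high applicability": 3.0,
                "moderate applicability": 10.0,
                "low applicability": 30.0}

for label, ef in error_factor.items():
    sigma = np.log(ef) / 1.645                 # lognormal sigma from the EF
    prior = lognorm(s=sigma, scale=generic_median)
    lo, hi = prior.ppf([0.05, 0.95])
    print(f"{label:<25s} 5th={lo:.1e}  median={generic_median:.0e}  95th={hi:.1e}")
```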
Test-retest and between-site reliability in a multicenter fMRI study.
Friedman, Lee; Stern, Hal; Brown, Gregory G; Mathalon, Daniel H; Turner, Jessica; Glover, Gary H; Gollub, Randy L; Lauriello, John; Lim, Kelvin O; Cannon, Tyrone; Greve, Douglas N; Bockholt, Henry Jeremy; Belger, Aysenil; Mueller, Bryon; Doty, Michael J; He, Jianchun; Wells, William; Smyth, Padhraic; Pieper, Steve; Kim, Seyoung; Kubicki, Marek; Vangel, Mark; Potkin, Steven G
2008-08-01
In the present report, estimates of test-retest and between-site reliability of fMRI assessments were produced in the context of a multicenter fMRI reliability study (FBIRN Phase 1, www.nbirn.net). Five subjects were scanned on 10 MRI scanners on two occasions. The fMRI task was a simple block design sensorimotor task. The impulse response functions to the stimulation block were derived using an FIR-deconvolution analysis with FMRISTAT. Six functionally-derived ROIs covering the visual, auditory and motor cortices, created from a prior analysis, were used. Two dependent variables were compared: percent signal change and contrast-to-noise ratio. Reliability was assessed with intraclass correlation coefficients derived from a variance components analysis. Test-retest reliability was high, but initially, between-site reliability was low, indicating a strong contribution from site and site-by-subject variance. However, a number of factors that can markedly improve between-site reliability were uncovered, including increasing the size of the ROIs, adjusting for smoothness differences, and inclusion of additional runs. By employing multiple steps, between-site reliability for 3T scanners was increased by 123%. Dropping one site at a time and assessing reliability can be a useful method of assessing the sensitivity of the results to particular sites. These findings should provide guidance to others on the best practices for future multicenter studies.
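As an illustration of reliability derived from variance components, the sketch below computes a one-way random-effects ICC, i.e. ICC(1), from a subjects-by-sessions matrix of a dependent measure such as percent signal change in one ROI. The data are synthetic placeholders, and the one-way form is a simplification of the study's full variance-components model.

```python
# Sketch of a test-retest ICC(1) from a subjects-by-sessions matrix of one
# dependent measure. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_sess = 5, 4
subject_effect = rng.normal(scale=1.0, size=(n_subj, 1))   # between-subject signal
data = subject_effect + rng.normal(scale=0.5, size=(n_subj, n_sess))

grand = data.mean()
ms_between = n_sess * ((data.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (n_sess - 1))

icc1 = (ms_between - ms_within) / (ms_between + (n_sess - 1) * ms_within)
print(f"ICC(1) = {icc1:.2f}")
```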
NASA Astrophysics Data System (ADS)
Seiller, G.; Anctil, F.; Roy, R.
2017-09-01
This paper outlines the design and experimentation of an Empirical Multistructure Framework (EMF) for lumped conceptual hydrological modeling. This concept is inspired from modular frameworks, empirical model development, and multimodel applications, and encompasses the overproduce and select paradigm. The EMF concept aims to reduce subjectivity in conceptual hydrological modeling practice and includes model selection in the optimisation steps, reducing initial assumptions on the prior perception of the dominant rainfall-runoff transformation processes. EMF generates thousands of new modeling options from, for now, twelve parent models that share their functional components and parameters. Optimisation resorts to ensemble calibration, ranking and selection of individual child time series based on optimal bias and reliability trade-offs, as well as accuracy and sharpness improvement of the ensemble. Results on 37 snow-dominated Canadian catchments and 20 climatically-diversified American catchments reveal the excellent potential of the EMF in generating new individual model alternatives, with high respective performance values, that may be pooled efficiently into ensembles of seven to sixty constitutive members, with low bias and high accuracy, sharpness, and reliability. A group of 1446 new models is highlighted to offer good potential on other catchments or applications, based on their individual and collective interests. An analysis of the preferred functional components reveals the importance of the production and total flow elements. Overall, results from this research confirm the added value of ensemble and flexible approaches for hydrological applications, especially in uncertain contexts, and open up new modeling possibilities.
Robot-Powered Reliability Testing at NREL's ESIF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, Kevin
With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested-and currently costly-component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle-all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.
Robot-Powered Reliability Testing at NREL's ESIF
Harrison, Kevin
2018-02-14
With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested-and currently costly-component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle-all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.
Developing and testing the CHORDS: Characteristics of Responsible Drinking Survey.
Barry, Adam E; Goodson, Patricia
2011-01-01
Report on the development and psychometric testing of a theoretically and evidence-grounded instrument, the Characteristics of Responsible Drinking Survey (CHORDS). Instrument subjected to four phases of pretesting (cognitive validity, cognitive and motivational qualities, pilot test, and item evaluation) and a final posttest implementation. Large public university in Texas. Randomly selected convenience sample (n = 729) of currently enrolled students. This 78-item questionnaire measures individuals' responsible drinking beliefs, motivations, intentions, and behaviors. Cronbach α, split-half reliability, principal components analysis and Spearman ρ were conducted to investigate reliability, stability, and validity. Measures in the CHORDS exhibited high internal consistency reliability and strong correlations of split-half reliability. Factor analyses indicated five distinct scales were present, as proposed in the theoretical model. Subscale composite scores also exhibited a correlation to alcohol consumption behaviors, indicating concurrent validity. The CHORDS represents the first instrument specifically designed to assess responsible drinking beliefs and behaviors. It was found to elicit valid and reliable data among a college student sample. This instrument holds much promise for practitioners who desire to empirically investigate dimensions of responsible drinking.
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC_MSE) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
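The shrinkage idea can be illustrated with a generic empirical-Bayes-style estimator that pulls noisy subject-level connectivity values toward the group mean with a weight equal to the estimated signal-to-total variance ratio. This is only a sketch of the general principle, not the paper's measurement error model or its ICC_MSE; the variances and data below are synthetic, and the noise variance is assumed known for simplicity.

```python
# Sketch of shrinking noisy subject-level connectivity estimates toward the
# group mean, weighted by the between-subject (signal) share of total variance.
# Numbers are synthetic and the noise variance is assumed known.
import numpy as np

rng = np.random.default_rng(4)
n_subj, n_edges = 461, 100
true_fc = rng.normal(loc=0.3, scale=0.10, size=(n_subj, n_edges))   # signal
observed = true_fc + rng.normal(scale=0.20, size=(n_subj, n_edges)) # + noise

group_mean = observed.mean(axis=0)
total_var = observed.var(axis=0, ddof=1)
noise_var = 0.20 ** 2                      # assumed known here for simplicity
signal_var = np.clip(total_var - noise_var, 0.0, None)
weight = signal_var / (signal_var + noise_var)

shrunk = weight * observed + (1.0 - weight) * group_mean

mse_raw = np.mean((observed - true_fc) ** 2)
mse_shrunk = np.mean((shrunk - true_fc) ** 2)
print(f"MSE raw: {mse_raw:.4f}   MSE shrunk: {mse_shrunk:.4f}")
```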
Development of high purity large forgings for nuclear power plants
NASA Astrophysics Data System (ADS)
Tanaka, Yasuhiko; Sato, Ikuo
2011-10-01
The recent increase in the size of energy plants has been supported by the development of manufacturing technology for high purity large forgings for the key components of the plant. To assure the reliability and performance of the large forgings, refining technology to make high purity steels, casting technology for gigantic ingots, forging technology to homogenize the material and consolidate porosity are essential, together with the required heat treatment and machining technologies. To meet these needs, the double degassing method to reduce impurities, multi-pouring methods to cast the gigantic ingots, vacuum carbon deoxidization, the warm forging process and related technologies have been developed and further improved. Furthermore, melting facilities including vacuum induction melting and electro slag re-melting furnaces have been installed. By using these technologies and equipment, large forgings have been manufactured and shipped to customers. These technologies have also been applied to the manufacture of austenitic steel vessel components of the fast breeder reactors and components for fusion experiments.
NASA Astrophysics Data System (ADS)
Qiang, Tian; Wang, Cong; Kim, Nam-Young
2017-08-01
A diplexer offering the advantages of compact size, high performance, and high reliability is proposed on the basis of advanced integrated passive device (IPD) fabrication techniques. The proposed diplexer is developed by combining a third-order low-pass filter (LPF) and a third-order high-pass filter (HPF), which are designed on the basis of the elliptic function prototype low-pass filter. Primary components, such as inductors and capacitors, are designed and fabricated with high Q-factor and appropriate values, and they are subsequently used to construct a compact diplexer having a chip area of 900 μm × 1100 μm (0.009 λ0 × 0.011 λ0, where λ0 is the guided wavelength). In addition, a small-outline transistor (SOT-6) packaging method is adopted, and reliability tests (including temperature, humidity, vibration, and pressure) are conducted to guarantee long-term stability and commercial success. The packaged measurement results indicate excellent RF performance with insertion losses of 1.39 dB and 0.75 dB at operation bands of 0.9 GHz and 1.8 GHz, respectively. The return loss is lower than 10 dB from 0.5 GHz to 4.0 GHz, while the isolation is higher than 15 dB from 0.5 GHz to 3.0 GHz. Thus, it can be concluded that the proposed SOT-6 packaged diplexer is a promising candidate for GSM/CDMA applications. An integrated treatment of diplexer design, RF performance optimization, fabrication process, packaging, RF response measurement, and reliability testing is presented and analyzed in this work.
Test Re-Test Reliability of Four Versions of the 3-Cone Test in Non-Athletic Men
Langley, Jason G.; Chetlin, Robert D.
2017-01-01
Until recently, measurement and evaluation in sport science, especially agility testing, have not always included key elements of proper test construction. Often tests are published without reporting reliability and validity analysis for a specific population. The purpose of the present study was to examine the test re-test reliability of four versions of the 3-Cone Test (3CT), and provide guidance on proper test construction for testing agility in athletic populations. Forty male students enrolled in classes in the Department of Physical Education at a mid-Atlantic university participated. On each test day, participants performed 10 trials. In random order, they performed three trials to the right (3CTR, standard test), three to the left (3CTL), and two modified trials (3CTAR and 3CTAL), which included a reactive component in which a visual cue was given to indicate direction. Intra-class correlation coefficients (ICC) indicated moderate to high reliability for the four tests: 3CTR 0.79 (0.64-0.88, 95% CI), 3CTL 0.73 (0.55-0.85), 3CTAR 0.85 (0.74-0.92), and 3CTAL 0.79 (0.64-0.88). Small standard errors of measurement (SEM) were found (range 0.09 to 0.10). Pearson correlations between tests were high (0.82-0.92) on day one as well as day two (0.72-0.85). These results indicate each version of the 3-Cone Test is reliable; however, further tests are needed with specific athletic populations. Only the 3CTAR and 3CTAL are tests of agility due to the inclusion of a reactive component. Future studies examining agility testing and training should incorporate technological elements, including automated timing systems and motion capture analysis. Such instrumentation will allow for optimal design of tests that simulate sport-specific game conditions. Key points The commonly used 3-cone test (upside down “L” to the right) is a reliable change of direction speed (CODS) test when evaluating collegiate males. A modification of the CODS 3-cone test (upside down “L” to the left instead of to the right) is also reliable for evaluating collegiate males. A modification of the 3-cone test that includes reaction and a choice of a cut to the left or right remains reliable as an agility test version in collegiate males. There are moderate to high correlations between the four versions of the tests. Reaction remains critical to the design of agility testing and training protocols, and should be investigated across various athletes, including novice/expert and male/female, and in nearly every sporting event. PMID:28344450
Sekir, U; Yildiz, Y; Hazneci, B; Ors, F; Saka, T; Aydin, T
2008-12-01
In contrast to the single evaluation methods used in the past, the combination of multiple tests allows one to obtain a global assessment of the ankle joint. The aim of this study was to determine the reliability of the different tests in a functional test battery. Twenty-four male recreational athletes with unilateral functional ankle instability (FAI) were recruited for this study. One component of the test battery included five different functional ability tests. These tests included a single limb hopping course, single-legged and triple-legged hop for distance, and six and cross six meter hop for time. The ankle joint position sense and one leg standing test were used for evaluation of proprioception and sensorimotor control. The isokinetic strengths of the ankle invertor and evertor muscles were evaluated at a velocity of 120 degrees/s. The reliability of the test battery was assessed by calculating the intraclass correlation coefficient (ICC). Each subject was tested two times, with an interval of 3-5 days between the test sessions. The ICCs for ankle functional and proprioceptive ability showed high reliability (ICCs ranging from 0.94 to 0.98). Additionally, isokinetic ankle joint inversion and eversion strength measurements showed good to high reliability (ICCs between 0.82 and 0.98). The functional test battery investigated in this study proved to be a reliable tool for the assessment of athletes with functional ankle instability. Therefore, clinicians may obtain reliable information from the functional test battery during the assessment of ankle joint performance in patients with functional ankle instability.
Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong
2016-04-29
Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput and multiple analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrated droplet-based microextraction with single-cell mass spectrometry. Specific extraction solvent was used to selectively obtain intracellular components of interest and remove interference of other components. Using this method, UDP-Glc-NAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites in the biological process of dysfunctional oxidative phosphorylation. The method could not only realize matrix-free, selective and sensitive detection of metabolites in single cells, but also have the capability for reliable and high-throughput single-cell analysis.
Polymer, metal and ceramic matrix composites for advanced aircraft engine applications
NASA Technical Reports Server (NTRS)
Mcdanels, D. L.; Serafini, T. T.; Dicarlo, J. A.
1985-01-01
Advanced aircraft engine research within NASA Lewis is being focused on propulsion systems for subsonic, supersonic, and hypersonic aircraft. Each of these flight regimes requires different types of engines, but all require advanced materials to meet their goals of performance, thrust-to-weight ratio, and fuel efficiency. The high strength/weight and stiffness/weight properties of resin, metal, and ceramic matrix composites will play an increasingly key role in meeting these performance requirements. At NASA Lewis, research is ongoing to apply graphite/polyimide composites to engine components and to develop polymer matrices with higher operating temperature capabilities. Metal matrix composites, using magnesium, aluminum, titanium, and superalloy matrices, are being developed for application to static and rotating engine components, as well as for space applications, over a broad temperature range. Ceramic matrix composites are also being examined to increase the toughness and reliability of ceramics for application to high-temperature engine structures and components.
Digital echocardiography 2002: now is the time
NASA Technical Reports Server (NTRS)
Thomas, James D.; Greenberg, Neil L.; Garcia, Mario J.
2002-01-01
The ability to acquire echocardiographic images digitally, store and transfer these data using the DICOM standard, and routinely analyze examinations exists today and allows the implementation of a digital echocardiography laboratory. The purpose of this review article is to outline the critical components of a digital echocardiography laboratory, discuss general strategies for implementation, and put forth some of the pitfalls that we have encountered in our own implementation. The major components of the digital laboratory include (1) digital echocardiography machines with network output, (2) a switched high-speed network, (3) a high throughput server with abundant local storage, (4) a reliable low-cost archive, (5) software to manage information, and (6) support mechanisms for software and hardware. Implementation strategies can vary from a complete vendor solution providing all components (hardware, software, support), to a strategy similar to our own where standard computer and networking hardware are used with specialized software for management of image and measurement information.
Olivier, Serge; Delage, Laurent; Reynaud, Francois; Collomb, Virginie; Trouillon, Michel; Grelin, Jerome; Schanen, Isabelle; Minier, Vincent; Broquin, Jean-Emmanuel; Ruilier, Cyril; Leone, Bruno
2007-02-20
We present a three-telescope space-based interferometer prototype dedicated to high-resolution imaging. This project, named multiaperture fiber-linked interferometer (MAFL), was funded by the European Space Agency. The aim of the MAFL project is to propose, design, and implement, for the first time to the best of our knowledge, all the optical functions required for the global instrument on the same integrated optics (IO) component for controlling a three-arm interferometer and to obtain reliable science data. The coherent transport from telescopes to the IO component is achieved by means of highly birefringent optical fiber. The laboratory bench is presented, and the results are reported, allowing us to validate the optical potential of the IO component in this context. The validation measurements consist of the throughput of this optical device, the performance of the metrological servo loop, and the instrumental contrasts and phase closure of the science fringes.
Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane
NASA Technical Reports Server (NTRS)
Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.
2008-01-01
High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight, enable higher operating temperatures requiring less cooling and thus leading to increased engine efficiencies. However, these materials are brittle and show degradation with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require a complex multi-step high temperature processing. Since there are uncertainties associated with each of these in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness and loading conditions. In such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer s part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer must tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. The methods to accurately determine the service life of an engine component with associated variability have become increasingly difficult. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique in combination with woven composite micromechanics, structural analysis and Fast Probability Integration (FPI) techniques has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for the ceramic matrix composite properties is very limited, obtaining a probabilistic distribution with their corresponding parameters is difficult. In case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties. Failure properties are then computed with the confidence bounds. 
Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength - stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
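As a minimal illustration of the stress-strength (R-S) reliability idea described above, the following Python sketch draws hypothetical strength and stress samples, estimates the probability of failure P(R - S <= 0) by Monte Carlo, and puts rough bounds on that estimate by repeated sampling. The distributions and their parameters are assumptions for illustration only; they are not the paper's CMC data, and the sketch does not reproduce the FPI or micromechanics machinery.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distributions (illustrative only): strength R ~ Weibull, stress S ~ normal.
def draw_strength(n):
    m, sigma0 = 8.0, 350.0                 # assumed Weibull modulus and scale, MPa
    return sigma0 * rng.weibull(m, n)

def draw_stress(n):
    return rng.normal(220.0, 25.0, n)      # assumed stress mean and std, MPa

def prob_failure(n):
    """Monte Carlo estimate of P(R - S <= 0)."""
    return np.mean(draw_strength(n) - draw_stress(n) <= 0.0)

pf_hat = prob_failure(200_000)

# Repeated small-sample estimates give a rough 90% band on the estimate itself.
reps = np.array([prob_failure(20_000) for _ in range(200)])
lo, hi = np.percentile(reps, [5, 95])
print(f"P_f ~ {pf_hat:.4f} (90% band from repeated sampling: {lo:.4f} to {hi:.4f})")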
NASA Technical Reports Server (NTRS)
Bast, Callie C.; Jurena, Mark T.; Godines, Cody R.; Chamis, Christos C. (Technical Monitor)
2001-01-01
This project included both research and education objectives. The goal of this project was to advance innovative research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction for improved reliability and safety of structural components of aerospace and aircraft propulsion systems. Research and education partners included Glenn Research Center (GRC) and Southwest Research Institute (SwRI) along with the University of Texas at San Antonio (UTSA). SwRI enhanced the NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) code and provided consulting support for NESSUS-related activities at UTSA. NASA funding supported three undergraduate students, two graduate students, a summer course instructor and the Principal Investigator. Matching funds from UTSA provided for the purchase of additional equipment for the enhancement of the Advanced Interactive Computational SGI Lab established during the first year of this Partnership Award to conduct the probabilistic finite element summer courses. The research portion of this report presents the culmination of work performed through the use of the probabilistic finite element program NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) and an embedded Material Strength Degradation (MSD) model. Probabilistic structural analysis provided for quantification of uncertainties associated with the design, thus enabling increased system performance and reliability. The structure examined was a Space Shuttle Main Engine (SSME) fuel turbopump blade. The blade material analyzed was Inconel 718, since the MSD model was previously calibrated for this material. Reliability analysis encompassing the effects of high temperature and high-cycle fatigue yielded a reliability value of 0.99978 using a fully correlated random field for the blade thickness. The reliability did not change significantly for a change in distribution type, except for a change in distribution from Gaussian to Weibull for the centrifugal load. The sensitivity factors determined to be most dominant were the centrifugal loading and the initial strength of the material. These two sensitivity factors were influenced most by a change in distribution type from Gaussian to Weibull. The education portion of this report describes short-term and long-term educational objectives. Such objectives serve to integrate the research and education components of this project, resulting in opportunities for ethnic minority students, principally Hispanic. The primary vehicle to facilitate such integration was the teaching of two probabilistic finite element method courses to undergraduate engineering students in the summers of 1998 and 1999.
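The sensitivity of a computed reliability to the assumed distribution type, noted above for the centrifugal load, can be illustrated with a simple Monte Carlo limit state. The sketch below compares a Gaussian load model against a moment-matched Weibull load model for an assumed strength distribution; all numbers are hypothetical and unrelated to the SSME blade or the NESSUS/MSD models.

import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

rng = np.random.default_rng(1)

def weibull_params(mean, std):
    """Weibull shape and scale matching a given mean and standard deviation."""
    cv2 = (std / mean) ** 2
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.5, 50.0)
    return k, mean / gamma(1 + 1 / k)

n = 500_000
strength = rng.normal(1000.0, 60.0, n)     # assumed material strength, MPa
mu_L, sd_L = 750.0, 40.0                   # assumed centrifugal stress, MPa

k, lam = weibull_params(mu_L, sd_L)
loads = {"Gaussian load": rng.normal(mu_L, sd_L, n),
         "Weibull load":  lam * rng.weibull(k, n)}
for name, load in loads.items():
    print(f"{name}: reliability ~ {np.mean(strength > load):.5f}")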
Vibration-free stirling cryocooler for high definition microscopy
NASA Astrophysics Data System (ADS)
Riabzev, S. V.; Veprik, A. M.; Vilenchik, H. S.; Pundak, N.; Castiel, E.
2009-12-01
The normal operation of high definition Scanning Electron and Helium Ion microscope tools often relies on maintaining particular components at cryogenic temperatures. This has traditionally been accomplished by using liquid coolants such as liquid nitrogen. This inherently limits the useful temperature range to above 77 K, produces various operational hazards and typically involves elevated ownership costs, inconvenient logistics and maintenance. Mechanical coolers, which outperform the traditional method and are capable of delivering the required cooling (even below 77 K) to the components concerned, have been well known elsewhere for many years, but their typical drawbacks, such as high purchase cost, cooler size, low reliability and high power consumption, have so far prevented their widespread use. An additional critical drawback is the inevitable degradation of imaging performance caused by the wideband vibration export typical of mechanical coolers, which incorporate numerous moving components. Recent advances in the development of reliable, compact, reasonably priced and dynamically quiet linear cryogenic coolers gave rise to so-called "dry cooling" technologies aimed at eventually replacing the traditional use of outdated liquid nitrogen cooling facilities. Although much improved, these newer cryogenic coolers still produce relatively high vibration export, which makes them incompatible with modern high definition microscopy tools. This has motivated further research activity towards developing a vibration-free closed-cycle mechanical cryocooler. The authors have successfully adapted the standard low vibration Stirling cryogenic refrigerator (Ricor model K535-LV), delivering 5 W at 40 K heat lift, for use in vibration-sensitive high definition microscopy. This has been achieved by using passive mechanical counterbalancing of the main portion of the low frequency vibration export in combination with active feed-forward multi-axis suppression of the residual wideband vibration, thermo-conductive vibration isolation struts and soft vibration mounts. The attainable performance of the resulting vibration-free linear Stirling cryocooler (Ricor model K535-ULV) is evaluated through full-scale experimentation.
Advanced Stirling Convertor Heater Head Durability and Reliability Quantification
NASA Technical Reports Server (NTRS)
Krause, David L.; Shah, Ashwin R.; Korovaichuk, Igor; Kalluri, Sreeramesh
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for long duration Science missions, such as lunar applications, Mars rovers, and deep space missions, that require reliable design lifetimes of up to 17 years. Resistance to creep deformation of the MarM-247 heater head (HH), a structurally critical component of the ASRG Advanced Stirling Convertor (ASC), under high temperatures (up to 850 C) is a key design driver for durability. Inherent uncertainties in the creep behavior of the thin-walled HH and the variations in the wall thickness, control temperature, and working gas pressure need to be accounted for in the life and reliability prediction. Due to the availability of very limited test data, assuring life and reliability of the HH is a challenging task. The NASA Glenn Research Center (GRC) has adopted an integrated approach combining available uniaxial MarM-247 material behavior testing, HH benchmark testing and advanced analysis in order to demonstrate the integrity, life and reliability of the HH under expected mission conditions. The proposed paper describes analytical aspects of the deterministic and probabilistic approaches and results. The deterministic approach involves development of the creep constitutive model for the MarM-247 (akin to the Oak Ridge National Laboratory master curve model used previously for Inconel 718 (Special Metals Corporation)) and nonlinear finite element analysis to predict the mean life. The probabilistic approach includes evaluation of the effect of design variable uncertainties in material creep behavior, geometry and operating conditions on life and reliability for the expected life. The sensitivity of HH reliability to the uncertainties in the design variables is also quantified, and guidelines to improve reliability are discussed.
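To make the probabilistic creep-life idea concrete, the sketch below propagates assumed scatter in wall stress, control temperature and creep behavior through a generic Larson-Miller-style rupture correlation. The constants are placeholders chosen only to produce plausible magnitudes; they are not the MarM-247 constitutive model or the benchmark data described in the paper.

import numpy as np

rng = np.random.default_rng(2)

# Generic Larson-Miller-style creep-rupture correlation (all constants assumed):
# LMP = T * (C + log10(t_r)), with LMP fitted as a linear function of log stress.
C = 20.0
def rupture_hours(stress_mpa, temp_k, scatter):
    lmp = (33.5 - 3.0 * np.log10(stress_mpa)) * 1000.0 + scatter
    return 10.0 ** (lmp / temp_k - C)

n = 200_000
stress  = rng.normal(40.0, 3.0, n)              # assumed wall stress, MPa
temp    = rng.normal(850.0 + 273.15, 5.0, n)    # control temperature, K
scatter = rng.normal(0.0, 300.0, n)             # assumed material/model scatter in LMP

mission_hours = 17 * 365.25 * 24                # 17-year design life
p_fail = np.mean(rupture_hours(stress, temp, scatter) < mission_hours)
print(f"P(creep-rupture life < 17 yr) ~ {p_fail:.3f}")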
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis system approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations than its hybrid counterpart should serve as evidence enough of the implausibility of such an expectation. This presentation assembles considerable anecdotal evidence to illustrate the fact that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. More simply, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective. More generally, single-analysis systems avoid contradictions such as having to choose one sub-component to generate performance diagnostics for another, possibly not fully consistent, component.
He, Yugui; Feng, Jiwen; Zhang, Zhi; Wang, Chao; Wang, Dong; Chen, Fang; Liu, Maili; Liu, Chaoyang
2015-08-01
High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately -170 for (1)H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo (1)H MRI at 0.35 T.
ERIC Educational Resources Information Center
Ho, Esther Sui Chu; Sum, Kwok Wing
2018-01-01
This study aims to construct and validate the Career and Educational Decision Self-Efficacy Inventory for Secondary Students (CEDSIS) by using a sample of 2,631 students in Hong Kong. Principal component analysis yielded a three-factor structure, which demonstrated good model fit in confirmatory factor analysis. High reliability was found for the…
ERIC Educational Resources Information Center
Echols, Julie M. Young
2010-01-01
Reading proficiency is the goal of many local and national reading initiatives. A key component of these initiatives is accurate and reliable reading assessment. In this high-stakes testing arena, the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) has emerged as a preferred measure for identification of students at risk for reading…
Architecture for Survivable System Processing (ASSP)
NASA Astrophysics Data System (ADS)
Wood, Richard J.
1991-11-01
The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP) and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and are being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.
Architecture for Survivable System Processing (ASSP)
NASA Technical Reports Server (NTRS)
Wood, Richard J.
1991-01-01
The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor lifetimes, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP) and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and are being developed to apply new technology in practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronics Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2010-01-01
This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
Status of the Flooding Fragility Testing Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, C. L.; Savage, B.; Bhandari, B.
2016-06-01
This report provides an update on research addressing nuclear power plant component reliability under flooding conditions. The research includes use of the Component Flooding Evaluation Laboratory (CFEL), where individual components and component subassemblies will be tested to failure under various flooding conditions. The resulting component reliability data can then be incorporated with risk simulation strategies to provide a more thorough representation of overall plant risk. The CFEL development strategy consists of four interleaved phases. Phase 1 addresses design and application of CFEL with water rise and water spray capabilities allowing testing of passive and active components, including fully electrified components. Phase 2 addresses research into wave generation techniques followed by the design and addition of the wave generation capability to CFEL. Phase 3 addresses methodology development activities including small scale component testing, development of full scale component testing protocol, and simulation techniques including Smoothed Particle Hydrodynamics (SPH)-based computer codes. Phase 4 involves full scale component testing, including work on full scale component testing in a surrogate CFEL testing apparatus.
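One common way to turn pass/fail flooding test data of the kind CFEL will generate into a component reliability input is to fit a lognormal fragility curve by maximum likelihood. The sketch below does this for an entirely hypothetical data set (depths, sample sizes and failure counts are invented); it is not the laboratory's actual protocol or simulation strategy.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical test results: flood depth (m), number of components tested, number failed.
depth  = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
tested = np.array([10, 10, 10, 10, 10])
failed = np.array([0, 2, 5, 8, 10])

def neg_log_like(params):
    """Lognormal fragility: P(fail | d) = Phi((ln d - ln theta) / beta)."""
    ln_theta, beta = params
    p = np.clip(norm.cdf((np.log(depth) - ln_theta) / beta), 1e-9, 1 - 1e-9)
    return -np.sum(failed * np.log(p) + (tested - failed) * np.log(1 - p))

res = minimize(neg_log_like, x0=[np.log(1.5), 0.5], method="Nelder-Mead")
theta, beta = np.exp(res.x[0]), res.x[1]
print(f"median flood capacity ~ {theta:.2f} m, lognormal beta ~ {beta:.2f}")
print(f"P(fail) at a 1.2 m flood ~ {norm.cdf((np.log(1.2) - np.log(theta)) / beta):.2f}")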
Missile Systems Maintenance, AFSC 411XOB/C.
1988-04-01
A statistical measurement of their agreement, known as the interrater reliability, was assessed through components of variance of the senior technicians' ratings.
Stirling Convertor Fasteners Reliability Quantification
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.
2006-01-01
Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the behavior of the fastener and joining part materials, the structural geometry of the joining components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.
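A very small example of the kind of uncertainty propagation described above is a Monte Carlo check of a single bolted joint: preload scatter, load scatter and strength scatter are sampled, and the fraction of cases in which the bolt load stays below its capacity is reported. Every number here is an assumption for illustration and has no connection to the actual Stirling convertor fastener design.

import numpy as np

rng = np.random.default_rng(3)

n = 1_000_000
preload   = rng.normal(22.0, 2.0, n)            # kN, assumed torque/friction scatter
ext_load  = rng.lognormal(np.log(8.0), 0.3, n)  # kN, assumed mission load
load_frac = 0.5                                 # assumed joint stiffness ratio
capacity  = rng.normal(32.0, 1.5, n)            # kN, assumed bolt strength

bolt_load = preload + load_frac * ext_load      # simplified joint-diagram load sharing
print(f"fastener reliability ~ {np.mean(bolt_load < capacity):.4f}")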
Hales, M; Biros, E; Reznik, J E
2015-01-01
Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials.
Hales, M.; Biros, E.
2015-01-01
Background: Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. Objectives: To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. Methods: A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Results: Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Conclusions: Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials. PMID:26363591
Improving the Reliability of Technological Subsystems Equipment for Steam Turbine Unit in Operation
NASA Astrophysics Data System (ADS)
Brodov, Yu. M.; Murmansky, B. E.; Aronson, R. T.
2017-11-01
The authors' conception of an integrated approach to improving the reliability of the steam turbine unit (STU) is presented, along with examples of its implementation for the various STU technological subsystems. Based on the statistical analysis of damage to individual turbine parts and components, on the development and application of modern repair methods and technologies, and on operational monitoring techniques, the critical components and elements of the equipment are identified and priorities are proposed for improving the reliability of STU equipment in operation. The results of an analysis of malfunctions in the equipment of various STU technological subsystems, operating as part of power units and at cross-linked thermal power plants and resulting in turbine unit shutdown (failure), are presented. Proposals are formulated and justified for adjusting the maintenance and repair of turbine components and parts, condenser unit equipment, the regeneration subsystem and the oil supply system, permitting increased operational reliability, reduced cost of STU maintenance and repair, and optimized timing and scope of repairs.
PV inverter performance and reliability: What is the role of the bus capacitor?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flicker, Jack; Kaplar, Robert; Marinella, Matthew
In order to elucidate how the degradation of individual components affects the state of the photovoltaic inverter as a whole, we have carried out SPICE simulations to investigate the voltage and current ripple on the DC bus. The bus capacitor is generally considered to be among the least reliable components of the system, so we have simulated how the degradation of bus capacitors affects the AC ripple at the terminals of the PV module. Degradation-induced ripple leads to an increased degradation rate in a positive feedback cycle. Additionally, laboratory experiments are being carried out to ascertain the reliability of metallized thin film capacitors. By understanding the degradation mechanisms and their effects on the inverter as a system, steps can be made to more effectively replace marginal components with more reliable ones, increasing the lifetime and efficiency of the inverter and decreasing its cost per watt towards the US Department of Energy goals.
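The coupling between capacitance loss and bus ripple can be seen with a back-of-envelope estimate (not the SPICE model used in the study): for a single-phase inverter the DC bus absorbs a power ripple at twice the line frequency, giving a peak-to-peak voltage ripple of roughly P / (omega * C * Vdc). The sketch below shows how an assumed aging-related capacitance loss inflates that ripple; the operating point and nominal capacitance are illustrative values.

import numpy as np

P, V_dc, f_line = 5000.0, 400.0, 60.0        # W, V, Hz (assumed operating point)
omega = 2 * np.pi * f_line

def ripple_pp(C):
    """Approximate double-line-frequency bus ripple, peak to peak (V)."""
    return P / (omega * C * V_dc)

C0 = 1.0e-3                                  # F, assumed nominal bus capacitance
for loss in (0.0, 0.1, 0.2, 0.3):            # fractional capacitance loss with aging
    print(f"capacitance down {loss:4.0%}: ripple ~ {ripple_pp(C0 * (1 - loss)):5.1f} Vpp")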
NASA Astrophysics Data System (ADS)
Linke, J.
2006-04-01
The plasma exposed components in existing and future fusion devices are strongly affected by plasma-material interaction processes. These mechanisms have a strong influence on the plasma performance; in addition, they have a major impact on the lifetime of the plasma facing armour and the joining interface between the plasma facing material (PFM) and the heat sink. Besides physical and chemical sputtering processes, quasi-stationary high heat fluxes during normal operation and intense thermal transients are of serious concern for the engineers who develop reliable wall components. In addition, the material and component degradation due to intense fluxes of energetic neutrons is another critical issue in D-T-burning fusion devices which requires extensive R&D. This paper presents an overview of materials development and joining, the testing of PFMs and components, and the analysis of neutron irradiation induced degradation.
Health Monitoring System for Composite Structures
NASA Technical Reports Server (NTRS)
Tang, S. S.; Riccardella, P. C.; Andrews, R. J.; Grady, J. E.; Mucciaradi, A. N.
1996-01-01
An automated system was developed to monitor the health status of composites. It uses the vibration characteristics of composites to identify a component's damage condition. The vibration responses are characterized by a set of signal features defined in the time, frequency and spatial domains. The identification of these changes in the vibration characteristics corresponding to different health conditions was performed using pattern recognition principles. This allows efficient data reduction and interpretation of vast amounts of information. Test components were manufactured from isogrid panels to evaluate performance of the monitoring system. The components were damaged by impact to simulate different health conditions. Free vibration response was induced by a tap test on the test components. The monitoring system was trained using these free vibration responses to identify three different health conditions. They are undamaged vs. damaged, damage location and damage zone size. High reliability in identifying the correct component health condition was achieved by the monitoring system.
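A toy version of the vibration-signature approach can be written in a few lines: extract a small feature vector (dominant frequency, RMS level, a crude decay measure) from each tap response and classify new responses by the nearest class centroid. The synthetic signals, feature set and two-class labels below are assumptions chosen only to illustrate the pattern-recognition step, not the isogrid test data or the actual feature definitions used in the system.

import numpy as np

rng = np.random.default_rng(4)

def features(signal, fs):
    """Dominant frequency, RMS level and a crude decay measure of a response."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    peak_freq = np.fft.rfftfreq(len(signal), 1 / fs)[np.argmax(spec)]
    rms = np.sqrt(np.mean(signal ** 2))
    half = len(signal) // 2
    decay = np.log(np.std(signal[:half]) / (np.std(signal[half:]) + 1e-12))
    return np.array([peak_freq, rms, decay])

def tap_response(f0, zeta, fs=2000, T=1.0):
    """Synthetic single-mode free decay plus measurement noise."""
    t = np.arange(0, T, 1 / fs)
    return np.exp(-zeta * 2 * np.pi * f0 * t) * np.sin(2 * np.pi * f0 * t) \
           + 0.02 * rng.standard_normal(t.size)

# Assumed effect of damage: natural frequency drops, damping rises.
train = {"undamaged": [features(tap_response(120.0, 0.01), 2000) for _ in range(20)],
         "damaged":   [features(tap_response(110.0, 0.03), 2000) for _ in range(20)]}
centroids = {k: np.mean(v, axis=0) for k, v in train.items()}
scales = np.std(np.vstack(train["undamaged"] + train["damaged"]), axis=0)

def classify(signal, fs=2000):
    f = features(signal, fs)
    return min(centroids, key=lambda k: np.linalg.norm((f - centroids[k]) / scales))

print(classify(tap_response(112.0, 0.025)))   # a response resembling the damaged class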
Laser Scanner For Automatic Inspection Of Printed Wiring Boards
NASA Astrophysics Data System (ADS)
Geise, Philip; George, Eugene; Freese, Fritz; Brown, Robert; Ruwe, Victor
1980-11-01
An instrument is described which inspects unpopulated, populated (components inserted and leads clinched), and soldered printed wiring boards for correct hole location, component presence, correct lead clinch direction and solder bridges. The instrument consists of a low power helium-neon laser, an x-y moving iron galvanometer scanner and several folding mirrors. A unique shadow signature is detected by silicon photodiodes located at the optimum geometry to allow rapid and reliable detection of components with correctly clinched leads. A reflective glint screen is utilized to inspect for solder bridges. The detected signals are processed and evaluated by a minicomputer which also controls the scan inspection rate of at least 25 components or 50 component holes per second. The return on investment on this instrument for high volume production of printed wiring boards is less than one year, and only slightly longer for medium run military applications.
Yang, Xing-Xin; Zhang, Xiao-Xia; Chang, Rui-Miao; Wang, Yan-Wei; Li, Xiao-Ni
2011-01-01
A simple and reliable high performance liquid chromatography (HPLC) method has been developed for the simultaneous quantification of five major bioactive components in ‘Shu-Jin-Zhi-Tong’ capsules (SJZTC), for the purposes of quality control of this commonly prescribed traditional Chinese medicine. Under the optimum conditions, excellent separation was achieved, and the assay was fully validated in terms of linearity, precision, repeatability, stability and accuracy. The validated method was applied successfully to the determination of the five compounds in SJZTC samples from different production batches. The HPLC method can be used as a valid analytical method to evaluate the intrinsic quality of SJZTC. PMID:29403711
Test-retest and interrater reliability of the functional lower extremity evaluation.
Haitz, Karyn; Shultz, Rebecca; Hodgins, Melissa; Matheson, Gordon O
2014-12-01
Repeated-measures clinical measurement reliability study. To establish the reliability and face validity of the Functional Lower Extremity Evaluation (FLEE). The FLEE is a 45-minute battery of 8 standardized functional performance tests that measures 3 components of lower extremity function: control, power, and endurance. The reliability and normative values for the FLEE in healthy athletes are unknown. A face validity survey for the FLEE was sent to sports medicine personnel to evaluate the level of importance and frequency of clinical usage of each test included in the FLEE. The FLEE was then administered and rated for 40 uninjured athletes. To assess test-retest reliability, each athlete was tested twice, 1 week apart, by the same rater. To assess interrater reliability, 3 raters scored each athlete during 1 of the testing sessions. Intraclass correlation coefficients were used to assess the test-retest and interrater reliability of each of the FLEE tests. In the face validity survey, the FLEE tests were rated as highly important by 58% to 71% of respondents but frequently used by only 26% to 45% of respondents. Interrater reliability intraclass correlation coefficients ranged from 0.83 to 1.00, and test-retest reliability ranged from 0.71 to 0.95. The FLEE tests are considered clinically important for assessing lower extremity function by sports medicine personnel but are underused. The FLEE also is a reliable assessment tool. Future studies are required to determine if use of the FLEE to make return-to-play decisions may reduce reinjury rates.
Overview of Lightweight Structures for Rotorcraft Engines and Drivetrains
NASA Technical Reports Server (NTRS)
Roberts, Gary D.
2011-01-01
This is an overview presentation of research being performed in the Advanced Materials Task within the NASA Subsonic Rotary Wing Project. This research is focused on technology areas that address both national goals and project goals for advanced rotorcraft. Specific technology areas discussed are: (1) high temperature materials for advanced turbines in turboshaft engines; (2) polymer matrix composites for lightweight drive system components; (3) lightweight structure approaches for noise and vibration control; and (4) an advanced metal alloy for lighter weight bearings and more reliable mechanical components. An overview of the technology in each area is discussed, and recent accomplishments are presented.
Time-dependent reliability analysis of ceramic engine components
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
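For a uniformly stressed volume with two-parameter Weibull strength and power-law slow crack growth, the time-dependent failure probability under constant stress has a simple closed form that conveys the kind of quantity CARES/LIFE evaluates element by element. The parameters below (Weibull modulus and scale, SCG exponent and B parameter) are assumed placeholders, not values from any material database, and the expression is the textbook uniaxial, uniform-stress case rather than the code's full multiaxial formulation.

import numpy as np

# Time-dependent failure probability for a uniformly stressed ceramic volume:
# strength ~ two-parameter Weibull, subcritical crack growth ~ power law.
m, sigma_0 = 10.0, 400.0     # assumed Weibull modulus and scale parameter (MPa)
V          = 1.0             # stressed volume, normalized
N, B       = 20.0, 1.0e5     # assumed SCG exponent and B parameter (MPa^2 h)

def p_fail(sigma, t_hours):
    """P_f after t_hours at constant applied stress sigma (MPa)."""
    s_eq = (sigma ** N * t_hours / B + sigma ** (N - 2)) ** (1.0 / (N - 2))
    return 1.0 - np.exp(-V * (s_eq / sigma_0) ** m)

for t in (1.0, 100.0, 10_000.0):
    print(f"P_f at 200 MPa after {t:>8.0f} h: {p_fail(200.0, t):.3e}")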
R&D of high reliable refrigeration system for superconducting generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosoya, T.; Shindo, S.; Yaguchi, H.
1996-12-31
Super-GM carries out R&D of 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as the compressor unit, higher temperature cold box and lower temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development target of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all the different modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have made performance tests and system performance analyses of oil free screw type and turbo type compressors which greatly improve the reliability of conventional refrigeration systems. The operation performance and operational control method of the compressors have been clarified through the tests and analysis.
Venkataraman, Vinay; Turaga, Pavan; Baran, Michael; Lehrer, Nicole; Du, Tingfang; Cheng, Long; Rikakis, Thanassis; Wolf, Steven L.
2016-01-01
In this paper, we propose a general framework for tuning component-level kinematic features using therapists’ overall impressions of movement quality, in the context of a Home-based Adaptive Mixed Reality Rehabilitation (HAMRR) system. We propose a linear combination of non-linear kinematic features to model wrist movement, and propose an approach to learn feature thresholds and weights using high-level labels of overall movement quality provided by a therapist. The kinematic features are chosen such that they relate the quality of wrist movements to clinical assessment scores. Further, the proposed features are designed to be reliably extracted from an inexpensive and portable motion capture system using a single reflective marker on the wrist. Using a dataset collected from ten stroke survivors, we demonstrate that the framework can be reliably used for movement quality assessment in HAMRR systems. The system is currently being deployed for large-scale evaluations, and will represent an increasingly important application area of motion capture and activity analysis. PMID:25438331
NASA Astrophysics Data System (ADS)
Price, Aaron
2010-01-01
Citizen Sky is a new three-year, astronomical citizen science project launched in June, 2009 with funding from the National Science Foundation. This paper reports on early results of an assessment delivered to 1000 participants when they first joined the project. The goal of the assessment, based on the Nature of Scientific Knowledge Scale (NSKS), is to characterize their attitudes towards the nature of scientific knowledge. Our results are that the NSKS components of the assessment achieved high levels of reliability. Both reliability and overall scores fall within the range reported from other NSKS studies in the literature. Correlation analysis with other components of the assessment reveals some factors, such as age and understanding of scientific evidence, may be reflected in scores of subscales of NSKS items. Further work will be done using online discourse analysis and interviews. Overall, we find that the NSKS can be used as an entrance assessment for an online citizen science project.
Improving the reliability of automated non-destructive inspection
NASA Astrophysics Data System (ADS)
Brierley, N.; Tippetts, T.; Cawley, P.
2014-02-01
In automated NDE a region of an inspected component is often interrogated several times, be it within a single data channel, across multiple channels or over the course of repeated inspections. The systematic combination of these diverse readings is recognized to provide a means to improve the reliability of the inspection, for example by enabling noise suppression. Specifically, such data fusion makes it possible to declare regions of the component defect-free to a very high probability whilst readily identifying indications. Registration, aligning input datasets to a common coordinate system, is a critical pre-computation before meaningful data fusion takes place. A novel scheme based on a multiobjective optimization is described. The developed data fusion framework, that is able to identify and rate possible indications in the dataset probabilistically, based on local data statistics, is outlined. The process is demonstrated on large data sets from the industrial ultrasonic testing of aerospace turbine disks, with major improvements in the probability of detection and probability of false call being obtained.
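The noise-suppression benefit of fusing repeated, co-registered readings can be demonstrated with synthetic data: averaging n independent channels reduces the noise standard deviation by sqrt(n), so a fixed detection threshold yields a higher probability of detection at the same false-call rate. The channel count, flaw amplitude, noise level and threshold below are all assumptions; the sketch is not the probabilistic fusion framework described in the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Synthetic co-registered readings: 3 channels x 200 positions, one weak flaw at index 120.
n_pos, n_ch, noise_sd = 200, 3, 1.0
truth = np.zeros(n_pos)
truth[120] = 3.5                                           # assumed flaw amplitude
scans = truth + noise_sd * rng.standard_normal((n_ch, n_pos))

threshold = 4.0                                            # z-score needed to call an indication
single_z = scans / noise_sd                                # each channel alone
fused_z = scans.mean(axis=0) / (noise_sd / np.sqrt(n_ch))  # fused reading, reduced noise

print("channels that detect the flaw on their own:",
      int(np.sum(single_z[:, 120] > threshold)), "of", n_ch)
print("flaw detected after fusion:", bool(fused_z[120] > threshold))
print("false calls after fusion:", int(np.sum(np.delete(fused_z, 120) > threshold)))
print(f"nominal false-call probability per position: {1 - norm.cdf(threshold):.1e}")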
X-33/RLV System Health Management/ Vehicle Health Management
NASA Technical Reports Server (NTRS)
Garbos, Raymond J.; Mouyos, William
1998-01-01
To reduce operations cost, the RLV must include the following elements: highly reliable, robust subsystems designed for simple repair access with a simplified servicing infrastructure and incorporating expedited decision making about faults and anomalies. A key component for the Single Stage to Orbit (SSTO) RLV system used to meet these objectives is System Health Management (SHM). SHM deals with the vehicle component, Vehicle Health Management (VHM); the ground processing associated with the fleet (GVHM); and Ground Infrastructure Health Management (GIHM). The objective is to provide an automated collection and paperless health decision, maintenance and logistics system. Many critical technologies are necessary to make SHM (and more specifically VHM) practical, reliable and cost effective. Sanders is leading the design, development and integration of the SHM system for the RLV and for X-33 SHM (a sub-scale, sub-orbital Advanced Technology Demonstrator). This paper will present the X-33 SHM design, which forms the baseline for RLV SHM. This paper will also discuss other applications of these technologies.
ERIC Educational Resources Information Center
Menold, Natalja; Raykov, Tenko
2016-01-01
This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the…
Huang, Wenhao; Chapman-Novakofski, Karen M
2017-01-01
Background: The extensive availability and increasing use of mobile apps for nutrition-based health interventions makes evaluation of the quality of these apps crucial for integration of apps into nutritional counseling. Objective: The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps’ educational quality and technical functionality. Methods: Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing AQEL into 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience as selected by the evaluator were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Results: Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in principal component analysis. Internal consistency and split-half reliability of the following constructs derived from principal component analysis was good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. App purpose split-half reliability was .65. Test-retest reliability showed no significant change over time (P>.05) for all but skill development (P=.001). Construct reliability was good for items assessing age appropriateness of apps for children, teens, and a general audience. In addition, construct reliability was acceptable for assessing app appropriateness for various target audiences (Cronbach alpha >.70). For the 5 main factors, ICC (1,k) was >.80, with a P value of <.05. When 15 nutrition professionals evaluated one app, ICC (2,15) was .98, with a P value of <.001 for all 7 constructs when the modifiable items were specified for adults seeking weight loss support. Conclusions: Our preliminary effort shows that AQEL is a valid, reliable instrument for evaluating nutrition apps’ qualities for clinical interventions by nutrition clinicians, educators, and researchers. Further efforts in validating AQEL in various contexts are needed. PMID:29079554
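The internal-consistency statistics reported above are straightforward to compute once a respondents-by-items score matrix is available. The sketch below implements Cronbach's alpha and a Spearman-Brown corrected split-half coefficient and applies them to synthetic ratings (150 respondents, 6 items on a 1-5 scale); the data are invented and unrelated to the AQEL responses.

import numpy as np

rng = np.random.default_rng(6)

def cronbach_alpha(X):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def split_half(X):
    """Spearman-Brown corrected split-half reliability (odd vs even items)."""
    a, b = X[:, 0::2].sum(axis=1), X[:, 1::2].sum(axis=1)
    r = np.corrcoef(a, b)[0, 1]
    return 2 * r / (1 + r)

# Synthetic ratings driven by a common factor, rounded to a 1-5 scale.
n_resp, n_items = 150, 6
latent = rng.normal(0.0, 1.0, n_resp)
X = np.clip(np.round(3 + latent[:, None] + 0.8 * rng.standard_normal((n_resp, n_items))), 1, 5)

print(f"Cronbach's alpha ~ {cronbach_alpha(X):.2f}")
print(f"split-half (Spearman-Brown) ~ {split_half(X):.2f}")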
NASA Technical Reports Server (NTRS)
1993-01-01
The Marshall Space Flight Center is responsible for the development and management of advanced launch vehicle propulsion systems, including the Space Shuttle Main Engine (SSME), which is presently operational, and the Space Transportation Main Engine (STME) under development. The SSMEs provide high performance within stringent constraints on size, weight, and reliability. Based on operational experience, continuous design improvement is in progress to enhance system durability and reliability. Specialized data analysis and interpretation is required in support of SSME and advanced propulsion system diagnostic evaluations. Comprehensive evaluation of the dynamic measurements obtained from test and flight operations is necessary to provide timely assessment of the vibrational characteristics indicating the operational status of turbomachinery and other critical engine components. Efficient performance of this effort is critical due to the significant impact of dynamic evaluation results on ground test and launch schedules, and requires direct familiarity with SSME and derivative systems, test data acquisition, and diagnostic software. Detailed analysis and evaluation of dynamic measurements obtained during SSME and advanced system ground test and flight operations was performed, including analytical/statistical assessment of component dynamic behavior, and the development and implementation of analytical/statistical models to efficiently define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational condition. In addition, the SSME and J-2 data will be applied to develop vibroacoustic environments for advanced propulsion system components, as required. This study will provide timely assessment of engine component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. This contract will be performed through accomplishment of negotiated task orders.
Damschroder, Laura J; Goodrich, David E; Kim, Hyungjin Myra; Holleman, Robert; Gillon, Leah; Kirsh, Susan; Richardson, Caroline R; Lutes, Lesley D
2016-09-01
Practical and valid instruments are needed to assess fidelity of coaching for weight loss. The purpose of this study was to develop and validate the ASPIRE Coaching Fidelity Checklist (ACFC). Classical test theory guided ACFC development. Principal component analyses were used to determine item groupings. Psychometric properties, internal consistency, and inter-rater reliability were evaluated for each subscale. Criterion validity was tested by predicting weight loss as a function of coaching fidelity. The final 19-item ACFC consists of two domains (session process and session structure) and five subscales (sets goals and monitor progress, assess and personalize self-regulatory content, manages the session, creates a supportive and empathetic climate, and stays on track). Four of five subscales showed high internal consistency (Cronbach alphas > 0.70) for group-based coaching; only two of five subscales had high internal reliability for phone-based coaching. All five sub-scales were positively and significantly associated with weight loss for group- but not for phone-based coaching. The ACFC is a reliable and valid instrument that can be used to assess fidelity and guide skill-building for weight management interventionists.
Transient Reliability Analysis Capability Developed for CARES/Life
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
2001-01-01
The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of the failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications applied loads are rarely that simple but vary with time in more complex ways, such as engine startup, shutdown, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, components undergoing thermal shock. In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.
Strategies and Approaches to TPS Design
NASA Technical Reports Server (NTRS)
Kolodziej, Paul
2005-01-01
Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
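When both the thermal load and the TPS capability can be treated as normally distributed, the load-interference reliability reduces to a closed form through the reliability index beta. The sketch below evaluates that form for hypothetical bondline-temperature numbers; it is only a simplified illustration of the interference idea, not the NDLI method or the carbon phenolic sensitivity study described in the lecture.

import math

def interference_reliability(mu_cap, sd_cap, mu_load, sd_load):
    """Reliability P(capability > load) for independent normal capability and load."""
    beta = (mu_cap - mu_load) / math.sqrt(sd_cap ** 2 + sd_load ** 2)
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0))), beta

# Hypothetical TPS margin: allowable bondline temperature vs predicted peak (deg C).
R, beta = interference_reliability(mu_cap=320.0, sd_cap=15.0, mu_load=250.0, sd_load=25.0)
print(f"reliability index beta ~ {beta:.2f}, thermal reliability ~ {R:.5f}")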
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
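A stripped-down version of the aggregation step reads a library of low-level component models and an architecture description and combines them recursively. In the sketch below each component is reduced to a constant failure rate and the architecture is a nesting of series and parallel groups; this is an assumption-laden illustration of the idea, not the patented generator or the reliability evaluation tool it feeds.

import math

# Assumed failure rates (per hour) standing in for stored low-level reliability models.
component_library = {"cpu": 1e-5, "bus": 2e-6, "sensor": 5e-6, "actuator": 8e-6}

def reliability(node, t_hours):
    """node is a component name or a ('series' | 'parallel', [children]) tuple."""
    if isinstance(node, str):
        return math.exp(-component_library[node] * t_hours)     # exponential model
    kind, children = node
    rs = [reliability(child, t_hours) for child in children]
    if kind == "series":
        return math.prod(rs)                                    # all must survive
    return 1.0 - math.prod(1.0 - r for r in rs)                 # any one suffices

# Architecture description: duplex CPUs behind a shared bus, plus a sensor/actuator chain.
system = ("series", ["bus", ("parallel", ["cpu", "cpu"]), "sensor", "actuator"])
print(f"10,000 h system reliability ~ {reliability(system, 1.0e4):.5f}")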
Nadkarni, Lindsay D; Roskind, Cindy G; Auerbach, Marc A; Calhoun, Aaron W; Adler, Mark D; Kessler, David O
2018-04-01
The aim of this study was to assess the validity of a formative feedback instrument for leaders of simulated resuscitations. This is a prospective validation study with a fully crossed (person × scenario × rater) study design. The Concise Assessment of Leader Management (CALM) instrument was designed by pediatric emergency medicine and graduate medical education experts to be used off the shelf to evaluate and provide formative feedback to resuscitation leaders. Four experts reviewed 16 videos of in situ simulated pediatric resuscitations and scored resuscitation leader performance using the CALM instrument. The videos consisted of 4 pediatric emergency department resuscitation teams each performing in 4 pediatric resuscitation scenarios (cardiac arrest, respiratory arrest, seizure, and sepsis). We report on content and internal structure (reliability) validity of the CALM instrument. Content validity was supported by the instrument development process that involved professional experience, expert consensus, focused literature review, and pilot testing. Internal structure validity (reliability) was supported by the generalizability analysis. The main component that contributed to score variability was the person (33%), meaning that individual leaders performed differently. The rater component had almost zero (0%) contribution to variance, which implies that raters were in agreement and argues for high interrater reliability. These results provide initial evidence to support the validity of the CALM instrument as a reliable assessment instrument that can facilitate formative feedback to leaders of pediatric simulated resuscitations.
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing-distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite to use Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
2003-01-01
This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X
2016-11-21
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using the Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with the Oculus Rift and a standard CRT computer screen. Our results show that the Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that the Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. The Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yugui; Liu, Chaoyang, E-mail: chyliu@wipm.ac.cn; State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071
2015-08-15
High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately −170 for ¹H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo ¹H MRI at 0.35 T.
Advanced Signal Conditioners for Data-Acquisition Systems
NASA Technical Reports Server (NTRS)
Lucena, Angel; Perotti, Jose; Eckhoff, Anthony; Medelius, Pedro
2004-01-01
Signal conditioners embodying advanced concepts in analog and digital electronic circuitry and software have been developed for use in data-acquisition systems that are required to be compact and lightweight, to utilize electric energy efficiently, and to operate with high reliability, high accuracy, and high power efficiency, without intervention by human technicians. These signal conditioners were originally intended for use aboard spacecraft. There are also numerous potential terrestrial uses - especially in the fields of aeronautics and medicine, wherein it is necessary to monitor critical functions. Going beyond the usual analog and digital signal-processing functions of prior signal conditioners, the new signal conditioner performs the following additional functions: It continuously diagnoses its own electronic circuitry, so that it can detect failures and repair itself (as described below) within seconds. It continuously calibrates itself on the basis of a highly accurate and stable voltage reference, so that it can continue to generate accurate measurement data, even under extreme environmental conditions. It repairs itself in the sense that it contains a micro-controller that reroutes signals among redundant components as needed to maintain the ability to perform accurate and stable measurements. It detects deterioration of components, predicts future failures, and/or detects imminent failures by means of a real-time analysis in which, among other things, data on its present state are continuously compared with locally stored historical data. It minimizes unnecessary consumption of electric energy. The design architecture divides the signal conditioner into three main sections: an analog signal section, a digital module, and a power-management section. The design of the analog signal section does not follow the traditional approach of ensuring reliability through total redundancy of hardware: Instead, following an approach called spare parts tool box, the reliability of each component is assessed in terms of such considerations as risks of damage, mean times between failures, and the effects of certain failures on the performance of the signal conditioner as a whole system. Then, fewer or more spares are assigned for each affected component, pursuant to the results of this analysis, in order to obtain the required degree of reliability of the signal conditioner as a whole system. The digital module comprises one or more processors and field-programmable gate arrays, the number of each depending on the results of the aforementioned analysis. The digital module provides redundant control, monitoring, and processing of several analog signals. It is designed to minimize unnecessary consumption of electric energy, including, when possible, going into a low-power "sleep" mode that is implemented in firmware. The digital module communicates with external equipment via a personal-computer serial port. The digital module monitors the "health" of the rest of the signal conditioner by processing defined measurements and/or trends. It automatically makes adjustments to respond to channel failures, compensate for effects of temperature, and maintain calibration.
1980-04-01
incorporate the high reliability ceramic-packaged quartz crystal resonator developed at ERADCOM, and utilize beam-leaded devices wherever possible ... the form of a truncated cylinder. The rather complex module outline is best accomplished through the use of a precast potting shell filled with a low ... crossover connections are achieved by means of thick-film dielectric material. Chip components attached to the metallized substrate complete the circuits
Handbook of experiences in the design and installation of solar heating and cooling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, D.S.; Oberoi, H.S.
1980-07-01
A large array of problems encountered is detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, piping/ducting, and reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations. (MHR)
NASA Technical Reports Server (NTRS)
White, Mark
2012-01-01
The recently launched Mars Science Laboratory (MSL) flagship mission, named Curiosity, is the most complex rover ever built by NASA and is scheduled to touch down on the red planet in August 2012 in Gale Crater. The rover and its instruments will have to endure the harsh environments of the surface of Mars to fulfill the mission's main science objectives. Such complex systems require reliable microelectronic components coupled with adequate component and system-level design margins. Reliability aspects of these elements of the spacecraft system are presented from bottom-up and top-down perspectives.
NASA Technical Reports Server (NTRS)
Matlock, Steve
2001-01-01
This is the final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), component level reliability/performance/cost for the six baseline cycles, and selection of three cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans and recommendations for future studies.
Transient Reliability of Ceramic Structures For Heat Engine Applications
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama M.
2002-01-01
The objective of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability in ceramic components subjected to fluctuating thermomechanical loading was developed, assuming SCG (slow crack growth) as the delayed mode of failure. It takes into account the effect of the Weibull modulus and material response varying with time. It was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
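The abstract above lists software reliability growth among the problem areas. As a hedged sketch of one standard model often used for that purpose (not necessarily the one applied within RICIS), the following Python snippet evaluates the Goel-Okumoto non-homogeneous Poisson process and the resulting conditional reliability for an additional period of failure-free operation; the parameter values are purely illustrative.

```python
import math

def go_expected_failures(t, a, b):
    """Expected cumulative number of software failures by test time t under the
    Goel-Okumoto model; a = total expected latent faults, b = detection rate."""
    return a * (1.0 - math.exp(-b * t))

def go_conditional_reliability(x, t, a, b):
    """Probability of failure-free operation for an additional time x, given
    testing up to time t (standard NHPP result)."""
    return math.exp(-(go_expected_failures(t + x, a, b) -
                      go_expected_failures(t, a, b)))

# Illustrative parameters only: 120 latent faults, detection rate 0.02 per hour
print(go_expected_failures(100.0, a=120.0, b=0.02))        # faults expected so far
print(go_conditional_reliability(10.0, 100.0, a=120.0, b=0.02))
```

A decision rule of the kind mentioned in the abstract (continue or stop testing) could then compare the conditional reliability against a release target.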
Development of tungsten armor and bonding to copper for plasma-interactive components
NASA Astrophysics Data System (ADS)
Smid, I.; Akiba, M.; Vieider, G.; Plöchl, L.
1998-10-01
Because it has the highest sputtering threshold of all possible candidates, tungsten will be the most likely armor material in highly loaded plasma-interactive components of commercially relevant fusion reactors. The development of new materials, as well as of joining and coating techniques, is needed to find the best balance among plasma compatibility, lifetime, reliability, neutron irradiation resistance, and safety. Further important issues for selection are availability, costs of machining and production, etc. Tungsten doped with lanthanum oxide is a commercially available W grade for electrodes, designed for low electron work function, higher recrystallization temperature, reduced secondary grain growth, and machinability at relatively low costs. W-Re and related tungsten base alloys are preferred for application at high temperatures, when high strength, high thermal shock and recrystallization resistance are required. Due to the high costs and limited global availability of Re, however, the amount of such alloys in a commercial reactor should be kept low. Newly measured material properties up to high temperatures are presented for lanthanated and W-Re alloys, and the impact on fusion application is discussed. Recently developed coatings of chemical vapor deposited tungsten (CVD-W) on copper substrates have proven to be resistant to repeated thermal and shock loading. Layers of more than 5 mm, as required for the International Thermonuclear Experimental Reactor (ITER), became available. Vacuum plasma sprayed tungsten (VPS-W) in particular is attractive for its lower costs and the potential for in situ repair. However, the advantage of sacrificial plasma-interactive tungsten coatings in long-term fusion devices has yet to be demonstrated. A durable and reliable joining of bulk tungsten to copper is needed to achieve an acceptable component lifetime in a fusion environment. The material properties of the copper alloys proposed for ITER, and their impact on the quality of bonding to tungsten, are discussed. Future materials R&D should concern issues such as plasma compatibility, and above all neutron irradiation damage of promising tungsten-copper joints.
Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).
Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K
2013-02-01
We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.
NASA Astrophysics Data System (ADS)
Collmann, Jeff R.
2003-05-01
This paper justifies and explains current efforts in the Military Health System (MHS) to enhance information assurance in light of the sociological debate between "Normal Accident" (NAT) and "High Reliability" (HRT) theorists. NAT argues that complex systems such as enterprise health information systems display multiple, interdependent interactions among diverse parts that potentially manifest unfamiliar, unplanned, or unexpected sequences that operators may not perceive or immediately understand, especially during emergencies. If the system functions rapidly with few breaks in time, space or process development, the effects of single failures ramify before operators understand or gain control of the incident thus producing catastrophic accidents. HRT counters that organizations with strong leadership support, continuous training, redundant safety features and "cultures of high reliability" contain the effects of component failures even in complex, tightly coupled systems. Building highly integrated, enterprise-wide computerized health information management systems risks creating the conditions for catastrophic breaches of data security as argued by NAT. The data security regulations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) implicitly depend on the premises of High Reliability Theorists. Limitations in HRT thus have implications for both safe program design and compliance efforts. MHS and other health care organizations should consider both NAT and HRT when designing and deploying enterprise-wide computerized health information systems.
Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, D.; Brunett, A.; Passerini, S.
Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.
NASA Astrophysics Data System (ADS)
Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue
2012-05-01
In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design for general shared protection systems in which the failed units are not repaired.
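The closed-form expressions are not reproduced in the abstract, but the setting is easy to explore numerically. The following Monte Carlo sketch estimates the reliability and MTTF perceived by one end user under the stated assumptions (exponential unit lifetimes, instantaneous replacement from a shared pool of M non-repairable spares, cold spares that do not fail while idle); the failure rate and mission time are illustrative, and the simulation is an independent check rather than the paper's analysis.

```python
import random

def user_lifetime(n, m, lam, rng):
    """Time until the observed user's unit fails with no shared spare left.
    n active units, m shared cold spares, exponential failure rate lam."""
    t, spares, active = 0.0, m, n
    while True:
        t += rng.expovariate(active * lam)   # next failure among active units
        users_unit = rng.random() < 1.0 / active
        if spares > 0:
            spares -= 1                      # failed unit replaced instantly
            continue
        if users_unit:
            return t                         # user is down: no spare available
        active -= 1                          # another unit stays down; keep going

def simulate(n, m, lam, mission_time, runs=50000, seed=7):
    rng = random.Random(seed)
    lifetimes = [user_lifetime(n, m, lam, rng) for _ in range(runs)]
    mttf = sum(lifetimes) / runs
    reliability = sum(1 for t in lifetimes if t > mission_time) / runs
    return mttf, reliability

# Illustrative: 8 active units, 2 shared spares, 1e-4 failures/hour, 1-year mission
print(simulate(n=8, m=2, lam=1.0e-4, mission_time=8760.0))
```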
Fry, Craig L; Lintzeris, Nick
2003-02-01
To develop a standard measure of blood-borne virus transmission risk behaviour, and examine the underlying psychometric properties. The Blood-borne Virus Transmission Risk Assessment Questionnaire (BBV-TRAQ) was developed over three consecutive phases of the original BBV-TRAQ study in adherence to classical scale development procedures, culminating in the recruitment of a development sample of current injecting drug users via convenience and snowball sampling. Needle and syringe programmes (NSPs), medical clinics, alcohol/drug agencies, peer-based and outreach organizations across inner and outer metropolitan Melbourne. Two hundred and nine current injecting drug users. The mean age was 27 years, 68% were male, 65% unemployed, 36% with prison history and 25% in methadone maintenance. BBV-TRAQ items cover specific injecting, sexual and skin penetration risk practices. BBV-TRAQ characteristics were assessed via measures of internal and test-retest reliability; collateral validation; and principal components analyses. The BBV-TRAQ has satisfactory psychometric properties. Internal (a=0.87), test-retest (r=0.84) and inter-observer reliability results were high, suggesting that the instrument provides a reliable measure of BBV risk behaviour and is reliable over time and across interviewers. A principal components analysis with varimax rotation produced a parsimonious factor solution despite modest communality, and indicated that three factors (injecting, sex and skin penetration/hygiene risks) are required to describe BBV risk behaviour. The BBV-TRAQ is reliable and represents the first risk assessment tool to incorporate sufficient coverage of injecting, sex and other skin penetration risk practices to be considered truly content valid. The questionnaire is indicated for use in addictions research, clinical, peer education and BBV risk behaviour surveillance settings.
Tabbakh, Tamara; Freeland-Graves, Jeanne
2016-08-01
The home environment is an important setting for the development of weight status in adolescence. At present a limited number of valid and reliable tools are available to evaluate the weight-related comprehensive home environment of this population. The goal of this research was to develop the Multidimensional Home Environment Scale which measures multiple components of the home. It includes psychological, social, and environmental domains from the perspective of an adolescent and the mother. Items were generated based on a literature review and then assessed for content validity by an expert panel and focus group in the target population. Internal consistency reliability was determined using Cronbach's α. Principal components analysis with varimax rotation was employed for assessment of construct validity. Temporal stability was evaluated using paired sample t-tests and bivariate correlations between responses at two different times, 1-2 weeks apart. Associations between adolescent and mother responses were utilized for convergent validity. The final versions contained 32 items for adolescents and 36 items for mothers; these were administered to 218 adolescents and mothers. The subscales on the questionnaires exhibited high construct validity, internal consistency reliability (adolescent: α=0.82, mother: α=0.83) and test-retest reliability (adolescent: r=0.90, p<0.01; mother: r=0.91, p<0.01). Total home environment scores were computed, with greater scores reflecting a better health environment. These results verify the utility of the MHES as a valid and reliable instrument. This promising tool can be utilized to capture the comprehensive home environment of young adolescents (11-14 years old). Copyright © 2016 Elsevier Ltd. All rights reserved.
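The internal consistency values quoted above are Cronbach's α coefficients. Purely as a reminder of the standard formula, and not a re-analysis of the MHES data, the following Python sketch computes α for a synthetic respondent-by-item score matrix; the data are randomly generated and hypothetical.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(50, 10))   # 50 respondents, 10 Likert items
# Independent random items, so alpha will be near zero; a usable scale such as
# the one described above should score far higher (e.g., above 0.8).
print(cronbach_alpha(demo))
```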
Ronald, Angelica; Sieradzka, Dominika; Cardno, Alastair G.; Haworth, Claire M. A.; McGuire, Philip; Freeman, Daniel
2014-01-01
We aimed to characterize multiple psychotic experiences, each assessed on a spectrum of severity (ie, quantitatively), in a general population sample of adolescents. Over five thousand 16-year-old twins and their parents completed the newly devised Specific Psychotic Experiences Questionnaire (SPEQ); a subsample repeated it approximately 9 months later. SPEQ was investigated in terms of factor structure, intersubscale correlations, frequency of endorsement and reported distress, reliability and validity, associations with traits of anxiety, depression and personality, and sex differences. Principal component analysis revealed a 6-component solution: paranoia, hallucinations, cognitive disorganization, grandiosity, anhedonia, and parent-rated negative symptoms. These components formed the basis of 6 subscales. Correlations between different experiences were low to moderate. All SPEQ subscales, except Grandiosity, correlated significantly with traits of anxiety, depression, and neuroticism. Scales showed good internal consistency, test-retest reliability, and convergent validity. Girls endorsed more paranoia, hallucinations, and cognitive disorganization; boys reported more grandiosity and anhedonia and had more parent-rated negative symptoms. As in adults at high risk for psychosis and with psychotic disorders, psychotic experiences in adolescents are characterized by multiple components. The study of psychotic experiences as distinct dimensional quantitative traits is likely to prove an important strategy for future research, and the SPEQ is a self- and parent-report questionnaire battery that embodies this approach. PMID:24062593
NASA/CARES dual-use ceramic technology spinoff applications
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.; Nemeth, Noel N.
1994-01-01
NASA has developed software that enables American industry to establish the reliability and life of ceramic structures in a wide variety of 21st Century applications. Designing ceramic components to survive at higher temperatures than the capability of most metals and in severe loading environments involves the disciplines of statistics and fracture mechanics. Successful application of advanced ceramics requires knowledge of material properties and the use of a probabilistic brittle material design methodology. The NASA program, known as CARES (Ceramics Analysis and Reliability Evaluation of Structures), is a comprehensive general purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. The latest version of this software, CARES/LIFE, is coupled to several commercially available finite element analysis programs (ANSYS, MSC/NASTRAN, ABAQUS, COSMOS/M, MARC), resulting in an advanced integrated design tool which is adapted to the computing environment of the user. The NASA-developed CARES software has been successfully used by industrial, government, and academic organizations to design and optimize ceramic components for many demanding applications. Industrial sectors impacted by this program include aerospace, automotive, electronic, medical, and energy applications. Dual-use applications include engine components, graphite and ceramic high temperature valves, TV picture tubes, ceramic bearings, electronic chips, glass building panels, infrared windows, radiant heater tubes, heat exchangers, and artificial hips, knee caps, and teeth.
Clemens, Sheila M; Gailey, Robert S; Bennett, Christopher L; Pasquina, Paul F; Kirk-Sanchez, Neva J; Gaunaurd, Ignacio A
2018-03-01
Using a custom mobile application to evaluate the reliability and validity of the Component Timed-Up-and-Go test to assess prosthetic mobility in people with lower limb amputation. Cross-sectional design. National conference for people with limb loss. A total of 118 people with non-vascular cause of lower limb amputation participated. Subjects had a mean age of 48 (±13.7) years and were an average of 10 years post amputation. Of these, 54% (n = 64) were male. None. The Component Timed-Up-and-Go was administered using a mobile iPad application, generating a total time to complete the test and five component times capturing each subtask (sit to stand transitions, linear gait, turning) of the standard timed-up-and-go test. The outcome underwent test-retest reliability using intraclass correlation coefficients (ICCs) and convergent validity analyses through correlation with self-report measures of balance and mobility. The Component Timed-Up-and-Go exhibited excellent test-retest reliability with ICCs ranging from .98 to .86 for total and component times. Evidence of discriminative validity resulted from significant differences in mean total times between people with transtibial (10.1 (SD: ±2.3)) and transfemoral (12.76 (SD: ±5.1)) amputation, as well as significant differences in all five component times (P < .05). Convergent validity of the Component Timed-Up-and-Go was demonstrated through moderate correlations with the PLUS-M (rs = -.56). The Component Timed-Up-and-Go is a reliable and valid clinical tool for detailed assessment of prosthetic mobility in people with non-vascular lower limb amputation. The iPad application provided a means to easily record data, contributing to clinical utility.
Eigbefoh, J O; Isabu, P; Okpere, E; Abebe, J
2008-07-01
Untreated urinary tract infection can have devastating maternal and neonatal effects. Thus, routine screening for bacteriuria is advocated. This study was designed to evaluate the diagnostic accuracy of the rapid dipstick test to predict urinary tract infection in pregnancy, with the gold standard of urine microscopy, culture and sensitivity acting as the control. The urine dipstick test uses the leucocyte esterase, nitrite and protein tests singly and in combination. The result of the dipstick was compared with the gold standard, urine microscopy, culture and sensitivity, using confidence intervals for proportions. The reliability and validity of the urine dipstick were also evaluated. Overall, the urine dipstick test has a poor correlation with urine culture (p = 0.125, CI 95%). The same holds true for individual components of the dipstick test. The overall sensitivity of the urine dipstick test was poor at 2.3%. Individual sensitivities of the various components ranged from 9.1% for the combined leucocyte esterase and nitrite test to 56.8% for leucocyte esterase alone. The other components of the dipstick test, the test for nitrite, the test for protein and the combination of the tests (leucocyte esterase, nitrite and proteinuria), appear to decrease the sensitivity of the leucocyte esterase test alone. The ability of the urine dipstick test to correctly rule out urinary tract infection (specificity) was high. The positive predictive value for the dipstick test was high, with the leucocyte esterase test having the highest positive predictive value compared with the other components of the dipstick test. The negative predictive value (NPV) was expectedly highest for the leucocyte esterase test alone, with values higher than the other components of the urine dipstick test singly and in various combinations. Compared with the other parameters of the urine dipstick test, singly and in combination, leucocyte esterase appears to be the most accurate (90.25%). The dipstick test has limited use in screening for asymptomatic bacteriuria. The leucocyte esterase test component of the dipstick test appears to have the highest reliability and validity. The other parameters of the dipstick test decrease the reliability and validity of the leucocyte esterase test. A positive test merits empirical antibiotics, while a negative test is an indication for urine culture. The urine dipstick test, if positive, will also be useful in the follow-up of patients after treatment of urinary tract infection. This is useful in poor-resource settings, especially in the third world, where there is a dearth of trained personnel and equipment for urine culture.
NASA Glenn Research Center Support of the Advanced Stirling Radioisotope Generator Project
NASA Technical Reports Server (NTRS)
Wilson, Scott D.; Wong, Wayne A.
2015-01-01
A high-efficiency radioisotope power system was being developed for long-duration NASA space science missions. The U.S. Department of Energy (DOE) managed a flight contract with Lockheed Martin Space Systems Company to build Advanced Stirling Radioisotope Generators (ASRGs), with support from NASA Glenn Research Center. DOE initiated termination of that contract in late 2013, primarily due to budget constraints. Sunpower, Inc., held two parallel contracts to produce Advanced Stirling Convertors (ASCs), one with Lockheed Martin to produce ASC-F flight units, and one with Glenn for the production of ASC-E3 engineering unit "pathfinders" that are built to the flight design. In support of those contracts, Glenn provided testing, materials expertise, Government-furnished equipment, inspection capabilities, and related data products to Lockheed Martin and Sunpower. The technical support included material evaluations, component tests, convertor characterization, and technology transfer. Material evaluations and component tests were performed on various ASC components in order to assess potential life-limiting mechanisms and provide data for reliability models. Convertor level tests were conducted to characterize performance under operating conditions that are representative of various mission conditions. Despite termination of the ASRG flight development contract, NASA continues to recognize the importance of high-efficiency ASC power conversion for Radioisotope Power Systems (RPS) and continues investment in the technology, including the continuation of the ASC-E3 contract. This paper describes key Government support for the ASRG project and future tests to be used to provide data for ongoing reliability assessments.
NASA Astrophysics Data System (ADS)
Ocaña, J. L.; Porro, J. A.; Díaz, M.; Ruiz de Lara, L.; Correa, C.; Gil-Santos, A.; Peral, D.
2013-02-01
Laser shock processing (LSP) is being increasingly applied as an effective technology for the improvement of the mechanical and surface properties of metallic materials in different types of components, as a means of enhancing their corrosion and fatigue life behavior. As reported in previous contributions by the authors, a main effect resulting from the application of the LSP technique consists in the generation of relatively deep compressive residual stress fields in metallic alloy pieces, allowing improved mechanical behaviour, specifically improved life of the treated specimens against wear, crack growth and stress corrosion cracking. Additional results accomplished by the authors in the line of practical development of the LSP technique at an experimental level (aiming at its integral assessment from an interrelated theoretical and experimental point of view) are presented in this paper. Concretely, follow-on experimental results on the residual stress profiles and associated surface property modifications successfully achieved in typical materials (especially Al and Ti alloys characteristic of high reliability components in the aerospace, nuclear and biomedical sectors) under different LSP irradiation conditions are presented, along with a practical correlated analysis of the protective character of the residual stress profiles obtained under different irradiation strategies. Additional remarks on the advantage of the LSP technique over the traditional "shot peening" technique with respect to the depth of the induced compressive residual stress fields are also made throughout the paper.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Structural reliability assessment capability in NESSUS
NASA Astrophysics Data System (ADS)
Millwater, H.; Wu, Y.-T.
1992-07-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
1995-09-22
Modules 345-800 Amperes/400-3000 Volts - Current and Thermal Ratings of Module * Circuit Currents Element Data Model* Current Thermal Units ... IGBT modules (Powerex) 56 Main components for rectifiers, Diode Bridge modules (Powerex) 65 Heat Sinks (Aavid Engineering) 85 Westinghouse ... exciter circuit, are not reliable enough for military applications, and they were replaced by brushless alternators. The brushless AC alternator
Space Station Freedom electric power system availability study
NASA Technical Reports Server (NTRS)
Turnquist, Scott R.
1990-01-01
The results of follow-on availability analyses performed on the Space Station Freedom electric power system (EPS) are detailed. The scope includes analyses of several EPS design variations: the 4-photovoltaic (PV) module baseline EPS design, a 6-PV module EPS design, and a 3-solar dynamic module EPS design which included a 10 kW PV module. The analyses performed included: determining the discrete power levels that the EPS will operate at upon various component failures and the availability of each of these operating states; ranking EPS components by the relative contribution each component type gives to the power availability of the EPS; determining the availability impacts of including structural and long-life EPS components in the availability models used in the analyses; determining optimum sparing strategies, for storing spare EPS components on-orbit, to maintain high average-power capability with low lift-mass requirements; and analyses to determine the sensitivity of EPS availability to uncertainties in the component reliability and maintainability data used.
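The sparing analysis itself is not described in detail in the abstract. As a minimal, hedged sketch of the kind of trade it involves, the following Python snippet uses a constant-failure-rate Poisson model to find the smallest number of on-orbit spares of one component type that covers a resupply interval with a target probability; the failure rate, interval, and target are illustrative assumptions, not values from the study.

```python
import math

def prob_spares_sufficient(k, failure_rate, interval):
    """P(no more than k failures of one component type during `interval`),
    assuming failures follow a Poisson process with constant `failure_rate`."""
    mu = failure_rate * interval
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def spares_needed(target, failure_rate, interval):
    """Smallest number of on-orbit spares meeting a target sufficiency probability."""
    k = 0
    while prob_spares_sufficient(k, failure_rate, interval) < target:
        k += 1
    return k

# Illustrative numbers: 2e-5 failures/hour, 90-day (2160 h) resupply interval
print(spares_needed(0.99, 2.0e-5, 2160.0))
```

Repeating such a calculation per component type, weighted by spare mass, is one simple way to trade availability against lift-mass of the kind the study describes.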
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth A.
2016-01-01
This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?
Hsu, Ya-Chuan
2011-09-01
Diverse social and recreational activities in elder care institutions have been provided to enrich a person's mental well-being amidst what is a relatively monotonous life. However, few instruments that measure the social activities of long-term care residents are available. This study was designed to develop a culturally sensitive instrument (Socially Supportive Activity Inventory, SSAI) to assess quantity and quality of social activities for long-term care institutions and validate the instrument's psychometric properties. The SSAI was developed on the basis of the social support theory, a synthesis of literature, and Taiwanese cultural mores. The instrument was rigorously subjected to a two-stage process to evaluate its reliability and validity. In Stage 1, six experts from diverse backgrounds were recruited to evaluate instrument items and estimate the content validity of the instrument using a content validity questionnaire. Items were modified and refined on the basis of the responses of the expert panel and a set of criteria. After obtaining approval from a university institutional review board, in the second stage of evaluating test-retest reliability, a convenience sample of 10 Taiwanese institutionalized elders in a pilot study, recruited from a nursing home, completed the revised instrument at two separate times over 2 weeks. Results showed a content validity of .96. Test-retest reliability from a sample of 10 participants yielded stability coefficients of .76-1.00. The stability coefficient was 1.00 for the component of frequency, .76-1.00 for the component of meaningfulness, and .78-1.00 for the component of enjoyment. The SSAI is a highly relevant and reliable culturally based instrument that can measure social activity in long-term care facilities. Because of the pilot nature of this study, future directions include further exploration of the SSAI instrument's psychometric properties. This should be done by enlarging the sample size to include more long-term care facilities and individual participants. Future studies can utilize diverse measures of social activity for comparison and validation of the SSAI.
Next-generation fiber lasers enabled by high-performance components
NASA Astrophysics Data System (ADS)
Kliner, D. A. V.; Victor, B.; Rivera, C.; Fanning, G.; Balsley, D.; Farrow, R. L.; Kennedy, K.; Hampton, S.; Hawke, R.; Soukup, E.; Reynolds, M.; Hodges, A.; Emery, J.; Brown, A.; Almonte, K.; Nelson, M.; Foley, B.; Dawson, D.; Hemenway, D. M.; Urbanek, W.; DeVito, M.; Bao, L.; Koponen, J.; Gross, K.
2018-02-01
Next-generation industrial fiber lasers enable challenging applications that cannot be addressed with legacy fiber lasers. Key features of next-generation fiber lasers include robust back-reflection protection, high power stability, wide power tunability, high-speed modulation and waveform generation, and facile field serviceability. These capabilities are enabled by high-performance components, particularly pump diodes and optical fibers, and by advanced fiber laser designs. We summarize the performance and reliability of nLIGHT diodes, fibers, and next-generation industrial fiber lasers at power levels of 500 W - 8 kW. We show back-reflection studies with up to 1 kW of back-reflected power, power-stability measurements in cw and modulated operation exhibiting sub-1% stability over a 5 - 100% power range, and high-speed modulation (100 kHz) and waveform generation with a bandwidth 20x higher than standard fiber lasers. We show results from representative applications, including cutting and welding of highly reflective metals (Cu and Al) for production of Li-ion battery modules and processing of carbon fiber reinforced polymers.
Nayback-Beebe, Ann M; Yoder, Linda H
2011-06-01
The Interpersonal Relationship Inventory-Short Form (IPRI-SF) has demonstrated psychometric consistency across several demographic and clinical populations; however, it has not been psychometrically tested in a military population. The purpose of this study was to psychometrically evaluate the reliability and component structure of the IPRI-SF in active duty United States Army female service members (FSMs). The reliability estimates were .93 for the social support subscale and .91 for the conflict subscale. Principal component analysis demonstrated an obliquely rotated three-component solution that accounted for 58.9% of the variance. The results of this study support the reliability and validity of the IPRI-SF for use in FSMs; however, a three-factor structure emerged in this sample of FSMs post-deployment that represents "cultural context." Copyright © 2011 Wiley Periodicals, Inc.
Strength Analysis and Reliability Evaluation for Speed Reducers
NASA Astrophysics Data System (ADS)
Tsai, Yuo-Tern; Hsu, Yung-Yuan
2017-09-01
This paper studies the structural stresses of a differential drive (DD) and a harmonic drive (HD) for design improvement of reducers. The design principles of the two reducers are reported for functional comparison. The critical components of the reducers are constructed for performing motion simulation and stress analysis. The DD is based on the differential displacement of a decelerated gear ring, while the HD is based on a flexible spline. The finite element method (FEM) is used to analyze the structural stresses, including the dynamic properties, of the reducers. The stresses and kinematic properties of the two reducers are compared to observe the properties of the designs. The analyzed results are applied to identify the allowable loads of the reducers in use. The reliabilities of the reducers under different loads are further calculated according to the variation of stress. The results are useful for engineering analysis and reliability evaluation when designing a speed reducer with high ratios.
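The abstract states that reliabilities are calculated from the variation of stress. A generic way to do this, not necessarily the authors' exact model, is the classical stress-strength interference formula for independent, normally distributed stress and strength, sketched below with hypothetical gear-tooth values.

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference reliability P(strength > stress) for
    independent, normally distributed stress and strength."""
    z = (mu_strength - mu_stress) / math.sqrt(sd_strength**2 + sd_stress**2)
    # Standard normal CDF evaluated at z
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical gear-tooth values in MPa (illustrative only)
print(interference_reliability(mu_strength=620.0, sd_strength=40.0,
                               mu_stress=480.0, sd_stress=55.0))
```

Recomputing the reliability as the stress distribution shifts with load reproduces the kind of load-versus-reliability curve the paper describes.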
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
Terrapin technologies manned Mars mission proposal
NASA Technical Reports Server (NTRS)
Amato, Michael; Bryant, Heather; Coleman, Rodney; Compy, Chris; Crouse, Patrick; Crunkleton, Joe; Hurtado, Edgar; Iverson, Eirik; Kamosa, Mike; Kraft, Lauri (Editor)
1990-01-01
A Manned Mars Mission (M3) design study is proposed. The purpose of M3 is to transport 10 personnel and a habitat with all required support systems and supplies from low Earth orbit (LEO) to the surface of Mars and, after an eight-man surface expedition of 3 months, to return the personnel safely to LEO. The proposed hardware design is based on systems and components of demonstrated high capability and reliability. The mission design builds on past mission experience, but incorporates innovative design approaches to achieve mission priorities. Those priorities, in decreasing order of importance, are safety, reliability, minimum personnel transfer time, minimum weight, and minimum cost. The design demonstrates the feasibility and flexibility of a Waverider transfer module.
Long life reliability thermal control systems study
NASA Technical Reports Server (NTRS)
Scollon, T. R., Jr.; Killen, R. E.
1972-01-01
The results of a program undertaken to conceptually design and evaluate a passive, high reliability, long life thermal control system for space station application are presented. The program consisted of four steps: (1) investigate and select potential thermal system elements; (2) conceive, evaluate and select a thermal control system using these elements; (3) conduct a verification test of a prototype segment of the selected system; and (4) evaluate the utilization of waste heat from the power supply. The result of this project is a conceptual thermal control system design which employs heat pipes as primary components, both for heat transport and temperature control. The system, its evaluation, and the test results are described.
Robot-Powered Reliability Testing at NREL's ESIF
Harrison, Kevin
2018-02-14
With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested, and currently costly, component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle, all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
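As described above, the entropic term is obtained directly from the fluctuation of the protein-ligand interaction energy along the MD trajectory. The sketch below evaluates the exponential average k_B·T·ln⟨exp(βΔE_int)⟩ with ΔE_int = E_int − ⟨E_int⟩ for a synthetic energy series; the units (kcal/mol), temperature, and data are illustrative assumptions, not results from the paper.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def interaction_entropy_term(interaction_energies, temperature=300.0):
    """-T*dS estimated from the fluctuation of the protein-ligand interaction
    energy over MD snapshots: kT * ln< exp(beta * dE) >, with dE = E - <E>.
    Energies are assumed to be in kcal/mol (illustrative units)."""
    e = np.asarray(interaction_energies, dtype=float)
    beta = 1.0 / (KB * temperature)
    de = e - e.mean()
    return KB * temperature * np.log(np.mean(np.exp(beta * de)))

# Synthetic example: fluctuating interaction energies from a hypothetical trajectory
rng = np.random.default_rng(1)
energies = -45.0 + rng.normal(0.0, 2.0, size=5000)
print(interaction_entropy_term(energies))   # positive, i.e. an unfavorable -T*dS
```

Because the average is taken over existing simulation snapshots, no extra sampling beyond the production MD run is required, which is the efficiency advantage the abstract emphasizes.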
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.
Reliability of an interactive computer program for advance care planning.
Schubart, Jane R; Levi, Benjamin H; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J
2012-06-01
Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
Reliability of an Interactive Computer Program for Advance Care Planning
Levi, Benjamin H.; Camacho, Fabian; Whitehead, Megan; Farace, Elana; Green, Michael J
2012-01-01
Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explaining health conditions and interventions that commonly involve life or death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83–0.95, and 0.86–0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD where General Wishes and QoL (but not Specific Wishes) statements remain consistent over time. PMID:22512830
Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components
NASA Astrophysics Data System (ADS)
Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.
Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specifications. To test every possible combination of component, level of potential damage, and processing option would be an expensive and time-consuming feat, thus prohibiting a broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing process for gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in remanufacturing of high value, high demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation effort of comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) the qualification of used gearbox components for the remanufacturing process, and (3) the performance of remanufactured components.
NASA Astrophysics Data System (ADS)
Bechou, L.; Deshayes, Y.; Aupetit-Berthelemot, C.; Guerin, A.; Tronche, C.
Space missions for Earth Observation are called upon to carry a growing number of instruments in their payload, whose performances are increasing. Future space systems are therefore intended to generate huge amounts of data, and a key challenge in coming years will therefore lie in the ability to transmit that significant quantity of data to ground. Thus very high data rate Payload Telemetry (PLTM) systems will be required to face the demand of the future Earth Exploration Satellite Systems, and reliability is one of the major concerns of such systems. An attractive approach associated with the concept of predictive modeling consists in analyzing the impact of component malfunctions on the optical link performances, taking into account the network requirements and experimental degradation laws. Reliability estimation is traditionally based on life-testing, and a basic approach is to use Telcordia requirements (468GR) for optical telecommunication applications. However, due to the various interactions between components, the operating lifetime of a system cannot be taken as the lifetime of the least reliable component. In this paper, an original methodology is proposed to estimate the reliability of an optical communication system by using a dedicated system simulator for predictive modeling and design for reliability. At first, we present frameworks of point-to-point optical communication systems for space applications where high data rate (or frequency bandwidth), lower cost or mass saving are needed. Optoelectronic devices used in these systems can be similar to those found in terrestrial optical networks. In particular, we report simulation results of transmission performances after introduction of DFB laser diode parameter variations versus time, extrapolated from accelerated tests based on terrestrial or submarine telecommunications qualification standards. Simulations are performed to investigate and predict the consequence of degradations of the laser diode (acting as a frequency carrier) on system performances (eye diagram, quality factor and BER). The studied link consists of 4× 2.5 Gbits/s WDM channels with direct modulation, equally spaced (0.8 nm) around the 1550 nm central wavelength. Results clearly show that variation of fundamental parameters such as bias current or central wavelength induces a penalization of the dynamic performances of the complete WDM link. In addition, different degradation kinetics of aged laser diodes from the same batch have been implemented to build the final distribution of Q-factor and BER values after 25 years. When considering long optical distances, fiber attenuation, EDFA noise, dispersion, PMD, ... penalize network performances, which can be compensated using Forward Error Correction (FEC) coding. Three methods have been investigated in the case of On-Off Keying (OOK) transmission over a unipolar optical channel corrupted by Gaussian noise. Such system simulations highlight the impact of component parameter degradations on the whole network performances, making it possible to optimize the various time- and cost-consuming sensitivity analyses at the early stage of the system development. Thus the validity of failure criteria in relation to mission profiles can be evaluated, representing a significant part of the general PDfR effort, in particular for aerospace applications.
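The link penalties discussed above are ultimately expressed as Q-factor and BER values. For reference, the following sketch uses the standard relation for OOK detection in Gaussian noise, BER = 0.5·erfc(Q/√2), to translate an assumed end-of-life Q degradation into bit error rates; the Q values are illustrative, not the simulated results reported in the paper.

```python
import math

def q_to_ber(q_factor):
    """Bit error rate for OOK detection with Gaussian noise, from the usual
    relation BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

# Illustrative end-of-life check: Q degrading from 7 to 6 over the mission
for q in (7.0, 6.5, 6.0):
    print(f"Q = {q:.1f}  ->  BER = {q_to_ber(q):.2e}")
```

Comparing the degraded BER against the correction threshold of the chosen FEC scheme is one simple way to turn such a distribution of end-of-life Q values into a pass/fail reliability criterion.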
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the time intervals covering the early-life failure region and the expected losses given failure characterizing the corresponding intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and allocating reliability to maximize the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model can reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year).
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
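To make the losses-from-failures argument concrete, here is a minimal Monte Carlo sketch (my own illustration with assumed failure rates and losses, not the author's software): a nonrepairable two-component series system in which the more reliable design nevertheless accrues larger expected losses, because its failures fall on the costlier failure mode.

```python
import random

def expected_losses(rates, losses, mission_h=10_000.0, n_sims=200_000, seed=1):
    """Expected losses from failure of a nonrepairable series system whose
    components have exponential lives (rates, per hour) and whose failure
    modes carry different losses."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        times = [rng.expovariate(lam) for lam in rates]   # time to each failure mode
        first = min(times)
        if first <= mission_h:                            # system fails within the mission
            total += losses[times.index(first)]
    return total / n_sims

losses = [100_000.0, 1_000.0]     # failure mode 1 is far more expensive than mode 2
# Design A: more reliable overall, but failures are dominated by the costly mode.
print(expected_losses(rates=[1e-5, 4e-5], losses=losses))
# Design B: less reliable overall, yet cheaper, because failures fall mostly on mode 2.
print(expected_losses(rates=[1e-5, 2e-4], losses=losses))
```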
NASA Astrophysics Data System (ADS)
Qin, Fangcheng; Li, Yongtang; Qi, Huiping; Ju, Li
2017-01-01
Research on compact manufacturing technology for shape and performance controllability of metallic components can simplify the manufacturing process and make it highly reliable while still satisfying macro/micro-structure requirements. It is not only a key path to improving performance, saving material and energy, and achieving green manufacturing of components used in major equipment, but also a challenging subject at the frontiers of advanced plastic forming. Providing a new horizon for the manufacturing of such critical components is therefore significant. Focusing on high-performance large-scale components such as bearing rings, flanges, railway wheels, and thick-walled pipes, the conventional processes and their state of development are summarized. The existing problems, including multi-pass heating, wasted material and energy, high cost, and high emissions, are discussed, and it is pointed out that present approaches cannot meet the demands of manufacturing high-quality components. Thus, new techniques related to casting-rolling compound precise forming of rings, compact manufacturing of duplex-metal composite rings, compact manufacturing of railway wheels, and casting-extruding continuous forming of thick-walled pipes are introduced in detail. The corresponding research contents, such as casting the ring blank, hot ring rolling, near-solid-state pressure forming, and hot extruding, are elaborated. Some findings on through-thickness microstructure evolution and mechanical properties are also presented. The components produced by the new techniques are mainly characterized by fine and homogeneous grains. Moreover, possible directions for further development of these techniques are suggested. Finally, the key scientific problems are proposed. All of these results and conclusions have reference value and guiding significance for the integrated control of shape and performance in advanced compact manufacturing.
Behavioral Scale Reliability and Measurement Invariance Evaluation Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2004-01-01
A latent variable modeling approach to reliability and measurement invariance evaluation for multiple-component measuring instruments is outlined. An initial discussion deals with the limitations of coefficient alpha, a frequently used index of composite reliability. A widely and readily applicable structural modeling framework is next described…
NASA Technical Reports Server (NTRS)
Maisel, James E.
1988-01-01
Addressed are some of the space electrical power system technologies that should be developed for the U.S. space program to remain competitive in the 21st century. A brief historical overview of some U.S. manned/unmanned spacecraft power systems is given to establish that electrical systems are, and will continue to become, more sophisticated as power levels approach those on the ground. Adaptive/expert power systems that can function in an extraterrestrial environment will be required to take appropriate action during electrical faults so that their impact is minimal. Man-hours can be reduced significantly by relinquishing tedious routine system component maintenance to the adaptive/expert system. By cataloging component signatures over time, this system can flag a premature component failure and thus possibly avoid a major fault. High frequency operation is important if the electrical power system mass is to be cut significantly. High power semiconductor or vacuum switching components will be required to meet future power demands. System mass tradeoffs have been investigated in terms of operation at high temperature, efficiency, voltage regulation, and system reliability. High temperature semiconductors will be required: silicon carbide materials will operate at temperatures around 1000 K and diamond materials up to 1300 K. The driver for elevated temperature operation is that radiator mass is reduced significantly, because radiated power scales with the fourth power of temperature.
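The radiator argument follows directly from the Stefan-Boltzmann law: for a fixed heat load, the required radiating area, and roughly the radiator mass, scales as 1/T^4. A minimal sketch with an assumed emissivity and heat load (illustrative numbers, not figures from the report):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(q_watts: float, temp_k: float, emissivity: float = 0.85) -> float:
    """Ideal area needed to radiate q_watts to deep space at temp_k
    (sink temperature and view-factor effects neglected)."""
    return q_watts / (emissivity * SIGMA * temp_k ** 4)

# Assumed 100 kW heat load: raising the rejection temperature from 400 K to
# 1000 K shrinks the required area (and roughly the radiator mass) ~39x.
for t in (400.0, 600.0, 1000.0):
    print(f"T = {t:6.0f} K  ->  area ~ {radiator_area(100e3, t):8.1f} m^2")
```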
Characterizing wind power resource reliability in southern Africa
Fant, Charles; Gunturu, Bhaskar; Schlosser, Adam
2015-08-29
Producing electricity from wind is attractive because it provides a clean, low-maintenance power supply. However, wind resource is intermittent on various timescales, thus occasionally introducing large and sudden changes in power supply. A better understanding of this variability can greatly benefit power grid planning. In the following study, wind resource is characterized using metrics that highlight these intermittency issues, thereby identifying areas of high and low wind power reliability in southern Africa and Kenya at different time-scales. After developing a wind speed profile, these metrics are applied at various heights in order to assess the added benefit of raising the wind turbine hub. Furthermore, since the interconnection of wind farms can aid in reducing the overall intermittency, the value of interconnecting nearby sites is mapped using two distinct methods. Of the countries in this region, the Republic of South Africa has shown the most interest in wind power investment. For this reason, we focus parts of the study on wind reliability in the country. The study finds that, although mean Wind Power Density is high in South Africa compared to its neighboring countries, wind power resource tends to be less reliable than in other parts of southern Africa, namely central Tanzania. We also find that South Africa's potential varies over different timescales, with higher reliability in the summer than winter, and higher reliability during the day than at night. This study is concluded by introducing two methods and measures to characterize the value of interconnection, including the use of principal component analysis to identify areas with a common signal.
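The principal-component idea mentioned above can be sketched with synthetic data (not the study's dataset): sites whose wind time series load strongly on the same leading component share a regional signal, so interconnecting them smooths intermittency less than pairing them with weakly coupled sites.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 365
regional = rng.normal(size=hours)                   # shared regional wind signal
sites = np.stack([                                  # four hypothetical sites
    0.9 * regional + 0.4 * rng.normal(size=hours),
    0.8 * regional + 0.5 * rng.normal(size=hours),
    0.1 * regional + 1.0 * rng.normal(size=hours),  # weakly coupled site
    0.2 * regional + 1.0 * rng.normal(size=hours),  # weakly coupled site
], axis=1)

# PCA via SVD of the standardized hours-by-sites matrix
z = (sites - sites.mean(axis=0)) / sites.std(axis=0)
_, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print("variance explained by PC1:", np.round(explained[0], 2))
print("PC1 loadings per site:   ", np.round(vt[0], 2))
# Sites with similar, large PC1 loadings share the regional signal, so
# interconnecting them smooths intermittency less than pairing them with
# the weakly coupled sites.
```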
Beyhun, Nazim Ercument; Can, Gamze; Tiryaki, Ahmet; Karakullukcu, Serdar; Bulut, Bekir; Yesilbas, Sehbal; Kavgaci, Halil; Topbas, Murat
2016-01-01
Background: The needs-based biopsychosocial distress instrument for cancer patients (CANDI) is a scale based on needs arising from the effects of cancer. Objectives: The aim of this research was to determine the reliability and validity of the CANDI scale in the Turkish language. Patients and Methods: The study was performed with the participation of 172 cancer patients aged 18 and over. Factor analysis (principal components analysis) was used to assess construct validity. Criterion validity was tested by computing Spearman correlations between CANDI and the hospital anxiety and depression scale (HADS) and the brief symptom inventory (BSI) (convergent validity), and a quality of life scale (FACT-G) (divergent validity). Test-retest reliability and internal consistency were measured with the intraclass correlation (ICC) and Cronbach's α. Results: A three-factor solution (emotional, physical, and social) was found with factor analysis. Internal reliability (α = 0.94) and test-retest reliability (ICC = 0.87) were high. Correlations between CANDI and HADS (rs = 0.67), BSI (rs = 0.69), and FACT-G (rs = -0.76) were moderate and significant in the expected directions. Conclusions: CANDI is a valid and reliable scale in cancer patients, with a three-factor structure (emotional, physical, and social), in the Turkish language. PMID:27621931
Multisite Reliability of Cognitive BOLD Data
Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.
2010-01-01
Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could offset these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915
Modified personal interviews: resurrecting reliable personal interviews for admissions?
Hanson, Mark D; Kulasegaram, Kulamakan Mahan; Woods, Nicole N; Fechtig, Lindsey; Anderson, Geoff
2012-10-01
Traditional admissions personal interviews provide flexible faculty-student interactions but are plagued by low inter-interview reliability. Axelson and Kreiter (2009) retrospectively showed that multiple independent sampling (MIS) may improve reliability of personal interviews; thus, the authors incorporated MIS into the admissions process for medical students applying to the University of Toronto's Leadership Education and Development Program (LEAD). They examined the reliability and resource demands of this modified personal interview (MPI) format. In 2010-2011, LEAD candidates submitted written applications, which were used to screen for participation in the MPI process. Selected candidates completed four brief (10-12 minutes) independent MPIs each with a different interviewer. The authors blueprinted MPI questions to (i.e., aligned them with) leadership attributes, and interviewers assessed candidates' eligibility on a five-point Likert-type scale. The authors analyzed inter-interview reliability using the generalizability theory. Sixteen candidates submitted applications; 10 proceeded to the MPI stage. Reliability of the written application components was 0.75. The MPI process had overall inter-interview reliability of 0.79. Correlation between the written application and MPI scores was 0.49. A decision study showed acceptable reliability of 0.74 with only three MPIs scored using one global rating. Furthermore, a traditional admissions interview format would take 66% more time than the MPI format. The MPI format, used during the LEAD admissions process, achieved high reliability with minimal faculty resources. The MPI format's reliability and effective resource use were possible through MIS and employment of expert interviewers. MPIs may be useful for other admissions tasks.
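The decision-study projection behind those figures follows the usual generalizability-theory form, reliability for n averaged interviews = sigma_p^2 / (sigma_p^2 + sigma_e^2 / n). The sketch below uses assumed variance components chosen only so that three and four interviews land near the abstract's 0.74 and 0.79; it is an illustration, not the study's actual estimates.

```python
def projected_reliability(var_person: float, var_error: float, n_interviews: int) -> float:
    """Generalizability (decision-study) projection for scores averaged over
    n independent interviews: sigma_p^2 / (sigma_p^2 + sigma_e^2 / n)."""
    return var_person / (var_person + var_error / n_interviews)

# Assumed variance components, scaled so that 3 and 4 interviews give ~0.74
# and ~0.79, roughly matching the abstract; not the study's actual estimates.
var_p, var_e = 1.0, 1.06
for n in (1, 3, 4, 8):
    print(f"{n} interview(s): projected reliability = {projected_reliability(var_p, var_e, n):.2f}")
```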
High speed photodiodes in standard nanometer scale CMOS technology: a comparative study.
Nakhkoob, Behrooz; Ray, Sagar; Hella, Mona M
2012-05-07
This paper compares various techniques for improving the frequency response of silicon photodiodes fabricated in mainstream CMOS technology for fully integrated optical receivers. The three presented photodiodes, Spatially Modulated Light detectors, Double, and Interrupted P-Finger photodiodes, aim at reducing the low speed diffusive component of the photogenerated current. For the first, the Spatially Modulated Light (SML) detector, the low speed current component is canceled out by converting it to a common mode current driving a differential transimpedance amplifier. The Double Photodiode (DP) uses two depletion regions to increase the fast drift component, while the Interrupted P-Finger Photodiode (IPFPD) redirects the low speed component towards a contact different from the main fast terminal of the photodiode. Extensive device simulations using 130 nm CMOS technology parameters are presented to compare their performance on the same technological platform. Finally, a new type of photodiode that uses triple-well CMOS technology is introduced that can achieve a bandwidth of roughly 10 GHz without any process modification or high reverse bias voltages that would jeopardize the reliability of the photodetector and the subsequent transimpedance amplifier.
Cognitive and neural components of the phenomenology of agency.
Morsella, Ezequiel; Berger, Christopher C; Krieger, Stepehen C
2011-06-01
A primary aspect of the self is the sense of agency – the sense that one is causing an action. In the spirit of recent reductionistic approaches to other complex, multifaceted phenomena (e.g., working memory; cf. Johnson & Johnson, 2009), we attempt to unravel the sense of agency by investigating its most basic components, without invoking high-level conceptual or 'central executive' processes. After considering the high-level components of agency, we examine the cognitive and neural underpinnings of its low-level components, which include basic consciousness and subjective urges (e.g., the urge to breathe when holding one's breath). Regarding urges, a quantitative review revealed that certain inter-representational dynamics (conflicts between action plans, as when holding one's breath) reliably engender fundamental aspects both of the phenomenology of agency and of 'something countering the will of the self'. The neural correlates of such dynamics, for both primordial urges (e.g., air hunger) and urges elicited in laboratory interference tasks, are entertained. In addition, we discuss the implications of this unique perspective for the study of disorders involving agency.
Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on operating experience failure reports for fiscal years 1998 through 2014, as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run for less than or equal to one hour (FTR≤1H), failure to run for more than one hour (FTR>1H), and, for normally running systems, FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates of reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for the frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs and for run hours per reactor critical year for normally running TDPs.
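As a generic illustration of how such per-demand failure probabilities are estimated from pooled operating-experience counts (a common approach uses a Jeffreys Beta(0.5, 0.5) prior; the counts below are hypothetical, not the report's data):

```python
def jeffreys_fts_probability(failures: int, demands: int) -> float:
    """Posterior-mean failure-to-start probability per demand with a
    Jeffreys Beta(0.5, 0.5) prior: (f + 0.5) / (d + 1)."""
    return (failures + 0.5) / (demands + 1.0)

# Hypothetical pooled counts: 12 FTS events in 8,000 standby TDP start demands
print(f"FTS probability ~ {jeffreys_fts_probability(12, 8000):.2e} per demand")
```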
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration specific subroutines, generic component property analysis subroutines, systems analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The series of configuration specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled on a MicroSoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
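The Weibull combination that such a model implements can be illustrated with a short sketch (illustrative component parameters, not PSHFT data): the series-system reliability at a given life is the product of the component Weibull reliabilities, and a system life at 90% reliability follows by root finding.

```python
import math

components = {                 # characteristic life eta (h), Weibull slope beta
    "input bevel gear":   (9.0e3, 2.5),
    "main shaft bearing": (6.0e3, 1.5),
    "planet bearing":     (7.5e3, 1.3),
}

def system_reliability(life_h: float) -> float:
    """Series combination: R_sys(L) = prod_i exp(-(L / eta_i) ** beta_i)."""
    r = 1.0
    for eta, beta in components.values():
        r *= math.exp(-((life_h / eta) ** beta))
    return r

# Life at 90 % system reliability (an L10-style figure), found by bisection
lo, hi = 0.0, 50_000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if system_reliability(mid) > 0.90 else (lo, mid)
print(f"system life at 90% reliability ~ {lo:.0f} h")
```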
Garcia, Darren J.; Skadberg, Rebecca M.; Schmidt, Megan; ...
2018-03-05
The Diagnostic and Statistical Manual of Mental Disorders (5th ed. [DSM–5]; American Psychiatric Association, 2013) Section III Alternative Model for Personality Disorders (AMPD) represents a novel approach to the diagnosis of personality disorder (PD). In this model, PD diagnosis requires evaluation of level of impairment in personality functioning (Criterion A) and characterization by pathological traits (Criterion B). Questions about clinical utility, complexity, and difficulty in learning and using the AMPD have been expressed in recent scholarly literature. We examined the learnability, interrater reliability, and clinical utility of the AMPD using a vignette methodology and graduate student raters. Results showed that student clinicians can learn Criterion A of the AMPD to a high level of interrater reliability and agreement with expert ratings. Interrater reliability of the 25 trait facets of the AMPD varied but showed overall acceptable levels of agreement. Examination of severity indexes of PD impairment showed the level of personality functioning (LPF) added information beyond that of global assessment of functioning (GAF). Clinical utility ratings were generally strong. Lastly, the satisfactory interrater reliability of components of the AMPD indicates the model, including the LPF, is very learnable.
Reliability and Validity of the Korean Version of the Internet Addiction Test among College Students
Lee, Kounseok; Lee, Hye-Kyung; Gyeong, Hyunsu; Yu, Byeongkwan; Song, Yul-Mai
2013-01-01
We developed a Korean translation of the Internet Addiction Test (KIAT), a widely used self-report measure of internet addiction, and tested its reliability and validity in a sample of college students. Two hundred seventy-nine college students at a national university completed the KIAT. Internal consistency and two-week test-retest reliability were calculated from the data, and principal component factor analysis was conducted. Participants also completed the Internet Addiction Diagnostic Questionnaire (IADQ), the Korea Internet addiction scale (K-scale), and the Patient Health Questionnaire-9 for criterion validity. Cronbach's alpha for the whole scale was 0.91, and test-retest reliability was also good (r = 0.73). The IADQ, the K-scale, and depressive symptoms were significantly correlated with KIAT scores, demonstrating concurrent and convergent validity. The factor analysis extracted four factors (Excessive use, Dependence, Withdrawal, and Avoidance of reality) that accounted for 59% of the total variance. The KIAT has outstanding internal consistency and high test-retest reliability. Also, the factor structure and validity data show that the KIAT is comparable to the original version. Thus, the KIAT is a psychometrically sound tool for assessing internet addiction in the Korean-speaking population. PMID:23678270
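Cronbach's alpha, the internal-consistency figure reported above, is straightforward to compute from item-level data. A minimal sketch on synthetic Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items score matrix.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-point responses for 300 respondents and 20 items, generated from
# one latent trait so the scale is internally consistent.
rng = np.random.default_rng(42)
trait = rng.normal(size=(300, 1))
scores = np.clip(np.rint(3 + trait + 0.8 * rng.normal(size=(300, 20))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```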
Performance and reliability of the NASA biomass production chamber
NASA Technical Reports Server (NTRS)
Fortson, R. E.; Sager, J. C.; Chetirkin, P. V.
1994-01-01
The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of CELSS, are discussed.
Assuring long-term reliability of concentrator PV systems
NASA Astrophysics Data System (ADS)
McConnell, R.; Garboushian, V.; Brown, J.; Crawford, C.; Darban, K.; Dutra, D.; Geer, S.; Ghassemian, V.; Gordon, R.; Kinsey, G.; Stone, K.; Turner, G.
2009-08-01
Concentrator PV (CPV) systems have attracted significant interest because these systems incorporate the world's highest efficiency solar cells and they are targeting the lowest cost production of solar electricity for the world's utility markets. Because these systems are just entering solar markets, manufacturers and customers need to assure their reliability for many years of operation. There are three general approaches for assuring CPV reliability: 1) field testing and development over many years leading to improved product designs, 2) testing to internationally accepted qualification standards (especially for new products) and 3) extended reliability tests to identify critical weaknesses in a new component or design. Amonix has been a pioneer in all three of these approaches. Amonix has an internal library of field failure data spanning over 15 years that serves as the basis for its seven generations of CPV systems. An Amonix product served as the test CPV module for the development of the world's first qualification standard completed in March 2001. Amonix staff has served on international standards development committees, such as the International Electrotechnical Commission (IEC), in support of developing CPV standards needed in today's rapidly expanding solar markets. Recently Amonix employed extended reliability test procedures to assure reliability of multijunction solar cell operation in its seventh generation high concentration PV system. This paper will discuss how these three approaches have all contributed to assuring reliability of the Amonix systems.
DiFilippo, Kristen Nicole; Huang, Wenhao; Chapman-Novakofski, Karen M
2017-10-27
The extensive availability and increasing use of mobile apps for nutrition-based health interventions make evaluation of the quality of these apps crucial for integrating apps into nutritional counseling. The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps' educational quality and technical functionality. Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing the AQEL to 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience, as selected by the evaluator, were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in the principal component analysis. Internal consistency and split-half reliability of the following constructs derived from the principal component analysis were good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. App purpose split-half reliability was .65. Test-retest reliability showed no significant change over time (P>.05) for all but skill development (P=.001). Construct reliability was good for items assessing age appropriateness of apps for children, teens, and a general audience. In addition, construct reliability was acceptable for assessing app appropriateness for various target audiences (Cronbach alpha >.70). For the 5 main factors, ICC(1,k) was >.80, with a P value of <.05. When 15 nutrition professionals evaluated one app, ICC(2,15) was .98, with a P value of <.001 for all 7 constructs when the modifiable items were specified for adults seeking weight loss support. Our preliminary effort shows that the AQEL is a valid, reliable instrument for evaluating nutrition apps' quality for clinical interventions by nutrition clinicians, educators, and researchers. Further efforts at validating the AQEL in various contexts are needed. ©Kristen Nicole DiFilippo, Wenhao Huang, Karen M. Chapman-Novakofski. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 27.10.2017.
Weizman, Lior; Sira, Liat Ben; Joskowicz, Leo; Rubin, Daniel L.; Yeom, Kristen W.; Constantini, Shlomi; Shofty, Ben; Bashat, Dafna Ben
2014-01-01
Purpose: Tracking the progression of low grade tumors (LGTs) is a challenging task, due to their slow growth rate and associated complex internal tumor components, such as heterogeneous enhancement, hemorrhage, and cysts. In this paper, the authors show a semiautomatic method to reliably track the volume of LGTs and the evolution of their internal components in longitudinal MRI scans. Methods: The authors' method utilizes a spatiotemporal evolution modeling of the tumor and its internal components. Tumor components gray level parameters are estimated from the follow-up scan itself, obviating temporal normalization of gray levels. The tumor delineation procedure effectively incorporates internal classification of the baseline scan in the time-series as prior data to segment and classify a series of follow-up scans. The authors applied their method to 40 MRI scans of ten patients, acquired at two different institutions. Two types of LGTs were included: Optic pathway gliomas and thalamic astrocytomas. For each scan, a “gold standard” was obtained manually by experienced radiologists. The method is evaluated versus the gold standard with three measures: gross total volume error, total surface distance, and reliability of tracking tumor components evolution. Results: Compared to the gold standard the authors' method exhibits a mean Dice similarity volumetric measure of 86.58% and a mean surface distance error of 0.25 mm. In terms of its reliability in tracking the evolution of the internal components, the method exhibits strong positive correlation with the gold standard. Conclusions: The authors' method provides accurate and repeatable delineation of the tumor and its internal components, which is essential for therapy assessment of LGTs. Reliable tracking of internal tumor components over time is novel and potentially will be useful to streamline and improve follow-up of brain tumors, with indolent growth and behavior. PMID:24784396
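The Dice similarity figure quoted above measures volumetric overlap between the automatic delineation and the gold standard, 2|A ∩ B| / (|A| + |B|). A toy sketch on synthetic binary masks (not the study's images):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: a "gold standard" mask and a slightly shifted automatic mask
gold = np.zeros((64, 64), dtype=bool); gold[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool); auto[21:41, 22:42] = True
print(f"Dice = {dice(gold, auto):.3f}")   # ~0.86 for this toy overlap
```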
Manzi, Luigi; Villafañe, Jorge Hugo; Indino, Cristian; Tamini, Jacopo; Berjano, Pedro; Usuelli, Federico Giuseppe
2017-11-08
The purpose of this study was to investigate the test-retest reliability of the Phi angle in patients undergoing total ankle replacement (TAR) for end stage ankle osteoarthritis (OA) to assess the rotational alignment of the talar component. Retrospective observational cross-sectional study of prospectively collected data. Post-operative anteroposterior radiographs of the foot of 170 patients who underwent TAR for the ankle OA were evaluated. Three physicians measured Phi on the 170 randomly sorted and anonymized radiographs on two occasions, one week apart (test and retest conditions), inter and intra-observer agreement were evaluated. Test-retest reliability of Phi angle measurement was excellent for patients with Hintegra TAR (ICC=0.995; p<0.001) and Zimmer TAR (ICC=0.995; p<0.001) on radiographs of subjects with ankle OA. There were no significant differences in the reliability of the Phi angle measurement between patients with Hintegra vs. Zimmer implants (p>0.05). Measurement of Phi angle on weight-bearing dorsoplantar radiograph showed an excellent reliability among orthopaedic surgeons in determining the position of the talar component in the axial plane. Level II, cross sectional study. Copyright © 2017 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
Assuring Electronics Reliability: What Could and Should Be Done Differently
NASA Astrophysics Data System (ADS)
Suhir, E.
The following “ten commandments” for the predicted and quantified reliability of aerospace electronic and photonic products are addressed and discussed: 1) The best product is the best compromise between the needs for reliability, cost effectiveness and time-to-market; 2) Reliability cannot be low, need not be higher than necessary, but has to be adequate for a particular product; 3) When reliability is imperative, ability to quantify it is a must, especially if optimization is considered; 4) One cannot design a product with quantified, optimized and assured reliability by limiting the effort to the highly accelerated life testing (HALT) that does not quantify reliability; 5) Reliability is conceived at the design stage and should be taken care of, first of all, at this stage, when a “genetically healthy” product should be created; reliability evaluations and assurances cannot be delayed until the product is fabricated and shipped to the customer, i.e., cannot be left to the prognostics-and-health-monitoring/managing (PHM) stage; it is too late at this stage to change the design or the materials for improved reliability; that is why, when reliability is imperative, users re-qualify parts to assess their lifetime and use redundancy to build a highly reliable system out of insufficiently reliable components; 6) Design, fabrication, qualification and PHM efforts should consider and be specific for particular products and their most likely actual or at least anticipated application(s); 7) Probabilistic design for reliability (PDfR) is an effective means for improving the state-of-the-art in the field: nothing is perfect, and the difference between an unreliable product and a robust one is “merely” the probability of failure (PoF); 8) Highly cost-effective and highly focused failure oriented accelerated testing (FOAT) geared to a particular pre-determined reliability model and aimed at understanding the physics of failure anticipated by this model is an important constituent part of the PDfR effort; 9) Predictive modeling (PM) is another important constituent of the PDfR approach; in combination with FOAT, it is a powerful means to carry out sensitivity analyses (SA), to quantify and nearly eliminate failures (“principle of practical confidence”); 10) Consistent, comprehensive and physically meaningful PDfR can effectively contribute to the most feasible and the most effective qualification test (QT) methodologies, practices and specifications. The general concepts addressed in the paper are illustrated by numerical examples. It is concluded that although the suggested concept is promising and fruitful, further research, refinement, and validations are needed before this concept becomes widely accepted by the engineering community and implemented into practice. It is important that this novel approach is introduced gradually, whenever feasible and appropriate, in addition to, and in some situations even instead of, the currently employed various types and modifications of the forty-year-old HALT.
Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.
2009-01-01
Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities including interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, and indicated reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and single nucleotide polymorphism array produced interesting findings. PMID:19834575
The Local Wind Pump for Marginal Societies in Indonesia: A Perspective of Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Gunawan, Insan; Taufik, Ahmad
2007-10-01
There have been many efforts to reduce the investment cost of well-established hybrid wind pumps applied to rural areas. A recent study on a local wind pump (LWP) for marginal societies in Indonesia (traditional farmers, peasants, and tribes) was one such effort, reporting a new application area. The objectives of the study were to measure the reliability of the LWP under fluctuating wind intensity and low wind speed, to account for economic constraints arising from a prolonged economic crisis and for the availability of local components, and to sustain the economic productivity (agricultural output) of the society. In the study, fault tree analysis (FTA) was deployed as one of three methods used for assessing the LWP. In this article, the FTA is discussed thoroughly in order to improve the performance of the LWP applied in the dry-land watering system of Mesuji district, Lampung province, Indonesia. In the early stage, all local components of the LWP were classified by function into four groups. All sub-components of each group were then subjected to the failure modes of the FTA, namely (1) primary failure modes, (2) secondary failure modes, and (3) common failure modes. In the data-processing stage, an available software package, ITEM, was used. The analysis indicated a relatively long operational life cycle of 1,666 hours. Moreover, to enhance the performance of the LWP, the maintenance schedule, the critical sub-components prone to failure, and the overhaul priority were identified quantitatively. From a year-long pilot project, it can be concluded that the LWP is a reliable product for these societies, enhancing their economic productivity.
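The gate arithmetic underlying such a fault tree is simple when basic events are independent: an OR gate combines as 1 - prod(1 - p) and an AND gate as prod(p). The sketch below uses made-up basic-event probabilities and subsystem names purely for illustration; they are not values from the study.

```python
from functools import reduce

def or_gate(*p):
    """Top event occurs if any independent basic event occurs."""
    return 1.0 - reduce(lambda acc, x: acc * (1.0 - x), p, 1.0)

def and_gate(*p):
    """Top event occurs only if all independent basic events occur."""
    return reduce(lambda acc, x: acc * x, p, 1.0)

# Hypothetical annual failure probabilities for wind-pump subsystems
rotor = or_gate(0.02, 0.01)                    # blade damage OR hub bearing seizure
pump  = or_gate(0.03, and_gate(0.10, 0.10))    # piston seal OR both check valves
tower = 0.005
top_event = or_gate(rotor, pump, tower)        # "no water delivered"
print(f"P(top event per year) ~ {top_event:.3f}")
```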
Qualification and issues with space flight laser systems and components
NASA Astrophysics Data System (ADS)
Ott, Melanie N.; Coyle, D. B.; Canham, John S.; Leidecker, Henning W., Jr.
2006-02-01
The art of flight quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here will be lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.
Qualification and Issues with Space Flight Laser Systems and Components
NASA Technical Reports Server (NTRS)
Ott, Melanie N.; Coyle, D. Barry; Canham, John S.; Leidecker, Henning W.
2006-01-01
The art of flight quality solid-state laser development is still relatively young, and much is still unknown regarding the best procedures, components, and packaging required for achieving the maximum possible lifetime and reliability when deployed in the harsh space environment. One of the most important issues is the limited and unstable supply of quality, high power diode arrays with significant technological heritage and market lifetime. Since Spectra Diode Labs Inc. ended their involvement in the pulsed array business in the late 1990's, there has been a flurry of activity from other manufacturers, but little effort focused on flight quality production. This forces NASA, inevitably, to examine the use of commercial parts to enable space flight laser designs. System-level issues such as power cycling, operational derating, duty cycle, and contamination risks to other laser components are some of the more significant unknown, if unquantifiable, parameters that directly affect transmitter reliability. Designs and processes can be formulated for the system and the components (including thorough modeling) to mitigate risk based on the known failure modes as well as lessons learned that GSFC has collected over the past ten years of space flight operation of lasers. In addition, knowledge of the potential failure modes related to the system and the components themselves can allow qualification testing to be done in an efficient yet effective manner. Careful test plan development coupled with physics-of-failure knowledge will enable cost-effective qualification of commercial technology. Presented here will be lessons learned from space flight experience, a brief synopsis of known potential failure modes, mitigation techniques, and options for testing from the system level to the component level.
Rychlik, Michał; Samborski, Włodzimierz
2015-01-01
The aim of this study was to assess the validity and test-retest reliability of the Thermovision Technique of Dry Needling (TTDN) for the gluteus minimus muscle. TTDN is a new thermography approach used to support trigger point (TrP) diagnostic criteria through the presence of short-term vasomotor reactions occurring in the area where TrPs refer pain. Method. Thirty chronic sciatica patients (n=15 TrP-positive and n=15 TrP-negative) and 15 healthy volunteers were evaluated by TTDN three times over two consecutive days, based on TrPs of the gluteus minimus muscle confirmed additionally by the presence of referred pain. TTDN employs average temperature (T avr), maximum temperature (T max), low/high isothermal area, and the autonomic referred pain phenomenon (AURP), which reflects vasodilatation/vasoconstriction. Validity and test-retest reliability were assessed concurrently. Results. Two components of TTDN validity and reliability, T avr and AURP, had almost perfect agreement according to κ (e.g., thigh: 0.880 and 0.938; calf: 0.902 and 0.956, resp.). Sensitivity for T avr, T max, AURP, and high isothermal area was 100% for all measures, but 100% specificity was achieved only for T avr and AURP. Conclusion. TTDN is a valid and reliable method for T avr and AURP measurement to support TrP diagnostic criteria for the gluteus minimus muscle when a digitally evoked referred pain pattern is present. PMID:26137486
Test-retest reliability and stability of N400 effects in a word-pair semantic priming paradigm.
Kiang, Michael; Patriciu, Iulia; Roy, Carolyn; Christensen, Bruce K; Zipursky, Robert B
2013-04-01
Elicited by any meaningful stimulus, the N400 event-related potential (ERP) component is reduced when the stimulus is related to a preceding one. This N400 semantic priming effect has been used to probe abnormal semantic relationship processing in clinical disorders, and suggested as a possible biomarker for treatment studies. Validating N400 semantic priming effects as a clinical biomarker requires characterizing their test-retest reliability. We assessed test-retest reliability of N400 semantic priming in 16 healthy adults who viewed the same related and unrelated prime-target word pairs in two sessions one week apart. As expected, N400 amplitudes were smaller for related versus unrelated targets across sessions. N400 priming effects (amplitude differences between unrelated and related targets) were highly correlated across sessions (r=0.85, P<0.0001), but smaller in the second session due to larger N400s to related targets. N400 priming effects have high reliability over a one-week interval. They may decrease with repeat testing, possibly because of motivational changes. Use of N400 priming effects in treatment studies should account for possible magnitude decreases with repeat testing. Further research is needed to delineate N400 priming effects' test-retest reliability and stability in different age and clinical groups, and with different stimulus types. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Propulsion system research and development for electric and hybrid vehicles
NASA Technical Reports Server (NTRS)
Schwartz, H. J.
1980-01-01
An approach to propulsion subsystem technology is presented. Various tests of component reliability are described to aid in the production of better quality vehicles. Component characterization work is described to provide engineering data to manufacturers on component performance and on important component-propulsion system interactions.
Damage Tolerance and Reliability of Turbine Engine Components
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1999-01-01
This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
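The uncertainty-propagation step described above can be sketched generically: sample the primitive variables from their scatter ranges, push each sample through a deterministic response model, and read the reliability off the resulting distribution. The toy stress/strength response and scatter values below are assumptions for illustration, not the report's models.

```python
import random

def simulated_reliability(n_samples=200_000, seed=7):
    """Fraction of sampled cases in which strength exceeds the applied stress,
    with scatter assumed for each primitive variable."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(900.0, 45.0)         # MPa, material scatter
        stress   = rng.gauss(520.0, 60.0)         # MPa, usage scatter
        knockdown = rng.uniform(0.85, 1.0)        # assumed thermal degradation factor
        if strength * knockdown <= stress:
            failures += 1
    return 1.0 - failures / n_samples

print(f"component reliability ~ {simulated_reliability():.4f}")
```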
Psychometric evaluation of the Revised Professional Practice Environment (RPPE) scale.
Erickson, Jeanette Ives; Duffy, Mary E; Ditomassi, Marianne; Jones, Dorothy
2009-05-01
The purpose was to examine the psychometric properties of the Revised Professional Practice Environment (RPPE) scale. Despite renewed focus on studying health professionals' practice environments, there are still few reliable and valid instruments available to assist nurse administrators in decision making. A psychometric evaluation using a random-sample cross-validation procedure (calibration sample [CS], n = 775; validation sample [VS], n = 775) was undertaken. Cronbach alpha internal consistency reliability of the total score (r = 0.93 [CS] and 0.92 [VS]), resulting subscale scores (r range: 0.80-0.87 [CS], 0.81-0.88 [VS]), and principal components analyses with Varimax rotation and Kaiser normalization (8 components, 59.2% variance [CS], 59.7% [VS]) produced almost identical results in both samples. The multidimensional RPPE is a psychometrically sound measure of 8 components of the professional practice environment in the acute care setting and sufficiently reliable and valid for use as independent subscales in healthcare research.
A Concept for a Mobile Remote Manipulator System
NASA Technical Reports Server (NTRS)
Mikulus, M. M., Jr.; Bush, H. G.; Wallsom, R. E.; Jensen, J. K.
1985-01-01
A conceptual design for a Mobile Remote Manipulator System (MRMS) is presented. This concept does not require continuous rails for mobility (only guide pins at truss hardpoints) and is very compact, being only one bay square. The MRMS proposed is highly maneuverable and is able to move in any direction along the orthogonal guide pin array under complete control at all times. The proposed concept would greatly enhance the safety and operational capabilities of astronauts performing EVA functions such as structural assembly, payload transport and attachment, space station maintenance, repair or modification, and future spacecraft construction or servicing. The MRMS drive system conceptual design presented is a reasonably simple mechanical device which can be designed to exhibit high reliability. Developmentally, all components of the proposed MRMS either exist or are considered to be completely state of the art designs requiring minimal development, features which should enhance reliability and minimize costs.
Photonic Component Qualification and Implementation Activities at NASA Goddard Space Flight Center
NASA Technical Reports Server (NTRS)
Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard F.; LaRocca, Frank V.; MacMurphy, Shawn L.; Matuszeski, Adam J.; Zellar, Ronald S.; Friedberg, Patricia R.; Malenab, Mary C.
2006-01-01
The photonics group in Code 562 at NASA Goddard Space Flight Center supports a variety of space flight programs at NASA, including the International Space Station (ISS), the Shuttle Return to Flight mission, the Lunar Reconnaissance Orbiter (LRO), the Express Logistics Carrier, and the NASA Electronic Parts and Packaging Program (NEPP). Through research, development, and testing of the photonic systems that support these missions, much information has been gathered on practical implementations for space environments. Presented here are the highlights and lessons learned as a result of striving to satisfy project requirements for high-performance and reliable commercial optical fiber components for space flight systems. The approach to qualifying optical fiber components for harsh environmental conditions, the physics of failure, and development lessons learned are discussed.
A maintenance model for k-out-of-n subsystems aboard a fleet of advanced commercial aircraft
NASA Technical Reports Server (NTRS)
Miller, D. R.
1978-01-01
Proposed highly reliable fault-tolerant reconfigurable digital control systems for a future generation of commercial aircraft consist of several k-out-of-n subsystems. Each of these flight-critical subsystems will consist of n identical components, k of which must be functioning properly in order for the aircraft to be dispatched. Failed components are recoverable; they are repaired in a shop. Spares are inventoried at a main base where they may be substituted for failed components on planes during layovers. Penalties are assessed when failure of a k-out-of-n subsystem causes a dispatch cancellation or delay. A maintenance model for a fleet of aircraft with such control systems is presented. The goals are to demonstrate economic feasibility and to optimize.
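The dispatch reliability of a single k-out-of-n subsystem with independent, identical components is the binomial survivor function; the sketch below evaluates it for illustrative values of k, n, and per-component reliability p (none of which are taken from the report).

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent components (each with
    reliability p over the dispatch interval) are functioning."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Illustrative numbers only: a 3-out-of-5 flight-critical subsystem.
for p in (0.95, 0.99, 0.999):
    print(f"p = {p}: R(3-of-5) = {k_out_of_n_reliability(3, 5, p):.6f}")
```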
NASA Case Sensitive Review and Audit Approach
NASA Astrophysics Data System (ADS)
Lee, Arthur R.; Bacus, Thomas H.; Bowersox, Alexandra M.; Newman, J. Steven
2005-12-01
As an Agency involved in high-risk endeavors, NASA continually reassesses its commitment to engineering excellence and compliance with requirements. As a component of NASA's continual process improvement, the Office of Safety and Mission Assurance (OSMA) established the Review and Assessment Division (RAD) [1] to conduct independent audits to verify compliance with Agency requirements that impact safe and reliable operations. In implementing its responsibilities, RAD benchmarked various approaches for conducting audits, focusing on organizations that, like NASA, operate in high-risk environments, where seemingly inconsequential departures from safety, reliability, and quality requirements can have catastrophic impact on the public, NASA personnel, high-value equipment, and the environment. The approach used by the U.S. Navy Submarine Program [2] was considered the most fruitful framework for the invigorated OSMA audit processes. Additionally, the results of the benchmarking activity revealed that not all audits are conducted using just one approach or even with the same objectives. This led to the concept of discrete, unique "audit cases."
Charalambous, A; Molassiotis, A
2017-01-01
The Short Form Chronic Respiratory Questionnaire (SF-CRQ) is frequently used in patients with obstructive pulmonary disease and has demonstrated excellent psychometric properties. Since there is no psychometric information on its use with lung cancer patients, this study explored its validity and reliability in this population. Forty-six patients were assessed at two time points (with a 4-week interval) using the SF-CRQ, the modified Borg Scale, five numerical rating scales related to Perceived Severity of Breathlessness, and the Hospital Anxiety and Depression Scale. Internal consistency reliability was investigated with Cronbach's alpha coefficient, test-retest reliability with the Spearman-Brown coefficient, and content and convergent validity with Pearson's correlation coefficients between the SF-CRQ and the conceptually similar scales listed above. A principal component factor analysis was also performed. Internal consistency was high [α = 0.88 (baseline) and 0.91 (after 1 month)]. The SF-CRQ had good stability, with test-retest reliability ranging from r = 0.64 to 0.78, P < 0.001. Factor analysis suggests a single construct in this population. The preliminary data analyses supported the convergent, content, and construct validity of the SF-CRQ, providing promising evidence that this can be a valid and reliable instrument for the assessment of quality of life related to breathlessness in lung cancer patients. © 2015 John Wiley & Sons Ltd.
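A compact illustration of the two reliability statistics reported here, using synthetic scores (the sample size matches the abstract, but the numbers themselves and the split-half value passed to the Spearman-Brown correction are invented):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Synthetic questionnaire total scores at baseline and 4 weeks (illustrative only).
baseline = rng.normal(4.0, 1.0, size=46)
followup = 0.7 * baseline + rng.normal(1.2, 0.6, size=46)

# Test-retest stability as a simple correlation between the two administrations.
r, p = pearsonr(baseline, followup)
print(f"test-retest r = {r:.2f} (p = {p:.3g})")

# Spearman-Brown correction, e.g. applied to a split-half reliability r_half.
def spearman_brown(r_half: float, k: float = 2.0) -> float:
    return k * r_half / (1.0 + (k - 1.0) * r_half)

print(f"Spearman-Brown corrected reliability for r_half = 0.65: "
      f"{spearman_brown(0.65):.2f}")
```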
Spark-integrated propellant injector head with flashback barrier
NASA Technical Reports Server (NTRS)
Mungas, Gregory Stuart (Inventor); Fisher, David James (Inventor); Mungas, Christopher (Inventor)
2012-01-01
High performance propellants flow through specialized mechanical hardware that allows for effective and safe thermal decomposition and/or combustion of the propellants. By integrating a sintered metal component between a propellant feed source and the combustion chamber, an effective and reliable fuel injector head may be implemented. Additionally the fuel injector head design integrates a spark ignition mechanism that withstands extremely hot running conditions without noticeable spark mechanism degradation.
Optical interconnection and packaging technologies for advanced avionics systems
NASA Astrophysics Data System (ADS)
Schroeder, J. E.; Christian, N. L.; Cotti, B.
1992-09-01
An optical backplane developed to demonstrate the advantages of high-performance optical interconnections and supporting technologies, and designed to be compatible with standard avionics racks, is described. The hardware demonstrates the three basic components of optical interconnects: optical sources, an optical signal distribution network, and optical receivers. Results from characterization and environmental tests, including a demonstration of reliable serial data transmission at 1 Gb/s, are reported.
Pesicek, Jeremy; Cieślik, Konrad; Lambert, Marc-André; Carrillo, Pedro; Birkelo, Brad
2016-01-01
We have determined source mechanisms for nine high-quality microseismic events induced during hydraulic fracturing of the Montney Shale in Canada. Seismic data were recorded using a dense regularly spaced grid of sensors at the surface. The design and geometry of the survey are such that the recorded P-wave amplitudes essentially map the upper focal hemisphere, allowing the source mechanism to be interpreted directly from the data. Given the inherent difficulties of computing reliable moment tensors (MTs) from high-frequency microseismic data, the surface amplitude and polarity maps provide important additional confirmation of the source mechanisms. This is especially critical when interpreting non-shear source processes, which are notoriously susceptible to artifacts due to incomplete or inaccurate source modeling. We have found that most of the nine events contain significant non-double-couple (DC) components, as evident in the surface amplitude data and the resulting MT models. Furthermore, we found that source models that are constrained to be purely shear do not explain the data for most events. Thus, even though non-DC components of MTs can often be attributed to modeling artifacts, we argue that they are required by the data in some cases, and can be reliably computed and confidently interpreted under favorable conditions.
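To make the "non-double-couple component" concrete: a moment tensor can be split into isotropic, double-couple (DC), and compensated-linear-vector-dipole (CLVD) parts from its eigenvalues. The sketch below uses one common convention (the Jost and Herrmann epsilon measure) on a made-up tensor; it is not the authors' inversion code, and other decomposition conventions exist.

```python
import numpy as np

def decompose_mt(M: np.ndarray):
    """Split a symmetric 3x3 moment tensor into an isotropic moment and
    DC/CLVD percentages of the deviatoric part (Jost & Herrmann epsilon
    convention; note that several other conventions are in use)."""
    iso = np.trace(M) / 3.0
    dev = M - iso * np.eye(3)
    eig = np.linalg.eigvalsh(dev)            # deviatoric eigenvalues
    eig = eig[np.argsort(np.abs(eig))]       # order by absolute size
    eps = -eig[0] / abs(eig[-1])             # 0 = pure DC, +/-0.5 = pure CLVD
    pct_clvd = 200.0 * abs(eps)              # percent of the deviatoric part
    pct_dc = 100.0 - pct_clvd
    return iso, pct_dc, pct_clvd

# Illustrative tensor with a volumetric (opening) component, as might accompany
# hydraulic fracturing; the values are arbitrary.
M = np.array([[1.2, 0.3, 0.1],
              [0.3, 0.6, 0.4],
              [0.1, 0.4, 0.9]])
iso, pct_dc, pct_clvd = decompose_mt(M)
print(f"isotropic moment = {iso:.2f}, DC = {pct_dc:.1f}%, CLVD = {pct_clvd:.1f}%")
```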
Simulation supported POD for RT test case-concept and modeling
NASA Astrophysics Data System (ADS)
Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.
2012-05-01
Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant aerospace components requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials, provided the simulation results match experiment. Our analytic simulation tool consists of three modules describing the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept and a corresponding software extension for reliability investigations have been developed, complemented by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models generated from 3D CAD of aero engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement by modern digital equipment for economical NDT with a defined POD.
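For orientation, the quantity at stake in a POD study can be illustrated with a hit/miss analysis: detection outcomes versus flaw size are fit with a logistic model and the commonly quoted a90 flaw size is read off the fitted curve. The data, model form, and optimizer below are generic placeholders, not PICASSO data or aRTist output.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic hit/miss trials: flaw sizes in mm and detection outcomes (1 = hit).
size = rng.uniform(0.2, 3.0, size=200)
true_pod = 1.0 / (1.0 + np.exp(-(size - 1.0) / 0.25))
hit = (rng.uniform(size=200) < true_pod).astype(float)

def neg_log_lik(theta):
    """Negative log-likelihood of a logistic POD(a) = 1/(1+exp(-(b0+b1*a)))."""
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * size)))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 1.0])
b0, b1 = res.x

# Flaw size detected with 90% probability (a90) from the fitted logistic curve.
a90 = (np.log(0.9 / 0.1) - b0) / b1
print(f"fitted a90 = {a90:.2f} mm")
```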
Weibull-Based Design Methodology for Rotating Aircraft Engine Structures
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry
2002-01-01
The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95% probability of survival) of approximately 17,000 and 32,000 hr correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high-pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9% probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9%, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
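The relations underlying this type of analysis are compact: for a two-parameter Weibull with slope beta and characteristic life eta, the life at survival probability S is eta*(ln(1/S))**(1/beta), and for a strict series system of n identical components the component life at a given survival level is the system life times n**(1/beta). The sketch below reproduces that scaling with the abstract's slopes; the blade count n is an assumed placeholder, not a value from the report.

```python
import numpy as np

def life_at_survival(eta: float, beta: float, S: float) -> float:
    """Life at survival probability S for a two-parameter Weibull."""
    return eta * (np.log(1.0 / S)) ** (1.0 / beta)

def component_life_for_system(L_sys: float, beta: float, n: int) -> float:
    """Component life at a survival level, given the required series-system
    life at the same level: L_comp = L_sys * n**(1/beta)."""
    return L_sys * n ** (1.0 / beta)

# Required blade-system L0.1 (99.9% survival) of 9000 hr, as in the abstract;
# n is an assumed blade count used only to show how the slope drives the answer.
L_sys, n = 9000.0, 150
for beta in (3, 6, 9):
    print(f"beta = {beta}: required individual blade L0.1 = "
          f"{component_life_for_system(L_sys, beta, n):,.0f} hr")

# Example of the first relation: characteristic life implied by the blade L0.1.
beta = 6
L_blade = component_life_for_system(L_sys, beta, n)
eta = L_blade / (np.log(1.0 / 0.999)) ** (1.0 / beta)
print(f"implied characteristic life at beta = {beta}: eta = {eta:,.0f} hr")
```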
Haskard-Zolnierek, Kelly B
2012-01-01
This paper describes the development of the 47-item Physician-Patient Communication about Pain (PCAP) scale for use with audiotaped medical visit interactions. Patient pain was assessed with the Medical Outcomes Study SF-36 Bodily Pain Scale. Four raters assessed 181 audiotaped patient interactions with 68 physicians. Descriptive statistics of PCAP items were computed. Principal components analyses with 20 scale items were used to reduce the scale to composite variables for analysis. Validity was assessed by (1) comparing PCAP composite scores for patients with high versus low pain and (2) correlating PCAP composites with a separate communication rating scale. Principal components analyses yielded four physician and five patient communication composites (mean alpha = .77). Some evidence for concurrent validity was provided (5 of 18 correlations with the communication validation rating scale were significant). Paired-sample t tests showed significant differences for 4 patient PCAP composites, indicating that the PCAP scale discriminates between the communication of high- and low-pain patients. The PCAP scale shows partial evidence of reliability and two forms of validity. More research with this scale (developing more reliable and valid composites) is needed to extend these preliminary findings before the scale is applicable for use in practice. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
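The analysis pipeline sketched in this abstract (reduce items to principal-component composites, then test whether the composites separate patient groups) can be mocked up briefly; the data are synthetic, the group comparison here uses an independent-samples t test for simplicity rather than the paired tests reported, and none of the composites correspond to actual PCAP content.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)

# Synthetic stand-in for 20 communication items scored on 181 audiotaped visits.
items = rng.normal(size=(181, 20))
pain_high = rng.uniform(size=181) < 0.5          # illustrative group labels

# Reduce items to principal-component composites (top components by variance).
X = items - items.mean(axis=0)
cov = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
composites = X @ eigvec[:, order[:4]]            # first four composites

# Known-groups comparison: do composites differ between high- and low-pain visits?
for i in range(composites.shape[1]):
    t, p = ttest_ind(composites[pain_high, i], composites[~pain_high, i])
    print(f"composite {i + 1}: t = {t:+.2f}, p = {p:.3f}")
```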
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Roth, Don J.
2004-01-01
Engine makers and aviation safety government institutions continue to have a strong interest in monitoring the health of rotating components in aircraft engines to improve safety and to lower maintenance costs. To prevent catastrophic failure (burst) of the engine, they use nondestructive evaluation (NDE) and major overhauls for periodic inspections to discover any cracks that might have formed. The lowest-cost NDE technique, fluorescent penetrant inspection, can fail to disclose cracks that are tightly closed at rest or that lie below the surface. The eddy current NDE system is more effective at detecting both crack types, but it requires careful setup and operation, and only a small portion of the disk can practically be inspected. Because the sensors must sustain normal function in a severe environment, health-monitoring systems require the sensor system to transmit a signal when a detected crack exceeds a predetermined length (but is still below the length that would lead to failure), and to do so without affecting the overall performance of the engine system or interfering with engine maintenance operations. Therefore, more reliable diagnostic tools and high-level techniques for detecting damage and monitoring the health of rotating components are essential for maintaining engine safety and reliability and for assessing life.
NASA Glenn Research in Controls and Diagnostics for Intelligent Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
2005-01-01
With the increased emphasis on aircraft safety, enhanced performance and affordability, and the need to reduce the environmental impact of aircraft, designers of aircraft propulsion systems face many new challenges. In addition, the propulsion systems required to enable the NASA (National Aeronautics and Space Administration) Vision for Space Exploration in an affordable manner will need high reliability, safety, and autonomous operation capability. The Controls and Dynamics Branch at NASA Glenn Research Center (GRC) in Cleveland, Ohio, is leading and participating in various projects in partnership with other organizations within GRC and across NASA, the U.S. aerospace industry, and academia to develop advanced controls and health management technologies that will help meet these challenges through the concept of Intelligent Propulsion Systems. The key enabling technologies for an Intelligent Propulsion System are increased component efficiencies through active control, advanced diagnostics and prognostics integrated with intelligent engine control to enhance operational reliability and component life, and distributed control with smart sensors and actuators in an adaptive fault-tolerant architecture. This paper describes the current activities of the Controls and Dynamics Branch in the areas of active component control and propulsion system intelligent control, and presents some recent analytical and experimental results in these areas.
Mid-frequency Band Dynamics of Large Space Structures
NASA Technical Reports Server (NTRS)
Coppolino, Robert N.; Adams, Douglas S.
2004-01-01
High and low intensity dynamic environments experienced by a spacecraft during launch and on-orbit operations, respectively, induce structural loads and motions, which are difficult to reliably predict. Structural dynamics in low- and mid-frequency bands are sensitive to component interface uncertainty and non-linearity as evidenced in laboratory testing and flight operations. Analytical tools for prediction of linear system response are not necessarily adequate for reliable prediction of mid-frequency band dynamics and analysis of measured laboratory and flight data. A new MATLAB toolbox, designed to address the key challenges of mid-frequency band dynamics, is introduced in this paper. Finite-element models of major subassemblies are defined following rational frequency-wavelength guidelines. For computational efficiency, these subassemblies are described as linear, component mode models. The complete structural system model is composed of component mode subassemblies and linear or non-linear joint descriptions. Computation and display of structural dynamic responses are accomplished employing well-established, stable numerical methods, modern signal processing procedures and descriptive graphical tools. Parametric sensitivity and Monte-Carlo based system identification tools are used to reconcile models with experimental data and investigate the effects of uncertainties. Models and dynamic responses are exported for employment in applications, such as detailed structural integrity and mechanical-optical-control performance analyses.
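The "linear, component mode models" mentioned above are commonly built with a fixed-interface (Craig-Bampton) reduction. The numpy/scipy sketch below applies that reduction to a toy spring-mass chain; the matrices, interface choice, and number of retained modes are arbitrary, and this is not the MATLAB toolbox described in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Fixed-interface (Craig-Bampton) component-mode reduction.
    boundary: indices of interface (boundary) DOFs; the rest are interior."""
    ndof = K.shape[0]
    b = np.asarray(boundary)
    i = np.setdiff1d(np.arange(ndof), b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior partition (w2 = freq. squared).
    w2, Phi = eigh(Kii, M[np.ix_(i, i)])
    Phi = Phi[:, :n_modes]

    # Transformation T mapping (boundary DOFs, modal DOFs) -> physical DOFs.
    T = np.zeros((ndof, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[np.ix_(i, range(len(b)))] = Psi
    T[np.ix_(i, range(len(b), len(b) + n_modes))] = Phi

    return T.T @ K @ T, T.T @ M @ T, T

# Toy 5-DOF spring-mass chain with the two end DOFs treated as the interface.
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
Kr, Mr, T = craig_bampton(K, M, boundary=[0, n - 1], n_modes=2)
print("reduced stiffness:\n", np.round(Kr, 3))
```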
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been treated as a system parameter like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
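One simple concrete form of combining similarity and analysis with test data in a Bayesian framework is a beta-binomial update: prior engineering or similarity evidence is encoded as a Beta prior on the demonstration success probability and updated with test outcomes. The prior parameters and test counts below are invented for illustration and do not come from the paper.

```python
from scipy.stats import beta

# Prior from similarity/engineering analysis, expressed as pseudo-successes and
# pseudo-failures (values are illustrative, not from any actual engine program).
a_prior, b_prior = 48.0, 2.0          # roughly a "96% reliable" prior belief

# Test evidence: successful and failed hot-fire tests of the new component.
successes, failures = 30, 0

a_post = a_prior + successes
b_post = b_prior + failures

mean_rel = a_post / (a_post + b_post)
lower_90 = beta.ppf(0.10, a_post, b_post)   # one-sided 90% lower credible bound
print(f"posterior mean reliability = {mean_rel:.3f}")
print(f"90% lower credible bound   = {lower_90:.3f}")
```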