Sample records for component reliability case

  1. Towards early software reliability prediction for computer forensic tools (case study).

    PubMed

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
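
    A minimal sketch of the kind of architecture-level Markov computation this record describes, in the style of Cheung's classic model: components form an absorbing discrete-time Markov chain, and system reliability is the probability of executing correctly through to the exit component. All reliabilities and transfer probabilities below are invented, and this is not the paper's COSMIC-FFP extension.

    ```python
    # Architecture-based reliability, Cheung-style: transitions between
    # components succeed only if the current component executes correctly.
    # All numbers are hypothetical, for illustration only.
    import numpy as np

    R = np.array([0.99, 0.97, 0.995])   # per-component reliabilities
    # P[i][j]: probability control transfers from component i to j
    P = np.array([
        [0.0, 0.7, 0.3],
        [0.0, 0.0, 1.0],
        [0.0, 0.0, 0.0],   # component 2 is the exit component
    ])

    # Q[i][j] = R[i] * P[i][j]: transfer happens only on correct execution
    Q = R[:, None] * P
    # S = (I - Q)^-1; S[0, -1] is the probability of correctly reaching
    # the exit component from component 0, which then must also execute.
    S = np.linalg.inv(np.eye(len(R)) - Q)
    system_reliability = S[0, -1] * R[-1]
    print(f"system reliability ≈ {system_reliability:.4f}")
    ```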

  2. Interrater reliability of identifying indicators of posterior ligamentous complex disruption when plain films are indeterminate in thoracolumbar injuries.

    PubMed

    Schweitzer, Karl M; Vaccaro, Alexander R; Harrop, James S; Hurlbert, John; Carrino, John A; Rechtine, Glenn R; Schwartz, David G; Alanay, Ahmet; Sharma, Dinesh K; Anderson, D Greg; Lee, Joon Y; Arnold, Paul M

    2007-09-01

    The Spine Trauma Study Group (STSG) has proposed a novel thoracolumbar injury classification system and score (TLICS) in an attempt to define traumatic spinal injuries and direct appropriate management schemes objectively. The TLICS assigns specific point values based on three variables to generate a final severity score that guides potential treatment options. Within this algorithm, significant emphasis has been placed on posterior ligamentous complex (PLC) integrity. The purpose of this study was to determine the interrater reliability of indicators surgeons use when assessing PLC disruption on imaging studies, including computed tomography (CT) and magnetic resonance imaging (MRI). Orthopedic surgeons and neurosurgeons retrospectively reviewed a series of thoracolumbar injury case studies. Thirteen case studies, including images, were distributed to STSG members for individual, independent evaluation of the following three criteria: (1) diastasis of the facet joints on CT; (2) posterior edema-like signal in the region of PLC components on sagittal T2-weighted fat saturation (FAT SAT) MRI; and (3) disrupted PLC components on sagittal T1-weighted MRI. Interrater agreement on the presence or absence of each of the three criteria in each of the 13 cases was assessed. Absolute interrater percent agreement on diastasis of the facet joints on CT and posterior edema-like signal in the region of PLC components on sagittal T2-weighted FAT SAT MRI was similar (agreement 70.5%). Interrater agreement on disrupted PLC components on sagittal T1-weighted MRI was 48.9%. Facet joint diastasis on CT was the most reliable indicator of PLC disruption as assessed by both Cohen's kappa (kappa = 0.395) and intraclass correlation coefficient (ICC 0.430). The interrater reliability of assessing diastasis of the facet joints on CT had fair to moderate agreement. The reliability of assessing the posterior edema-like signal in the region of PLC components was lower but also fair, whereas the reliability of identifying disrupted PLC components was poor.
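
    For reference, the agreement statistics this record reports (percent agreement and Cohen's kappa) are short computations; the binary ratings below are invented, not STSG data.

    ```python
    # Percent agreement and Cohen's kappa for two raters making binary
    # present/absent calls on 13 cases. The ratings are invented.
    from collections import Counter

    rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
    rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0]

    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement from each rater's marginal frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum((count_a[c] / n) * (count_b[c] / n) for c in (0, 1))

    kappa = (p_observed - p_chance) / (1 - p_chance)
    print(f"observed agreement = {p_observed:.3f}, kappa = {kappa:.3f}")
    ```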

  3. The reliability of the pass/fail decision for assessments comprised of multiple components.

    PubMed

    Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana

    2015-01-01

    The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When "conjunctively" combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached - for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements.

  4. The reliability of the pass/fail decision for assessments comprised of multiple components

    PubMed Central

    Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana

    2015-01-01

    Objective: The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When “conjunctively” combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. Method: The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg’s Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Results: Frequently, when complex logical connections exist between the individual pass/fail decisions in the case of low failure rates, only a very low reliability for the overall decision to grant graded course credit can be achieved, even if high reliabilities exist for the various components. For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. Conclusion: The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached – for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements. PMID:26483855
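
    Douglas and Mislevy's procedure itself is model-based; the toy Monte Carlo below (invented parameters, retakes not modeled) only illustrates the headline finding of the two records above: conjunctively combining individually reliable components can produce a much less consistent overall pass/fail decision.

    ```python
    # Toy illustration of decision consistency for a conjunctive pass
    # rule: three components, each scored with reliability around 0.8,
    # each with a lenient cutoff, and an overall "pass all three" rule.
    # Parameters are invented; this is not the paper's analysis.
    import random

    random.seed(1)

    N = 100_000
    CUT = -1.5      # lenient cutoff -> low per-component failure rate
    NOISE = 0.5     # error SD, giving score reliability 1/(1+0.25) = 0.8

    def overall_decisions():
        """Two parallel administrations for one candidate: (pass1, pass2)."""
        passes = ([], [])
        for _ in range(3):                       # three components
            ability = random.gauss(0.0, 1.0)
            for occasion in (0, 1):              # two parallel measurements
                observed = ability + random.gauss(0.0, NOISE)
                passes[occasion].append(observed >= CUT)
        return all(passes[0]), all(passes[1])

    agree = sum(d1 == d2 for d1, d2 in (overall_decisions() for _ in range(N)))
    print(f"consistency of the overall pass/fail decision: {agree / N:.3f}")
    ```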

  5. Failure-Time Distribution Of An m-Out-of-n System

    NASA Technical Reports Server (NTRS)

    Scheuer, Ernest M.

    1988-01-01

    Formulas for reliability are extended to more general cases. They are useful in analyses of the reliabilities of practical systems and structures, especially redundant systems of identical components among which operating loads are distributed equally.
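
    The textbook special case behind this record, identical independent components sharing the load equally, has a one-line closed form; the record's more general extensions are not reproduced, and the numbers below are illustrative.

    ```python
    # Reliability of an m-out-of-n system of identical, independent
    # components: the system works if at least m of the n components work.
    from math import comb

    def m_out_of_n_reliability(m: int, n: int, p: float) -> float:
        """P(at least m of n components survive), component reliability p."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

    # Example: a 2-out-of-3 redundant system with 0.9-reliable components.
    print(m_out_of_n_reliability(2, 3, 0.9))   # 0.972
    ```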

  6. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value belongs to non-routine components. Maintenance components are components procured for maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by reevaluating the components that maintenance activities require. The roller mill system was chosen as the case because it has the highest record of unscheduled downtime. The components required for each maintenance activity are determined from their failure distributions, so the number of components needed can be predicted. Those components can then be reclassified from non-routine to routine components, so that their procurement can be carried out regularly. Based on the analysis conducted, the failures behind almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks, and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 were reclassified from non-routine to routine components. The reliability and required quantity of those components were then calculated for a one-year operation period. Based on these findings, it is suggested to replace all of these components during overhaul to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by maintenance activities, so that procurement costs decrease and system reliability increases.

  7. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

  8. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
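
    A sketch of the problem setup with interval-valued component reliabilities: brute-force enumeration stands in for the paper's GA, candidates are ranked by interval midpoint (a simplification of interval order relations), and all reliabilities, costs, and the budget are invented.

    ```python
    # Constrained redundancy allocation for a series system whose
    # component reliabilities are known only as intervals [lo, hi].
    # Infeasible allocations are penalized. All numbers are invented.
    from itertools import product

    rel_intervals = [(0.70, 0.80), (0.85, 0.90), (0.60, 0.75)]  # per stage
    unit_cost = [2.0, 3.0, 1.5]                                 # per unit
    BUDGET = 20.0

    def system_interval(alloc):
        """Interval reliability of the series system for allocation alloc."""
        lo = hi = 1.0
        for (l, h), x in zip(rel_intervals, alloc):
            lo *= 1.0 - (1.0 - l) ** x    # x parallel units in this stage
            hi *= 1.0 - (1.0 - h) ** x
        return lo, hi

    def fitness(alloc):
        lo, hi = system_interval(alloc)
        over = max(0.0, sum(c * x for c, x in zip(unit_cost, alloc)) - BUDGET)
        return (lo + hi) / 2.0 - 10.0 * over     # penalty for cost overrun

    best = max(product(range(1, 6), repeat=3), key=fitness)
    print(best, system_interval(best))
    ```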

  9. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
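
    The spares-versus-failure-rate point can be made concrete with a standard Poisson spares calculation (a common simplification assuming a constant failure rate; the rates and mission length below are invented, not from any NASA analysis).

    ```python
    # How many spares keep the probability of running out below a target
    # when failures follow a Poisson process? Numbers are invented.
    from math import exp, factorial

    def spares_needed(rate_per_year: float, years: float, goal: float) -> int:
        """Smallest k with P(failures <= k) >= goal, failures ~ Poisson."""
        lam = rate_per_year * years
        k, cdf = 0, exp(-lam)
        while cdf < goal:
            k += 1
            cdf += exp(-lam) * lam**k / factorial(k)
        return k

    print(spares_needed(0.5, 2.5, 0.99))   # nominal failure-rate estimate
    # If the true rate is double the estimate, the same stock falls short:
    print(spares_needed(1.0, 2.5, 0.99))
    ```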

  10. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
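
    A toy simulation of the correlation effect the record describes, under the simplifying assumptions that correlated faults produce identical wrong outputs while independent faults produce differing ones; all probabilities are invented and this is not one of the paper's three models.

    ```python
    # Back-to-back testing exposes a fault when two functionally
    # equivalent versions disagree on the same input. A correlated
    # (shared) fault makes both versions produce the same wrong output,
    # hiding the discrepancy. Probabilities are invented.
    import random

    random.seed(7)

    def caught_fraction(p_ind: float, p_shared: float, trials: int = 200_000) -> float:
        """Fraction of failing runs flagged by the output comparison."""
        failing = caught = 0
        for _ in range(trials):
            shared = random.random() < p_shared      # correlated fault fires
            a = shared or random.random() < p_ind    # version A fails
            b = shared or random.random() < p_ind    # version B fails
            if a or b:
                failing += 1
                # Identical wrong outputs when the shared fault fires;
                # independent wrong outputs are assumed to differ.
                if not shared:
                    caught += 1
        return caught / failing

    print(caught_fraction(0.01, 0.0))    # independent faults only
    print(caught_fraction(0.01, 0.01))   # correlated faults mask failures
    ```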

  11. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution and, in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, humans have been modeled as components and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  12. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    PubMed Central

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  13. Handbook of experiences in the design and installation of solar heating and cooling systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, D.S.; Oberoi, H.S.

    1980-07-01

    A large array of problems encountered are detailed, including design errors, installation mistakes, cases of inadequate durability of materials and unacceptable reliability of components, and wide variations in the performance and operation of different solar systems. Durability, reliability, and design problems are reviewed for solar collector subsystems, heat transfer fluids, thermal storage, passive solar components, piping/ducting, and reliability/operational problems. The following performance topics are covered: criteria for design and performance analysis, domestic hot water systems, passive space heating systems, active space heating systems, space cooling systems, analysis of systems performance, and performance evaluations.

  14. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted, along with the development of the Extended Orbiter Duration--Weakest Link study, which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component, and hence, by a simple extension, for a system of components, in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property, since reliability usually decreases markedly as the parts degrade over time. While the researchers have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
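
    The uniform-prior case mentioned at the end has a simple conjugate form for pass/fail test data (a sketch only; the study's time-varying failure-rate model is more general). The test counts below are invented.

    ```python
    # Bayesian reliability from pass/fail test data with a uniform prior:
    # Beta(1, 1) prior plus s successes in n trials gives a
    # Beta(1 + s, 1 + n - s) posterior. Test counts are invented.
    import random

    random.seed(3)

    s, n = 48, 50                         # invented: 48 successes in 50 tests
    alpha, beta = 1 + s, 1 + (n - s)      # posterior parameters

    posterior_mean = alpha / (alpha + beta)
    draws = sorted(random.betavariate(alpha, beta) for _ in range(100_000))
    lo, hi = draws[2_500], draws[97_500]  # central 95% credible interval
    print(f"mean {posterior_mean:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
    ```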

  15. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
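
    The three math models named in the record reduce to a few lines each; the numbers below are illustrative, not requirements from the methodology.

    ```python
    # The three models named above, with invented numbers.
    from math import comb, exp, factorial

    # (1) Binomial, greater-than-or-equal-to case: P(at least k of n succeed).
    def binom_at_least(k, n, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Zero-fail special case: p**n, which approximates exp(-n * lam)
    # for a constant failure rate lam = -ln(p).
    print(binom_at_least(10, 10, 0.999))      # all 10 demands succeed

    # (2) Series-system reliability: the product of element reliabilities.
    elements = [0.999, 0.995, 0.9999]
    series = 1.0
    for r in elements:
        series *= r
    print(series)

    # (3) Poisson, less-than-or-equal-to case: P(at most k failures).
    def poisson_at_most(k, lam):
        return sum(exp(-lam) * lam**i / factorial(i) for i in range(k + 1))

    print(poisson_at_most(1, 0.2))            # at most one maintenance event
    ```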

  16. Strategies for Increasing the Market Share of Recycled Products—A Games Theory Approach

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2009-08-01

    A methodological framework (including 28 activity stages and 10 decision nodes) has been designed under the form of an algorithmic procedure for the development of strategies for increasing the market share of recycled products within a games theory context. A case example is presented referring to a paper market, where a recycling company (RC) is in competition with a virgin-raw-material-using company (VC). The strategies of the VC for increasing its market share are the strengthening of (and advertisement based on) its high quality (VC1); its high reliability (VC2); the combination of quality and reliability, putting emphasis on the first component (VC3); and the combination of quality and reliability, putting emphasis on the second component (VC4). The strategies of the RC for increasing its market share are proper advertisement based on the low price of the recycled paper produced, which satisfies minimum quality requirements (RC1); the combination of low price with sensitization of the public as regards environmental and materials-saving issues, putting emphasis on the first component (RC2); and the same combination, putting emphasis on the second component (RC3). Analysis of all possible situations for the case example under examination is also presented.

  17. Reliability of Health-Related Physical Fitness Tests among Colombian Children and Adolescents: The FUPRECOL Study

    PubMed Central

    Ramírez-Vélez, Robinson; Rodrigues-Bezerra, Diogo; Correa-Bautista, Jorge Enrique; Izquierdo, Mikel; Lobelo, Felipe

    2015-01-01

    Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in Latin American youth populations. This study’s aim was to examine the reliability of health-related physical fitness tests that were used in the Colombian health promotion “Fuprecol study”. Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: 1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; 2) musculoskeletal component: handgrip and standing long jump test; 3) motor component: speed/agility test (4x10 m shuttle run); 4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); 5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed two times, 1 week apart on the same day of the week, except for the SRT which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed for the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland–Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% in all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic errors (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit and reach (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the “Fuprecol study” health-related physical fitness battery, administered by physical education teachers, was reliable for measuring health-related components of fitness in children and adolescents aged 9–17.9 years old in a school setting in Colombia. PMID:26474474

  18. Reliability of Health-Related Physical Fitness Tests among Colombian Children and Adolescents: The FUPRECOL Study.

    PubMed

    Ramírez-Vélez, Robinson; Rodrigues-Bezerra, Diogo; Correa-Bautista, Jorge Enrique; Izquierdo, Mikel; Lobelo, Felipe

    2015-01-01

    Substantial evidence indicates that youth physical fitness levels are an important marker of lifestyle and cardio-metabolic health profiles and predict future risk of chronic diseases. The reliability of physical fitness tests has not been explored in Latin American youth populations. This study's aim was to examine the reliability of health-related physical fitness tests that were used in the Colombian health promotion "Fuprecol study". Participants were 229 Colombian youth (boys n = 124 and girls n = 105) aged 9 to 17.9 years old. Five components of health-related physical fitness were measured: 1) morphological component: height, weight, body mass index (BMI), waist circumference, triceps skinfold, subscapular skinfold, and body fat (%) via impedance; 2) musculoskeletal component: handgrip and standing long jump test; 3) motor component: speed/agility test (4x10 m shuttle run); 4) flexibility component (hamstring and lumbar extensibility, sit-and-reach test); 5) cardiorespiratory component: 20-meter shuttle-run test (SRT) to estimate maximal oxygen consumption. The tests were performed two times, 1 week apart on the same day of the week, except for the SRT which was performed only once. Intra-observer technical errors of measurement (TEMs) and inter-rater reliability were assessed for the morphological component. Reliability for the musculoskeletal, motor and cardiorespiratory fitness components was examined using Bland-Altman tests. For the morphological component, TEMs were small and reliability was greater than 95% in all cases. For the musculoskeletal, motor, flexibility and cardiorespiratory components, we found adequate reliability patterns in terms of systematic errors (bias) and random error (95% limits of agreement). When the fitness assessments were performed twice, the systematic error was nearly 0 for all tests, except for the sit and reach (mean difference: -1.03% [95% CI = -4.35% to -2.28%]). The results from this study indicate that the "Fuprecol study" health-related physical fitness battery, administered by physical education teachers, was reliable for measuring health-related components of fitness in children and adolescents aged 9-17.9 years old in a school setting in Colombia.
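
    The Bland-Altman quantities used for the test-retest comparisons in the two records above (bias and 95% limits of agreement) are a short computation; the paired trial scores below are invented, not FUPRECOL data.

    ```python
    # Bland-Altman test-retest agreement: the systematic error (bias) is
    # the mean of the between-trial differences, and the 95% limits of
    # agreement are bias +/- 1.96 * SD of those differences.
    # The paired standing-long-jump scores (cm) are invented.
    from statistics import mean, stdev

    trial1 = [150.0, 142.0, 163.0, 171.0, 155.0, 149.0, 160.0]
    trial2 = [152.0, 140.0, 165.0, 170.0, 158.0, 147.0, 161.0]

    diffs = [a - b for a, b in zip(trial1, trial2)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    print(f"bias {bias:.2f} cm, limits of agreement "
          f"({bias - half_width:.2f}, {bias + half_width:.2f}) cm")
    ```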

  19. Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

    PubMed

    Park, Yoon Soo; Hyderi, Abbas; Heine, Nancy; May, Win; Nevins, Andrew; Lee, Ming; Bordage, Georges; Yudkowsky, Rachel

    2017-11-01

    To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs). Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores. Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems. This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.

  20. Neurology objective structured clinical examination reliability using generalizability theory

    PubMed Central

    Park, Yoon Soo; Lukas, Rimas V.; Brorson, James R.

    2015-01-01

    Objectives: This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Methods: Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Results: Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. Conclusions: This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. PMID:26432851

  21. Neurology objective structured clinical examination reliability using generalizability theory.

    PubMed

    Blood, Angela D; Park, Yoon Soo; Lukas, Rimas V; Brorson, James R

    2015-11-03

    This study examines factors affecting reliability, or consistency of assessment scores, from an objective structured clinical examination (OSCE) in neurology through generalizability theory (G theory). Data include assessments from a multistation OSCE taken by 194 medical students at the completion of a neurology clerkship. Facets evaluated in this study include cases, domains, and items. Domains refer to areas of skill (or constructs) that the OSCE measures. G theory is used to estimate variance components associated with each facet, derive reliability, and project the number of cases required to obtain a reliable (consistent, precise) score. Reliability using G theory is moderate (Φ coefficient = 0.61, G coefficient = 0.64). Performance is similar across cases but differs by the particular domain, such that the majority of variance is attributed to the domain. Projections in reliability estimates reveal that students need to participate in 3 OSCE cases in order to increase reliability beyond the 0.70 threshold. This novel use of G theory in evaluating an OSCE in neurology provides meaningful measurement characteristics of the assessment. Differing from prior work in other medical specialties, the cases students were randomly assigned did not influence their OSCE score; rather, scores varied in expected fashion by domain assessed. © 2015 American Academy of Neurology.
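
    The projection step quoted in both OSCE records above (how many cases are needed to push Φ past 0.70) follows directly from the person, case, and residual variance components; the components below are invented to reproduce the flavor of the result, not taken from the study.

    ```python
    # Generalizability-theory projection for an OSCE: with variance
    # components for persons (p), cases (c), and the residual (pc,e),
    # the Phi coefficient for a design averaging over n cases is
    #   Phi(n) = var_p / (var_p + (var_c + var_pc) / n)
    # The variance components are invented for illustration.
    var_p, var_c, var_pc = 0.30, 0.04, 0.26

    def phi(n_cases: int) -> float:
        return var_p / (var_p + (var_c + var_pc) / n_cases)

    for n in (1, 2, 3, 4):
        print(n, round(phi(n), 3))   # smallest n with Phi >= 0.70 is 3 here
    ```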

  22. Damage Tolerance and Reliability of Turbine Engine Components

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1999-01-01

    This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.

  23. Contamination-Free Manufacturing: Tool Component Qualification, Verification and Correlation with Wafers

    NASA Astrophysics Data System (ADS)

    Tan, Samantha H.; Chen, Ning; Liu, Shi; Wang, Kefei

    2003-09-01

    As part of the semiconductor industry "contamination-free manufacturing" effort, significant emphasis has been placed on reducing potential sources of contamination from process equipment and process equipment components. Process tools contain process chambers and components that are exposed to the process environment or process chemistry and in some cases are in direct contact with production wafers. Any contamination from these sources must be controlled or eliminated in order to maintain high process yields, device performance, and device reliability. This paper discusses new nondestructive analytical methods for quantitative measurement of the cleanliness of metal, quartz, polysilicon and ceramic components that are used in process equipment tools. The goal of these new procedures is to measure the effectiveness of cleaning procedures and to verify whether a tool component part is sufficiently clean for installation and subsequent routine use in the manufacturing line. These procedures provide a reliable "qualification method" for tool component certification and also provide a routine quality control method for reliable operation of cleaning facilities. Cost advantages to wafer manufacturing include higher yields due to improved process cleanliness and elimination of yield loss and downtime resulting from the installation of "bad" components in process tools. We also discuss a representative example of wafer contamination having been linked to a specific process tool component.

  24. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  25. Increasing the reliability of the fluid/crystallized difference score from the Kaufman Adolescent and Adult Intelligence Test with reliable component analysis.

    PubMed

    Caruso, J C

    2001-06-01

    The unreliability of difference scores is a well-documented phenomenon in the social sciences and has led researchers and practitioners to interpret differences cautiously, if at all. In the case of the Kaufman Adolescent and Adult Intelligence Test (KAIT), the unreliability of the difference between the Fluid IQ and the Crystallized IQ is due to the high correlation between the two scales. The consequences of the lack of precision with which differences are identified are wide confidence intervals and unpowerful significance tests (i.e., large differences are required to be declared statistically significant). Reliable component analysis (RCA) was performed on the subtests of the KAIT in order to address these problems. RCA is a new data reduction technique that results in uncorrelated component scores with maximum proportions of reliable variance. Results indicate that the scores defined by RCA have discriminant and convergent validity (with respect to the equally weighted scores) and that differences between the scores, derived from a single testing session, were more reliable than differences derived from equal weighting for each age group (11-14 years, 15-34 years, 35-85+ years). This reliability advantage results in narrower confidence intervals around difference scores and smaller differences required for statistical significance.
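
    The unreliability of difference scores that motivates this record follows from a classical psychometric identity; the correlations below are invented, not KAIT values.

    ```python
    # Classical reliability of a difference score X - Y:
    #   r_diff = (0.5 * (r_xx + r_yy) - r_xy) / (1 - r_xy)
    # The higher the correlation r_xy between the two scales, the less
    # reliable their difference. Values are invented.
    def diff_score_reliability(r_xx: float, r_yy: float, r_xy: float) -> float:
        return (0.5 * (r_xx + r_yy) - r_xy) / (1 - r_xy)

    print(diff_score_reliability(0.95, 0.95, 0.85))  # highly correlated: ~0.67
    print(diff_score_reliability(0.95, 0.95, 0.40))  # weakly correlated: ~0.92
    ```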

  26. Applying the High Reliability Health Care Maturity Model to Assess Hospital Performance: A VA Case Study.

    PubMed

    Sullivan, Jennifer L; Rivard, Peter E; Shin, Marlena H; Rosen, Amy K

    2016-09-01

    The lack of a tool for categorizing and differentiating hospitals according to their high reliability organization (HRO)-related characteristics has hindered progress toward implementing and sustaining evidence-based HRO practices. Hospitals would benefit both from an understanding of the organizational characteristics that support HRO practices and from knowledge about the steps necessary to achieve HRO status to reduce the risk of harm and improve outcomes. The High Reliability Health Care Maturity (HRHCM) model, a model for health care organizations' achievement of high reliability with zero patient harm, incorporates three major domains critical for promoting HROs: Leadership, Safety Culture, and Robust Process Improvement®. A study was conducted to examine the content validity of the HRHCM model and evaluate whether it can differentiate hospitals' maturity levels for each of the model's components. Staff perceptions of patient safety at six US Department of Veterans Affairs (VA) hospitals were examined to determine whether all 14 HRHCM components were present and to characterize each hospital's level of organizational maturity. Twelve of the 14 components from the HRHCM model were detected; two additional characteristics emerged that are present in the HRO literature but not represented in the model: teamwork culture and system-focused tools for learning and improvement. Each hospital's level of organizational maturity could be characterized for 9 of the 14 components. The findings suggest the HRHCM model has good content validity and that there is differentiation between hospitals on model components. Additional research is needed to understand how these components can be used to build the infrastructure necessary for reaching high reliability.

  27. Probabilistic Structural Analysis Methods (PSAM) for Select Space Propulsion System Components

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Probabilistic Structural Analysis Methods (PSAM) are described for the probabilistic structural analysis of engine components for current and future space propulsion systems. Components for these systems are subjected to stochastic thermomechanical launch loads. Uncertainty or randomness also occurs in material properties, structural geometry, and boundary conditions. Material property stochasticity, such as in modulus of elasticity or yield strength, exists in every structure and is a consequence of variations in material composition and manufacturing processes. Procedures are outlined for computing the probabilistic structural response or reliability of the structural components. The response variables include static or dynamic deflections, strains, and stresses at one or several locations, natural frequencies, fatigue or creep life, etc. Sample cases illustrate how the PSAM methods and codes simulate input uncertainties and compute probabilistic response or reliability using a finite element model with probabilistic methods.
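
    In its simplest form, the probabilistic structural response calculation described here is a stress-versus-strength Monte Carlo; the distributions below are invented and stand in for PSAM's finite-element machinery.

    ```python
    # Minimal probabilistic structural analysis: propagate scatter in
    # load, geometry, and material strength through a response model and
    # estimate the probability of failure. Distributions are invented.
    import random

    random.seed(42)

    N = 200_000
    failures = 0
    for _ in range(N):
        load = random.gauss(100.0, 15.0)        # kN, applied load scatter
        area = random.gauss(2.0, 0.05)          # cm^2, geometry scatter
        strength = random.gauss(80.0, 8.0)      # kN/cm^2, material scatter
        stress = load / area
        if stress > strength:
            failures += 1

    print(f"probability of failure ≈ {failures / N:.4f}")
    print(f"reliability ≈ {1 - failures / N:.4f}")
    ```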

  28. Damage Tolerance and Reliability of Turbine Engine Components

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1999-01-01

    A formal method is described to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatters are used to describe the behavior. Computational simulation is then used to propagate those uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from these methods demonstrate that the methods are mature and that they can be used for future strategic projections and planning to assure better, cheaper, faster products for competitive advantages in world markets. These results also indicate that the methods are suitable for predicting remaining life in aging or deteriorating structures.

  29. Damage Tolerance and Reliability of Turbine Engine Components

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1998-01-01

    A formal method is described to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatters are used to describe that behavior. Computational simulation is then used to propagate those uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from these methods demonstrate that the methods are mature and that they can be used for future strategic projections and planning to assure better, cheaper, faster products for competitive advantages in world markets. These results also indicate that the methods are suitable for predicting remaining life in aging or deteriorating structures.

  30. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of the case studies discussed are reliability-based life limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbo-pump development, the impact of ET foam reliability on Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  31. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining, for each block individually, the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
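
    The record's linear-combination claim for expected losses given failure is easy to make concrete; all probabilities, losses, and rates below are invented.

    ```python
    # Expected losses given failure for a component with mutually
    # exclusive failure modes: a linear combination of per-mode expected
    # losses, weighted by the conditional probabilities that each mode
    # initiated the failure. All numbers are invented.
    modes = {
        #  name:      (P(mode | failure), expected loss given that mode)
        "seal leak":   (0.50, 12_000.0),   # cheap intervention
        "bearing":     (0.35, 48_000.0),   # longer downtime
        "shaft crack": (0.15, 250_000.0),  # replacement + lost production
    }

    expected_loss_given_failure = sum(p * loss for p, loss in modes.values())
    print(f"expected losses given failure = {expected_loss_given_failure:,.0f}")

    # Expected losses over an interval: expected number of failures times
    # the expected losses given failure (constant hazard rate assumed).
    rate_per_year = 0.8
    print(f"one-year expected losses = {rate_per_year * expected_loss_given_failure:,.0f}")
    ```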

  32. Reciprocal Relationships: Something for Everyone.

    PubMed

    Tumosa, Nina

    2017-01-01

    Reciprocal relationships based on mutual goals, respect and trust are key to maintaining working relationships and getting reliable research results. Yet relationship building is not a concept taught in academia. These skills are often learned the hard way, with singular solutions found for case-by-case scenarios. Several journeys to identify the components, barriers and rewards of reciprocal relationships are discussed.

  33. Development and Testing of a USM High Altitude Balloon

    NASA Astrophysics Data System (ADS)

    Thaheer, A. S. Mohamed; Ismail, N. A.; Yusoff, S. H. Md.; Nasirudin, M. A.

    2018-04-01

    This paper discusses tests conducted at the component and subsystem level during development of the USM High Altitude Balloon (HAB). The tests were conducted by selecting initial components and testing them individually against several case studies, such as reliability, camera viewing, power consumption, thermal capability, and parachute performance. The components were then integrated at the sub-system level for integration and functionality tests. The preliminary results were used to tune the components and sub-systems, and a trial launch was conducted in which sample images were recorded and atmospheric data were successfully collected.

  14. Critical issues in assuring long lifetime and fail-safe operation of optical communications network

    NASA Astrophysics Data System (ADS)

    Paul, Dilip K.

    1993-09-01

    Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force setting the goals and priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high-reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in the space environment is presented.
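
    The availability trade-off described here can be made concrete with elementary series/parallel arithmetic. The sketch below is a generic illustration with invented component availabilities, not figures from the paper: a series chain multiplies availabilities, while duplication of a link multiplies unavailabilities.

    ```python
    from math import prod

    def series_availability(avails):
        """All elements must be up: availabilities multiply."""
        return prod(avails)

    def parallel_availability(avails):
        """Any one redundant element suffices: unavailabilities multiply."""
        return 1.0 - prod(1.0 - a for a in avails)

    # hypothetical link: transmitter, fiber span, receiver
    single_link = series_availability([0.999, 0.995, 0.999])
    # the same link duplicated, with diversity routing
    protected = parallel_availability([single_link, single_link])
    print(f"single link: {single_link:.5f}, protected: {protected:.7f}")
    ```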

  15. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
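
    The core idea here, propagating parameter uncertainty through a deterministic model to obtain a statistical characterization of performance, can be sketched in a few lines. Everything below is illustrative (a placeholder stress model and invented scatter values), not the CMX implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stress(load, area):
        """Placeholder deterministic physics model: axial stress in a member."""
        return load / area

    n = 100_000
    load = rng.normal(5.0e4, 5.0e3, n)      # operating-environment scatter (assumed)
    area = rng.normal(1.0e-3, 2.0e-5, n)    # manufacturing scatter (assumed)
    strength = rng.normal(6.0e7, 4.0e6, n)  # material-property scatter (assumed)

    margin = strength - stress(load, area)
    print("estimated reliability:", np.mean(margin > 0.0))
    ```

    Inspecting which inputs dominate the runs with negative margin also hints at where additional testing would pay off, the secondary benefit noted in the abstract.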

  16. Identifying key components for an effective case report poster: an observational study.

    PubMed

    Willett, Lisa L; Paranjape, Anuradha; Estrada, Carlos

    2009-03-01

    Residents demonstrate scholarly activity by presenting posters at academic meetings. Although recommendations from national organizations are available, evidence identifying which components are most important is not. To develop and test an evaluation tool to measure the quality of case report posters and identify the specific components most in need of improvement. Faculty evaluators reviewed case report posters and provided on-site feedback to presenters at poster sessions of four annual academic general internal medicine meetings. A newly developed ten-item evaluation form measured poster quality for specific components of content, discussion, and format (5-point Likert scale, 1 = lowest, 5 = highest). Main outcome measures were evaluation tool performance, including Cronbach's alpha and inter-rater reliability, overall poster scores, differences across meetings and evaluators, and the specific components of the posters most in need of improvement. Forty-five evaluators from 20 medical institutions reviewed 347 posters. Cronbach's alpha of the evaluation form was 0.84 and inter-rater reliability (Spearman's rho) was 0.49 (P < 0.001). The median score was 4.1 (Q1-Q3, 3.7-4.6) (Q1 = 25th, Q3 = 75th percentile). The national meeting median score was higher than that of the regional meetings (4.4 vs. 4.0, P < 0.001). We found no difference in faculty scores. The following areas were identified as most needing improvement: clearly state learning objectives, tie conclusions to learning objectives, and use an appropriate amount of words. Our evaluation tool provides empirical data to guide trainees as they prepare posters for presentation, which may improve poster quality and enhance their scholarly productivity.
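
    For readers unfamiliar with the internal-consistency statistic reported here, the sketch below computes Cronbach's alpha for a poster-by-item score matrix. The ratings are synthetic placeholders generated around a latent poster quality, not the study's data:

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_posters, n_items) matrix of Likert ratings."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(0)
    quality = rng.normal(4.0, 0.6, size=(347, 1))       # latent poster quality
    noise = rng.normal(0.0, 0.5, size=(347, 10))        # per-item rating noise
    ratings = np.clip(np.round(quality + noise), 1, 5)  # 347 posters x 10 items
    print(round(cronbach_alpha(ratings), 2))
    ```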

  17. Probabilistic Methods for Structural Design and Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)

    2002-01-01

    This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.

  18. In Situ, On-Demand Lubrication System Developed for Space Mechanisms

    NASA Technical Reports Server (NTRS)

    Marchetti, Mario; Pepper, Stephen V.; Jansen, Mark J.; Predmore, Roamer E.

    2003-01-01

    Many moving mechanical assemblies (MMA) for space mechanisms rely on liquid lubricants to provide reliable, long-term performance. The proper performance of the MMA is critical in assuring a successful mission. Historically, mission lifetimes were short and MMA duty cycles were minimal. As mission lifetimes were extended, other components, such as batteries and computers, failed before lubricated systems. However, improvements in these ancillary systems over the last decade have left the tribological systems of the MMAs as the limiting factor in determining spacecraft reliability. Typically, MMAs are initially lubricated with a very small charge that is supposed to last the entire mission lifetime, often well in excess of 5 years. In many cases, the premature failure of a lubricated component can result in mission failure.

  19. Identifying Key Components for an Effective Case Report Poster: An Observational Study

    PubMed Central

    Paranjape, Anuradha; Estrada, Carlos

    2008-01-01

    BACKGROUND Residents demonstrate scholarly activity by presenting posters at academic meetings. Although recommendations from national organizations are available, evidence identifying which components are most important is not. OBJECTIVE To develop and test an evaluation tool to measure the quality of case report posters and identify the specific components most in need of improvement. DESIGN Faculty evaluators reviewed case report posters and provided on-site feedback to presenters at poster sessions of four annual academic general internal medicine meetings. A newly developed ten-item evaluation form measured poster quality for specific components of content, discussion, and format (5-point Likert scale, 1 = lowest, 5 = highest). Main outcome measure(s): Evaluation tool performance, including Cronbach alpha and inter-rater reliability, overall poster scores, differences across meetings and evaluators and specific components of the posters most in need of improvement. RESULTS Forty-five evaluators from 20 medical institutions reviewed 347 posters. Cronbach's alpha of the evaluation form was 0.84 and inter-rater reliability, Spearman's rho, 0.49 (P < 0.001). The median score was 4.1 (Q1-Q3, 3.7-4.6) (Q1 = 25th, Q3 = 75th percentile). The national meeting median score was higher than the regional meetings (4.4 vs. 4.0, P < 0.001). We found no difference in faculty scores. The following areas were identified as most needing improvement: clearly state learning objectives, tie conclusions to learning objectives, and use appropriate amount of words. CONCLUSIONS Our evaluation tool provides empirical data to guide trainees as they prepare posters for presentation which may improve poster quality and enhance their scholarly productivity. PMID:19089510

  20. Field reliability of Ricor microcoolers

    NASA Astrophysics Data System (ADS)

    Pundak, N.; Porat, Z.; Barak, M.; Zur, Y.; Pasternak, G.

    2009-05-01

    Over the past 25 years Ricor has fielded in excess of 50,000 Stirling cryocoolers, among which approximately 30,000 units are of the micro integral rotary driven type. The statistical population of the fielded units is counted in the thousands or hundreds per application category. In contrast to MTTF values gathered and presented based on standard reliability demonstration tests, where the failure of the weakest component dictates the end of product life, in the case of field reliability, where design and workmanship failures are counted and considered, the values are usually reported in number of failures per million hours of operation. These values are important and relevant to the prediction of service capabilities and planning.

  1. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Stolz, Christopher J.

    1993-08-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.

  2. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, J. R.; Stolz, C. J.

    1992-12-01

    Laser system performance and reliability depend on the related performance and reliability of the optical components which define the cavity and transport subsystems. High average power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long-term performance and reliability of the laser system.

  3. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods outperform maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior location; they also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
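
    A minimal version of this competing-risks setup can be simulated directly. The sketch below (all parameter values assumed for illustration) draws lifetimes from two independent Weibull causes, observes only the earliest failure and its cause, and recovers the parameters of cause A by maximum likelihood, treating failures from cause B as right-censored observations; a Bayesian variant would place priors on the log-shape and log-scale instead of maximizing.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    # two independent Weibull failure causes; only the earliest is observed
    n = 500
    t1 = rng.weibull(2.0, n) * 100.0   # cause A: shape 2.0, scale 100 (assumed)
    t2 = rng.weibull(1.2, n) * 150.0   # cause B: shape 1.2, scale 150 (assumed)
    t_obs = np.minimum(t1, t2)
    from_a = t1 <= t2                  # True where cause A produced the failure

    def negloglik(params, t, failed):
        """Right-censored Weibull negative log-likelihood for one cause."""
        shape, scale = np.exp(params)  # optimize in log-space to stay positive
        z = (t / scale) ** shape
        logpdf = np.log(shape / scale) + (shape - 1.0) * np.log(t / scale) - z
        return -(logpdf[failed].sum() - z[~failed].sum())

    fit = minimize(negloglik, x0=[0.0, 4.0], args=(t_obs, from_a))
    shape_hat, scale_hat = np.exp(fit.x)
    print(f"cause A estimates: shape = {shape_hat:.2f}, scale = {scale_hat:.1f}")
    ```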

  4. Increased Reliability of Gas Turbine Components by Robust Coatings Manufacturing

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Dudykevych, T.; Sansom, D.; Subramanian, R.

    2017-08-01

    The expanding operational windows of advanced gas turbine components demand increasing performance capability from protective coating systems. This demand has led to the development of novel multi-functional, multi-material coating system architectures over recent years. In addition, the increasing dependency of components exposed to extreme environments on protective coatings results in more severe penalties in the case of a coating system failure. This emphasizes that reliability and consistency of protective coating systems are equally as important as their superior performance. By means of examples, this paper describes the effects of scatter in material properties resulting from manufacturing variations on coating life predictions. A strong foundation in process-property-performance correlations, as well as regular monitoring and control of the coating process, is essential for a robust and well-controlled coating process. Proprietary and/or commercially available diagnostic tools can help in achieving these goals, but their usage in industrial settings is still limited. Various key contributors to process variability are briefly discussed along with the limitations of existing process and product control methods. Other aspects that are important for product reliability and consistency in serial manufacturing, as well as advanced testing methodologies to simplify and enhance product inspection and improve objectivity, are briefly described.

  5. The Development and Preliminary Validation of a Rubric to Assess Medical Students' Written Summary Statements in Virtual Patient Cases.

    PubMed

    Smith, Sherilyn; Kogan, Jennifer R; Berman, Norman B; Dell, Michael S; Brock, Douglas M; Robins, Lynne S

    2016-01-01

    The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements. Between November 2011 and June 2014, four researchers independently coded 50 summary statements randomly selected from a large database of medical students' summary statements in virtual patient cases to each create an assessment rubric. Through an iterative process, they created a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined the interrater agreement, and Spearman rank-order correlations determined the correlations between rubric components. Researchers' comments describing their individual rating approaches were analyzed using content analysis. The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771). Interrater reliability for the entire rubric was acceptable (ICC 0.891; 95% confidence interval 0.859-0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in their application of the assessment rubric. This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students' clinical reasoning skills.

  6. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure-influence degrees of the system components are assessed using the adjacency matrix and its transpose, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, showing the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with its failure-influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
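
    The ranking step described here can be sketched with a power-iteration PageRank over the cascading-failure digraph. The code below is a generic illustration of that step only (a toy four-component graph with assumed propagation edges), not the paper's full evaluation method:

    ```python
    import numpy as np

    def pagerank(adj, damping=0.85, n_iter=100):
        """Failure-influence ranking: adj[i, j] = 1 if a failure of component i
        can propagate to component j (a directed cascading-failure graph)."""
        adj = np.asarray(adj, dtype=float)
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        # column-stochastic transition matrix; dangling nodes spread uniformly
        M = np.where(out_deg[:, None] > 0,
                     adj / np.maximum(out_deg, 1.0)[:, None],
                     1.0 / n).T
        r = np.full(n, 1.0 / n)
        for _ in range(n_iter):
            r = damping * M @ r + (1.0 - damping) / n
        return r / r.sum()

    # toy 4-component graph with assumed propagation paths
    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]])
    print(pagerank(adj).round(3))
    ```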

  7. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 3

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    Structural failure is rarely a "sudden death" type of event; such sudden failures may occur only under abnormal loadings like bomb or gas explosions and very strong earthquakes. In most cases, structures fail due to damage accumulated under normal loadings such as wind loads and dead and live loads. The consequence of cumulative damage will affect the reliability of surviving components and finally cause collapse of the system. The cumulative damage effects on system reliability under time-invariant loadings are of practical interest in structural design and therefore will be investigated in this study. The scope of this study is, however, restricted to the consideration of damage accumulation as the increase in the number of failed components due to the violation of their strength limits.

  8. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data are not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient, low-cost manner on a tight schedule.

  9. The Development of DNA Based Methods for the Reliable and Efficient Identification of Nicotiana tabacum in Tobacco and Its Derived Products

    PubMed Central

    Fan, Wei; Li, Rong; Li, Sifan; Ping, Wenli; Li, Shujun; Naumova, Alexandra; Peelen, Tamara; Yuan, Zheng; Zhang, Dabing

    2016-01-01

    Reliable methods are needed to detect the presence of tobacco components in tobacco products in order to effectively control smuggling and to classify tariffs and excises in the tobacco industry. In this study, two sensitive and specific DNA-based methods, a quantitative real-time PCR (qPCR) assay and a loop-mediated isothermal amplification (LAMP) assay, were developed for the reliable and efficient detection of the presence of tobacco (Nicotiana tabacum) in various tobacco samples and commodities. Both assays targeted the same sequence of the uridine 5′-monophosphate synthase (UMPS) gene, and their specificities and sensitivities were determined with various plant materials. Both the qPCR and LAMP methods were reliable and accurate in the rapid detection of tobacco components in various practical samples, including customs samples, reconstituted tobacco samples, and locally purchased cigarettes, showing high potential for application in tobacco identification, particularly in the special cases where the morphology or chemical composition of the tobacco has been disrupted. Therefore, combining both methods would facilitate not only the control of tobacco smuggling but also tariff classification and excise collection. PMID:27635142

  10. Reliability prediction of ontology-based service compositions using Petri net and time series models.

    PubMed

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
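
    The forecasting step of this framework can be sketched with a standard time-series library. The snippet below is illustrative only: the response-time history is synthetic, the ARMA(1,1) order is chosen arbitrarily, and the mapping of the forecasts into an NMSPN, which is the paper's contribution, is not shown.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(7)
    # synthetic history of one service component's response times, in ms (assumed)
    history = 120.0 + np.cumsum(rng.normal(0.0, 0.2, 200)) + rng.normal(0.0, 5.0, 200)

    model = ARIMA(history, order=(1, 0, 1))      # ARMA(1,1) is ARIMA with d = 0
    result = model.fit()
    predicted_times = result.forecast(steps=10)  # forecast future response times
    firing_rates = 1.0 / predicted_times         # candidate NMSPN firing rates
    print(firing_rates.round(4))
    ```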

  11. Chapter 15: Reliability of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Shuangwen; O'Connor, Ryan

    The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken, and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
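
    Life data analysis of the kind described in this chapter typically starts by fitting a lifetime distribution to times-to-failure extracted from maintenance records. The sketch below fits a two-parameter Weibull by maximum likelihood; the failure times are invented placeholders, not plant data:

    ```python
    import numpy as np
    from scipy import stats

    # times-to-failure of a mechanical component, in days (illustrative values)
    ttf = np.array([310., 455., 520., 610., 700., 745., 820., 910., 1005., 1190.])

    # fix the location at 0 to fit a two-parameter Weibull life model
    shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
    print(f"shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} days")

    # forecast: probability the component survives one more year of operation
    print("R(365 days) =", round(stats.weibull_min.sf(365, shape, loc=0, scale=scale), 3))
    ```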

  12. Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset

    NASA Astrophysics Data System (ADS)

    Eyre, T.; Van der Baan, M.

    2016-12-01

    Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data is usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms including those with high tensile components. The abilities of each geometry to constrain the true source mechanisms are therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and mechanisms are calculated and the results discussed, including the biases caused by the array geometry. The majority of the events are located within the target reservoir; however, a small, seemingly disconnected cluster of events appears 100 m above the reservoir.

  13. The Epidemiology of Transfusion-related Acute Lung Injury Varies According to the Applied Definition of Lung Injury Onset Time.

    PubMed

    Vande Vusse, Lisa K; Caldwell, Ellen; Tran, Edward; Hogl, Laurie; Dinwiddie, Steven; López, José A; Maier, Ronald V; Watkins, Timothy R

    2015-09-01

    Research that applies an unreliable definition for transfusion-related acute lung injury (TRALI) may draw false conclusions about its risk factors and biology. The effectiveness of preventive strategies may decrease as a consequence. However, the reliability of the consensus TRALI definition is unknown. To prospectively study the effect of applying two plausible definitions of acute respiratory distress syndrome onset time on TRALI epidemiology. We studied 316 adults admitted to the intensive care unit and transfused red blood cells within 24 hours of blunt trauma. We identified patients with acute respiratory distress syndrome, and defined acute respiratory distress syndrome onset time two ways: (1) the time at which the first radiographic or oxygenation criterion was met, and (2) the time both criteria were met. We categorized two corresponding groups of TRALI cases transfused in the 6 hours before acute respiratory distress syndrome onset. We used Cohen's kappa to measure agreement between the TRALI cases and implicated blood components identified by the two acute respiratory distress syndrome onset time definitions. In a nested case-control study, we examined potential risk factors for each group of TRALI cases, including demographics, injury severity, and characteristics of blood components transfused in the 6 hours before acute respiratory distress syndrome onset. Forty-two of 113 patients with acute respiratory distress syndrome were TRALI cases per the first acute respiratory distress syndrome onset time definition and 63 per the second definition. There was slight agreement between the two groups of TRALI cases (κ = 0.16; 95% confidence interval, -0.01 to 0.33) and between the implicated blood components (κ = 0.15, 95% confidence interval, 0.11-0.20). Age, Injury Severity Score, high plasma-volume components, and transfused plasma volume were risk factors for TRALI when applying the second acute respiratory distress syndrome onset time definition but not when applying the first definition. The epidemiology of TRALI varies when applying two plausible definitions of acute respiratory distress syndrome onset time to severely injured trauma patients. A TRALI definition that standardizes acute respiratory distress syndrome onset time might improve reliability and align efforts to understand epidemiology, biology, and prevention.
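
    The agreement statistic used in this study can be computed in a few lines. The sketch below implements Cohen's kappa from its definition as observed agreement corrected for chance agreement; the two label vectors are toy placeholders, not the study's case classifications:

    ```python
    import numpy as np

    def cohens_kappa(a, b):
        """Cohen's kappa for two raters (or definitions) over the same subjects."""
        a, b = np.asarray(a), np.asarray(b)
        p_observed = np.mean(a == b)
        labels = np.union1d(a, b)
        p_chance = sum(np.mean(a == k) * np.mean(b == k) for k in labels)
        return (p_observed - p_chance) / (1.0 - p_chance)

    # 1 = patient classified as a TRALI case under each onset-time definition
    def_one = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]
    def_two = [1, 1, 0, 0, 1, 1, 0, 1, 1, 0]
    print(round(cohens_kappa(def_one, def_two), 2))
    ```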

  14. Slow Crack Growth and Fatigue Life Prediction of Ceramic Components Subjected to Variable Load History

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2001-01-01

    Present capabilities of the NASA CARES/Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code has the capability to compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth (SCG) type failure conditions, CARES/Life can handle the cases of sustained and linearly increasing time-dependent loads, while for cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. In real applications, applied loads are rarely that simple, but rather vary with time in more complex ways, such as engine start up, shut down, and dynamic and vibrational loads. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. The objective of this paper is to demonstrate a methodology capable of predicting the time-dependent reliability of components subjected to transient thermomechanical loads that takes into account the change in material response with time. In this paper, the dominant delayed failure mechanism is assumed to be SCG. This capability has been added to the NASA CARES/Life code, which has also been modified to have the ability of interfacing with commercially available FEA codes executed for transient load histories. An example involving a ceramic exhaust valve subjected to combustion cycle loads is presented to demonstrate the viability of this methodology and the CARES/Life program.

  15. Systems engineering principles for the design of biomedical signal processing systems.

    PubMed

    Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo

    2011-06-01

    Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Different Approaches for Ensuring Performance/Reliability of Plastic Encapsulated Microcircuits (PEMs) in Space Applications

    NASA Technical Reports Server (NTRS)

    Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.

    2000-01-01

    Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art, devices has become a significant portion of the job for the parts engineer. Assembling a reliable high-performance electronic system which includes COTS components requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk involved to the total system. Currently, there are no industry standard procedures for accomplishing this risk mitigation. This paper will present the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent Laboratories. The JPL procedure involves primarily a tailored screening with an accelerated stress philosophy, while the APL procedure is primarily a lot qualification procedure. Both Laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.

  17. In-Flight Manual Electronics Repair for Deep-Space Missions

    NASA Technical Reports Server (NTRS)

    Pettegrew, Richard; Easton, John; Struk, Peter; Anderson, Eric

    2007-01-01

    Severe limitations on mass and volume available for spares on long-duration spaceflight missions will require electronics repair to be conducted at the component level, rather than at the sub-assembly level (referred to as Orbital Replacement Unit, or 'ORU'), as is currently the case aboard the International Space Station. Performing reliable component-level repairs in a reduced gravity environment by crew members will require careful planning, and some specialty tools and systems. Additionally, spacecraft systems must be designed to enable such repairs. This paper is an overview of a NASA project which examines all of these aspects of component level electronic repair. Results of case studies that detail how NASA, the U.S. Navy, and a commercial company currently approach electronics repair are presented, along with results of a trade study examining commercial technologies and solutions which may be used in future applications. Initial design recommendations resulting from these studies are also presented.

  18. Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.

    2005-01-01

    An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/ Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.

  19. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.

  20. The validity and reliability of the sixth-year internal medical examination administered at the King Abdulaziz University Medical College.

    PubMed

    Fallatah, Hind I; Tekian, Ara; Park, Yoon Soo; Al Shawa, Lana

    2015-02-01

    Exams are essential components of the assessment of medical students' knowledge and skills during their clinical years of study. This paper provides a retrospective analysis of validity evidence for the internal medicine component of the written and clinical exams administered in 2012 and 2013 at King Abdulaziz University's Faculty of Medicine. Students' scores for the clinical and written exams were obtained. Four faculty members (two senior members and two junior members) were asked to rate the exam questions, including MCQs and OSCEs, for evidence of content validity using a rating scale of 1-5 for each item. Cronbach's alpha was used to measure the internal consistency reliability. Correlations were used to examine the associations between different forms of assessment and groups of students. A total of 824 students completed the internal medicine course and took the exam. The numbers of rated questions were 320 and 46 for the MCQ and OSCE, respectively. Significant correlations were found between the MCQ section, the OSCE section, and the continuous assessment marks, which include 20 long-case presentations during the course; participation in daily rounds, clinical sessions and tutorials; the performance of simple procedures, such as IV cannulation and ABG extraction; and the student log book. Although the OSCE exam was reliable for the two groups that had taken the final clinical OSCE, the clinical long- and short-case exams were not reliable across the two groups that had taken the oral clinical exams. The correlation analysis showed a significant linear association between the raters with respect to evidence of content validity for both the MCQ and the OSCE (r = .219, P < .001 and r = .678, P < .001, respectively), and for internal structure validity (r = .241, P < .001 and r = .368, P = .023, respectively). Reliability measured using Cronbach's alpha was greater for assessments administered in 2013. The pattern of relationships between the MCQ and OSCE scores provides evidence of the validity of these measures for use in the evaluation of knowledge and clinical skills in internal medicine. The OSCE exam is more reliable than the short- and long-case clinical exams and requires less effort on the part of examiners and patients.

  1. Is computed tomography an accurate and reliable method for measuring total knee arthroplasty component rotation?

    PubMed

    Figueroa, José; Guarachi, Juan Pablo; Matas, José; Arnander, Magnus; Orrego, Mario

    2016-04-01

    Computed tomography (CT) is widely used to assess component rotation in patients with poor results after total knee arthroplasty (TKA). The purpose of this study was to simultaneously determine the accuracy and reliability of CT in measuring TKA component rotation. TKA components were implanted in dry-bone models and assigned to two groups. The first group (n = 7) had variable femoral component rotations, and the second group (n = 6) had variable tibial tray rotations. CT images were then used to assess component rotation. Accuracy of CT rotational assessment was determined by mean difference, in degrees, between implanted component rotation and CT-measured rotation. Intraclass correlation coefficient (ICC) was applied to determine intra-observer and inter-observer reliability. Femoral component accuracy showed a mean difference of 2.5° and the tibial tray a mean difference of 3.2°. There was good intra- and inter-observer reliability for both components, with a femoral ICC of 0.8 and 0.76, and tibial ICC of 0.68 and 0.65, respectively. CT rotational assessment accuracy can differ from true component rotation by approximately 3° for each component. It does, however, have good inter- and intra-observer reliability.

  2. The Chinese version of the Child and Adolescent Scale of Environment (CASE-C): validity and reliability for children with disabilities in Taiwan.

    PubMed

    Kang, Lin-Ju; Yen, Chia-Feng; Bedell, Gary; Simeonsson, Rune J; Liou, Tsan-Hon; Chi, Wen-Chou; Liu, Shu-Wen; Liao, Hua-Fang; Hwang, Ai-Wen

    2015-03-01

    Measurement of children's participation and environmental factors is a key component of the assessment in the new Disability Evaluation System (DES) in Taiwan. The Child and Adolescent Scale of Environment (CASE) was translated into Traditional Chinese (CASE-C) and used for assessing environmental factors affecting the participation of children and youth with disabilities in the DES. The aim of this study was to validate the CASE-C. Participants were 614 children and youth aged 6.0-17.9 years with disabilities, with the largest condition group comprised of children with intellectual disability (61%). Internal structure, internal consistency, test-retest reliability, convergent validity, and discriminant (known group) validity were examined using exploratory factor analyses, Cronbach's α coefficient, intra-class correlation coefficients (ICC), correlation analyses, and univariate ANOVAs. A three-factor structure (Family/Community Resources, Assistance/Attitude Supports, and Physical Design Access) of the CASE-C was produced with 38% variance explained. The CASE-C had adequate internal consistency (Cronbach's α=.74-.86) and test-retest reliability (ICCs=.73-.90). Children and youth with disabilities who had higher levels of severity of impairment encountered more environmental barriers and those experiencing more environmental problems also had greater restrictions in participation. The CASE-C scores were found to distinguish children on the basis of disability condition and impairment severity, but not on the basis of age or sex. The CASE-C is valid for assessing environmental problems experienced by children and youth with disabilities in Taiwan. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Implementing the undergraduate mini-CEX: a tailored approach at Southampton University.

    PubMed

    Hill, Faith; Kendall, Kathleen; Galbraith, Kevin; Crossley, Jim

    2009-04-01

    The mini-clinical evaluation exercise (mini-CEX) is widely used in the UK to assess clinical competence, but there is little evidence regarding its implementation in the undergraduate setting. This study aimed to estimate the validity and reliability of the undergraduate mini-CEX and discuss the challenges involved in its implementation. A total of 3499 mini-CEX forms were completed. Validity was assessed by estimating associations between mini-CEX score and a number of external variables, examining the internal structure of the instrument, checking competency domain response rates and profiles against expectations, and by qualitative evaluation of stakeholder interviews. Reliability was evaluated by overall reliability coefficient (R), estimation of the standard error of measurement (SEM), and from stakeholders' perceptions. Variance component analysis examined the contribution of relevant factors to students' scores. Validity was threatened by various confounding variables, including: examiner status; case complexity; attachment specialty; patient gender, and case focus. Factor analysis suggested that competency domains reflect a single latent variable. Maximum reliability can be achieved by aggregating scores over 15 encounters (R = 0.73; 95% confidence interval [CI] +/- 0.28 based on a 6-point assessment scale). Examiner stringency contributed 29% of score variation and student attachment aptitude 13%. Stakeholder interviews revealed staff development needs but the majority perceived the mini-CEX as more reliable and valid than the previous long case. The mini-CEX has good overall utility for assessing aspects of the clinical encounter in an undergraduate setting. Strengths include fidelity, wide sampling, perceived validity, and formative observation and feedback. Reliability is limited by variable examiner stringency, and validity by confounding variables, but these should be viewed within the context of overall assessment strategies.

  4. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets the quantitative quality requirement. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a Non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the model input of the NMSPN and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429

  5. A Design Heritage-Based Forecasting Methodology for Risk Informed Management of Advanced Systems

    NASA Technical Reports Server (NTRS)

    Maggio, Gaspare; Fragola, Joseph R.

    1999-01-01

    The development of next generation systems often carries with it the promise of improved performance, greater reliability, and reduced operational costs. These expectations arise from the use of novel designs, new materials, and advanced integration and production technologies intended to replace the functionality of the previous generation. However, the novelty of these nascent technologies is accompanied by a lack of operational experience and, in many cases, no actual testing as well. Therefore some of the enthusiasm surrounding most new technologies may be due to inflated aspirations arising from lack of knowledge rather than actual future expectations. This paper proposes a design heritage approach for improved reliability forecasting of advanced system components. The basis of the design heritage approach is to relate advanced system components to similar designs currently in operation. The demonstrated performance of these components can then be used to forecast the expected performance and reliability of comparable advanced technology components. In this approach, the greater the divergence of the advanced component designs from current systems, the higher the uncertainty that accompanies the associated failure estimates. Designers of advanced systems are faced with many difficult decisions. Among the most common and most difficult of these are decisions between design alternatives. Decision-makers have found these choices extremely difficult because they often involve a trade-off between a known, fielded design and a promising paper design. When it comes to expected reliability performance, the paper design always looks better, because it is on paper and it addresses all the known failure modes of the fielded design. On the other hand, there is a long, and sometimes very difficult, road between the promise of a paper design and its fulfillment, and sometimes the reliability promise is not fulfilled at all. Decision makers in advanced technology areas have always known to discount the performance claims of a design in proportion to its stage of development, and at times have preferred the more mature design over one of lesser maturity, even when the latter promises substantially better performance once fielded. As with broader measures of performance, this has also been true for projected reliability performance. Paper estimates of potential advances in design reliability are uncertain to a degree proportional to the maturity of the features proposed to secure those advances. This is especially true when performance-enhancing features in other areas are also planned to be part of the development program.

  6. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    NASA Astrophysics Data System (ADS)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches are the most vulnerable components in power electronics systems. One of the most common ways to address this reliability challenge is component-level redundant design, for which there are four possible configurations. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. That junction temperature is a function of (i) the ambient temperature, (ii) the power loss of the semiconductor switch and (iii) the thermal resistance of the heat sink. The results' sensitivity to each parameter is also investigated. The results show that under different conditions, different configurations have the higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
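
    The article's central observation, that the better configuration depends on the balance between open-circuit and short-circuit failures, can be reproduced with a small Monte Carlo experiment. The model below is a deliberately simplified sketch with assumed exponential lifetimes and invented rates (thermal effects are folded into the rates), not the article's analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def pair_mttf(lam_oc, lam_sc, series=True, n=200_000):
        """Monte Carlo MTTF of a two-switch redundant pair. Each switch fails
        open-circuit (OC) or short-circuit (SC), whichever comes first. A
        series pair tolerates one SC but no OC; a parallel pair the reverse."""
        t_oc = rng.exponential(1.0 / lam_oc, (n, 2))
        t_sc = rng.exponential(1.0 / lam_sc, (n, 2))
        t_fail = np.minimum(t_oc, t_sc)          # each switch's failure time
        is_oc = t_oc < t_sc                      # that switch's failure mode
        fatal = is_oc if series else ~is_oc      # mode the pair cannot tolerate
        # the pair dies at the first fatal failure, else at the second failure
        t_first_fatal = np.where(fatal, t_fail, np.inf).min(axis=1)
        return np.minimum(t_first_fatal, t_fail.max(axis=1)).mean()

    lam_oc, lam_sc = 1e-5, 4e-5   # assumed failure rates, per hour
    print("series pair MTTF:  ", round(pair_mttf(lam_oc, lam_sc, series=True)))
    print("parallel pair MTTF:", round(pair_mttf(lam_oc, lam_sc, series=False)))
    ```

    Sweeping the ratio of the two rates, which in practice shifts with junction temperature, flips which configuration wins, consistent with the qualitative conclusion above.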

  7. Fast and reliable obstacle detection and segmentation for cross-country navigation

    NASA Technical Reports Server (NTRS)

    Talukder, A.; Manduchi, R.; Rankin, A.; Matthies, L.

    2002-01-01

    Obstacle detection is one of the main components of the control system of autonomous vehicles. In the case of indoor/urban navigation, obstacles are typically defined as surface points that are higher than the ground plane. This characterization, however, cannot be used in cross-country and unstructured environments, where the notion of ground plane is often not meaningful.

  8. The GRASP 3: Graphical Reliability Analysis Simulation Program. Version 3: A users' manual and modelling guide

    NASA Technical Reports Server (NTRS)

    Phillips, D. T.; Manseur, B.; Foster, J. W.

    1982-01-01

    Alternate definitions of system failure create complex analyses for which analytic solutions are available only for simple, special cases. The GRASP methodology is a computer simulation approach for solving all classes of problems in which both failure and repair events are modeled according to the probability laws of the individual components of the system.
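
    In the same spirit, a minimal alternating failure/repair simulation for a single component looks like the following. This is a toy illustration of the simulation idea, not GRASP itself (GRASP supports arbitrary distributions and system-level failure definitions); the exponential distributions and parameters here are assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def simulated_availability(mttf, mttr, horizon=10_000.0, n_runs=500):
        """Alternating-renewal simulation: up (exponential time to failure),
        then down (exponential time to repair), repeated until the horizon."""
        fractions = []
        for _ in range(n_runs):
            t, up_time = 0.0, 0.0
            while t < horizon:
                ttf = rng.exponential(mttf)
                up_time += min(ttf, horizon - t)
                t += ttf
                if t >= horizon:
                    break
                t += rng.exponential(mttr)   # component is down while repaired
            fractions.append(up_time / horizon)
        return float(np.mean(fractions))

    # sanity check against the steady-state formula MTTF / (MTTF + MTTR) = 0.9
    print(simulated_availability(900.0, 100.0))
    ```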

  9. Computing Reliabilities Of Ceramic Components Subject To Fracture

    NASA Technical Reports Server (NTRS)

    Nemeth, N. N.; Gyekenyesi, J. P.; Manderscheid, J. M.

    1992-01-01

    CARES calculates fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. Program uses results from commercial structural-analysis programs (MSC/NASTRAN or ANSYS) to evaluate reliability of component in presence of inherent surface- and/or volume-type flaws. Computes measure of reliability by use of finite-element mathematical model applicable to multiple materials, in the sense that the model is made a function of the statistical characterizations of many ceramic materials. Reliability analysis uses element stress, temperature, area, and volume outputs obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. Written in FORTRAN 77.
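
    A hedged sketch of the two-parameter Weibull volume-flaw calculation that this class of programs builds on (illustrative of the method class, not the CARES implementation): P_f = 1 - exp(-sum_i V_i (sigma_i / sigma_0)^m), with element stresses and volumes taken from a finite-element run.

    ```python
    import numpy as np

    def fast_fracture_failure_probability(stresses, volumes, sigma_0, m):
        """Two-parameter Weibull volume-flaw model with tensile element
        stresses sigma_i (MPa), element volumes V_i (mm^3), scale parameter
        sigma_0 and Weibull modulus m (all values here are hypothetical)."""
        stresses = np.maximum(np.asarray(stresses, float), 0.0)  # ignore compression
        risk = np.sum(np.asarray(volumes, float) * (stresses / sigma_0) ** m)
        return 1.0 - np.exp(-risk)

    # Toy element data standing in for a structural-analysis output.
    pf = fast_fracture_failure_probability(
        stresses=[120.0, 250.0, 310.0, 90.0], volumes=[2.0, 1.5, 0.8, 3.0],
        sigma_0=550.0, m=10.0)
    print(f"failure probability: {pf:.3e}, reliability: {1 - pf:.6f}")
    ```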

  10. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    A problem at a company that produces refined sugar is that the production floor has not reached the target level of critical machine availability, because the machines frequently suffer damage (breakdown). This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical analysis of historical failure data is performed to identify the underlying distributions. The method provides the reliability, failure rate, and availability of a machine for a given maintenance interval schedule. Distribution tests on the time-between-failures (MTTF) data show that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. For the repair time (MTTR) data, the flexible hose component follows an exponential distribution, while the teflon cone lifting component again follows a Weibull distribution. For the flexible hose component on a replacement schedule of every 720 hours, the calculated reliability is 0.2451 and the availability is 0.9960, while for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the reliability is 0.4083 and the availability is 0.9927.
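
    As a sketch of this kind of calculation (with hypothetical distribution parameters, not the fitted values from the paper), the reliability at a replacement interval and the steady-state availability can be computed as follows:

    ```python
    from scipy.stats import lognorm, weibull_min

    # Hypothetical fitted time-to-failure distributions for the two components.
    flexible_hose_ttf = lognorm(s=0.9, scale=450.0)       # lognormal, hours
    teflon_cone_ttf = weibull_min(c=1.8, scale=1900.0)    # Weibull, hours

    for name, dist, interval, mttr in [
            ("flexible hose", flexible_hose_ttf, 720.0, 3.0),
            ("teflon cone lifting", teflon_cone_ttf, 1944.0, 14.0)]:
        reliability = dist.sf(interval)        # P(no failure before replacement)
        mttf = dist.mean()
        availability = mttf / (mttf + mttr)    # steady-state availability
        print(f"{name}: R({interval:.0f} h) = {reliability:.4f}, A = {availability:.4f}")
    ```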

  11. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but its reliability and availability must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems, each supporting a small number of servers, to eliminate backup power equipment, provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was performed to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
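
    A minimal sketch of the parallel-redundancy question (assumed per-unit availabilities, not the study's MATLAB program): find the smallest number of independent parallel units whose combined availability meets the 99.9999% target.

    ```python
    def units_needed(unit_availability, target=0.999999, max_units=50):
        """Smallest n with 1 - (1 - a)^n >= target for independent parallel units."""
        for n in range(1, max_units + 1):
            if 1.0 - (1.0 - unit_availability) ** n >= target:
                return n
        raise ValueError("target unreachable within max_units")

    for a in (0.99, 0.999, 0.9999):  # assumed single fuel cell availabilities
        print(f"unit availability {a}: {units_needed(a)} parallel units needed")
    ```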

  12. Validation of the Italian Version of the Caregiver Abuse Screen among Family Caregivers of Older People with Alzheimer's Disease.

    PubMed

    Melchiorre, Maria Gabriella; Di Rosa, Mirko; Barbabella, Francesco; Barbini, Norma; Lattanzio, Fabrizia; Chiatti, Carlos

    2017-01-01

    Introduction. Elder abuse is often a hidden phenomenon and, in many cases, screening practices are difficult to implement among older people with dementia. The Caregiver Abuse Screen (CASE) is a useful tool administered to family caregivers to detect their potential abusive behavior. Objectives. To validate the Italian version of the CASE tool in the context of family caregiving of older people with Alzheimer's disease (AD) and to identify risk factors for elder abuse in Italy. Methods. The CASE test was administered to 438 caregivers recruited in the Up-Tech study. Validity and reliability were evaluated using Spearman's correlation coefficients, principal-component analysis, and Cronbach's alphas. The association between the CASE and other variables potentially associated with elder abuse was also analyzed. Results. The factor analysis suggested the presence of a single factor with strong internal consistency (Cronbach's alpha = 0.86). The CASE score was strongly correlated with well-known risk factors for abuse. At the multivariate level, the main factors associated with the CASE total score were caregiver burden and AD-related behavioral disturbances. Conclusions. The Italian version of the CASE is a reliable and consistent screening tool for detecting the risk that family caregivers of people with AD are, or may become, perpetrators of abuse.
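
    A brief sketch of the internal-consistency statistic reported here (the generic formula on toy data, not the study's dataset): Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

    ```python
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: respondents x items matrix for a single-scale questionnaire."""
        item_scores = np.asarray(item_scores, float)
        k = item_scores.shape[1]
        item_vars = item_scores.var(axis=0, ddof=1)
        total_var = item_scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

    # Toy data: six respondents answering a four-item screen on a 1-5 scale.
    toy = np.array([[1, 2, 1, 2], [4, 5, 4, 4], [2, 2, 3, 2],
                    [5, 4, 5, 5], [3, 3, 3, 4], [1, 1, 2, 1]])
    print(f"Cronbach's alpha = {cronbach_alpha(toy):.2f}")
    ```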

  14. Understanding software faults and their role in software reliability modeling

    NASA Technical Reports Server (NTRS)

    Munson, John C.

    1994-01-01

    This study is a direct result of an on-going project to model the reliability of a large real-time control avionics system. In previous modeling efforts with this system, hardware reliability models were applied in modeling the reliability behavior of this system. In an attempt to enhance the performance of the adapted reliability models, certain software attributes were introduced in these models to control for differences between programs and also sequential executions of the same program. As the basic nature of the software attributes that affect software reliability becomes better understood in the modeling process, this information begins to have important implications for the software development process. A significant problem arises when raw attribute measures are to be used in statistical models as predictors, for example, of measures of software quality. This is because many of the metrics are highly correlated. Consider the two attributes: lines of code, LOC, and number of program statements, Stmts. In this case, it is quite obvious that a program with a high value of LOC probably will also have a relatively high value of Stmts. In the case of low-level languages, such as assembly language programs, there might be a one-to-one relationship between the statement count and the lines of code. When there is a complete absence of linear relationship among the metrics, they are said to be orthogonal or uncorrelated. Usually the lack of orthogonality is not serious enough to affect a statistical analysis. However, for the purposes of some statistical analyses, such as multiple regression, the software metrics are so strongly interrelated that the regression results may be ambiguous and possibly even misleading. Typically, it is difficult to estimate the unique effects of individual software metrics in the regression equation. The estimated values of the coefficients are very sensitive to slight changes in the data and to the addition or deletion of variables in the regression equation. Since most of the existing metrics have common elements and are linear combinations of these common elements, it seems reasonable to investigate the structure of the underlying common factors or components that make up the raw metrics. The technique we have chosen to explore this structure is a procedure called principal components analysis. Principal components analysis is a decomposition technique that may be used to detect and analyze collinearity in software metrics. When confronted with a large number of metrics measuring a single construct, it may be desirable to represent the set by some smaller number of variables that convey all, or most, of the information in the original set. Principal components are linear transformations of a set of random variables that summarize the information contained in the variables. The transformations are chosen so that the first component accounts for the maximal amount of variation of the measures of any possible linear transform; the second component accounts for the maximal amount of residual variation; and so on. The principal components are constructed so that they represent transformed scores on dimensions that are orthogonal. Through the use of principal components analysis, it is possible to have a set of highly related software attributes mapped into a small number of uncorrelated attribute domains. This definitively solves the problem of multi-collinearity in subsequent regression analysis.
There are many software metrics in the literature, but principal component analysis reveals that there are few distinct sources of variation, i.e. dimensions, in this set of metrics. It would appear perfectly reasonable to characterize the measurable attributes of a program with a simple function of a small number of orthogonal metrics each of which represents a distinct software attribute domain.
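
    A compact sketch of the procedure described (toy metrics with assumed collinearity; a real analysis would use the project's measured attributes): extract orthogonal components from highly correlated raw metrics before using them as regression predictors.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(42)

    # Toy raw metrics for 200 modules: LOC, statement count and cyclomatic
    # complexity, constructed to share a common "size" factor.
    size = rng.lognormal(5.0, 0.6, 200)
    loc = size * rng.normal(1.0, 0.05, 200)
    stmts = size * rng.normal(0.8, 0.05, 200)
    complexity = 0.02 * size + rng.normal(0.0, 1.0, 200)
    metrics = np.c_[loc, stmts, complexity]

    pca = PCA()
    scores = pca.fit_transform(StandardScaler().fit_transform(metrics))
    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    # Expect one dominant component; the orthogonal scores can then serve as
    # uncorrelated predictors in a subsequent reliability regression.
    ```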

  15. [Analysis of active components of evidence materials secured in the cases of drugs abuse associated with amphetamines and cannabis products].

    PubMed

    Wachowiak, Roman; Strach, Bogna

    2006-01-01

    The study takes advantage of presently available, effective physicochemical methods (isolation, crystallization, determination of melting point, TLC, GLC and UV spectrophotometry) for an objective and reliable qualitative and quantitative analysis of frequently abused drugs. The authors determined the conditions for qualitative and quantitative analysis of the active components of secured evidence materials containing amphetamine sulphate, methylamphetamine hydrochloride and 3,4-methylenedioxy-methamphetamine hydrochloride (MDMA, Ecstasy), as well as delta(9)-tetrahydrocannabinol (delta(9)-THC) as the active component of cannabis (marihuana, hashish). The usefulness of physicochemical tests of evidence materials for expert-opinion purposes is subject to detailed forensic toxicological interpretation.

  16. Artifact removal in the context of group ICA: a comparison of single-subject and group approaches

    PubMed Central

    Du, Yuhui; Allen, Elena A.; He, Hao; Sui, Jing; Wu, Lei; Calhoun, Vince D.

    2018-01-01

    Independent component analysis (ICA) has been widely applied to identify intrinsic brain networks from fMRI data. Group ICA computes group-level components from all data and subsequently estimates individual-level components to recapture inter-subject variability. However, the best approach to handle artifacts, which may vary widely among subjects, is not yet clear. In this work, we study and compare two ICA approaches for artifact removal. One approach, recommended in recent work by the Human Connectome Project, first performs ICA on individual subject data to remove artifacts, and then applies a group ICA on the cleaned data from all subjects. We refer to this approach as Individual ICA based artifacts Removal Plus Group ICA (IRPG). A second proposed approach, called Group Information Guided ICA (GIG-ICA), performs ICA on group data, then removes the group-level artifact components, and finally performs subject-specific ICAs using the group-level non-artifact components as spatial references. We used simulations to evaluate the two approaches with respect to the effects of data quality, data quantity, variable number of sources among subjects, and spatially unique artifacts. Resting-state test-retest datasets were also employed to investigate the reliability of functional networks. Results from simulations demonstrate that GIG-ICA outperforms IRPG, even when single-subject artifact removal is perfect and when individual subjects have spatially unique artifacts. Experiments using test-retest data suggest that GIG-ICA provides more reliable functional networks. Based on its high estimation accuracy, ease of implementation, and the high reliability of the resulting functional networks, we find GIG-ICA to be a promising approach. PMID:26859308
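
    A loose sketch of the reference-guided idea (plain FastICA plus correlation-based matching as a stand-in; GIG-ICA itself optimizes independence under spatial-reference constraints, which this does not reproduce):

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(7)

    # Toy "group data": two network sources plus one artifact, mixed into 10 channels.
    t = np.linspace(0, 8, 1000)
    sources = np.c_[np.sin(7 * t), np.sign(np.sin(3 * t)), rng.standard_normal(1000)]
    data = sources @ rng.standard_normal((3, 10))

    ica = FastICA(n_components=3, random_state=0)
    est_sources = ica.fit_transform(data)       # estimated group-level components

    # Keep only components that match the assumed non-artifact references.
    references = sources[:, :2]
    corr = np.abs(np.corrcoef(est_sources.T, references.T)[:3, 3:])
    keep = corr.max(axis=1) > 0.8
    cleaned = est_sources[:, keep] @ ica.mixing_[:, keep].T  # rebuild w/o artifact
    print(f"kept {keep.sum()} of 3 components; cleaned data shape {cleaned.shape}")
    ```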

  17. Network challenges for cyber physical systems with tiny wireless devices: a case study on reliable pipeline condition monitoring.

    PubMed

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Khan, Muhammad Farhan; Naeem, Muhammad; Anpalagan, Alagan

    2015-03-25

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields, from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environmental monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of the Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environmental, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with the sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node discovery, node mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning are reviewed.

  19. The Value of Large-Scale Randomised Control Trials in System-Wide Improvement: The Case of the Reading Catch-Up Programme

    ERIC Educational Resources Information Center

    Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo

    2017-01-01

    This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…

  20. Uncovertebral joint injury in cervical facet dislocation: the headphones sign.

    PubMed

    Palmieri, Francesco; Cassar-Pullicino, Victor N; Dell'Atti, Claudia; Lalam, Radhesh K; Tins, Bernhard J; Tyrrell, Prudencia N M; McCall, Iain W

    2006-06-01

    The purpose of our study is to demonstrate that uncovertebral mal-alignment is a reliable indirect sign of cervical facet joint dislocation. We examined the uncovertebral axial-plane alignment of 12 patients with unilateral or bilateral cervical facet joint dislocation (UCFJD and BCFJD, respectively), comparing its frequency to that of the reverse hamburger bun sign on CT and MR axial images. Of the seven cases with BCFJD, five clearly demonstrated the diagnostic reverse facet joint hamburger bun sign on CT and MR images, but in two cases this sign was not detectable. Of the five cases with UCFJD, four demonstrated the reverse hamburger bun sign on both CT and MRI; in one case the reverse hamburger bun sign was not seen adequately with either imaging modality, but the facet dislocation was identified on sagittal imaging. The uncovertebral mal-alignment was detected in all 12 cases. Normally, the two components of the uncovertebral joint enjoy a concentric relationship that in the axial plane is reminiscent of the relationship of headphones to the wearer's head; we name this appearance the 'headphones' sign. Radiologists should be aware of the headphones sign as a reliable indicator of facet joint dislocation on the axial imaging used in the assessment of cervical spine injuries.

  1. Psychometrics of a new questionnaire to assess glaucoma adherence: the Glaucoma Treatment Compliance Assessment Tool (an American Ophthalmological Society thesis).

    PubMed

    Mansberger, Steven L; Sheppler, Christina R; McClure, Tina M; Vanalstine, Cory L; Swanson, Ingrid L; Stoumbos, Zoey; Lambert, William E

    2013-09-01

    To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n = 58). Principal components analysis provided an assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach's alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P < .05) with adherence and a correlation coefficient (R^2) of .40. Test-retest reliability was 90%. The GTCAT shows excellent repeatability and content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence.

  2. Insight and Evidence Motivating the Simplification of Dual-Analysis Hybrid Systems into Single-Analysis Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.

    2018-01-01

    Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating the sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon for observations to be treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations compared to its hybrid counterpart should serve as evidence enough of the implausibility of such an expectation. This presentation assembles numerous pieces of anecdotal evidence to illustrate that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. More simply, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective. More generally, single-analysis systems avoid contradictions such as having to use one sub-component to generate performance diagnostics for another, possibly not fully consistent, component.

  3. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. Stack reliability is directly determined by component reliability, which is affected by the material properties and the contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We investigated the influence of the parameter variation coefficient on the probability distribution of the contact stress using an equivalent stiffness model and the first-order second-moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained with the stress-strength interference model. To achieve the optimal contact stress between the contacting components, the component thickness and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structures for highly reliable fuel cell stacks.
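
    A small sketch of the stress-strength interference model invoked here (normal stress and strength assumed for illustration; the paper's equivalent-stiffness and first-order second-moment machinery is not reproduced): reliability is R = P(strength > stress) = Phi((mu_S - mu_L) / sqrt(sigma_S^2 + sigma_L^2)).

    ```python
    from math import sqrt
    from scipy.stats import norm

    def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
        """R = P(strength > stress) for independent normal strength and stress."""
        beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
        return norm.cdf(beta)

    # Hypothetical gasket contact stress (MPa) under growing clamping-force scatter.
    for cov in (0.05, 0.10, 0.20):  # variation coefficient of the contact stress
        r = interference_reliability(2.0, 0.15, 1.2, cov * 1.2)
        print(f"stress c.o.v. {cov:.2f}: reliability {r:.5f}")
    ```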

  4. Measuring occupational balance and its relationship to perceived stress and health: Mesurer l'équilibre occupationnel et sa relation avec le stress perçus et la santé.

    PubMed

    Yu, Yu; Manku, Mandeep; Backman, Catherine L

    2018-04-01

    There is an assumption that occupational balance is integrally related to health and well-being. This study aimed to investigate the test-retest reliability of the English-translated Occupational Balance Questionnaire (OBQ), its relationship to measures of health (Short Form Health Survey-36 Version 2.0 [SF-36v2]) and stress (Perceived Stress Scale-10; PSS-10), and demographic differences in OBQ scores in Canadian adults. Test-retest reliability (2 weeks) was assessed using intraclass correlation (ICC) coefficients. Online surveys from 86 adults were analyzed using descriptive, correlational, and t test statistics. OBQ test-retest reliability was ICC = 0.74 (95% CI [0.34, 0.90]; p = .003) when excluding an influential case (n = 20). OBQ correlations with the PSS-10 were r = -.72; with the SF-36v2 Mental Component Score, r = .65; and with the Physical Component Score, r = .31; all p < .001. Age and gender had no impact on OBQ scores. Findings help elucidate relationships among health, stress, and occupational balance; however, further psychometric testing is warranted before using the OBQ for clinical purposes.

  5. Fatigue Damage Spectrum calculation in a Mission Synthesis procedure for Sine-on-Random excitations

    NASA Astrophysics Data System (ADS)

    Angeli, Andrea; Cornelis, Bram; Troncossi, Marco

    2016-09-01

    In many real-life environments, certain mechanical and electronic components may be subjected to Sine-on-Random vibrations, i.e. excitations composed of random vibrations superimposed on deterministic (sinusoidal) contributions, in particular sine tones due to some rotating parts of the system (e.g. helicopters, engine-mounted components,...). These components must be designed to withstand the fatigue damage induced by the “composed” vibration environment, and qualification tests are advisable for the most critical ones. In the case of an accelerated qualification test, a proper test tailoring which starts from the real environment (measured vibration signals) and which preserves not only the accumulated fatigue damage but also the “nature” of the excitation (i.e. sinusoidal components plus random process) is important to obtain reliable results. In this paper, the classic time domain approach is taken as a reference for the comparison of different methods for the Fatigue Damage Spectrum (FDS) calculation in case of Sine-on-Random vibration environments. Then, a methodology to compute a Sine-on-Random specification based on a mission FDS is proposed.
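
    As a hedged illustration of the damage bookkeeping behind an FDS (a single response frequency, an inverse-power-law S-N curve N = C * S^(-b) and Miner summation; the narrowband Rayleigh formula stands in for the random part, and none of this is the paper's specific method):

    ```python
    import numpy as np
    from scipy.special import gamma

    def sine_damage(amplitude, freq_hz, duration_s, b, C):
        """Miner damage of a pure sine tone: (cycles) * S^b / C."""
        return freq_hz * duration_s * amplitude**b / C

    def narrowband_random_damage(rms, freq_hz, duration_s, b, C):
        """Narrowband Gaussian response with Rayleigh-distributed peaks:
        E[D] = (nu0 * T / C) * (sqrt(2) * rms)^b * Gamma(1 + b/2)."""
        return freq_hz * duration_s / C * (np.sqrt(2) * rms)**b * gamma(1 + b / 2)

    b, C = 6.0, 1e12      # assumed S-N exponent and constant
    T, f0 = 3600.0, 50.0  # one hour at an assumed 50 Hz resonance
    total = sine_damage(20.0, f0, T, b, C) + narrowband_random_damage(8.0, f0, T, b, C)
    print(f"combined Miner damage index: {total:.3e}")
    # Simply adding the two contributions ignores sine-random interaction, which
    # is precisely what dedicated Sine-on-Random FDS methods account for.
    ```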

  6. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability even when few or no failures have been recorded, something that is not possible with point estimates from the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures that consider the entire failure record across all design stages has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigation considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
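
    A hedged miniature of the flavor of estimator discussed (a conjugate gamma-Poisson update in which the earlier design stage informs the prior for the present stage; the bivariate-Poisson and Consael-process models of the paper are not reproduced):

    ```python
    from math import exp
    from scipy.stats import gamma

    # Stage 1 (previous design): 4 failures in 2000 h of test.
    # Stage 2 (present design): 0 failures in 500 h so far.
    prior_a, prior_b = 0.5, 100.0  # weak assumed baseline prior on the rate (1/h)

    # Posterior after stage 1, then discounted (factor 0.5, assumed) before reuse,
    # since the redesign shares only part of its heritage with stage 1.
    a1, b1 = prior_a + 4, prior_b + 2000.0
    a2_prior, b2_prior = 0.5 * a1, 0.5 * b1

    # Posterior for the present design, combining heritage prior and new data.
    a2, b2 = a2_prior + 0, b2_prior + 500.0
    posterior = gamma(a2, scale=1.0 / b2)
    rate = posterior.mean()
    print(f"posterior mean failure rate: {rate:.2e} per hour")
    print(f"reliability over a 100 h mission at that rate: {exp(-rate * 100):.4f}")
    ```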

  7. Prospective very young asteroid pairs

    NASA Astrophysics Data System (ADS)

    Galád, A.; Vokrouhlický, D.; Zizka, J.

    2014-07-01

    Several tens of asteroid pairs can be discerned from the background main-belt asteroids. The majority of them are thought to have formed within only the last few 10^6 yr. The youngest recognized pairs formed more than ≈ 10 kyr ago. As some details of pair formation are still not understood well, the study of young pairs is of great importance, mainly because the conditions at the time of pair formation can be deduced much more reliably for young pairs. For example, space weathering on the surfaces of the components, or changes in their rotational properties (spin rates, tumbling, coordinates of the rotational pole), could be negligible since the formation of young pairs. Also, possible strong perturbations of pair formation by main-belt bodies can be reliably studied only for extremely young pairs. Some pairs can quickly blend in with the background asteroids, so even the frequency of asteroid pair formation could be determined more reliably based on young pairs (though only after a statistically significant sample is available). In our regular search for young pairs in the growing asteroid database, only multiopposition asteroids with very similar orbital and proper elements are investigated. Every pair component is represented by a number of clones within the orbital uncertainties, drifting in semimajor axis due to the Yarkovsky effect. We found that, if the previously unrecognized pairs (87887) 2000 SS_{286} - 2002 AT_{49} and (355258) 2007 LY_{4} - 2013 AF_{40} formed at the recent very close approach of their components, they could become the youngest known pairs. In both cases, the relative encounter velocities of the components were only ~ 0.1 m s^{-1}. However, the minimum distances between some clones are too large, and a few clones of the latter pair did not encounter each other recently (within ≈ 10 kyr). The age of some prospective young pairs cannot be determined reliably without improved orbital properties (e.g., the second component of the pair (320025) 2007 DT_{76} - 2007 DP_{16}), because some components recently suffered repeated close approaches to Ceres or other large main-belt perturbers. In general, the uncertainties in age estimation can be heavily reduced once the physical properties (e.g., sense of rotation, shape, size, binarity) of the pair components are determined.

  8. Design of preventive maintenance system using the reliability engineering and maintenance value stream mapping methods in PT. XYZ

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Panjaitan, N.; Angelita, S.

    2018-02-01

    PT. XYZ is a privately owned company engaged in processing rubber into crumb rubber. Production is supported by a number of machines and interacting pieces of equipment intended to achieve optimal productivity. The machine types used in the production process are the Conveyor Breaker, Breaker, Rolling Pin, Hammer Mill, Mill Roll, Conveyor, Shredder Crumb, and Dryer. The maintenance system at PT. XYZ is corrective maintenance, i.e. repairing or replacing engine components only after a breakdown occurs. Replacing components correctively forces the machine to stop operating while production is in progress, so production time is lost while the operator replaces the damaged components. This lost production time means production targets are not reached and leads to high loss costs: the cost for all components is Rp 4,088,514,505, which is very high for maintaining a single Mill Roll machine. PT. XYZ therefore needs preventive maintenance, i.e. scheduling component replacements and improving maintenance efficiency. The methods used are Reliability Engineering and Maintenance Value Stream Mapping (MVSM). The data needed in this research are the time intervals between component failures, opportunity cost, labor cost, component cost, corrective repair time, preventive repair time, Mean Time To Opportunity (MTTO), Mean Time To Repair (MTTR), and Mean Time To Yield (MTTY). In this research, the critical components of the Mill Roll machine are the Spier, Bushing, Bearing, Coupling and Roll. The damage distribution, reliability, MTTF, cost of failure, cost of prevention, current state map, and future state map are determined so that the replacement time with the lowest maintenance cost can be set for each critical component and a Standard Operation Procedure (SOP) can be prepared. For the critical components, the replacement interval for the Spier component is 228 days with a reliability of 0.503171, for the Bushing component 240 days with a reliability of 0.36861, for the Bearing component 202 days with a reliability of 0.503058, for the Coupling component 247 days with a reliability of 0.50108, and for the Roll component 301 days with a reliability of 0.373525. The results show that the maintenance cost decreases from Rp 300,688,114 to Rp 244,384,371 in moving from corrective to preventive maintenance, while maintenance efficiency increases under preventive maintenance: for the Spier component from 54.0540541% to 74.07407%, the Bushing component from 52.3809524% to 68.75%, the Bearing component from 40% to 52.63158%, the Coupling component from 60.9756098% to 71.42857%, and the Roll component from 64.516129% to 74.7663551%.
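
    A compact sketch of the preventive-replacement trade-off underlying such schedules (a generic age-replacement cost-rate model with an assumed Weibull fit, not the paper's MVSM calculations): choose the interval t minimizing C(t) = [C_p R(t) + C_f (1 - R(t))] / integral_0^t R(u) du.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import weibull_min

    ttf = weibull_min(c=2.2, scale=300.0)  # assumed fitted Weibull, days
    C_p, C_f = 1.0, 6.0                    # relative preventive vs corrective cost

    def cost_rate(t):
        """Expected cost per day under age replacement at interval t."""
        expected_cycle_length = quad(ttf.sf, 0, t)[0]
        expected_cycle_cost = C_p * ttf.sf(t) + C_f * ttf.cdf(t)
        return expected_cycle_cost / expected_cycle_length

    candidates = np.arange(30, 600, 5)
    best = min(candidates, key=cost_rate)
    print(f"optimal replacement interval ~ {best} days, "
          f"reliability at that interval = {ttf.sf(best):.3f}")
    ```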

  9. Effect of interaction between dynapenic component of the European working group on sarcopenia in older people sarcopenia criteria and obesity on activities of daily living in the elderly.

    PubMed

    Kim, Yeon-Pyo; Kim, Sun; Joh, Ju-Youn; Hwang, Hwan-Sik

    2014-05-01

    To examine the effects of the interaction between the dynapenic component of the European Working Group on Sarcopenia in Older People (EWGSOP) sarcopenia criteria and obesity (dynapenia × obesity) on activities of daily living (ADL) in older participants. Cross-sectional analysis of the Validity and Reliability of the Korean Frailty Index and the Validity and Reliability of the Kaigo-Yobo Checklist in Korean Elderly studies. Six welfare facilities operated by the government in South Korea. Four hundred eighty-seven community-dwelling individuals (157 males, 330 females) >65 years of age. The dynapenic component of EWGSOP sarcopenia was defined as usual gait speed <0.8 m/s or grip strength below the cut-off value (male <25.3 kg, female <12.0 kg). Obesity was defined as body mass index ≥27.5 kg/m(2). ADL were assessed using the Barthel index. There were 14 obese cases with the dynapenic component (2 males, 12 females) among the 487 participants. The interaction of dynapenia and obesity was significant on multivariate generalized linear model analysis (P = .015). The combination of the dynapenic component of EWGSOP sarcopenia and obesity in the elderly is associated with ADL through multiplicative rather than additive interactions. Copyright © 2014 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  10. Space flight requirements for fiber optic components: qualification testing and lessons learned

    NASA Astrophysics Data System (ADS)

    Ott, Melanie N.; Jin, Xiaodan Linda; Chuska, Richard; Friedberg, Patricia; Malenab, Mary; Matuszeski, Adam

    2006-04-01

    "Qualification" of fiber optic components holds a very different meaning than it did ten years ago. In the past, qualification meant extensive prolonged testing and screening that led to a programmatic method of reliability assurance. For space flight programs today, the combination of using higher performance commercial technology, with shorter development schedules and tighter mission budgets makes long term testing and reliability characterization unfeasible. In many cases space flight missions will be using technology within years of its development and an example of this is fiber laser technology. Although the technology itself is not a new product the components that comprise a fiber laser system change frequently as processes and packaging changes occur. Once a process or the materials for manufacturing a component change, even the data that existed on its predecessor can no longer provide assurance on the newer version. In order to assure reliability during a space flight mission, the component engineer must understand the requirements of the space flight environment as well as the physics of failure of the components themselves. This can be incorporated into an efficient and effective testing plan that "qualifies" a component to specific criteria defined by the program given the mission requirements and the component limitations. This requires interaction at the very initial stages of design between the system design engineer, mechanical engineer, subsystem engineer and the component hardware engineer. Although this is the desired interaction what typically occurs is that the subsystem engineer asks the components or development engineers to meet difficult requirements without knowledge of the current industry situation or the lack of qualification data. This is then passed on to the vendor who can provide little help with such a harsh set of requirements due to high cost of testing for space flight environments. This presentation is designed to guide the engineers of design, development and components, and vendors of commercial components with how to make an efficient and effective qualification test plan with some basic generic information about many space flight requirements. Issues related to the physics of failure, acceptance criteria and lessons learned will also be discussed to assist with understanding how to approach a space flight mission in an ever changing commercial photonics industry.

  12. The factorial reliability of the Middlesex Hospital Questionnaire in normal subjects.

    PubMed

    Bagley, C

    1980-03-01

    The internal reliability of the Middlesex Hospital Questionnaire and its component subscales has been checked by means of principal components analyses of data on 256 normal subjects. The subscales (with the possible exception of Hysteria) were found to contribute to the general underlying factor of psychoneurosis. In general, the principal components analysis points to the reliability of the subscales, despite some item overlap.

  13. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, A.; Duffy, S. F.; Gyekenyesi, J. P.

    1992-01-01

    An updated version of the integrated design program C/CARES (composite ceramic analysis and reliability evaluation of structures) was developed for the reliability evaluation of CMC laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing easy implementation with various finite-element programs. The new interface program for the finite-element code MARC also includes the option of using hybrid laminates and allows for variations in temperature fields throughout the component.

  14. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  15. Reliability considerations of a fuel cell backup power system for telecom applications

    NASA Astrophysics Data System (ADS)

    Serincan, Mustafa Fazil

    2016-03-01

    A commercial fuel cell backup power unit is tested in real-life operating conditions at a base station of a Turkish telecom operator. The fuel cell system responds successfully to 256 of 260 electric power outages, providing the required power to the base station; the reliability of the fuel cell backup power unit is thus found to be 98.5% at the system level. In addition, a qualitative reliability analysis at the component level is carried out. Implications of the power management algorithm for reliability are discussed. Moreover, integration of the backup power unit into the base station ecosystem is reviewed in the context of reliability, and the impact of inverter design on the stability of the output power is outlined. Significant current harmonics are encountered when a generic inverter is used; however, the ripples are attenuated significantly when a custom-designed inverter is used. Further, fault conditions are considered for real-world case studies, such as running out of hydrogen, a malfunction in the system, or an unprecedented operating scheme. Some design guidelines are suggested for hybridization of the backup power unit for uninterrupted operation.

  16. The replacement of dry heat in generic reliability assurance requirements for passive optical components

    NASA Astrophysics Data System (ADS)

    Ren, Xusheng; Qian, Longsheng; Zhang, Guiyan

    2005-12-01

    In accordance with Generic Reliability Assurance Requirements for Passive Optical Components GR-1221-CORE (Issue 2, January 1999), reliability determination tests were carried out on various kinds of passive optical components intended for use in uncontrolled environments. The test conditions of the High Temperature Storage (Dry Heat) Test and the Damp Heat Test are identical except for the humidity condition. In order to save test time and cost, the replacement of the Dry Heat test is discussed on the basis of a series of comparative tests. Considering the failure mechanisms of passive optical components under dry heat and damp heat, comparative dry heat and damp heat tests were performed on passive optical components (including DWDM, CWDM, Coupler, Isolator and mini Isolator), and the test results for the isolator are listed. Telcordia testing exercises not only the reliability of the passive optical components but also the patience of the experimenter: its cost in money, manpower and material resources, and especially in time, is a heavy burden for the company. After a series of tests, we find that damp heat can faithfully test the reliability of passive optical components, and that equipment manufacturers, in agreement with component manufacturers, could omit the dry heat test if the damp heat test is performed first and passed.

  17. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.

  18. A Case Study of the Alignment between Curriculum and Assessment in the New York State Earth Science Standards-Based System

    ERIC Educational Resources Information Center

    Contino, Julie

    2013-01-01

    In a standards-based system, it is important for all components of the system to align in order to achieve the intended goals. No Child Left Behind law mandates that assessments be fully aligned with state standards, be valid, reliable and fair, be reported to all stakeholders, and provide evidence that all students in the state are meeting the…

  19. Multi-criteria decision making development of ion chromatographic method for determination of inorganic anions in oilfield waters based on artificial neural networks retention model.

    PubMed

    Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko

    2012-02-24

    This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield waters, since their composition often differs significantly from the average (in the concentration of components and/or the matrix). Fast and reliable method development therefore has to be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision making strategy. The criteria used were: the maximal value of the objective functions, the maximal robustness of the separation method, the minimal analysis time, and the maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested through validation of its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Improvements of vacuum system in J-PARC 3 GeV synchrotron

    NASA Astrophysics Data System (ADS)

    Kamiya, J.; Hikichi, Y.; Namekawa, Y.; Takeishi, K.; Yanagibashi, T.; Kinsho, M.; Yamamoto, K.; Sato, A.

    2017-07-01

    The RCS vacuum system has been upgraded since the completion of its construction, with the objectives of both better vacuum quality and higher component reliability. For better vacuum quality, (1) the pressure of the injection beam line was improved to prevent the H- beam from converting to H0; (2) leakage in the beam injection area due to thermal expansion was eliminated by applying the adequate torque to the clamps; (3) a new in-situ degassing method for the kicker magnet was developed. For higher component reliability, (1) a considerable number of fluoroelastomer seals were exchanged for metal seals with low-spring-constant bellows and light clamps; (2) a TMP controller for long cables was developed to prevent controller failure caused by severe electrical noise; (3) a number of TMPs were installed in place of ion pumps in the RF cavity section as insurance against pump trouble.

  1. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  2. Clinically orientated classification incorporating shoulder balance for the surgical treatment of adolescent idiopathic scoliosis.

    PubMed

    Elsebaie, H B; Dannawi, Z; Altaf, F; Zaidan, A; Al Mukhtar, M; Shaw, M J; Gibson, A; Noordeen, H

    2016-02-01

    The achievement of shoulder balance is an important measure of successful scoliosis surgery, yet no previously described classification system has taken shoulder balance into account. We propose a simple classification system for AIS based on two components: the curve type and the shoulder level. Altogether, three curve types are defined according to the size and location of the curves, and each curve pattern is subdivided into type A or B depending on the shoulder level. A retrospective analysis of the radiographs of 232 consecutive cases of AIS patients treated surgically between 2005 and 2009 was performed. Three major types and six subtypes were identified: type I accounted for 30%, type II for 28% and type III for 42%. The retrospective analysis showed that three patients developed a decompensation requiring extension of the fusion, and one case developed worsening of shoulder balance requiring further surgery. The classification was tested for interobserver reproducibility and intraobserver reliability: the mean kappa coefficients for interobserver reproducibility ranged from 0.89 to 0.952, while the mean kappa value for intraobserver reliability was 0.964, indicating good-to-excellent reliability. The treatment algorithm guides the spinal surgeon to achieve optimal curve correction and postoperative shoulder balance whilst fusing the smallest number of spinal segments. The high interobserver reproducibility and intraobserver reliability make it an invaluable tool for describing scoliosis curves in everyday clinical practice.
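
    A short sketch of the agreement statistic reported above (generic Cohen's kappa on a toy confusion matrix, not the study's ratings): kappa = (p_o - p_e) / (1 - p_e).

    ```python
    import numpy as np

    def cohens_kappa(confusion):
        """Cohen's kappa from a confusion matrix (rows: rater 1, cols: rater 2)."""
        confusion = np.asarray(confusion, float)
        n = confusion.sum()
        p_observed = np.trace(confusion) / n
        p_expected = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Toy data: two observers assigning 60 radiographs to curve types I-III.
    toy = [[18, 2, 0],
           [1, 15, 3],
           [0, 2, 19]]
    print(f"kappa = {cohens_kappa(toy):.3f}")  # ~0.80, substantial agreement
    ```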

  3. Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei

    2018-01-01

    The propulsion system is one of the important subsystems of a satellite, and its performance directly affects the satellite's service life, attitude control and reliability. The paper analyzes the sealing principle of the bolted joint ball head component of a satellite propulsion system and discusses the compatibility of anhydrous hydrazine with the component, the influence of the ground environment on the sealing performance of bolted joint ball heads, and material failure caused by the environment. The analysis shows that the sealing reliability of the bolted joint ball head component is good and that the influence of the above three aspects on its sealing can be ignored.

  4. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that rely on failure rates derived from similar equipment or on expert judgment alone.

  5. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
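
    The paper's perturbation/Edgeworth machinery is not reproduced in this record; as a point of reference, a plain Monte Carlo estimate of failure probability together with a finite-difference sensitivity to a distribution mean can be sketched as follows (the limit state and all numbers are invented for illustration).

        # Monte Carlo baseline for failure probability and a parameter
        # sensitivity (illustrative limit state, not the paper's method).
        import numpy as np

        def failure_prob(mu_strength, n=200_000, seed=0):
            rng = np.random.default_rng(seed)  # common random numbers steady the FD
            strength = rng.normal(mu_strength, 30.0, n)  # assumed strength, MPa
            stress = rng.normal(300.0, 40.0, n)          # assumed stress, MPa
            return np.mean(strength - stress < 0.0)      # Pf = P(g = S - s < 0)

        pf = failure_prob(450.0)
        dmu = 5.0  # finite-difference step on the strength mean
        dpf_dmu = (failure_prob(450.0 + dmu) - failure_prob(450.0 - dmu)) / (2 * dmu)
        print(f"Pf = {pf:.5f}, dPf/dmu = {dpf_dmu:.2e} per MPa")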

  6. Reliable harvest of a dorsal scapular artery perforator flap by augmenting its perfusion.

    PubMed

    Kim, So-Young; Lee, Kyeong-Tae; Mun, Goo-Hyun

    2016-02-01

    Despite confirmation of a reliable perforasome in the dorsal scapular artery in an anatomic study, a true perforator flap has not been recommended in previous clinical studies because of concerns regarding insufficient perfusion in the distal region. In this report, we present two cases of reconstruction for occipital defects caused by tumor extirpation using pedicled dorsal scapular artery perforator flaps without a muscle component. To secure the perfusion of the dorsal scapular artery perforator flap, inclusion of an additional perforator was attempted for perfusion augmentation. The second dorsal scapular artery perforator was harvested in one case. In an additional case, the sixth dorsal intercostal artery perforator with a branch that directly connected with the dorsal scapular artery within the trapezius muscle was additionally harvested. The flaps survived without any perfusion-related complications, including tip necrosis, and no donor site morbidities were observed. We suggest that a perfusion augmented dorsal scapular artery perforator flap by harvesting multiple perforators could be a safe and useful alternative for reconstructive surgery of head and neck defects. © 2014 Wiley Periodicals, Inc.

  7. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
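
    SRFYDO's model itself is not public in this record, but the generic Bayesian building block it describes, propagating component-level posteriors to a series-system reliability estimate with uncertainty, is easy to sketch (the priors and test counts below are invented):

        # Series-system reliability from component Beta posteriors (generic
        # sketch, not SRFYDO's actual model; test counts are invented).
        import numpy as np

        rng = np.random.default_rng(1)
        component_tests = [(48, 50), (29, 30), (97, 100)]  # (successes, trials)

        # Beta(1, 1) prior assumed for each component reliability.
        draws = [rng.beta(1 + s, 1 + n - s, size=100_000) for s, n in component_tests]

        system = np.prod(draws, axis=0)  # series system: every component must work
        lo, hi = np.percentile(system, [5, 95])
        print(f"system reliability: mean {system.mean():.3f}, 90% interval {lo:.3f}-{hi:.3f}")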

  8. Psychometrics of A New Questionnaire to Assess Glaucoma Adherence: The Glaucoma Treatment Compliance Assessment Tool (An American Ophthalmological Society Thesis)

    PubMed Central

    Mansberger, Steven L.; Sheppler, Christina R.; McClure, Tina M.; VanAlstine, Cory L.; Swanson, Ingrid L.; Stoumbos, Zoey; Lambert, William E.

    2013-01-01

    Purpose: To report the psychometrics of the Glaucoma Treatment Compliance Assessment Tool (GTCAT), a new questionnaire designed to assess adherence with glaucoma therapy. Methods: We developed the questionnaire according to the constructs of the Health Belief Model. We evaluated the questionnaire using data from a cross-sectional study with focus groups (n = 20) and a prospective observational case series (n=58). Principal components analysis provided assessment of construct validity. We repeated the questionnaire after 3 months for test-retest reliability. We evaluated predictive validity using an electronic dosing monitor as an objective measure of adherence. Results: Focus group participants provided 931 statements related to adherence, of which 88.7% (826/931) could be categorized into the constructs of the Health Belief Model. Perceived barriers accounted for 31% (288/931) of statements, cues-to-action 14% (131/931), susceptibility 12% (116/931), benefits 12% (115/931), severity 10% (91/931), and self-efficacy 9% (85/931). The principal components analysis explained 77% of the variance with five components representing Health Belief Model constructs. Reliability analyses showed acceptable Cronbach’s alphas (>.70) for four of the seven components (severity, susceptibility, barriers [eye drop administration], and barriers [discomfort]). Predictive validity was high, with several Health Belief Model questions significantly associated (P <.05) with adherence and a correlation coefficient (R2) of .40. Test-retest reliability was 90%. Conclusion: The GTCAT shows excellent repeatability, content, construct, and predictive validity for glaucoma adherence. A multisite trial is needed to determine whether the results can be generalized and whether the questionnaire accurately measures the effect of interventions to increase adherence. PMID:24072942

  9. A Simulation on Organizational Communication Patterns During a Terrorist Attack

    DTIC Science & Technology

    2008-06-01

    and the Air Support Headquarters. The call is created at the time of attack, and it automatically includes a request for help. Reliability of...communication conditions. 2. Air Support call: This call is produced for just the Headquarters of the Air Component, only in case of armed attacks. The request can...estimated speed of armored vehicles in combat areas (West-Point Organization, 2002). When a call for air support is received, an information

  10. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
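
    The Multi-Response First Order Second Moment idea can be illustrated compactly: propagate input means and covariance through a linearized vector of responses. The response functions and numbers below are invented; the framework itself pairs this step with efficient sampling of the response analyses.

        # First-order second-moment propagation for a vector of responses
        # (stand-in for the Multi-Response FOSM model; functions invented).
        import numpy as np

        def g(x):
            """Two invented component responses of three input parameters."""
            return np.array([x[0] * x[1] - x[2],       # e.g. a stress margin
                             x[0] + 0.5 * x[2] ** 2])  # e.g. a deflection

        mean_x = np.array([2.0, 3.0, 1.0])
        cov_x = np.diag([0.04, 0.09, 0.01])  # assumed input covariance

        # Numerical Jacobian of g at the input mean.
        eps = 1e-6
        J = np.zeros((2, 3))
        for j in range(3):
            dx = np.zeros(3)
            dx[j] = eps
            J[:, j] = (g(mean_x + dx) - g(mean_x - dx)) / (2 * eps)

        mean_y = g(mean_x)       # first-order mean of the responses
        cov_y = J @ cov_x @ J.T  # first-order joint covariance
        print(mean_y)
        print(cov_y)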

  11. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, of system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on PM techniques by introducing a set of guidelines by which to evaluate system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
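
    The core update such a tool performs for pass/fail component data reduces, in the conjugate case, to Beta-Binomial updating; a minimal sketch with invented numbers:

        # Conjugate Beta-Binomial update of a component reliability estimate
        # (the generic calculation behind such tools; numbers are invented).
        from scipy import stats

        a, b = 2.0, 1.0              # prior Beta(a, b) on reliability
        failures, demands = 3, 120   # newly acquired operating experience

        post = stats.beta(a + demands - failures, b + failures)
        print(f"posterior mean reliability: {post.mean():.4f}")
        print(f"90% credible interval: {post.ppf(0.05):.4f} - {post.ppf(0.95):.4f}")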

  12. Reliability study of an emerging fire suppression system

    DOE PAGES

    Miller, David A.; Rossati, Lyric M.; Fritz, Nathan K.; ...

    2015-11-01

    Self-contained fire extinguishers are a robust, reliable and minimally invasive means of fire suppression for gloveboxes. Plutonium gloveboxes are known to present harsh environmental conditions for polymer materials; these include radiation damage and chemical exposure, both of which tend to degrade the lifetime of engineered polymer components. The primary component of interest in self-contained fire extinguishers is the nylon 6-6 machined tube that comprises the main body of the system. Thermo-mechanical modeling and characterization of nylon 6-6 for use in plutonium glovebox applications has been carried out, and data have been generated regarding property degradation leading to poor, or reduced, engineering performance of nylon 6-6 components. In this study, nylon 6-6 tensile specimens conforming to the casing of self-contained fire extinguisher systems were exposed to hydrochloric, nitric, and sulfuric acids. This information was used to predict the performance of a load-bearing engineering component comprised of nylon 6-6 and designed to operate in a consistent manner over a specified time period. The study provides a fundamental understanding of the engineering performance of the fire suppression system and the effects of environmental degradation due to acid exposure. The data generated help identify the limitations of self-contained fire extinguishers. No critical areas of concern for plutonium glovebox applications of nylon 6-6 were identified when considering exposure to mineral acids.

  13. Breast Shape Analysis With Curvature Estimates and Principal Component Analysis for Cosmetic and Reconstructive Breast Surgery.

    PubMed

    Catanuto, Giuseppe; Taher, Wafa; Rocco, Nicola; Catalano, Francesca; Allegra, Dario; Milotta, Filippo Luigi Maria; Stanco, Filippo; Gallo, Giovanni; Nava, Maurizio Bruno

    2018-03-20

    Breast shape is usually defined using qualitative assessments (full, flat, ptotic) or estimates, such as volume or distances between reference points, that cannot describe it reliably. We quantitatively describe breast shape with two parameters derived from a statistical methodology known as principal component analysis (PCA). We created a heterogeneous dataset of breast shapes acquired with a commercial infrared 3-dimensional scanner on which PCA was performed, and plotted on a Cartesian plane the two highest values of PCA for each breast (principal components 1 and 2). The methodology was tested on a preoperative and postoperative surgical case, and test-retest evaluation was performed by two operators. The first two principal components derived from PCA are able to characterize the shape of the breasts included in the dataset. The test-retest demonstrated that different operators obtain very similar values of PCA. The system is also able to identify major changes in the preoperative and postoperative stages of a two-stage reconstruction; even minor changes were correctly detected. This methodology can reliably describe the shape of a breast, and an expert operator and a newly trained operator can reach similar results in test-retest validation. Once developed and further validated, this methodology could serve as a good tool for outcome evaluation, auditing, and benchmarking.
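
    The PCA step itself is standard: center the shape feature vectors and project onto the first two principal axes, e.g. via an SVD. The sketch below uses random stand-in data in place of the 3-D scan features.

        # PCA of shape descriptors: each row is one surface flattened to a
        # feature vector (random stand-in for the 3-D scan data).
        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 300))  # 40 shapes x 300 surface features

        Xc = X - X.mean(axis=0)                       # center the dataset
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        scores = Xc @ Vt[:2].T                        # PC1/PC2 per shape

        explained = (S ** 2 / np.sum(S ** 2))[:2]
        print("variance explained by PC1, PC2:", np.round(explained, 3))
        print("first shape lies at:", np.round(scores[0], 2))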

  14. Analysis of fatigue reliability for high temperature and high pressure multi-stage decompression control valve

    NASA Astrophysics Data System (ADS)

    Yu, Long; Xu, Juanjuan; Zhang, Lifang; Xu, Xiaogang

    2018-03-01

    Based on stress-strength interference theory, a reliability mathematical model was established for the high temperature and high pressure multi-stage decompression control valve (HMDCV), and a temperature correction coefficient was introduced to revise the material fatigue limit at high temperature. The reliability of the key dangerous components and the fatigue sensitivity curve of each component were calculated and analyzed by combining the fatigue-life analysis of the control valve with reliability theory, and the impact proportion of each component on control valve system fatigue failure was obtained. The results show that the temperature correction factor makes the theoretical reliability calculations more accurate, that the predicted life expectancy of the main pressure parts meets the technical requirements, and that the valve body and the sleeve have an obvious influence on control system reliability; the stress concentration in key parts of the control valve can be reduced during the design process by improving the structure.
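
    Under the common normal-normal assumption, the stress-strength interference model has a closed form, with the temperature correction applied to the strength parameters first; all coefficients below are invented for illustration.

        # Normal-normal stress-strength interference with a temperature-
        # corrected fatigue limit (factor and all numbers are invented).
        from math import sqrt
        from statistics import NormalDist

        mu_S, sd_S = 520.0, 40.0   # fatigue strength at room temperature, MPa
        k_T = 0.85                 # assumed temperature correction factor
        mu_s, sd_s = 310.0, 35.0   # operating stress, MPa

        mu_S_hot = k_T * mu_S      # corrected strength mean at service temperature
        beta = (mu_S_hot - mu_s) / sqrt(sd_S ** 2 + sd_s ** 2)  # reliability index
        R = NormalDist().cdf(beta)  # R = P(strength > stress)
        print(f"beta = {beta:.2f}, reliability = {R:.5f}")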

  15. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
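
    The spares argument can be made concrete with a back-of-envelope model: if a unit fails as a Poisson process, the probability that k spares cover a mission of length t is a Poisson CDF (the rate and counts below are invented).

        # Probability that a stock of spares covers a long mission, assuming
        # failures of one unit type follow a Poisson process (numbers invented).
        from math import exp, factorial

        def spares_suffice(rate_per_hr, hours, spares):
            lam = rate_per_hr * hours  # expected number of failures
            return sum(exp(-lam) * lam ** k / factorial(k) for k in range(spares + 1))

        # A 3-year mission, one failure per 10,000 h, carrying 6 spares:
        print(f"{spares_suffice(1e-4, 3 * 8760, 6):.6f}")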

  16. Issues and Methods for Assessing COTS Reliability, Maintainability, and Availability

    NASA Technical Reports Server (NTRS)

    Schneidewind, Norman F.; Nikora, Allen P.

    1998-01-01

    Many vendors produce products that are not domain specific (e.g., a network server) and have limited functionality (e.g., a mobile phone). In contrast, many customers of COTS develop systems that are domain specific (e.g., a target tracking system) and have great variability in functionality (e.g., a corporate information system). This discussion takes the viewpoint of how the customer can ensure the quality of COTS components. In evaluating the benefits and costs of using COTS, we must consider the environment in which COTS will operate. Thus we must distinguish between using a non-mission-critical application, like a spreadsheet program producing a budget, and a mission-critical application, like military strategic and tactical operations. Whereas customers will tolerate an occasional bug in the former, zero tolerance is the rule in the latter. We emphasize the latter because this is the arena with major unresolved problems in the application of COTS. Furthermore, COTS components may be embedded in the larger customer system; we refer to these as embedded systems. These components must be reliable, maintainable, and available, and must remain so within the larger system, in order for the customer to benefit from the advertised advantages of lower development and maintenance costs. Interestingly, when the claims of COTS advantages are closely examined, one finds that to a great extent these COTS components consist of hardware and office products, not mission-critical software [1]. Obviously, COTS components differ from custom components with respect to one or more of the following attributes: source, development paradigm, safety, reliability, maintainability, availability, security, and others. However, the important question is whether they should be treated differently when deciding to deploy them for operational use; we suggest the answer is no. We use reliability as an example to justify our answer. In order to demonstrate its reliability, a COTS component must pass the same reliability evaluations as custom components; otherwise the COTS components will be the weakest link in the chain of components and will determine software system reliability. The challenge is that there will be less information available for evaluating COTS components than for custom components, but this does not mean we should despair and do nothing. Actually, there is a lot we can do even in the absence of documentation on COTS components, because the customer will have information about how the COTS components are to be used in the larger system. To illustrate our approach, we consider the reliability, maintainability, and availability (RMA) of COTS components as used in larger systems. Finally, COTS suppliers might consider increasing visibility into their products to assist customers in determining the components' fitness for use in a particular application. We offer ideas on information that would be useful to customers, and on what vendors might do to provide it.

  17. Reliability approach to rotating-component design. [fatigue life and stress concentration

    NASA Technical Reports Server (NTRS)

    Kececioglu, D. B.; Lalli, V. R.

    1975-01-01

    A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample rotating mechanical component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.

  18. Hermetic diode laser transmitter module

    NASA Astrophysics Data System (ADS)

    Ollila, Jyrki; Kautio, Kari; Vahakangas, Jouko; Hannula, Tapio; Kopola, Harri K.; Oikarinen, Jorma; Sivonen, Matti

    1999-04-01

    In very demanding optoelectronic sensor applications it is necessary to encapsulate semiconductor components hermetically in metal housings to ensure reliable operation of the sensor. In this paper we report on the development work to package a laser diode transmitter module for a time-of-flight distance sensor application. The module consists of a lens, laser diode, electronic circuit and optomechanics. Specifications include high acceleration tolerance, a temperature range of -40 to +75 °C, very low gas leakage and mass-production capability. We have applied solder glasses to seal optical lenses and electrical leads hermetically into a metal case. The lens-to-case sealing is made using a special soldering-glass preform that preserves the optical quality of the lens. The metal housings are finally sealed in an inert atmosphere by welding. The assembly concept for retaining excellent optical power and tight optical-axis alignment specifications is described. The reliability of the manufactured laser modules has been extensively tested using different aging and environmental test procedures. Sealed packages meet the MIL-STD-883 requirements for gas leakage.

  19. NASA Case Sensitive Review and Audit Approach

    NASA Astrophysics Data System (ADS)

    Lee, Arthur R.; Bacus, Thomas H.; Bowersox, Alexandra M.; Newman, J. Steven

    2005-12-01

    As an Agency involved in high-risk endeavors NASA continually reassesses its commitment to engineering excellence and compliance to requirements. As a component of NASA's continual process improvement, the Office of Safety and Mission Assurance (OSMA) established the Review and Assessment Division (RAD) [1] to conduct independent audits to verify compliance with Agency requirements that impact safe and reliable operations. In implementing its responsibilities, RAD benchmarked various approaches for conducting audits, focusing on organizations that, like NASA, operate in high-risk environments - where seemingly inconsequential departures from safety, reliability, and quality requirements can have catastrophic impact to the public, NASA personnel, high-value equipment, and the environment. The approach used by the U.S. Navy Submarine Program [2] was considered the most fruitful framework for the invigorated OSMA audit processes. Additionally, the results of benchmarking activity revealed that not all audits are conducted using just one approach or even with the same objectives. This led to the concept of discrete, unique "audit cases."

  1. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    The reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design, and it often begins at the conceptual stage. As the system design develops and more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers; applying them without understanding those limitations and assumptions can produce unrealistic results. This report addresses weighting-factor and optimal reliability allocation techniques and identifies the applicability and limitations of each.
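
    One of the weighting-factor methods such reports typically cover, ARINC-style apportionment for series systems, fits in a few lines: each component receives a reliability target in proportion to its predicted share of the system failure rate (the rates and system goal below are invented).

        # ARINC-style weighting-factor allocation: apportion a series-system
        # reliability goal by predicted failure rate (numbers are invented).
        rates = {"pump": 2e-5, "valve": 5e-6, "controller": 1e-5}  # failures/h
        R_system_goal = 0.99

        total = sum(rates.values())
        for name, lam in rates.items():
            w = lam / total           # weight = share of the system failure rate
            R_i = R_system_goal ** w  # allocated target; the product meets the goal
            print(f"{name:12s} w = {w:.3f}  allocated R = {R_i:.5f}")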

  2. Reliability of Laterality Effects in a Dichotic Listening Task with Words and Syllables

    ERIC Educational Resources Information Center

    Russell, Nancy L.; Voyer, Daniel

    2004-01-01

    Large and reliable laterality effects have been found using a dichotic target detection task in a recent experiment using word stimuli pronounced with an emotional component. The present study tested the hypothesis that the magnitude and reliability of the laterality effects would increase with the removal of the emotional component and variations…

  3. Reliability Assessment for COTS Components in Space Flight Applications

    NASA Technical Reports Server (NTRS)

    Krishnan, G. S.; Mazzuchi, Thomas A.

    2001-01-01

    Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system design. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and the quantity of systems designed and developed for space applications is much smaller than that produced for commercial applications. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.

  4. Assessing patient-centered care: one approach to health disparities education.

    PubMed

    Wilkerson, LuAnn; Fung, Cha-Chi; May, Win; Elliott, Donna

    2010-05-01

    Patient-centered care has been described as one approach to cultural competency education that could reduce racial and ethnic health disparities by preparing providers to deliver care that is respectful and responsive to the preferences of each patient. In order to evaluate the effectiveness of a curriculum in teaching patient-centered care (PCC) behaviors to medical students, we drew on the work of Kleinman, Eisenberg, and Good to develop a scale that could be embedded across cases in an objective structured clinical examination (OSCE). To compare the reliability, validity, and feasibility of an embedded patient-centered care scale with the use of a single culturally challenging case in measuring students' use of PCC behaviors as part of a comprehensive OSCE. A total of 322 students from two California medical schools participated in the OSCE as beginning seniors. Cronbach's alpha was used to assess the internal consistency of each approach. Construct validity was addressed by establishing convergent and divergent validity using the cultural challenge case total score and OSCE component scores. Feasibility assessment considered cost and training needs for the standardized patients (SPs). Medical students demonstrated a moderate level of patient-centered skill (mean = 63%, SD = 11%). The PCC Scale demonstrated an acceptable level of internal consistency (alpha = 0.68) over the single case scale (alpha = 0.60). Both convergent and divergent validities were established through low to moderate correlation coefficients. The insertion of PCC items across multiple cases in a comprehensive OSCE can provide a reliable estimate of students' use of PCC behaviors without incurring extra costs associated with implementing a special cross-cultural OSCE. This approach is particularly feasible when an OSCE is already part of the standard assessment of clinical skills. Reliability may be increased with an additional investment in SP training.

  5. Design for Verification: Using Design Patterns to Build Reliable Systems

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)

    2003-01-01

    Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.

  6. Test-retest reliability of cognitive EEG

    NASA Technical Reports Server (NTRS)

    McEvoy, L. K.; Smith, M. E.; Gevins, A.

    2000-01-01

    OBJECTIVE: Task-related EEG is sensitive to changes in cognitive state produced by increased task difficulty and by transient impairment. If task-related EEG has high test-retest reliability, it could be used as part of a clinical test to assess changes in cognitive function. The aim of this study was to determine the reliability of the EEG recorded during the performance of a working memory (WM) task and a psychomotor vigilance task (PVT). METHODS: EEG was recorded while subjects rested quietly and while they performed the tasks. Within-session (test-retest interval of approximately 1 h) and between-session (test-retest interval of approximately 7 days) reliability was calculated for four EEG components: frontal midline theta at Fz, posterior theta at Pz, and slow and fast alpha at Pz. RESULTS: Task-related EEG was highly reliable within and between sessions (r > 0.9 for all components in the WM task, and r > 0.8 for all components in the PVT). Resting EEG also showed high reliability, although the magnitude of the correlation was somewhat smaller than that of the task-related EEG (r > 0.7 for all 4 components). CONCLUSIONS: These results suggest that under appropriate conditions, task-related EEG has sufficient retest reliability for use in assessing clinical changes in cognitive status.

  7. Forensic DNA expertise of incest in early period of pregnancy.

    PubMed

    Jakovski, Zlatko; Jankova, Renata; Nikolova, Ksenija; Spasevska, Liljana; Jovanovic, Rubens; Janeska, Biljana

    2011-01-01

    Proving incest from tissue obtained by abortion early in pregnancy can be a challenge. Problems include the small quantity of embryonic tissue in the products of conception, and the mixing of DNA from mother and embryo. In many cases, this amorphous material cannot be grossly segregated into maternal and fetal components. Thus, morphological discrimination requires microscopy to select relevant tissue particles from which DNA can be typed. This combination of methods is reliable and efficient. In this article, we present two cases of incest discovered by examination of products of conception. Copyright © 2010 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  8. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  9. A Methodology for the Development of a Reliability Database for an Advanced Reactor Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew

    GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of a reliability database (RDB) methodology to determine applicable reliability data for inclusion in the quantification of the PRA. The RDB method developed during this project seeks to satisfy the requirements of the Data Analysis element of the ASME/ANS Non-LWR PRA standard. The RDB methodology utilizes a relevancy test to examine reliability data and determine whether it is appropriate to include as part of the reliability database for the PRA. The relevancy test compares three component properties to establish the level of similarity to components examined as part of the PRA: the component function, the component failure modes, and the environment/boundary conditions of the component. The relevancy test is used to gauge the quality of data found in a variety of sources, such as advanced reactor-specific databases, non-advanced reactor nuclear databases, and non-nuclear databases. The RDB also establishes the integration of expert judgment or separate reliability analysis with past reliability data. This paper provides details on the RDB methodology, and includes an example application for determining the reliability of the intermediate heat exchanger of a sodium fast reactor. The example explores a variety of reliability data sources, and assesses their applicability for the PRA of interest through the use of the relevancy test.

  10. Sensitivity analysis by approximation formulas - Illustrative examples. [reliability analysis of six-component architectures

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1983-01-01

    This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.

  11. Electronic synoptic operative reporting: assessing the reliability and completeness of synoptic reports for pancreatic resection.

    PubMed

    Park, Jason; Pillarisetty, Venu G; Brennan, Murray F; Jarnagin, William R; D'Angelica, Michael I; Dematteo, Ronald P; G Coit, Daniel; Janakos, Maria; Allen, Peter J

    2010-09-01

    Electronic synoptic operative reports (E-SORs) have replaced dictated reports at many institutions, but whether E-SORs adequately document the components and findings of an operation has received limited study. This study assessed the reliability and completeness of E-SORs for pancreatic surgery developed at our institution. An attending surgeon and surgical fellow prospectively and independently completed an E-SOR after each of 112 major pancreatic resections (78 proximal, 29 distal, and 5 central) over a 10-month period (September 2008 to June 2009). Reliability was assessed by calculating the interobserver agreement between attending physician and fellow reports. Completeness was assessed by comparing E-SORs to a case-matched (surgeon and procedure) historical control of dictated reports, using a 39-item checklist developed through an internal and external query of 13 high-volume pancreatic surgeons. Interobserver agreement between attending and fellow was moderate to very good for individual categorical E-SOR items (kappa = 0.65 to 1.00, p < 0.001 for all items). Compared with dictated reports, E-SORs had significantly higher completeness checklist scores (mean 88.8 +/- 5.4 vs 59.6 +/- 9.2 [maximum possible score, 100], p < 0.01) and were available in patients' electronic records in a significantly shorter interval of time (median 0.5 vs 5.8 days from case end, p < 0.01). The mean time taken to complete E-SORs was 4.0 +/- 1.6 minutes per case. E-SORs for pancreatic surgery are reliable, complete in data collected, and rapidly available, all of which support their clinical implementation. The inherent strengths of E-SORs offer real promise of a new standard for operative reporting and health communication. Copyright 2010 American College of Surgeons. Published by Elsevier Inc. All rights reserved.

  12. First impressions: gait cues drive reliable trait judgements.

    PubMed

    Thoresen, John C; Vuong, Quoc C; Atkinson, Anthony P

    2012-09-01

    Personality trait attribution can underpin important social decisions and yet requires little effort; even a brief exposure to a photograph can generate lasting impressions. Body movement is a channel readily available to observers and allows judgements to be made when facial and body appearances are less visible; e.g., from great distances. Across three studies, we assessed the reliability of trait judgements of point-light walkers and identified motion-related visual cues driving observers' judgements. The findings confirm that observers make reliable, albeit inaccurate, trait judgements, and these were linked to a small number of motion components derived from a Principal Component Analysis of the motion data. Parametric manipulation of the motion components linearly affected trait ratings, providing strong evidence that the visual cues captured by these components drive observers' trait judgements. Subsequent analyses suggest that reliability of trait ratings was driven by impressions of emotion, attractiveness and masculinity. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Perform qualify reliability-power tests by shooting common mistakes: practical problems and standard answers per Telcordia/Bellcore requests

    NASA Astrophysics Data System (ADS)

    Yu, Zheng

    2002-08-01

    Facing the new demands of the optical fiber communications market, the performance and reliability of an optical network system depend almost entirely on the qualification of its fiber optics components. Complying with system requirements through Telcordia/Bellcore reliability and high-power testing has therefore become the key issue for fiber optics component manufacturers: qualification determines who stands out in an intensely competitive market, and the tests themselves require maintenance and optimization. The aim is the 'triple-win' outcome expected by component makers, reliability testers and system users. For those facing practical problems in testing, the following seven topics address how to avoid common mistakes and perform qualified reliability and high-power testing: qualification maintenance requirements for reliability testing; lot control in preparing for reliability testing; sample selection for reliability testing; interim measurements during reliability testing; basic reference factors relating to high-power testing; the necessity of re-qualification testing when production changes; and understanding similarity across a product family by its definitions.

  14. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.
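
    The probabilistic effect of a passed proof test can be illustrated by conditioning the strength distribution on survival of the proof load; this is a generic sketch with invented distributions, not the paper's optimization framework.

        # Effect of a passed proof test: condition strength on surviving the
        # proof load, then re-evaluate failure probability against a random
        # service load (all distributions are invented for illustration).
        import numpy as np

        rng = np.random.default_rng(3)
        strength = rng.normal(100.0, 10.0, 1_000_000)  # kN
        load = rng.normal(70.0, 8.0, 1_000_000)        # kN

        proof = 90.0
        passed = strength > proof  # units that survive the proof test

        print(f"failure prob, no proof test : {np.mean(load > strength):.5f}")
        print(f"failure prob after passing  : {np.mean(load[passed] > strength[passed]):.5f}")
        print(f"fraction scrapped by proof  : {1 - passed.mean():.3f}")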

  15. Machine Maintenance Scheduling with Reliability Engineering Method and Maintenance Value Stream Mapping

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Nasution, A. H.

    2018-02-01

    Corrective maintenance, i.e. replacing or repairing a machine component after the machine breaks down, is common practice in manufacturing companies. It forces the production process to stop: production time decreases while the maintenance team replaces or repairs the damaged component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine at a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability Engineering and Maintenance Value Stream Mapping are used as the method and tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.
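
    The scheduling decision behind such a study is often posed as an age-replacement optimization: choose the preventive interval T that minimizes expected cost per unit time under a fitted life model. A sketch under an assumed Weibull model with invented costs:

        # Age-replacement policy under a Weibull life model: pick the
        # preventive interval T minimizing cost per unit time (all invented).
        import numpy as np

        shape, scale = 2.5, 4000.0  # Weibull wear-out model, hours
        c_p, c_f = 500.0, 5000.0    # preventive vs. failure replacement cost

        def cost_rate(T, n=20_000):
            dt = T / n
            t = (np.arange(n) + 0.5) * dt
            R = np.exp(-(t / scale) ** shape)  # survival function
            expected_cycle = np.sum(R) * dt    # E[min(life, T)], midpoint rule
            R_T = np.exp(-(T / scale) ** shape)
            return (c_p * R_T + c_f * (1 - R_T)) / expected_cycle

        grid = np.linspace(200.0, 8000.0, 400)
        best = min(grid, key=cost_rate)
        print(f"optimal interval = {best:.0f} h, cost rate = {cost_rate(best):.3f}/h")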

  16. Interrater reliability assessment using the Test of Gross Motor Development-2.

    PubMed

    Barnett, Lisa M; Minto, Christine; Lander, Natalie; Hardy, Louise L

    2014-11-01

    The aim was to examine the interrater reliability of the object control subtest from the Test of Gross Motor Development-2, by live observation in a school field setting. A cross-sectional reliability study was conducted. Raters were assessed on their ability to agree on (1) the raw total for the six object control skills; (2) each skill performance; and (3) the skill components. Agreement for the object control subtest and the individual skills was assessed by an intraclass correlation (ICC), and a kappa statistic assessed skill component agreement. A total of 37 children (65% girls) aged 4-8 years (M = 6.2, SD = 0.8) were assessed in six skills by two raters, equating to 222 skill tests. Interrater reliability was excellent for the object control subtest (ICC = 0.93) and, for individual skills, highest for the dribble (ICC = 0.94), followed by the strike (ICC = 0.85), overhand throw (ICC = 0.84), underhand roll (ICC = 0.82), kick (ICC = 0.80) and catch (ICC = 0.71). The strike and the throw had more components with less agreement. Even though the overall subtest score and individual skill agreement were good, some skill components had lower agreement, suggesting these may be more problematic to assess. Some skill components may need to be specified differently in order to improve component reliability.
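
    The component-level agreement statistic used here is Cohen's kappa, which is short enough to compute directly for two raters' binary codes (the ratings below are invented):

        # Cohen's kappa for two raters' binary skill-component codes
        # (the ratings below are invented).
        def cohens_kappa(r1, r2):
            n = len(r1)
            p_obs = sum(a == b for a, b in zip(r1, r2)) / n
            p1, p2 = sum(r1) / n, sum(r2) / n
            p_exp = p1 * p2 + (1 - p1) * (1 - p2)  # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
        print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")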

  17. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system components, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.

  18. A novel standardized algorithm using SPECT/CT evaluating unhappy patients after unicondylar knee arthroplasty--a combined analysis of tracer uptake distribution and component position.

    PubMed

    Suter, Basil; Testa, Enrique; Stämpfli, Patrick; Konala, Praveen; Rasch, Helmut; Friederich, Niklaus F; Hirschmann, Michael T

    2015-03-20

    The introduction of a standardized SPECT/CT algorithm including a localization scheme, which allows accurate identification of specific patterns and thresholds of SPECT/CT tracer uptake, could lead to a better understanding of the bone remodeling and specific failure modes of unicondylar knee arthroplasty (UKA). The purpose of the present study was to introduce a novel standardized SPECT/CT algorithm for patients after UKA and evaluate its clinical applicability, usefulness and inter- and intra-observer reliability. Tc-HDP-SPECT/CT images of consecutive patients (median age 65, range 48-84 years) with 21 knees after UKA were prospectively evaluated. The tracer activity on SPECT/CT was localized using a specific standardized UKA localization scheme. For tracer uptake analysis (intensity and anatomical distribution pattern) a 3D volumetric quantification method was used. The maximum intensity values were recorded for each anatomical area. In addition, ratios between the respective value in the measured area and the background tracer activity were calculated. The femoral and tibial component position (varus-valgus, flexion-extension, internal and external rotation) was determined in 3D-CT. The inter- and intraobserver reliability of the localization scheme, grading of the tracer activity and component measurements were determined by calculating the intraclass correlation coefficients (ICC). The localization scheme, grading of the tracer activity and component measurements showed high inter- and intra-observer reliabilities for all regions (tibia, femur and patella). For measurement of component position there was strong agreement between the readings of the two observers; the ICC for the orientation of the femoral component was 0.73-1.00 (intra-observer reliability) and 0.91-1.00 (inter-observer reliability). The ICC for the orientation of the tibial component was 0.75-1.00 (intra-observer reliability) and 0.77-1.00 (inter-observer reliability). The SPECT/CT algorithm presented combining the mechanical information on UKA component position, alignment and metabolic data is highly reliable and proved to be a valuable, consistent and useful tool for analysing postoperative knees after UKA. Using this standardized approach in clinical studies might be helpful in establishing the diagnosis in patients with pain after UKA.

  19. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) system. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
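
    With identical, independent components, the k-of-n structure has a standard binomial form; the sketch below (component reliability invented) also checks the parallel and series special cases the abstract mentions.

        # k-out-of-n:G reliability: the subsystem works if at least k of its
        # n identical components work (component reliability is invented).
        from math import comb

        def k_out_of_n(n, k, p):
            """P(at least k of n components function)."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        p = 0.95
        print(k_out_of_n(5, 1, p), 1 - (1 - p) ** 5)  # k = 1 reduces to parallel
        print(k_out_of_n(5, 5, p), p ** 5)            # k = n reduces to series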

  20. Lead (Pb)-Free Solder Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VIANCO,PAUL T.

    2000-08-15

    Legislative and marketing forces both abroad and in the US are causing the electronics industry to consider the use of Pb-free solders in place of traditional Sn-Pb alloys. Previous case studies have demonstrated the satisfactory manufacturability and reliability of several Pb-free compositions for printed circuit board applications. Those data, together with the results of fundamental studies on Pb-free solder materials, have indicated the general feasibility of their use in the broader range of present-day, electrical and electronic components.

  1. Objective structured clinical examination for pharmacy students in Qatar: cultural and contextual barriers to assessment.

    PubMed

    Wilby, K J; Black, E K; Austin, Z; Mukhalalati, B; Aboulsoud, S; Khalifa, S I

    2016-07-10

    This study aimed to evaluate the feasibility and psychometric defensibility of implementing a comprehensive objective structured clinical examination (OSCE) covering the complete pharmacy programme for pharmacy students in a Middle Eastern context, and to identify facilitators and barriers to implementation within new settings. Eight cases were developed, validated, and had standards set according to a blueprint, and were assessed with graduating pharmacy students. Assessor reliability was evaluated using intraclass correlation coefficients (ICCs). Concurrent validity was evaluated by comparing OSCE results to professional skills course grades. Field notes were maintained to generate recommendations for implementation in other contexts. The examination pass mark was 424 points out of 700 (60.6%). All 23 participants passed. Mean performance was 74.6%. Low to moderate inter-rater reliability was obtained for analytical and global components (average ICC 0.77 and 0.48, respectively). In conclusion, the OSCE was feasible in Qatar, but context-related validity and reliability concerns must be addressed prior to future iterations in Qatar and elsewhere.

  2. Synthetic incoherent feedforward circuits show adaptation to the amount of their genetic template

    PubMed Central

    Bleris, Leonidas; Xie, Zhen; Glass, David; Adadey, Asa; Sontag, Eduardo; Benenson, Yaakov

    2011-01-01

    Natural and synthetic biological networks must function reliably in the face of fluctuating stoichiometry of their molecular components. These fluctuations are caused in part by changes in relative expression efficiency and the DNA template amount of the network-coding genes. Gene product levels could potentially be decoupled from these changes via built-in adaptation mechanisms, thereby boosting network reliability. Here, we show that a mechanism based on an incoherent feedforward motif enables adaptive gene expression in mammalian cells. We modeled, synthesized, and tested transcriptional and post-transcriptional incoherent loops and found that in all cases the gene product adapts to changes in DNA template abundance. We also observed that the post-transcriptional form results in superior adaptation behavior, higher absolute expression levels, and lower intrinsic fluctuations. Our results support a previously hypothesized endogenous role in gene dosage compensation for such motifs and suggest that their incorporation in synthetic networks will improve their robustness and reliability. PMID:21811230
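
    The adaptation mechanism lends itself to a toy ordinary-differential-equation sketch (this is not the authors' circuit model): the template amount D drives both a repressor Y and the output Z, and because Y itself scales with D, the steady-state output cancels the template dependence. All rate constants below are arbitrary illustrative choices.

      import numpy as np
      from scipy.integrate import odeint

      def iffl(state, t, D, a=1.0, b=0.5, d=0.1, g=0.1):
          Y, Z = state
          dY = b * D - d * Y                     # repressor scales with template D
          dZ = a * D / max(Y, 1e-9) - g * Z      # output driven by D, repressed by Y
          return [dY, dZ]

      t = np.linspace(0, 200, 2000)
      for D in (1.0, 5.0, 25.0):                 # 25-fold range of template amount
          Y, Z = odeint(iffl, [0.1, 0.0], t, args=(D,)).T
          print(f"D = {D:5.1f}  steady-state Z = {Z[-1]:.3f}")   # nearly constant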

  3. Effect of Individual Component Life Distribution on Engine Life Prediction

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.

    2003-01-01

    The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each component's cumulative life distribution as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the lowest-lived component in the system at the same reliability (probability of survival). Where Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L(sub 0.1) life prediction. However, at a probability of survival of 95 percent (L(sub 5) life), life decreased with increasing Weibull slope.
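
    The system-versus-component relationship is easy to make concrete. The sketch below combines a few hypothetical Weibull components in a weakest-link (series) system and solves for the life at a specified probability of survival; the slopes and characteristic lives are invented for illustration and are not the engine's parameters.

      import numpy as np
      from scipy.optimize import brentq

      # (Weibull slope beta, characteristic life eta in hours) -- assumed values
      components = [(1.5, 8000.0), (2.0, 12000.0), (3.0, 15000.0)]

      def system_reliability(t):
          # Weakest-link (series) system of independent Weibull components
          return np.exp(-sum((t / eta) ** beta for beta, eta in components))

      def l_life(survival):
          # Time at which system reliability falls to `survival` (0.95 -> L5)
          return brentq(lambda t: system_reliability(t) - survival, 1e-6, 1e6)

      print(f"L5   life: {l_life(0.95):10.1f} h")
      print(f"L0.1 life: {l_life(0.999):10.1f} h")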

  4. Development and validation of a questionnaire to evaluate patient satisfaction with diabetes disease management.

    PubMed

    Paddock, L E; Veloski, J; Chatterton, M L; Gevirtz, F O; Nash, D B

    2000-07-01

    To develop a reliable and valid questionnaire to measure patient satisfaction with diabetes disease management programs. Questions related to structure, process, and outcomes were categorized into 14 domains defining the essential elements of diabetes disease management. Health professionals confirmed the content validity. Face validity was established by a patient focus group. The questionnaire was mailed to 711 patients with diabetes who participated in a disease management program. To reduce the number of questionnaire items, a principal components analysis was performed using a varimax rotation. The Scree test was used to select significant components. To further assess reliability and validity, Cronbach's alpha and product-moment correlations were calculated for components having ≥3 items with loadings >0.50. The validated 73-item mailed satisfaction survey had a 34.1% response rate. Principal components analysis yielded 13 components with eigenvalues >1.0. The Scree test proposed a 6-component solution (39 items), which explained 59% of the total variation. Internal consistency reliabilities computed for the first 6 components (alpha = 0.79-0.95) were acceptable. The final questionnaire, the Diabetes Management Evaluation Tool (DMET), was designed to assess patient satisfaction with diabetes disease management programs. Although more extensive testing of the questionnaire is appropriate, preliminary reliability and validity of the DMET have been demonstrated.
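
    Both reliability computations named above are compact enough to sketch directly. The fragment below applies the eigenvalue (Kaiser/Scree) screen and Cronbach's alpha to a synthetic item matrix, since the DMET data are not public; the data, the item grouping and the cut-offs are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic stand-in for survey responses: 200 respondents, 10 items,
      # with items 0-4 driven by one latent satisfaction factor
      latent = rng.normal(size=(200, 1))
      X = np.hstack([latent + 0.6 * rng.normal(size=(200, 5)),
                     rng.normal(size=(200, 5))])

      # Scree/Kaiser screen: eigenvalues of the item correlation matrix
      eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
      print("eigenvalues:", np.round(eigvals, 2))

      def cronbach_alpha(items):
          # Internal-consistency reliability of a block of items (columns)
          k = items.shape[1]
          return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                                / items.sum(axis=1).var(ddof=1))

      print("alpha, items 0-4:", round(cronbach_alpha(X[:, :5]), 2))  # high
      print("alpha, items 5-9:", round(cronbach_alpha(X[:, 5:]), 2))  # near zero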

  5. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program, which has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  6. Psychometric properties of the Spanish version of the Cocaine Selective Severity Assessment to evaluate cocaine withdrawal in treatment-seeking individuals.

    PubMed

    Pérez de los Cobos, José; Trujols, Joan; Siñol, Núria; Vasconcelos e Rego, Lisiane; Iraurgi, Ioseba; Batlle, Francesca

    2014-09-01

    Reliable and valid assessment of cocaine withdrawal is relevant for treating cocaine-dependent patients. This study examined the psychometric properties of the Spanish version of the Cocaine Selective Severity Assessment (CSSA), an instrument that measures cocaine withdrawal. Participants were 170 cocaine-dependent inpatients receiving detoxification treatment. Principal component analysis revealed a 4-factor structure for the CSSA that included the following components: 'Cocaine Craving and Psychological Distress', 'Lethargy', 'Carbohydrate Craving and Irritability', and 'Somatic Depressive Symptoms'. These 4 components accounted for 56.0% of total variance. Internal reliability for these components ranged from good to unacceptable (Cronbach's alpha: 0.87, 0.65, 0.55, and 0.22, respectively). All components except Somatic Depressive Symptoms presented concurrent validity with cocaine use. In summary, while some properties of the Spanish version of the CSSA are satisfactory, such as interpretability of the factor structure and test-retest reliability, other properties, such as the internal reliability and concurrent validity of some factors, are inadequate. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Condition monitoring of distributed systems using two-stage Bayesian inference data fusion

    NASA Astrophysics Data System (ADS)

    Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł

    2017-03-01

    In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. Consequently, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery, is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions, are used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, thereby improving the reliability of the health assessment of each individual element, as well as of the entire piece of machinery.
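
    The two-stage idea can be sketched numerically. In the hypothetical fragment below (not the authors' implementation), each component first fuses its own sensor evidence through a Bayes update over a binary health state, and a simple noisy-OR then combines the component posteriors into a machine-level fault probability; every prior and likelihood is invented.

      import numpy as np

      def fuse(prior, likelihoods):
          # Stage 1: naive-Bayes fusion of independent sensor evidence;
          # each likelihood row is [P(obs | healthy), P(obs | faulty)]
          post = prior * np.prod(likelihoods, axis=0)
          return post / post.sum()

      prior = np.array([0.95, 0.05])                  # [healthy, faulty]
      gearbox = fuse(prior, np.array([[0.4, 0.9],     # vibration channel
                                      [0.6, 0.8]]))   # temperature channel
      motor = fuse(prior, np.array([[0.9, 0.3]]))     # current signature

      # Stage 2: global fusion across components (noisy-OR simplification)
      machine_fault = 1 - (1 - gearbox[1]) * (1 - motor[1])
      print(f"P(gearbox faulty) = {gearbox[1]:.3f}")
      print(f"P(motor faulty)   = {motor[1]:.3f}")
      print(f"P(machine faulty) = {machine_fault:.3f}")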

  8. FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2017-02-01

    This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices at the load points, for example failure rate, interruption duration, and interruption duration per year. A component-improvement-potential measure is used for FOR allocation: the component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method, in which one component is selected for FOR allocation and, in the next iteration, another component is selected based on the magnitude of the CIP. The developed algorithm is implemented on a sample radial distribution system.
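
    The monovariable selection loop can be illustrated with a small greedy sketch. The sections, load-point paths, failure rates, cap and improvement step below are all invented, and the "CIP" is reduced here to the constraint violation removed by one hypothetical improvement step.

      sections = {"S1": 0.20, "S2": 0.15, "S3": 0.25}   # failures per year
      paths = {"LP1": ["S1"], "LP2": ["S1", "S2"], "LP3": ["S1", "S2", "S3"]}
      limit, step = 0.35, 0.05                          # cap per load point

      def violation():
          # Total failure-rate excess over the cap, summed over load points
          return sum(max(0.0, sum(sections[s] for s in p) - limit)
                     for p in paths.values())

      def cip(sec, base):
          # Violation removed if this section were improved by one step
          sections[sec] -= step
          gain = base - violation()
          sections[sec] += step
          return gain

      while violation() > 1e-9:
          base = violation()
          best = max((s for s in sections if sections[s] >= step),
                     key=lambda s: cip(s, base))
          sections[best] -= step
          print(f"improve {best}  ->  total violation {violation():.2f}")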

  9. The strategic research agenda of the Technology Platform Photonics21: European component industry for broadband communications and the FP 7

    NASA Astrophysics Data System (ADS)

    Thylén, Lars

    2006-07-01

    The design and manufacture of components and systems underpin the European and indeed worldwide photonics industry. Optical materials and photonic components serve as the basis for systems building at different levels of complexity. In most cases, they perform a key function and dictate the performance of these systems. New products and processes will generate economic activity for the European photonics industry into the 21st century. However, progress will rely on Europe's ability to develop new and better materials, components and systems. To achieve success, photonic components and systems must:
    • be reliable and inexpensive
    • be generic and adaptable
    • offer superior functionality
    • be innovative and protected by Intellectual Property
    • be aligned to market opportunities
    The challenge in the short, medium and long term is to put a coordinating framework in place which will make European activity in this technology area competitive with the US and Asia. In the short term, the aim should be to enable the vibrant and profitable European photonics industry to further develop its ability to commercialize advances in photonics-related technologies. In the medium and longer term, the objective must be to place renewed emphasis on materials research and on the design and manufacture of key components and systems, to form the critical link between scientific endeavour and commercial success. All these general issues are highly relevant for the component-intensive broadband communications industry. Also relevant for this development is the convergence of data and telecom, where the low cost of datacom meets the high reliability requirements of telecom. The text below is to a degree taken from the Strategic Research Agenda of the Technology Platform Photonics21 [1], as this contains a concerted effort to work out a strategy for the EU in the area of photonic components and systems.

  10. Big data analytics for the Future Circular Collider reliability and availability studies

    NASA Astrophysics Data System (ADS)

    Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter

    2017-10-01

    Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.

  11. KI-67 heterogeneity in well differentiated gastro-entero-pancreatic neuroendocrine tumors: when is biopsy reliable for grade assessment?

    PubMed

    Grillo, Federica; Valle, Luca; Ferone, Diego; Albertelli, Manuela; Brisigotti, Maria Pia; Cittadini, Giuseppe; Vanoli, Alessandro; Fiocca, Roberto; Mastracci, Luca

    2017-09-01

    Ki-67 heterogeneity can impact gastroenteropancreatic neuroendocrine tumor grade assignment, especially when tissue is scarce. This work is aimed at devising adequacy criteria for grade assessment in biopsy specimens. To analyze the impact of biopsy size on reliability, 360 virtual biopsies of different thicknesses and lengths were constructed. Furthermore, to estimate the mean amount of non-neoplastic tissue present in biopsies, 28 real biopsies were collected, the non-neoplastic components (fibrosis and inflammation) quantified, and the effective area of neoplastic tissue calculated for each biopsy. Heterogeneity of Ki-67 distribution, G2 tumors and biopsy size all play an important role in reducing the reliability of biopsy samples in Ki-67-based grade assignment. In particular, in G2 cases 59.9% of virtual biopsies downgraded the tumor, and the smaller the biopsy, the more frequently downgrading occurred. In real biopsies the presence of non-neoplastic tissue reduced the available total area by a mean of 20%. By coupling the results from these two different approaches we show that both biopsy size and the non-neoplastic component must be taken into account for biopsy adequacy. In particular, we can speculate that if the minimum biopsy area necessary to confidently (80% concordance) grade gastro-entero-pancreatic neuroendocrine tumors on virtual biopsies ranges between 15 and 30 mm², and if real biopsies are on average composed of only 80% neoplastic tissue, then biopsies with a surface area not <12 mm² should be performed; using 18G needles, this corresponds to a minimum total length of 15 mm.

  12. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created, including a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for generating legitimate module combinations based on mission specifications and selecting the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots as well as about the mission tasks. In the research for this innovation, sample robot missions were examined, and the performance of robot teams with different numbers of robots and different numbers of spare components was compared. Data that a mission designer would need were factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or whether mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, the results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
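
    One way to make the redundancy-versus-repairability comparison concrete is a spares model in which each module type fails as an independent Poisson process and the mission survives as long as no module type exhausts its spares. The sketch below is a simplified stand-in for the methodology described above, with invented failure rates and mission length.

      from math import exp, factorial

      def poisson_cdf(k, mu):
          return sum(mu**i * exp(-mu) / factorial(i) for i in range(k + 1))

      def mission_reliability(modules, hours):
          # Mission succeeds if no module type sees more failures than it
          # has spares; each module type fails as a Poisson process.
          r = 1.0
          for rate, spares in modules:
              r *= poisson_cdf(spares, rate * hours)
          return r

      # Illustrative team: (failures/hour, spares) for drive, arm, computer
      no_spares = [(1e-4, 0), (5e-5, 0), (2e-5, 0)]
      with_spares = [(1e-4, 1), (5e-5, 1), (2e-5, 0)]
      print(f"no spares:   {mission_reliability(no_spares, 2000):.4f}")
      print(f"with spares: {mission_reliability(with_spares, 2000):.4f}")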

  13. Fracture of the proximal tibia after revision total knee arthroplasty with an extensor mechanism allograft.

    PubMed

    Klein, Gregg R; Levine, Harlan B; Sporer, Scott M; Hartzband, Mark A

    2013-02-01

    Extensor mechanism reconstruction with an extensor mechanism allograft (EMA) remains one of the most reliable methods for treating the extensor mechanism deficient total knee arthroplasty. We report 3 patients who were treated with an EMA who sustained a proximal tibial shaft fracture. In all 3 cases, a short tibial component was present that ended close to the level of the distal extent of the bone block. When performing an EMA, it is important to recognize that the tibial bone block creates a stress riser and revision to a long-stemmed tibial component should be strongly considered to bypass this point to minimize the risk of fracture. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Reliability analysis of laminated CMC components through shell subelement techniques

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Gyekenyesi, John P.

    1992-01-01

    An updated version of the integrated design program Composite Ceramics Analysis and Reliability Evaluation of Structures (C/CARES) was developed for the reliability evaluation of ceramic matrix composites (CMC) laminated shell components. The algorithm is now split into two modules: a finite-element data interface program and a reliability evaluation algorithm. More flexibility is achieved, allowing for easy implementation with various finite-element programs. The interface program creates a neutral data base which is then read by the reliability module. This neutral data base concept allows easy data transfer between different computer systems. The new interface program from the finite-element code Matrix Automated Reduction and Coupling (MARC) also includes the option of using hybrid laminates (a combination of plies of different materials or different layups) and allows for variations in temperature fields throughout the component. In the current version of C/CARES, a subelement technique was implemented, enabling stress gradients within an element to be taken into account. The noninteractive reliability function is now evaluated at each Gaussian integration point instead of using averaging techniques. As a result of the increased number of stress evaluation points, considerable improvements in the accuracy of reliability analyses were realized.

  15. Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging

    NASA Astrophysics Data System (ADS)

    Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.

    2013-08-01

    The implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread to the civilian market. A vast number of low-cost open-source components and systems is available for implementation on UAVs. The use of low-cost hobby and open-source components considerably decreases UAV price, though in some cases it compromises reliability. At the Space Science and Technology Institute (SSTI), in collaboration with Vilnius Gediminas Technical University (VGTU), research has been performed on the construction and implementation of small UAVs composed of low-cost open-source components (and own developments). The most obvious and simple application of such UAVs is orthophoto imaging, with data download and processing after the flight. The construction and implementation of the UAVs, flight experience, data processing and data use are covered further in the paper and presentation.

  16. Simultaneous determination of rifampicin, isoniazid and pyrazinamide in tablet preparations by multivariate spectrophotometric calibration.

    PubMed

    Goicoechea, H C; Olivieri, A C

    1999-08-01

    The use of multivariate spectrophotometric calibration is presented for the simultaneous determination of the active components of tablets used in the treatment of pulmonary tuberculosis. The resolution of ternary mixtures of rifampicin, isoniazid and pyrazinamide has been accomplished by using partial least squares (PLS-1) regression analysis. Although the components show an important degree of spectral overlap, they have been simultaneously determined with high accuracy and precision, rapidly and with no need of nonaqueous solvents for dissolving the samples. No interference has been observed from the tablet excipients. A comparison is presented with the related multivariate method of classical least squares (CLS) analysis, which is shown to yield less reliable results due to the severe spectral overlap among the studied compounds. This is highlighted in the case of isoniazid, due to the small absorbances measured for this component.
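
    PLS-1 calibration of overlapping spectra is easy to demonstrate. The fragment below builds synthetic Gaussian "pure component" spectra, mixes them according to Beer's law with noise, and calibrates a PLS-1 model for one analyte; every spectrum, concentration and parameter is fabricated, and scikit-learn's PLSRegression stands in for the authors' software.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      wl = np.linspace(0, 1, 120)                    # wavelength axis
      # Three heavily overlapping Gaussian "pure component" spectra
      pure = np.stack([np.exp(-((wl - c) / 0.15) ** 2)
                       for c in (0.40, 0.50, 0.60)])

      C_train = rng.uniform(0.1, 1.0, size=(40, 3))  # known concentrations
      A_train = C_train @ pure + rng.normal(0, 0.005, (40, 120))  # Beer's law

      pls = PLSRegression(n_components=5)            # PLS-1: one analyte at a time
      pls.fit(A_train, C_train[:, 0])                # calibrate for component 1

      C_test = np.array([[0.3, 0.7, 0.5]])
      A_test = C_test @ pure + rng.normal(0, 0.005, (1, 120))
      print("predicted c1:", pls.predict(A_test).ravel()[0])   # close to 0.3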

  17. A 3-Component Mixture of Rayleigh Distributions: Properties and Estimation in Bayesian Framework

    PubMed Central

    Aslam, Muhammad; Tahir, Muhammad; Hussain, Zawar; Al-Zahrani, Bander

    2015-01-01

    To study the lifetimes of certain engineering processes, a lifetime model which can accommodate the nature of such processes is desired. Mixture models of underlying lifetime distributions are intuitively more appropriate and appealing for modeling the heterogeneous nature of a process than simple models. This paper studies a 3-component mixture of Rayleigh distributions in a Bayesian perspective. A censored sampling environment is considered due to its popularity in reliability theory and survival analysis. The expressions for the Bayes estimators and their posterior risks are derived under different scenarios. For the case that no or little prior information is available, elicitation of hyperparameters is given. To examine, numerically, the performance of the Bayes estimators using non-informative and informative priors under different loss functions, we have simulated their statistical properties for different sample sizes and test termination times. In addition, to highlight the practical significance, an illustrative example based on real-life engineering data is also given. PMID:25993475

  18. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, such a situation rarely occurs. Therefore, in the present paper, a new algorithm is introduced to construct the membership function and non-membership function of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership function and non-membership function of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership functions and non-membership functions of the fuzzy reliability of a series system and a parallel system are constructed. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Test-retest reliability of infant event related potentials evoked by faces.

    PubMed

    Munsters, N M; van Ravenswaaij, H; van den Boomen, C; Kemner, C

    2017-04-05

    Reliable measures are required to draw meaningful conclusions regarding developmental changes in longitudinal studies. Little is known, however, about the test-retest reliability of face-sensitive event related potentials (ERPs), a frequently used neural measure in infants. The aim of the current study was to investigate the test-retest reliability of ERPs typically evoked by faces in 9-10-month-old infants. The infants (N=31) were presented with neutral, fearful and happy faces that contained only the lower or higher spatial frequency information. They were tested twice within two weeks. The present results show that the test-retest reliability of the face-sensitive ERP components is moderate (P400 and Nc) to substantial (N290). However, there is low test-retest reliability for the effects of the specific experimental manipulations (i.e. emotion and spatial frequency) on the face-sensitive ERPs. To conclude, in infants the face-sensitive ERP components (i.e. N290, P400 and Nc) show adequate test-retest reliability, but the effects of emotion and spatial frequency on these components do not. We propose that further research focus on elements that might increase test-retest reliability, as adequate test-retest reliability is necessary to draw meaningful conclusions on individual developmental trajectories of the face-sensitive ERPs in infants. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Reliability Assessment Approach for Stirling Convertors and Generators

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy

    2004-01-01

    Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion of some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability is also discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.

  1. CARES/Life Software for Designing More Reliable Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple-geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and slow crack growth (SCG, fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.

  2. Characteristics and Implications of Diagnostic Justification Scores Based on the New Patient Note Format of the USMLE Step 2 CS Exam.

    PubMed

    Yudkowsky, Rachel; Park, Yoon Soo; Hyderi, Abbas; Bordage, Georges

    2015-11-01

    To determine the psychometric characteristics of diagnostic justification scores based on the patient note format of the United States Medical Licensing Examination Step 2 Clinical Skills exam, which requires students to document history and physical findings, differential diagnoses, diagnostic justification, and plan for immediate workup. End-of-third-year medical students at one institution wrote notes for five standardized patient cases in May 2013 (n = 180) and 2014 (n = 177). Each case was scored using a four-point rubric to rate each of the four note components. Descriptive statistics and item analyses were computed, and a generalizability study was done. Across cases, 10% to 48% of students provided no diagnostic justification or had several missing or incorrect links between history and physical findings and diagnoses. The average intercase correlation for justification scores ranged from 0.06 to 0.16; the internal consistency reliability of justification scores (coefficient alpha across cases) was 0.38. Overall, justification scores had the highest mean item discrimination across cases. The generalizability study showed that the person-case interaction (12%) and task-case interaction (13%) had the largest variance components, indicating substantial case specificity. The diagnostic justification task provides unique information about student achievement and curricular gaps. Students struggled to correctly justify their diagnoses; performance was highly case specific. Diagnostic justification was the most discriminating element of the patient note and had the greatest variability in student performance across cases. The curriculum should provide a wide range of clinical cases and emphasize recognition and interpretation of clinically discriminating findings to promote the development of clinical reasoning skills.

  3. What is the longitudinal magneto-optical Kerr effect?

    NASA Astrophysics Data System (ADS)

    Ander Arregi, Jon; Riego, Patricia; Berger, Andreas

    2017-01-01

    We explore the commonly used classification scheme for the magneto-optical Kerr effect (MOKE), which essentially utilizes a dual definition based simultaneously on the Cartesian coordinate components of the magnetization vector with respect to the plane-of-incidence reference frame and on specific elements of the reflection matrix, which describes light reflection from a ferromagnetic surface. We find that an unambiguous correspondence between reflection matrix elements and magnetization components is valid only in special cases, while in more general cases it leads to inconsistencies due to an intermixing of the presumed separate effects of longitudinal, transverse and polar MOKE. As an example, we investigate in this work, both theoretically and experimentally, a material that possesses anisotropic magneto-optical properties in accordance with its crystal symmetry. The derived equations, which specifically predict a so-far unknown polarization effect for the transverse magnetization component, are confirmed by detailed experiments on epitaxial hcp Co films. The results indicate that magneto-optical anisotropy causes significant deviations from the commonly employed MOKE data interpretation. Our work addresses the associated anomalies, provides a suitable analysis route for reliable MOKE magnetometry procedures, and proposes a revised MOKE terminology scheme.

  4. Nonlinear dynamic simulation of single- and multi-spool core engines

    NASA Technical Reports Server (NTRS)

    Schobeiri, T.; Lippke, C.; Abouelkheir, M.

    1993-01-01

    In this paper a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations ranging from single-spool thrust generation to multi-spool thrust/power generation engines under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operating with a prescribed fuel schedule, to extreme load changes, to generator and turbine shut down.

  5. Representing Geospatial Environment Observation Capability Information: A Case Study of Managing Flood Monitoring Sensors in the Jinsha River Basin

    PubMed Central

    Hu, Chuli; Guan, Qingfeng; Li, Jie; Wang, Ke; Chen, Nengcheng

    2016-01-01

    Sensor inquirers cannot understand comprehensive or accurate observation capability information because current observation capability modeling does not consider the union of multiple sensors nor the effect of geospatial environmental features on the observation capability of sensors. These limitations result in a failure to discover credible sensors or plan for their collaboration for environmental monitoring. The Geospatial Environmental Observation Capability (GEOC) is proposed in this study and can be used as an information basis for the reliable discovery and collaborative planning of multiple environmental sensors. A field-based GEOC (GEOCF) information representation model is built. Quintuple GEOCF feature components and two GEOCF operations are formulated based on the geospatial field conceptual framework. The proposed GEOCF markup language is used to formalize the proposed GEOCF. A prototype system called GEOCapabilityManager is developed, and a case study is conducted for flood observation in the lower reaches of the Jinsha River Basin. The applicability of the GEOCF is verified through the reliable discovery of flood monitoring sensors and planning for the collaboration of these sensors. PMID:27999247

  7. Scale for positive aspects of caregiving experience: development, reliability, and factor structure.

    PubMed

    Kate, N; Grover, S; Kulhara, P; Nehra, R

    2012-06-01

    OBJECTIVE. To develop an instrument (Scale for Positive Aspects of Caregiving Experience [SPACE]) that evaluates positive caregiving experience and assess its psychometric properties. METHODS. Available scales which assess some aspects of positive caregiving experience were reviewed and a 50-item questionnaire with a 5-point rating was constructed. In all, 203 primary caregivers of patients with severe mental disorders were asked to complete the questionnaire. Internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity were evaluated. Principal component factor analysis was run to assess the factorial validity of the scale. RESULTS. The scale developed as part of the study was found to have good internal consistency, test-retest reliability, cross-language reliability, split-half reliability, and face validity. Principal component factor analysis yielded a 4-factor structure, which also had good test-retest reliability and cross-language reliability. There was a strong correlation between the 4 factors obtained. CONCLUSION. The SPACE developed as part of this study has good psychometric properties.

  8. Transmission overhaul estimates for partial and full replacement at repair

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1991-01-01

    Timely transmission overhauls raise in-flight service reliability above the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predicting a transmission's maintenance needs at the design stage should enable the development of more cost-effective and reliable transmissions in the future. The frequency of overhaul is estimated, along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two-parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules which repair the transmission either with a complete system replacement or by replacing only the failed components. An example illustrates the methods.

  9. Psychometric evaluation of the Persian version of the Templer's Death Anxiety Scale in cancer patients.

    PubMed

    Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Bahrami, Nasim; Sharif, Saeed Pahlevan; Sharif Nia, Hamid

    2016-10-01

    In this study, 398 Iranian cancer patients completed the 15-item Templer's Death Anxiety Scale (TDAS). Tests of internal consistency, principal components analysis, and confirmatory factor analysis were conducted to assess the internal consistency and factorial validity of the Persian TDAS. The construct reliability statistic and average variance extracted were also calculated to measure construct reliability, convergent validity, and discriminant validity. Principal components analysis indicated a 3-component solution, which was generally supported in the confirmatory analysis. However, acceptable cutoffs for construct reliability, convergent validity, and discriminant validity were not fulfilled for the three subscales that were derived from the principal component analysis. This study demonstrated both the advantages and potential limitations of using the TDAS with Persian-speaking cancer patients.

  10. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance (¹H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.
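
    The reproducibility idea can be sketched generically. Below, scikit-learn's NMF stands in for MCR-ALS, the factorization is deliberately run with one component too many from many random starts, and a component counts as "reliable" when near-duplicates of it keep reappearing across runs; the similarity threshold and the counting rule are crude stand-ins for the paper's clustering step, and all data are synthetic.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      S = rng.random((3, 80))                    # 3 true non-negative profiles
      X = rng.random((30, 3)) @ S                # 30 synthetic mixtures

      # Re-run the factorization from many random starts, one component too
      # many, and pool every extracted (normalized) component
      pool = []
      for seed in range(20):
          model = NMF(n_components=4, init="random", random_state=seed,
                      max_iter=1000)
          model.fit(X)
          pool.extend(model.components_)
      pool = np.array([c / (np.linalg.norm(c) + 1e-12) for c in pool])

      # A component is "reliable" if close copies of it recur across runs
      similarity = pool @ pool.T                 # cosine similarities
      recurrence = (similarity > 0.98).sum(axis=1) / 20.0
      print(np.round(np.sort(recurrence)[::-1], 2))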

  11. A real time neural net estimator of fatigue life

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1990-01-01

    A neural network architecture is proposed to estimate, in real time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two-hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high-frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines.

  12. Scaling Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin

    2016-01-01

    For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
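
    Under simple assumptions (constant failure rates, independent Poisson failures, and one reliability target allocated per component), the spares estimate described above reduces to finding the smallest spares count whose Poisson survival probability meets the target. The fragment below does exactly that for an invented equipment list with made-up rates and masses.

      from scipy.stats import poisson

      def spares_needed(rate, hours, target):
          # Smallest spares count s with P(failures <= s) >= target
          mu, s = rate * hours, 0
          while poisson.cdf(s, mu) < target:
              s += 1
          return s

      # Hypothetical orbital replacement units: (name, failures/hr, kg each)
      orus = [("CO2 scrubber", 2e-5, 40.0),
              ("water pump", 8e-5, 15.0),
              ("valve block", 5e-5, 5.0)]
      hours, target = 1000 * 24, 0.999     # 1000-day mission, per-unit target

      mass = 0.0
      for name, rate, kg in orus:
          s = spares_needed(rate, hours, target)
          mass += s * kg
          print(f"{name:13s} spares needed: {s}")
      print(f"total spares mass: {mass:.0f} kg")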

  13. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, the design of integrated circuits, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among components placed in lines or circles. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly and circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, to compute the reliability of linear and circular systems with a great number of components.
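
    For reference, a linear consecutive k-out-of-n:F system with i.i.d. components admits the classical recursion R(n) = R(n-1) - p·q^k·R(n-k-1). This is not the paper's new method, only the textbook baseline, sketched here with an exhaustive check on a small system.

      from itertools import product

      def linear_consec_knF(p, k, n):
          # Reliability of a linear consecutive k-out-of-n:F system of
          # i.i.d. components (fails iff >= k consecutive components fail)
          q = 1.0 - p
          R = [1.0] * k                 # fewer than k components: cannot fail
          R.append(1.0 - q**k)          # R(k)
          for m in range(k + 1, n + 1):
              R.append(R[m - 1] - p * q**k * R[m - k - 1])
          return R[n]

      def brute_force(p, k, n):
          q, total = 1.0 - p, 0.0
          for states in product("01", repeat=n):     # "1" = working
              if "0" * k not in "".join(states):
                  working = states.count("1")
                  total += p**working * q**(n - working)
          return total

      print(linear_consec_knF(0.9, 2, 8))   # recursion
      print(brute_force(0.9, 2, 8))         # exhaustive check, same value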

  14. Preliminary Development of Real Time Usage-Phase Monitoring System for CNC Machine Tools with a Case Study on CNC Machine VMC 250

    NASA Astrophysics Data System (ADS)

    Budi Harja, Herman; Prakosa, Tri; Raharno, Sri; Yuwana Martawirya, Yatna; Nurhadi, Indra; Setyo Nogroho, Alamsyah

    2018-03-01

    In the job-shop industry, products come in wide variety but small quantities, so every machine tool is shared among production processes and experiences dynamic loads. This dynamic operating condition directly affects the reliability of machine tool components. Hence, the maintenance schedule for every component should be determined from the actual usage of that component. This paper describes the development of a monitoring system that obtains information on the usage of each CNC machine tool component in real time, using an approach that groups components by operation phase. A special device has been developed for monitoring machine tool component usage by utilizing usage-phase activity data taken from certain electronic components within the CNC machine: the adaptor, servo driver and spindle driver, together with additional components such as a microcontroller and relays. The data are used to detect machine utilization phases such as the power-on, machine-ready and spindle-running states. Experimental results have shown that the developed CNC machine tool monitoring system is capable of obtaining phase information on machine tool usage, together with its duration, and displays the information in the user interface application.

  15. Improvements to a Response Surface Thermal Model for Orion Mated to the International Space Station

    NASA Technical Reports Server (NTRS)

    Miller, StephenW.; Walker, William Q.

    2011-01-01

    This study is an extension of previous work to evaluate the applicability of Design of Experiments (DOE)/Response Surface Methodology to on-orbit thermal analysis. The goal was to determine if the methodology could produce a Response Surface Equation (RSE) that predicted the thermal model temperature results within +/-10 F. An RSE is a polynomial expression that can then be used to predict temperatures for a defined range of factor combinations. Based on suggestions received from the previous work, this study used a model with simpler geometry, considered polynomials up to fifth order, and evaluated orbital temperature variations to establish a minimum and maximum temperature for each component. A simplified Outer Mold Line (OML) thermal model of the Orion spacecraft was used in this study. The factors chosen were the vehicle's Yaw, Pitch, and Roll (defining the on-orbit attitude), the Beta angle (restricted to positive beta angles from 0 to 75), and the environmental constants (varying from cold to hot). All factors were normalized from their native ranges to a non-dimensional range from -1.0 to 1.0. Twenty-three components from the OML were chosen, and the minimum and maximum orbital temperatures were calculated for each to produce forty-six responses for the DOE model. A customized DOE case matrix of 145 analysis cases was developed which used analysis points at the factor corners, mid-points, and center. From this data set, RSEs were developed consisting of cubic, quartic, and fifth-order polynomials. The results presented are for the fifth-order RSE. The RSE results were then evaluated for agreement with the analytical model predictions to produce a +/-3(sigma) error band. Forty of the 46 responses had a +/-3(sigma) value of 10 F or less. Encouraged by this initial success, two additional sets of verification cases were selected, one containing 20 cases and the other 50 cases. These cases were evaluated both with the fifth-order RSE and with the analytical model. For the maximum temperature predictions, 12 of the 23 components had all predictions within +/-10 F, and 17 were within +/-20 F. For the minimum temperature predictions, only 4 of the 23 components (the four radiator temperatures) were within the 10 F goal. The maximum temperature RSEs were then run through 59,049 screening cases. The RSE predictions were filtered to find the 55 cases that produced the hottest temperatures; these 55 cases were then analyzed using the thermal model and the results compared against the RSE predictions. As noted earlier, 12 of the 23 responses were within +/-10 F and 17 within +/-20 F. These results demonstrate that, if properly formulated, an RSE can provide a reliable, fast temperature prediction. Despite this progress, additional work is needed to determine why the minimum temperature responses and 6 of the hot temperature responses did not produce reliable RSEs. Recommended focus areas are the model itself (arithmetic vs. diffusion nodes) and consultations with statistical application experts.
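
    The RSE-fitting step itself is ordinary polynomial least squares over the normalized factors. The sketch below fits a full quadratic response surface to a smooth stand-in function (not the Orion thermal model) sampled on a random case matrix, then checks it on independent verification cases; every function and number is illustrative.

      import numpy as np
      from itertools import combinations_with_replacement

      rng = np.random.default_rng(2)

      def basis(X, order=2):
          # All monomials up to `order` in the normalized factors (columns)
          cols = [np.ones(len(X))]
          for d in range(1, order + 1):
              for idx in combinations_with_replacement(range(X.shape[1]), d):
                  cols.append(np.prod(X[:, list(idx)], axis=1))
          return np.column_stack(cols)

      # Smooth stand-in response of three normalized factors in [-1, 1]
      f = lambda X: 60 + 25*X[:, 0] - 10*X[:, 1]*X[:, 2] + 5*X[:, 0]**2

      X_train = rng.uniform(-1, 1, (145, 3))         # DOE-style case matrix
      coef, *_ = np.linalg.lstsq(basis(X_train), f(X_train), rcond=None)

      X_check = rng.uniform(-1, 1, (50, 3))          # verification cases
      err = basis(X_check) @ coef - f(X_check)
      print(f"max |RSE error| on check cases: {np.abs(err).max():.1e} F")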

  16. Methods for Calculating Frequency of Maintenance of Complex Information Security System Based on Dynamics of Its Reliability

    NASA Astrophysics Data System (ADS)

    Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.

    2017-11-01

    This article describes the process of calculating the reliability of a certain complex information security system (CISS), using the example of a technospheric security management model, as well as the ability to determine the frequency of its maintenance from the system reliability parameter, which allows one to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system showing the possibility of its failure (and, as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, there is a certain acceptable level of IS risk assigned by experts for a particular information system; when reliability is a risk-forming factor, an acceptable risk level should be maintained by routine analysis of the condition of the CISS and its elements and by their timely service. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such a system's reliability is written. The chart of CISS reliability change is an S-shaped curve which can be divided into 3 periods: an almost invariably high level of reliability, uniform reliability reduction, and an almost invariably low level of reliability. Given a minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system meets requirements; ideally, this period should not be longer than the first period of the graph. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve the voluminous and critical task of information asset risk management.
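
    As a minimal sketch of reading a maintenance interval off such an S-shaped curve, the fragment below models the curve with a logistic decay (an assumed functional form, not the article's formula) and solves for the time at which reliability first falls to a minimum acceptable level; all parameters are invented.

      import numpy as np

      # Logistic stand-in for the S-shaped reliability curve described above
      R_max, R_min, t0, tau = 0.98, 0.20, 400.0, 60.0   # assumed, time in days

      def R(t):
          return R_min + (R_max - R_min) / (1.0 + np.exp((t - t0) / tau))

      def maintenance_interval(R_accept):
          # Time at which reliability first falls to the acceptable level
          return t0 + tau * np.log((R_max - R_min) / (R_accept - R_min) - 1.0)

      for r in (0.95, 0.90, 0.85):
          print(f"R_accept = {r:.2f} -> service every "
                f"{maintenance_interval(r):.0f} days")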

  17. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Described are the numerous factors that potentially degrade system reliability and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  18. Indian water rights settlements and water management innovations: The role of the Arizona Water Settlements Act

    NASA Astrophysics Data System (ADS)

    Bark, Rosalind H.; Jacobs, Katharine L.

    2009-05-01

    In the American southwest, over-allocated water supplies, groundwater depletion, and potential climate change impacts are major water management concerns. It may therefore seem counterintuitive that the resolution of outstanding senior tribal water claims, essentially reallocating finite water supplies to tribes, could support improved water supply reliability for many water users as is the case with the 2004 Arizona Water Settlements Act. The large size of the settlement and its multiple components translate to significant impacts on water policy in Arizona. Key water management solutions incorporated into the settlement and associated legislation have expanded the water manager's "toolbox" and are expected to enhance water supply reliability both within and outside Arizona's active management areas. Many of these new tools are transferable to water management applications in other states.

  19. Response prediction techniques and case studies of a path blocking system based on Global Transmissibility Direct Transmissibility method

    NASA Astrophysics Data System (ADS)

    Wang, Zengwei; Zhu, Ping; Zhao, Jianxuan

    2017-02-01

    In this paper, the prediction capabilities of the Global Transmissibility Direct Transmissibility (GTDT) method are further developed. Two path-blocking techniques, using only the easily measured variables of the original system to predict the response of a path-blocked system, are generalized to finite element models of continuous systems. The proposed techniques are derived theoretically in a general form for the scenarios of setting the response of a subsystem to zero and of removing the link between two directly connected subsystems. The objective of this paper is to verify the reliability of the proposed techniques by finite element simulations. Two typical cases, a structural vibration transmission case and a structure-borne sound case, in two different configurations are employed to illustrate the validity of the proposed techniques. The points of attention for each case are discussed, and conclusions are given. It is shown that for the two cases of blocking a subsystem, the proposed techniques are able to predict the new response using measured variables of the original system, even though the operational forces are unknown. For the structural vibration transmission case of removing a connector between two components, the proposed techniques are applicable only when the rotational components of the connector's response are very small. The proposed techniques offer relative path measures and provide an alternative way to deal with NVH problems. The work in this paper provides guidance and reference for the engineering application of GTDT prediction techniques.

  20. Mass and Reliability Source (MaRS) Database

    NASA Technical Reports Server (NTRS)

    Valdenegro, Wladimir

    2017-01-01

    The Mass and Reliability Source (MaRS) Database consolidates component mass and reliability data for all Orbital Replacement Units (ORUs) on the International Space Station (ISS) into a single database. It was created to help engineers develop a parametric model that relates hardware mass and reliability. MaRS supplies relevant failure data at the lowest possible component level while providing support for risk, reliability, and logistics analysis. Random-failure data is usually linked to the ORU assembly. MaRS uses this data to identify and display the lowest possible component failure level. As seen in Figure 1, the failure point is identified to the lowest level: Component 2.1. This is useful for efficient planning of spare supplies, supporting long-duration crewed missions, allowing quicker trade studies, and streamlining diagnostic processes. MaRS is composed of information from various databases: MADS (operating hours), VMDB (indentured parts lists), and ISS PART (failure data). This information is organized in Microsoft Excel and accessed through a program made in Microsoft Access (Figure 2). The focus of the Fall 2017 internship tour was to identify the components that were the root cause of failure from the given random-failure data, develop a taxonomy for the database, and attach material headings to the component list. Secondary objectives included verifying the integrity of the data in MaRS, eliminating any part discrepancies, and generating documentation for future reference. Due to the nature of the random-failure data, data mining had to be done manually, without the assistance of an automated program, to ensure positive identification.

  1. A complex network-based importance measure for mechatronics systems

    NASA Astrophysics Data System (ADS)

    Wang, Yanhui; Bi, Lifeng; Lin, Shuai; Li, Man; Shi, Hao

    2017-01-01

    In view of the negative impact of functional dependency, this paper provides an alternative importance measure, called Improved-PageRank (IPR), for measuring the importance of components in mechatronics systems. IPR is a meaningful extension of the centrality measures used in complex networks, which considers the usage reliability of components and the functional dependency between components to increase the usefulness of the importance measure. Our work makes two important contributions. First, this paper integrates the literature on mechatronic architecture and complex networks theory to define a component network. Second, based on the notion of a component network, IPR is introduced for identifying important components. In addition, the IPR component importance measures, together with an algorithm to perform stochastic ordering of components given the time-varying nature of component usage reliability and inter-component functional dependency, are illustrated with a component network of a bogie system consisting of 27 components.
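
    A minimal sketch of a PageRank-style importance measure follows. The abstract does not give the IPR formula, so this stand-in personalizes ordinary PageRank by component unreliability; the dependency graph, reliabilities, and component names are all invented for illustration.

```python
# A sketch, not the paper's IPR: ordinary PageRank over a component
# dependency network, personalized by unreliability (1 - usage reliability)
# so failure-prone, depended-upon components rank higher. All data invented.
import networkx as nx

# Directed edges point from a component to the components that depend on it.
G = nx.DiGraph([("axle", "gearbox"), ("gearbox", "motor"),
                ("frame", "axle"), ("frame", "damper"), ("damper", "axle")])

usage_reliability = {"axle": 0.98, "gearbox": 0.95, "motor": 0.99,
                     "frame": 0.999, "damper": 0.90}

# Personalization vector: bias the random walk toward unreliable components.
personalization = {n: 1.0 - r for n, r in usage_reliability.items()}

ipr = nx.pagerank(G, alpha=0.85, personalization=personalization)
for comp, score in sorted(ipr.items(), key=lambda kv: -kv[1]):
    print(f"{comp:8s} importance = {score:.3f}")
```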

  2. An overview of fatigue failures at the Rocky Flats Wind System Test Center

    NASA Technical Reports Server (NTRS)

    Waldon, C. A.

    1981-01-01

    Potential small wind energy conversion system (SWECS) design problems were identified to improve product quality and reliability. Mass-produced components such as gearboxes, generators, bearings, etc., are generally reliable due to their widespread, uniform use in other industries. The likelihood of failure increases, though, in the interfacing of these components and in SWECS components designed for a specific system use. Problems relating to the structural integrity of such components are discussed and analyzed with techniques currently used in quality assurance programs in other manufacturing industries.

  3. Structured assessment of microsurgery skills in the clinical setting.

    PubMed

    Chan, WoanYi; Niranjan, Niri; Ramakrishnan, Venkat

    2010-08-01

    Microsurgery is an essential component in plastic surgery training. Competence has become an important issue in current surgical practice and training. The complexity of microsurgery requires detailed assessment and feedback on skill components. This article proposes a method of Structured Assessment of Microsurgery Skills (SAMS) in a clinical setting. Three types of assessment (i.e., modified Global Rating Score, errors list and summative rating) were incorporated to develop the SAMS method. Clinical anastomoses were recorded on video using a digital microscope system and were rated by three consultants independently and in a blinded fashion. Fifteen clinical cases of microvascular anastomoses performed by trainees and a consultant microsurgeon were assessed using SAMS. The consultant consistently had the highest scores. Construct validity was also demonstrated by improvement of the SAMS scores of microsurgery trainees. The overall inter-rater reliability was strong (alpha = 0.78). The SAMS method provides both formative and summative assessment of microsurgery skills. It is demonstrated to be a valid, reliable and feasible assessment tool of operating room performance, providing systematic and comprehensive feedback as part of the learning cycle. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Ceramic component reliability with the restructured NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Starlinger, Alois; Gyekenyesi, John P.

    1992-01-01

    The Ceramics Analysis and Reliability Evaluation of Structures (CARES) integrated design program, for the statistical fast-fracture reliability of monolithic ceramic components, is enhanced to include the use of a neutral data base, two-dimensional modeling, and variable problem size. The data base allows for the efficient transfer of element stresses, temperatures, and volumes/areas from the finite element output to the reliability analysis program. Elements are divided to ensure a direct correspondence between the subelements and the Gaussian integration points. Two-dimensional modeling is accomplished by assessing the volume-flaw reliability with shell elements. To demonstrate the improvements in the algorithm, example problems are selected from a round-robin conducted by WELFEP (WEakest Link failure probability prediction by Finite Element Postprocessors).
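
    A minimal sketch of the underlying fast-fracture calculation, under simplifying assumptions (uniaxial element stresses, two-parameter Weibull volume flaws, invented numbers), shows how element-level finite element output rolls up into a component reliability:

```python
# A sketch of a CARES-style fast-fracture roll-up: component survival
# probability from element stresses and volumes with a two-parameter Weibull
# volume-flaw model. Uniaxial stresses stand in for the multiaxial
# treatment; all numbers are invented.
import numpy as np

m = 10.0          # Weibull modulus (shape)
sigma_0 = 400.0   # Weibull scale parameter (MPa)

# Element stresses (MPa) and volumes (m^3), as transferred from the finite
# element output via the neutral data base described above.
stress = np.array([180.0, 220.0, 150.0, 240.0, 200.0])
volume = np.array([2e-6, 1e-6, 3e-6, 5e-7, 1.5e-6])

# Risk of rupture: sum over (sub)elements, then fast-fracture reliability.
risk = np.sum(volume * (stress / sigma_0) ** m)
print(f"Component fast-fracture reliability: {np.exp(-risk):.6f}")
```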

  5. Transit ridership, reliability, and retention.

    DOT National Transportation Integrated Search

    2008-10-01

    This project explores two major components that affect transit ridership: travel time reliability and rider retention. It has been recognized that transit travel time reliability may have a significant impact on attractiveness of transit to many ...

  6. Development and initial validation of the Classification of Early-Onset Scoliosis (C-EOS).

    PubMed

    Williams, Brendan A; Matsumoto, Hiroko; McCalla, Daren J; Akbarnia, Behrooz A; Blakemore, Laurel C; Betz, Randal R; Flynn, John M; Johnston, Charles E; McCarthy, Richard E; Roye, David P; Skaggs, David L; Smith, John T; Snyder, Brian D; Sponseller, Paul D; Sturm, Peter F; Thompson, George H; Yazici, Muharrem; Vitale, Michael G

    2014-08-20

    Early-onset scoliosis is a heterogeneous condition, with highly variable manifestations and natural history. No standardized classification system exists to describe and group patients, to guide optimal care, or to prognosticate outcomes within this population. A classification system for early-onset scoliosis is thus a necessary prerequisite to the timely evolution of care of these patients. Fifteen experienced surgeons participated in a nominal group technique designed to achieve a consensus-based classification system for early-onset scoliosis. A comprehensive list of factors important in managing early-onset scoliosis was generated using a standardized literature review, semi-structured interviews, and open forum discussion. Three group meetings and two rounds of surveying guided the selection of classification components, subgroupings, and cut-points. Initial validation of the system was conducted using an interobserver reliability assessment based on the classification of a series of thirty cases. Nominal group technique was used to identify three core variables (major curve angle, etiology, and kyphosis) with high group content validity scores. Age and curve progression ranked slightly lower. Participants evaluated the cases of thirty patients with early-onset scoliosis for reliability testing. The mean kappa value for etiology (0.64) was substantial, while the mean kappa values for major curve angle (0.95) and kyphosis (0.93) indicated almost perfect agreement. The final classification consisted of a continuous age prefix, etiology (congenital or structural, neuromuscular, syndromic, and idiopathic), major curve angle (1, 2, 3, or 4), and kyphosis (-, N, or +) variables, and an optional progression modifier (P0, P1, or P2). Utilizing formal consensus-building methods in a large group of surgeons experienced in treating early-onset scoliosis, a novel classification system for early-onset scoliosis was developed with all core components demonstrating substantial to excellent interobserver reliability. This classification system will serve as a foundation to guide ongoing research efforts and standardize communication in the clinical setting. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.

  7. Testing the Technology Acceptance Model: HIV case managers' intention to use a continuity of care record with context-specific links.

    PubMed

    Schnall, Rebecca; Bakken, Suzanne

    2011-09-01

    To assess the applicability of the Technology Acceptance Model (TAM) constructs in explaining HIV case managers' behavioural intention to use a continuity of care record (CCR) with context-specific links designed to meet their information needs. Data were collected from 94 case managers who provide care to persons living with HIV (PLWH) using an online survey comprising three components: (1) demographic information: age, gender, ethnicity, race, Internet usage and computer experience; (2) a mock-up of the CCR with context-specific links; and (3) items related to TAM constructs. Data analysis included principal components factor analysis (PCA), assessment of internal consistency reliability, and univariate and multivariate analysis. PCA extracted three factors (Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use), explained variance = 84.9%, Cronbach's α = 0.69-0.91. In a linear regression model, Perceived Ease of Use, Perceived Usefulness and Perceived Barriers to Use explained 43.6% (p < 0.001) of the variance in Behavioural Intention to use a CCR with context-specific links. Our study contributes to the evidence base regarding TAM in health care by expanding the type of professional surveyed, the study setting and the Health Information Technology assessed.

  8. Discrete component bonding and thick film materials study

    NASA Technical Reports Server (NTRS)

    Kinser, D. L.

    1975-01-01

    The results are summarized of an investigation of discrete component bonding reliability and a fundamental study of new thick film resistor materials. The component bonding study examined several types of solder-bonded components, with some processing-variable studies to determine their influence upon bonding reliability. The bonding reliability was assessed using the thermal cycle: 15 minutes at room temperature, 15 minutes at +125 C, 15 minutes at room temperature, and 15 minutes at -55 C. The thick film resistor materials examined were of the transition metal oxide-phosphate glass family, with several elemental metal additions of the same transition metal. These studies were conducted by preparing a paste of the subject composition, then printing, drying, and firing using both air and reducing atmospheres. The resulting resistors were examined for adherence, resistance, thermal coefficient of resistance, and voltage coefficient of resistance.

  9. Precision cast vs. wrought superalloys

    NASA Technical Reports Server (NTRS)

    Tien, J. K.; Borofka, J. C.; Casey, M. E.

    1986-01-01

    While cast polycrystalline superalloys recommend themselves by virtue of better 'buy-to-fly' ratios and higher strengthening gamma-prime volume fractions than those of wrought superalloys, the expansion of their use into such critical superalloy applications as gas turbine hot-section components has been slowed by insufficient casting process opportunities for microstructural control. Attention is presently drawn, however, to casting process developments facilitating the production of defect-tolerant superalloy castings having improved fracture reliability. Integrally bladed turbine wheel and thin-walled turbine exhaust case near-net-shape castings have been produced by these means.

  10. Grid-Level Application of Electrical Energy Storage: Example Use Cases in the United States and China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Gevorgian, Vahan; Wang, Caixia

    Electrical energy storage (EES) systems are expected to play an increasing role in helping the United States and China, the world's largest economies with the two largest power systems, meet the challenges of integrating more variable renewable resources and enhancing the reliability of power systems by improving the operating capabilities of the electric grid. EES systems are becoming integral components of a resilient and efficient grid through a diverse set of applications that include energy management, load shifting, frequency regulation, grid stabilization, and voltage support.

  11. What Are the Real Procedural Costs of Bariatric Surgery? A Systematic Literature Review of Published Cost Analyses.

    PubMed

    Doble, Brett; Wordsworth, Sarah; Rogers, Chris A; Welbourn, Richard; Byrne, James; Blazeby, Jane M

    2017-08-01

    This review aims to evaluate the current literature on the procedural costs of bariatric surgery for the treatment of severe obesity. Using a published framework for the conduct of micro-costing studies for surgical interventions, existing cost estimates from the literature are assessed for their accuracy, reliability and comprehensiveness based on their consideration of seven 'important' cost components. MEDLINE, PubMed, key journals and reference lists of included studies were searched up to January 2017. Eligible studies had to report per-case, total procedural costs for any type of bariatric surgery, broken down into two or more individual cost components. A total of 998 citations were screened, of which 13 studies were included for analysis. Included studies were mainly conducted from a US hospital perspective, assessed either gastric bypass or adjustable gastric banding procedures and considered a range of different cost components. The mean total procedural cost across all included studies was US$14,389 (range, US$7423 to US$33,541). No study considered all of the recommended 'important' cost components, and estimation methods were poorly reported. The accuracy, reliability and comprehensiveness of the existing cost estimates are, therefore, questionable. There is a need for a comparative cost analysis of the different approaches to bariatric surgery, with micro-costing identified as the most appropriate costing approach. Such an analysis will not only be useful in estimating the relative cost-effectiveness of different surgeries but will also support appropriate reimbursement and budgeting by healthcare payers, ensuring that barriers to accessing this effective treatment for severely obese patients are minimised.

  12. Life and reliability models for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Knorr, R. J.; Coy, J. J.

    1982-01-01

    Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission, the input and output shafts are co-axial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load-versus-life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
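
    A minimal sketch of this modeling chain, with invented Weibull parameters rather than the paper's transmission data: system reliability is the product of component Weibull reliabilities, and a 90%-reliability system life is solved for numerically.

```python
# A sketch with invented parameters, not the paper's 150 kW data: component
# lives are Weibull-distributed, system reliability is their product, and
# the system life at 90 percent reliability is found by root solving.
import numpy as np
from scipy.optimize import brentq

# (Weibull slope b, characteristic life theta in millions of rotations)
components = {"sun gear": (2.5, 8.0), "ring gear": (2.5, 20.0),
              "planet gear": (2.5, 10.0), "planet bearing": (1.1, 5.0)}

def system_reliability(life):
    """Product of component Weibull reliabilities at a given life."""
    return np.prod([np.exp(-(life / theta) ** b)
                    for b, theta in components.values()])

# System life at 90 percent reliability (the L10 life).
l10 = brentq(lambda life: system_reliability(life) - 0.90, 1e-9, 50.0)
print(f"System L10 life: {l10:.3f} million rotations")
```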

  13. Lifetime Reliability Evaluation of Structural Ceramic Parts with the CARES/LIFE Computer Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker equation. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), Weibull's normal stress averaging method (NSA), or Batdorf's theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating cyclic fatigue parameter estimation and component reliability analysis with proof testing are included.

  14. Reliability and availability analysis of a 10 kW@20 K helium refrigerator

    NASA Astrophysics Data System (ADS)

    Li, J.; Xiong, L. Y.; Liu, L. Q.; Wang, H. R.; Wang, B. M.

    2017-02-01

    A 10 kW@20 K helium refrigerator has been established at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. To evaluate and improve this refrigerator's reliability and availability, a reliability and availability analysis is performed. According to the mission profile of this refrigerator, a functional analysis is performed. The failure data of the refrigerator components are collected, and failure rate distributions are fitted with the software Weibull++ V10.0. A Failure Modes, Effects & Criticality Analysis (FMECA) is performed, and the critical components with higher risks are identified. The software BlockSim V9.0 is used to calculate the reliability and the availability of this refrigerator. The result indicates that the compressors, turbine and vacuum pump are the critical components and the key units of this refrigerator. Mitigation actions with respect to design, testing, maintenance and operation are proposed to decrease the major and medium risks.
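
    The availability side of such an analysis reduces, for a series system in steady state, to multiplying per-unit inherent availabilities MTBF/(MTBF + MTTR). The sketch below uses invented hour values, not the paper's fitted data:

```python
# A sketch of steady-state availability for a series system from component
# MTBF/MTTR, in the spirit of the block-diagram calculation described
# above. All hour values are invented placeholders.
mtbf_mttr_hours = {"compressor": (8_000, 72), "turbine": (20_000, 120),
                   "vacuum pump": (10_000, 24), "heat exchanger": (50_000, 48)}

availability = 1.0
for name, (mtbf, mttr) in mtbf_mttr_hours.items():
    a_i = mtbf / (mtbf + mttr)   # inherent availability of one unit
    availability *= a_i          # series system: all units must be up
    print(f"{name:14s} A = {a_i:.4f}")

print(f"Refrigerator availability (series): {availability:.4f}")
```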

  15. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed-unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
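
    The spares-versus-reliability tradeoff can be sketched with a Poisson model: if a unit fails at a constant rate, the probability that k onboard spares cover the whole mission is the Poisson CDF at k. The rate and duration below are illustrative assumptions, not mission figures:

```python
# A sketch under assumed numbers: probability that k spares cover all
# failures of one unit over a Mars-length mission, with constant-rate
# (Poisson) failures. Rate and duration are illustrative.
from scipy.stats import poisson

failure_rate_per_hour = 1.0e-5        # assumed unit failure rate
mission_hours = 2.5 * 365 * 24        # assumed ~2.5-year Mars mission
mu = failure_rate_per_hour * mission_hours

for spares in range(4):
    p_ok = poisson.cdf(spares, mu)    # P(failures <= spares)
    print(f"{spares} spares -> P(no unrepaired failure) = {p_ok:.4f}")
```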

  16. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and the reliability data (cycles to failure) the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. The reliability of a new proposed MEMS device can then be estimated by using the appropriate trained neural networks developed in this work.
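
    A minimal sketch of that train-validate-predict loop, with invented attribute columns and synthetic cycles-to-failure data standing in for the microengine data set:

```python
# A sketch, not the research's model: a small neural network mapping device
# attributes to cycles to failure. Attribute columns, the target scale
# (millions of cycles), and the data are invented for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Columns: process maturity, feature size, operating temperature, packaging
X = rng.uniform(0.0, 1.0, size=(200, 4))
# Target: cycles to failure, in millions (synthetic relationship + noise).
y = 0.5 + X[:, 0] - 0.3 * X[:, 2] + rng.normal(0.0, 0.05, 200)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25,
                                                  random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

print(f"Validation R^2: {net.score(X_val, y_val):.3f}")
print(f"Predicted life: {net.predict([[0.8, 0.5, 0.2, 0.1]])[0]:.3f} Mcycles")
```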

  17. Is case-specificity content-specificity? An analysis of data from extended-matching questions.

    PubMed

    Dory, Valerie; Gagnon, Robert; Charlin, Bernard

    2010-03-01

    Case-specificity, i.e., variability of a subject's performance across cases, has been a consistent finding in medical education. It has important implications for assessment validity and reliability. Its root causes remain a matter of discussion. One hypothesis, content-specificity, links variability of performance to variable levels of relevant knowledge. Extended-matching items (EMIs) are an ideal format to test this hypothesis, as items are grouped by topic. If differences in content knowledge are the main cause of case-specificity, variability across topics should be high and variability across items within the same topic low. We used generalisability analysis on the results of a written test composed of 159 EMIs sat by two cohorts of general practice trainees at one university. Two hundred and twenty-seven trainees took part. The variance component attributed to subjects was small. Variance attributed to topics was smaller than variance attributed to items. The main source of error was the interaction between subjects and items, accounting for two-thirds of error. The generalisability D study revealed that, for the same total number of items, increasing the number of topics results in a higher G coefficient than increasing the number of items per topic. Topical knowledge does not seem to explain the case-specificity observed in our data. Structure of knowledge and reasoning strategy may be more important, in particular pattern recognition, which EMIs were designed to elicit. The causal explanations of case-specificity may be dependent on test format. Increasing the number of topics, with fewer items each, would increase reliability but also testing time.
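
    The D-study arithmetic behind that conclusion can be sketched directly. For a persons x (items nested in topics) design, the relative G coefficient of a test mean is var_p / (var_p + var_pt/n_t + var_pi/(n_t*n_i)); the variance components below are invented, chosen so that the subject-by-item interaction dominates, as the abstract reports:

```python
# A sketch of the D-study calculation with invented variance components:
# subject (var_p), subject*topic (var_pt), and subject*item (var_pi).
def g_coefficient(var_p, var_pt, var_pi, n_topics, n_items_per_topic):
    """Relative G coefficient for a persons x (items:topics) design."""
    error = var_pt / n_topics + var_pi / (n_topics * n_items_per_topic)
    return var_p / (var_p + error)

var_p, var_pt, var_pi = 0.010, 0.020, 0.120

# Same total of 160 items, split two ways: more topics beats more items/topic.
print(g_coefficient(var_p, var_pt, var_pi, n_topics=40, n_items_per_topic=4))
print(g_coefficient(var_p, var_pt, var_pi, n_topics=20, n_items_per_topic=8))
```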

  18. Overview of the SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division Activities and Technical Projects

    NASA Technical Reports Server (NTRS)

    Singhal, Surendra N.

    2003-01-01

    The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - case studies of typical problems involving uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable to which problems. (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data, and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true: there are ways to do credible probabilistic analysis with little data. (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with a statement of what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - correct reliability computations at both the component and system level are needed so one can design an item based on its expected usage and life span. (4) Real World Applications of Probabilistic Methods (PM) - a draft of volume 1, comprising aerospace applications, has been released. Volume 2, a compilation of real-world applications of probabilistic methods with essential information demonstrating application type and time/cost savings from the use of probabilistic methods for generic applications, is in progress. Relevance to Industry & Government - too often, we say 'the proof is in the pudding'; with help from many contributors, we hope to produce such a document. The problem is that few people are coming forward, due to the proprietary nature of the material, so we ask that only minimum information be documented, including the problem description, the method used, whether it resulted in any savings, and how much. (5) Software Reliability - the software reliability concept, program, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - software reliability is a complex issue that must be understood and addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability. (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance and life.

  19. Reliability and Maintainability Data for Lead Lithium Cooling Systems

    DOE PAGES

    Cadwallader, Lee

    2016-11-16

    This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.

  20. Component Structure, Reliability, and Stability of Lawrence's Self-Esteem Questionnaire (LAWSEQ)

    ERIC Educational Resources Information Center

    Rae, Gordon; Dalto, Georgia; Loughrey, Dolores; Woods, Caroline

    2011-01-01

    Lawrence's Self-Esteem Questionnaire (LAWSEQ) was administered to 120 Year 1 pupils in six schools in Belfast, Northern Ireland. A principal components analysis indicated that the scale items were unidimensional and that the reliability of the scores, as estimated by Cronbach's alpha, was satisfactory ([alpha] = 0.73). There were no differences…

  1. The Five 'R's' for Developing Trusted Software Frameworks to increase confidence in, and maximise reuse of, Open Source Software.

    NASA Astrophysics Data System (ADS)

    Fraser, Ryan; Gross, Lutz; Wyborn, Lesley; Evans, Ben; Klump, Jens

    2015-04-01

    Recent investments in HPC, cloud and petascale data stores have dramatically increased the scale and resolution at which earth science challenges can now be tackled. These new infrastructures are highly parallelised, and to fully utilise them and access the large volumes of earth science data now available, a new approach to software stack engineering needs to be developed. The size, complexity and cost of the new infrastructures mean any software deployed has to be reliable, trusted and reusable. Increasingly, software is available via open source repositories, but these usually only enable code to be discovered and downloaded. It is hard for a scientist to judge the suitability and quality of individual codes: rarely is there information on how and where codes can be run, what the critical dependencies are, and in particular, on the version requirements and licensing of the underlying software stack. A trusted software framework is proposed to enable reliable software to be discovered, accessed and then deployed on multiple hardware environments. More specifically, this framework will enable those who generate the software, and those who fund the development of software, to gain credit for the effort, IP, time and dollars spent, and facilitate quantification of the impact of individual codes. For scientific users, the framework delivers reviewed and benchmarked scientific software with mechanisms to reproduce results. The trusted framework will have five separate, but connected, components: Register, Review, Reference, Run, and Repeat. 1) The Register component will facilitate discovery of relevant software from multiple open source code repositories. The registration process should include information about licensing and the hardware environments the code can be run on, define appropriate validation (testing) procedures, and list the critical dependencies. 2) The Review component targets verification of the software, typically against a set of benchmark cases. This will be achieved by linking the code in the software framework to peer review forums such as Mozilla Science or appropriate journals (e.g. the Geoscientific Model Development journal) to assist users to know which codes to trust. 3) Referencing will be accomplished by linking the software framework to groups such as Figshare or ImpactStory that help disseminate and measure the impact of scientific research, including program code. 4) The Run component will draw on information supplied in the registration process, benchmark cases described in the review, and relevant information to instantiate the scientific code on the selected environment. 5) The Repeat component will tap into existing provenance workflow engines that automatically capture information relating to a particular run of that software, including identification of all input and output artefacts, and all elements and transactions within that workflow. The proposed trusted software framework will enable users to rapidly discover and access reliable code, reduce the time to deploy it, and greatly facilitate sharing, reuse and reinstallation of code. Properly designed, it could scale out to massively parallel systems and be accessed nationally and internationally for multiple use cases, including supercomputer centres, cloud facilities, and local computers.

  2. MaRS Project

    NASA Technical Reports Server (NTRS)

    Aruljothi, Arunvenkatesh

    2016-01-01

    The Space Exploration Division of the Safety and Mission Assurance Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or the Mass and Reliability System, the tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data for low-level space-grade components; MaRS aims to be the first database to do this. The data in MaRS are based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the back end is complete and only requires optimization. MaRS has been populated with all the assemblies, and their components, that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.

  3. Analytical models for coupling reliability in identical two-magnet systems during slow reversals

    NASA Astrophysics Data System (ADS)

    Kani, Nickvash; Naeemi, Azad

    2017-12-01

    This paper follows previous works which investigated the strength of dipolar coupling in two-magnet systems. While those works focused on qualitative analyses, this manuscript elucidates reversal through dipolar coupling, culminating in analytical expressions for reversal reliability in identical two-magnet systems. The dipolar field generated by a mono-domain magnetic body can be represented by a tensor containing both longitudinal and perpendicular field components; this field changes orientation and magnitude based on the magnetization of neighboring nanomagnets. While the dipolar field does reduce to its longitudinal component at short time-scales, for slow magnetization reversals the simple longitudinal-field representation greatly underestimates the scope of parameters that ensure reliable coupling. For the first time, analytical models that map the geometric and material parameters required for reliable coupling in two-magnet systems are developed. It is shown that in biaxial nanomagnets, the x̂ and ŷ components of the dipolar field contribute to the coupling, while all three dimensions contribute to the coupling between a pair of uniaxial magnets. Additionally, the ratio of the longitudinal and perpendicular components of the dipolar field is very important: if the perpendicular components of the dipolar tensor are too large, the nanomagnet pair may come to rest in an undesirable meta-stable state away from the free axis. The analytical models formulated in this manuscript map the minimum and maximum parameters for reliable coupling. Using these models, it is shown that there is only a very small range of material parameters that can facilitate reliable coupling between perpendicular-magnetic-anisotropy nanomagnets; hence, in-plane nanomagnets are more suitable for coupled systems.

  4. Case Mis-Conceptualization in Psychological Treatment: An Enduring Clinical Problem.

    PubMed

    Ridley, Charles R; Jeffrey, Christina E; Roberson, Richard B

    2017-04-01

    Case conceptualization, an integral component of mental health treatment, aims to facilitate therapeutic gains by formulating a clear picture of a client's psychological presentation. However, despite numerous attempts to improve this clinical activity, it remains unclear how well existing methods achieve their purported purpose. Case formulation is inconsistently defined in the literature and implemented in practice, with many methods varying in complexity, theoretical grounding, and empirical support. In addition, many of the methods demand a precise clinical acumen that is easily influenced by judgmental and inferential errors. These errors occur regardless of clinicians' level of training or amount of clinical experience. Overall, the lack of a consensus definition, a diversity of methods, and susceptibility of clinicians to errors are manifestations of the state of crisis in case conceptualization. This article, the 2nd in a series of 5 on thematic mapping, argues the need for more reliable and valid models of case conceptualization. © 2017 Wiley Periodicals, Inc.

  5. Fuel Cell Balance-of-Plant Reliability Testbed Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sproat, Vern; LaHurd, Debbie

    Reliability of the fuel cell system balance-of-plant (BoP) components is a critical factor that needs to be addressed prior to fuel cells becoming fully commercialized. Failure or performance degradation of BoP components has been identified as a life-limiting factor in fuel cell systems [1]. The goal of this project is to develop a series of test beds that will test system components such as pumps, valves, sensors, fittings, etc., under operating conditions anticipated in real Polymer Electrolyte Membrane (PEM) fuel cell systems. Results will be made generally available to begin removing reliability as a roadblock to the growth of the PEM fuel cell industry. Stark State College students participating in the project, in conjunction with their coursework, have been exposed to technical knowledge and training in the handling and maintenance of hydrogen, fuel cells and system components, as well as component failure modes and mechanisms. Three test beds were constructed. Testing was completed on gas flow pumps, tubing, and pressure and temperature sensors and valves.

  6. Reliability, Convergent Validity and Time Invariance of Default Mode Network Deviations in Early Adult Major Depressive Disorder.

    PubMed

    Bessette, Katie L; Jenkins, Lisanne M; Skerrett, Kristy A; Gowins, Jennifer R; DelDonno, Sophie R; Zubieta, Jon-Kar; McInnis, Melvin G; Jacobs, Rachel H; Ajilore, Olusola; Langenecker, Scott A

    2018-01-01

    There is substantial variability across studies of default mode network (DMN) connectivity in major depressive disorder, and reliability and time-invariance are not reported. This study evaluates whether DMN dysconnectivity in remitted depression (rMDD) is reliable over time and symptom-independent, and explores convergent relationships with cognitive features of depression. A longitudinal study was conducted with 82 young adults free of psychotropic medications (47 rMDD, 35 healthy controls) who completed clinical structured interviews, neuropsychological assessments, and two resting-state fMRI scans across two study sites. Functional connectivity analyses from bilateral posterior cingulate and anterior hippocampal formation seeds in the DMN were conducted at both time points within a repeated-measures analysis of variance to compare groups and evaluate the reliability of group-level connectivity findings. Eleven hyper-connectivity clusters (from the posterior cingulate) and six hypo-connectivity clusters (from the hippocampal formation) were obtained in rMDD, with moderate to adequate reliability in all but one cluster (ICC range = 0.50 to 0.76 for 16 of 17). The significant clusters were reduced with a principal component analysis (5 components obtained) to explore these connectivity components, which were then correlated with cognitive features (rumination, cognitive control, learning and memory, and explicit emotion identification). At the exploratory level, for convergent validity, components consisting of posterior cingulate hyperconnectivity with the cognitive control network in rMDD were related to cognitive control (inversely) and rumination (positively). Components consisting of anterior hippocampal formation hypoconnectivity with the social emotional network and DMN were related to memory (inversely) and happy emotion identification (positively). Thus, time-invariant DMN connectivity differences exist early in the lifespan course of depression and are reliable. The nuanced results suggest a ventral within-network hypoconnectivity associated with poor memory and a dorsal cross-network hyperconnectivity linked to poorer cognitive control and elevated rumination. Study of early-course remitted depression, with attention to reliability and symptom independence, could lead to more readily translatable clinical assessment tools for biomarkers.
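
    The test-retest statistic the abstract leans on, the ICC, can be computed from a subjects-by-sessions matrix with the standard two-way random-effects formula (Shrout and Fleiss ICC(2,1)). The sketch below is generic, with simulated connectivity values, not the study's pipeline:

```python
# A generic test-retest ICC(2,1) sketch (Shrout & Fleiss, two-way random
# effects). The simulated "connectivity" values for 30 subjects scanned
# twice are invented.
import numpy as np

def icc_2_1(data):
    """ICC(2,1) for an (n subjects x k sessions) matrix."""
    n, k = data.shape
    grand = data.mean()
    msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
    resid = (data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0, keepdims=True) + grand)
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
subject_effect = rng.normal(0.5, 0.2, 30)              # stable trait signal
scans = np.column_stack([subject_effect + rng.normal(0.0, 0.1, 30)
                         for _ in range(2)])           # two noisy sessions
print(f"Test-retest ICC(2,1): {icc_2_1(scans):.2f}")
```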

  7. Reliability assessment of an OVH HV power line truss transmission tower subjected to seismic loading

    NASA Astrophysics Data System (ADS)

    Winkelmann, Karol; Jakubowska, Patrycja; Soltysik, Barbara

    2017-03-01

    The study focuses on the reliability of a transmission tower, type OS24 ON150 + 10, an element of an OVH HV power line, under seismic loading. In order to describe the seismic force, a real-life recording of the horizontal component of the El Centro earthquake was adopted. The amplitude and the period of this excitation are assumed random; their variation is described by a Weibull distribution. The possible state space of the phenomenon is given in the form of a structural response surface (RSM methodology), approximated by an ANOVA table with directional sampling (DS) points. Four design limit states are considered: a stress limit criterion for a natural load combination, a criterion for an accidental combination (one-sided cable snap), and vertical and horizontal translation criteria. For these cases, the HLRF reliability index β is used for structural safety assessment. The RSM approach is well suited to the analysis - it is numerically efficient and not excessively time-consuming, and it indicates a high confidence level. Given the problem conditions, the seismic excitation is shown to be a sufficient trigger for the loss of load-bearing capacity or stability of the tower.
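
    A minimal sketch of the HLRF iteration behind the β index: in standard normal space, the design point is found by fixed-point iteration and β is its distance from the origin. The limit-state function below is a toy stand-in for the tower's four criteria:

```python
# A sketch of the HLRF (Hasofer-Lind-Rackwitz-Fiessler) iteration with a toy
# limit state, not the tower model: beta is the distance from the origin to
# the design point in standard normal space.
import numpy as np

def g(u):
    """Toy limit state: capacity minus seismic demand (failure when g < 0)."""
    return 3.0 - u[0] - 0.5 * u[1] - 0.1 * u[0] * u[1]

def grad_g(u, h=1e-6):
    """Forward-difference gradient of g."""
    g0 = g(u)
    return np.array([(g(u + h * e) - g0) / h for e in np.eye(len(u))])

def hlrf_beta(u=np.zeros(2), tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad  # HLRF update
        if np.linalg.norm(u_new - u) < tol:
            return np.linalg.norm(u_new)
        u = u_new
    return np.linalg.norm(u)

print(f"HLRF reliability index beta = {hlrf_beta():.3f}")
```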

  8. Reliability Issues in Stirling Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey; Shah, Ashwin

    2005-01-01

    Stirling power conversion is a potential candidate for use in a Radioisotope Power System (RPS) for space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced requirement of radioactive material. Reliability of an RPS that utilizes Stirling power conversion technology is important in order to ascertain long term successful performance. Owing to long life time requirement (14 years), it is difficult to perform long-term tests that encompass all the uncertainties involved in the design variables of components and subsystems comprising the RPS. The requirement for uninterrupted performance reliability and related issues are discussed, and some of the critical areas of concern are identified. An overview of the current on-going efforts to understand component life, design variables at the component and system levels, and related sources and nature of uncertainties are also discussed. Current status of the 110 watt Stirling Radioisotope Generator (SRG110) reliability efforts is described. Additionally, an approach showing the use of past experience on other successfully used power systems to develop a reliability plan for the SRG110 design is outlined.

  10. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.

  11. Scaled CMOS Reliability and Considerations for Spacecraft Systems: Bottom-Up and Top-Down Perspective

    NASA Technical Reports Server (NTRS)

    White, Mark

    2012-01-01

    New space missions will increasingly rely on more advanced technologies because of system requirements for higher performance, particularly in instruments and high-speed processing. Component-level reliability challenges with scaled CMOS in spacecraft systems are presented from a bottom-up perspective. Fundamental front-end and back-end processing reliability issues with more aggressively scaled parts are discussed. Effective thermal management from the system level to the component level (top-down) is a key element in the overall design of reliable systems. Thermal management in space systems must consider a wide range of issues, including the thermal loading of many different components and the frequent temperature cycling of some systems. Both perspectives (top-down and bottom-up) play a large role in robust, reliable spacecraft system design.

  12. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or at linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses, reproducibility across trials, to develop a technique that extracts a small number of high-SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single best electrode or the principal components. We provide a freely available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
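
    A minimal sketch of the core idea, rendered in the time domain rather than on arrays of Fourier coefficients as the paper does: spatial filters are found by a generalized eigendecomposition that maximizes cross-trial covariance relative to pooled within-trial covariance. The data shapes and the simulated SSVEP are invented:

```python
# A simplified stand-in for the paper's method: reliability-maximizing
# spatial filters via a generalized eigenproblem on simulated SSVEP data.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 16, 500

# One reproducible 12 Hz source mixed into all channels, plus trial noise.
t = np.arange(n_samples) / 250.0
source = np.sin(2 * np.pi * 12 * t)
mixing = rng.normal(0.0, 1.0, n_channels)
X = (mixing[None, :, None] * source[None, None, :]
     + rng.normal(0.0, 3.0, (n_trials, n_channels, n_samples)))

# Pooled within-trial covariance, and covariance of each trial against the
# trial average (which retains only the reproducible part of the response).
Xc = X - X.mean(axis=2, keepdims=True)
Rxx = np.mean([xi @ xi.T for xi in Xc], axis=0) / n_samples
mean_trial = Xc.mean(axis=0)
Rxy = np.mean([xi @ mean_trial.T for xi in Xc], axis=0) / n_samples
Rxy = 0.5 * (Rxy + Rxy.T)                       # symmetrize

# Generalized eigenvectors, sorted ascending; the last is most reliable.
evals, evecs = eigh(Rxy, Rxx)
w = evecs[:, -1]                                # most reliable spatial filter
print(f"Top component reliability ratio: {evals[-1]:.3f}")
```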

  13. Flexible organic TFT bio-signal amplifier using reliable chip component assembly process with conductive adhesive.

    PubMed

    Yoshimoto, Shusuke; Uemura, Takafumi; Akiyama, Mihoko; Ihara, Yoshihiro; Otake, Satoshi; Fujii, Tomoharu; Araki, Teppei; Sekitani, Tsuyoshi

    2017-07-01

    This paper presents a flexible organic thin-film transistor (OTFT) amplifier for bio-signal monitoring and describes the chip component assembly process. Using a conductive adhesive and a chip mounter, the chip components are mounted on a flexible film substrate that carries the OTFT circuits. This study first investigates the reliability of the assembly technique for chip components on the flexible substrate. It then specifically examines heart pulse wave monitoring conducted using the proposed flexible amplifier circuit and a flexible piezoelectric film. We connected the amplifier to a Bluetooth device for a wearable-device demonstration.

  14. Correlation study between vibrational environmental and failure rates of civil helicopter components

    NASA Technical Reports Server (NTRS)

    Alaniz, O.

    1979-01-01

    An investigation of two selected helicopter types, namely the Models 206A/B and 212, is reported. An analysis of the available vibration and reliability data for these two helicopter types resulted in the selection of ten components, located in five different areas of the helicopter and consisting primarily of instruments, electrical components, and other noncritical flight hardware. The potential for advanced technology in suppressing vibration in helicopters was assessed. There are still several unknowns concerning both the vibration environment and the reliability of helicopter noncritical flight components. Vibration data for the selected components were either insufficient or inappropriate. The maintenance data examined for the selected components were inappropriate due to variations in failure mode identification, inconsistent reporting, or inaccurate information.

  15. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...

  16. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...

  17. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...

  18. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... that have already occurred but were not evident to the operating crew. (b) Components or systems in a... shows decreasing reliability with increasing operating age. An age/time limit may be used to reduce the... maintenance of a component or system to protect the safety and operating capability of the equipment, a number...

  19. System reliability analysis through corona testing

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.

    1975-01-01

    A corona vacuum test facility for nondestructive testing of power system components was built in the Reliability and Quality Engineering Test Laboratories at the NASA Lewis Research Center. The facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. The facility is being used to test various high-voltage power system components.

  20. Microstructure-Evolution and Reliability Assessment Tool for Lead-Free Component Insertion in Army Electronics

    DTIC Science & Technology

    2008-10-01

    provide adequate means for thermal heat dissipation and cooling. Thus electronic packaging has four main functions [1]: • Signal distribution which... dissipation, involving structural and materials consideration. • Mechanical, chemical and electromagnetic protection of components and... nature when compared to phenomenological models. The microelectronic packaging industry typically spends several months building and reliability

  1. General Practitioners' Understanding Pertaining to Reliability, Interactive and Usability Components Associated with Health Websites

    ERIC Educational Resources Information Center

    Usher, Wayne

    2009-01-01

    This study was undertaken to determine the level of understanding of Gold Coast general practitioners (GPs) pertaining to such criteria as reliability, interactive and usability components associated with health websites. These are important considerations due to the increased levels of computer and World Wide Web (WWW)/Internet use and health…

  2. Increasing the Reliability of Ability-Achievement Difference Scores: An Example Using the Kaufman Assessment Battery for Children.

    ERIC Educational Resources Information Center

    Caruso, John C.; Witkiewitz, Katie

    2002-01-01

    As an alternative to equally weighted difference scores, examined an orthogonal reliable component analysis (RCA) solution and an oblique principal components analysis (PCA) solution for the standardization sample of the Kaufman Assessment Battery for Children (KABC; A. Kaufman and N. Kaufman, 1983). Discusses the practical implications of the…

  3. The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Methods have been developed for the evaluation of cost-effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans, with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan, along with calculations of optimum test levels and expected costs. The test plans have been ranked according to both minimum expected project cost and vibroacoustic reliability. It was found that optimum costs may vary by up to $6 million, with the lowest-cost plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.

  4. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

    Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.
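
    For context, failure rate point estimates and two-sided confidence intervals of the kind reported in such studies are conventionally obtained from the chi-square distribution under an exponential (constant failure rate) model. A minimal sketch with illustrative numbers, not data from the report:

        from scipy.stats import chi2

        def failure_rate_ci(failures, unit_hours, conf=0.90):
            """Point estimate and two-sided confidence bounds for a constant
            failure rate (failures per hour), time-terminated observation."""
            alpha = 1.0 - conf
            lam_hat = failures / unit_hours
            lower = chi2.ppf(alpha / 2, 2 * failures) / (2 * unit_hours)
            upper = chi2.ppf(1 - alpha / 2, 2 * (failures + 1)) / (2 * unit_hours)
            return lam_hat, lower, upper

        # Example: 4 failures observed over 200,000 component-hours
        print(failure_rate_ci(4, 2.0e5))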

  5. Reliability apportionment approach for spacecraft solar array using fuzzy reasoning Petri net and fuzzy comprehensive evaluation

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng

    2012-07-01

    The reliability apportionment of a spacecraft solar array is of significant importance for spacecraft designers in the early stage of design. However, it is difficult to use existing methods to resolve the reliability apportionment problem because of data insufficiency and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method which combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may receive the highest reliability value, and the solar panels and hinges the lowest, before design and manufacturing. The developed method is of practical significance for the reliability apportionment of solar arrays where the design information has not been clearly identified, particularly in the early stage of design.

  6. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    Probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is difficult to get accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. The binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly and the crucial components of a circuit can be located precisely based on the importance values (IVs) of components. This method therefore contributes to the construction of reliable QCA circuits.

  7. Identifying reliable independent components via split-half comparisons

    PubMed Central

    Groppe, David M.; Makeig, Scott; Kutas, Marta

    2011-01-01

    Independent component analysis (ICA) is a family of unsupervised learning algorithms that have proven useful for the analysis of the electroencephalogram (EEG) and magnetoencephalogram (MEG). ICA decomposes an EEG/MEG data set into a basis of maximally temporally independent components (ICs) that are learned from the data. As with any statistic, a concern with using ICA is the degree to which the estimated ICs are reliable. An IC may not be reliable if ICA was trained on insufficient data, if ICA training was stopped prematurely or at a local minimum (for some algorithms), or if multiple global minima were present. Consequently, evidence of ICA reliability is critical for the credibility of ICA results. In this paper, we present a new algorithm for assessing the reliability of ICs based on applying ICA separately to split-halves of a data set. This algorithm improves upon existing methods in that it considers both IC scalp topographies and activations, uses a probabilistically interpretable threshold for accepting ICs as reliable, and requires applying ICA only three times per data set. As evidence of the method’s validity, we show that the method can perform comparably to more time intensive bootstrap resampling and depends in a reasonable manner on the amount of training data. Finally, using the method we illustrate the importance of checking the reliability of ICs by demonstrating that IC reliability is dramatically increased by removing the mean EEG at each channel for each epoch of data rather than the mean EEG in a prestimulus baseline. PMID:19162199
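
    The split-half idea can be illustrated in a few lines: decompose each half of the data separately, then accept components whose scalp topographies match across halves. The sketch below uses a plain correlation threshold rather than the paper's probabilistically interpretable criterion, and every name in it is hypothetical:

        import numpy as np

        def match_components(A1, A2, r_thresh=0.9):
            """A1, A2: mixing matrices (channels x components) from ICA run on
            each half of a data set. Returns index pairs whose topographies
            correlate above r_thresh, ignoring sign indeterminacy."""
            pairs = []
            for i in range(A1.shape[1]):
                r = [abs(np.corrcoef(A1[:, i], A2[:, j])[0, 1])
                     for j in range(A2.shape[1])]
                j_best = int(np.argmax(r))
                if r[j_best] >= r_thresh:
                    pairs.append((i, j_best, r[j_best]))
            return pairs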

  8. International classification of reliability for implanted cochlear implant receiver stimulators.

    PubMed

    Battmer, Rolf-Dieter; Backous, Douglas D; Balkany, Thomas J; Briggs, Robert J S; Gantz, Bruce J; van Hasselt, Andrew; Kim, Chong Sun; Kubo, Takeshi; Lenarz, Thomas; Pillsbury, Harold C; O'Donoghue, Gerard M

    2010-10-01

    To design an international standard to be used when reporting reliability of the implanted components of cochlear implant systems to appropriate governmental authorities, cochlear implant (CI) centers, and for journal editors in evaluating manuscripts involving cochlear implant reliability. The International Consensus Group for Cochlear Implant Reliability Reporting was assembled to unify ongoing efforts in the United States, Europe, Asia, and Australia to create a consistent and comprehensive classification system for the implanted components of CI systems across manufacturers. All members of the consensus group are from tertiary referral cochlear implant centers. None. A clinically relevant classification scheme adapted from principles of ISO standard 5841-2:2000, originally designed for reporting reliability of cardiac pacemakers, pulse generators, or leads. Standard definitions for device failure, survival time, clinical benefit, reduced clinical benefit, and specification were generated. Time intervals for reporting back to implant centers for devices tested to be "out of specification," categorization of explanted devices, the method of cumulative survival reporting, and the content of reliability reports to be issued by manufacturers were agreed upon by all members. The methodology for calculating cumulative survival was adapted from ISO standard 5841-2:2000. The International Consensus Group on Cochlear Implant Device Reliability Reporting recommends compliance with this new standard in reporting reliability of implanted CI components by all manufacturers of CIs and the adoption of this standard as a minimal reporting guideline for editors of journals publishing cochlear implant research results.
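
    As a simplified illustration of cumulative survival reporting, a life-table style estimate (not the verbatim ISO 5841-2:2000 procedure) multiplies per-interval survival proportions:

        def cumulative_survival(intervals):
            """intervals: list of (failures, at_risk) per follow-up interval,
            where at_risk counts devices entering the interval. Returns the
            cumulative survival proportion after each interval."""
            surv, out = 1.0, []
            for failures, at_risk in intervals:
                surv *= 1.0 - failures / at_risk
                out.append(surv)
            return out

        # Illustrative: 1 failure among 500 devices in year 1, 2 among 450 in year 2
        print(cumulative_survival([(1, 500), (2, 450)]))  # [0.998, ~0.9936]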

  9. Training less-experienced faculty improves reliability of skills assessment in cardiac surgery.

    PubMed

    Lou, Xiaoying; Lee, Richard; Feins, Richard H; Enter, Daniel; Hicks, George L; Verrier, Edward D; Fann, James I

    2014-12-01

    Previous work has demonstrated high inter-rater reliability in the objective assessment of simulated anastomoses among experienced educators. We evaluated the inter-rater reliability of less-experienced educators and the impact of focused training with a video-embedded coronary anastomosis assessment tool. Nine less-experienced cardiothoracic surgery faculty members from different institutions evaluated 2 videos of simulated coronary anastomoses (1 by a medical student and 1 by a resident) at the Thoracic Surgery Directors Association Boot Camp. They then underwent a 30-minute training session using an assessment tool with embedded videos to anchor rating scores for 10 components of coronary artery anastomosis. Afterward, they evaluated 2 videos of a different student and resident performing the task. Components were scored on a 1 to 5 Likert scale, yielding an average composite score. Inter-rater reliabilities of component and composite scores were assessed using intraclass correlation coefficients (ICCs) and overall pass/fail ratings with kappa. All components of the assessment tool exhibited improvement in reliability, with 4 (bite, needle holder use, needle angles, and hand mechanics) improving the most from poor (ICC range, 0.09-0.48) to strong (ICC range, 0.80-0.90) agreement. After training, inter-rater reliabilities for composite scores improved from moderate (ICC, 0.76) to strong (ICC, 0.90) agreement, and for overall pass/fail ratings, from poor (kappa = 0.20) to moderate (kappa = 0.78) agreement. Focused, video-based anchor training facilitates greater inter-rater reliability in the objective assessment of simulated coronary anastomoses. Among raters with less teaching experience, such training may be needed before objective evaluation of technical skills. Published by Elsevier Inc.
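
    For reference, one widely used intraclass correlation variant for inter-rater studies, ICC(2,1) (two-way random effects, absolute agreement, single rater), can be computed from ANOVA mean squares. A minimal sketch, not code from the study:

        import numpy as np

        def icc_2_1(ratings):
            """ICC(2,1) from a subjects x raters matrix of scores."""
            Y = np.asarray(ratings, dtype=float)
            n, k = Y.shape
            grand = Y.mean()
            ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
            ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
            resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
            ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)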

  10. Synchronization and fault-masking in redundant real-time systems

    NASA Technical Reports Server (NTRS)

    Krishna, C. M.; Shin, K. G.; Butler, R. W.

    1983-01-01

    A real-time computer may fail because of massive component failures or because it does not respond quickly enough to satisfy real-time requirements. An increase in redundancy - a conventional means of improving reliability - can improve the former but can, in some cases, degrade the latter considerably due to the overhead associated with redundancy management, namely the time delay resulting from synchronization and voting/interactive consistency techniques. The implications for reliability of synchronization and voting/interactive consistency algorithms in N-modular clusters are considered. All these studies were carried out in the context of real-time applications. As a demonstrative example, we have analyzed results from experiments conducted at the NASA Airlab on the Software Implemented Fault Tolerance (SIFT) computer. This analysis has indicated that in most real-time applications it is better to employ hardware synchronization instead of software synchronization and not to allow reconfiguration.
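
    The fault-masking half of this trade-off is simple to state in code. A minimal sketch of a majority voter for an N-modular cluster follows; the synchronization delay the paper analyzes is deliberately not modeled:

        from collections import Counter

        def majority_vote(outputs):
            """Majority voter for an N-modular redundant cluster. Returns the
            agreed value, or None if no strict majority exists."""
            value, count = Counter(outputs).most_common(1)[0]
            return value if count > len(outputs) // 2 else None

        # Triple modular redundancy: one faulty replica is masked
        print(majority_vote([42, 42, 7]))  # -> 42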

  11. The definition and evaluation of the skills required to obtain a patient's history of illness: the use of videotape recordings

    PubMed Central

    Anderson, J.; Dowling, M. A. C.; Day, J. L.; Pettingale, K. W.

    1970-01-01

    Videotape recording apparatus was used to make records of case histories obtained from patients by students and doctors. These records were studied in order to identify the skills required to obtain a patient's history of illness. Each skill was defined. A questionnaire was developed in order to assess these skills and three independent observers watched the records of eighteen students and completed a questionnaire for each. The results of this were analysed for reliability and reproducibility between examiners. Moderate reliability and reproducibility were demonstrated. The questionnaire appeared to be a valid method of assessment and was capable of providing significant discrimination between students for each skill. A components analysis suggested that the marks for each skill depend on an overall impression obtained by each examiner and this overall impression is influenced by different skills for each examiner. PMID:5488220

  12. Use of Probabilistic Engineering Methods in the Detailed Design and Development Phases of the NASA Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal; Weldon, Danny

    2008-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program called Constellation to send crew and cargo to the International Space Station, to the moon, and beyond. As part of the Constellation program, a new launch vehicle, Ares I, is being developed by NASA Marshall Space Flight Center. Designing a launch vehicle with high reliability and increased safety requires a significant effort in understanding design variability and design uncertainty at the various levels of the design (system, element, subsystem, component, etc.) and throughout the various design phases (conceptual, preliminary design, etc.). In a previous paper [1] we discussed a probabilistic functional failure analysis approach intended mainly to support system requirements definition, system design, and element design during the early design phases. This paper provides an overview of the application of probabilistic engineering methods to support the detailed subsystem/component design and development as part of the "Design for Reliability and Safety" approach for the new Ares I launch vehicle. Specifically, the paper discusses probabilistic engineering design analysis cases that had a major impact on the design and manufacturing of the Space Shuttle hardware. The cases represent important lessons learned from the Space Shuttle Program and clearly demonstrate the significance of probabilistic engineering analysis in better understanding design deficiencies and identifying potential design improvements for Ares I. The paper also discusses the probabilistic functional failure analysis approach applied during the early design phases of Ares I and the forward plans for probabilistic design analysis in the detailed design and development phases.

  13. 2nd Generation Reusable Launch Vehicle (2G RLV). Revised

    NASA Technical Reports Server (NTRS)

    Matlock, Steve; Sides, Steve; Kmiec, Tom; Arbogast, Tim; Mayers, Tom; Doehnert, Bill

    2001-01-01

    This is a revised final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of 3 cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendation for future studies.

  14. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
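
    A common way to encode applicability uncertainty of this kind is a lognormal prior whose median comes from the generic database and whose spread is set by an error factor. A minimal sketch under that assumption (illustrative numbers, not values from the presentation):

        import math

        def lognormal_prior(median_rate, error_factor):
            """Lognormal prior for a failure rate: median from a generic source,
            error factor EF = 95th percentile / median encoding applicability
            uncertainty. Returns (mu, sigma) of ln(rate)."""
            mu = math.log(median_rate)
            sigma = math.log(error_factor) / 1.645  # z-score of the 95th percentile
            return mu, sigma

        # Example: generic median 1e-5 per hour, error factor 10
        print(lognormal_prior(1e-5, 10))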

  15. Teamwork as an Essential Component of High-Reliability Organizations

    PubMed Central

    Baker, David P; Day, Rachel; Salas, Eduardo

    2006-01-01

    Organizations are increasingly becoming dynamic and unstable. This evolution has given rise to greater reliance on teams and increased complexity in terms of team composition, skills required, and degree of risk involved. High-reliability organizations (HROs) are those that exist in such hazardous environments where the consequences of errors are high, but the occurrence of error is extremely low. In this article, we argue that teamwork is an essential component of achieving high reliability particularly in health care organizations. We describe the fundamental characteristics of teams, review strategies in team training, demonstrate the criticality of teamwork in HROs and finally, identify specific challenges the health care community must address to improve teamwork and enhance reliability. PMID:16898980

  16. Reliability systems for implantable cardiac defibrillator batteries

    NASA Astrophysics Data System (ADS)

    Takeuchi, Esther S.

    The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.

  17. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparable components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.

  18. Further psychometric evaluation and revision of the Mayo-Portland Adaptability Inventory in a national sample.

    PubMed

    Malec, James F; Kragness, Miriam; Evans, Randall W; Finlay, Karen L; Kent, Ann; Lezak, Muriel D

    2003-01-01

    To evaluate the internal consistency of the Mayo-Portland Adaptability Inventory (MPAI), further refine the instrument, and provide reference data based on a large, geographically diverse sample of persons with acquired brain injury (ABI). 386 persons, most with moderate to severe ABI. Outpatient, community-based, and residential rehabilitation facilities for persons with ABI located in the United States: West, Midwest, and Southeast. Rasch, item cluster, principal components, and traditional psychometric analyses for internal consistency of MPAI data and subscales. With rescoring of rating scales for 4 items, a 29-item version of the MPAI showed satisfactory internal consistency by Rasch (Person Reliability = .88; Item Reliability = .99) and traditional psychometric indicators (Cronbach's alpha = .89). Three rationally derived subscales for Ability, Activity, and Participation demonstrated psychometric properties that were equivalent to subscales derived empirically through item cluster and factor analyses. For the 3 subscales, Person Reliability ranged from .78 to .79; Item Reliability, from .98 to .99; and Cronbach's alpha, from .76 to .83. Subscales correlated moderately (Pearson r = .49-.65) with each other and strongly with the overall scale (Pearson r = .82-.86). Outcome after ABI is represented by the unitary dimension described by the MPAI. MPAI subscales further define regions of this dimension that may be useful for evaluation of clinical cases and program evaluation.
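
    For reference, Cronbach's alpha as reported above is computed directly from item and total-score variances; a minimal sketch:

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of scores."""
            X = np.asarray(items, dtype=float)
            k = X.shape[1]
            item_var_sum = X.var(axis=0, ddof=1).sum()
            total_var = X.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var_sum / total_var)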

  19. Reliability in the DSM-III field trials: interview v case summary.

    PubMed

    Hyler, S E; Williams, J B; Spitzer, R L

    1982-11-01

    A study compared the reliability of psychiatric diagnoses obtained from the live interviews and from case summaries, on the same patients, by the same clinicians, using the same DSM-III diagnostic criteria. The results showed that the reliability of the major diagnostic classes of DSM-III was higher when diagnoses were made from live interviews than when they were made from case summaries. We conclude that diagnoses based on information contained in traditionally prepared case summaries may lead to an underestimation of the reliability of diagnoses made based on information collected during a "live" interview.

  20. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost scales inversely with failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components can repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
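
    The arithmetic of this argument, including why common-cause failures defeat high-order identical redundancy, can be sketched with a simple beta-factor model (the beta value below is illustrative, not from the paper):

        def system_failure_prob(p_unit, n_units, beta=0.0):
            """Probability that all n redundant units fail during the mission.
            p_unit: single-unit failure probability; beta: fraction of unit
            failures that are common-cause and disable every unit at once."""
            independent = ((1.0 - beta) * p_unit) ** n_units
            common_cause = beta * p_unit
            return independent + common_cause

        print(system_failure_prob(0.1, 3))             # ideal triple redundancy: 1e-3
        print(system_failure_prob(0.1, 3, beta=0.05))  # ~5.9e-3: common cause dominates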

  1. Reading comprehension and its underlying components in second-language learners: A meta-analysis of studies comparing first- and second-language learners.

    PubMed

    Melby-Lervåg, Monica; Lervåg, Arne

    2014-03-01

    We report a systematic meta-analytic review of studies comparing reading comprehension and its underlying components (language comprehension, decoding, and phonological awareness) in first- and second-language learners. The review included 82 studies, and 576 effect sizes were calculated for reading comprehension and underlying components. Key findings were that, compared to first-language learners, second-language learners display a medium-sized deficit in reading comprehension (pooled effect size d = -0.62), a large deficit in language comprehension (pooled effect size d = -1.12), but only small differences in phonological awareness (pooled effect size d = -0.08) and decoding (pooled effect size d = -0.12). A moderator analysis showed that characteristics related to the type of reading comprehension test reliably explained the variation in the differences in reading comprehension between first- and second-language learners. For language comprehension, studies of samples from low socioeconomic backgrounds and samples where only the first language was used at home generated the largest group differences in favor of first-language learners. Test characteristics and study origin reliably contributed to the variations between the studies of language comprehension. For decoding, Canadian studies showed group differences in favor of second-language learners, whereas the opposite was the case for U.S. studies. Regarding implications, unless specific decoding problems are detected, interventions that aim to ameliorate reading comprehension problems among second-language learners should focus on language comprehension skills.
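
    For context, pooled effect sizes of this kind are inverse-variance weighted averages of per-study standardized mean differences. A minimal fixed-effect sketch with made-up inputs (the review itself pools 576 effect sizes, typically with more elaborate models):

        def pooled_effect(effects):
            """Inverse-variance fixed-effect pooling. effects: list of
            (d, variance) pairs, one per study. Returns (pooled d, SE)."""
            weights = [1.0 / v for _, v in effects]
            d_pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
            se = (1.0 / sum(weights)) ** 0.5
            return d_pooled, se

        # Illustrative values only, not the review's data
        print(pooled_effect([(-0.70, 0.02), (-0.55, 0.04), (-0.60, 0.01)]))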

  2. System principles, mathematical models and methods to ensure high reliability of safety systems

    NASA Astrophysics Data System (ADS)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from the systems of monitoring, telemetry, control, etc. They are required to be highly reliable with a view to correctly performing data aggregation, processing, and analysis for subsequent decision making support. In the design and construction phases of the manufacturing of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as types of components and various constraints on resources, should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of task performance and eliminates common cause failure. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems of highly reliable safety and security system design. The mathematical models are formalized in a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving problems of optimal redundancy on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.

  3. Volcano Monitoring: A Case Study in Pervasive Computing

    NASA Astrophysics Data System (ADS)

    Peterson, Nina; Anusuya-Rangappa, Lohith; Shirazi, Behrooz A.; Song, Wenzhan; Huang, Renjie; Tran, Daniel; Chien, Steve; Lahusen, Rick

    Recent advances in wireless sensor network technology have provided robust and reliable solutions for sophisticated pervasive computing applications such as inhospitable terrain environmental monitoring. We present a case study for developing a real-time pervasive computing system, called OASIS for optimized autonomous space in situ sensor-web, which combines ground assets (a sensor network) and space assets (NASA's Earth Observing-1 (EO-1) satellite) to monitor volcanic activities at Mount St. Helens. OASIS's primary goals are: to integrate complementary space and in situ ground sensors into an interactive and autonomous sensorweb, to optimize power and communication resource management of the sensorweb, and to provide mechanisms for seamless and scalable fusion of future space and in situ components. The OASIS in situ ground sensor network development addresses issues related to power management, bandwidth management, quality of service management, topology and routing management, and test-bed design. The space segment development consists of EO-1 architectural enhancements, feedback of EO-1 data into the in situ component, command and control integration, data ingestion and dissemination, and field demonstrations.

  4. Reliability and Probabilistic Risk Assessment - How They Play Together

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal; Stutts, Richard; Huang, Zhaofeng

    2015-01-01

    Since the Space Shuttle Challenger accident in 1986, NASA has extensively used probabilistic analysis methods to assess, understand, and communicate the risk of space launch vehicles. Probabilistic Risk Assessment (PRA), used in the nuclear industry, is one of the probabilistic analysis methods NASA utilizes to assess Loss of Mission (LOM) and Loss of Crew (LOC) risk for launch vehicles. PRA is a system scenario based risk assessment that uses a combination of fault trees, event trees, event sequence diagrams, and probability distributions to analyze the risk of a system, a process, or an activity. It is a process designed to answer three basic questions: 1) what can go wrong that would lead to loss or degraded performance (i.e., scenarios involving undesired consequences of interest), 2) how likely is it (probabilities), and 3) what is the severity of the degradation (consequences). Since the Challenger accident, PRA has been used in supporting decisions regarding safety upgrades for launch vehicles. Another area that was given a lot of emphasis at NASA after the Challenger accident is reliability engineering. Reliability engineering has been a critical design function at NASA since the early Apollo days. However, after the Challenger accident, quantitative reliability analysis and reliability predictions were given more scrutiny because of their importance in understanding failure mechanism and quantifying the probability of failure, which are key elements in resolving technical issues, performing design trades, and implementing design improvements. Although PRA and reliability are both probabilistic in nature and, in some cases, use the same tools, they are two different activities. Specifically, reliability engineering is a broad design discipline that deals with loss of function and helps understand failure mechanism and improve component and system design. PRA is a system scenario based risk assessment process intended to assess the risk scenarios that could lead to a major/top undesirable system event, and to identify those scenarios that are high-risk drivers. PRA output is critical to support risk informed decisions concerning system design. This paper describes the PRA process and the reliability engineering discipline in detail. It discusses their differences and similarities and how they work together as complementary analyses to support the design and risk assessment processes. Lessons learned, applications, and case studies in both areas are also discussed in the paper to demonstrate and explain these differences and similarities.
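
    On the fault-tree side of PRA, the probability of a top undesired event is often approximated from its minimal cut sets by the rare-event approximation (summing cut-set probabilities). A minimal sketch with hypothetical basic events, not an example from the paper:

        def top_event_prob(cut_sets, p):
            """Rare-event approximation for a fault-tree top event.
            cut_sets: minimal cut sets as tuples of basic-event names;
            p: dict mapping basic events to probabilities."""
            total = 0.0
            for cs in cut_sets:
                prob = 1.0
                for event in cs:
                    prob *= p[event]
                total += prob
            return total

        p = {"valve_fails": 1e-3, "pump_fails": 5e-4, "power_loss": 1e-4}
        cut_sets = [("valve_fails", "pump_fails"), ("power_loss",)]
        print(top_event_prob(cut_sets, p))  # ~1.005e-4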

  5. Status of the Flooding Fragility Testing Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.; Savage, B.; Bhandari, B.

    2016-06-01

    This report provides an update on research addressing nuclear power plant component reliability under flooding conditions. The research includes use of the Component Flooding Evaluation Laboratory (CFEL) where individual components and component subassemblies will be tested to failure under various flooding conditions. The resulting component reliability data can then be incorporated with risk simulation strategies to provide a more thorough representation of overall plant risk. The CFEL development strategy consists of four interleaved phases. Phase 1 addresses design and application of CFEL with water rise and water spray capabilities allowing testing of passive and active components including fully electrified components. Phase 2 addresses research into wave generation techniques followed by the design and addition of the wave generation capability to CFEL. Phase 3 addresses methodology development activities including small scale component testing, development of full scale component testing protocol, and simulation techniques including Smoothed Particle Hydrodynamic (SPH) based computer codes. Phase 4 involves full scale component testing including work on full scale component testing in a surrogate CFEL testing apparatus.

  6. System reliability analysis through corona testing

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Mueller, L. A.; Koutnik, E. A.

    1975-01-01

    In the Reliability and Quality Engineering Test Laboratory at the NASA Lewis Research Center a nondestructive, corona-vacuum test facility for testing power system components was developed using commercially available hardware. The test facility was developed to simulate operating temperature and vacuum while monitoring corona discharges with residual gases. This facility is being used to test various high voltage power system components.

  7. Missile Systems Maintenance, AFSC 411XOB/C.

    DTIC Science & Technology

    1988-04-01

    A statistical measurement of agreement with the senior technician's ratings, known as the interrater reliability (as assessed through components of variance)... Index terms: FABRICATION, TRANSISTORS, *INPUT/OUTPUT (PERIPHERAL) DEVICES, SOLID-STATE SPECIAL PURPOSE DEVICES, COMPUTER MICROPROCESSORS AND PROGRAMS, POWER SUPPLIES

  8. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    PubMed Central

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356

  9. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    PubMed

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  10. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
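
    The order-restricted estimation idea can be illustrated with the pool-adjacent-violators algorithm, which projects raw interval hazard estimates onto nondecreasing sequences. This is a minimal sketch of the monotonization step only (unit weights assumed), not the paper's estimator:

        def pava_increasing(values):
            """Least-squares projection of a sequence onto nondecreasing
            sequences, e.g. to impose an increasing-failure-rate constraint
            on raw interval hazard estimates. Unit weights assumed."""
            out = []  # blocks of [pooled value, block size]
            for v in values:
                out.append([v, 1])
                while len(out) > 1 and out[-2][0] > out[-1][0]:
                    v2, w2 = out.pop()
                    v1, w1 = out.pop()
                    out.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
            result = []
            for v, w in out:
                result.extend([v] * w)
            return result

        print(pava_increasing([0.02, 0.05, 0.03, 0.08]))  # [0.02, 0.04, 0.04, 0.08]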

  11. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
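
    The brute-force baseline such a program improves upon is easy to state: enumerate redundancy levels per subsystem and keep the most reliable configuration within budget. A minimal sketch with hypothetical inputs, feasible only for small systems:

        from itertools import product

        def best_allocation(subsystems, budget, max_per_subsystem=4):
            """Exhaustive redundancy allocation for a series system.
            subsystems: list of (unit_reliability, unit_cost) pairs.
            Returns (best system reliability, allocation) within budget."""
            best = (0.0, None)
            counts = range(1, max_per_subsystem + 1)
            for alloc in product(counts, repeat=len(subsystems)):
                cost = sum(n * c for n, (_, c) in zip(alloc, subsystems))
                if cost > budget:
                    continue
                rel = 1.0
                for n, (r, _) in zip(alloc, subsystems):
                    rel *= 1.0 - (1.0 - r) ** n  # n parallel units per subsystem
                if rel > best[0]:
                    best = (rel, alloc)
            return best

        print(best_allocation([(0.9, 2.0), (0.8, 3.0)], budget=12.0))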

  12. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the material behavior of the fasteners and joined parts, the structural geometry of the joined components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

  13. Reliability and Validity of the Sensory Component of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI): A Systematic Review.

    PubMed

    Hales, M; Biros, E; Reznik, J E

    2015-01-01

    Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials.

  14. Reliability and Validity of the Sensory Component of the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI): A Systematic Review

    PubMed Central

    Hales, M.; Biros, E.

    2015-01-01

    Background: Since 1982, the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) has been used to classify sensation of spinal cord injury (SCI) through pinprick and light touch scores. The absence of proprioception, pain, and temperature within this scale creates questions about its validity and accuracy. Objectives: To assess whether the sensory component of the ISNCSCI represents a reliable and valid measure of classification of SCI. Methods: A systematic review of studies examining the reliability and validity of the sensory component of the ISNCSCI published between 1982 and February 2013 was conducted. The electronic databases MEDLINE via Ovid, CINAHL, PEDro, and Scopus were searched for relevant articles. A secondary search of reference lists was also completed. Chosen articles were assessed according to the Oxford Centre for Evidence-Based Medicine hierarchy of evidence and critically appraised using the McMasters Critical Review Form. A statistical analysis was conducted to investigate the variability of the results given by reliability studies. Results: Twelve studies were identified: 9 reviewed reliability and 3 reviewed validity. All studies demonstrated low levels of evidence and moderate critical appraisal scores. The majority of the articles (~67%; 6/9) assessing the reliability suggested that training was positively associated with better posttest results. The results of the 3 studies that assessed the validity of the ISNCSCI scale were confounding. Conclusions: Due to the low to moderate quality of the current literature, the sensory component of the ISNCSCI requires further revision and investigation if it is to be a useful tool in clinical trials. PMID:26363591

  15. Reliability and validity of a Swedish language version of the Resilience Scale.

    PubMed

    Nygren, Björn; Randström, Kerstin Björkman; Lejonklou, Anna K; Lundman, Beril

    2004-01-01

    The purpose of this study was to test the reliability and validity of the Swedish language version of the Resilience Scale (RS). Participants were 142 adults between 19 and 85 years of age. Internal consistency reliability, stability over time, and construct validity were evaluated using Cronbach's alpha, principal components analysis with varimax rotation and correlations with scores on the Sense of Coherence Scale (SOC) and the Rosenberg Self-Esteem Scale (RSE). The mean score on the RS was 142 (SD = 15). The possible scores on the RS range from 25 to 175, and scores higher than 146 are considered high. The test-retest correlation was .78. Correlations with the SOC and the RSE were .41 (p < 0.01) and .37 (p < 0.01), respectively. Personal Assurance and Acceptance of Self and Life emerged as components from the principal components analysis. These findings provide evidence for the reliability and validity of the Swedish language version of the RS.

  16. Improving the Reliability of Technological Subsystems Equipment for Steam Turbine Unit in Operation

    NASA Astrophysics Data System (ADS)

    Brodov, Yu. M.; Murmansky, B. E.; Aronson, R. T.

    2017-11-01

    The authors present their conception of an integrated approach to improving the reliability of steam turbine unit (STU) equipment, along with examples of its implementation for the various STU technological subsystems. Based on statistical analysis of damage to individual turbine parts and components, on the development and application of modern methods and technologies of repair, and on operational monitoring techniques, the critical components and elements of the equipment are identified and priorities are proposed for improving the reliability of STU equipment in operation. Results are presented from the analysis of malfunctions of equipment of the various STU technological subsystems operating as part of power units and at cross-linked thermal power plants and resulting in turbine unit shutdown (failure). Proposals are formulated and justified for adjusting the maintenance and repair of turbine components and parts, of condenser unit equipment, and of the regeneration subsystem and oil supply system, permitting increased operational reliability, reduced STU maintenance and repair costs, and optimized timing and scope of repairs.

  17. Enhanced Component Performance Study: Air-Operated Valves 1998-2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents a performance evaluation of air-operated valves (AOVs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The AOV failure modes considered are failure-to-open/close, failure to operate or control, and spurious operation. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. One statistically significant trend was observed in the AOV data: The frequency of demands per reactor year for valves recording the fail-to-open or fail-to-close failure modes, for high-demand valves (those with greater than twenty demands per year), was found to be decreasing. The decrease was about three percent over the ten year period trended.

  18. PV inverter performance and reliability: What is the role of the bus capacitor?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flicker, Jack; Kaplar, Robert; Marinella, Matthew

    In order to elucidate how the degradation of individual components affects the state of the photovoltaic inverter as a whole, we have carried out SPICE simulations to investigate the voltage and current ripple on the DC bus. The bus capacitor is generally considered to be among the least reliable components of the system, so we have simulated how the degradation of bus capacitors affects the AC ripple at the terminals of the PV module. Degradation-induced ripple leads to an increased degradation rate in a positive feedback cycle. Additionally, laboratory experiments are being carried out to ascertain the reliability of metallized thin film capacitors. By understanding the degradation mechanisms and their effects on the inverter as a system, steps can be made to more effectively replace marginal components with more reliable ones, increasing the lifetime and efficiency of the inverter and decreasing its cost per watt towards the US Department of Energy goals.
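
    A back-of-envelope check of the feedback mechanism described above: the ripple voltage seen at the bus grows as the capacitance degrades, because the capacitor's impedance at the ripple frequency is inversely proportional to C. A minimal sketch with illustrative values (ESR and ESL neglected):

        import math

        def bus_ripple_volts(i_ripple_amps, capacitance_farads, freq_hz):
            """Ripple voltage magnitude from the capacitor's impedance at the
            ripple frequency, V = I / (2*pi*f*C)."""
            return i_ripple_amps / (2.0 * math.pi * freq_hz * capacitance_farads)

        # 2 A of 120 Hz ripple current on a 2 mF DC bus capacitor
        print(bus_ripple_volts(2.0, 2.0e-3, 120.0))  # ~1.3 V
        print(bus_ripple_volts(2.0, 1.2e-3, 120.0))  # ~2.2 V after 40% capacitance loss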

  19. A Paleomagnetic and Paleointensity Study on Late Pliocene Volcanic Rocks From Southern Georgia (Caucasus)

    NASA Astrophysics Data System (ADS)

    Calvo-Rathert, M.; Bogalo, M.; Gogichaishvili, A.; Vegas-Tubia, N.; Sologashvili, J.; Villalain, J.

    2009-05-01

    A paleomagnetic, rock-magnetic and paleointensity study was carried out on 21 basaltic lava flows belonging to four different sequences of late Pliocene age from southern Georgia (Caucasus): Diliska (5 flows), Kvemo Orozmani (5 flows), Dmanisi (11 flows) and Zemo Karabulaki (3 flows). Paleomagnetic analysis generally showed the presence of a single component (mainly in the Dmanisi sequence) but also two more or less superimposed components in several other cases. All sites except one clearly displayed a normal-polarity characteristic component. Susceptibility-versus-temperature curves measured in argon atmosphere on whole-rock powdered samples yielded low-Ti titanomagnetite as the main carrier of remanence, although a lower-Tc component (300-400 °C) was also observed in several cases. Both reversible and non-reversible k-T curves were measured. A pilot paleointensity study was performed with the Coe method on two samples from each of those sites considered suitable after interpretation of rock-magnetic and paleomagnetic results. The pilot study showed that reliable paleointensity results were mainly obtained from sites of the Dmanisi sequence. This thick sequence of basaltic lava flows records the upper end of the normal-polarity Olduvai subchron, a fact confirmed by 40Ar/39Ar dating of the uppermost lava flow and overlying volcanogenic ashes, which yields ages of 1.8 to 1.85 Ma. A new paleointensity experiment was carried out only on samples belonging to the Dmanisi sequence. Although this work is still in progress, first results show that paleointensities are low, their values lying between 10 and 20 µT in many cases and not exceeding 30 µT. For comparison, the present-day field is 47 µT.

  20. Detection of recycled marine sediment components in crater lake fluids using 129I

    NASA Astrophysics Data System (ADS)

    Fehn, U.; Snyder, G. T.; Varekamp, J. C.

    2002-06-01

    Crater lakes provide time-integrated samples of volcanic fluids, which may carry information on source components. We tested under what circumstances 129I concentrations can be used for the detection of a signal derived from the recycling of marine sediments in subduction zone magmatism. The 129I system has been successfully used to determine origin and pathways in other volcanic fluids, but the application of this system to crater lakes is complicated by the presence of anthropogenic 129I, related to recent nuclear activities. Results are reported from four crater lakes, associated with subducting crust varying in age between 23 and 98 Ma. The 129I/I ratios determined for Copahue, Argentina, (129I/I=700×10-15) and White Island, New Zealand, (129I/I=284×10-15) demonstrate the presence of iodine in the crater lakes that was derived from recycled marine sediments. A comparison to the ages of the subducted sediments in these two cases indicates that the ratios likely reflect iodine remobilization from the entire sediment column that was undergoing subduction. While the 129I signals in Poás and Rincón de la Vieja, Costa Rica also demonstrate the presence of recycled iodine, the relatively high percentage of meteoric water in these lakes prevents a reliable determination of source ages. The observed high concentrations of iodine and 129I/I ratios substantially below current surface values strongly argue for the presence of recycled marine components in the arc magmas of all four cases. Components from subducted marine sediments can be quantified and related to specific parts of the sediment column in cases where the iodine concentration in the lake waters exceeds 5 μM.

  1. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
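
    The two building blocks named in this abstract have compact standard forms; the symbols below are the conventional ones (m and σ₀ the Weibull parameters; A and N the crack-growth constants), not notation specific to the program documentation.

    ```latex
    % Two-parameter Weibull model for fast-fracture failure probability, and
    % the power-law form of subcritical crack growth used by CARES/LIFE.
    P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],
    \qquad
    \frac{da}{dt} = A \left(\frac{K_I}{K_{Ic}}\right)^{N}
    ```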

  2. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation require that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two-parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
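
    The threshold strength mentioned at the end enters as the location parameter of the three-parameter Weibull form (standard notation, with σ_u the threshold):

    ```latex
    % Three-parameter Weibull failure probability with threshold strength
    % \sigma_u; setting \sigma_u = 0 recovers the two-parameter form often
    % used for simplicity, as the abstract notes.
    P_f(\sigma) = 1 - \exp\!\left[-\left(\frac{\sigma - \sigma_u}{\sigma_0}\right)^{m}\right],
    \qquad \sigma \ge \sigma_u
    ```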

  3. NASA-DoD Lead-Free Electronics Project

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2009-01-01

    In response to concerns about risks from lead-free induced faults to high reliability products, NASA has initiated a multi-year project to provide manufacturers and users with data to clarify the risks of lead-free materials in their products. The project will also be of interest to component manufacturers supplying to high reliability markets. The project was launched in November 2006. The primary technical objective of the project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of lead-free alloys - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of mixed alloys, lead component finish/lead-free solder and lead-free component finish/SnPb solder.

  4. NASA-DoD Lead-Free Electronics Project

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.

    2009-01-01

    In response to concerns about risks from lead-free induced faults to high reliability products, NASA has initiated a multi-year project to provide manufacturers and users with data to clarify the risks of lead-free materials in their products. The project will also be of interest to component manufacturers supplying to high reliability markets. The project was launched in November 2006. The primary technical objective of the project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of lead-free alloys - Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of mixed alloys, lead component finish/lead-free solder and lead-free component finish/SnPb solder.

  5. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    PubMed

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

    To make a shield for neutrons and gamma rays compact and lightweight, a method that optimizes the shield structure and material components together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were performed to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
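
    The optimization loop can be pictured as below. This is only a structural sketch: the actual study evaluates each candidate shield with MCNP transport runs, whereas here a placeholder exponential-attenuation fitness stands in, and the material list and coefficients are invented.

    ```python
    # Toy genetic-algorithm loop for a layered-shield design. A crude
    # attenuation-minus-weight score replaces the MCNP evaluation; all
    # material names and coefficients are hypothetical.
    import random

    LAYERS = 4
    MATERIALS = ["poly", "B4C", "Pb", "W", "LiH", "steel"]       # assumed set
    MU = {"poly": 0.10, "B4C": 0.12, "Pb": 0.30, "W": 0.35, "LiH": 0.08, "steel": 0.20}
    RHO = {"poly": 0.95, "B4C": 2.5, "Pb": 11.3, "W": 19.3, "LiH": 0.78, "steel": 7.9}

    def fitness(genome, thickness=5.0):
        # Placeholder score: reward attenuation, penalize areal density (weight).
        atten = sum(MU[m] * thickness for m in genome)
        mass = sum(RHO[m] * thickness for m in genome)
        return atten - 0.01 * mass

    def evolve(pop_size=30, generations=50):
        pop = [[random.choice(MATERIALS) for _ in range(LAYERS)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, LAYERS)        # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                # mutation
                    child[random.randrange(LAYERS)] = random.choice(MATERIALS)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print(evolve())   # best layer sequence under the toy score
    ```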

  6. Can Reliability of Multiple Component Measuring Instruments Depend on Response Option Presentation Mode?

    ERIC Educational Resources Information Center

    Menold, Natalja; Raykov, Tenko

    2016-01-01

    This article examines the possible dependency of composite reliability on presentation format of the elements of a multi-item measuring instrument. Using empirical data and a recent method for interval estimation of group differences in reliability, we demonstrate that the reliability of an instrument need not be the same when polarity of the…

  7. A New Tool for Nutrition App Quality Evaluation (AQEL): Development, Validation, and Reliability Testing

    PubMed Central

    Huang, Wenhao; Chapman-Novakofski, Karen M

    2017-01-01

    Background The extensive availability and increasing use of mobile apps for nutrition-based health interventions make evaluation of the quality of these apps crucial for integration of apps into nutritional counseling. Objective The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps’ educational quality and technical functionality. Methods Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing AQEL into 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience as selected by the evaluator were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Results Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in principal component analysis. Internal consistency and split-half reliability of the following constructs derived from principal components analysis were good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. Split-half reliability for the app purpose construct was .65. Test-retest reliability showed no significant change over time (P>.05) for all but skill development (P=.001). Construct reliability was good for items assessing age appropriateness of apps for children, teens, and a general audience. In addition, construct reliability was acceptable for assessing app appropriateness for various target audiences (Cronbach alpha >.70). For the 5 main factors, ICC (1,k) was >.80, with a P value of <.05. When 15 nutrition professionals evaluated one app, ICC (2,15) was .98, with a P value of <.001 for all 7 constructs when the modifiable items were specified for adults seeking weight loss support. Conclusions Our preliminary effort shows that AQEL is a valid, reliable instrument for evaluating nutrition apps’ qualities for clinical interventions by nutrition clinicians, educators, and researchers. Further efforts in validating AQEL in various contexts are needed. PMID:29079554
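
    Two of the statistics reported here, Cronbach alpha and Spearman-Brown-corrected split-half reliability, are easy to compute directly. The sketch below uses a hypothetical rater-by-item score matrix, not the AQEL data.

    ```python
    # Cronbach's alpha and split-half reliability (Spearman-Brown corrected)
    # for one construct, computed on an invented raters-by-items matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(150, 5)).astype(float)   # illustrative data

    def cronbach_alpha(X):
        k = X.shape[1]
        item_var = X.var(axis=0, ddof=1).sum()      # sum of item variances
        total_var = X.sum(axis=1).var(ddof=1)       # variance of total scores
        return k / (k - 1) * (1 - item_var / total_var)

    def split_half(X):
        a, b = X[:, ::2].sum(axis=1), X[:, 1::2].sum(axis=1)   # odd/even halves
        r = np.corrcoef(a, b)[0, 1]
        return 2 * r / (1 + r)                       # Spearman-Brown correction

    print(f"alpha = {cronbach_alpha(scores):.2f}, split-half = {split_half(scores):.2f}")
    ```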

  8. Interactive decision support in hepatic surgery

    PubMed Central

    Dugas, Martin; Schauer, Rolf; Volk, Andreas; Rau, Horst

    2002-01-01

    Background Hepatic surgery is characterized by complicated operations with a significant peri- and postoperative risk for the patient. We developed a web-based, high-granular research database for comprehensive documentation of all relevant variables to evaluate new surgical techniques. Methods To integrate this research system into the clinical setting, we designed an interactive decision support component. The objective is to provide relevant information for the surgeon and the patient to assess preoperatively the risk of a specific surgical procedure. Based on five established predictors of patient outcomes, the risk assessment tool searches for similar cases in the database and aggregates the information to estimate the risk for an individual patient. Results The physician can verify the analysis and exclude manually non-matching cases according to his expertise. The analysis is visualized by means of a Kaplan-Meier plot. To evaluate the decision support component we analyzed data on 165 patients diagnosed with hepatocellular carcinoma (period 1996–2000). The similarity search provides a two-peak distribution indicating there are groups of similar patients and singular cases which are quite different to the average. The results of the risk estimation are consistent with the observed survival data, but must be interpreted with caution because of the limited number of matching reference cases. Conclusion Critical issues for the decision support system are clinical integration, a transparent and reliable knowledge base and user feedback. PMID:12003639

  9. Transient Reliability Analysis Capability Developed for CARES/Life

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2001-01-01

    The CARES/Life software developed at the NASA Glenn Research Center provides a general-purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. This award-winning software has been widely used by U.S. industry to establish the reliability and life of brittle material (e.g., ceramic, intermetallic, and graphite) structures in a wide variety of 21st century applications. Present capabilities of the NASA CARES/Life code include probabilistic life prediction of ceramic components subjected to fast fracture, slow crack growth (stress corrosion), and cyclic fatigue failure modes. Currently, this code can compute the time-dependent reliability of ceramic structures subjected to simple time-dependent loading. For example, in slow crack growth failure conditions CARES/Life can handle sustained and linearly increasing time-dependent loads, whereas in cyclic fatigue applications various types of repetitive constant-amplitude loads can be accounted for. However, in real applications, applied loads are rarely that simple but vary with time in more complex ways, such as during engine startup, shutdown, and dynamic and vibrational loading. In addition, when a given component is subjected to transient environmental and/or thermal conditions, the material properties also vary with time. A methodology has now been developed to allow the CARES/Life computer code to perform reliability analysis of ceramic components undergoing transient thermal and mechanical loading. This means that CARES/Life will be able to analyze finite element models of ceramic components that simulate dynamic engine operating conditions. The methodology developed is generalized to account for material property variation (on strength distribution and fatigue) as a function of temperature. This allows CARES/Life to analyze components undergoing rapid temperature change, in other words, thermal shock. In addition, the capability has been developed to perform reliability analysis for components that undergo proof testing involving transient loads. This methodology was developed for environmentally assisted crack growth (crack growth as a function of time and loading), but it will be extended to account for cyclic fatigue (crack growth as a function of load cycles) as well.

  10. Strategies and Approaches to TPS Design

    NASA Technical Reports Server (NTRS)

    Kolodziej, Paul

    2005-01-01

    Thermal protection systems (TPS) insulate planetary probes and Earth re-entry vehicles from the aerothermal heating experienced during hypersonic deceleration to the planet's surface. The systems are typically designed with some additional capability to compensate for both variations in the TPS material and uncertainties in the heating environment. This additional capability, or robustness, also provides a surge capability for operating under abnormally severe conditions for a short period of time, and for unexpected events, such as meteoroid impact damage, that would detract from the nominal performance. Strategies and approaches to developing robust designs must also minimize mass, because an extra kilogram of TPS displaces one kilogram of payload. Because aircraft structures must be optimized for minimum mass, reliability-based design approaches for mechanical components exist that minimize mass. Adapting these existing approaches to TPS component design takes advantage of the extensive work, knowledge, and experience from nearly fifty years of reliability-based design of mechanical components. A Non-Dimensional Load Interference (NDLI) method for calculating the thermal reliability of TPS components is presented in this lecture and applied to several examples. A sensitivity analysis from an existing numerical simulation of a carbon phenolic TPS provides insight into the effects of the various design parameters, and is used to demonstrate how sensitivity analysis may be used with NDLI to develop reliability-based designs of TPS components.
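
    Load-interference methods such as NDLI rest on the classic stress-strength calculation: reliability is the probability that capability exceeds load. A minimal sketch with assumed normal distributions and invented temperature values follows.

    ```python
    # Classic load-interference reliability: R = P(capability > load) for
    # independent normal distributions. All numbers are assumptions chosen
    # for illustration, not from the lecture.
    from math import sqrt
    from scipy.stats import norm

    mu_S, sd_S = 2100.0, 90.0    # temperature capability, K (assumed)
    mu_L, sd_L = 1800.0, 120.0   # predicted peak service temperature, K (assumed)

    # Safety margin in standard-deviation units, then reliability
    z = (mu_S - mu_L) / sqrt(sd_S**2 + sd_L**2)
    R = norm.cdf(z)
    print(f"safety margin z = {z:.2f}, thermal reliability = {R:.5f}")
    ```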

  11. Hardening communication ports for survival in electrical overstress environments

    NASA Technical Reports Server (NTRS)

    Clark, O. Melville

    1991-01-01

    Greater attention is being focused on the protection of data I/O ports since both experience and lab tests have shown that components at these locations are extremely vulnerable to electrical overstress (EOS) in the form of transient voltages. Lightning and electrostatic discharge (ESD) are the major contributors to these failures; however, these losses can be prevented. Hardening against transient voltages at both the board level and system level has a proven record of improving reliability by orders of magnitude. The EOS threats, typical failure modes, and transient voltage mitigation techniques are reviewed. Case histories are also reviewed.

  12. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
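
    One simple reading of the aggregation step is that low-level reliability figures are combined according to a series/parallel architecture description. The sketch below is a toy interpretation with hypothetical component values, not the patented generator itself.

    ```python
    # Toy aggregation of low-level reliability models over a series/parallel
    # architecture description. Component names and values are hypothetical.
    def R_series(*rs):      # every element must work
        out = 1.0
        for r in rs:
            out *= r
        return out

    def R_parallel(*rs):    # at least one element must work
        out = 1.0
        for r in rs:
            out *= (1.0 - r)
        return 1.0 - out

    # Architecture: redundant sensors -> processor -> redundant actuators
    R_sensor, R_cpu, R_act = 0.95, 0.99, 0.97      # low-level model outputs (assumed)
    R_system = R_series(R_parallel(R_sensor, R_sensor),
                        R_cpu,
                        R_parallel(R_act, R_act))
    print(f"system reliability = {R_system:.4f}")
    ```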

  13. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift measures the processing components as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220

  14. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    2003-01-01

    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.

  15. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift measures the processing components as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  16. Engine System Model Development for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Nelson, Karl W.; Simpson, Steven P.

    2006-01-01

    In order to design, analyze, and evaluate conceptual Nuclear Thermal Propulsion (NTP) engine systems, an improved NTP design and analysis tool has been developed. The NTP tool utilizes the Rocket Engine Transient Simulation (ROCETS) system tool and many of the routines from the Enabler reactor model found in Nuclear Engine System Simulation (NESS). Improved non-nuclear component models and an external shield model were added to the tool. With the addition of a nearly complete system reliability model, the tool will provide performance, sizing, and reliability data for NERVA-Derived NTP engine systems. A new detailed reactor model is also being developed and will replace Enabler. The new model will allow more flexibility in reactor geometry and include detailed thermal hydraulics and neutronics models. A description of the reactor, component, and reliability models is provided. Another key feature of the modeling process is the use of comprehensive spreadsheets for each engine case. The spreadsheets include individual worksheets for each subsystem with data, plots, and scaled figures, making the output very useful to each engineering discipline. Sample performance and sizing results with the Enabler reactor model are provided including sensitivities. Before selecting an engine design, all figures of merit must be considered including the overall impacts on the vehicle and mission. Evaluations based on key figures of merit of these results and results with the new reactor model will be performed. The impacts of clustering and external shielding will also be addressed. Over time, the reactor model will be upgraded to design and analyze other NTP concepts with CERMET and carbide fuel cores.

  17. A real time neural net estimator of fatigue life

    NASA Technical Reports Server (NTRS)

    Troudet, T.; Merrill, W.

    1990-01-01

    A neural net architecture is proposed to estimate, in real-time, the fatigue life of mechanical components, as part of the Intelligent Control System for Reusable Rocket Engines. Arbitrary component loading values were used as input to train a two hidden-layer feedforward neural net to estimate component fatigue damage. The ability of the net to learn, based on a local strain approach, the mapping between load sequence and fatigue damage has been demonstrated for a uniaxial specimen. Because of its demonstrated performance, the neural computation may be extended to complex cases where the loads are biaxial or triaxial, and the geometry of the component is complex (e.g., turbopump blades). The generality of the approach is such that load/damage mappings can be directly extracted from experimental data without requiring any knowledge of the stress/strain profile of the component. In addition, the parallel network architecture allows real-time life calculations even for high frequency vibrations. Owing to its distributed nature, the neural implementation will be robust and reliable, enabling its use in hostile environments such as rocket engines. This neural net estimator of fatigue life is seen as the enabling technology to achieve component life prognosis, and therefore would be an important part of life extending control for reusable rocket engines.
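
    A miniature version of the idea is sketched below: a two-hidden-layer feedforward net trained to map a load block to accumulated damage. A crude Basquin/Miner-style proxy stands in for the local strain analysis used in the paper; the architecture size, data, and constants are all assumptions.

    ```python
    # Tiny sketch: train a two-hidden-layer feedforward net to map a load
    # sequence to fatigue damage. "Ground truth" here is a crude power-law
    # damage proxy, not the local strain approach of the paper.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(2000, 8))   # 8-point normalized load blocks
    damage = (X ** 6).sum(axis=1)               # Miner-style damage proxy (assumed)

    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    net.fit(X[:1500], damage[:1500])

    err = np.abs(net.predict(X[1500:]) - damage[1500:]).mean()
    print(f"mean abs error on held-out load blocks: {err:.3f}")
    ```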

  18. Scaled CMOS Reliability and Considerations for Spacecraft Systems : Bottom-Up and Top-Down Perspectives

    NASA Technical Reports Server (NTRS)

    White, Mark

    2012-01-01

    The recently launched Mars Science Laboratory (MSL) flagship mission, named Curiosity, is the most complex rover ever built by NASA and is scheduled to touch down on the red planet in August, 2012 in Gale Crater. The rover and its instruments will have to endure the harsh environments of the surface of Mars to fulfill its main science objectives. Such complex systems require reliable microelectronic components coupled with adequate component and system-level design margins. Reliability aspects of these elements of the spacecraft system are presented from bottom- up and top-down perspectives.

  19. 2nd Generation RLV Risk Reduction Definition Program: Pratt & Whitney Propulsion Risk Reduction Requirements Program (TA-3 & TA-4)

    NASA Technical Reports Server (NTRS)

    Matlock, Steve

    2001-01-01

    This is the final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of 3 cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendation for future studies.

  20. Transient Reliability of Ceramic Structures For Heat Engine Applications

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Jadaan, Osama M.

    2002-01-01

    The objective of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability in ceramic components subjected to fluctuating thermomechanical loading was developed assuming SCG (slow crack growth) as the delayed mode of failure. It takes into account the effect of Weibull modulus and material parameters varying with time. It was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.

  1. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
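
    For item (2), one standard software reliability growth model is the Goel-Okumoto NHPP, with mean cumulative failures m(t) = a(1 - e^{-bt}). The sketch below fits it to hypothetical weekly failure counts; the data are invented for illustration.

    ```python
    # Fitting a Goel-Okumoto reliability growth model to hypothetical
    # cumulative failure counts; a estimates total faults, b the detection rate.
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.arange(1, 13)                                   # test weeks
    cum_failures = np.array([12, 21, 28, 34, 38, 42, 44, 46, 48, 49, 50, 51])

    def m(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    (a, b), _ = curve_fit(m, t, cum_failures, p0=(60.0, 0.2))
    print(f"estimated total faults a ~ {a:.1f}, detection rate b ~ {b:.3f}")
    print(f"expected residual faults: {a - cum_failures[-1]:.1f}")
    ```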

  2. Dense surface seismic data confirm non-double-couple source mechanisms induced by hydraulic fracturing

    USGS Publications Warehouse

    Pesicek, Jeremy; Cieślik, Konrad; Lambert, Marc-André; Carrillo, Pedro; Birkelo, Brad

    2016-01-01

    We have determined source mechanisms for nine high-quality microseismic events induced during hydraulic fracturing of the Montney Shale in Canada. Seismic data were recorded using a dense regularly spaced grid of sensors at the surface. The design and geometry of the survey are such that the recorded P-wave amplitudes essentially map the upper focal hemisphere, allowing the source mechanism to be interpreted directly from the data. Given the inherent difficulties of computing reliable moment tensors (MTs) from high-frequency microseismic data, the surface amplitude and polarity maps provide important additional confirmation of the source mechanisms. This is especially critical when interpreting non-shear source processes, which are notoriously susceptible to artifacts due to incomplete or inaccurate source modeling. We have found that most of the nine events contain significant non-double-couple (DC) components, as evident in the surface amplitude data and the resulting MT models. Furthermore, we found that source models that are constrained to be purely shear do not explain the data for most events. Thus, even though non-DC components of MTs can often be attributed to modeling artifacts, we argue that they are required by the data in some cases, and can be reliably computed and confidently interpreted under favorable conditions.

  3. Simulation supported POD for RT test case-concept and modeling

    NASA Astrophysics Data System (ADS)

    Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.

    2012-05-01

    Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant components in the aerospace industry requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials if the simulation results fit the experiment. Our analytic simulation tool consists of three modules for the description of the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept has been developed, as well as a software extension for reliability investigations, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models generated from 3D CAD of aero engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement for the application of modern digital equipment for economical NDT and a defined POD.

  4. Reliability and Validity of the Dyadic Observed Communication Scale (DOCS).

    PubMed

    Hadley, Wendy; Stewart, Angela; Hunter, Heather L; Affleck, Katelyn; Donenberg, Geri; Diclemente, Ralph; Brown, Larry K

    2013-02-01

    We evaluated the reliability and validity of the Dyadic Observed Communication Scale (DOCS) coding scheme, which was developed to capture a range of communication components between parents and adolescents. Adolescents and their caregivers were recruited from mental health facilities for participation in a large, multi-site family-based HIV prevention intervention study. Seventy-one dyads were randomly selected from the larger study sample and coded using the DOCS at baseline. Preliminary validity and reliability of the DOCS was examined using various methods, such as comparing results to self-report measures and examining interrater reliability. Results suggest that the DOCS is a reliable and valid measure of observed communication among parent-adolescent dyads that captures both verbal and nonverbal communication behaviors that are typical intervention targets. The DOCS is a viable coding scheme for use by researchers and clinicians examining parent-adolescent communication. Coders can be trained to reliably capture individual and dyadic components of communication for parents and adolescents and this complex information can be obtained relatively quickly.

  5. From the ocean to a salt marsh: towards understanding iron reduction processes with FORC-PCA.

    NASA Astrophysics Data System (ADS)

    Muraszko, J. R.; Lascu, I.; Collins, S. M.; Harrison, R. J.

    2017-12-01

    Biogenic magnetic minerals are a high-fidelity recorder of climate change. Their sensitivity to sedimentary redox conditions and bottom water ventilation has the potential to provide useful insights into past diagenetic conditions. However, the mechanisms controlling preservation and dissolution of magnetosomes are not fully understood, thus undermining the reliability of the paleomagnetic records in marine environments. Recovering information about the diagenetic past of the sediment is a crucial challenge; specifically, the biogenic components need to be identified and unmixed from the bulk magnetic signal. We address the issue in this study by applying Principal Component Analysis on First Order Reversal Curve diagrams (FORC-PCA) in case studies of cores obtained from the Iberian Margin and the sedimentologically active coastal salt marshes of Norfolk. We demonstrate the applicability of FORC-PCA as a new environmental proxy, yielding a high resolution temporal marine record of environmental changes reflected in magnetic composition over the last 194 kyr. The strongest variations are observed in the microbially derived components, the bulk properties of the sediment being controlled by a low coercivity SP-SD component which is generally anticorrelated with the magnetosome signal. Supported by TEM studies, we suggest the prevalence of clusters of nano-particles of magnetite associated with iron reduction. To further investigate the mechanisms controlling these processes, the active sedimentary environment of Norfolk was chosen as a case study of early diagenesis controlled by strong vertical geochemical gradients.

  6. Sensitivity-Informed De Novo Programming for Many-Objective Water Portfolio Planning Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Kasprzyk, J. R.; Reed, P. M.; Kirsch, B. R.; Characklis, G. W.

    2009-12-01

    Risk-based water supply management presents severe cognitive, computational, and social challenges to planning in a changing world. Decision aiding frameworks must confront the cognitive biases implicit to risk, the severe uncertainties associated with long term planning horizons, and the consequent ambiguities that shape how we define and solve water resources planning and management problems. This paper proposes and demonstrates a new interactive framework for sensitivity informed de novo programming. The theoretical focus of our many-objective de novo programming is to promote learning and evolving problem formulations to enhance risk-based decision making. We have demonstrated our proposed de novo programming framework using a case study for a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas. Key decisions in this case study include the purchase of permanent rights to reservoir inflows and anticipatory thresholds for acquiring transfers of water through optioning and spot leases. A 10-year Monte Carlo simulation driven by historical data is used to provide performance metrics for the supply portfolios. The three major components of our methodology include Sobol global sensitivity analysis, many-objective evolutionary optimization and interactive tradeoff visualization. The interplay between these components allows us to evaluate alternative design metrics, their decision variable controls and the consequent system vulnerabilities. Our LRGV case study measures water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market. The sensitivity analysis is used interactively over interannual, annual, and monthly time scales to indicate how the problem controls change as a function of the timescale of interest. These results have then been used to improve our exploration and understanding of LRGV costs, vulnerabilities, and the water portfolios’ critical reliability constraints. These results demonstrate how we can adaptively improve the value and robustness of our problem formulations by evolving our definition of optimality to discover key tradeoffs.
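
    The Sobol indices used to rank decision-variable controls have the standard variance-decomposition form (generic notation, not specific to this study):

    ```latex
    % First-order and total-order Sobol' sensitivity indices: the share of
    % output variance attributable to input X_i alone, and to X_i including
    % all of its interactions.
    S_i = \frac{\operatorname{Var}_{X_i}\!\left[\operatorname{E}(Y \mid X_i)\right]}{\operatorname{Var}(Y)},
    \qquad
    S_{T_i} = 1 - \frac{\operatorname{Var}_{X_{\sim i}}\!\left[\operatorname{E}(Y \mid X_{\sim i})\right]}{\operatorname{Var}(Y)}
    ```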

  7. Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, D.; Brunett, A.; Passerini, S.

    Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.

  8. User-perceived reliability of unrepairable shared protection systems with functionally identical units

    NASA Astrophysics Data System (ADS)

    Ozaki, Hirokazu; Kara, Atsushi; Cheng, Zixue

    2012-05-01

    In this article, we investigate the reliability of M-for-N (M:N) shared protection systems. We focus on the reliability that is perceived by an end user of one of N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner under the condition that the failed units are not repairable. Mathematical analysis gives the closed-form solution of the reliability and mean time to failure (MTTF). We also analyse several numerical examples of the reliability and MTTF. This result can be applied, for example, to the analysis and design of an integrated circuit consisting of redundant backup components. In such a device, repairing a failed component is unrealistic. The analysis provides useful information for the design for general shared protection systems in which the failed units are not repaired.
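
    The closed-form results described in the article can be cross-checked by simulation. The Monte Carlo sketch below follows the stated model (exponentially distributed, non-repairable units; instant replacement while spares last), with arbitrary N, M, and failure rate chosen for illustration.

    ```python
    # Monte Carlo estimate of the user-perceived MTTF in an M-for-N shared
    # protection system without repair. Slot 0 is "our" user; a failed unit
    # is instantly replaced while spares remain. Parameters are arbitrary.
    import numpy as np

    rng = np.random.default_rng(2)

    def user_lifetime(N=8, M=2, lam=1.0):
        spares = M
        life = rng.exponential(1.0 / lam, size=N)    # absolute failure times
        while True:
            i = int(np.argmin(life))                 # next unit to fail
            t = life[i]
            if spares > 0:
                spares -= 1
                life[i] = t + rng.exponential(1.0 / lam)   # instant replacement
            elif i == 0:
                return t                             # our unit dead, pool empty
            else:
                life[i] = np.inf                     # another user loses service

    samples = [user_lifetime() for _ in range(20000)]
    print(f"user-perceived MTTF ~ {np.mean(samples):.3f} (units of 1/lam)")
    ```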

  9. Time-Varying, Multi-Scale Adaptive System Reliability Analysis of Lifeline Infrastructure Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Kurtz, Nolan Scot

    2014-09-01

    The majority of current societal and economic needs world-wide are met by the existing networked, civil infrastructure. Because the cost of managing such infrastructure is high and increases with time, risk-informed decision making is essential for those with management responsibilities for these systems. To address such concerns, a methodology that accounts for new information, deterioration, component models, component importance, group importance, network reliability, hierarchical structure organization, and efficiency concerns has been developed. This methodology analyzes the use of new information through the lens of adaptive Importance Sampling for structural reliability problems. Deterioration, multi-scale bridge models, and time-variant component importance are investigated for a specific network. Furthermore, both bridge and pipeline networks are studied for group and component importance, as well as for hierarchical structures in the context of specific networks. Efficiency is the primary driver throughout this study. With this risk-informed approach, those responsible for management can address deteriorating infrastructure networks in an organized manner.

  10. The Component Timed-Up-and-Go test: the utility and psychometric properties of using a mobile application to determine prosthetic mobility in people with lower limb amputations.

    PubMed

    Clemens, Sheila M; Gailey, Robert S; Bennett, Christopher L; Pasquina, Paul F; Kirk-Sanchez, Neva J; Gaunaurd, Ignacio A

    2018-03-01

    Using a custom mobile application to evaluate the reliability and validity of the Component Timed-Up-and-Go test to assess prosthetic mobility in people with lower limb amputation. Cross-sectional design. National conference for people with limb loss. A total of 118 people with a non-vascular cause of lower limb amputation participated. Subjects had a mean age of 48 (±13.7) years and were an average of 10 years post amputation; 54% (n = 64) of subjects were male. None. The Component Timed-Up-and-Go was administered using a mobile iPad application, generating a total time to complete the test and five component times capturing each subtask (sit-to-stand transitions, linear gait, turning) of the standard timed-up-and-go test. The outcome underwent test-retest reliability analysis using intraclass correlation coefficients (ICCs) and convergent validity analyses through correlation with self-report measures of balance and mobility. The Component Timed-Up-and-Go exhibited excellent test-retest reliability, with ICCs ranging from .98 to .86 for total and component times. Evidence of discriminative validity resulted from significant differences in mean total times between people with transtibial (10.1 (SD: ±2.3)) and transfemoral (12.76 (SD: ±5.1)) amputation, as well as significant differences in all five component times (P < .05). Convergent validity of the Component Timed-Up-and-Go was demonstrated through moderate correlations with the PLUS-M (rs = -.56). The Component Timed-Up-and-Go is a reliable and valid clinical tool for detailed assessment of prosthetic mobility in people with non-vascular lower limb amputation. The iPad application provided a means to easily record data, contributing to clinical utility.

  11. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.

  12. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.
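
    For a simple person-by-station crossed design, the generalizability coefficient reported here (0.93) has the standard form below (generic notation; the study's actual design involves additional facets):

    ```latex
    % Generalizability coefficient for a one-facet crossed p x s design:
    % universe-score variance over itself plus relative error variance
    % averaged across the n_s stations.
    E\rho^2 = \frac{\sigma^2_{p}}{\sigma^2_{p} + \sigma^2_{ps,e} / n_s}
    ```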

  13. Using meta-quality to assess the utility of volunteered geographic information for science.

    PubMed

    Langley, Shaun A; Messina, Joseph P; Moore, Nathan

    2017-11-06

    Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication open up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a health concern for being the primary vector of African Trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed but the quality of VGI will improve as more data are collected and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.

  14. Reliability analysis of the objective structured clinical examination using generalizability theory.

    PubMed

    Trejo-Mejía, Juan Andrés; Sánchez-Mendiola, Melchor; Méndez-Ramírez, Ignacio; Martínez-González, Adrián

    2016-01-01

    Background The objective structured clinical examination (OSCE) is a widely used method for assessing clinical competence in health sciences education. Studies using this method have shown evidence of validity and reliability. There are no published studies of OSCE reliability measurement with generalizability theory (G-theory) in Latin America. The aims of this study were to assess the reliability of an OSCE in medical students using G-theory and explore its usefulness for quality improvement. Methods An observational cross-sectional study was conducted at National Autonomous University of Mexico (UNAM) Faculty of Medicine in Mexico City. A total of 278 fifth-year medical students were assessed with an 18-station OSCE in a summative end-of-career final examination. There were four exam versions. G-theory with a crossover random effects design was used to identify the main sources of variance. Examiners, standardized patients, and cases were considered as a single facet of analysis. Results The exam was applied to 278 medical students. The OSCE had a generalizability coefficient of 0.93. The major components of variance were stations, students, and residual error. The sites and the versions of the tests had minimum variance. Conclusions Our study achieved a G coefficient similar to that found in other reports, which is acceptable for summative tests. G-theory allows the estimation of the magnitude of multiple sources of error and helps decision makers to determine the number of stations, test versions, and examiners needed to obtain reliable measurements.

  15. Structural reliability assessment capability in NESSUS

    NASA Technical Reports Server (NTRS)

    Millwater, H.; Wu, Y.-T.

    1992-01-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  16. Structural reliability assessment capability in NESSUS

    NASA Astrophysics Data System (ADS)

    Millwater, H.; Wu, Y.-T.

    1992-07-01

    The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.

  17. Reliability of intracerebral hemorrhage classification systems: A systematic review.

    PubMed

    Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm

    2016-08-01

    Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.
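
    The kappa statistics summarized in the forest plots are chance-corrected agreement measures of the standard form:

    ```latex
    % Cohen's kappa: p_o is the observed proportion of agreement between
    % raters, p_e the agreement expected by chance alone.
    \kappa = \frac{p_o - p_e}{1 - p_e}
    ```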

  18. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges of verifying a reliable design versus a trusted design?

  19. Psychometric properties of the Interpersonal Relationship Inventory-Short Form for active duty female service members.

    PubMed

    Nayback-Beebe, Ann M; Yoder, Linda H

    2011-06-01

    The Interpersonal Relationship Inventory-Short Form (IPRI-SF) has demonstrated psychometric consistency across several demographic and clinical populations; however, it has not been psychometrically tested in a military population. The purpose of this study was to psychometrically evaluate the reliability and component structure of the IPRI-SF in active duty United States Army female service members (FSMs). The reliability estimates were .93 for the social support subscale and .91 for the conflict subscale. Principal component analysis demonstrated an obliquely rotated three-component solution that accounted for 58.9% of the variance. The results of this study support the reliability and validity of the IPRI-SF for use in FSMs; however, a three-factor structure emerged in this sample of FSMs post-deployment that represents "cultural context." Copyright © 2011 Wiley Periodicals, Inc.
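
    The subscale reliability estimates quoted above are internal-consistency coefficients of the Cronbach's alpha type; a minimal sketch of that computation on a respondents-by-items score matrix (the scores are fabricated for illustration):

      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array, rows = respondents, columns = scale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      scores = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3], [4, 4, 4, 5]]
      print(round(cronbach_alpha(scores), 3))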

  20. HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.

    1976-12-01

    The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.

  1. Behavioral Scale Reliability and Measurement Invariance Evaluation Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko

    2004-01-01

    A latent variable modeling approach to reliability and measurement invariance evaluation for multiple-component measuring instruments is outlined. An initial discussion deals with the limitations of coefficient alpha, a frequently used index of composite reliability. A widely and readily applicable structural modeling framework is next described…

  2. Application of redundancy in the Saturn 5 guidance and control system

    NASA Technical Reports Server (NTRS)

    Moore, F. B.; White, J. B.

    1976-01-01

    The Saturn launch vehicle's guidance and control system is so complex that the reliability of a simplex system is not adequate to fulfill mission requirements. Thus, to achieve the desired reliability, redundancy encompassing a wide range of types and levels was employed. At one extreme, the lowest level, basic components (resistors, capacitors, relays, etc.) are employed in series, parallel, or quadruplex arrangements to ensure continued system operation in the presence of possible failure conditions. At the other extreme, the highest level, complete subsystem duplication is provided so that a backup subsystem can be employed in case the primary system malfunctions. Between these two extremes, many other redundancy schemes and techniques are employed at various levels. Basic redundancy concepts are covered to gain insight into the advantages obtained with various techniques. Points and methods of application of these techniques are included. The theoretical gain in reliability resulting from redundancy is assessed and compared to a simplex system. Problems and limitations encountered in the practical application of redundancy are discussed, as are techniques for verifying proper operation of the redundant channels. As background for the redundancy application discussion, a basic description of the guidance and control system is included.
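
    The reliability gains described above follow from elementary redundancy formulas; the sketch below compares a simplex channel with dual-parallel and two-out-of-three voted arrangements, using an assumed per-channel reliability of 0.99.

      from math import comb

      def parallel(r, n):
          """n identical channels; system works if at least one works."""
          return 1 - (1 - r) ** n

      def k_of_n(r, k, n):
          """System works if at least k of n identical channels work."""
          return sum(comb(n, i) * r**i * (1 - r) ** (n - i) for i in range(k, n + 1))

      r = 0.99  # assumed single-channel reliability
      print(f"simplex:        {r:.6f}")
      print(f"dual parallel:  {parallel(r, 2):.6f}")
      print(f"2-of-3 voting:  {k_of_n(r, 2, 3):.6f}")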

  3. Probabilistic assessment of dynamic system performance. Part 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belhadj, Mohamed

    1993-01-01

    Accurate prediction of dynamic system failure behavior can be important for the reliability and risk analyses of nuclear power plants, as well as for their backfitting to satisfy given constraints on overall system reliability, or optimization of system performance. Global analysis of dynamic systems through investigating the variations in the structure of the attractors of the system and the domains of attraction of these attractors as a function of the system parameters is also important for nuclear technology in order to understand the fault-tolerance as well as the safety margins of the system under consideration and to ensure safe operation of nuclear reactors. Such a global analysis would be particularly relevant to future reactors with inherent or passive safety features that are expected to rely on natural phenomena rather than active components to achieve and maintain safe shutdown. Conventionally, failure and global analysis of dynamic systems necessitate the utilization of different methodologies which have computational limitations on the system size that can be handled. Using a Chapman-Kolmogorov interpretation of system dynamics, a theoretical basis is developed that unifies these methodologies as special cases and which can be used for a comprehensive safety and reliability analysis of dynamic systems.
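
    As a concrete instance of the Chapman-Kolmogorov view of system dynamics, a minimal sketch that propagates the state probabilities of a discrete-time Markov model with an absorbing failed state; the three-state transition matrix is an illustrative assumption, not taken from the report.

      import numpy as np

      # States: 0 = nominal, 1 = degraded, 2 = failed (absorbing).
      P = np.array([
          [0.98, 0.015, 0.005],
          [0.00, 0.950, 0.050],
          [0.00, 0.000, 1.000],
      ])

      p = np.array([1.0, 0.0, 0.0])  # start in the nominal state
      for step in range(100):
          p = p @ P  # Chapman-Kolmogorov: p(t+1) = p(t) P

      print(f"P(failed by step 100) = {p[2]:.4f}")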

  4. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.

  5. How Reliable is the Acetabular Cup Position Assessment from Routine Radiographs?

    PubMed Central

    Carvajal Alba, Jaime A.; Vincent, Heather K.; Sodhi, Jagdeep S.; Latta, Loren L.; Parvataneni, Hari K.

    2017-01-01

    Abstract Background: Cup position is crucial for optimal outcomes in total hip arthroplasty. Radiographic assessment of component position is routinely performed in the early postoperative period. Aims: The aims of this study were to determine in a controlled environment if routine radiographic methods accurately and reliably assess the acetabular cup position and to assess if there is a statistical difference related to the rater’s level of training. Methods: A pelvic model was mounted in a spatial frame. An acetabular cup was fixed in different degrees of version and inclination. Standardized radiographs were obtained. Ten observers including five fellowship-trained orthopaedic surgeons and five orthopaedic residents performed a blind assessment of cup position. Inclination was assessed from anteroposterior radiographs of the pelvis and version from cross-table lateral radiographs of the hip. Results: The radiographic methods used proved to be imprecise, especially when the cup was positioned at the extremes of version and inclination. Excellent inter-observer reliability (intraclass coefficient > 0.9) was observed. There were no differences related to the level of training of the raters. Conclusions: These widely used radiographic methods should be interpreted cautiously and computed tomography should be utilized in cases when further intervention is contemplated. PMID:28852355

  6. Partitioning Detectability Components in Populations Subject to Within-Season Temporary Emigration Using Binomial Mixture Models

    PubMed Central

    O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.

    2015-01-01

    Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
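
    A minimal sketch of the core binomial (N-)mixture likelihood, marginalizing the latent site abundance over a truncated Poisson prior. The paper's extension further splits detection into availability and conditional detection, which is omitted here; the counts and parameter values are fabricated.

      import numpy as np
      from scipy.stats import poisson, binom

      def nmix_neg_loglik(counts, lam, p, n_max=200):
          """counts: sites x surveys array of repeated counts.
          lam: expected abundance; p: per-survey detection probability."""
          n_vals = np.arange(n_max + 1)
          prior = poisson.pmf(n_vals, lam)  # P(N = n)
          ll = 0.0
          for site_counts in counts:
              # P(y_1..y_J | N = n); zero whenever n < the maximum observed count
              lik_n = np.prod([binom.pmf(y, n_vals, p) for y in site_counts], axis=0)
              ll += np.log(np.sum(prior * lik_n))
          return -ll

      counts = np.array([[3, 2, 4], [0, 1, 0], [5, 6, 4]])  # fabricated survey data
      print(nmix_neg_loglik(counts, lam=6.0, p=0.5))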

  7. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result in terms of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, based on physics of failure and electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, in the case of a mature technology, there is a growing complexity in calculating average lifetimes and failure rates (FITs) using ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to be 10^6 hours under typical operating conditions (Popt = 10 mW and T = 80°C). These ageing tests must be performed on more than 100 components aged for 10,000 hours under a mix of temperature and drive-current conditions, leading to acceleration factors above 300-400. Such conditions are costly and time-consuming, and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, Distributed Feedback single-mode laser diodes (DFB-LD) used in 1550 nm telecommunication networks at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters were measured before and after ageing tests, performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.
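
    A minimal sketch of the statistical-computation idea: sample degradation-law parameters representative of the aged population, propagate each sampled drift curve to a failure criterion, and read off the lifetime distribution. The linear drift law, 20% failure criterion, and lognormal rate parameters are illustrative assumptions, not the paper's fitted values.

      import numpy as np

      rng = np.random.default_rng(1)
      n_samples = 100_000

      # Assumed degradation law: relative parameter drift d(t) = A * t, with the
      # drift rate A lognormally distributed across the component population.
      A = rng.lognormal(mean=np.log(2e-7), sigma=0.5, size=n_samples)  # drift per hour

      criterion = 0.20        # failure when drift reaches 20% of the nominal value
      ttf = criterion / A     # time to failure, in hours

      print(f"median lifetime: {np.median(ttf):.3e} h")
      print(f"fraction failed by 1e6 h: {np.mean(ttf < 1e6):.3f}")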

  8. Case-mix adjustment and enabled reporting of the health care experiences of adults with disabilities.

    PubMed

    Palsbo, Susan E; Diao, Guoqing; Palsbo, Gregory A; Tang, Liansheng; Rosenberger, William F; Mastal, Margaret F

    2010-09-01

    To develop activity limitation clusters for case-mix adjustment of health care ratings and as a population profiler, and to develop a cognitively accessible report of statistically reliable quality and access measures comparing the health care experiences of adults with and without disabilities, within and across health delivery organizations. Observational study. Three California Medicaid health care organizations. Adults (N = 1086) of working age enrolled for at least 1 year in Medicaid because of disability. Not applicable. Principal components analysis created 4 clusters of activity limitations that we used to characterize case mix. We identified and calculated 28 quality measures using responses from a proposed enabled version of the Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey. We calculated scores for overall care as the weighted mean of the case-mix adjusted ratings. Disability caused a greater bias on health plan ratings and specialist ratings than did demographic factors. Proxy respondents rated care the same as self-respondents. Telephone and mail administration were equivalent for service reports, but telephone respondents tended to offer more positive global ratings. Plan-level reliability estimates for new composites on shared decision making and advice on healthy living are .79 and .87, respectively. Plan-level reliability estimates for a new composite measure on family planning did not discriminate between health plans because respondents rated all health plans poorly. Approximately 125 respondents per site are necessary to detect group differences. Self-reported activity limitations incorporating standard questions from the American Community Survey can be used to create a disability case-mix index and to construct profiles of a population's activity limitations. The enabled comparative report, which we call the Assessment of Health Plans and Providers by People with Activity Limitations, is more cognitively accessible than typical CAHPS report templates for state Medicaid plans. The CAHPS Medicaid reporting tools may provide misleading ratings of health plan and physician quality by people with disabilities because the mean ratings do not account for systematic biases associated with disability. More testing on larger populations would help to quantify the strength of various reporting biases.

  9. Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2015-11-01

    This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR=1H), failure to run more than one hour (FTR>1H), and normally running systems FTS and failure to run (FTR). The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.
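
    Studies of this kind typically convert failure and demand counts into failure probabilities with a Bayesian update; the sketch below uses a Jeffreys Beta(0.5, 0.5) prior, a common convention in such reports, though the counts here are invented and this is not necessarily the report's exact estimation pipeline.

      def jeffreys_estimate(failures, demands):
          """Posterior mean of a failure-on-demand probability under a
          Jeffreys Beta(0.5, 0.5) prior with binomial data."""
          alpha = failures + 0.5
          beta = demands - failures + 0.5
          return alpha / (alpha + beta)

      # Fabricated example: 3 failures to start in 1200 standby TDP demands.
      print(f"FTS probability ~ {jeffreys_estimate(3, 1200):.2e}")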

  10. PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS

    NASA Technical Reports Server (NTRS)

    Savage, M.

    1994-01-01

    The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration specific subroutines, generic component property analysis subroutines, systems analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The series of configuration specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call up the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled on a MicroSoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
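
    The combination rule at the heart of such a life model is that, for statistically independent components in the main load path, system reliability at a given life is the product of the component reliabilities; a minimal sketch with two-parameter Weibull components (the eta/beta values are illustrative assumptions, not PSHFT data):

      import numpy as np

      def weibull_rel(t, eta, beta):
          """Two-parameter Weibull reliability at life t (eta = characteristic life)."""
          return np.exp(-((t / eta) ** beta))

      # Assumed (eta [hours], beta) pairs for bearings and gears in the load path.
      components = [(9000.0, 1.5), (12000.0, 2.0), (15000.0, 2.5)]

      t = 1000.0
      r_sys = np.prod([weibull_rel(t, eta, beta) for eta, beta in components])
      print(f"system reliability at {t:.0f} h: {r_sys:.4f}")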

  11. Component Reliability Testing of Long-Life Sorption Cryocoolers

    NASA Technical Reports Server (NTRS)

    Bard, S.; Wu, J.; Karlmann, P.; Mirate, C.; Wade, L.

    1994-01-01

    This paper summarizes ongoing experiments characterizing the ability of critical sorption cryocooler components to achieve highly reliable operation for long-life space missions. Test data obtained over the past several years at JPL are entirely consistent with achieving ten year life for sorption compressors, electrical heaters, container materials, valves, and various sorbent materials suitable for driving 8 to 180 K refrigeration stages. Test results for various compressor systems are reported. Planned future tests necessary to gain a detailed understanding of the sensitivity of cooler performance and component life to operating constraints, design configurations, and fabrication, assembly and handling techniques, are also discussed.

  12. Performance and reliability of the NASA biomass production chamber

    NASA Technical Reports Server (NTRS)

    Fortson, R. E.; Sager, J. C.; Chetirkin, P. V.

    1994-01-01

    The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of CELSS, are discussed.

  13. A New Tool for Nutrition App Quality Evaluation (AQEL): Development, Validation, and Reliability Testing.

    PubMed

    DiFilippo, Kristen Nicole; Huang, Wenhao; Chapman-Novakofski, Karen M

    2017-10-27

    The extensive availability and increasing use of mobile apps for nutrition-based health interventions make evaluation of the quality of these apps crucial for integration of apps into nutritional counseling. The goal of this research was the development, validation, and reliability testing of the app quality evaluation (AQEL) tool, an instrument for evaluating apps' educational quality and technical functionality. Items for evaluating app quality were adapted from website evaluations, with additional items added to evaluate the specific characteristics of apps, resulting in 79 initial items. Expert panels of nutrition and technology professionals and app users reviewed items for face and content validation. After recommended revisions, nutrition experts completed a second AQEL review to ensure clarity. On the basis of 150 sets of responses using the revised AQEL, principal component analysis was completed, reducing the AQEL to 5 factors that underwent reliability testing, including internal consistency, split-half reliability, test-retest reliability, and interrater reliability (IRR). Two additional modifiable constructs for evaluating apps based on the age and needs of the target audience as selected by the evaluator were also tested for construct reliability. IRR testing using intraclass correlations (ICC) with all 7 constructs was conducted, with 15 dietitians evaluating one app. Development and validation resulted in the 51-item AQEL. These were reduced to 25 items in 5 factors after principal component analysis, plus 9 modifiable items in two constructs that were not included in principal component analysis. Internal consistency and split-half reliability of the constructs derived from principal components analysis were good (Cronbach alpha >.80, Spearman-Brown coefficient >.80): behavior change potential, support of knowledge acquisition, app function, and skill development. App purpose split-half reliability was .65. Test-retest reliability showed no significant change over time (P>.05) for all but skill development (P=.001). Construct reliability was good for items assessing age appropriateness of apps for children, teens, and a general audience. In addition, construct reliability was acceptable for assessing app appropriateness for various target audiences (Cronbach alpha >.70). For the 5 main factors, ICC (1,k) was >.80, with a P value of <.05. When 15 nutrition professionals evaluated one app, ICC (2,15) was .98, with a P value of <.001 for all 7 constructs when the modifiable items were specified for adults seeking weight loss support. Our preliminary effort shows that AQEL is a valid, reliable instrument for evaluating nutrition apps' qualities for clinical interventions by nutrition clinicians, educators, and researchers. Further efforts in validating AQEL in various contexts are needed. ©Kristen Nicole DiFilippo, Wenhao Huang, Karen M. Chapman-Novakofski. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 27.10.2017.

  14. Performances and reliability predictions of optical data transmission links using a system simulator for aerospace applications

    NASA Astrophysics Data System (ADS)

    Bechou, L.; Deshayes, Y.; Aupetit-Berthelemot, C.; Guerin, A.; Tronche, C.

    Space missions for Earth Observation are called upon to carry a growing number of instruments in their payload, whose performances are increasing. Future space systems are therefore intended to generate huge amounts of data, and a key challenge in coming years will lie in the ability to transmit that significant quantity of data to the ground. Very high data rate Payload Telemetry (PLTM) systems will thus be required to meet the demands of future Earth Exploration Satellite Systems, and reliability is one of the major concerns for such systems. An attractive approach associated with the concept of predictive modeling consists in analyzing the impact of component malfunctions on the optical link performance, taking into account the network requirements and experimental degradation laws. Reliability estimation is traditionally based on life-testing, and a basic approach is to use Telcordia requirements (468GR) for optical telecommunication applications. However, due to the various interactions between components, the operating lifetime of a system cannot be taken as the lifetime of its least reliable component. In this paper, an original methodology is proposed to estimate the reliability of an optical communication system by using a dedicated system simulator for predictive modeling and design for reliability. First, we present frameworks of point-to-point optical communication systems for space applications where high data rates (or frequency bandwidth), lower cost or mass savings are needed. Optoelectronic devices used in these systems can be similar to those found in terrestrial optical networks. In particular, we report simulation results of transmission performance after introduction of DFB laser diode parameter variations versus time, extrapolated from accelerated tests based on terrestrial or submarine telecommunications qualification standards. Simulations are performed to investigate and predict the consequence of degradations of the laser diode (acting as a frequency carrier) on system performance (eye diagram, quality factor and BER). The studied link consists of 4 × 2.5 Gbit/s WDM channels with direct modulation, equally spaced (0.8 nm) around the 1550 nm central wavelength. Results clearly show that variation of fundamental parameters such as bias current or central wavelength penalizes the dynamic performance of the complete WDM link. In addition, different degradation kinetics of aged laser diodes from the same batch have been implemented to build the final distribution of Q-factor and BER values after 25 years. Over long optical distances, fiber attenuation, EDFA noise, dispersion, PMD, ... penalize network performance, which can be compensated using Forward Error Correction (FEC) coding. Three methods have been investigated in the case of On-Off Keying (OOK) transmission over a unipolar optical channel corrupted by Gaussian noise. Such system simulations highlight the impact of component parameter degradations on whole-network performance, making it possible to optimize various time- and cost-consuming sensitivity analyses at an early stage of system development. Thus the validity of failure criteria in relation to mission profiles can be evaluated, representing a significant part of the general PDfR effort, in particular for aerospace applications.
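
    The quality factor and BER quoted for OOK over a Gaussian-noise channel are linked by the standard relation BER = 0.5 erfc(Q / sqrt(2)); a minimal sketch:

      from math import erfc, sqrt

      def ber_from_q(q):
          """BER of OOK detection in Gaussian noise as a function of the Q-factor."""
          return 0.5 * erfc(q / sqrt(2.0))

      for q in (6.0, 7.0, 8.0):
          print(f"Q = {q:.0f}  ->  BER = {ber_from_q(q):.2e}")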

  15. Semiautomatic segmentation and follow-up of multicomponent low-grade tumors in longitudinal brain MRI studies

    PubMed Central

    Weizman, Lior; Sira, Liat Ben; Joskowicz, Leo; Rubin, Daniel L.; Yeom, Kristen W.; Constantini, Shlomi; Shofty, Ben; Bashat, Dafna Ben

    2014-01-01

    Purpose: Tracking the progression of low grade tumors (LGTs) is a challenging task, due to their slow growth rate and associated complex internal tumor components, such as heterogeneous enhancement, hemorrhage, and cysts. In this paper, the authors show a semiautomatic method to reliably track the volume of LGTs and the evolution of their internal components in longitudinal MRI scans. Methods: The authors' method utilizes a spatiotemporal evolution modeling of the tumor and its internal components. Tumor components gray level parameters are estimated from the follow-up scan itself, obviating temporal normalization of gray levels. The tumor delineation procedure effectively incorporates internal classification of the baseline scan in the time-series as prior data to segment and classify a series of follow-up scans. The authors applied their method to 40 MRI scans of ten patients, acquired at two different institutions. Two types of LGTs were included: Optic pathway gliomas and thalamic astrocytomas. For each scan, a “gold standard” was obtained manually by experienced radiologists. The method is evaluated versus the gold standard with three measures: gross total volume error, total surface distance, and reliability of tracking tumor components evolution. Results: Compared to the gold standard the authors' method exhibits a mean Dice similarity volumetric measure of 86.58% and a mean surface distance error of 0.25 mm. In terms of its reliability in tracking the evolution of the internal components, the method exhibits strong positive correlation with the gold standard. Conclusions: The authors' method provides accurate and repeatable delineation of the tumor and its internal components, which is essential for therapy assessment of LGTs. Reliable tracking of internal tumor components over time is novel and potentially will be useful to streamline and improve follow-up of brain tumors, with indolent growth and behavior. PMID:24784396
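
    The Dice similarity measure used for the volumetric comparison has a direct set-overlap definition, 2|A∩B| / (|A| + |B|); a minimal sketch on toy binary masks:

      import numpy as np

      def dice(seg_a, seg_b):
          """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|)."""
          seg_a = np.asarray(seg_a, dtype=bool)
          seg_b = np.asarray(seg_b, dtype=bool)
          return 2.0 * np.logical_and(seg_a, seg_b).sum() / (seg_a.sum() + seg_b.sum())

      auto = np.zeros((8, 8), dtype=bool); auto[2:6, 2:6] = True
      gold = np.zeros((8, 8), dtype=bool); gold[3:7, 2:6] = True
      print(round(dice(auto, gold), 3))  # 0.75 for these toy masks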

  16. Constellation Ground Systems Launch Availability Analysis: Enhancing Highly Reliable Launch Systems Design

    NASA Technical Reports Server (NTRS)

    Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.

    2010-01-01

    Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on the analysis leading to the Ground Operations Project Preliminary Design Review milestone.

  17. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  18. Stress and Reliability Analysis of a Metal-Ceramic Dental Crown

    NASA Technical Reports Server (NTRS)

    Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.

    1996-01-01

    Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had a 300 N load applied perpendicular to one cusp, a 300 N load applied at 30 degrees from the perpendicular load case, directed toward the center, and a 600 N vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.

  19. Reliability of the Phi angle to assess rotational alignment of the talar component in total ankle replacement.

    PubMed

    Manzi, Luigi; Villafañe, Jorge Hugo; Indino, Cristian; Tamini, Jacopo; Berjano, Pedro; Usuelli, Federico Giuseppe

    2017-11-08

    The purpose of this study was to investigate the test-retest reliability of the Phi angle in patients undergoing total ankle replacement (TAR) for end-stage ankle osteoarthritis (OA) to assess the rotational alignment of the talar component. Retrospective observational cross-sectional study of prospectively collected data. Post-operative anteroposterior radiographs of the foot of 170 patients who underwent TAR for ankle OA were evaluated. Three physicians measured the Phi angle on the 170 randomly sorted and anonymized radiographs on two occasions, one week apart (test and retest conditions); inter- and intra-observer agreement were evaluated. Test-retest reliability of the Phi angle measurement was excellent for patients with Hintegra TAR (ICC=0.995; p<0.001) and Zimmer TAR (ICC=0.995; p<0.001) on radiographs of subjects with ankle OA. There were no significant differences in the reliability of the Phi angle measurement between patients with Hintegra vs. Zimmer implants (p>0.05). Measurement of the Phi angle on weight-bearing dorsoplantar radiographs showed excellent reliability among orthopaedic surgeons in determining the position of the talar component in the axial plane. Level II, cross-sectional study. Copyright © 2017 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.
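
    The agreement statistics above are intraclass correlation coefficients; the sketch below computes the common two-way ICC(2,1) from an ANOVA decomposition on fabricated test-retest angle measurements, without claiming this is the exact ICC variant used in the study.

      import numpy as np

      def icc_2_1(x):
          """ICC(2,1): two-way random effects, absolute agreement, single measurement.
          x: subjects (rows) by raters/occasions (columns)."""
          x = np.asarray(x, dtype=float)
          n, k = x.shape
          grand = x.mean()
          ss_total = ((x - grand) ** 2).sum()
          ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
          ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between occasions
          msr = ss_rows / (n - 1)
          msc = ss_cols / (k - 1)
          mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Fabricated Phi-angle-style measurements on two occasions (degrees).
      angles = np.array([[20.1, 20.4], [17.8, 17.5], [25.3, 25.0],
                         [22.4, 22.9], [19.6, 19.4]])
      print(round(icc_2_1(angles), 3))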

  20. Digital templating for THA: a simple computer-assisted application for complex hip arthritis cases.

    PubMed

    Hafez, Mahmoud A; Ragheb, Gad; Hamed, Adel; Ali, Amr; Karim, Said

    2016-10-01

    Total hip arthroplasty (THA) is the standard procedure for end-stage arthritis of the hip. Its technical success relies on preoperative planning of the surgical procedure and a virtual setup of the operative steps. Digital hip templating is one methodology of preoperative planning for THA, requiring a digital preoperative radiograph and a computer with special software. This is a prospective study involving 23 patients (25 hips) who were candidates for complex THA surgery (unilateral or bilateral). Digital templating is done by radiographic assessment using radiographic magnification correction, leg length discrepancy and correction measurements, acetabular component and femoral component templating, as well as neck resection measurement. The overall accuracy for templating the stem implant's exact size is 81%. This percentage increased to 94% when considering sizing within 1 size. Digital templating has proven to be an effective, reliable and essential technique for preoperative planning and accurate prediction of THA sizing and alignment.

  1. [Variability in nursing workload within Swiss Diagnosis Related Groups].

    PubMed

    Baumberger, Dieter; Bürgin, Reto; Bartholomeyczik, Sabine

    2014-04-01

    Nursing care inputs represent one of the major cost components in the Swiss Diagnosis Related Group (DRG) structure. High and low nursing workloads in individual cases are supposed to balance out within a DRG group. Research results indicating possible problems in this area cannot be reliably extrapolated to SwissDRG. An analysis of nursing workload figures with DRG indicators was carried out in order to decide whether there is a need to develop SwissDRG classification criteria that are specific to nursing care. The case groups were determined with SwissDRG 0.1, and nursing workload with LEP Nursing 2. Robust statistical methods were used. The evaluation of classification accuracy was carried out with R2 as the measurement of variance reduction and the coefficient of homogeneity (CH). To ensure reliable conclusions, statistical tests with bootstrapping methods were performed. The sample included 213 groups with a total of 73,930 cases from ten hospitals. The DRG classification was seen to have limited explanatory power for variability in nursing workload inputs, both for all cases (R2 = 0.16) and for inliers (R2 = 0.32). Nursing workload homogeneity was statistically significantly unsatisfactory (CH < 0.67) in 123 groups, including 24 groups in which it was significantly defective (CH < 0.60). Therefore, there is a high risk of high and low nursing workloads not balancing out in these groups, and, as a result, of financial resources being wrongly allocated. The development of nursing-care-specific SwissDRG classification criteria for improved homogeneity and variance reduction is therefore indicated.

  2. Propulsion system research and development for electric and hybrid vehicles

    NASA Technical Reports Server (NTRS)

    Schwartz, H. J.

    1980-01-01

    An approach to propulsion subsystem technology is presented. Various tests of component reliability are described to aid in the production of better-quality vehicles. Component characterization work is described to provide engineering data to manufacturers on component performance and on important component-propulsion system interactions.

  3. Effect of blade outlet angle on radial thrust of single-blade centrifugal pump

    NASA Astrophysics Data System (ADS)

    Nishi, Y.; Fukutomi, J.; Fujiwara, R.

    2012-11-01

    Single-blade centrifugal pumps are widely used as sewage pumps. However, a large radial thrust acts on a single blade during pump operation because of the geometrical axial asymmetry of the impeller. This radial thrust causes vibrations of the pump shaft, reducing the service life of bearings and shaft seal devices. Therefore, to ensure pump reliability, it is necessary to quantitatively understand the radial thrust and clarify the behavior and generation mechanism. This study investigated the radial thrust acting on two kinds of single-blade centrifugal impellers having different blade outlet angles by experiments and computational fluid dynamics (CFD) analysis. Furthermore, the radial thrust was modeled by a combination of three components, inertia, momentum, and pressure, by applying an unsteady conservation of momentum to this impeller. As a result, the effects of the blade outlet angle on both the radial thrust and the modeled components were clarified. The total head of the impeller with a blade outlet angle of 16 degrees increases more than the impeller with a blade outlet angle of 8 degrees at a large flow rate. In this case, since the static pressure of the circumference of the impeller increases uniformly, the time-averaged value of the radial thrust of both impellers does not change at every flow rate. On the other hand, since the impeller blade loading becomes large, the fluctuation component of the radial thrust of the impeller with the blade outlet angle of 16 degrees increases. If the blade outlet angle increases, the fluctuation component of the inertia component will increase, but the time-averaged value of the inertia component is located near the origin despite changes in the flow rate. The fluctuation component of the momentum component becomes large at all flow rates. Furthermore, although the time-averaged value of the pressure component is almost constant, the fluctuation component of the pressure component becomes large at a large flow rate. In addition to the increase of the fluctuation component of this pressure component, because the fluctuation component of the inertia and momentum components becomes large (as mentioned above), the radial thrust increases at a large flow rate, as is the case for the impeller with a large blade outlet angle.

  4. Psychometric evaluation of the Revised Professional Practice Environment (RPPE) scale.

    PubMed

    Erickson, Jeanette Ives; Duffy, Mary E; Ditomassi, Marianne; Jones, Dorothy

    2009-05-01

    The purpose was to examine the psychometric properties of the Revised Professional Practice Environment (RPPE) scale. Despite renewed focus on studying health professionals' practice environments, there are still few reliable and valid instruments available to assist nurse administrators in decision making. A psychometric evaluation using a random-sample cross-validation procedure (calibration sample [CS], n = 775; validation sample [VS], n = 775) was undertaken. Cronbach alpha internal consistency reliability of the total score (r = 0.93 [CS] and 0.92 [VS]), resulting subscale scores (r range: 0.80-0.87 [CS], 0.81-0.88 [VS]), and principal components analyses with Varimax rotation and Kaiser normalization (8 components, 59.2% variance [CS], 59.7% [VS]) produced almost identical results in both samples. The multidimensional RPPE is a psychometrically sound measure of 8 components of the professional practice environment in the acute care setting and sufficiently reliable and valid for use as independent subscales in healthcare research.

  5. NASA-DoD Lead-Free Electronics Project

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt

    2009-01-01

    In response to concerns about risks from lead-free induced faults to high reliability products, NASA has initiated a multi-year project to provide manufacturers and users with data to clarify the risks of lead-free materials in their products. The project will also be of interest to component manufacturers supplying to high reliability markets. The project was launched in November 2006. The primary technical objective of the project is to undertake comprehensive testing to generate information on failure modes/criteria to better understand the reliability of: (1) Packages (e.g., Thin Small Outline Package [TSOP], Ball Grid Array [BGA], Plastic Dual In-line Package [PDIP]) assembled and reworked with solder interconnects consisting of lead-free alloys (2) Packages (e.g., TSOP, BGA, PDIP) assembled and reworked with solder interconnects consisting of mixed alloys, lead component finish/lead-free solder and lead-free component finish/SnPb solder

  6. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting the remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
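
    A minimal sketch of the distributed idea, under the strong simplifying assumption that the decomposed submodels are independent: each local prognoser predicts a remaining useful life (RUL) from its own submodel, and the system-level prediction is the minimum over subsystems. The linear-degradation stubs below are invented placeholders, not the paper's rover models.

      # Each local prognoser returns a remaining-useful-life estimate (hours) from
      # its own submodel state; these linear-degradation stubs are invented.
      def rul_battery(capacity, fade_per_hr, eod_capacity=0.7):
          return max(0.0, (capacity - eod_capacity) / fade_per_hr)

      def rul_motor(winding_resistance, growth_per_hr, max_resistance=2.0):
          return max(0.0, (max_resistance - winding_resistance) / growth_per_hr)

      local_ruls = {
          "battery": rul_battery(capacity=0.9, fade_per_hr=1e-3),
          "motor": rul_motor(winding_resistance=1.4, growth_per_hr=2e-3),
      }

      # System-level RUL: the first subsystem to reach end of life ends the system.
      system_rul = min(local_ruls.values())
      print(local_ruls, f"-> system RUL = {system_rul:.0f} h")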

  7. Bonded repair of composite aircraft structures: A review of scientific challenges and opportunities

    NASA Astrophysics Data System (ADS)

    Katnam, K. B.; Da Silva, L. F. M.; Young, T. M.

    2013-08-01

    Advanced composite materials have gained popularity in high-performance structural designs such as aerospace applications that require lightweight components with superior mechanical properties in order to perform in demanding service conditions as well as provide energy efficiency. However, one of the major challenges that the aerospace industry faces with advanced composites - because of their inherent complex damage behaviour - is structural repair. Composite materials are primarily damaged by mechanical loads and/or environmental conditions. If material damage is not extensive, structural repair is the only feasible solution as replacing the entire component is not cost-effective in many cases. Bonded composite repairs (e.g. scarf patches) are generally preferred as they provide enhanced stress transfer mechanisms, joint efficiencies and aerodynamic performance. With an increased usage of advanced composites in primary and secondary aerospace structural components, it is thus essential to have robust, reliable and repeatable structural bonded repair procedures to restore damaged composite components. But structural bonded repairs, especially with primary structures, pose several scientific challenges with the current existing repair technologies. In this regard, the area of structural bonded repair of composites is broadly reviewed - starting from damage assessment to automation - to identify current scientific challenges and future opportunities.

  8. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend to even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
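
    A minimal sketch of the Bayesian ingredient described above: combine a prior on per-test success probability (e.g., from similarity analysis of comparable engines) with test outcomes through a conjugate Beta-Binomial update. The prior and test counts are illustrative assumptions.

      from scipy.stats import beta

      # Prior belief about per-test success probability, e.g. from similarity analysis.
      a0, b0 = 19.0, 1.0           # prior mean 0.95

      successes, failures = 28, 1  # assumed hot-fire test outcomes

      posterior = beta(a0 + successes, b0 + failures)
      print(f"posterior mean reliability: {posterior.mean():.4f}")
      print(f"90% credible interval: {posterior.ppf(0.05):.4f} - {posterior.ppf(0.95):.4f}")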

  9. The Interplay of Surface Mount Solder Joint Quality and Reliability of Low Volume SMAs

    NASA Technical Reports Server (NTRS)

    Ghaffarian, R.

    1997-01-01

    Spacecraft electronics including those used at the Jet Propulsion Laboratory (JPL), demand production of highly reliable assemblies. JPL has recently completed an extensive study, funded by NASA's code Q, of the interplay between manufacturing defects and reliability of ball grid array (BGA) and surface mount electronic components.

  10. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  11. Reliability of Radioisotope Stirling Convertor Linear Alternator

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.

    2006-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of reliable performance over the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. The fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.

  12. Food allergy in dogs and cats: a review.

    PubMed

    Verlinden, A; Hesta, M; Millet, S; Janssens, G P J

    2006-01-01

    Food allergy (FA) is defined as "all immune-mediated reactions following food intake," in contrast with food intolerance (FI), which is non-immune-mediated. Impairment of the mucosal barrier and loss of oral tolerance are risk factors for the development of FA. Type I, III, and IV hypersensitivity reactions are the most likely immunologic mechanisms. Food allergens are (glyco-)proteins with a molecular weight from 10-70 kDa and are resistant to treatment with heat, acid, and proteases. The exact prevalence of FA in dogs and cats remains unknown. There is no breed, sex or age predilection, although some breeds are commonly affected. Before the onset of clinical signs, the animals have been fed the offending food components for at least two years, although some animals are less than a year old. FA is a non-seasonal disease with skin and/or gastrointestinal disorders. Pruritus is the main complaint and is mostly corticoid-resistant. In 20-30% of the cases, dogs and cats have concurrent allergic diseases (atopy/flea-allergic dermatitis). A reliable diagnosis can only be made with dietary elimination-challenge trials. Provocation testing is necessary for the identification of the causative food component(s). Therapy of FA consists of avoiding the offending food component(s).

  13. Diagnostic implications of IDH1-R132H and OLIG2 expression patterns in rare and challenging glioblastoma variants.

    PubMed

    Joseph, Nancy M; Phillips, Joanna; Dahiya, Sonika; M Felicella, Michelle; Tihan, Tarik; Brat, Daniel J; Perry, Arie

    2013-03-01

    Recent work has demonstrated that nearly all diffuse gliomas display nuclear immunoreactivity for the bHLH transcription factor OLIG2, and the R132H mutant isocitrate dehydrogenase 1 (IDH1) protein is expressed in the majority of diffuse gliomas other than primary glioblastoma. However, these antibodies have not been widely applied to rarer glioblastoma variants, which can be diagnostically challenging when the astrocytic features are subtle. We therefore surveyed the expression patterns of OLIG2 and IDH1 in 167 non-conventional glioblastomas, including 45 small cell glioblastomas, 45 gliosarcomas, 34 glioblastomas with primitive neuroectodermal tumor-like foci (PNET-like foci), 23 with an oligodendroglial component, 11 granular cell glioblastomas, and 9 giant cell glioblastomas. OLIG2 was strongly expressed in all glioblastomas with oligodendroglial component, 98% of small cell glioblastomas, and all granular cell glioblastomas, the latter being particularly helpful in ruling out macrophage-rich lesions. In 74% of glioblastomas with PNET-like foci, OLIG2 expression was retained in the PNET-like foci, providing a useful distinction from central nervous system PNETs. The glial component of gliosarcomas was OLIG2 positive in 93% of cases, but only 14% retained focal expression in the sarcomatous component; as such this marker would not reliably distinguish these from pure sarcoma in most cases. OLIG2 was expressed in 67% of giant cell glioblastomas. IDH1 was expressed in 55% of glioblastomas with oligodendroglial component, 15% of glioblastomas with PNET-like foci, 7% of gliosarcomas, and none of the small cell, granular cell, or giant cell glioblastomas. This provides further support for the notion that most glioblastomas with oligodendroglial component are secondary, while small cell glioblastomas, granular cell glioblastomas, and giant cell glioblastomas are primary variants. Therefore, in one of the most challenging differential diagnoses, IDH1 positivity could provide strong support for glioblastoma with oligodendroglial component, while essentially excluding small cell glioblastoma.

  14. Traditional neuropsychological correlates and reliability of the automated neuropsychological assessment metrics-4 battery for Parkinson's disease.

    PubMed

    Hawkins, Keith A; Jennings, Danna; Vincent, Andrea S; Gilliland, Kirby; West, Adrienne; Marek, Kenneth

    2012-08-01

    The automated neuropsychological assessment metrics battery-4 for PD offers the promise of a computerized approach to cognitive assessment. To assess its utility, the ANAM4-PD was administered to 72 PD patients and 24 controls along with a traditional battery. Reliability was assessed by retesting 26 patients. The cognitive efficiency score (CES; a global score) exhibited high reliability (r = 0.86). Constituent variables exhibited lower reliability. The CES correlated strongly with the traditional battery global score, but displayed weaker relationships to UPDRS scores than the traditional score. Multivariate analysis of variance revealed a significant difference between the patient and control groups in ANAM4-PD performance, with three ANAM4-PD tests, math, tower, and pursuit tracking, displaying sizeable differences. In discriminant analyses these variables were as effective as the total ANAM4-PD in classifying cases designated as impaired based on traditional variables. Principal components analyses uncovered fewer factors in the ANAM4-PD relative to the traditional battery. ANAM4-PD variables correlated at higher levels with traditional motor and processing speed variables than with untimed executive, intellectual or memory variables. The ANAM4-PD displays high global reliability, but variable subtest reliability. The battery assesses a narrower range of cognitive functions than traditional tests, and discriminates between patients and controls less effectively. Three ANAM4-PD tests, pursuit tracking, math, and tower performed as well as the total ANAM4-PD in classifying patients as cognitively impaired. These findings could guide the refinement of the ANAM4-PD as an efficient method of screening for mild to moderate cognitive deficits in PD patients. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Ceramic bearings with bilayer coating in cementless total hip arthroplasty. A safe solution. A retrospective study of one hundred and twenty six cases with more than ten years' follow-up.

    PubMed

    Ferreira, André; Aslanian, Thierry; Dalin, Thibaud; Picaud, Jean

    2017-05-01

    Cementless total hip arthroplasty (THA) with ceramic-on-ceramic bearings has provided good clinical results. To ensure longevity, good-quality fixation of the implants is mandatory. Different surface treatments have been used, with inconsistent results. We hypothesized that a "bilayer coating" applied to both THA components using validated technology would provide long-lasting and reliable bone fixation. We studied the survival and bone integration of a continuous, single-surgeon, retrospective series of 126 THA cases (116 patients) with an average follow-up of 12.2 years (minimum 10 years). The THA consisted of cementless implants with a bilayer coating of titanium and hydroxyapatite and used a ceramic-on-ceramic bearing. With surgical revision for any cause (except infection) as the end point, THA survival was 95.1% at 13 years. Stem (98.8%) and cup (98.6%) survival was similar at 13 years. Bone integration was confirmed in 100% of implants (Engh-Massin score of 17.42 and ARA score of 5.94). There were no instances of loosening. Revisions were performed because of instability (1.6%), prosthetic impingement, or material-related issues. A bilayer titanium and hydroxyapatite coating provides strong, fast, and reliable osseointegration, without deterioration at the interface or release of damaging particles. The good clinical outcomes expected of ceramic bearings were achieved, as was equally reliable stem and cup fixation.
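
    The survival figures above come from standard time-to-event analysis. As a minimal sketch of how such implant survival estimates are produced, assuming the lifelines package and hypothetical follow-up data (not the study's 126 cases):

        # Kaplan-Meier estimate of implant survival (illustrative sketch).
        import numpy as np
        from lifelines import KaplanMeierFitter

        rng = np.random.default_rng(0)
        followup_years = rng.uniform(10, 15, size=126)   # hypothetical follow-up
        revised = rng.random(126) < 0.05                 # ~5% revised for any cause

        kmf = KaplanMeierFitter()
        kmf.fit(durations=followup_years, event_observed=revised, label="THA survival")

        # Survival probability at 13 years, analogous to the 95.1% reported above.
        print(kmf.survival_function_at_times(13.0))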

  16. The work and social adjustment scale: reliability, sensitivity and value.

    PubMed

    Zahra, Daniel; Qureshi, Adam; Henley, William; Taylor, Rod; Quinn, Cath; Pooler, Jill; Hardy, Gillian; Newbold, Alexandra; Byng, Richard

    2014-06-01

    To investigate the psychometric properties of the Work and Social Adjustment Scale (WSAS) as an outcome measure for the Improving Access to Psychological Therapies programme, assessing its value as an addition to the Patient Health Questionnaire (PHQ-9) and the Generalised Anxiety Disorder questionnaire (GAD-7). Little research has investigated these properties to date. Reliability and responsiveness to change were assessed using data from 4,835 patients. Principal components analysis was used to determine whether the WSAS measures a factor distinct from the PHQ-9 and GAD-7. The WSAS measures a distinct social functioning factor, has high internal reliability, and is sensitive to treatment effects. The WSAS, PHQ-9, and GAD-7 perform comparably on measures of reliability and sensitivity. The WSAS also measures a distinct social functioning component, suggesting it has potential as an additional outcome measure.
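
    As an illustration of the factor check described above, a minimal principal components sketch, assuming scikit-learn and a simulated item-response matrix in place of the 4,835-patient dataset:

        # Sketch: do WSAS items load on a component distinct from PHQ-9/GAD-7?
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n = 500
        mood = rng.normal(size=(n, 1))     # latent distress factor
        social = rng.normal(size=(n, 1))   # latent social-functioning factor
        phq = mood + 0.3 * rng.normal(size=(n, 9))     # 9 PHQ-9 items
        gad = mood + 0.3 * rng.normal(size=(n, 7))     # 7 GAD-7 items
        wsas = social + 0.3 * rng.normal(size=(n, 5))  # 5 WSAS items

        X = StandardScaler().fit_transform(np.hstack([phq, gad, wsas]))
        pca = PCA(n_components=2).fit(X)

        # Loadings: WSAS items should dominate one component if the factor is distinct.
        loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
        print(np.round(loadings, 2))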

  17. Impact of broad-specification fuels on future jet aircraft [engine components and performance].

    NASA Technical Reports Server (NTRS)

    Grobman, J. S.

    1978-01-01

    The effects that broad specification fuels have on airframe and engine components were discussed along with the improvements in component technology required to use broad specification fuels without sacrificing performance, reliability, maintainability, or safety.

  18. Examining the factor structure of MUIS-C scale among baby boomers with hepatitis C.

    PubMed

    Reinoso, Humberto; Türegün, Mehmet

    2016-11-01

    Baby boomers account for two out of every three cases of hepatitis C infection in the U.S. The aim of this study was to conduct an exploratory factor analysis directed at supporting the use of the MUIS-C as a reliable instrument for measuring illness uncertainty among baby boomers with hepatitis C. The steps of a typical principal component analysis (PCA) with an oblique rotation were applied to a sample of 146 participants; the sampling adequacy of items was examined via the Kaiser-Meyer-Olkin (KMO) measure, and Bartlett's sphericity test was used to assess the appropriateness of conducting a factor analysis. A two-factor structure was obtained using Horn's parallel analysis method. The two factors explained a cumulative total of 45.8% of the variance. The results of the analyses indicated that the MUIS-C was a valid and reliable instrument and potentially suitable for use in the baby boomer population diagnosed with hepatitis C. Published by Elsevier Inc.
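
    A sketch of the same workflow (KMO, Bartlett's test, and Horn's parallel analysis), assuming the factor_analyzer package and simulated item responses standing in for the 146-participant sample:

        # KMO, Bartlett's sphericity, and Horn's parallel analysis (illustrative).
        import numpy as np
        from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

        rng = np.random.default_rng(2)
        n, k = 146, 23                      # participants x items (assumed item count)
        latent = rng.normal(size=(n, 2))
        load = rng.uniform(0.4, 0.8, size=(2, k))
        X = latent @ load + rng.normal(scale=0.8, size=(n, k))

        kmo_per_item, kmo_overall = calculate_kmo(X)
        chi2, p = calculate_bartlett_sphericity(X)
        print(f"KMO = {kmo_overall:.2f}, Bartlett chi2 = {chi2:.1f} (p = {p:.3g})")

        # Horn's parallel analysis: retain components whose eigenvalues exceed
        # the mean eigenvalues of random data of the same shape.
        eig_real = np.sort(np.linalg.eigvalsh(np.corrcoef(X.T)))[::-1]
        eig_rand = np.mean(
            [np.sort(np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, k)).T)))[::-1]
             for _ in range(100)], axis=0)
        print("retain:", int(np.sum(eig_real > eig_rand)), "factors")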

  19. Improved source inversion from joint measurements of translational and rotational ground motions

    NASA Astrophysics Data System (ADS)

    Donner, S.; Bernauer, M.; Reinwald, M.; Hadziioannou, C.; Igel, H.

    2017-12-01

    Waveform inversion for seismic point (moment tensor) and kinematic sources is a standard procedure. However, especially at local and regional distances, a lack of appropriate velocity models, the sparsity of station networks, or a low signal-to-noise ratio combined with more complex waveforms hampers the successful retrieval of reliable source solutions. We assess the potential of rotational ground motion recordings to increase the resolution power and reduce non-uniqueness for point and kinematic source solutions. Based on synthetic waveform data, we perform a Bayesian (i.e. probabilistic) inversion. Thus, we avoid the subjective selection of the most reliable solution according to the lowest misfit or some other constructed criterion. In addition, we obtain unbiased measures of resolution and possible trade-offs. Testing different earthquake mechanisms and scenarios, we show that the resolution of the source solutions can be improved significantly; depth-dependent components in particular show significant improvement. In addition to synthetic data from full station networks, we also tested sparse-network and single-station cases.

  20. Measurement of Postmortem Pupil Size: A New Method with Excellent Reliability and Its Application to Pupil Changes in the Early Postmortem Period.

    PubMed

    Fleischer, Luise; Sehner, Susanne; Gehl, Axel; Riemer, Martin; Raupach, Tobias; Anders, Sven

    2017-05-01

    Measurement of postmortem pupil width is a potential component of death time estimation. However, no standardized measurement method has been described. We analyzed a total of 71 digital images for pupil-iris ratio using the software ImageJ. Images were analyzed three times by four different examiners. In addition, serial images from 10 cases were taken between 2 and 50 h postmortem to detect spontaneous pupil changes. Intra- and inter-rater reliability of the method was excellent (ICC > 0.95). The method is observer-independent and yields consistent results, and images can be digitally stored and re-evaluated. The method therefore seems highly suitable for forensic and scientific purposes. While statistical analysis of spontaneous pupil changes revealed a significant quartic polynomial dependence on postmortem time (p = 0.001), an obvious pattern was not detected. These results do not indicate suitability of spontaneous pupil changes for forensic death time estimation, as formerly suggested. © 2016 American Academy of Forensic Sciences.
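
    For the excellent-reliability claim above, a minimal sketch of an inter-rater ICC computation in long format, assuming the pingouin package and hypothetical pupil-iris ratios from four examiners:

        # Inter-rater ICC for pupil-iris ratio measurements (illustrative data).
        import numpy as np
        import pandas as pd
        import pingouin as pg

        rng = np.random.default_rng(3)
        n_images, n_raters = 71, 4
        true_ratio = rng.uniform(0.2, 0.6, size=n_images)

        rows = []
        for rater in range(n_raters):
            noise = rng.normal(scale=0.01, size=n_images)  # small measurement error
            for img in range(n_images):
                rows.append({"image": img, "examiner": f"rater{rater}",
                             "ratio": true_ratio[img] + noise[img]})
        df = pd.DataFrame(rows)

        icc = pg.intraclass_corr(data=df, targets="image", raters="examiner", ratings="ratio")
        print(icc[["Type", "ICC", "CI95%"]])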

  1. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization (SDO) methodology has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution of the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to a 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity would be required for a near-zero rate of failure, corresponding to a reliability of unity (p = 1). Weight can be reduced to a small value for the most failure-prone design, with a reliability that approaches zero (p = 0). Reliability can be chosen differently for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas the requirement can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code as the deterministic analysis tool, (2) the fast probabilistic integrator (the FPI module of the NESSUS software) as the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards as the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
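
    The inverted-S trade between weight and reliability can be reproduced with a toy stress-strength model, a sketch assuming normally distributed load and material strength (invented numbers, not the paper's NESSUS/CometBoards machinery):

        # Toy stress-strength design: size a cross-section A so that
        # P(strength > load / A) = p, then weight ~ A.  Illustrative only.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import brentq

        mu_L, sd_L = 1000.0, 150.0   # load (N), mean and std dev
        mu_S, sd_S = 250.0, 25.0     # material strength (N/mm^2)

        def reliability(area):
            # P(strength - load/area > 0) for independent normals
            mean = mu_S - mu_L / area
            sd = np.sqrt(sd_S**2 + (sd_L / area) ** 2)
            return norm.cdf(mean / sd)

        for p in [0.5, 0.9, 0.99, 0.999, 0.999999]:
            area = brentq(lambda a: reliability(a) - p, 1e-3, 1e3)
            print(f"p = {p:9.6f}  ->  relative weight {area:7.3f}")
        # Weight grows slowly near p = 0.5 and blows up as p -> 1,
        # tracing the inverted-S shape described above.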

  2. Inter-rater reliability and generalizability of patient note scores using a scoring rubric based on the USMLE Step-2 CS format.

    PubMed

    Park, Yoon Soo; Hyderi, Abbas; Bordage, Georges; Xing, Kuan; Yudkowsky, Rachel

    2016-10-01

    Recent changes to the patient note (PN) format of the United States Medical Licensing Examination have challenged medical schools to improve the instruction and assessment of students taking the Step-2 clinical skills examination. The purpose of this study was to gather validity evidence regarding response process and internal structure, focusing on inter-rater reliability and generalizability, to determine whether a locally-developed PN scoring rubric and scoring guidelines could yield reproducible PN scores. A randomly selected subsample of historical data (post-encounter PN from 55 of 177 medical students) was rescored by six trained faculty raters in November-December 2014. Inter-rater reliability (% exact agreement and kappa) was calculated for five standardized patient cases administered in a local graduation competency examination. Generalizability studies were conducted to examine the overall reliability. Qualitative data were collected through surveys and a rater-debriefing meeting. The overall inter-rater reliability (weighted kappa) was .79 (Documentation = .63, Differential Diagnosis = .90, Justification = .48, and Workup = .54). The majority of score variance was due to case specificity (13 %) and case-task specificity (31 %), indicating differences in student performance by case and by case-task interactions. Variance associated with raters and its interactions were modest (<5 %). Raters felt that justification was the most difficult task to score and that having case and level-specific scoring guidelines during training was most helpful for calibration. The overall inter-rater reliability indicates high level of confidence in the consistency of note scores. Designs for scoring notes may optimize reliability by balancing the number of raters and cases.
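
    A sketch of the weighted-kappa computation underlying the figures above, assuming scikit-learn and two hypothetical raters scoring the same notes on a 0-3 rubric:

        # Weighted kappa between two raters (illustrative scores, 0-3 rubric).
        from sklearn.metrics import cohen_kappa_score

        rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2, 3, 0]
        rater_b = [3, 2, 1, 1, 0, 3, 3, 1, 2, 2, 3, 0]

        # Quadratic weights penalize large disagreements more than near-misses,
        # appropriate for ordinal rubric scores.
        kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
        print(f"weighted kappa = {kappa:.2f}")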

  3. Simplified Phased-Mission System Analysis for Systems with Independent Component Repairs

    NASA Technical Reports Server (NTRS)

    Somani, Arun K.

    1996-01-01

    Accurate reliability analysis of a system requires accounting for all major variations in the system's operation. Most reliability analyses assume that the system configuration, success criteria, and component behavior remain the same throughout a mission; however, multiple phases are natural. We present a new computationally efficient technique for the analysis of phased-mission systems where the operational states of a system can be described by combinations of component states (such as fault trees or assertions). Moreover, individual components may be repaired, if failed, as part of system operation, but repairs are independent of the system state. For repairable systems, Markov analysis techniques are used, but they suffer from state-space explosion, which limits the size of the systems that can be analyzed and is computationally expensive. We avoid the state-space explosion. A phase algebra is used to account for the effects of variable configurations, repairs, and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. We demonstrate our technique by means of several examples and present numerical results to show the effects of phases and repairs on system reliability/availability.
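
    The paper's phase-algebra technique is analytic; as a rough cross-check of the same phenomenon, here is a brute-force Monte Carlo toy for a two-phase mission with independent repairs, written from scratch rather than taken from the paper:

        # Monte Carlo cross-check for a two-phase mission with independent repairs
        # (an illustrative toy, not the paper's analytic phase-algebra technique).
        # Phase 1 needs A OR B up; phase 2 needs A AND B up.  Failed units may be
        # repaired between phases, independently of the system state.
        import numpy as np

        rng = np.random.default_rng(4)
        trials = 200_000
        p_fail = np.array([0.05, 0.10])  # per-phase failure probabilities for A, B
        p_repair = 0.5                   # between-phase repair probability

        success = 0
        for _ in range(trials):
            up = rng.random(2) > p_fail              # component states, end of phase 1
            if not up.any():                         # phase 1 criterion: A OR B
                continue
            # Working units face new failure draws; failed units may be repaired.
            up = np.where(up, rng.random(2) > p_fail, rng.random(2) < p_repair)
            if up.all():                             # phase 2 criterion: A AND B
                success += 1

        print(f"estimated mission reliability ~ {success / trials:.4f}")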

  4. System life and reliability modeling for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Brikmanis, C. K.

    1986-01-01

    A computer program which simulates the life and reliability of helicopter transmissions is presented. The helicopter transmissions may be composed of spiral bevel gear units and planetary gear units - alone, in series, or in parallel. The spiral bevel gear units may have either single or dual input pinions, which are identical. The planetary gear units may be stepped or unstepped, and the number of planet gears carried by the planet arm may be varied. The reliability analysis used in the program is based on the Weibull-distributed lives of the transmission components. The program calculates the system lives and dynamic capacities of the transmission components and of the transmission as a whole. The system life is defined as the life of the component or transmission at an output torque at which the probability of survival is 90 percent. The dynamic capacity of a component or transmission is defined as the output torque which can be applied for one million output shaft cycles at a probability of survival of 90 percent. A complete summary of the life and dynamic capacity results is produced by the program.
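
    A sketch of the underlying computation: combining component Weibull life distributions into a system life at 90 percent survival, with hypothetical scale and shape parameters rather than real transmission data:

        # System L10 life from component Weibull lives (illustrative parameters).
        # R_sys(t) = prod_i exp(-(t/eta_i)^beta_i); solve R_sys(t) = 0.90.
        import numpy as np
        from scipy.optimize import brentq

        # (eta = characteristic life in output-shaft megacycles, beta = Weibull slope)
        components = [(5.0, 2.5),   # bevel pinion
                      (8.0, 2.5),   # bevel gear
                      (6.0, 1.5),   # planet bearings
                      (9.0, 2.0)]   # sun/ring mesh

        def system_survival(t):
            return np.exp(-sum((t / eta) ** beta for eta, beta in components))

        l10_system = brentq(lambda t: system_survival(t) - 0.90, 1e-6, 50.0)
        print(f"system L10 life ~ {l10_system:.3f} megacycles")
        # The system life is shorter than any single component's L10,
        # since every component must survive.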

  5. Inter-rater and test-retest reliability of quality assessments by novice student raters using the Jadad and Newcastle-Ottawa Scales.

    PubMed

    Oremus, Mark; Oremus, Carolina; Hall, Geoffrey B C; McKinnon, Margaret C

    2012-01-01

    Quality assessment of included studies is an important component of systematic reviews. The authors investigated inter-rater and test-retest reliability for quality assessments conducted by inexperienced student raters. Student raters received a training session on quality assessment using the Jadad Scale for randomised controlled trials and the Newcastle-Ottawa Scale (NOS) for observational studies. Raters were randomly assigned into five pairs, and each pair independently rated the quality of 13-20 articles. These articles were drawn from a pool of 78 papers examining cognitive impairment following electroconvulsive therapy to treat major depressive disorder. The articles were randomly distributed to the raters. Two months later, each rater re-assessed the quality of half of their assigned articles. The raters were 10 students taking McMaster Integrative Neuroscience Discovery and Study Program courses. The authors measured inter-rater reliability using κ and the intraclass correlation coefficient type 2,1, or ICC(2,1), and test-retest reliability using ICC(2,1). Inter-rater reliability varied by scale question. For the six-item Jadad Scale, question-specific κs ranged from 0.13 (95% CI -0.11 to 0.37) to 0.56 (95% CI 0.29 to 0.83). The ranges were -0.14 (95% CI -0.28 to 0.00) to 0.39 (95% CI -0.02 to 0.81) for the NOS cohort scale and -0.20 (95% CI -0.49 to 0.09) to 1.00 (95% CI 1.00 to 1.00) for the NOS case-control scale. For overall scores on the six-item Jadad Scale, ICC(2,1)s for inter-rater and test-retest reliability (accounting for systematic differences between raters) were 0.32 (95% CI 0.08 to 0.52) and 0.55 (95% CI 0.41 to 0.67), respectively. Corresponding ICC(2,1)s for the NOS cohort scale were -0.19 (95% CI -0.67 to 0.35) and 0.62 (95% CI 0.25 to 0.83); for the NOS case-control scale, the ICC(2,1)s were 0.46 (95% CI -0.13 to 0.92) and 0.83 (95% CI 0.48 to 0.95). Inter-rater reliability was generally poor to fair, and test-retest reliability was fair to excellent. A pilot rating phase following rater training may be one way to improve agreement.

  6. On fatigue crack growth under random loading

    NASA Astrophysics Data System (ADS)

    Zhu, W. Q.; Lin, Y. K.; Lei, Y.

    1992-09-01

    A probabilistic analysis of the fatigue crack growth, fatigue life, and reliability of a structural or mechanical component is presented on the basis of fracture mechanics and the theory of random processes. The material resistance to fatigue crack growth and the time history of the stress are assumed to be random. Analytical expressions are obtained for the special case in which the random stress is a stationary narrow-band Gaussian random process and a randomized Paris-Erdogan law is applicable. As an example, the analytical method is applied to a plate with a central crack, and the results are compared with those obtained from digital Monte Carlo simulations.
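
    A sketch of the Monte Carlo side of such a comparison, using the closed-form Paris-Erdogan integration with a randomized (lognormal) coefficient C and made-up material constants:

        # Monte Carlo fatigue life under a randomized Paris-Erdogan law
        # (illustrative constants; da/dN = C * (dK)^m, dK = dsigma * Y * sqrt(pi * a)).
        import numpy as np

        rng = np.random.default_rng(5)
        m = 3.0                   # Paris exponent
        C_median = 1e-12          # median Paris coefficient (units consistent w/ MPa, m)
        dsigma, Y = 100.0, 1.12   # stress range (MPa) and geometry factor
        a0, ac = 1e-3, 2e-2       # initial and critical crack lengths (m)

        def cycles_to_failure(C):
            # Closed-form integration of da/dN for m != 2.
            k = C * (dsigma * Y * np.sqrt(np.pi)) ** m
            return (ac ** (1 - m / 2) - a0 ** (1 - m / 2)) / (k * (1 - m / 2))

        C_samples = C_median * rng.lognormal(mean=0.0, sigma=0.3, size=100_000)
        lives = cycles_to_failure(C_samples)
        print(f"median life ~ {np.median(lives):.3e} cycles")
        print(f"P(life > 1e6 cycles) ~ {(lives > 1e6).mean():.3f}")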

  7. Detecting incapacity of a quantum channel.

    PubMed

    Smith, Graeme; Smolin, John A

    2012-06-08

    Using unreliable or noisy components for reliable communication requires error correction. But which noise processes can support information transmission, and which are too destructive? For classical systems any channel whose output depends on its input has the capacity for communication, but the situation is substantially more complicated in the quantum setting. We find a generic test for incapacity based on any suitable forbidden transformation--a protocol for communication with a channel passing our test would also allow one to implement the associated forbidden transformation. Our approach includes both known quantum incapacity tests--positive partial transposition and antidegradability (no cloning)--as special cases, putting them both on the same footing.

  8. A model of scientific attitudes assessment by observation in physics learning based scientific approach: case study of dynamic fluid topic in high school

    NASA Astrophysics Data System (ADS)

    Yusliana Ekawati, Elvin

    2017-01-01

    This study aimed to produce an observation-based model for assessing scientific attitudes in physics learning built on the scientific approach (a case study of the dynamic fluid topic in high school). Instrument development adapted the Plomp model; the procedure included initial investigation, design, construction, testing, evaluation, and revision. Testing was carried out in Surakarta, and the resulting data were analyzed using Aiken's formula to determine the content validity of the instrument, Cronbach's alpha to determine its reliability, and confirmatory factor analysis with the LISREL 8.50 program to establish construct validity. The results of this research were a conceptual model, instruments, and guidelines for assessing scientific attitudes by observation. The assessment instrument covers the components of curiosity, objectivity, suspended judgment, open-mindedness, honesty, and perseverance. The construct validity of the instrument was satisfactory (factor loadings > 0.3), and its reliability was good, with a Cronbach's alpha of 0.899 (> 0.7). Goodness-of-fit testing showed that the theoretical model was supported by the empirical data: p-value 0.315 (≥ 0.05) and RMSEA 0.027 (≤ 0.08).
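
    Since the abstract leans on Cronbach's alpha, a self-contained sketch of that computation on a hypothetical item-score matrix (not the Surakarta data):

        # Cronbach's alpha for an observation-sheet score matrix (illustrative).
        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_subjects, k_items) array of item scores."""
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(6)
        ability = rng.normal(size=(120, 1))                      # latent attitude
        items = ability + rng.normal(scale=0.6, size=(120, 6))   # 6 rubric components
        print(f"alpha = {cronbach_alpha(items):.3f}")            # lands near 0.9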

  9. Structured assessment of current mental state in clinical practice: an international study of the reliability and validity of the Current Psychiatric State interview, CPS-50.

    PubMed

    Falloon, I R H; Mizuno, M; Murakami, M; Roncone, R; Unoka, Z; Harangozo, J; Pullman, J; Gedye, R; Held, T; Hager, B; Erickson, D; Burnett, K

    2005-01-01

    To develop a reliable standardized assessment of psychiatric symptoms for use in clinical practice. A 50-item interview, the Current Psychiatric State 50 (CPS-50), was used to assess 237 patients with a range of psychiatric diagnoses. Ratings were made by interviewers after a 2-day training. Comparisons of inter-rater reliability on each item and on eight clinical subscales were made across four international centres and between psychiatrists and non-psychiatrists. A principal components analysis was used to validate these clinical scales. Acceptable inter-rater reliability (intra-class coefficient > 0.80) was found for 46 of the 50 items, and for all eight subscales. There was no difference between centres or between psychiatrists and non-psychiatrists. The principal components analysis factors were similar to the clinical scales. The CPS-50 is a reliable standardized assessment of current mental status that can be used in clinical practice by all mental health professionals after brief training. Blackwell Munksgaard 2004

  10. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting it. A weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault interval of system components by integrating experts' knowledge and experience, expressed as possibilities of failure of the bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example, and the results of the proposed method are compared with those of existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
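
    A simplified sketch of the alpha-cut interval arithmetic used in such analyses, with triangular failure possibilities and conventional AND/OR gate formulas; note this toy uses ordinary interval arithmetic for clarity, not the paper's weakest-t-norm (Tω) operators, which yield narrower fault intervals:

        # Alpha-cut fault-tree evaluation with triangular fuzzy failure probabilities.
        import numpy as np

        def alpha_cut(tri, alpha):
            """Interval of a triangular fuzzy number (lo, mode, hi) at level alpha."""
            lo, m, hi = tri
            return (lo + alpha * (m - lo), hi - alpha * (hi - m))

        def gate_and(intervals):   # all basic events occur: product of bounds
            return (np.prod([i[0] for i in intervals]),
                    np.prod([i[1] for i in intervals]))

        def gate_or(intervals):    # at least one occurs: 1 - prod(1 - p)
            return (1 - np.prod([1 - i[0] for i in intervals]),
                    1 - np.prod([1 - i[1] for i in intervals]))

        # Top = (E1 AND E2) OR E3, basic-event possibilities as triangles
        e1, e2, e3 = (0.01, 0.02, 0.04), (0.05, 0.08, 0.12), (0.001, 0.002, 0.004)
        for alpha in (0.0, 0.5, 1.0):
            cut = gate_or([gate_and([alpha_cut(e1, alpha), alpha_cut(e2, alpha)]),
                           alpha_cut(e3, alpha)])
            print(f"alpha = {alpha:.1f}: top-event interval = "
                  f"[{cut[0]:.5f}, {cut[1]:.5f}]")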

  11. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk and providing system safety for space programs from ground to space. S&MA is divided into four divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) analysis and Probabilistic Risk Assessment (PRA) to ensure that decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight Programs. For space missions, payload is a critical concern; balancing which hardware should be replaced at the component level versus by Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of their historical record and the similarity of the ISS environment to that of a space flight mission. MaRS combines several systems: the International Space Station PART database for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operation hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the Excel spreadsheet comprises information on ISS components including operation hours, random/nonrandom failures, software/hardware failures, quantity, orbital replaceable units (ORUs), date of placement, unit weight, frequency of part, and so on. The motivation for creating the database is the development of a mass/reliability parametric model to estimate the mass required for replacement parts. Once complete, engineers working on future space flight missions will have access to mean-time-to-failure figures for parts along with their masses, which will support sound decisions for long-duration space flight missions.
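
    A minimal pandas sketch of the kind of join MaRS performs, with hypothetical CSV exports standing in for the PART/VMDB/MADS/HHRS sources (the actual system is populated through a Visual Basic application):

        # Illustrative join of failure and mass data, in the spirit of MaRS.
        # File names and columns are hypothetical stand-ins for the real sources.
        import pandas as pd

        failures = pd.read_csv("part_failures.csv")  # part_id, failures, op_hours
        masses = pd.read_csv("unit_weights.csv")     # part_id, unit_mass_kg

        merged = failures.merge(masses, on="part_id", how="inner")
        merged["mtbf_hours"] = merged["op_hours"] / merged["failures"].clip(lower=1)

        # A crude spares-planning view: heavy parts with short MTBF dominate logistics.
        merged["mass_per_mtbf"] = merged["unit_mass_kg"] / merged["mtbf_hours"]
        print(merged.sort_values("mass_per_mtbf", ascending=False).head(10))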

  12. On the use of genetic algorithm to optimize industrial assets lifecycle management under safety and budget constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lonchampt, J.; Fessart, K.

    2013-07-01

    The purpose of this paper is to describe a method and tool dedicated to optimizing investment planning for industrial assets. These investments may be preventive maintenance tasks, asset enhancements, or logistic investments such as spare parts purchases. The three methodological points to investigate in such an issue are: (1) the measure of the profitability of a portfolio of investments; (2) the selection and planning of an optimal set of investments; and (3) the measure of the risk of a portfolio of investments. The measure of the profitability of a set of investments in the IPOP tool is synthesised in the Net Present Value (NPV) indicator. The NPV is the sum of the differences of discounted cash flows (direct costs, forced outages, etc.) between the situations with and without a given investment. These cash flows are calculated through a pseudo-Markov reliability model representing the components of the industrial asset and the spare parts inventories independently. The component model has been widely discussed over the years, but the spare part model is a new one based on some approximations that are discussed. This model, referred to as the NPV function, takes an investment portfolio as input and returns its NPV. The second issue is to optimize the NPV. If all investments were independent, this optimization would be an easy calculation; unfortunately, there are two sources of dependency. The first is introduced by the spare part model: although components are independent in their reliability models, the fact that several components draw on the same inventory induces a dependency. The second dependency comes from economic, technical, or logistic constraints, such as a global maintenance budget limit or a safety requirement limiting the residual risk of failure of a component or group of components, which makes the aggregation of individual optima not necessarily feasible. The algorithm used to solve such a difficult optimization problem is a genetic algorithm. After a description of the features of the software, a test case is presented showing the influence of the optimization algorithm's parameters on its efficiency in finding an optimal investment plan. (authors)
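
    A minimal genetic-algorithm sketch for the portfolio-selection step, with a toy NPV function and budget penalty standing in for the pseudo-Markov cash-flow model (all names and numbers invented):

        # Toy GA for selecting an investment portfolio under a budget constraint.
        # The NPV and cost vectors are invented; IPOP's NPV comes from a
        # pseudo-Markov reliability model, not a fixed per-investment value.
        import numpy as np

        rng = np.random.default_rng(7)
        n_inv = 30
        npv = rng.normal(5.0, 3.0, n_inv)    # per-investment NPV (illustrative)
        cost = rng.uniform(1.0, 4.0, n_inv)  # per-investment cost
        budget = 25.0

        def fitness(pop):
            value = pop @ npv
            over = np.maximum(pop @ cost - budget, 0.0)
            return value - 100.0 * over      # heavy penalty for budget violations

        pop = rng.integers(0, 2, size=(200, n_inv))        # random 0/1 portfolios
        for _ in range(300):
            f = fitness(pop)
            parents = pop[np.argsort(f)[-100:]]            # keep the best half
            mates = parents[rng.permutation(100)]
            mask = rng.integers(0, 2, parents.shape)
            children = np.where(mask == 1, parents, mates)     # uniform crossover
            flip = rng.random(children.shape) < 0.01
            children = np.where(flip, 1 - children, children)  # bit-flip mutation
            pop = np.vstack([parents, children])

        best = pop[np.argmax(fitness(pop))]
        print(f"best NPV = {best @ npv:.1f}  cost = {best @ cost:.1f}  (budget {budget})")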

  13. FY12 End of Year Report for NEPP DDR2 Reliability

    NASA Technical Reports Server (NTRS)

    Guertin, Steven M.

    2013-01-01

    This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the set of reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on the development of test capabilities, particularly those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components intended for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could in principle be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies, including verification of basic datasheet characteristics and significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to build test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased both our reliability characterization capability and our sample size. We increased the sample size this year by moving from loose devices to dual inline memory modules (DIMMs), an approximately 20- to 50-fold reduction in per-device-under-test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.

  14. Implementing a Microcontroller Watchdog with a Field-Programmable Gate Array (FPGA)

    NASA Technical Reports Server (NTRS)

    Straka, Bartholomew

    2013-01-01

    Reliability is crucial to safety. Redundancy of important system components greatly enhances reliability and hence safety. Field-Programmable Gate Arrays (FPGAs) are useful for monitoring systems and handling the logic necessary to keep them running with minimal interruption when individual components fail. A complete microcontroller watchdog with logic for failure handling can be implemented in a hardware description language (HDL). HDL-based designs are vendor-independent and can be used on many FPGAs with low overhead.

  15. Tools and Techniques for Adding Fault Tolerance to Distributed and Parallel Programs

    DTIC Science & Technology

    1991-12-07

    The scale of parallel computing systems is rapidly approaching dimensions where fault tolerance can no longer be ignored. No matter how reliable the individual components may be, the sheer number of components makes full replication schemes, such as those employed in the Tandem [71] and Stratus [35] systems, clearly impractical.

  16. Performance Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis with Different Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Chavarría-Miranda, Daniel

    Contingency analysis is a key function in the Energy Management System (EMS) to assess the impact of various combinations of power system component failures based on state estimation. Contingency analysis is also extensively used in power market operation for feasibility testing of market solutions. High performance computing holds the promise of faster analysis of more contingency cases for the purpose of safe and reliable operation of today's power grids with less operating margin and more intermittent renewable energy sources. This paper evaluates the performance of counter-based dynamic load balancing schemes for massive contingency analysis under different computing environments. Insights from the performance evaluation can be used as guidance for users to select suitable schemes in the application of massive contingency analysis. Case studies, as well as MATLAB simulations, of massive contingency cases using the Western Electricity Coordinating Council power grid model are presented to illustrate the application of high performance computing with counter-based dynamic load balancing schemes.

  17. Fatigue after stroke: the development and evaluation of a case definition.

    PubMed

    Lynch, Joanna; Mead, Gillian; Greig, Carolyn; Young, Archie; Lewis, Susan; Sharpe, Michael

    2007-11-01

    While fatigue after stroke is a common problem, it has no generally accepted definition. Our aim was to develop a case definition for post-stroke fatigue and to test its psychometric properties. A case definition with face validity and an associated structured interview was constructed. After initial piloting, the feasibility, reliability (test-retest and inter-rater) and concurrent validity (in relation to four fatigue severity scales) were determined in 55 patients with stroke. All participating patients provided satisfactory answers to all the case definition probe questions, demonstrating its feasibility. For test-retest reliability, kappa was 0.78 (95% CI, 0.57-0.94, P<.01), and for inter-rater reliability, kappa was 0.80 (95% CI, 0.62-0.99, P<.01). Patients fulfilling the case definition also had substantially higher fatigue scores on four fatigue severity scales (P<.001), indicating concurrent validity. The proposed case definition is feasible to administer and reliable in practice, and there is evidence of concurrent validity. It requires further evaluation in different settings.

  18. Noninteractive macroscopic reliability model for whisker-reinforced ceramic composites

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Arnold, Steven M.

    1990-01-01

    Considerable research is underway in the field of material science focusing on incorporating silicon carbide whiskers into silicon nitride and alumina matrices. These composites show the requisite thermal stability and thermal shock resistance necessary for use as components in advanced gas turbines and heat exchangers. This paper presents a macroscopic noninteractive reliability model for whisker-reinforced ceramic composites. The theory is multiaxial and is applicable to composites that can be characterized as transversely isotropic. Enough processing data exists to suggest this idealization encompasses a significantly large class of fabricated components. A qualitative assessment of the model is made by presenting reliability surfaces in several different stress spaces and for different values of model parameters.

  19. Reliability and life prediction of ceramic composite structures at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1994-01-01

    Methods are highlighted that ascertain the structural reliability of components fabricated of composites with ceramic matrices reinforced with ceramic fibers or whiskers and subject to quasi-static load conditions at elevated temperatures. Each method focuses on a particular composite microstructure: whisker-toughened ceramics, laminated ceramic matrix composites, and fabric reinforced ceramic matrix composites. In addition, since elevated service temperatures usually involve time-dependent effects, a section dealing with reliability degradation as a function of load history has been included. A recurring theme throughout this chapter is that even though component failure is controlled by a sequence of many microfailure events, failure of ceramic composites will be modeled using macrovariables.

  20. Performance and reliability of the NASA Biomass Production Chamber

    NASA Technical Reports Server (NTRS)

    Sager, J. C.; Chetirkin, P. V.

    1994-01-01

    The Biomass Production Chamber (BPC) at the Kennedy Space Center is part of the Controlled Ecological Life Support System (CELSS) Breadboard Project. Plants are grown in a closed environment in an effort to quantify their contributions to the requirements for life support. Performance of this system is described. Also, in building this system, data from component and subsystem failures are being recorded. These data are used to identify problem areas in the design and implementation. The techniques used to measure the reliability will be useful in the design and construction of future CELSS. Possible methods for determining the reliability of a green plant, the primary component of a CELSS, are discussed.

  1. Between-Day Reliability of Pre-Participation Screening Components in Pre-Professional Ballet and Contemporary Dancers.

    PubMed

    Kenny, Sarah J; Palacios-Derflingher, Luz; Owoeye, Oluwatoyosi B A; Whittaker, Jackie L; Emery, Carolyn A

    2018-03-15

    Critical appraisal of research investigating risk factors for musculoskeletal injury in dancers suggests that high-quality reliability studies are lacking. The purpose of this study was to determine the between-day reliability of pre-participation screening (PPS) components in pre-professional ballet and contemporary dancers. Thirty-eight dancers (35 female, 3 male; median age: 18 years; range: 11 to 30 years) participated. Screening components (Athletic Coping Skills Inventory-28, body mass index, percent total body fat, total bone mineral density, Foot Posture Index-6, hip and ankle range of motion, three lumbopelvic control tasks, unipedal dynamic balance, and the Y-Balance Test) were conducted one week apart. Intra-class correlation coefficients (ICCs; 95% confidence intervals), standard error of measurement, minimal detectable change (MDC), Bland-Altman methods of agreement [95% limits of agreement (LOA)], Cohen's kappa coefficients, standard error, and percent agreements were calculated. Depending on the screening component, ICC estimates ranged from 0.51 to 0.98, kappa coefficients varied between -0.09 and 0.47, and percent agreement spanned 71% to 95%. Wide 95% LOA were demonstrated by the Foot Posture Index-6 (right: -6.06, 7.31), passive hip external rotation (right: -9.89, 16.54), and passive supine turnout (left: -15.36, 17.58). The PPS components examined demonstrated moderate to excellent relative reliability, with mean between-day differences less than the MDC, or sufficient percent agreement, across all assessments. However, due to the wide 95% limits of agreement, the Foot Posture Index-6 and passive hip range of motion are not recommended for screening injury risk in pre-professional dancers.
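
    A sketch of the Bland-Altman limits of agreement and SEM-derived minimal detectable change used above, on hypothetical day-1/day-2 measurements:

        # Bland-Altman 95% limits of agreement and minimal detectable change
        # for a test-retest screening measure (hypothetical day-1/day-2 data).
        import numpy as np

        rng = np.random.default_rng(8)
        true_rom = rng.normal(45, 8, size=38)   # e.g. hip external rotation (deg)
        day1 = true_rom + rng.normal(0, 3, size=38)
        day2 = true_rom + rng.normal(0, 3, size=38)

        diff = day2 - day1
        bias = diff.mean()
        loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
        print(f"bias = {bias:.2f} deg, 95% LOA = ({loa[0]:.2f}, {loa[1]:.2f})")

        # SEM and MDC95 from the within-subject standard deviation:
        sem = diff.std(ddof=1) / np.sqrt(2)
        mdc95 = 1.96 * np.sqrt(2) * sem
        print(f"SEM = {sem:.2f} deg, MDC95 = {mdc95:.2f} deg")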

  2. Limitations of Reliability for Long-Endurance Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Owens, Andrew C.; de Weck, Olivier L.

    2016-01-01

    Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of the increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass, such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
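
    The sublinear logistics benefit of reliability growth can be illustrated with a simple Poisson spares model: the number of spares needed to cover failures at a given confidence drops slowly as MTBF rises. A sketch assuming exponential failures (invented numbers, not the paper's model):

        # Spares required vs. MTBF under a Poisson failure model (illustrative).
        from scipy.stats import poisson

        mission_hours = 3 * 8760   # ~3-year Mars-class mission
        confidence = 0.99          # probability of not running out of spares

        for mtbf in [5_000, 10_000, 20_000, 40_000, 80_000]:
            lam = mission_hours / mtbf   # expected failures over the mission
            spares = 0
            while poisson.cdf(spares, lam) < confidence:
                spares += 1
            print(f"MTBF {mtbf:6d} h -> expected failures {lam:5.2f}, spares {spares}")
        # Each doubling of MTBF saves fewer spares than the last: the sublinear
        # mass benefit discussed above.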

  3. Increased Reliability for Single-Case Research Results: Is the Bootstrap the Answer?

    ERIC Educational Resources Information Center

    Parker, Richard I.

    2006-01-01

    There is need for objective and reliable single-case research (SCR) results in the movement toward evidence-based interventions (EBI), for inclusion in meta-analyses, and for funding accountability in clinical contexts. Yet SCR deals with data that often do not conform to parametric data assumptions and that yield results of low reliability. A…
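
    For readers unfamiliar with the bootstrap idea raised in the title, a minimal sketch: resample the observed phase data with replacement and read a confidence interval off the resampled statistic (toy data, not SCR guidance):

        # Bootstrap confidence interval for a single-case effect (illustrative).
        # Statistic: difference in phase means, intervention (B) minus baseline (A).
        import numpy as np

        rng = np.random.default_rng(9)
        baseline = np.array([3, 4, 2, 5, 3, 4])       # phase A observations
        treatment = np.array([6, 7, 5, 8, 7, 6, 7])   # phase B observations
        observed = treatment.mean() - baseline.mean()

        boot = np.empty(10_000)
        for i in range(boot.size):
            a = rng.choice(baseline, size=baseline.size, replace=True)
            b = rng.choice(treatment, size=treatment.size, replace=True)
            boot[i] = b.mean() - a.mean()

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"effect = {observed:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
        # Caveat: naive resampling ignores the serial dependence common in SCR
        # data; block or model-based bootstraps address this.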

  4. Plasma Sprayed Bondable Stainless Surface (BOSS) Coatings for Corrosion Protection and Adhesion Treatments

    NASA Technical Reports Server (NTRS)

    Davis, G. D.; Groff, G. B.; Rooney, M.; Cooke, A. V.; Boothe, R.

    1995-01-01

    Plasma-sprayed Bondable Stainless Surface (BOSS) coatings are being developed under the Solid Propulsion Integrity Program's (SPIP) Bondlines Package. These coatings are designed as a steel case preparation treatment prior to insulation lay-up. Other uses include the exterior of steel cases and the bonding surfaces of nozzle components. They provide excellent bondability - rubber insulation and epoxy bonds fail cohesively within the polymer - for both fresh surfaces and surfaces having undergone natural and accelerated environmental aging. They have passed the MSFC requirements for protection in inland and sea coast environments. Because BOSS coatings are inherently corrosion resistant, they do not require preservation by greases or oils. The reduction or elimination of greases and oils, known bondline degraders, can increase SRM reliability, decrease costs by reducing the number of process steps, and decrease environmental pollution by reducing the amount of methyl chloroform used for degreasing, thus reducing release of the ozone-depleting chemical in accordance with the Clean Air Act and the Montreal Protocol. The coatings can potentially extend the life of RSRM case segments and nozzle components by eliminating erosion due to repeated grit blasting during each use cycle and corrosion damage during marine recovery. Concurrent work for the Air Force shows that other BOSS coatings give excellent bondline strength and durability for high-performance structures of aluminum and titanium.

  5. Reliability of risk-adjusted outcomes for profiling hospital surgical quality.

    PubMed

    Krell, Robert W; Hozain, Ahmed; Kao, Lillian S; Dimick, Justin B

    2014-05-01

    Quality improvement platforms commonly use risk-adjusted morbidity and mortality to profile hospital performance. However, given small hospital caseloads and low event rates for some procedures, it is unclear whether these outcomes reliably reflect hospital performance. To determine the reliability of risk-adjusted morbidity and mortality for hospital performance profiling using clinical registry data. A retrospective cohort study was conducted using data from the American College of Surgeons National Surgical Quality Improvement Program, 2009. Participants included all patients (N = 55,466) who underwent colon resection, pancreatic resection, laparoscopic gastric bypass, ventral hernia repair, abdominal aortic aneurysm repair, and lower extremity bypass. Outcomes included risk-adjusted overall morbidity, severe morbidity, and mortality. We assessed reliability (0-1 scale: 0, completely unreliable; and 1, perfectly reliable) for all 3 outcomes. We also quantified the number of hospitals meeting minimum acceptable reliability thresholds (>0.70, good reliability; and >0.50, fair reliability) for each outcome. For overall morbidity, the most common outcome studied, the mean reliability depended on sample size (ie, how high the hospital caseload was) and the event rate (ie, how frequently the outcome occurred). For example, mean reliability for overall morbidity was low for abdominal aortic aneurysm repair (reliability, 0.29; sample size, 25 cases per year; and event rate, 18.3%). In contrast, mean reliability for overall morbidity was higher for colon resection (reliability, 0.61; sample size, 114 cases per year; and event rate, 26.8%). Colon resection (37.7% of hospitals), pancreatic resection (7.1% of hospitals), and laparoscopic gastric bypass (11.5% of hospitals) were the only procedures for which any hospitals met a reliability threshold of 0.70 for overall morbidity. Because severe morbidity and mortality are less frequent outcomes, their mean reliability was lower, and even fewer hospitals met the thresholds for minimum reliability. Most commonly reported outcome measures have low reliability for differentiating hospital performance. This is especially important for clinical registries that sample rather than collect 100% of cases, which can limit hospital case accrual. Eliminating sampling to achieve the highest possible caseloads, adjusting for reliability, and using advanced modeling strategies (eg, hierarchical modeling) are necessary for clinical registries to increase their benchmarking reliability.

  6. Reliability considerations in the placement of control system components

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1983-01-01

    This paper presents a methodology, along with applications to a grid type structure, for incorporating reliability considerations in the decision for actuator placement on large space structures. The method involves the minimization of a criterion that considers mission life and the reliability of the system components. It is assumed that the actuator gains are to be readjusted following failures, but their locations cannot be changed. The goal of the design is to suppress vibrations of the grid and the integral square of the grid modal amplitudes is used as a measure of performance of the control system. When reliability of the actuators is considered, a more pertinent measure is the expected value of the integral; that is, the sum of the squares of the modal amplitudes for each possible failure state considered, multiplied by the probability that the failure state will occur. For a given set of actuator locations, the optimal criterion may be graphed as a function of the ratio of the mean time to failure of the components and the design mission life or reservicing interval. The best location of the actuators is typically different for a short mission life than for a long one.

  7. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.

  8. Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schroeder, John Alton

    2016-02-01

    This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run for less than or equal to one hour, and failure to run for more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems in the industry-wide frequency of start demands and in run hours per reactor year for runs of less than or equal to one hour.

  9. Learning Style Scales: a valid and reliable questionnaire.

    PubMed

    Abdollahimohammad, Abdolghani; Ja'afar, Rogayah

    2014-01-01

    Learning-style instruments assist students in developing their own learning strategies and outcomes, in eliminating learning barriers, and in acknowledging peer diversity. Only a few psychometrically validated learning-style instruments are available. This study aimed to develop a valid and reliable learning-style instrument for nursing students. A cross-sectional survey study was conducted in two nursing schools in two countries. A purposive sample of 156 undergraduate nursing students participated in the study. Face and content validity was obtained from an expert panel. The LSS construct was established using principal axis factoring (PAF) with oblimin rotation, a scree plot test, and parallel analysis (PA). The reliability of LSS was tested using Cronbach's α, corrected item-total correlation, and test-retest. Factor analysis revealed five components, confirmed by PA and a relatively clear curve on the scree plot. Component strength and interpretability were also confirmed. The factors were labeled as perceptive, solitary, analytic, competitive, and imaginative learning styles. Cronbach's α was >0.70 for all subscales in both study populations. The corrected item-total correlations were >0.30 for the items in each component. The LSS is a valid and reliable inventory for evaluating learning style preferences in nursing students in various multicultural environments.

  10. Final Report: Studies in Structural, Stochastic and Statistical Reliability for Communication Networks and Engineered Systems

    DTIC Science & Technology

    to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.

  11. Enhancing treatment fidelity in psychotherapy research: novel approach to measure the components of cognitive behavioural therapy for relapse prevention in first-episode psychosis.

    PubMed

    Alvarez-Jimenez, Mario; Wade, Darryl; Cotton, Sue; Gee, Donna; Pearce, Tracey; Crisp, Kingsley; McGorry, Patrick D; Gleeson, John F

    2008-12-01

    Establishing treatment fidelity is one of the most important aspects of psychotherapy research. Treatment fidelity refers to the methodological strategies used to examine and enhance the reliability and validity of psychotherapy. This study sought to develop and evaluate a measure specifically designed to assess fidelity to the different therapeutic components (i.e. therapy phases) of the individual intervention of a psychotherapy clinical trial (the EPISODE II trial). A representative sample of sessions stratified by therapy phase was assessed using a specifically developed fidelity measure (Relapse Prevention Therapy-Fidelity Scale, RPT-FS). Each RPT-FS subscale was designed to include a different component/phase of therapy and its major therapeutic ingredients. The measure was found to be reliable and had good internal consistency. The RPT-FS discriminated, almost perfectly, between therapy phases. The analysis of the therapeutic strategies implemented during the intervention indicated that treatment fidelity was good throughout therapy phases. While therapists primarily engaged in interventions from the appropriate therapeutic phase, flexibility in therapy was evident. This study described the development of a brief, reliable and internally consistent measure to determine both treatment fidelity and the therapy components implemented throughout the intervention. This methodology can be potentially useful to determine those components related to therapeutic change.

  12. Electromechanical delay components during skeletal muscle contraction and relaxation in patients with myotonic dystrophy type 1.

    PubMed

    Esposito, Fabio; Cè, Emiliano; Rampichini, Susanna; Limonta, Eloisa; Venturelli, Massimo; Monti, Elena; Bet, Luciano; Fossati, Barbara; Meola, Giovanni

    2016-01-01

    The electromechanical delay during muscle contraction and relaxation can be partitioned into mainly electrochemical and mainly mechanical components by an EMG, mechanomyographic, and force combined approach. Component duration and measurement reliability were investigated during contraction and relaxation in a group of patients with myotonic dystrophy type 1 (DM1, n = 13) and in healthy controls (n = 13). EMG, mechanomyogram, and force were recorded in DM1 and in age- and body-matched controls from tibialis anterior (distal muscle) and vastus lateralis (proximal muscle) muscles during maximum voluntary and electrically-evoked isometric contractions. The electrochemical and mechanical components of the electromechanical delay during muscle contraction and relaxation were calculated off-line. Maximum strength was significantly lower in DM1 than in controls under both experimental conditions. All electrochemical and mechanical components were significantly longer in DM1 in both muscles. Measurement reliability was very high in both DM1 and controls. The high reliability of the measurements and the differences between DM1 patients and controls suggest that the EMG, mechanomyographic, and force combined approach could be utilized as a valid tool to assess the level of neuromuscular dysfunction in this pathology, and to follow the efficacy of pharmacological or non-pharmacological interventions. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Commercialized VCSEL components fabricated at TrueLight Corporation

    NASA Astrophysics Data System (ADS)

    Pan, Jin-Shan; Lin, Yung-Sen; Li, Chao-Fang A.; Chang, C. H.; Wu, Jack; Lee, Bor-Lin; Chuang, Y. H.; Tu, S. L.; Wu, Calvin; Huang, Kai-Feng

    2001-05-01

    TrueLight Corporation was founded in 1997 and is the pioneering VCSEL component supplier in Taiwan. We specialize in the production and distribution of VCSELs (Vertical Cavity Surface Emitting Lasers) and other high-speed PIN-detector devices and components. Our core technology was developed to meet the booming demand for fiber optic transmission, and our intention is to diversify device applications into the data communication, telecommunication, and industrial markets. One mission is to provide high-performance, highly reliable, low-cost VCSEL components for data communication and sensing applications. Over the past three years, TrueLight Corporation has successfully entered the Gigabit Ethernet and Fibre Channel data communication markets. In this paper, we focus on the fabrication of VCSEL components and present the evolution of the implanted and oxide-confined VCSEL processes, device characterization, performance in Gigabit data communication, and, most importantly, reliability.

  14. Reliability enhancement through optimal burn-in

    NASA Astrophysics Data System (ADS)

    Kuo, W.

    1984-06-01

    A numerical reliability and cost model is defined for production line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonrepairable components; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution, performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum total cost is calculated over the costs and durations of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
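
    The trade-off the model formalizes is easy to sketch numerically. The following Python fragment is a minimal illustration, assuming a Weibull infant-mortality survival function and hypothetical burn-in, shop-repair, and field-failure cost rates; none of the parameter values come from the paper.

    ```python
    import numpy as np

    # Weibull infant-mortality model: survival S(t) = exp(-(t/eta)**beta), beta < 1
    beta, eta = 0.5, 1000.0  # illustrative shape and scale (hours)

    def survival(t):
        return np.exp(-(t / eta) ** beta)

    def expected_cost(tb, c_burn=0.02, c_shop=5.0, c_field=200.0, mission=5000.0):
        """Expected cost per unit for burn-in time tb (hours): a linear burn-in
        cost plus shop repair of units failing in burn-in plus field repair of
        units failing during the mission (all rates hypothetical)."""
        p_shop = 1.0 - survival(tb)                                   # caught in burn-in
        p_field = (survival(tb) - survival(tb + mission)) / survival(tb)
        return c_burn * tb + c_shop * p_shop + c_field * p_field

    tb_grid = np.linspace(0.0, 500.0, 501)
    costs = [expected_cost(tb) for tb in tb_grid]
    print(f"cost-minimizing burn-in is roughly {tb_grid[int(np.argmin(costs))]:.0f} h")
    ```

    Under these made-up rates the optimum is interior: too little burn-in shifts infant mortality into costly field failures, while too much pays burn-in cost on units that would have survived anyway.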

  15. Fundamentals of endoscopic surgery: creation and validation of the hands-on test.

    PubMed

    Vassiliou, Melina C; Dunkin, Brian J; Fried, Gerald M; Mellinger, John D; Trus, Thadeus; Kaneva, Pepa; Lyons, Calvin; Korndorffer, James R; Ujiki, Michael; Velanovich, Vic; Kochman, Michael L; Tsuda, Shawn; Martinez, Jose; Scott, Daniel J; Korus, Gary; Park, Adrian; Marks, Jeffrey M

    2014-03-01

    The Fundamentals of Endoscopic Surgery™ (FES) program consists of online materials and didactic and skills-based tests. All components were designed to measure the skills and knowledge required to perform safe flexible endoscopy. The purpose of this multicenter study was to evaluate the reliability and validity of the hands-on component of the FES examination and to establish the pass score. Expert endoscopists identified the critical skill set required for flexible endoscopy. These skills were then modeled in a virtual reality simulator (GI Mentor™ II, Simbionix™ Ltd., Airport City, Israel) to create five tasks and metrics. Scores were designed to measure both speed and precision. Validity evidence was assessed by correlating performance with self-reported endoscopic experience (surgeons and gastroenterologists [GIs]). Internal consistency of each test task was assessed using Cronbach's alpha. Test-retest reliability was determined by having the same participant perform the test a second time and comparing the scores. Passing scores were determined by a contrasting-groups methodology and the use of receiver operating characteristic curves. A total of 160 participants (17% GIs) performed the simulator test. Scores on the five tasks showed good internal consistency reliability, and all had significant correlations with endoscopic experience. Total FES scores correlated 0.73 with participants' level of endoscopic experience, providing evidence of their validity, and their internal consistency reliability (Cronbach's alpha) was 0.82. Test-retest reliability was assessed in 11 participants, and the intraclass correlation was 0.85. The passing score was determined and is estimated to have a sensitivity (true positive rate) of 0.81 and a 1-specificity (false positive rate) of 0.21. The FES hands-on skills test examines the basic procedural components required to perform safe flexible endoscopy. It meets rigorous standards of reliability and validity required for high-stakes examinations and, together with the knowledge component, may help contribute to the definition and determination of competence in endoscopy.

  16. Reliability of the Ego-Grasping Scale.

    PubMed

    Lester, David

    2012-04-01

    Research using Knoblauch and Falconer's Ego-Grasping Scale is reviewed. In a sample of 695 undergraduate students, the scale had moderate reliability (Cronbach alpha, odd-even item split, and test-retest), but a principal-components analysis with a varimax rotation identified five components, indicating heterogeneity in the content of the items. Lower Ego-Grasping scores appear to be associated with better psychological health. The scale has been translated and used with Korean, Kuwaiti, and Turkish students, indicating that it can be useful in cross-cultural studies.
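
    The two statistics this review leans on, Cronbach's alpha and a principal-components screen, are compact to compute. A minimal sketch on synthetic item responses follows; the sample size matches the abstract, but the item count, the one-factor latent model, and the eigenvalue-greater-than-one retention rule are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(695, 1))                      # one common trait
    items = latent + rng.normal(scale=1.5, size=(695, 20))  # 20 noisy items (assumed)

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
    k = items.shape[1]
    alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                           / items.sum(axis=1).var(ddof=1))

    # principal components of the item correlation matrix
    eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
    n_retained = int((eigvals > 1.0).sum())  # Kaiser criterion, one common heuristic

    print(f"alpha = {alpha:.2f}; components with eigenvalue > 1: {n_retained}")
    ```

    A scale can show acceptable alpha and still split into several components, which is exactly the heterogeneity the review reports.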

  17. Reliability and Confidence Interval Analysis of a CMC Turbine Stator Vane

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Gyekenyesi, John P.; Mital, Subodh K.

    2008-01-01

    High temperature ceramic matrix composites (CMC) are being explored as viable candidate materials for hot section gas turbine components. These advanced composites can potentially lead to reduced weight and enable higher operating temperatures requiring less cooling, thus leading to increased engine efficiencies. However, these materials are brittle and degrade with time at high operating temperatures due to creep as well as cyclic mechanical and thermal loads. In addition, these materials are heterogeneous in their make-up, and various factors affect their properties in a specific design environment. Most of these advanced composites involve two- and three-dimensional fiber architectures and require complex multi-step high temperature processing. Since there are uncertainties associated with each of these, in addition to the variability in the constituent material properties, the observed behavior of composite materials exhibits scatter. Traditional material failure analyses employing a deterministic approach, where failure is assumed to occur when some allowable stress level or equivalent stress is exceeded, are not adequate for brittle material component design. Such phenomenological failure theories are reasonably successful when applied to ductile materials such as metals. Analysis of failure in structural components is governed by the observed scatter in strength, stiffness, and loading conditions; in such situations, statistical design approaches must be used. Accounting for these phenomena requires a change in philosophy on the design engineer's part that leads to a reduced focus on the use of safety factors in favor of reliability analyses. The reliability approach demands that the design engineer tolerate a finite risk of unacceptable performance. This risk of unacceptable performance is identified as a component's probability of failure (or, alternatively, component reliability). The primary concern of the engineer is minimizing this risk in an economical manner. The methods needed to accurately determine the service life of an engine component with its associated variability have become increasingly complex. This results, in part, from the complex missions which are now routinely considered during the design process. These missions include large variations of multi-axial stresses and temperatures experienced by critical engine parts. There is a need for a convenient design tool that can accommodate various loading conditions induced by engine operating environments, and material data with their associated uncertainties, to estimate the minimum predicted life of a structural component. A probabilistic composite micromechanics technique, in combination with woven composite micromechanics, structural analysis, and Fast Probability Integration (FPI) techniques, has been used to evaluate the maximum stress and its probabilistic distribution in a CMC turbine stator vane. Furthermore, input variables causing scatter are identified and ranked based upon their sensitivity magnitude. Since the measured data for ceramic matrix composite properties are very limited, obtaining a probabilistic distribution with its corresponding parameters is difficult. In the case of limited data, confidence bounds are essential to quantify the uncertainty associated with the distribution. Usually 90 and 95% confidence intervals are computed for material properties, and failure properties are then computed with these confidence bounds. Best estimates and the confidence bounds on the best estimate of the cumulative probability function for R-S (strength minus stress) are plotted. The methodologies and the results from these analyses will be discussed in the presentation.
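
    The final strength-stress computation lends itself to a Monte Carlo sketch. The distributions and parameters below are hypothetical stand-ins, and the resampled band reflects sampling variability only; the confidence bounds discussed above instead quantify the epistemic uncertainty arising from limited material data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def reliability(n, rng):
        """Monte Carlo estimate of P(R - S > 0) for assumed distributions."""
        strength = 300.0 * rng.weibull(8.0, n)   # Weibull strength, MPa (illustrative)
        stress = rng.normal(180.0, 25.0, n)      # operating stress, MPa (illustrative)
        return float(np.mean(strength - stress > 0.0))

    point = reliability(100_000, rng)
    reps = np.array([reliability(2_000, rng) for _ in range(1_000)])  # resampling
    lo, hi = np.percentile(reps, [5.0, 95.0])
    print(f"reliability about {point:.4f}, 90% sampling band ({lo:.4f}, {hi:.4f})")
    ```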

  18. Sensor Data Fusion with Z-Numbers and Its Application in Fault Diagnosis

    PubMed Central

    Jiang, Wen; Xie, Chunhe; Zhuang, Miaoyan; Shou, Yehang; Tang, Yongchuan

    2016-01-01

    Sensor data fusion technology is widely employed in fault diagnosis. The information in a sensor data fusion system is characterized by not only fuzziness, but also partial reliability. Uncertain information of sensors, including randomness, fuzziness, etc., has been extensively studied recently. However, the reliability of a sensor is often overlooked or cannot be analyzed adequately. A Z-number, Z = (A, B), can represent the fuzziness and the reliability of information simultaneously, where the first component A represents a fuzzy restriction on the values of uncertain variables and the second component B is a measure of the reliability of A. In order to model and process the uncertainties in a sensor data fusion system reasonably, in this paper, a novel method combining the Z-number and Dempster–Shafer (D-S) evidence theory is proposed, where the Z-number is used to model the fuzziness and reliability of the sensor data and the D-S evidence theory is used to fuse the uncertain information of Z-numbers. The main advantages of the proposed method are that it provides a more robust measure of reliability to the sensor data, and the complementary information of multi-sensors reduces the uncertainty of the fault recognition, thus enhancing the reliability of fault detection. PMID:27649193
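
    The fusion step rests on Dempster's rule of combination. A minimal sketch, assuming two sensor mass functions whose values have already been discounted by the reliability component B of their Z-numbers; the masses and fault hypotheses are illustrative.

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions (dicts mapping frozenset -> mass) with
        Dempster's rule, normalizing out the conflicting mass."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
        if conflict >= 1.0:
            raise ValueError("total conflict: the sources fully contradict")
        return {s: m / (1.0 - conflict) for s, m in combined.items()}

    # two sensors reporting on fault hypotheses F1 and F2 (illustrative masses)
    m1 = {frozenset({"F1"}): 0.7, frozenset({"F1", "F2"}): 0.3}
    m2 = {frozenset({"F1"}): 0.6, frozenset({"F2"}): 0.2, frozenset({"F1", "F2"}): 0.2}
    print(dempster_combine(m1, m2))
    ```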

  19. Recent advances in computational structural reliability analysis methods

    NASA Astrophysics Data System (ADS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community has been seen recently, much of it directed towards the prediction of failure probabilities for single-mode failures. The focus here is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss) and structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  20. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given that many different modes of failure are usually possible, achieving this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known, and it usually results in overly conservative designs because of compounding conservatisms. Furthermore, the problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community has been seen recently, much of it directed towards the prediction of failure probabilities for single-mode failures. The focus here is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss) and structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
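
    The two system cases named, multiple component failures in a redundant structure and a single weakest-link chain, reduce to simple probability combinations once component failure events are treated as independent. A minimal Python sketch under that independence assumption, with illustrative component reliabilities:

    ```python
    from math import comb

    import numpy as np

    def series_reliability(component_r):
        """Weakest link: the system survives only if every component survives."""
        return float(np.prod(component_r))

    def k_out_of_n_reliability(r, k, n):
        """Redundancy: the system survives if at least k of n identical
        components survive, e.g. a truss tolerating n - k member failures."""
        return sum(comb(n, i) * r**i * (1 - r) ** (n - i) for i in range(k, n + 1))

    print(series_reliability([0.999, 0.995, 0.990]))  # series chain
    print(k_out_of_n_reliability(0.99, k=3, n=4))     # one redundant member
    ```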

  1. Development of KSC program for investigating and generating field failure rates. Volume 1: Summary and overview

    NASA Technical Reports Server (NTRS)

    Bean, E. E.; Bloomquist, C. E.

    1972-01-01

    A summary of the KSC program for investigating the reliability aspects of ground support activities is presented. An analysis of unsatisfactory condition reports (UCRs) and the generation of reliability assessments of components based on the UCRs are discussed, along with the design considerations for attaining reliable real-time hardware/software configurations.

  2. A System for Integrated Reliability and Safety Analyses

    NASA Technical Reports Server (NTRS)

    Kostiuk, Peter; Shapiro, Gerald; Hanson, Dave; Kolitz, Stephan; Leong, Frank; Rosch, Gene; Coumeri, Marc; Scheidler, Peter, Jr.; Bonesteel, Charles

    1999-01-01

    We present an integrated reliability and aviation safety analysis tool. The reliability models for selected infrastructure components of the air traffic control system are described. The results of this model are used to evaluate the likelihood of seeing outcomes predicted by simulations with failures injected. We discuss the design of the simulation model, and the user interface to the integrated toolset.

  3. A Survey of Electronics Obsolescence and Reliability

    DTIC Science & Technology

    2010-07-01

    properties but there are many minor and major variations (e.g. curing schedule) affecting their usage in packaging processes and in reworking. Curing...within them. Electronic obsolescence is increasingly associated with physical characteristics that reduce component and system reliability, both in usage ...semiconductor technologies and of electronic systems, both in usage and in storage. By design, electronics technologies include few reliability margins

  4. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially well suited to radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of the literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions, multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because these assemblies show lower life than standard leaded package assemblies under thermal cycling exposures. Inspection of the hidden solder joints for assuring quality is challenging, as it is for ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judiciously selecting and narrowing the follow-on packages for evaluation and testing, as well as for low-risk insertion in high-reliability applications.

  5. Statistical validity of using ratio variables in human kinetics research.

    PubMed

    Liu, Yuanlong; Schutz, Robert W

    2003-09-01

    The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and to examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. The reliability of a simple ratio was found to be affected by the coefficients of variation and by the within- and between-trial correlations between the numerator and denominator variables. Researchers should therefore compute the reliability of the derived ratio scores rather than assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
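
    The dependence of ratio-score reliability on the noise in its components is easy to demonstrate by simulation. A minimal sketch with synthetic test-retest data; the distributions and noise levels are assumptions for illustration, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    true_num = rng.normal(100.0, 15.0, n)   # stable true numerator scores
    true_den = rng.normal(70.0, 10.0, n)    # stable true denominator scores

    def trial(noise_sd):
        """One measurement trial of the ratio with additive noise in both parts."""
        num = true_num + rng.normal(0.0, noise_sd, n)
        den = true_den + rng.normal(0.0, noise_sd, n)
        return num / den

    for noise_sd in (2.0, 10.0):            # low versus high measurement noise
        r1, r2 = trial(noise_sd), trial(noise_sd)
        print(f"noise sd {noise_sd:4.1f}: ratio test-retest r = "
              f"{np.corrcoef(r1, r2)[0, 1]:.2f}")
    ```

    Raising the component noise (and hence the coefficients of variation) visibly depresses the test-retest correlation of the derived ratio, the effect the authors warn about.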

  6. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers on it exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station, and we estimate the reliability of each component as well as the reliability of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue, and the results obtained in this research reflect its originality. We also suggest the practicality of these results for power systems, both for maintenance models and for preventive maintenance models.
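
    Given fitted Dagum parameters, reliability at any time follows directly from the survival function. A sketch with placeholder parameter values, since the abstract does not reproduce the fitted estimates:

    ```python
    def dagum_cdf(t, a, b, p):
        """Three-parameter Dagum CDF: F(t) = (1 + (t/b)**(-a))**(-p),
        with shape parameters a and p and scale parameter b."""
        return (1.0 + (t / b) ** (-a)) ** (-p)

    def reliability(t, a, b, p):
        return 1.0 - dagum_cdf(t, a, b, p)

    a, b, p = 1.8, 45.0, 0.9   # placeholder values, not the paper's estimates
    for days in (30, 60, 90):
        print(f"R({days} d) = {reliability(days, a, b, p):.3f}")
    ```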

  7. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers on it exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station, and we estimate the reliability of each component as well as the reliability of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with one scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% every 30 days. We believe that the current paper is the first to address this issue, and the results obtained in this research reflect its originality. We also suggest the practicality of these results for power systems, both for maintenance models and for preventive maintenance models. PMID:23936346

  8. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/Bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially well suited to radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of the literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions, multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because these assemblies show lower life than standard leaded package assemblies under thermal cycling exposures. Inspection of the hidden solder joints for assuring quality is challenging, as it is for ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judiciously selecting and narrowing the follow-on packages for evaluation and testing, as well as for low-risk insertion in high-reliability applications.

  9. Reliability and validity of the upper-body dressing scale in Japanese patients with vascular dementia with hemiparesis.

    PubMed

    Endo, Arisa; Suzuki, Makoto; Akagi, Atsumi; Chiba, Naoyuki; Ishizaka, Ikuyo; Matsunaga, Atsuhiko; Fukuda, Michinari

    2015-03-01

    The purpose of this study was to examine the reliability and validity of the Upper-body Dressing Scale (UBDS) for buttoned shirt dressing, which evaluates the learning process of new component actions of upper-body dressing in patients diagnosed with dementia and hemiparesis. This was a preliminary correlational study of concurrent validity and reliability in which 10 vascular dementia patients with hemiparesis were enrolled and assessed repeatedly by six occupational therapists by means of the UBDS and the dressing item of the Functional Independence Measure (FIM). Intraclass correlation coefficient was 0.97 for intra-rater reliability and 0.99 for inter-rater reliability. The level of correlation between UBDS score and FIM dressing item scores was -0.93. UBDS scores for paralytic hand passed into the sleeve and sleeve pulled up beyond the shoulder joint were worse than the scores for the other components of the task. The UBDS has good reliability and validity for vascular dementia patients with hemiparesis. Further research is needed to investigate the relation between UBDS score and the effect of intervention and to clarify sensitivity or responsiveness of the scale to clinical change. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    This study aimed to identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. The design was an electronic questionnaire across five UK NHS Trusts, completed by 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded the severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. The outcome measures were case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in the assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  11. Application of reliability-centered-maintenance to BWR ECCS motor operator valve performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Choi, Y.A.

    1993-01-01

    This paper describes the application of reliability-centered maintenance (RCM) methods to plant probabilistic risk assessment (PRA) and safety analyses for four boiling water reactor emergency core cooling systems (ECCSs): (1) high-pressure coolant injection (HPCI); (2) reactor core isolation cooling (RCIC); (3) residual heat removal (RHR); and (4) core spray systems. Reliability-centered maintenance is a system function-based technique for improving a preventive maintenance program that is applied on a component basis. Those components that truly affect plant function are identified, and maintenance tasks are focused on preventing their failures. The RCM evaluation establishes the relevant criteria that preserve system function so that an RCM-focused approach can be flexible and dynamic.

  12. Generalized Reliability Methodology Applied to Brittle Anisotropic Single Crystals. Degree awarded by Washington Univ., 1999

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.

    2002-01-01

    A generalized reliability model was developed for use in the design of structural components made from brittle, homogeneous anisotropic materials such as single crystals. The model is based on the Weibull distribution and incorporates a variable strength distribution and any equivalent stress failure criteria. In addition to the reliability model, an energy based failure criterion for elastically anisotropic materials was formulated. The model is different from typical Weibull-based models in that it accounts for strength anisotropy arising from fracture toughness anisotropy and thereby allows for strength and reliability predictions of brittle, anisotropic single crystals subjected to multiaxial stresses. The model is also applicable to elastically isotropic materials exhibiting strength anisotropy due to an anisotropic distribution of flaws. In order to develop and experimentally verify the model, the uniaxial and biaxial strengths of a single crystal nickel aluminide were measured. The uniaxial strengths of the <100> and <110> crystal directions were measured in three and four-point flexure. The biaxial strength was measured by subjecting <100> plates to a uniform pressure in a test apparatus that was developed and experimentally verified. The biaxial strengths of the single crystal plates were estimated by extending and verifying the displacement solution for a circular, anisotropic plate to the case of a variable radius and thickness. The best correlation between the experimental strength data and the model predictions occurred when an anisotropic stress analysis was combined with the normal stress criterion and the strength parameters associated with the <110> crystal direction.

  13. Development and empirical validation of symmetric component measures of multidimensional constructs: customer and competitor orientation.

    PubMed

    Sørensen, Hans Eibe; Slater, Stanley F

    2008-08-01

    Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. The procedure is then applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach alpha for establishing reliable and valid measures.

  14. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to component size effects are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.

  15. Retrogression and Re-Ageing In-Service Demonstrator Reliability Trials: Stage 3 Component Test Report

    DTIC Science & Technology

    2012-03-01

    Only table-of-contents fragments of this report are available in the record, covering component, furnace, and quench bath thermometry; component heat treatment; and post-retrogression quench temperature profiles.

  16. New understandings of failure modes in SSL luminaires

    NASA Astrophysics Data System (ADS)

    Shepherd, Sarah D.; Mills, Karmann C.; Yaga, Robert; Johnson, Cortina; Davis, J. Lynn

    2014-09-01

    As SSL products are being rapidly introduced into the market, there is a need to develop standard screening and testing protocols that can be performed quickly and provide data surrounding product lifetime and performance. These protocols, derived from standard industry tests, are known as ALTs (accelerated life tests) and can be performed in a timeframe of weeks to months instead of years. Accelerated testing utilizes a combination of elevated temperature and humidity conditions as well as electrical power cycling to control aging of the luminaires. In this study, we report on the findings of failure modes for two different luminaire products exposed to temperature-humidity ALTs. LEDs are typically considered the determining component for the rate of lumen depreciation. However, this study has shown that each luminaire component can independently or jointly influence system performance and reliability. Material choices, luminaire designs, and driver designs all have significant impacts on the system reliability of a product. From recent data, it is evident that the most common failure modes are not within the LED, but instead occur within resistors, capacitors, and other electrical components of the driver. Insights into failure modes and rates as a result of ALTs are reported with emphasis on component influence on overall system reliability.

  17. Study on fast discrimination of varieties of yogurt using Vis/NIR-spectroscopy

    NASA Astrophysics Data System (ADS)

    He, Yong; Feng, Shuijuan; Deng, Xunfei; Li, Xiaoli

    2006-09-01

    A new approach for discriminating varieties of yogurt by means of Vis/NIR spectroscopy is presented in this paper. First, principal component analysis (PCA) of the spectral curves of five typical kinds of yogurt was used to cluster the varieties. The analysis showed that the cumulative reliability (explained variance) of PC1 and PC2, the first two principal components, was more than 98.956%, and that of PC1 through PC7, the first seven principal components, was 99.97%. Second, a back-propagation artificial neural network (ANN-BP) discrimination model was set up, with the first seven principal components of the samples applied as inputs and the yogurt variety as output, giving a three-layer ANN-BP model. In this model, each variety contributed 27 samples, for a total of 135, and a further 25 samples were used as the prediction set. The results showed that the discrimination rate for the five yogurt varieties was 100%, indicating that the model is reliable and practicable. A new approach for the rapid and nondestructive discrimination of yogurt varieties is thus put forward.
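
    The PCA-plus-network pipeline described here is conventional and can be sketched with scikit-learn. The stand-in spectra, scaling step, and hidden-layer size below are assumptions, not the authors' settings; only the use of the first seven principal components as network inputs follows the abstract.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # stand-in spectra: 135 samples, 5 varieties x 27, 256 wavelengths (assumed)
    y = np.repeat(np.arange(5), 27)
    X = rng.normal(size=(135, 256)) + 0.5 * y[:, None]

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=7),  # first seven PCs as ANN inputs, as in the abstract
        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)

    cumulative = model.named_steps["pca"].explained_variance_ratio_.cumsum()
    print("cumulative explained variance of 7 PCs:", round(float(cumulative[-1]), 4))
    print("training accuracy:", model.score(X, y))
    ```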

  18. CCARES: A computer algorithm for the reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Gyekenyesi, John P.

    1993-01-01

    Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.

  19. Monolithic ceramic analysis using the SCARE program

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.

    1988-01-01

    The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
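
    For the volume-flaw case, the two-parameter Weibull model behind such codes reduces to summing an element-wise risk of rupture over the finite element results. A minimal sketch; the stresses, volumes, and Weibull parameters are illustrative, and SCARE's full treatment adds the Batdorf and other criteria listed above.

    ```python
    import numpy as np

    def weibull_failure_probability(stresses, volumes, m, sigma0):
        """Two-parameter Weibull fast-fracture model for volume flaws:
        P_f = 1 - exp(-sum_i V_i * (sigma_i / sigma0)**m), summed over
        elements with stress sigma_i and volume V_i; sigma0 is taken here
        as a unit-volume characteristic strength (all values illustrative)."""
        risk = np.sum(np.asarray(volumes) * (np.asarray(stresses) / sigma0) ** m)
        return 1.0 - np.exp(-risk)

    stresses = [180.0, 220.0, 150.0]   # element stresses, MPa (illustrative)
    volumes = [12.0, 8.0, 20.0]        # element volumes, mm^3 (illustrative)
    print(weibull_failure_probability(stresses, volumes, m=10.0, sigma0=400.0))
    ```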

  20. Space Radiation Effects and Reliability Consideration for the Proposed Jupiter Europa Orbiter

    NASA Technical Reports Server (NTRS)

    Johnston, Allan

    2011-01-01

    The proposed Jupiter Europa Orbiter (JEO) mission to explore the Jovian moon Europa poses a number of challenges. The spacecraft must operate for about seven years during the transit to the vicinity of Jupiter, and then endure unusually high radiation levels during the exploration and orbiting phases. The ability to withstand unusually high total dose levels is critical for the mission, along with meeting the high reliability standards for flagship NASA missions. The reliability of new microelectronic components must be sufficiently understood to meet overall mission requirements.

  1. The Typical General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Turnbull, Andrew

    1999-01-01

    The reliability of General Aviation aircraft is unknown. In order to "assist the development of future GA reliability and safety requirements", a reliability study needs to be performed. Before any studies on General Aviation aircraft reliability begins, a definition of a typical aircraft that encompasses most of the general aviation characteristics needs to be defined. In this report, not only is the typical general aviation aircraft defined for the purpose of the follow-on reliability study, but it is also separated, or "sifted" into several different categories where individual analysis can be performed on the reasonably independent systems. In this study, the typical General Aviation aircraft is a four-place, single engine piston, all aluminum fixed-wing certified aircraft with a fixed tricycle landing gear and a cable operated flight control system. The system breakdown of a GA aircraft "sifts" the aircraft systems and components into five categories: Powerplant, Airframe, Aircraft Control Systems, Cockpit Instrumentation Systems, and the Electrical Systems. This breakdown was performed along the lines of a failure of the system. Any component that caused a system to fail was considered a part of that system.

  2. Creation of the NaSCoRD Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William

    This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be accessed from them.

  3. Reliability of the Roussel Uclaf Causality Assessment Method for Assessing Causality in Drug-Induced Liver Injury*

    PubMed Central

    Rochon, James; Protiva, Petr; Seeff, Leonard B.; Fontana, Robert J.; Liangpunsakul, Suthat; Watkins, Paul B.; Davern, Timothy; McHutchison, John G.

    2013-01-01

    The Roussel Uclaf Causality Assessment Method (RUCAM) was developed to quantify the strength of association between a liver injury and the medication implicated as causing the injury. However, its reliability in a research setting has never been fully explored. The aim of this study was to determine test-retest and interrater reliabilities of RUCAM in retrospectively identified cases of drug-induced liver injury. The Drug-Induced Liver Injury Network is enrolling well-defined cases of hepatotoxicity caused by isoniazid, phenytoin, clavulanate/amoxicillin, or valproate occurring since 1994. Each case was adjudicated by three reviewers working independently; after an interval of at least 5 months, cases were readjudicated by the same reviewers. A total of 40 drug-induced liver injury cases were enrolled, including individuals treated with isoniazid (nine), phenytoin (five), clavulanate/amoxicillin (15), and valproate (11). Mean ± standard deviation age at protocol-defined onset was 44.8 ± 19.5 years; patients were 68% female and 78% Caucasian. Cases were classified as hepatocellular (44%), mixed (28%), or cholestatic (28%). Test-retest differences ranged from −7 to +8 with complete agreement in only 26% of cases. On average, the maximum absolute difference among the three reviewers was 3.1 on the first adjudication and 2.7 on the second, although much of this variability could be attributed to differences between the enrolling investigator and the external reviewers. The test-retest reliability by the same assessors was 0.54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence limit = 0.58). Categorizing the RUCAM to a five-category scale improved these reliabilities, but only marginally. Conclusion: The mediocre reliability of the RUCAM is problematic for future studies of drug-induced liver injury. Alternative methods, including modifying the RUCAM, developing drug-specific instruments, or causality assessment based on expert opinion, may be more appropriate. PMID:18798340

  4. A recursive Bayesian approach for fatigue damage prognosis: An experimental validation at the reliability component level

    NASA Astrophysics Data System (ADS)

    Gobbato, Maurizio; Kosmatka, John B.; Conte, Joel P.

    2014-04-01

    Fatigue-induced damage is one of the most uncertain and highly unpredictable failure mechanisms for a large variety of mechanical and structural systems subjected to cyclic and random loads during their service life. A health monitoring system capable of (i) monitoring the critical components of these systems through non-destructive evaluation (NDE) techniques, (ii) assessing their structural integrity, (iii) recursively predicting their remaining fatigue life (RFL), and (iv) providing a cost-efficient reliability-based inspection and maintenance plan (RBIM) is therefore ultimately needed. In support of these objectives, the first part of the paper provides an overview and extension of a comprehensive reliability-based fatigue damage prognosis methodology, previously developed by the authors, for recursively predicting and updating the RFL of critical structural components and/or sub-components in aerospace structures. In the second part of the paper, a set of experimental fatigue test data, available in the literature, is used to provide a numerical verification and an experimental validation of the proposed framework at the reliability component level (i.e., a single damage mechanism evolving at a single damage location). The results obtained from this study demonstrate (i) the importance and the benefits of a nearly continuous NDE monitoring system, (ii) the efficiency of the recursive Bayesian updating scheme, and (iii) the robustness of the proposed framework in recursively updating and improving the RFL estimations. This study also demonstrates that the proposed methodology can lead to either an extension of the RFL (with a consequent economic gain without compromising the minimum safety requirements) or an increase in safety by detecting a premature fault and therefore avoiding a very costly catastrophic failure.
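
    The recursive updating scheme can be sketched as a grid-based Bayesian filter: each NDE measurement multiplies the prior over an uncertain damage-rate parameter by its likelihood. The linear growth model, noise level, and rates below are hypothetical, not the paper's damage model.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    rates = np.linspace(0.05, 0.50, 200)               # candidate damage growth rates
    posterior = np.full(rates.size, 1.0 / rates.size)  # flat prior over the grid

    true_rate, sigma_nde = 0.20, 0.15                  # illustrative values
    crack = 1.0
    for step in range(1, 6):
        crack += true_rate
        measured = crack + rng.normal(0.0, sigma_nde)  # noisy NDE reading
        predicted = 1.0 + rates * step                 # assumed linear growth model
        likelihood = np.exp(-0.5 * ((measured - predicted) / sigma_nde) ** 2)
        posterior *= likelihood                        # recursive Bayesian update
        posterior /= posterior.sum()
        print(f"inspection {step}: posterior mean rate = "
              f"{float(np.sum(rates * posterior)):.3f}")
    ```

    The remaining-fatigue-life estimate then follows by propagating the posterior over the rate forward to a critical damage size, and the same multiply-and-normalize step absorbs each new inspection.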

  5. 78 FR 77574 - Protection System Maintenance Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... protection system component type, except that the maintenance program for all batteries associated with the... Electric System reliability and promoting efficiency through consolidation [of protection system-related... ITC that PRC-005-2 promotes efficiency by consolidating protection system maintenance requirements...

  6. Reliability-based evaluation of bridge components for consistent safety margins.

    DOT National Transportation Integrated Search

    2010-10-01

    The Load and Resistance Factor Design (LRFD) approach is based on the concept of structural reliability. The approach is more rational than former design approaches such as Load Factor Design or Allowable Stress Design. The LRFD Specification fo...

  7. Investigation of low glass transition temperature on COTS PEMs reliability

    NASA Technical Reports Server (NTRS)

    Sandor, M.; Agarwal, S.

    2002-01-01

    Many factors influence PEM component reliability. One of the factors that can affect PEM performance and reliability is the glass transition temperature (Tg) and the coefficient of thermal expansion (CTE) of the encapsulant or underfill. JPL/NASA is investigating how the Tg and CTE for PEMs affect device reliability under different temperature and aging conditions. Other issues with Tg are also being investigated. Some preliminary data will be presented on glass transition temperature test results conducted at JPL.

  8. Award-Winning CARES/Life Ceramics Durability Evaluation Software Is Making Advanced Technology Accessible

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CARES/Life software developed at the NASA Lewis Research Center eases this by providing a tool that uses probabilistic reliability analysis techniques to optimize the design and manufacture of brittle material components. CARES/Life is an integrated package that predicts the probability of a monolithic ceramic component's failure as a function of its time in service. It couples commercial finite element programs, which resolve a component's temperature and stress distribution, with reliability evaluation and fracture mechanics routines for modeling strength-limiting defects. These routines are based on calculations of the probabilistic nature of the brittle material's strength.

  9. Program For Evaluation Of Reliability Of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, N.; Janosik, L. A.; Gyekenyesi, J. P.; Powers, Lynn M.

    1996-01-01

    CARES/LIFE predicts probability of failure of monolithic ceramic component as function of service time. Assesses risk that component fractures prematurely as result of subcritical crack growth (SCG). Effect of proof testing of components prior to service also considered. Coupled to such commercially available finite-element programs as ANSYS, ABAQUS, MARC, MSC/NASTRAN, and COSMOS/M. Also retains all capabilities of previous CARES code, which includes estimation of fast-fracture component reliability and Weibull parameters from inert strength (without SCG contributing to failure) specimen data. Estimates parameters that characterize SCG from specimen data as well. Written in ANSI FORTRAN 77 to be machine-independent. Program runs on any computer in which sufficient addressable memory (at least 8MB) and FORTRAN 77 compiler available. For IBM-compatible personal computer with minimum 640K memory, limited program available (CARES/PC, COSMIC number LEW-15248).

  10. Weighted Fuzzy Risk Priority Number Evaluation of Turbine and Compressor Blades Considering Failure Mode Correlations

    NASA Astrophysics Data System (ADS)

    Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong

    2014-06-01

    Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although single failure mode issues can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain a fuzzy weighted geometric mean (FWGM) RPN for each single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority numbers (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights derived from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN and results in a more accurate estimation than that of traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in an aeroengine is used to demonstrate the effectiveness and robustness of the presented method. The results provide some important insights on fatigue reliability analysis and risk priority assessment of structural systems under failure correlations.
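
    The core weighted-geometric-mean step can be sketched with triangular fuzzy numbers combined component-wise and defuzzified by centroid. This is a simplified stand-in for the paper's FPWGM formulation; the ratings and weights are illustrative.

    ```python
    import numpy as np

    def fuzzy_wgm(factors, weights):
        """Weighted geometric mean of triangular fuzzy numbers (l, m, u),
        applied component-wise, then defuzzified by the triangle centroid."""
        factors = np.asarray(factors, dtype=float)   # rows: S, O, D ratings
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()            # normalize the factor weights
        fuzzy = np.prod(factors ** weights[:, None], axis=0)
        return fuzzy, float(fuzzy.mean())            # centroid of (l, m, u)

    # illustrative fuzzy severity, occurrence, detection ratings for one mode
    S, O, D = (6.0, 7.0, 8.0), (4.0, 5.0, 6.0), (3.0, 4.0, 5.0)
    fuzzy_rpn, crisp = fuzzy_wgm([S, O, D], weights=[0.4, 0.35, 0.25])
    print("fuzzy RPN:", fuzzy_rpn.round(3), "defuzzified:", round(crisp, 2))
    ```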

  11. Venus Express Chemical Propulsion System - The Mars Express Legacy

    NASA Astrophysics Data System (ADS)

    Hunter, C. J.

    2004-10-01

    ESA's ambition of inter-planetary exploration using a fast-track, low-cost industrial programme was well achieved with Mars Express. Reusing the platform architecture for the service module, and specifically the propulsion system, enabled Venus Express to benefit from several lessons learnt from the Mars Express experience. Using all existing components qualified for previous programmes, many of them commercial telecommunication spacecraft programmes with components available from stock, an industrial organisation familiar from Mars Express was able to compress the schedule to make the November 2005 launch window a realistic target. While initial inspection of the CPS schematic indicates a modified Eurostar-type architecture, a fairer description would be a similar system using some Eurostar components. The use of many parts of the system on arrival at the destination (Mars or Venus in this case) is a departure from the usual mode of operation, in which many components are used only during the initial few weeks of GTO or GEO. The system modifications over the basic Eurostar system have catered for this in terms of reliability contingencies by replacing components or providing different levels of test capability or isolation in flight. This paper aims to provide an introduction to the system, address the evolution from Eurostar, and provide an initial assessment of the success of these modifications using the Mars Express experience, showing how measures have been adopted specifically for Venus Express.

  12. An approximation formula for a class of Markov reliability models

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    A method is presented for treating a small but frequently used class of reliability models and algebraically approximating system reliability. The models considered are appropriate for redundant, reconfigurable digital control systems that operate for a short period of time without maintenance; for such systems the method gives a formula in terms of component fault rates, system recovery rates, and system operating time.
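
    Approximations of this kind are conveniently checked against the exact transient solution of the underlying Markov chain. A sketch for a duplex system with reconfiguration; the rates, state structure, and closed-form comparison are illustrative assumptions, not the paper's formula.

    ```python
    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4     # component fault rate per hour (illustrative)
    delta = 360.0  # reconfiguration (recovery) rate per hour (illustrative)
    T = 10.0       # unmaintained operating time, hours

    # states: 0 duplex up, 1 fault being handled, 2 simplex up, 3 system failed
    Q = np.array([
        [-2 * lam,  2 * lam,         0.0,   0.0],
        [0.0,      -(delta + lam),   delta, lam],
        [0.0,       0.0,            -lam,   lam],
        [0.0,       0.0,             0.0,   0.0],
    ])

    exact = expm(Q * T)[0, 3]
    # illustrative first-order form: a near-coincident second fault during
    # recovery plus exhaustion of the spare; not the paper's exact formula
    approx = 2 * lam * T * (lam / delta) + (lam * T) ** 2
    print(f"exact unreliability     = {exact:.3e}")
    print(f"algebraic approximation = {approx:.3e}")
    ```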

  13. Usefulness of p16/CDKN2A fluorescence in situ hybridization and BAP1 immunohistochemistry for the diagnosis of biphasic mesothelioma.

    PubMed

    Wu, Di; Hiroshima, Kenzo; Yusa, Toshikazu; Ozaki, Daisuke; Koh, Eitetsu; Sekine, Yasuo; Matsumoto, Shinji; Nabeshima, Kazuki; Sato, Ayuko; Tsujimura, Tohru; Yamakawa, Hisami; Tada, Yuji; Shimada, Hideaki; Tagawa, Masatoshi

    2017-02-01

    Malignant mesothelioma is a highly aggressive neoplasm, and the histologic subtype is one of the most reliable prognostic factors. Some biphasic mesotheliomas are difficult to distinguish from epithelioid mesotheliomas with atypical fibrous stroma. The aim of this study was to analyze p16/CDKN2A deletions in mesotheliomas by fluorescence in situ hybridization (FISH) and BAP1 immunohistochemistry to evaluate their potential role in the diagnosis of biphasic mesothelioma. We collected 38 cases of pleural mesothelioma. The results of this study clearly distinguished 29 cases of biphasic mesothelioma from 9 cases of epithelioid mesothelioma. Overall, 96.6% (28/29) of biphasic mesotheliomas showed homozygous deletion of p16/CDKN2A. Homozygous deletion of p16/CDKN2A was observed in 18 (94.7%) of 19 biphasic mesotheliomas, with 100% concordance of the p16/CDKN2A deletion status between the epithelioid and sarcomatoid components in each case. Homozygous deletion of p16/CDKN2A was observed in 7 (77.8%) of 9 epithelioid mesotheliomas but not in the fibrous stroma. BAP1 loss was observed in 5 (38.5%) of 13 biphasic mesotheliomas, in both the epithelioid and sarcomatoid components, and in 5 (62.5%) of 8 epithelioid mesotheliomas, but not in the fibrous stroma. Homozygous deletion of p16/CDKN2A is common in biphasic mesotheliomas, and analysis of only one component of a mesothelioma is sufficient to show that the tumor is malignant. However, compared with histology alone, FISH analysis of p16/CDKN2A status and BAP1 immunohistochemistry in the spindled mesothelium provide a more objective means of differentiating between biphasic mesothelioma and epithelioid mesothelioma with atypical stromal cells.

  14. Parameter expansion for estimation of reduced rank covariance matrices (Open Access publication)

    PubMed Central

    Meyer, Karin

    2008-01-01

    Parameter expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. Convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme. PMID:18096112

  15. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  16. Microgrids and distributed generation systems: Control, operation, coordination and planning

    NASA Astrophysics Data System (ADS)

    Che, Liang

    Distributed energy resources (DERs), which include distributed generation (DG), distributed energy storage systems, and adjustable loads, are key components in microgrid operations. A microgrid is a small electric power system integrated with on-site DERs to serve all or some portion of the local load, connected to the utility grid through the point of common coupling (PCC). Microgrids can operate in both grid-connected mode and island mode. The structure and components of hierarchical control for the microgrid at Illinois Institute of Technology (IIT) are discussed and analyzed, and case studies address the reliable and economic operation of the IIT microgrid. The simulation results demonstrate that hierarchical control and the coordination strategy for DERs are an effective way of optimizing the economic operation and reliability of microgrids. The benefits and challenges of DC microgrids are addressed with a DC model of the IIT microgrid. We present a hierarchical control strategy, comprising primary, secondary, and tertiary controls, for the economic operation and resilience of a DC microgrid; the simulation results verify that the proposed coordinated strategy is an effective way of ensuring the resilient response of DC microgrids to emergencies and of optimizing their economic operation at steady state. The concept and prototype of a community microgrid interconnecting multiple microgrids in a community are then proposed, and two lines of work are conducted. For coordination, a novel three-level hierarchical strategy is proposed to coordinate the optimal power exchanges among neighboring microgrids. For planning, a multi-microgrid interconnection planning framework using a probabilistic minimal cut set (MCS) based iterative methodology is proposed to enhance economics, resilience, and reliability in multi-microgrid operations. The implementation of high-reliability microgrids also requires proper protection schemes that function effectively in both grid-connected and island modes; a communication-assisted four-level hierarchical protection strategy is therefore presented and tested on a loop-structured microgrid, and the simulation results demonstrate it to be an effective and efficient option for microgrid protection. Additionally, microgrid topology ought to be optimally planned; to address this, a graph-partitioning and integer-programming integrated methodology is proposed. This last work is not included in the dissertation, and interested readers are referred to our related publication.

  17. A new technique in the global reliability of cyclic communications network

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1989-01-01

    The global reliability of a communications network is the probability that given any pair of nodes, there exists a viable path between them. A characterization of connectivity, for a given class of networks, can enable one to find this reliability. Such a characterization is described for a useful class of undirected networks called daisy-chained or braided networks. This leads to a new method of quickly computing the global reliability of these networks. Asymptotic behavior in terms of component reliability is related to geometric properties of the given graph. Generalization of the technique is discussed.
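
    To make the quantity concrete, the brute-force sketch below computes the global (all-terminal) reliability of a small undirected network, assuming each link works independently with probability p. It enumerates all 2^m link states, which is exactly the exponential cost that structural characterizations such as the one described here avoid; the example graph and link reliability are hypothetical.

        from itertools import product

        def connected(n, edges):
            # Union-find connectivity check on nodes 0..n-1.
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for u, v in edges:
                parent[find(u)] = find(v)
            return len({find(i) for i in range(n)}) == 1

        def global_reliability(n, edges, p):
            """Exact all-terminal reliability by enumerating link states."""
            rel = 0.0
            for state in product([0, 1], repeat=len(edges)):
                up = [e for e, s in zip(edges, state) if s]
                prob = 1.0
                for s in state:
                    prob *= p if s else (1 - p)
                if connected(n, up):
                    rel += prob
            return rel

        # A 4-node ring with one chord, each link having reliability 0.95.
        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
        print(global_reliability(4, edges, 0.95))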

  18. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how the interaction of subject distribution, sample size, and level of rater disagreement affects the ICC, and it provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, the ICC for the convex distribution is smaller than the ICC for the uniform distribution, which in turn is smaller than the ICC for the concave distribution. The variance component estimates also show that the dissimilarity of the ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not to the scale quality component of rater error variability. The dependency of the ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of a uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in a low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on the ICC.
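
    A minimal simulation in the spirit of the study, assuming a one-way random-effects model; the mapping of the subject distributions onto particular Beta shapes, and all parameter values, are assumptions of this sketch rather than the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def icc_1(scores):
            """One-way random-effects ICC(1,1) from an (n_subjects, k_raters) array."""
            n, k = scores.shape
            grand = scores.mean()
            msb = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            msw = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        def simulate(shape_name, n=80, k=3, sigma_e=0.15):
            # Subject "true" scores drawn from Beta shapes standing in for the
            # paper's subject distributions (this mapping is an assumption).
            shapes = {"u_shaped": (0.5, 0.5), "uniform": (1, 1), "bell": (2, 2)}
            true = rng.beta(*shapes[shape_name], size=n)
            return true[:, None] + rng.normal(0.0, sigma_e, size=(n, k))

        for name in ("u_shaped", "uniform", "bell"):
            print(name, round(icc_1(simulate(name)), 3))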

  19. Test-retest reliability of a computer-assisted self-administered questionnaire on early life exposure in a nasopharyngeal carcinoma case-control study.

    PubMed

    Mai, Zhi-Ming; Lin, Jia-Huang; Chiang, Shing-Chun; Ngan, Roger Kai-Cheong; Kwong, Dora Lai-Wan; Ng, Wai-Tong; Ng, Alice Wan-Ying; Yuen, Kam-Tong; Ip, Kai-Ming; Chan, Yap-Hang; Lee, Anne Wing-Mui; Ho, Sai-Yin; Lung, Maria Li; Lam, Tai-Hing

    2018-05-04

    We evaluated the reliability of early life nasopharyngeal carcinoma (NPC) aetiology factors in the questionnaire of an NPC case-control study in Hong Kong during 2014-2017. A total of 140 subjects aged 18+ completed the same computer-assisted questionnaire twice, separated by at least 2 weeks. The questionnaire included most known NPC aetiology factors, and the present analysis focused on early life exposure. Test-retest reliability of all 285 questionnaire items was assessed in all subjects and in 5 subgroups defined by cases/controls, sex, time between the 1st and 2nd questionnaires (2-29/≥30 weeks), education (secondary or less/postsecondary), and age (25-44/45-59/60+ years) at the first questionnaire. The reliability of items on dietary habits, body figure, skin tone and sun exposure in early life periods (age 6-12 and 13-18) was moderate-to-almost perfect, and most other items had fair-to-substantial reliability in all life periods (age 6-12, 13-18 and 19-30, and 10 years ago). Differences in reliability across strata of the 5 subgroups were observed in only a few items. This study is the first to report the reliability of an NPC questionnaire and to make the questionnaire available online. Overall, our questionnaire had acceptable reliability, suggesting that previous NPC study results on the same risk factors would have similar reliability.

  20. SSME component assembly and life management expert system

    NASA Technical Reports Server (NTRS)

    Ali, M.; Dietz, W. E.; Ferber, H. J.

    1989-01-01

    The space shuttle utilizes several rocket engine systems, all of which must function with a high degree of reliability for successful mission completion. The space shuttle main engine (SSME) is by far the most complex of the rocket engine systems and is designed to be reusable. The reusability of spacecraft systems introduces many problems related to testing, reliability, and logistics. Components must be assembled from parts inventories in a manner that most effectively utilizes the available parts. Assembly must be scheduled to efficiently utilize available assembly benches while still maintaining flight schedules. Assembled components must be assigned to as many contiguous flights as possible to minimize component changes, and each component must undergo a rigorous testing program prior to flight. In addition, testing and assembly of flight engines and components must be done in conjunction with the assembly and testing of developmental engines and components. The development, testing, manufacture, and flight assignment of the engine fleet involve the satisfaction of many logistical and operational requirements, subject to many constraints. The purpose of the SSME Component Assembly and Life Management Expert System (CALMES) is to assist the engine assembly and scheduling process and to ensure that these activities utilize available resources as efficiently as possible.

  1. Laser beam soldering of micro-optical components

    NASA Astrophysics Data System (ADS)

    Eberhardt, R.

    2003-05-01

    Ongoing miniaturisation, increasing requirements within optical assemblies, and the processing of temperature-sensitive components demand innovative selective joining techniques. So far, adhesive bonding has primarily been used to assemble and adjust hybrid micro-optical systems; however, the properties of the organic polymers used in the adhesives limit the application of these systems. In the fields of telecommunication and lithography, an enhancement of existing joining techniques is necessary to improve properties such as humidity resistance, laser stability, UV stability, thermal cycle reliability and lifetime reliability. Against this background, laser beam soldering of optical components is a reasonable alternative joining technology. Properties such as time- and area-restricted energy input, control of the energy input via the process temperature, the possibility of direct or indirect heating of the components, and the absence of mechanical contact between the joining tool and the components provide good conditions for meeting the requirements on a joining technology for sensitive optical components. In addition to the laser soldering head, the assembly of optical components requires positioning units to adjust the position of the components with high accuracy before joining. Furthermore, suitable measurement methods to characterize the soldered assemblies (for instance, in terms of position tolerances) need to be developed.

  2. Reliability of Causality Assessment for Drug, Herbal and Dietary Supplement Hepatotoxicity in the Drug-Induced Liver Injury Network (DILIN)

    PubMed Central

    Hayashi, Paul H.; Barnhart, Huiman X.; Fontana, Robert J.; Chalasani, Naga; Davern, Timothy J.; Talwalkar, Jayant A.; Reddy, K. Rajender; Stolz, Andrew A.; Hoofnagle, Jay H.; Rockey, Don C.

    2014-01-01

    Background Due to the lack of objective tests to diagnose drug-induced liver injury (DILI), causality assessment is a matter of debate. Expert opinion is often used in research and industry, but its test-retest reliability is unknown. Aims To determine the test-retest reliability of the expert opinion process used by the Drug-Induced Liver Injury Network (DILIN). Methods Three DILIN hepatologists adjudicate suspected hepatotoxicity cases to 1 of 5 categories representing levels of likelihood of DILI. Adjudication is based on retrospective assessment of gathered case data that include prospective follow-up information. One hundred randomly selected DILIN cases were re-assessed using the same process as the initial assessment, but by 3 different reviewers in 92% of cases. Results The median time between assessments was 938 days (range: 140–2352). Thirty-one cases involved >1 agent. Weighted kappa statistics for overall case and individual agent category agreement were 0.60 (95% CI: 0.50–0.71) and 0.60 (0.52–0.68), respectively. Overall case adjudications were within one category of each other 93% of the time, while 5% differed by 2 categories and 2% differed by 3 categories. Fourteen percent crossed the 50% threshold of likelihood due to competing diagnoses or atypical timing between drug exposure and injury. Conclusions The DILIN expert opinion causality assessment method has moderate inter-observer reliability but very good agreement within 1 category. A small but important proportion of cases could not be reliably diagnosed as ≥50% likely to be DILI. PMID:24661785
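
    A minimal sketch of the agreement statistic used here: a linearly weighted kappa on an ordinal 5-level scale, computed with scikit-learn. The initial and repeat adjudications below are invented for illustration, not DILIN data.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical initial vs. repeat adjudications on a 5-level
        # likelihood scale (1 = definite ... 5 = unlikely).
        first  = [1, 2, 2, 3, 1, 4, 5, 2, 3, 3, 1, 2, 4, 2, 3]
        second = [1, 2, 3, 3, 2, 4, 5, 2, 2, 3, 1, 1, 4, 2, 4]

        # Linear weights penalize disagreements by how many categories apart
        # they are, matching the "within one category" notion in the abstract.
        print(cohen_kappa_score(first, second, weights="linear"))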

  3. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, such as thermomechanical loads, material properties, and failure theories, as well as geometric variables such as the depth of a beam or the thickness of a membrane, are treated as random parameters with specified distribution functions defined by mean values and standard deviations.

  4. The multiple mini-interview for selecting medical residents: first experience in the Middle East region.

    PubMed

    Ahmed, Ashraf; Qayed, Khalil Ibrahim; Abdulrahman, Mahera; Tavares, Walter; Rosenfeld, Jack

    2014-08-01

    Numerous studies have shown that the multiple mini-interview (MMI) provides a standard, fair, and more reliable method for assessing applicants. This article presents the first MMI experience for the selection of medical residents in a Middle Eastern culture and an Arab country. In 2012, we started using the MMI in interviewing applicants to the residency program of the Dubai Health Authority. The interview process consisted of eight eight-minute structured interview scenarios. Applicants rotated through the stations, each with its own interviewer and scenario; they read the scenario and were asked to discuss the issues with the interviewer. Sociodemographic and station assessment data for each applicant were analyzed to determine whether the MMI was a reliable assessment of non-clinical attributes in the present setting of an Arab country. One hundred and eighty-seven candidates from 27 different countries were interviewed for the Dubai Residency Training Program using the MMI. They were graduates of 5 medical universities within the United Arab Emirates (UAE) and 60 different universities outside the UAE. With this applicant pool, an MMI with eight stations produced absolute and relative reliability of 0.80 and 0.81, respectively. The person × station interaction contributed 63% of the variance, the person 34%, and the station 2%. The MMI has been used in numerous universities in English-speaking countries; it evaluates non-clinical attributes, and this study provides further evidence for its reliability in a different country and culture. The MMI offers a fair and more reliable assessment of applicants to medical residency programs, and the present data show that this assessment technique, applied in a non-Western country and Arab culture, still produced reliable results.

  5. Retest reliability of individual alpha ERD topography assessed by human electroencephalography.

    PubMed

    Vázquez-Marrufo, Manuel; Galvao-Carmona, Alejandro; Benítez Lugo, María Luisa; Ruíz-Peña, Juan Luis; Borges Guerra, Mónica; Izquierdo Ayuso, Guillermo

    2017-01-01

    Despite the immense literature related to diverse human electroencephalographic (EEG) parameters, very few studies have focused on the reliability of these measures. Some of the most studied components (i.e., P3 or MMN) have received more attention regarding the stability of their main parameters, such as latency, amplitude or topography. However, spectral modulations have not been as extensively evaluated considering that different analysis methods are available. The main aim of the present study is to assess the reliability of the latency, amplitude and topography of event-related desynchronization (ERD) for the alpha band (10-14 Hz) observed in a cognitive task (visual oddball). Topography reliability was analysed at different levels (for the group, within-subjects individually and between-subjects individually). The latency for alpha ERD showed stable behaviour between two sessions, and the amplitude exhibited an increment (more negative) in the second session. Alpha ERD topography exhibited a high correlation score between sessions at the group level (r = 0.903, p<0.001). The mean value for within-subject correlations was 0.750 (with a range from 0.391 to 0.954). Regarding between-subject topography comparisons, some subjects showed a highly specific topography, whereas other subjects showed topographies that were more similar to those of other subjects. ERD was mainly stable between the two sessions with the exception of amplitude, which exhibited an increment in the second session. Topography exhibits excellent reliability at the group level; however, it exhibits highly heterogeneous behaviour at the individual level. Considering that the P3 was previously evaluated for this group of subjects, a direct comparison of the correlation scores was possible, and it showed that the ERD component is less reliable in individual topography than in the ERP component (P3).

  7. 30 CFR 27.39 - Tests to determine resistance to vibration.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., EVALUATION, AND APPROVAL OF MINING PRODUCTS METHANE-MONITORING SYSTEMS Test Requirements § 27.39 Tests to... to verify the reliability and durability of a methane-monitoring system or component(s) thereof where...

  8. Advanced Materials and Coatings for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Miyoshi, Kazuhisa

    2004-01-01

    In the application area of aerospace tribology, researchers and developers must guarantee the highest degree of reliability for materials, components, and systems. Even a small tribological failure can lead to catastrophic results. The absence of the required knowledge of tribology, as Professor H.P. Jost has said, can act as a severe brake on aerospace vehicle systems, and indeed has already done so. Materials and coatings must be able to withstand the aerospace environments that they encounter, such as vacuum, terrestrial, ascent, and descent environments; be resistant to the degrading effects of air, water vapor, sand, foreign substances, and radiation during lengthy service; be able to withstand the loads, stresses, and temperatures encountered from acceleration and vibration during operation; and be able to support reliable tribological operation in harsh environments throughout the mission of the vehicle. This presentation is divided into two sections: surface properties, and technology practice related to aerospace tribology. The first section is concerned with the fundamental properties of the surfaces of solid-film lubricants and related materials and coatings, including carbon nanotubes. The second is devoted to applications. Case studies are used to review some aspects of real problems related to aerospace systems to help engineers and scientists understand the tribological issues and failures. The nature of each problem is analyzed, and the tribological properties are examined. All the fundamental studies and case studies were conducted at the NASA Glenn Research Center.

  9. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

    Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of contingency analysis are used to ensure grid reliability and, in power market operation, to test the feasibility of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might overlook critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for many more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model were used to demonstrate the performance; speedups of 3964 on 4096 cores and 7877 on 10,240 cores were obtained. The paper reports the performance of the load balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving performance.
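
    The counter idea can be sketched in a few lines: instead of pre-assigning cases, each worker atomically increments a shared counter to claim the next contingency case, so faster workers simply claim more cases. This is a toy shared-memory analogue in Python; the paper's implementation is a message-passing program running across thousands of cores, and the contingency solve is a placeholder here.

        import multiprocessing as mp

        def worker(counter, lock, n_cases, results):
            """Claim cases one at a time by atomically bumping a shared counter."""
            while True:
                with lock:
                    case_id = counter.value
                    if case_id >= n_cases:
                        return
                    counter.value += 1
                # Placeholder for a real N-2 contingency power-flow solve.
                results.put((case_id, "solved"))

        if __name__ == "__main__":
            n_cases, n_workers = 100, 4
            counter, lock, results = mp.Value("i", 0), mp.Lock(), mp.Queue()
            procs = [mp.Process(target=worker, args=(counter, lock, n_cases, results))
                     for _ in range(n_workers)]
            for p in procs:
                p.start()
            done = [results.get() for _ in range(n_cases)]
            for p in procs:
                p.join()
            print(len(done), "cases completed under dynamic balancing")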

  10. Rapid automation of a cell-based assay using a modular approach: case study of a flow-based Varicella Zoster Virus infectivity assay.

    PubMed

    Joelsson, Daniel; Gates, Irina V; Pacchione, Diana; Wang, Christopher J; Bennett, Philip S; Zhang, Yuhua; McMackin, Jennifer; Frey, Tina; Brodbeck, Kristin C; Baxter, Heather; Barmat, Scott L; Benetti, Luca; Bodmer, Jean-Luc

    2010-06-01

    Vaccine manufacturing requires constant analytical monitoring to ensure reliable quality and a consistent safety profile of the final product. The concentration and bioactivity of the active components of a vaccine are key attributes routinely evaluated throughout the manufacturing cycle and for product release and dosage. In the case of live attenuated virus vaccines, bioactivity is traditionally measured in vitro by infection of susceptible cells with the vaccine followed by quantification of virus replication, cytopathology, or expression of viral markers. These assays are typically multi-day procedures that require trained technicians and constant attention. Considering the need for high volumes of testing, automation and streamlining of these assays are highly desirable. In this study, the automation and streamlining of a complex infectivity assay for Varicella Zoster Virus (VZV)-containing test articles is presented. The automation procedure was completed using existing liquid handling infrastructure in a modular fashion, limiting custom-designed elements to a minimum to facilitate transposition. In addition, cellular senescence data provided an optimal population doubling range for long-term, reliable assay operation at high throughput. The results presented in this study demonstrate a successful automation paradigm resulting in an eightfold increase in throughput while maintaining assay performance characteristics comparable to the original assay.

  11. Measuring stakeholder participation in evaluation: an empirical validation of the Participatory Evaluation Measurement Instrument (PEMI).

    PubMed

    Daigneault, Pierre-Marc; Jacob, Steve; Tremblay, Joël

    2012-08-01

    Stakeholder participation is an important trend in the field of program evaluation. Although a few measurement instruments have been proposed, they either have not been empirically validated or do not cover the full content of the concept. This study consists of a first empirical validation of a measurement instrument that fully covers the content of participation, namely the Participatory Evaluation Measurement Instrument (PEMI). It specifically examines (1) the intercoder reliability of scores derived by two research assistants on published evaluation cases; (2) the convergence between the scores of coders and those of key respondents (i.e., authors); and (3) the convergence between the authors' scores on the PEMI and the Evaluation Involvement Scale (EIS). A purposive sample of 40 cases drawn from the evaluation literature was used to assess reliability. One author per case in this sample was then invited to participate in a survey; 25 fully usable questionnaires were received. Stakeholder participation was measured on nominal and ordinal scales. Cohen's κ, the intraclass correlation coefficient, and Spearman's ρ were used to assess reliability and convergence. Reliability results ranged from fair to excellent. Convergence between coders' and authors' scores ranged from poor to good. Scores derived from the PEMI and the EIS were moderately associated. Evidence from this study is strong in the case of intercoder reliability and ranges from weak to strong in the case of convergent validation. Globally, this suggests that the PEMI can produce scores that are both reliable and valid.

  12. Design of forging process variables under uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2005-02-01

    Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.

  13. ASRC Aerospace Corporation Selects Dynamically Reconfigurable Anadigm(Registered Trademark) FPAA For Advanced Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.

    2003-01-01

    Anadigm(registered trademark) today announced that ASRC Aerospace Corporation has designed Anadigm's dynamically reconfigurable Field Programmable Analog Array (FPAA) technology into an advanced data acquisition system developed under contract for NASA. ASRC Aerospace designed in the Anadigm(registered trademark) FPAA to provide complex analog signal conditioning in its intelligent, self-calibrating, and self-healing advanced data acquisition system (ADAS). The ADAS has potential applications in industrial, manufacturing, and aerospace markets, and offers highly reliable operation while reducing the need for user interaction. Anadigm(registered trademark)'s dynamically reconfigurable FPAAs can be reconfigured in-system by the designer or on the fly by a microprocessor. A single device can thus be programmed to implement multiple analog functions and/or to adapt on the fly to maintain precision operation despite system degradation and aging. In the case of the ASRC advanced data acquisition system, the FPAA helps ensure that the system will continue operating at 100% functionality despite changes in the environment, component degradation, and/or component failures.

  14. Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Klems, Markus; Nimis, Jens; Tai, Stefan

    On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has long been an objective in distributed computing research and industry. Cloud Computing promises to deliver on this objective: consumers are able to rent infrastructure in the Cloud as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis. The acceptance of Cloud Computing, however, depends on the ability of Cloud Computing providers and consumers to implement a model for business value co-creation. Therefore, a systematic approach to measuring the costs and benefits of Cloud Computing is needed. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs with those of conventional IT solutions. We demonstrate by means of representative use cases how our framework can be applied to real-world scenarios.

  15. STAR Online Framework: from Metadata Collection to Event Analysis and System Control

    NASA Astrophysics Data System (ADS)

    Arkhipkin, D.; Lauret, J.

    2015-05-01

    In preparation for the new era of RHIC running (the RHIC-II upgrades and, possibly, the eRHIC era), the STAR experiment is expanding its modular Message Interface and Reliable Architecture framework (MIRA). MIRA has allowed STAR to integrate meta-data collection, monitoring, and online QA components in a very agile and efficient manner using a messaging infrastructure approach. In this paper, we briefly summarize our past achievements, provide an overview of the recent development activities focused on messaging patterns, and describe our experience with the complex event processor (CEP) recently integrated into the MIRA framework. CEP was used in the recent RHIC Run 14, which provided practical use cases. Finally, we present our requirements and expectations for the planned expansion of our systems, which will allow our framework to acquire features typically associated with detector control systems. Special attention is given to aspects related to latency, scalability, and interoperability within a heterogeneous set of services and the various data and meta-data acquisition components coexisting in the STAR online domain.

  16. Reliability and paste process optimization of eutectic and lead-free for mixed packaging

    NASA Technical Reports Server (NTRS)

    Ramkumar, S. M.; Ganeshan, V.; Thenalur, K.; Ghaffarian, R.

    2002-01-01

    This paper reports the results of an experiment that utilized the JPL's area array consortium test vehicle design, containing a myriad of mixed technology components with an OSP finish. The details of the reliability study are presented in this paper.

  17. System reliability approaches for advanced propulsion system structures

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Mahadevan, S.

    1991-01-01

    This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.

  18. Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2008-01-01

    Tantalum capacitors manufactured per military specifications are established reliability components with less than 0.001% failures per 1000 hours for grades D or S, positioning these parts among the electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen, and when they occur they might have catastrophic consequences for the system. To reduce this risk, further development of the screening and qualification system, with special attention to possible deficiencies in the existing procedures, is necessary. The purpose of this work is to evaluate the effect of surge current stress testing on the reliability of the parts under both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed over a range of voltage and temperature conditions exceeding the specified limits. A model to estimate the probability of screening failures after surge current testing is suggested, along with measures to improve the effectiveness of the screening process.

  19. Space Station Freedom power supply commonality via modular design

    NASA Technical Reports Server (NTRS)

    Krauthamer, S.; Gangal, M. D.; Das, R.

    1990-01-01

    At mature operations, Space Station Freedom will need more than 2000 power supplies to feed housekeeping and user loads. Advanced technology power supplies from 20 to 250 W have been hybridized for terrestrial, aerospace, and industry applications in compact, efficient, reliable, lightweight packages compatible with electromagnetic interference requirements. The use of these hybridized packages as modules, either singly or in parallel, to satisfy the wide range of user power supply needs for all elements of the station is proposed. Proposed characteristics for the power supplies include common mechanical packaging, digital control, self-protection, high efficiency at full and partial loads, synchronization capability to reduce electromagnetic interference, redundancy, and soft-start capability. The inherent reliability is improved compared with conventional discrete component power supplies because the hybrid circuits use high-reliability components such as ceramic capacitors. Reliability is further improved over conventional supplies because the hybrid packages, which may be treated as a single part, reduce the parts count in the power supply.

  20. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method.

  1. Impact of monaural frequency compression on binaural fusion at the brainstem level.

    PubMed

    Klauke, Isabelle; Kohl, Manuel C; Hannemann, Ronny; Kornagel, Ulrich; Strauss, Daniel J; Corona-Strauss, Farah I

    2015-08-01

    A classical objective measure of binaural fusion at the brainstem level is the so-called β-wave of the binaural interaction component (BIC) in the auditory brainstem response (ABR). However, reliable detection of this component remains a challenge in some cases. In this study, we investigate the wavelet phase synchronization stability (WPSS) of ABR data for the analysis of binaural fusion and compare it to the BIC. In particular, we examine the impact of monaural nonlinear frequency compression on binaural fusion. As the auditory system is tonotopically organized, an interaural frequency mismatch caused by monaural frequency compression could negatively affect binaural fusion. In this study, only a few subjects showed a detectable β-wave, and in most cases only for low ITDs. However, we present a novel objective measure for binaural fusion that outperforms the current state-of-the-art technique (BIC): the WPSS analysis showed a significant difference between the phase stability of the sum of the monaurally evoked responses and the phase stability of the binaurally evoked ABR. This difference could be an indicator of binaural fusion in the brainstem. Furthermore, we observed that monaural frequency compression could indeed affect binaural fusion, as the WPSS results for this condition differ strongly from the results obtained without frequency compression.

  2. The DASS-14: Improving the Construct Validity and Reliability of the Depression, Anxiety, and Stress Scale in a Cohort of Health Professionals.

    PubMed

    Wise, Frances M; Harris, Darren W; Olver, John H

    2017-01-01

    Considerable research has been undertaken in evaluating the DASS-21 in a variety of clinical populations, but studies of the instrument's psychometric adequacy in healthcare professionals are lacking. This study aimed to establish and improve the construct validity and reliability of the DASS-21 in a cohort of Australian health professionals. 343 rehabilitation health professionals completed the DASS-21, along with a demographic questionnaire. Principal components analysis was performed to identify potential factors in the DASS-21, and factors were interpreted against the theoretical constructs underlying the instrument. Items loading on separate factors were then subjected to reliability analysis to determine the internal consistency of subscales. Items that demonstrated poor fit, or loaded onto more than one factor, were deleted to maximise the reliability of each subscale. Principal components analysis identified three dimensions (depression, anxiety, stress) in a modified version of the DASS-21 (renamed DASS-14), with appropriate construct validity and good reliability (α = 0.73 to 0.88). The three dimensions accounted for over 62% of the variance between items. The modified DASS-14 scale is a more parsimonious measure of depression, anxiety, and stress, with acceptable reliability and construct validity, in rehabilitation health professionals and is appropriate for use in studies of similar populations.
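
    A minimal sketch of the internal-consistency step, computing Cronbach's alpha for a subscale from a respondents-by-items score matrix; the simulated Likert data below are hypothetical, not the study's data.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(1)
        # Hypothetical 0-3 Likert responses: a common factor plus item noise.
        factor = rng.normal(size=(343, 1))
        scores = np.clip(np.rint(1.5 + factor + rng.normal(0, 0.8, size=(343, 7))), 0, 3)
        print(round(cronbach_alpha(scores), 2))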

  3. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve this problem, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. First, conventional reliability allocation methods are introduced, and the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. A cubic transformed function is then established to overcome these limitations. Properties of the new transformed function are discussed with respect to failure severity and failure occurrence, and designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method, with seven criteria considered to compare the results against traditional methods. The allocation results indicate that the new method is more flexible than traditional methods: by employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
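
    The abstract does not give the transform itself, so the sketch below is only a hedged illustration of the general idea: FMEA ratings are passed through a normalized cubic function (an assumed stand-in, not the paper's transform) to form risk weights, and a system failure-rate budget is then allocated so that riskier subsystems receive a smaller share (this direction convention, the subsystem names, and all numbers are likewise assumptions).

        import numpy as np

        def cubic_transform(score, s_min=1, s_max=10):
            """Map a 1-10 FMEA rating to (0, 1] with a cubic curve.

            Unlike an exponential transform, a cubic keeps moderate ratings
            from being crushed toward zero; the floor keeps low-risk parts
            in play.  This particular form is an illustrative assumption.
            """
            x = (score - s_min) / (s_max - s_min)
            return 0.05 + 0.95 * x**3

        # Hypothetical (severity, occurrence) ratings for four subsystems.
        ratings = {"spindle": (9, 4), "turret": (6, 5), "coolant": (4, 7), "chuck": (7, 3)}
        risk = {k: cubic_transform(s) * cubic_transform(o) for k, (s, o) in ratings.items()}

        # Assumed convention: riskier subsystems receive a smaller share of
        # the system failure-rate budget (here 1e-4 failures/h in total).
        inv = {k: 1.0 / v for k, v in risk.items()}
        total = sum(inv.values())
        allocation = {k: 1e-4 * v / total for k, v in inv.items()}
        print(allocation)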

  4. Prediction during language comprehension: benefits, costs, and ERP components.

    PubMed

    Van Petten, Cyma; Luka, Barbara J

    2012-02-01

    Because context has a robust influence on the processing of subsequent words, the idea that readers and listeners predict upcoming words has attracted research attention, but prediction has fallen in and out of favor as a likely factor in normal comprehension. We note that the common sense of this word includes both benefits for confirmed predictions and costs for disconfirmed predictions. The N400 component of the event-related potential (ERP) reliably indexes the benefits of semantic context. Evidence that the N400 is sensitive to the other half of prediction--a cost for failure--is largely absent from the literature. This raises the possibility that "prediction" is not a good description of what comprehenders do. However, it need not be the case that the benefits and costs of prediction are evident in a single ERP component. Research outside of language processing indicates that late positive components of the ERP are very sensitive to disconfirmed predictions. We review late positive components elicited by words that are potentially more or less predictable from preceding sentence context. This survey suggests that late positive responses to unexpected words are fairly common, but that these consist of two distinct components with different scalp topographies, one associated with semantically incongruent words and one associated with congruent words. We conclude with a discussion of the possible cognitive correlates of these distinct late positivities and their relationships with more thoroughly characterized ERP components, namely the P300, P600 response to syntactic errors, and the "old/new effect" in studies of recognition memory.

  5. Reliability of the individual components of the Canadian Armed Forces Physical Employment Standard.

    PubMed

    Stockbrugger, Barry G; Reilly, Tara J; Blacklock, Rachel E; Gagnon, Patrick J

    2018-01-29

    This investigation recruited 24 participants from the Canadian Armed Forces (CAF) and civilian populations to complete 4 separate "best effort" trials of each of the 4 components of the CAF Physical Employment Standard, the FORCE Evaluation (Fitness for Operational Requirements of CAF Employment). Analyses were performed to examine the level of variability and reliability within each component. The results demonstrate that candidates should be provided with at least 1 retest if they have recently completed at least 2 previous best-effort attempts as per the protocol. In addition, the minimal detectable difference is given for each of the 4 components in seconds, which identifies the threshold for subsequent action, either retest or remedial training, for those unable to meet the minimum standard. These results will inform the delivery of this employment standard, serve as a method of accommodation, and provide direction for physical training programs.
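
    A minimal sketch of the standard computation behind such a threshold, assuming the usual formulas MDD = z * sqrt(2) * SEM and SEM = SD * sqrt(1 - ICC); the completion times and ICC below are hypothetical, and the paper's exact procedure may differ.

        import numpy as np

        def minimal_detectable_difference(scores_t1, scores_t2, icc, z=1.96):
            """MDD from test-retest data: z * sqrt(2) * SEM, SEM = SD * sqrt(1 - ICC)."""
            pooled = np.concatenate([scores_t1, scores_t2])
            sem = pooled.std(ddof=1) * np.sqrt(1.0 - icc)
            return z * np.sqrt(2.0) * sem

        # Hypothetical completion times (s) for one component over two trials.
        t1 = np.array([51.2, 47.9, 55.3, 49.8, 52.6, 46.4])
        t2 = np.array([50.1, 48.5, 54.0, 50.9, 51.8, 47.2])
        print(round(minimal_detectable_difference(t1, t2, icc=0.90), 1), "seconds")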

  6. Intraductal oncocytic papillary neoplasms of the pancreas and bile ducts: a description of five new cases and review based on a systematic survey of the literature.

    PubMed

    Liszka, Lukasz; Pajak, Jacek; Zielińska-Pajak, Ewa; Krzych, Lukasz; Gołka, Dariusz; Mrowiec, Sławomir; Lampe, Paweł

    2010-05-01

    Intraductal oncocytic papillary neoplasms (IOPN) are rare tumors of the pancreatic and biliary ductal system. It is not absolutely clear if the molecular and clinicopathologic characteristics of IOPN differ significantly from other related lesions, namely intraductal papillary mucinous neoplasms (IPMN). Therefore it is not clear if it is reasonable to consider IOPN as a separate diagnostic and clinical entity. In order to describe the clinicopathologic characteristics of IOPN and to compare them with the IPMN profile, we performed a systematic review of the literature and additionally studied five previously unreported IOPN cases. IOPN differ from IPMN by lack of K-ras gene mutations in all studied cases. Several differences in the clinical and biological profile between IOPN and IPMN exist, but they are of quantitative rather than of qualitative nature. Additionally, pancreaticobiliary or gastric-foveolar IPMN components may coexist with IOPN component within a single lesion, which suggests at least a partial relation of the pathogenetic pathways of IPMN and IOPN. Importantly, the pathogenesis of accumulation of mitochondria and oxyphilic appearance of IOPN remains unknown. At present, there are no reliable criteria other than histopathological picture and K-ras gene status to differentiate IOPN from IPMN. In particular, no clear differences in optimal treatment options and prognosis between these tumors are known. Further studies are needed to clarify the biology of IOPN and to establish their position in clinicopathologic classifications of pancreatic tumors.

  7. Evaluating the spoken English proficiency of graduates of foreign medical schools.

    PubMed

    Boulet, J R; van Zanten, M; McKinley, D W; Gary, N E

    2001-08-01

    The purpose of this study was to gather additional evidence for the validity and reliability of spoken English proficiency ratings provided by trained standardized patients (SPs) in a high-stakes clinical skills examination. Over 2500 candidates who took the Educational Commission for Foreign Medical Graduates' (ECFMG) Clinical Skills Assessment (CSA) were studied. The CSA consists of 10 or 11 timed clinical encounters, in each of which standardized patients evaluate spoken English proficiency and interpersonal skills. Generalizability theory was used to estimate the consistency of spoken English ratings, and validity coefficients were calculated by correlating summary English ratings with CSA scores and other external criterion measures. Mean spoken English ratings were also compared across candidate background variables. The reliability of the spoken English ratings, based on 10 independent evaluations, was high, and the magnitudes of the associated variance components indicated that the evaluation of a candidate's spoken English proficiency is unlikely to be affected by the choice of cases or SPs used in a given assessment. Proficiency in spoken English was related to native language (English versus other) and scores from the Test of English as a Foreign Language (TOEFL). The pattern of the relationships, both within assessment components and with external criterion measures, suggests that valid measures of spoken English proficiency are obtained. This result, combined with the high reproducibility of the ratings over encounters and SPs, supports the use of trained SPs to measure spoken English skills in a simulated medical environment.

  8. Simulation of MEMS for the Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Mott, Brent; Kuhn, Jonathan; Broduer, Steve (Technical Monitor)

    2001-01-01

    The NASA Goddard Space Flight Center (GSFC) is developing optical micro-electromechanical system (MEMS) components for potential application in Next Generation Space Telescope (NGST) science instruments. In this work, we present an overview of the electro-mechanical simulation of three MEMS components for NGST: a reflective micro-mirror array and a transmissive microshutter array for aperture control in a near-infrared (NIR) multi-object spectrometer, and a large-aperture MEMS Fabry-Perot tunable filter for an NIR wide-field camera. In all cases the device must operate at cryogenic temperatures with low power consumption and low, complementary metal oxide semiconductor (CMOS) compatible, voltages. The goal of our simulation efforts is to adequately predict both the performance and the reliability of the devices during ground handling, launch, and operation, to prevent failures late in the development process and during flight. This goal requires detailed modeling and validation of complex electro-thermal-mechanical interactions and very large non-linear deformations, often involving surface contact. Parameters such as spatial dimensions and device response are often difficult to measure reliably at these small scales. In addition, these devices are fabricated from a wide variety of materials, including surface micro-machined aluminum, reactive ion etched (RIE) silicon nitride, and deep reactive ion etched (DRIE) bulk single-crystal silicon. Together, these conditions make space flight qualification analysis a formidable challenge. These simulations represent NASA/GSFC's first attempt at implementing a comprehensive strategy to address complex MEMS structures.

  9. Decoding and reconstructing color from responses in human visual cortex.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2009-11-04

    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.

  10. Locating, characterizing and minimizing sources of error for a paper case-based structured oral examination in a multi-campus clerkship.

    PubMed

    Kumar, A; Bridgham, R; Potts, M; Gushurst, C; Hamp, M; Passal, D

    2001-01-01

    To determine the consistency of assessment in a new paper case-based structured oral examination in a multi-community pediatrics clerkship, and to identify correctable problems in the administration of the examination and the assessment process. Nine paper case-based oral examinations were audio-taped. From the audio-tapes, five community coordinators scored examiner behaviors and graded student performance. Correlations among examiner-behavior scores were examined, and graphs were used to identify evaluators' grading patterns. The effect of exam-giving on evaluators was assessed by t-test. The reliability of grades was calculated, and the effect of reducing assessment problems was modeled. Exam-givers differed most in their "teaching-guiding" behavior, which correlated negatively with student grades. Exam reliability was lowered mainly by evaluator differences in leniency and grading pattern; the absence of standardization in cases was less important. Although grade reliability was low in early use of the paper case-based oral examination, modeling of the plausible effects of training and monitoring for greater uniformity in administering the examination and assigning scores suggests that more adequate reliabilities can be attained.

  11. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
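
    As a minimal illustration of the FOSM approximation, the sketch below (Python) propagates mean and variance through an assumed linear degradation model x(t) = x0 + r*t with failure threshold D. The model and all numbers are hypothetical, not the paper's battery model.

        # Sketch: first-order second-moment (FOSM) uncertainty in RUL for an
        # assumed linear degradation model x(t) = x0 + r*t, failure at D.
        # RUL g(x0, r) = (D - x0) / r; all parameters are hypothetical.
        import numpy as np

        D = 1.0                              # failure threshold
        mu = np.array([0.4, 0.02])           # means of [x0, r]
        cov = np.diag([0.05**2, 0.004**2])   # covariance (independent here)

        def g(x):                            # RUL as a function of the inputs
            x0, r = x
            return (D - x0) / r

        # First-order Taylor expansion about the mean: E[RUL] ~= g(mu),
        # Var[RUL] ~= grad^T C grad, with the gradient evaluated at the mean.
        grad = np.array([-1.0 / mu[1], -(D - mu[0]) / mu[1] ** 2])
        rul_mean = g(mu)
        rul_std = float(np.sqrt(grad @ cov @ grad))
        print(f"RUL ~= {rul_mean:.1f} +/- {rul_std:.1f} (FOSM, 1-sigma)")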

  12. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE PAGES

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    2017-04-24

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  13. Absolute paleointensity of the Earth's magnetic field during Jurassic: case study of La Negra Formation (northern Chile)

    NASA Astrophysics Data System (ADS)

    Morales, Juan; Goguitchaichvili, Avto; Alva-Valdivia, Luis M.; Urrutia-Fucugauchi, Jaime

    2003-08-01

    We carried out a detailed rock-magnetic and paleointensity study of the ~187 Ma volcanic succession from northern Chile. A total of 32 consecutive lava flows (about 280 oriented standard paleomagnetic cores) were collected at the Tocopilla locality. Only 26 samples with apparently preserved primary magnetic mineralogy and without secondary magnetization components were pre-selected for Thellier paleointensity determination. Eleven samples coming from four lava flows yielded reliable paleointensity estimates. The flow-mean virtual dipole moments range from (3.7±0.9) to (7.1±0.5)×10²² A m². This corresponds to a mean value of (5.0±1.8)×10²² A m², which is in reasonably good agreement with other comparable-quality paleointensity determinations from the Middle Jurassic. Given the large dispersion and the very poor distribution of reliable absolute intensity data, it is hard to draw any firm conclusions regarding the time evolution of the geomagnetic field.

  14. Application of neural networks to software quality modeling of a very large telecommunications system.

    PubMed

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
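
    A minimal sketch of this style of model, assuming synthetic data in place of the proprietary EMERALD measurements: nine design-attribute measures are standardized, reduced to principal components, and fed to a small neural network that flags fault-prone modules.

        # Sketch: flag fault-prone modules from principal components of nine
        # design metrics, loosely following the setup above (synthetic data).
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 9))     # nine design-attribute measures
        y = X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(500) > 1.5

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=5),
                              MLPClassifier(hidden_layer_sizes=(8,),
                                            max_iter=2000, random_state=0))
        model.fit(X[:400], y[:400])
        print("holdout accuracy:", model.score(X[400:], y[400:]))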

  15. A computational framework for prime implicants identification in noncoherent dynamic systems.

    PubMed

    Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico

    2015-01-01

    Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. To this end, the system dynamics is described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are (i) accounting for discrete stochastic transition events and (ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. PIs are then identified by a differential evolution (DE) algorithm that searches for the optimal MVL solution of a covering problem formulated over the MVL accident scenarios. To test the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases.

  16. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  17. A Review of Safety and Design Requirements of the Artificial Pancreas.

    PubMed

    Blauw, Helga; Keith-Hynes, Patrick; Koops, Robin; DeVries, J Hans

    2016-11-01

    As clinical studies with artificial pancreas systems for automated blood glucose control in patients with type 1 diabetes move to unsupervised real-life settings, product development will be a focus of companies over the coming years. Directions or requirements regarding safety in the design of an artificial pancreas are, however, lacking. This review aims to provide an overview and discussion of the safety and design requirements of the artificial pancreas. We performed a structured literature search based on three search components (type 1 diabetes, artificial pancreas, and safety or design) and extended the discussion with our own experiences in developing artificial pancreas systems. The main hazards of the artificial pancreas are over- and under-dosing of insulin and, in the case of a bi-hormonal system, of glucagon or other hormones. For each component of an artificial pancreas, and for the complete system, we identified safety issues related to these hazards and proposed control measures. Prerequisites that enable the control algorithms to provide safe closed-loop control are accurate and reliable input of glucose values, assured hormone delivery, and an efficient user interface. In addition, the system configuration has important implications for safety, as close cooperation and data exchange between the different components is essential.

  18. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.
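
    For readers who want to experiment, the sketch below shows region labeling plus sub-pixel contour extraction using scikit-image. This is a library-based analogue for illustration only, not the paper's single-pass contour tracing algorithm.

        # Sketch: region labeling and sub-pixel contour extraction with
        # scikit-image (an analogue, not the paper's own algorithm).
        import numpy as np
        from skimage import measure

        img = np.zeros((16, 16), dtype=np.uint8)
        img[3:13, 3:13] = 1                   # a square object...
        img[6:10, 6:10] = 0                   # ...with a hole (inner contour)

        labels = measure.label(img, connectivity=2)   # 8-connected foreground
        contours = measure.find_contours(img.astype(float), 0.5)  # sub-pixel
        print(f"{labels.max()} region(s), {len(contours)} contour(s)")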

  19. A modeling framework for exposing risks in complex systems.

    PubMed

    Sharit, J

    2000-08-01

    This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.

  20. Reliability issues of free-space communications systems and networks

    NASA Astrophysics Data System (ADS)

    Willebrand, Heinz A.

    2003-04-01

    Free-space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. More recently, however, some wireline and wireless carriers have started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market than in the enterprise market segment. This paper outlines some of the aspects that are important to ensure carrier-class system reliability.

  1. Development and Validation of a Portable Platform for Deploying Decision-Support Algorithms in Prehospital Settings

    PubMed Central

    Reisner, A. T.; Khitrov, M. Y.; Chen, L.; Blood, A.; Wilkins, K.; Doyle, W.; Wilcox, S.; Denison, T.; Reifman, J.

    2013-01-01

    Background: Advanced decision-support capabilities for prehospital trauma care may prove effective at improving patient care. Such functionality would be possible if an analysis platform were connected to a transport vital-signs monitor. In practice, there are technical challenges to implementing such a system. Not only must each individual component be reliable, but, in addition, the connectivity between components must be reliable. Objective: We describe the development, validation, and deployment of the Automated Processing of Physiologic Registry for Assessment of Injury Severity (APPRAISE) platform, intended to serve as a test bed to help evaluate the performance of decision-support algorithms in a prehospital environment. Methods: We describe the hardware selected and the software implemented, and the procedures used for laboratory and field testing. Results: The APPRAISE platform met performance goals in both laboratory testing (using a vital-sign data simulator) and initial field testing. After its field testing, the platform has been in use on Boston MedFlight air ambulances since February of 2010. Conclusion: These experiences may prove informative to other technology developers and to healthcare stakeholders seeking to invest in connected electronic systems for prehospital as well as in-hospital use. Our experiences illustrate two sets of important questions: are the individual components reliable (e.g., physical integrity, power, core functionality, and end-user interaction) and is the connectivity between components reliable (e.g., communication protocols and the metadata necessary for data interpretation)? While all potential operational issues cannot be fully anticipated and eliminated during development, thoughtful design and phased testing steps can reduce, if not eliminate, technical surprises. PMID:24155791

  2. Optimum Component Design in N-Stage Series Systems to Maximize the Reliability Under Budget Constraint

    DTIC Science & Technology

    2003-03-01

    [No abstract available. The record contains only table-of-contents fragments, referencing a marginal analysis method, figures on improved system configurations that increase basic system reliability, and views of the accompanying software workbook.]

  3. Constructing the 'Best' Reliability Data for the Job - Developing Generic Reliability Data from Alternative Sources Early in a Product's Development Phase

    NASA Technical Reports Server (NTRS)

    Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.

    2016-01-01

    Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with a lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia, and vendors across many areas have spent decades collecting, analyzing, and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but it can also be compartmented by industry and difficult to find out about, access, or manipulate. One method incorporates processes for reviewing these existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes affect the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may be updated based on test data or observed operational constraints and failures, e.g., using Bayesian techniques. This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and the mathematical use of this data in early reliability estimation and analyses.
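
    As a minimal sketch of the Bayesian step described above, the fragment below treats a composite analog failure rate as a gamma prior and updates it with project test data using the conjugate gamma-Poisson relationship. All rates, pseudo-counts, and test hours are hypothetical.

        # Sketch: a composite analog failure rate as a gamma prior, updated
        # with observed test data (conjugate gamma-Poisson; numbers invented).
        from scipy import stats

        # Prior from analog data: mean rate 2e-5 /hr, loosely held
        # (equivalent to 2 pseudo-failures in 1e5 pseudo-hours).
        a0, b0 = 2.0, 1e5

        failures, hours = 1, 30_000      # observed in the project's own testing
        a1, b1 = a0 + failures, b0 + hours

        post = stats.gamma(a=a1, scale=1.0 / b1)
        print(f"posterior mean rate: {post.mean():.2e} /hr, "
              f"95% upper bound: {post.ppf(0.95):.2e} /hr")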

  4. The dilemma of controlling cultural eutrophication of lakes

    PubMed Central

    Schindler, David W.

    2012-01-01

    The management of eutrophication has been impeded by reliance on short-term experimental additions of nutrients to bottles and mesocosms. These measures of proximate nutrient limitation fail to account for the gradual changes in biogeochemical nutrient cycles and nutrient fluxes from sediments, and succession of communities that are important components of whole-ecosystem responses. Erroneous assumptions about ecosystem processes and lack of accounting for hysteresis during lake recovery have further confused management of eutrophication. I conclude that long-term, whole-ecosystem experiments and case histories of lake recovery provide the only reliable evidence for policies to reduce eutrophication. The only method that has had proven success in reducing the eutrophication of lakes is reducing input of phosphorus. There are no case histories or long-term ecosystem-scale experiments to support recent claims that to reduce eutrophication of lakes, nitrogen must be controlled instead of or in addition to phosphorus. Before expensive policies to reduce nitrogen input are implemented, they require ecosystem-scale verification. The recent claim that the ‘phosphorus paradigm’ for recovering lakes from eutrophication has been ‘eroded’ has no basis. Instead, the case for phosphorus control has been strengthened by numerous case histories and large-scale experiments spanning several decades. PMID:22915669

  5. Interrater reliability of Violence Risk Appraisal Guide scores provided in Canadian criminal proceedings.

    PubMed

    Edens, John F; Penson, Brittany N; Ruchensky, Jared R; Cox, Jennifer; Smith, Shannon Toney

    2016-12-01

    Published research suggests that most violence risk assessment tools have relatively high levels of interrater reliability, but recent evidence of inconsistent scores among forensic examiners in adversarial settings raises concerns about the "field reliability" of such measures. This study specifically examined the reliability of Violence Risk Appraisal Guide (VRAG) scores in Canadian criminal cases identified in the legal database LexisNexis. Over 250 reported cases were located that made mention of the VRAG, with 42 of these cases containing 2 or more scores that could be submitted to interrater reliability analyses. Overall, scores were skewed toward higher risk categories. The intraclass correlation (ICC(A,1)) was .66, with pairs of forensic examiners placing defendants into the same VRAG risk "bin" in 68% of the cases. For categorical risk statements (i.e., low, moderate, high), examiners provided converging assessment results in most instances (86%). In terms of potential predictors of rater disagreement, there was no evidence for adversarial allegiance in our sample. Rater disagreement in the scoring of one VRAG item (Psychopathy Checklist-Revised; Hare, 2003), however, strongly predicted rater disagreement in the scoring of the VRAG (r = .58).
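
    A minimal sketch of the statistic used here: ICC(A,1), the two-way random-effects, absolute-agreement, single-rater intraclass correlation of McGraw and Wong (1996), computed from a targets-by-raters score matrix. The scores below are invented for illustration, not the study's data.

        # Sketch: ICC(A,1) from an (n targets x k raters) score matrix.
        import numpy as np

        def icc_a1(Y):
            n, k = Y.shape
            grand = Y.mean()
            ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            ss_err = ((Y - Y.mean(axis=1, keepdims=True)
                         - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
            ms_err = ss_err / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        # Hypothetical totals: 6 defendants, each scored by 2 examiners.
        scores = np.array([[14, 17], [20, 19], [8, 13],
                           [25, 24], [11, 16], [18, 18]], dtype=float)
        print(f"ICC(A,1) = {icc_a1(scores):.2f}")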

  6. Design of ceramic components with the NASA/CARES computer program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    The ceramics analysis and reliability evaluation of structures (CARES) computer program is described. The primary function of the code is to calculate the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. CARES uses results from MSC/NASTRAN or ANSYS finite-element analysis programs to evaluate how inherent surface and/or volume type flaws affect component reliability. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using a least-squares analysis or a maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, 90 percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan 90 percent confidence band values are also provided. Examples are provided to illustrate the various features of CARES.
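
    The sketch below illustrates one ingredient of a CARES-style calculation: a fast-fracture failure probability assembled from element stresses using the two-parameter Weibull model under the principle of independent action. The element volumes, stresses, and Weibull parameters are made up; a real analysis would take them from finite-element results and specimen data.

        # Sketch: failure probability under the two-parameter Weibull model
        # with the principle of independent action (PIA); data invented.
        import numpy as np

        m, sigma0 = 10.0, 350.0      # Weibull modulus; scale parameter (MPa,
                                     # volume-normalized units assumed)

        # Per-element volume (m^3) and principal stresses (MPa).
        vol = np.array([1e-6, 2e-6, 1.5e-6])
        sig = np.array([[120.0,  80.0,  0.0],
                        [200.0,  50.0, 10.0],
                        [ 90.0,  90.0, 40.0]])

        # PIA: each tensile principal stress contributes independently.
        risk = (vol[:, None] * (np.clip(sig, 0, None) / sigma0) ** m).sum()
        pf = 1.0 - np.exp(-risk)
        print(f"component failure probability: {pf:.3e}")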

  7. Reliability Centred Maintenance (RCM) Analysis of Laser Machine in Filling Lithos at PT X

    NASA Astrophysics Data System (ADS)

    Suryono, M. A. E.; Rosyidi, C. N.

    2018-03-01

    PT X operates automated machines for sixteen hours per day; the machines must therefore be maintained to preserve their availability. The aim of this research is to determine maintenance tasks according to the causes of component failure using Reliability Centred Maintenance (RCM) and to determine the optimal inspection frequency for the machines in the filling-lithos process. In this research, RCM is used as an analysis tool to identify the critical component and to find the optimal inspection frequency that maximizes the machine's reliability. From the analysis, we found that the critical machine in the filling-lithos process is the laser machine in Line 2. We then determined the causes of the machine's failures. The Lastube component has the highest Risk Priority Number (RPN) among the components, which include the power supply, lens, chiller, laser siren, encoder, conveyor, and mirror galvo. Most of the components have operational consequences; the others have hidden-failure and safety consequences. Time-directed life-renewal tasks, failure-finding tasks, and servicing tasks can be used to address these consequences. The data analysis shows that inspection of the laser machine must be performed once a month as preventive maintenance to reduce downtime.
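
    The RPN ranking step is easy to make concrete. The sketch below (Python) ranks failure modes by RPN = severity x occurrence x detection; the scores and failure-mode names are invented for illustration, not the study's data.

        # Sketch: rank failure modes by Risk Priority Number (RPN), the FMEA
        # step used in RCM. Severity/occurrence/detection scores are invented.
        failure_modes = {
            "lastube degradation":  (8, 6, 5),
            "power supply failure": (7, 3, 3),
            "chiller malfunction":  (6, 4, 4),
            "mirror galvo drift":   (5, 4, 6),
        }

        def rpn(sod):
            s, o, d = sod
            return s * o * d

        for mode, sod in sorted(failure_modes.items(),
                                key=lambda kv: rpn(kv[1]), reverse=True):
            print(f"{mode:22s} RPN = {rpn(sod)}")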

  8. Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?

    ERIC Educational Resources Information Center

    Goodwin, Laura D.; Goodwin, William L.

    1984-01-01

    The views of prominent qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…

  9. A longitudinal examination of event-related potentials sensitive to monetary reward and loss feedback from late childhood to middle adolescence.

    PubMed

    Kujawa, Autumn; Carroll, Ashley; Mumper, Emma; Mukherjee, Dahlia; Kessel, Ellen M; Olino, Thomas; Hajcak, Greg; Klein, Daniel N

    2017-11-04

    Brain regions involved in reward processing undergo developmental changes from childhood to adolescence, and alterations in reward-related brain function are thought to contribute to the development of psychopathology. Event-related potentials (ERPs), such as the reward positivity (RewP) component, are valid measures of reward responsiveness that are easily assessed across development and provide insight into the temporal dynamics of reward processing. Little work has systematically examined developmental changes in ERPs sensitive to reward. In this longitudinal study of 75 youth assessed 3 times across 6 years, we used principal components analyses (PCA) to differentiate ERPs sensitive to monetary reward and loss feedback in late childhood, early adolescence, and middle adolescence. We then tested the reliability of, and developmental changes in, the ERPs. A greater number of ERP components differentiated reward and loss feedback in late childhood compared to adolescence, but the components in childhood accounted for only a small proportion of variance. A component consistent with the RewP was the only one to emerge consistently at each of the 3 assessments. The RewP demonstrated acceptable reliability, particularly from early to middle adolescence, though reliability estimates varied depending on the scoring approach and developmental period. The magnitude of the RewP component did not significantly change across time. Results provide insight into developmental changes in the structure of ERPs sensitive to reward, and indicate that the RewP is a consistently observed and relatively stable measure of reward responsiveness, particularly across adolescence.

  10. Reliability Analysis of RSG-GAS Primary Cooling System to Support Aging Management Program

    NASA Astrophysics Data System (ADS)

    Deswandri; Subekti, M.; Sunaryo, Geni Rina

    2018-02-01

    The G.A. Siwabessy Multipurpose Research Reactor (RSG-GAS), which has been operating since 1987, is one of the main facilities supporting the research, development, and application of nuclear energy programs at BATAN. To date, the reactor has been operated safely and securely. However, after nearly 30 years of operation, the reactor's structures, systems, and components (SSCs) have begun to age. Aging degrades the reliability and safety performance of the reactor, so an aging management program is needed to address these issues. One element of aging management is to evaluate the safety and reliability of the system and to screen the critical components to be managed. One method that can be used for this purpose is fault tree analysis (FTA). In this paper, the FTA method is used to screen the critical components of the RSG-GAS primary cooling system. The evaluation results show that the primary isolation valves are the dominant basic events contributing to system failure.
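
    A minimal sketch of the screening step an FTA supports: summing minimal-cut-set probabilities under the rare-event approximation and ranking the cut sets. The cut sets and probabilities below are hypothetical, chosen only to mirror the finding that the isolation valves dominate.

        # Sketch: top-event probability from minimal cut sets via the
        # rare-event approximation (all probabilities invented).
        p = {"ISO_VALVE_A": 2e-3, "ISO_VALVE_B": 2e-3,
             "PUMP_1": 5e-4, "PUMP_2": 5e-4}

        # Hypothetical minimal cut sets for loss of primary cooling.
        cut_sets = [("ISO_VALVE_A",), ("ISO_VALVE_B",), ("PUMP_1", "PUMP_2")]

        def cut_prob(cs):                # independence assumed within a set
            prob = 1.0
            for event in cs:
                prob *= p[event]
            return prob

        top = sum(cut_prob(cs) for cs in cut_sets)   # first-order bound
        dominant = max(cut_sets, key=cut_prob)
        print(f"P(top) ~= {top:.2e}; dominant cut set: {dominant}")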

  11. A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components

    NASA Technical Reports Server (NTRS)

    Matthews, William T.; Neal, Donald M.

    1997-01-01

    This paper introduces a dual active-load-path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components in an active dual-path system, enhanced by a stress-level bias in one component. The concept is applied to a baseline design: a safe-life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active-path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and for weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient-of-variation levels. Results showed that the biased dual-path system lifetimes, for both first-element failure and residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999, and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.
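
    The sketch below shows the shape of such a Monte Carlo analysis under stated assumptions: lognormal fatigue lives, a stress bias that shortens one path's life, and residual life taken simply as the difference between the two failure times (load redistribution after the first failure is ignored). All distributions and numbers are hypothetical.

        # Sketch: Monte Carlo first-failure and residual life for a dual
        # active-load-path design with a stress bias (toy distributions).
        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000
        life_a = rng.lognormal(np.log(10_000), 0.3, n)   # biased (shorter) path
        life_b = rng.lognormal(np.log(14_000), 0.3, n)   # relieved path

        first_failure = np.minimum(life_a, life_b)
        # Residual life after the first failure; load redistribution ignored.
        residual = np.maximum(life_a, life_b) - first_failure

        R = 0.9999                                       # reliability level
        print(f"first-failure life at R={R}: "
              f"{np.quantile(first_failure, 1 - R):,.0f} cycles")
        print(f"residual life at R={R}: "
              f"{np.quantile(residual, 1 - R):,.0f} cycles")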

  12. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
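
    The MTTF-as-a-sum representation is easy to illustrate for the exponential case. In the sketch below, a 1-out-of-2 load-sharing system is assumed: while both units share the load, each fails at a reduced rate; after the first failure, the survivor carries the full load at a higher rate. The rates are invented.

        # Sketch: MTTF of a 1-out-of-2 load-sharing system with exponential
        # components, as a sum of mean times between successive failures.
        lam_half = 1e-4    # per-unit rate while both units share the load
        lam_full = 2.5e-4  # survivor's rate under the full load

        # Time to first failure ~ Exp(2*lam_half); time from first to second
        # failure ~ Exp(lam_full). MTTF is the sum of the two means.
        mttf = 1.0 / (2.0 * lam_half) + 1.0 / lam_full
        print(f"system MTTF = {mttf:,.0f} hours")   # 5,000 + 4,000 = 9,000 h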

  13. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  14. Space reliability technology - A historical perspective

    NASA Technical Reports Server (NTRS)

    Cohen, H.

    1984-01-01

    The progressive improvement in the reliability of launch vehicles is traced from the Vanguard rocket to the STS. The Vanguard, built with minimal redundancy and a high mass ratio, was used as an operational vehicle midway through its test program in an attempt to meet the perceived challenge represented by Sputnik. The fourth Vanguard failed due to inadequate contamination prevention and a lack of inspection ports. Automatic firing sequences were adopted for the Titan rockets, which were an order of magnitude larger than the Vanguard and therefore had room for interior inspections. Qualification testing and reporting were introduced for components, along with X-ray inspection of fuel tank welds. Dual systems were added for flight-critical components when the Titan became man-rated for the Gemini program. Designs incorporated full failure mode, effects, and criticality analyses for the Apollo program, which exposed the limits of applicability of numerical reliability models. Fault tree analyses and program milestone reviews were initiated. The worth of man-in-the-loop in space activities for reliability was demonstrated with the rescue of Skylab after solar panel and meteoroid shield failures. It is now the reliability of the payload, rather than the vehicle, that is questioned for Shuttle launches.

  15. Space Vehicle Reliability Modeling in DIORAMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tornga, Shawn Robert

    When modeling system performance of space based detection systems it is important to consider spacecraft reliability. As space vehicles age the components become prone to failure for a variety of reasons such as radiation damage. Additionally, some vehicles may lose the ability to maneuver once they exhaust fuel supplies. Typically failure is divided into two categories: engineering mistakes and technology surprise. This document will report on a method of simulating space vehicle reliability in the DIORAMA framework.

  16. Development of KSC program for investigating and generating field failure rates. Reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.
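
    As a sketch of how such field data yield a rate and a confidence factor, the fragment below computes a point estimate and a chi-squared upper confidence bound for a constant (exponential) failure rate from pooled operating hours. The counts are invented, and this is the standard exponential-data relation, not necessarily the handbook's exact procedure.

        # Sketch: field failure-rate estimate and upper confidence bound,
        # assuming a constant failure rate (exponential model); data invented.
        from scipy.stats import chi2

        failures, hours = 3, 150_000   # pooled field experience for one item
        conf = 0.90

        lam_hat = failures / hours
        lam_upper = chi2.ppf(conf, 2 * failures + 2) / (2 * hours)
        print(f"lambda-hat = {lam_hat:.2e}/hr; "
              f"{conf:.0%} upper bound = {lam_upper:.2e}/hr")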

  17. Examining the interrater reliability of the Hare Psychopathy Checklist-Revised across a large sample of trained raters.

    PubMed

    Blais, Julie; Forth, Adelle E; Hare, Robert D

    2017-06-01

    The goal of the current study was to assess the interrater reliability of the Psychopathy Checklist-Revised (PCL-R) among a large sample of trained raters (N = 280). All raters completed PCL-R training at some point between 1989 and 2012 and subsequently provided complete coding for the same 6 practice cases. Overall, 3 major conclusions can be drawn from the results: (a) reliability of individual PCL-R items largely fell below any appropriate standards, while the estimates for Total PCL-R scores and factor scores were good (but not excellent); (b) the cases representing individuals with high psychopathy scores showed better reliability than did the cases of individuals in the moderate to low PCL-R score range; and (c) there was a high degree of variability among raters; however, rater-specific differences had no consistent effect on scoring the PCL-R. Therefore, despite low reliability estimates for individual items, Total scores and factor scores can be reliably scored among trained raters. We temper these conclusions by noting that scoring standardized videotaped case studies does not allow the rater to interact directly with the offender. Real-world PCL-R assessments typically involve a face-to-face interview and much more extensive collateral information. We offer recommendations for new web-based training procedures.

  18. Power Electronics and Electric Machines | Transportation Research | NREL

    Science.gov Websites

    NREL is a go-to resource for information from cutting-edge thermal management research, supporting wide-scale adoption of technologies for the battery, the motor, and other powertrain components. NREL's thermal management and reliability research advances thermal management technologies to improve the performance, cost, and reliability of power electronics and electric machines.

  19. Reliability and Validity of Scores on the IFSP Rating Scale

    ERIC Educational Resources Information Center

    Jung, Lee Ann; McWilliam, R. A.

    2005-01-01

    Evidence is presented regarding the construct validity and internal consistency reliability of scores for an investigator-developed individualized family service plan (IFSP) rating scale. One hundred and twenty IFSPs were rated using a 12-item instrument, the IFSP Rating Scale (McWilliam & Jung, 2001). Using principal components factor…

  20. Internal Consistency Reliability of the Self-Report Antisocial Process Screening Device

    ERIC Educational Resources Information Center

    Poythress, Norman G.; Douglas, Kevin S.; Falkenbach, Diana; Cruise, Keith; Lee, Zina; Murrie, Daniel C.; Vitacco, Michael

    2006-01-01

    The self-report version of the Antisocial Process Screening Device (APSD) has become a popular measure for assessing psychopathic features in justice-involved adolescents. However, the internal consistency reliability of its component scales (Narcissism, Callous-Unemotional, and Impulsivity) has been questioned in several studies. This study…

  1. Solar-Powered Supply Is Light and Reliable

    NASA Technical Reports Server (NTRS)

    Willis, A. E.; Garrett, H.; Matheney, J.

    1982-01-01

    DC supply originally intended for use in solar-powered spacecraft propulsion is lightweight and very reliable. Operates from 100-200 volt output of solar panels to produce 11 different dc voltages, with total demand of 3,138 watts. With exception of specially wound inductors and transformers, system uses readily available components.

  2. Measuring theory of mind in children. Psychometric properties of the ToM Storybooks.

    PubMed

    Blijd-Hoogewys, E M A; van Geert, P L C; Serra, M; Minderaa, R B

    2008-11-01

    Although research on Theory-of-Mind (ToM) is often based on single task measurements, more comprehensive instruments result in a better understanding of ToM development. The ToM Storybooks is a new instrument measuring basic ToM-functioning and associated aspects. There are 34 tasks, tapping various emotions, beliefs, desires and mental-physical distinctions. Four studies on the validity and reliability of the test are presented, in typically developing children (n = 324, 3-12 years) and children with PDD-NOS (n = 30). The ToM Storybooks have good psychometric qualities. A component analysis reveals five components corresponding with the underlying theoretical constructs. The internal consistency, test-retest reliability, inter-rater reliability, construct validity and convergent validity are good. The ToM Storybooks can be used in research as well as in clinical settings.

  3. Impact of data source on travel time reliability assessment.

    DOT National Transportation Integrated Search

    2014-08-01

    Travel time reliability measures are becoming an increasingly important input to mobility and congestion management studies. In the case of the Maryland State Highway Administration, reliability measures are key elements in the agency's Annual ...

  4. Reliability centered maintenance : a case study of railway transit maintenance to achieve optimal performance.

    DOT National Transportation Integrated Search

    2010-12-01

    The purpose of this qualitative case study was to identify the types of obstacles and patterns experienced by a single heavy rail transit agency located in North America that embedded a Reliability Centered Maintenance (RCM) Process. The outcome of t...

  5. Developing a method for specifying the components of behavior change interventions in practice: the example of smoking cessation.

    PubMed

    Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan

    2013-06-01

    There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.

  6. Investigating univariate temporal patterns for intrinsic connectivity networks based on complexity and low-frequency oscillation: a test-retest reliability study.

    PubMed

    Wang, X; Jiao, Y; Tang, T; Wang, H; Lu, Z

    2013-12-19

    Intrinsic connectivity networks (ICNs) are composed of spatial components and time courses. The spatial components of ICNs have been found with moderate-to-high reliability. As far as we know, few studies have focused on the reliability of the temporal patterns of ICNs based on their individual time courses. The goals of this study were twofold: to investigate the test-retest reliability of temporal patterns for ICNs, and to analyze these informative univariate metrics. Additionally, a correlation analysis was performed to enhance interpretability. Our study included three datasets: (a) short- and long-term scans, (b) multi-band echo-planar imaging (mEPI), and (c) eyes open or closed. Using dual regression, we obtained the time courses of ICNs for each subject. To produce temporal patterns for ICNs, we applied two categories of univariate metrics: network-wise complexity and network-wise low-frequency oscillation. Furthermore, we validated the test-retest reliability of each metric. The network-wise temporal patterns for most ICNs (especially the default mode network, DMN) exhibited moderate-to-high reliability and reproducibility under different scan conditions. Network-wise complexity for the DMN exhibited fair reliability (ICC < 0.5) based on eyes-closed sessions. In particular, our results supported that mEPI could be a useful method with high reliability and reproducibility. In addition, these temporal patterns carry physiological meaning, and certain temporal patterns were correlated with the node strength of the corresponding ICN. Overall, network-wise temporal patterns of ICNs were reliable and informative and could be complementary to the spatial patterns of ICNs in further study.

  7. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue

    PubMed Central

    Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.

    2012-01-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. Cracks in damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979

  8. Ready, Reliable, and Relevant: The Army Reserve Component as an Operational Reserve

    DTIC Science & Technology

    2015-05-21

    [No abstract available. The record contains only report-form fragments. Subject terms: Army Reserve Component, Army National Guard, United States Army Reserve, Operational Reserve, Total Force Policy, Mobilization.]

  9. 25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... GAMES § 547.10 What are the minimum standards for Class II gaming system critical events? This section... component can no longer be considered reliable. Accordingly, any game play on the affected component shall... or the medium itself has some fault. Any game play on the affected component shall cease immediately...

  10. 25 CFR 547.10 - What are the minimum standards for Class II gaming system critical events?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... GAMES § 547.10 What are the minimum standards for Class II gaming system critical events? This section... component can no longer be considered reliable. Accordingly, any game play on the affected component shall... or the medium itself has some fault. Any game play on the affected component shall cease immediately...

  11. Factor structure of DSM-IV criteria for obsessive compulsive personality disorder in patients with binge eating disorder.

    PubMed

    Grilo, C M

    2004-01-01

    To examine the factor structure of DSM-IV criteria for obsessive compulsive personality disorder (OCPD) in patients with binge eating disorder (BED). Two hundred and eleven consecutive out-patients with axis I diagnoses of BED were reliably assessed with semi-structured diagnostic interviews. The eight criteria for the OCPD diagnosis were examined with reliability and correlational analyses. Exploratory factor analysis was performed to identify potential components. Cronbach's coefficient alpha for the OCPD criteria was 0.77. Principal components factor analysis with varimax rotation revealed a three-factor solution (rigidity, perfectionism, and miserliness), which accounted for 65% of variance. The DSM-IV criteria for OCPD showed good internal consistency. Exploratory factor analysis, however, revealed three components that may reflect distinct interpersonal, intrapersonal (cognitive), and behavioral features.
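
    For reference, the coefficient alpha reported here is straightforward to compute from an items-by-subjects score matrix. The sketch below implements the standard formula on synthetic binary criterion scores; the sample size matches the study, but everything else is invented.

        # Sketch: Cronbach's alpha from an (n_subjects x k_items) score matrix.
        import numpy as np

        def cronbach_alpha(items):
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_var / total_var)

        rng = np.random.default_rng(7)
        latent = rng.standard_normal((211, 1))    # shared trait; n as in study
        items = (latent + 0.8 * rng.standard_normal((211, 8)) > 0).astype(float)
        print(f"alpha = {cronbach_alpha(items):.2f}")   # eight invented criteria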

  12. NEPP Evaluation of Automotive Grade Tantalum Chip Capacitors

    NASA Technical Reports Server (NTRS)

    Sampson, Mike; Brusse, Jay

    2018-01-01

    Automotive grade tantalum (Ta) chip capacitors are available at lower cost, with smaller physical size and higher volumetric efficiency, compared to military/space grade capacitors. Designers of high-reliability aerospace and military systems would like to take advantage of these attributes while maintaining the high standards for long-term reliable operation they are accustomed to when selecting military-qualified established-reliability tantalum chip capacitors (e.g., MIL-PRF-55365). The objective of this evaluation was to assess the long-term performance of off-the-shelf automotive grade Ta chip capacitors (i.e., manufacturer self-qualified per AEC Q-200). Two lots of case-size-D manganese dioxide (MnO2) cathode Ta chip capacitors from one manufacturer were evaluated. The evaluation consisted of construction analysis, basic electrical parameter characterization, extended long-term (2000 hours) life testing, and some accelerated stress testing. Tests and acceptance criteria were based upon manufacturer datasheets and the Automotive Electronics Council's AEC Q-200 qualification specification for passive electronic components. As received, a few capacitors were marginally above the specified tolerance for capacitance and ESR. X-ray inspection found that the anodes of some devices may not be properly aligned within the molded encapsulation, leaving less than 1 mil of encapsulation thickness. This evaluation found that the long-term life performance of automotive grade Ta chip capacitors is generally within specification limits, suggesting these capacitors may be suitable for some space applications.

  13. Cervical auscultation as an adjunct to the clinical swallow examination: a comparison with fibre-optic endoscopic evaluation of swallowing.

    PubMed

    Bergström, Liza; Svensson, Per; Hartelius, Lena

    2014-10-01

    This prospective, single-blinded study investigated the validity and reliability of cervical auscultation (CA) under two conditions: (1) CA-only, using isolated swallow-sound clips, and (2) CSE + CA, using additional clinical swallow examination (CSE) information such as patient case history and oromotor assessment together with the same swallow-sound clips as in condition one. The two CA conditions were compared against a fibre-optic endoscopic evaluation of swallowing (FEES) reference test. Each CA condition consisted of 18 swallow samples compiled from 12 adult patients consecutively referred to the FEES clinic. Patients' swallow sounds were recorded simultaneously during FEES via a Littmann E3200 electronic stethoscope. These 18 swallow samples were sent to 13 experienced dysphagia clinicians recruited from the UK and Australia who were blinded to the FEES results. Samples were rated in terms of (1) whether dysphagia was present, (2) whether the patient was safe on the consistency trialled, and (3) dysphagia severity. Sensitivity measures ranged from 83-95% and specificity measures from 50-92% across the conditions. Intra-rater agreement ranged from 69-97% total agreement. Inter-rater reliability for dysphagia severity showed substantial agreement (rs = 0.68 and 0.74). Results show good rater reliability for CA-trained speech-language pathologists. Sensitivity and specificity for both CA conditions in this study are comparable to, and often better than, other well-established CSE components.

  14. Detecting fixation on a target using time-frequency distributions of a retinal birefringence scanning signal

    PubMed Central

    2013-01-01

    Background: The fovea, which is the most sensitive part of the retina, is known to have birefringent properties, i.e., it changes the polarization state of light upon reflection. Existing devices use this property to obtain information on the orientation of the fovea and the direction of gaze. Such devices employ specific frequency components that appear during moments of fixation on a target. To detect them, previous methods have used solely the power spectrum of the Fast Fourier Transform (FFT), which, unfortunately, is an integral method, and does not give information as to where exactly the events of interest occur. With very young patients who are not cooperative enough, this presents a problem, because central fixation may be present only during very short-lasting episodes, and can easily be missed by the FFT. Method: This paper presents a method for detecting short-lasting moments of central fixation in existing devices for retinal birefringence scanning, with the goal of a reliable detection of eye alignment. Signal analysis is based on the Continuous Wavelet Transform (CWT), which reliably localizes such events in the time-frequency plane. Even though the characteristic frequencies are not always strongly expressed due to possible artifacts, simple topological analysis of the time-frequency distribution can detect fixation reliably. Results: In all six subjects tested, the CWT allowed precise identification of both frequency components. Moreover, in four of these subjects, episodes of intermittent but definitely present central fixation were detectable, similar to those in Figure 4. A simple FFT is likely to treat them as borderline cases, or entirely miss them, depending on the thresholds used. Conclusion: Joint time-frequency analysis is a powerful tool in the detection of eye alignment, even in a noisy environment. The method is applicable to similar situations, where short-lasting diagnostic events need to be detected in time series acquired by means of scanning some substrate along a specific path. PMID:23668264
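
    A minimal sketch of the idea, using PyWavelets rather than the authors' implementation: a synthetic signal contains a brief 12 Hz burst, the CWT localizes power near that frequency in time, and a simple threshold marks the episode. The frequencies, durations, and threshold are all illustrative.

        # Sketch: localize a transient frequency component with the continuous
        # wavelet transform (PyWavelets); signal and numbers are synthetic.
        import numpy as np
        import pywt

        fs = 200.0                                 # sampling rate, Hz
        t = np.arange(0, 10, 1 / fs)
        sig = 0.2 * np.random.default_rng(3).standard_normal(t.size)
        burst = (t > 4) & (t < 6)                  # brief "fixation" episode
        sig[burst] += np.sin(2 * np.pi * 12 * t[burst])   # 12 Hz appears

        scales = np.arange(2, 64)
        coefs, freqs = pywt.cwt(sig, scales, "morl", sampling_period=1 / fs)

        band = (freqs > 10) & (freqs < 14)         # power near the target
        power = (np.abs(coefs[band]) ** 2).mean(axis=0)
        detected = power > 3 * np.median(power)
        print(f"episode detected over t = "
              f"{t[detected].min():.1f}..{t[detected].max():.1f} s")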

  15. Orbital-optimized third-order Møller-Plesset perturbation theory and its spin-component and spin-opposite scaled variants: Application to symmetry breaking problems

    NASA Astrophysics Data System (ADS)

    Bozkaya, Uǧur

    2011-12-01

    In this research, orbital-optimized third-order Møller-Plesset perturbation theory (OMP3) and its spin-component and spin-opposite scaled variants (SCS-OMP3 and SOS-OMP3) are introduced. Using a Lagrangian-based approach, an efficient, quadratically convergent algorithm for variational optimization of the molecular orbitals (MOs) for third-order Møller-Plesset perturbation theory (MP3) is presented. Explicit equations for response density matrices, the MO gradient, and the Hessian are reported in spin-orbital form. The OMP3, SCS-OMP3, and SOS-OMP3 approaches are compared with the second-order Møller-Plesset perturbation theory (MP2), MP3, coupled-cluster doubles (CCD), optimized-doubles (OD), and coupled-cluster singles and doubles (CCSD) methods. All these methods are applied to O4+, O3, and seven diatomic molecules. Results demonstrate that OMP3 and its variants provide significantly better vibrational frequencies than MP3, CCSD, and OD for the molecules where symmetry-breaking problems are observed. For O4+, the OMP3 prediction of 1343 cm⁻¹ for the ω6 (b3u) mode, where symmetry breaking appears, is even better than those of presumably more reliable methods such as Brueckner doubles (BD; 1194 cm⁻¹) and OD (1193 cm⁻¹); the experimental value is 1320 cm⁻¹. For O3, the predictions of SCS-OMP3 (1143 cm⁻¹) and SOS-OMP3 (1165 cm⁻¹) are remarkably better than that of the more robust OD method (1282 cm⁻¹); the experimental value is 1089 cm⁻¹. For the seven diatomics, again the SCS-OMP3 and SOS-OMP3 methods provide the lowest average errors, |Δωe| = 44 and |Δωe| = 35 cm⁻¹, respectively, while for OD |Δωe| = 161 cm⁻¹ and for CCSD |Δωe| = 106 cm⁻¹. Hence, OMP3 and especially its spin-scaled variants perform much better than MP3, CCSD, and the more robust OD approach for the considered test cases. Therefore, considering both computational cost and reliability, SCS-OMP3 and SOS-OMP3 appear to be the best methods for symmetry-breaking cases, based on the present application results. The OMP3 method offers certain advantages: it provides reliable vibrational frequencies in case of symmetry-breaking problems, especially with spin-scaling tricks; its analytic gradients are easier to compute since there is no need to solve the coupled-perturbed equations for the orbital response; and the computation of one-electron properties is easier because there is no response contribution to the particle density matrices. OMP3 has further advantages over standard MP3, making it promising for excited-state properties via linear response theory.

  16. Reliable High Performance Peta- and Exa-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing number of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) and in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system, or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally, or even produce erroneous results. As supercomputers continue to approach Exascale performance and full-system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have had very limited practical utility, both because of our poor understanding of how real systems are affected by complex faults such as soft-fault-induced bit flips or performance degradations, and because prior work has generally focused on analyzing the behavior of entire software/hardware systems during normal operation and in the face of faults. Because such behaviors are extremely complex, these studies have produced only coarse behavioral models of limited sets of software/hardware stacks, providing little insight into the many different system stacks and applications used in practice. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty operation. By synthesizing models of individual components into whole-system behavior models, my work makes it possible to automatically understand the behavior of arbitrary real-world systems and enable them to tolerate a wide range of system faults. The project follows a multi-pronged research strategy. Section II discusses my work on modeling the behavior of existing applications and systems: Section II.A discusses resilience in the face of soft faults, and Section II.B looks at techniques to tolerate performance faults. Finally, Section III presents an alternative approach that studies how a system should be designed from the ground up to make resilience natural and easy.
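    The infeasibility argument is easy to make concrete with a back-of-envelope sketch; all numbers below are illustrative assumptions, not measured data.

```python
# Back-of-envelope sketch: even extremely reliable components yield
# frequent system-level faults at 1.6 million parts. All numbers are
# illustrative assumptions, not measurements.
components = 1_600_000          # Sequoia-scale core count
mtbf_component_hours = 5.0e6    # assumed per-component MTBF (~570 years)

# With independent, exponentially distributed failures, the aggregate
# fault rate is the sum of the per-component rates.
system_rate = components / mtbf_component_hours        # faults per hour
print(f"system MTBF ~ {1.0 / system_rate:.2f} hours")  # ~3.1 hours
```

    Even with each part failing once in roughly 570 years, the whole machine sees a fault every few hours, which is why whole-system perfection is priced out and fault tolerance must be designed in.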

  17. Reliability analysis of a phasor measurement unit using a generalized fuzzy lambda-tau (GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Power consumption is increasing steadily. To meet the requirement for failure-free power, planning and implementing an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide-area measurement and control systems, and its reliable performance helps assure a failure-free power supply in any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for the controllability and observability of power systems, utilizing the available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique is proposed for this purpose. In the GFLT, the uncertain failure and repair rates of system components are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma, and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique combines fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute several important reliability indices. Furthermore, the critical components of the system are ranked using the RAM-Index, and a sensitivity analysis is performed. The developed technique may help improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
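    For concreteness, here is a minimal Python sketch of one ingredient of such an analysis: alpha-cut interval arithmetic over triangular fuzzy failure (lambda) and repair (tau) rates, propagated through the classical lambda-tau OR-gate expressions. The two-component data are invented, and the paper's full GFLT technique also covers the other membership shapes mentioned above.

```python
# Minimal sketch: alpha-cuts of triangular fuzzy lambda/tau data pushed
# through the classical lambda-tau OR-gate formulas. The two-component
# fault tree and all numbers are invented for illustration.
def tri_alpha_cut(low, mode, high, alpha):
    """Alpha-cut interval [L, U] of a triangular fuzzy number."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def or_gate(l1, t1, l2, t2):
    """Classical lambda-tau expressions for an OR gate."""
    lam = l1 + l2
    tau = (l1 * t1 + l2 * t2) / (l1 + l2)
    return lam, tau

# Triangular fuzzy data (per hour) for two components, assumed spreads.
lam1, tau1 = (0.0017, 0.0020, 0.0023), (1.7, 2.0, 2.3)
lam2, tau2 = (0.0034, 0.0040, 0.0046), (2.6, 3.0, 3.4)

for alpha in (0.0, 0.5, 1.0):
    c1 = tri_alpha_cut(*lam1, alpha)
    r1 = tri_alpha_cut(*tau1, alpha)
    c2 = tri_alpha_cut(*lam2, alpha)
    r2 = tri_alpha_cut(*tau2, alpha)
    # Vertex method: both expressions are monotone in each argument
    # separately, so interval extremes occur at corner combinations.
    corners = [or_gate(l1, t1, l2, t2)
               for l1 in c1 for t1 in r1 for l2 in c2 for t2 in r2]
    lams = [c[0] for c in corners]
    taus = [c[1] for c in corners]
    print(f"alpha={alpha:.1f}  lambda=[{min(lams):.4f}, {max(lams):.4f}]"
          f"  tau=[{min(taus):.2f}, {max(taus):.2f}]")
```

    Sweeping alpha from 0 to 1 recovers the full fuzzy membership of the top-event lambda and tau, from which indices such as availability can then be derived.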

  18. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
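    As an illustration of the reliability question posed here, the following Python sketch estimates by Monte Carlo the probability that each parallel line delivers less than a specified minimum flow. The quadratic pump curve, loss coefficients, distributions, and minimum flows are invented assumptions, not the report's actual network data.

```python
# Monte Carlo sketch: probability that each parallel line delivers less
# than its specified minimum flow under uncertain design variables.
# Pump curve, loss coefficients, and minimums are invented assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Uncertain design variables (normal; means and spreads assumed).
H0 = rng.normal(100.0, 5.0, n)   # pump shutoff head
a  = rng.normal(0.10, 0.01, n)   # pump curve coefficient
K1 = rng.normal(2.0, 0.2, n)     # loss coefficient of line 1
K2 = rng.normal(3.0, 0.3, n)     # loss coefficient of line 2

# Both lines see the same head loss: K1*Q1^2 = K2*Q2^2 with Q1 + Q2 = Qt,
# and the pump head H0 - a*Qt^2 must equal that common loss.
r  = np.sqrt(K1 / K2)                      # Q2 = r * Q1
Qt = np.sqrt(H0 / (a + K1 / (1 + r)**2))   # pump curve = network loss
Q1 = Qt / (1 + r)
Q2 = Qt - Q1

Q1_min, Q2_min = 6.2, 5.1                  # specified minimums (assumed)
print("P(Q1 < min) =", (Q1 < Q1_min).mean())
print("P(Q2 < min) =", (Q2 < Q2_min).mean())
```

    The sampled failure fractions are the reliability estimates; analytical methods such as FORM/SORM would target the same probabilities without sampling.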

  19. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of cores providing a highly reliable thread connects to system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, optional I/O or peripheral devices, etc. The memory nest is attached to the selective pairing facility via a switch or a bus.
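    A toy software analogue of the pairing idea (purely illustrative; the patent describes a hardware facility): execute the same work redundantly on two workers and commit the result only if both agree.

```python
# Toy software analogue of selective core pairing: run the same
# computation redundantly on two workers and compare the results
# before committing them, mimicking one highly reliable thread.
from concurrent.futures import ThreadPoolExecutor

def compute(n):
    # Stand-in for the workload a paired core would execute.
    return sum(i * i for i in range(n))

def paired_run(n):
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(compute, n)
        b = pool.submit(compute, n)
        ra, rb = a.result(), b.result()
    if ra != rb:
        # A hardware pairing facility would raise a machine check or retry.
        raise RuntimeError("paired results diverge: possible soft fault")
    return ra

print(paired_run(10_000))
```

    The "selective" aspect is that pairing can be enabled only for threads that need high reliability, leaving the remaining cores free for independent work.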

  20. Investigation of reliability indicators of information analysis systems based on Markov’s absorbing chain model

    NASA Astrophysics Data System (ADS)

    Gilmanshin, I. R.; Kirpichnikov, A. P.

    2017-09-01

    A study of the algorithm governing the functioning of the early detection module for excessive losses shows that it can be modeled using absorbing Markov chains. Of particular interest are the probabilistic characteristics of this algorithm, studied in order to relate the reliability indicators of individual elements, or the probabilities of occurrence of certain events, to the likelihood of transmitting reliable information. The relations identified during the analysis make it possible to set thresholds on the reliability characteristics of the system components.
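    The machinery behind such an analysis is standard. Below is a minimal Python sketch with an invented toy chain, in which the two absorbing states might represent "reliable information transmitted" versus "loss missed".

```python
# Absorbing Markov chain in canonical form [[Q, R], [0, I]]:
# N = (I - Q)^-1 is the fundamental matrix, N @ 1 the expected number
# of steps to absorption, and B = N @ R the absorption probabilities.
# The 3-transient / 2-absorbing chain below is an invented toy example.
import numpy as np

Q = np.array([[0.0, 0.6, 0.2],   # transitions among transient states
              [0.0, 0.0, 0.7],
              [0.1, 0.0, 0.0]])
R = np.array([[0.2, 0.0],        # transient -> absorbing transitions
              [0.0, 0.3],
              [0.8, 0.1]])
assert np.allclose(np.hstack([Q, R]).sum(axis=1), 1.0)  # valid chain rows

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
steps = N @ np.ones(3)             # expected steps before absorption
B = N @ R                          # absorption probabilities per start state

print("expected steps to absorption:", steps.round(3))
print("absorption probabilities:\n", B.round(3))
```

    Varying the element-level probabilities inside Q and R and observing the effect on B is exactly the kind of relationship the paper uses to set reliability thresholds for individual components.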
