Sample records for validity test failure

  1. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  2. Ecological validity of the five digit test and the oral trails test.

    PubMed

    Paiva, Gabrielle Chequer de Castro; Fialho, Mariana Braga; Costa, Danielle de Souza; Paula, Jonas Jardim de

    2016-01-01

    Tests evaluating the attentional-executive system are widely used in clinical practice. However, the proximity of an objective cognitive test to real-world situations (ecological validity) is not frequently investigated. The present study evaluates the association between measures of the Five Digit Test (FDT) and the Oral Trails Test (OTT) and self-reported cognitive failures in everyday life as measured by the Cognitive Failures Questionnaire (CFQ). Brazilian adults aged 18 to 65 years voluntarily performed the FDT and the OTT and reported the frequency of cognitive failures in their everyday life through the CFQ. After controlling for the age effect, the measures of controlled attentional processes were associated with cognitive failures, yet the cognitive flexibility measures of both the FDT and the OTT accounted for the majority of the variance in most aspects of the CFQ factors. The FDT and OTT measures were predictive of real-world problems such as cognitive failures in everyday activities and situations.

  3. Validation of self assessment patient knowledge questionnaire for heart failure patients.

    PubMed

    Lainscak, Mitja; Keber, Irena

    2005-12-01

    Several studies showed insufficient knowledge and poor compliance with non-pharmacological management in heart failure patients. Only a limited number of validated tools are available to assess their knowledge. The aim of the study was to test our 10-item Patient knowledge questionnaire. The Patient knowledge questionnaire was administered to 42 heart failure patients from a Heart failure clinic and to 40 heart failure patients receiving usual care. Construct validity (Pearson correlation coefficient), internal consistency (Cronbach alpha), reproducibility (Wilcoxon signed rank test), and reliability (chi-square test and Student's t-test for independent samples) were assessed. The overall score of the Patient knowledge questionnaire had the strongest correlation with the question about regular weighing (r=0.69) and the weakest with the question about presence of heart disease (r=0.33). There was a strong correlation between the question about fluid retention and the questions assessing regular weighing (r=0.86), weight of one litre of water (r=0.86), and salt restriction (r=0.57). The Cronbach alpha was 0.74 and could be improved by exclusion of the questions about clear explanation (Cronbach alpha 0.75), importance of fruit, soup, and vegetables (Cronbach alpha 0.75), and self adjustment of diuretic (Cronbach alpha 0.81). During reproducibility testing, 91% to 98% of questions were answered identically. Patients from the Heart failure clinic scored significantly better than patients receiving usual care (7.9 (1.3) vs. 5.7 (2.2), p<0.001). The Patient knowledge questionnaire is a valid and reliable tool to measure knowledge of heart failure patients.

  4. Further examination of embedded performance validity indicators for the Conners' Continuous Performance Test and Brief Test of Attention in a large outpatient clinical sample.

    PubMed

    Sharland, Michael J; Waring, Stephen C; Johnson, Brian P; Taran, Allise M; Rusin, Travis A; Pattock, Andrew M; Palcher, Jeanette A

    2018-01-01

    Assessing test performance validity is a standard clinical practice and although studies have examined the utility of cognitive/memory measures, few have examined attention measures as indicators of performance validity beyond the Reliable Digit Span. The current study further investigates the classification probability of embedded Performance Validity Tests (PVTs) within the Brief Test of Attention (BTA) and the Conners' Continuous Performance Test (CPT-II), in a large clinical sample. This was a retrospective study of 615 patients consecutively referred for comprehensive outpatient neuropsychological evaluation. Non-credible performance was defined two ways: failure on one or more PVTs and failure on two or more PVTs. Classification probability of the BTA and CPT-II into non-credible groups was assessed. Sensitivity, specificity, positive predictive value, and negative predictive value were derived to identify clinically relevant cut-off scores. When using failure on two or more PVTs as the indicator for non-credible responding compared to failure on one or more PVTs, highest classification probability, or area under the curve (AUC), was achieved by the BTA (AUC = .87 vs. .79). CPT-II Omission, Commission, and Total Errors exhibited higher classification probability as well. Overall, these findings corroborate previous findings, extending them to a large clinical sample. BTA and CPT-II are useful embedded performance validity indicators within a clinical battery but should not be used in isolation without other performance validity indicators.
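
    As a rough illustration of the classification statistics quoted above (sensitivity, specificity, positive and negative predictive value for a candidate cutoff), the sketch below computes them from a 2x2 cross-tabulation. The scores, criterion-group labels, cutoff, and the assumption that lower scores flag non-credible performance are all hypothetical, not the study's data.

```python
# Hedged sketch: sensitivity, specificity, PPV, and NPV for a candidate
# embedded-PVT cutoff. All scores, labels, and the cutoff are hypothetical.

def classification_stats(scores, noncredible, cutoff):
    """Treat scores at or below `cutoff` as a positive (non-credible) sign."""
    tp = sum(1 for s, nc in zip(scores, noncredible) if s <= cutoff and nc)
    fp = sum(1 for s, nc in zip(scores, noncredible) if s <= cutoff and not nc)
    fn = sum(1 for s, nc in zip(scores, noncredible) if s > cutoff and nc)
    tn = sum(1 for s, nc in zip(scores, noncredible) if s > cutoff and not nc)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical attention-test scores and criterion labels (True = failed >= 2 PVTs).
scores = [14, 17, 19, 12, 20, 18, 11, 16, 19, 13]
noncredible = [True, False, False, True, False, False, True, False, False, True]
print(classification_stats(scores, noncredible, cutoff=14))
```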

  5. Evaluation of the Effect of the Volume Throughput and Maximum Flux of Low-Surface-Tension Fluids on Bacterial Penetration of 0.2 Micron-Rated Filters during Process-Specific Filter Validation Testing.

    PubMed

    Folmsbee, Martha

    2015-01-01

    Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm²), bacterial load rate (CFU/min/cm²), volume throughput (mL/cm²), and maximum filter flux (mL/min/cm²) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, it facilitated a close examination of the conditions that lead to filter validation failure. In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, another three possible drivers of failure were also identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm², only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm², only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in filter bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm² or the flux range of 7-18 mL/min/cm², and maintaining the differential pressure below 30 psid, could significantly decrease the risk of filter validation failure. However, it is important to keep in mind that these are general trends and some test fluids may not conform to them. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids.
    An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare. However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers will possibly serve to assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to bacterial concentration and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, another three possible drivers of failure were also identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetration-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of filter validation failure. However, it is important to keep in mind that these are general trends and some test fluids may not conform to them. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
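
    The throughput finding quoted above (for example, 9 penetrations in 205 challenges, roughly 4%, within 680-2260 mL/cm²) is simple binned-rate arithmetic. The sketch below shows that calculation on invented filtration records; the field names and values are placeholders, not Pall Corporation data.

```python
# Hedged sketch of the binning arithmetic behind the throughput result.
# The filtration records below are invented for illustration only.

filtrations = [
    {"throughput_ml_per_cm2": 450.0,  "penetration": True},
    {"throughput_ml_per_cm2": 900.0,  "penetration": False},
    {"throughput_ml_per_cm2": 1500.0, "penetration": False},
    {"throughput_ml_per_cm2": 2100.0, "penetration": True},
    {"throughput_ml_per_cm2": 3000.0, "penetration": True},
]

def failure_rate_in_range(records, lo, hi):
    """Count penetrations among filtrations whose throughput falls in [lo, hi]."""
    in_range = [r for r in records if lo <= r["throughput_ml_per_cm2"] <= hi]
    failures = sum(r["penetration"] for r in in_range)
    return failures, len(in_range), 100.0 * failures / len(in_range)

print(failure_rate_in_range(filtrations, 680.0, 2260.0))  # -> (1, 3, 33.33...)
```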

  6. Specificity rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures.

    PubMed

    Gasquoine, Philip G; Weimer, Amy A; Amador, Arnoldo

    2017-04-01

    To measure specificity as failure rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures: (a) the language-format Reliable Digit Span; (b) the visual-perceptual format Test of Memory Malingering; and (c) the visual-perceptual format Dot Counting, using optimal/suboptimal effort cut scores developed for monolingual English speakers. Participants were 61 consecutive referrals, aged between 18 and 65 years, with <16 years of education, who were subjectively bilingual (confirmed via formal assessment) and chose the language of assessment, Spanish or English, for the performance validity tests. Failure rates were 38% for Reliable Digit Span, 3% for the Test of Memory Malingering, and 7% for Dot Counting. For Reliable Digit Span, the failure rates for Spanish (46%) and English (31%) languages of administration did not differ significantly. Optimal/suboptimal effort cut scores derived for monolingual English speakers can be used with Spanish/English bilinguals when using the visual-perceptual format Test of Memory Malingering and Dot Counting. The high failure rate for Reliable Digit Span suggests it should not be used as a performance validity measure with Spanish/English bilinguals, irrespective of the language of test administration, Spanish or English.

  7. Construct validity of the Chinese version of the Self-care of Heart Failure Index determined using structural equation modeling.

    PubMed

    Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu

    2015-01-01

    The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese patients with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. The structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing self-care of Chinese patients with heart failure.

  8. Prevalence of Invalid Performance on Baseline Testing for Sport-Related Concussion by Age and Validity Indicator.

    PubMed

    Abeare, Christopher A; Messa, Isabelle; Zuccato, Brandon G; Merker, Bradley; Erdodi, Laszlo

    2018-03-12

    Estimated base rates of invalid performance on baseline testing (base rates of failure) for the management of sport-related concussion range from 6.1% to 40.0%, depending on the validity indicator used. The instability of this key measure represents a challenge in the clinical interpretation of test results that could undermine the utility of baseline testing. To determine the prevalence of invalid performance on baseline testing and to assess whether the prevalence varies as a function of age and validity indicator. This retrospective, cross-sectional study included data collected between January 1, 2012, and December 31, 2016, from a clinical referral center in the Midwestern United States. Participants included 7897 consecutively tested, equivalently proportioned male and female athletes aged 10 to 21 years, who completed baseline neurocognitive testing for the purpose of concussion management. Baseline assessment was conducted with the Immediate Postconcussion Assessment and Cognitive Testing (ImPACT), a computerized neurocognitive test designed for assessment of concussion. Base rates of failure on published ImPACT validity indicators were compared within and across age groups. Hypotheses were developed after data collection but prior to analyses. Of the 7897 study participants, 4086 (51.7%) were male, mean (SD) age was 14.71 (1.78) years, 7820 (99.0%) were primarily English speaking, and the mean (SD) educational level was 8.79 (1.68) years. The base rate of failure ranged from 6.4% to 47.6% across individual indicators. Most of the sample (55.7%) failed at least 1 of 4 validity indicators. The base rate of failure varied considerably across age groups (117 of 140 [83.6%] for those aged 10 years to 14 of 48 [29.2%] for those aged 21 years), representing a risk ratio of 2.86 (95% CI, 2.60-3.16; P < .001). The results for base rate of failure were surprisingly high overall and varied widely depending on the specific validity indicator and the age of the examinee. The strong age association, with 3 of 4 participants aged 10 to 12 years failing validity indicators, suggests that the clinical interpretation and utility of baseline testing in this age group is questionable. These findings underscore the need for close scrutiny of performance validity indicators on baseline testing across age groups.
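
    The age-group comparison above reduces to a risk ratio computed from two failure proportions. The sketch below shows a generic risk ratio with a standard log-transform 95% confidence interval, using the extreme age-group counts quoted in the abstract; the paper's reported interval (2.60-3.16) presumably comes from its own model, so this generic calculation is illustrative only.

```python
# Hedged sketch: risk ratio with a log-transform 95% CI from 2x2 counts.
# Counts are the age-group extremes quoted in the abstract; the result of this
# generic formula will not necessarily match the interval reported in the paper.
import math

def risk_ratio(a, n1, c, n2, z=1.96):
    """a/n1 = failures/total in group 1; c/n2 = failures/total in group 2."""
    rr = (a / n1) / (c / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, (lo, hi)

rr, ci = risk_ratio(a=117, n1=140, c=14, n2=48)
print(f"RR = {rr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```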

  9. Development of a clinically validated bulk failure test for ceramic crowns.

    PubMed

    Kelly, J Robert; Rungruanganunt, Patchnee; Hunter, Ben; Vailati, Francesca

    2010-10-01

    Traditional testing of ceramic crowns creates a stress state and damage modes that differ greatly from those seen clinically. There is a need to develop and communicate an in vitro testing protocol that is clinically valid. The purpose of this study was to develop an in vitro failure test for ceramic single-unit prostheses that duplicates the failure mechanism and stress state observed in clinically failed prostheses. This article first compares characteristics of traditional load-to-failure tests of ceramic crowns with the growing body of evidence regarding failure origins and stress states at failure from the examination of clinically failed crowns, finite element analysis (FEA), and data from clinical studies. Based on this analysis, an experimental technique was systematically developed and test materials were identified to recreate key aspects of clinical failure in vitro. One potential dentin analog material (an epoxy filled with woven glass fibers; NEMA grade G10) was evaluated for elastic modulus in blunt contact and for bond strength to resin cement as compared to hydrated dentin. Two bases with different elastic moduli (nickel chrome and resin-based composite) were tested for influence on failure loads. The influence of water during storage and loading (both monotonic and cyclic) was examined. Loading piston materials (G10, aluminum, stainless steel) and piston designs were varied to eliminate Hertzian cracking and to improve performance. Testing was extended from a monolayer ceramic (leucite-filled glass) to a bilayer ceramic system (glass-infiltrated alumina). The influence of cyclic rate on mean failure loads was examined (2 Hz, 10 Hz, 20 Hz) with the extremes compared statistically (t test; α=.05). Failure loads were highly influenced by base elastic modulus (t test; P<.001). Cyclic loading while in water significantly decreased mean failure loads (1-way ANOVA; P=.003) versus wet storage/dry cycling (350 N vs. 1270 N). G10 was not significantly different from hydrated dentin in terms of blunt contact elastic behavior or resin cement bond strength. Testing was successful with the bilayered ceramic, and the cycling rate altered mean failure loads only slightly (approximately 5%). Test methods and materials were developed to validly simulate many aspects of clinical failure. Copyright © 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.

  10. Analysis for the Progressive Failure Response of Textile Composite Fuselage Frames

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.; Boitnott, Richard L. (Technical Monitor)

    2002-01-01

    A part of aviation accident mitigation is a crashworthy airframe structure, and an important measure of merit for a crashworthy structure is the amount of kinetic energy that can be absorbed in the crush of the structure. Prediction of the energy absorbed from finite element analyses requires modeling the progressive failure sequence. Progressive failure modes may include material degradation, fracture and crack growth, and buckling and collapse. The design of crashworthy airframe components will benefit from progressive failure analyses that have been validated by tests. The subject of this research is the development of a progressive failure analysis for textile composite, circumferential fuselage frames subjected to a quasi-static, crash-type load. The test data for these frames are reported, and these data, along with stub column test data, are to be used to develop and to validate methods for the progressive failure response.

  11. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.

    PubMed

    Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T

    2018-01-01

    The article aims to suggest clinically useful tools in neuropsychological assessment for efficient use of embedded measures of performance validity. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations for 1) cutoffs for embedded performance validity tests, including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) selecting the number of performance validity measures to administer in an assessment; and 3) hypothetical clinical decision-making models for the use of performance validity testing in a neuropsychological assessment, collectively considering behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, and compliance with task demands. These data-driven clinical suggestions provide a resource to clinicians, encourage more uniform and testable decision making within the field, and guide future research in this area.

  12. Independent validation of the MMPI-2-RF Somatic/Cognitive and Validity scales in TBI Litigants tested for effort.

    PubMed

    Youngjohn, James R; Wershba, Rebecca; Stevenson, Matthew; Sturgeon, John; Thomas, Michael L

    2011-04-01

    The MMPI-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) is replacing the MMPI-2 as the most widely used personality test in neuropsychological assessment, but additional validation studies are needed. Our study examines MMPI-2-RF Validity scales and the newly created Somatic/Cognitive scales in a recently reported sample of 82 traumatic brain injury (TBI) litigants who either passed or failed effort tests (Thomas & Youngjohn, 2009). The restructured Validity scales FBS-r (restructured symptom validity), F-r (restructured infrequent responses), and the newly created Fs (infrequent somatic responses) were not significant predictors of TBI severity. FBS-r was significantly related to passing or failing effort tests, and Fs and F-r showed non-significant trends in the same direction. Elevations on the Somatic/Cognitive scales profile (MLS-malaise, GIC-gastrointestinal complaints, HPC-head pain complaints, NUC-neurological complaints, and COG-cognitive complaints) were significant predictors of effort test failure. Additionally, HPC had the anticipated paradoxical inverse relationship with head injury severity. The Somatic/Cognitive scales as a group were better predictors of effort test failure than the RF Validity scales, which was an unexpected finding. MLS arose as the single best predictor of effort test failure of all RF Validity and Somatic/Cognitive scales. Item overlap analysis revealed that all MLS items are included in the original MMPI-2 Hy scale, making MLS essentially a subscale of Hy. This study validates the MMPI-2-RF as an effective tool for use in neuropsychological assessment of TBI litigants.

  13. The Need and Requirements for Validating Damage Detection Capability

    DTIC Science & Technology

    2011-09-01

    Testing of Airborne Equipment [11], 2) Materials / Structure Certification, 3) NDE (POD) Validation Procedures, 4) Failure Mode Effects and Criticality...Analysis (FMECA), and 5) Cost Benefits Analysis [12]. Existing procedures for environmental testing of airborne equipment ensure flight...e.g. ultrasound or eddy current), damage type or failure conditions to detect, criticality of the damage state (e.g. safety of flight), likelihood of

  14. Validation of the Chinese version of the CogState computerised cognitive assessment battery in Taiwanese patients with heart failure.

    PubMed

    Chou, Cheng-Chen; Pressler, Susan J; Giordani, Bruno; Fetzer, Susan Jane

    2015-11-01

    To evaluate the validity of the Chinese version of the CogState battery, a computerised cognitive test, among patients with heart failure in Taiwan. Cognitive deficits are common in patients with heart failure, and a validated Chinese measurement is required for assessing cognitive change in this population. The CogState computerised battery is a measurement of cognitive function and has been validated in many languages, but not Chinese. A cross-sectional study. A convenience sample consisted of 76 women with heart failure and 64 healthy women in northern Taiwan. Women completed the Chinese version of the CogState battery and the Montreal Cognitive Assessment. Construct validity of the Chinese version of the battery was evaluated by exploratory factor analysis and known-group comparisons. Convergent validity of the CogState tasks was examined by Pearson correlation coefficients. Principal components factor analysis with promax rotation showed two factors reflecting the speed and memory dimensions of the tests. Scores for CogState battery tasks showed significant differences between the heart failure and healthy control groups. Examination of convergent validity of the CogState found a significant association with the Montreal Cognitive Assessment. The Chinese CogState battery has satisfactory construct and convergent validity to measure cognitive deficits in patients with heart failure in Taiwan. The Chinese CogState battery is a valid instrument for detecting cognitive deficits that may be subtle in the early stages, and for identifying changes that provide insights into patients' abilities to implement treatment accurately and consistently. Better interventions tailored to the needs of the cognitively impaired population can be developed. © 2015 John Wiley & Sons Ltd.

  15. Analysis for the Progressive Failure Response of Textile Composite Fuselage Frames

    NASA Technical Reports Server (NTRS)

    Johnson, Eric R.; Boitnott, Richard L. (Technical Monitor)

    2002-01-01

    A part of aviation accident mitigation is a crashworthy airframe structure, and an important measure of merit for a crashworthy structure is the amount of kinetic energy that can be absorbed in the crush of the structure. Prediction of the energy absorbed from finite element analyses requires modeling the progressive failure sequence. Progressive failure modes may include material degradation, fracture and crack growth, and buckling and collapse. The design of crashworthy airframe components will benefit from progressive failure analyses that have been validated by tests. The subject of this research is the development of a progressive failure analysis for a textile composite, circumferential fuselage frame subjected to a quasi-static, crash-type load. The test data for the frame are reported, and these data are used to develop and to validate methods for the progressive failure response.

  16. Herth hope index: psychometric testing of the Chinese version.

    PubMed

    Chan, Keung Sum; Li, Ho Cheung William; Chan, Sally Wai-Chi; Lopez, Violeta

    2012-09-01

    This article is a report on the psychometric testing of the Chinese version of the Herth Hope Index. The availability of a valid and reliable instrument that accurately measures the level of hope in patients with heart failure is crucial before any hope-enhancing interventions can be appropriately planned and evaluated. There is no such instrument for Chinese people. A test-retest, within-subjects design was used. A purposive sample of 120 Hong Kong Chinese patients with heart failure between the ages of 60 and 80 years admitted to two medical wards was recruited during an 8-month period in 2009. Participants were asked to respond to the Chinese version of the Herth Hope Index, the Hamilton Depression Rating Scale, and the Rosenberg Self-Esteem Scale. The internal consistency, content validity, construct validity, and test-retest reliability of the Chinese version of the Herth Hope Index were assessed. The newly translated scale demonstrated adequate internal consistency, good content validity, and appropriate convergent and discriminant validity. Confirmatory factor analysis added further evidence of the construct validity of the scale. Results suggest that the newly translated scale can be used as a self-report assessment tool in assessing the level of hope in Hong Kong Chinese patients with heart failure. © 2011 Blackwell Publishing Ltd.

  17. Translation and validation of the Self-care of Heart Failure Index into Persian.

    PubMed

    Siabani, Soraya; Leeder, Stephen R; Davidson, Patricia M; Najafi, Farid; Hamzeh, Behrooz; Solimani, Akram; Siahbani, Sara; Driscoll, Tim

    2014-01-01

    Chronic heart failure (CHF) is a common, burdensome health problem worldwide. Self-care improves outcomes in patients with CHF. The Self-care of Heart Failure Index (SCHFI) is a well-known scale for assessing self-care. A reliable, valid, and culturally acceptable instrument is needed to develop and test self-care interventions in Iran. We sought to translate and validate a Persian version of the SCHFI v 6.2 (pSCHFI). We translated the SCHFI into Persian using standardized methods. Reliability was evaluated by assessing the Cronbach's α coefficient. Expert opinion, discussion with patients, and confirmatory factor analysis were used to assess face validity, content validity, and construct validity, respectively. The analysis, using 184 participants, showed acceptable internal consistency and construct validity for the 3 subscales of the pSCHFI: self-care maintenance, self-care management, and self-care confidence. The pSCHFI is a valid instrument with acceptable reliability for evaluating self-care in Persian patients with heart failure.

  18. Assessment of heart rate, acidosis, consciousness, oxygenation, and respiratory rate to predict noninvasive ventilation failure in hypoxemic patients.

    PubMed

    Duan, Jun; Han, Xiaoli; Bai, Linfu; Zhou, Lintong; Huang, Shicong

    2017-02-01

    To develop and validate a scale using variables easily obtained at the bedside for prediction of failure of noninvasive ventilation (NIV) in hypoxemic patients. The test cohort comprised 449 patients with hypoxemia who were receiving NIV. This cohort was used to develop a scale that considers heart rate, acidosis, consciousness, oxygenation, and respiratory rate (referred to as the HACOR scale) to predict NIV failure, defined as the need for intubation after NIV intervention. The highest possible score was 25 points. To validate the scale, a separate group of 358 hypoxemic patients were enrolled in the validation cohort. The failure rate of NIV was 47.8% and 39.4% in the test and validation cohorts, respectively. In the test cohort, patients with NIV failure had higher HACOR scores at initiation and after 1, 12, 24, and 48 h of NIV than those with successful NIV. At 1 h of NIV the area under the receiver operating characteristic curve was 0.88, showing good predictive power for NIV failure. Using 5 points as the cutoff value, the sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy for NIV failure were 72.6%, 90.2%, 87.2%, 78.1%, and 81.8%, respectively. These results were confirmed in the validation cohort. Moreover, the diagnostic accuracy for NIV failure exceeded 80% in subgroups classified by diagnosis, age, or disease severity and also at 1, 12, 24, and 48 h of NIV. Among patients with NIV failure with a HACOR score of >5 at 1 h of NIV, hospital mortality was lower in those who were intubated at ≤12 h of NIV than in those intubated later [58/88 (66%) vs. 138/175 (79%); p = 0.03]. The HACOR scale variables are easily obtained at the bedside. The scale appears to be an effective way of predicting NIV failure in hypoxemic patients. Early intubation in high-risk patients may reduce hospital mortality.
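
    To make the scale mechanics concrete, the sketch below shows how a points-based bedside score like HACOR is applied against a cutoff. The component point assignments are invented placeholders; only the general structure (sum component points, compare with the published cutoff of 5 at 1 h of NIV) reflects the abstract, and the real scoring table must be taken from the original paper.

```python
# Hedged sketch of applying a points-based bedside scale against a cutoff.
# The point rules below are placeholders, NOT the published HACOR scoring.

HYPOTHETICAL_POINTS = {
    "heart_rate":       lambda hr: 0 if hr < 120 else 1,
    "ph":               lambda ph: 0 if ph >= 7.35 else 2,
    "gcs":              lambda gcs: 0 if gcs == 15 else 5,
    "pao2_fio2":        lambda pf: 0 if pf >= 200 else 4,
    "respiratory_rate": lambda rr: 0 if rr <= 30 else 2,
}

def hacor_like_score(obs):
    """Sum the component points for one set of bedside observations."""
    return sum(rule(obs[name]) for name, rule in HYPOTHETICAL_POINTS.items())

patient = {"heart_rate": 128, "ph": 7.30, "gcs": 14,
           "pao2_fio2": 150, "respiratory_rate": 34}
score = hacor_like_score(patient)
print(score, "-> high risk of NIV failure" if score > 5 else "-> lower risk")
```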

  19. The role of biaxial stresses in discriminating between meaningful and illusory composite failure theories

    NASA Technical Reports Server (NTRS)

    Hart-Smith, L. J.

    1992-01-01

    The irrelevance of most composite failure criteria to conventional fiber-polymer composites is claimed to have remained undetected primarily because the experiments that can either validate or disprove them are difficult to perform. Uniaxial tests are considered inherently incapable of validating or refuting any composite failure theory because so much of the total load is carried by the fibers aligned in the direction of the load. The Ten-Percent Rule, a simple rule-of-mixtures analysis method, is said to work well only because of this phenomenon. It is stated that failure criteria can be verified for fibrous composites only by biaxial tests, with orthogonal in-plane stresses of the same as well as different signs, because these particular states of combined stress reveal substantial differences between the predictions of laminate strength made by various theories. Three scientifically plausible failure models for fibrous composites are compared, and it is shown that only the in-plane shear test (orthogonal tension and compression) is capable of distinguishing between them. This is because most theories are 'calibrated' against the measured uniaxial tension and compression tests and any cross-plied laminate tests dominated by those same states of stress must inevitably 'confirm' the theory.

  20. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
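
    The calculation the authors question is the risk priority number (RPN), the product of ordinal severity, probability (occurrence), and detectability ratings. A minimal sketch with invented failure modes and ratings is shown below; it also illustrates why ranking by the product treats ordinal ratings as if they were ratio-scale quantities.

```python
# Hedged sketch of the RPN calculation critiqued in the abstract:
# RPN = severity x probability (occurrence) x detectability, each an ordinal
# rating (commonly 1-10). Failure modes and ratings are invented.

failure_modes = [
    {"mode": "dose omitted",        "severity": 7, "probability": 6, "detectability": 5},
    {"mode": "wrong infusion rate", "severity": 9, "probability": 3, "detectability": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["probability"] * fm["detectability"]

# Ranking by RPN is exactly the step the authors question: multiplying
# ordinal ratings treats them as if they were ratio-scale quantities.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(fm["mode"], fm["rpn"])
```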

  1. Development and psychometric evaluation of the Thirst Distress Scale for patients with heart failure.

    PubMed

    Waldréus, Nana; Jaarsma, Tiny; van der Wal, Martje Hl; Kato, Naoko P

    2018-03-01

    Patients with heart failure can experience thirst distress. However, there is no instrument to measure this in patients with heart failure. The aim of the present study was to develop the Thirst Distress Scale for patients with Heart Failure (TDS-HF) and to evaluate the psychometric properties of the scale. The TDS-HF was developed to measure thirst distress in patients with heart failure. Face and content validity was confirmed using expert panels including patients and healthcare professionals. Data on the TDS-HF were collected from patients with heart failure at outpatient heart failure clinics and hospitals in Sweden, the Netherlands and Japan. Psychometric properties were evaluated using data from 256 heart failure patients (age 72±11 years). Concurrent validity of the scale was assessed using a thirst intensity visual analogue scale. Patients did not have any difficulties answering the questions, and the time taken to answer the questions was about five minutes. Factor analysis of the scale showed one factor. After psychometric testing, one item was deleted. For the eight-item TDS-HF, a single factor explained 61% of the variance and Cronbach's alpha was 0.90. The eight-item TDS-HF was significantly associated with the thirst intensity score (r=0.55, p<0.001). Regarding test-retest reliability, the intraclass correlation coefficient was 0.88, and the weighted kappa values ranged from 0.29 to 0.60. The eight-item TDS-HF is valid and reliable for measuring thirst distress in patients with heart failure.

  2. Digital Fly-By-Wire Flight Control Validation Experience

    NASA Technical Reports Server (NTRS)

    Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.

    1978-01-01

    The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.

  3. Simulations of carbon fiber composite delamination tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, G

    2007-10-25

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.
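
    For readers unfamiliar with decohesive (cohesive-zone) elements, the sketch below shows a common bilinear traction-separation law of the kind such formulations use: linear elastic response up to a peak traction, then linear softening to complete decohesion. The stiffness, strength, and critical separation values are placeholders and are not taken from the LS-DYNA model or the BMS 8-212 material.

```python
# Hedged sketch of a bilinear traction-separation (decohesive) law.
# Stiffness, peak traction, and final separation are placeholder values.

def bilinear_cohesive_traction(separation, k=1.0e5, t_peak=60.0, delta_f=0.01):
    """Return traction (MPa) for a given opening separation (mm)."""
    delta_0 = t_peak / k                 # separation at damage initiation
    if separation <= delta_0:
        return k * separation            # undamaged, linear elastic branch
    if separation >= delta_f:
        return 0.0                       # fully decohered, no load transfer
    # linear softening branch between delta_0 and delta_f
    return t_peak * (delta_f - separation) / (delta_f - delta_0)

for d in (0.0002, 0.0006, 0.004, 0.012):
    print(d, round(bilinear_cohesive_traction(d), 2))
```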

  4. Cultural Competency: How Is It Measured? Does It Make a Difference?

    ERIC Educational Resources Information Center

    Geron, Scott Miyake

    2002-01-01

    Shortcomings in the measurement of cultural competence of health care and social service providers include the following: (1) failure to define individual and organizational cultural competence; (2) failure to include client/patient perspectives in design; and (3) failure to test reliability, validity, and psychometric properties of instruments.…

  5. The Math Essential Skills Screener--Upper Elementary Version (MESS-U): Studies of Reliability and Validity

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Biddison, Amanda R.

    2006-01-01

    The Math Essential Skills Screener--Upper Elementary Version (MESS-U) is part of a series of screening tests designed to help identify students ages 9-11 who are at risk for mathematics failure. Internal consistency, test-retest reliability, item analysis, decision efficiency, convergent validity and factorial validity of the MESS-U were studied…

  6. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
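
    As a sketch of the first theory named above, the Interactive Convergence Algorithm resynchronizes by having each clock read every other clock's relative offset, discard (zero out) readings beyond a tolerance, and apply the average as its correction. The version below is a simplified single-round illustration with invented readings and tolerance; it omits the timing, reading-error, and fault-count bookkeeping of the full algorithm.

```python
# Hedged sketch of one resynchronization round of an interactive-convergence
# style algorithm: readings beyond +/-delta (suspected faulty clocks) are
# replaced by zero before averaging. Readings and delta are invented.

def interactive_convergence_round(own_readings, delta):
    """own_readings[i] = perceived offset of clock i relative to this clock
    (the entry for this clock itself is 0.0)."""
    trimmed = [r if abs(r) <= delta else 0.0 for r in own_readings]
    return sum(trimmed) / len(trimmed)   # correction to apply to own clock

readings = [0.0, 1.5, -2.0, 40.0]  # clock 3 looks faulty (failed or malicious)
print(interactive_convergence_round(readings, delta=5.0))  # -> -0.125
```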

  7. Impact and Penetration of Thin Aluminum 2024 Flat Panels at Oblique Angles of Incidence

    NASA Technical Reports Server (NTRS)

    Ruggeri, Charles R.; Revilock, Duane M.; Pereira, J. Michael; Emmerling, William; Queitzsch, Gilbert K., Jr.

    2015-01-01

    The U.S. Federal Aviation Administration (FAA) and the National Aeronautics and Space Administration (NASA) are actively involved in improving the predictive capabilities of transient finite element computational methods for application to safety issues involving unintended impacts on aircraft and aircraft engine structures. One aspect of this work involves the development of an improved deformation and failure model for metallic materials, known as the Tabulated Johnson-Cook model, or MAT224, which has been implemented in the LS-DYNA commercial transient finite element analysis code (LSTC Corp., Livermore, CA) (Ref. 1). In this model the yield stress is a function of strain, strain rate and temperature and the plastic failure strain is a function of the state of stress, temperature and strain rate. The failure criterion is based on the accumulation of plastic strain in an element. The model also incorporates a regularization scheme to account for the dependency of plastic failure strain on mesh size. For a given material the model requires a significant amount of testing to determine the yield stress and failure strain as a function of the three-dimensional state of stress, strain rate and temperature. In addition, experiments are required to validate the model. Currently the model has been developed for Aluminum 2024 and validated against a series of ballistic impact tests on flat plates of various thicknesses (Refs. 1 to 3). Full development of the model for Titanium 6Al-4V is being completed, and mechanical testing for Inconel 718 has begun. The validation testing for the models involves ballistic impact tests using cylindrical projectiles impacting flat plates at a normal incidence (Ref. 2). By varying the thickness of the plates, different stress states and resulting failure modes are induced, providing a range of conditions over which the model can be validated. The objective of the study reported here was to provide experimental data to evaluate the model under more extreme conditions, using a projectile with a more complex shape and sharp contacts, impacting flat panels at oblique angles of incidence.
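
    The failure bookkeeping described above (plastic failure strain as a function of the state of stress, strain rate, and temperature, with element deletion once accumulated damage reaches 1) can be sketched as below. MAT224 tabulates the failure strain; the classic analytic Johnson-Cook form and all constants used here are placeholders for illustration, not the Aluminum 2024 calibration.

```python
# Hedged sketch of plastic-strain damage accumulation against a stress-state-,
# rate-, and temperature-dependent failure strain. The analytic Johnson-Cook
# form and all constants are placeholders; MAT224 uses tabulated data instead.
import math

def jc_failure_strain(triax, strain_rate, homologous_T,
                      d1=0.1, d2=0.3, d3=-1.5, d4=0.01, d5=1.1, eps0=1.0):
    """Classic analytic Johnson-Cook failure strain (placeholder constants)."""
    return ((d1 + d2 * math.exp(d3 * triax))
            * (1.0 + d4 * math.log(max(strain_rate / eps0, 1e-6)))
            * (1.0 + d5 * homologous_T))

# Synthetic loading history: (plastic strain increment, triaxiality,
# strain rate [1/s], homologous temperature) per time step.
history = [(0.02, 0.33, 500.0, 0.1)] * 40

damage = 0.0
for d_eps, triax, rate, temp in history:
    damage += d_eps / jc_failure_strain(triax, rate, temp)
    if damage >= 1.0:
        print("element fails at accumulated damage", round(damage, 3))
        break
```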

  8. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants’ mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust’s incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust’s incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA’s methodology for scoring failures, there were discrepancies between the teams’ estimates and similar incidents reported on the trust’s incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA’s validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues. PMID:22682433

  9. The Author’s Guide To Writing 412th Test Wing Technical Reports

    DTIC Science & Technology

    2014-12-01

    control CAD computer aided design cc cubic centimeters C.O. carry-over c/o checkout USAF United States Air Force C1 rolling moment coefficient...cooling air. Mission Impact: Results in maintenance inability to reliably duplicate and isolate valid aircraft failures, and degrades reliability of system

  10. A new casemix adjustment index for hospital mortality among patients with congestive heart failure.

    PubMed

    Polanczyk, C A; Rohde, L E; Philbin, E A; Di Salvo, T G

    1998-10-01

    Comparative analysis of hospital outcomes requires reliable adjustment for casemix. Although congestive heart failure is one of the most common indications for hospitalization, congestive heart failure casemix adjustment has not been widely studied. The purposes of this study were (1) to describe and validate a new congestive heart failure-specific casemix adjustment index to predict in-hospital mortality and (2) to compare its performance to the Charlson comorbidity index. Data from all 4,608 admissions to the Massachusetts General Hospital from January 1990 to July 1996 with a principal ICD-9-CM discharge diagnosis of congestive heart failure were evaluated. Massachusetts General Hospital patients were randomly divided into a derivation and a validation set. By logistic regression, odds ratios for in-hospital death were computed and weights were assigned to construct a new predictive index in the derivation set. The performance of the index was tested in an internal Massachusetts General Hospital validation set and in a non-Massachusetts General Hospital external validation set incorporating data from all 1995 New York state hospital discharges with a primary discharge diagnosis of congestive heart failure. Overall in-hospital mortality was 6.4%. Based on the new index, patients were assigned to six categories with incrementally increasing hospital mortality rates ranging from 0.5% to 31%. By logistic regression, "c" statistics of the congestive heart failure-specific index (0.83 and 0.78 in the derivation and validation sets) were significantly superior to those of the Charlson index (0.66). Similar incrementally increasing hospital mortality rates were observed in the New York database with the congestive heart failure-specific index ("c" statistic 0.75). In an administrative database, this congestive heart failure-specific index may be a more adequate casemix adjustment tool to predict hospital mortality in patients hospitalized for congestive heart failure.
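
    The "c" statistic used above to compare the indices is the probability that a randomly chosen patient who died in hospital received a higher predicted risk (or index category) than a randomly chosen survivor, with ties counted as one half. The sketch below computes it by direct pairwise comparison on invented index categories and outcomes.

```python
# Hedged sketch of the "c" statistic (concordance): pairwise comparison of
# scores between in-hospital deaths and survivors. Data are invented.

def c_statistic(scores, died):
    """Fraction of death/survivor pairs in which the death scored higher."""
    pairs = [(s_d, s_a)
             for s_d, d in zip(scores, died) if d
             for s_a, a in zip(scores, died) if not a]
    concordant = sum(1.0 if s_d > s_a else 0.5 if s_d == s_a else 0.0
                     for s_d, s_a in pairs)
    return concordant / len(pairs)

# Hypothetical index categories (0-5) and in-hospital death indicators.
scores = [0, 1, 1, 2, 2, 3, 3, 4, 4, 5]
died   = [0, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(round(c_statistic(scores, died), 2))
```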

  11. Assessing the Value-Added by the Environmental Testing Process with the Aide of Physics/Engineering of Failure Evaluations

    NASA Technical Reports Server (NTRS)

    Cornford, S.; Gibbel, M.

    1997-01-01

    NASA's Code QT Test Effectiveness Program is funding a series of applied research activities focused on utilizing the principles of physics and engineering of failure and those of engineering economics to assess and improve the value-added by the various validation and verification activities to organizations.

  12. Fracture simulation of restored teeth using a continuum damage mechanics failure model.

    PubMed

    Li, Haiyan; Li, Jianying; Zou, Zhenmin; Fok, Alex Siu-Lun

    2011-07-01

    The aim of this paper is to validate the use of a finite-element (FE) based continuum damage mechanics (CDM) failure model to simulate the debonding and fracture of restored teeth. Fracture testing of plastic model teeth, with or without a standard Class-II MOD (mesial-occlusal-distal) restoration, was carried out to investigate their fracture behavior. In parallel, 2D FE models of the teeth are constructed and analyzed using the commercial FE software ABAQUS. A CDM failure model, implemented into ABAQUS via the user element subroutine (UEL), is used to simulate the debonding and/or final fracture of the model teeth under a compressive load. The material parameters needed for the CDM model to simulate fracture are obtained through separate mechanical tests. The predicted results are then compared with the experimental data of the fracture tests to validate the failure model. The failure processes of the intact and restored model teeth are successfully reproduced by the simulation. However, the fracture parameters obtained from testing small specimens need to be adjusted to account for the size effect. The results indicate that the CDM model is a viable model for the prediction of debonding and fracture in dental restorations. Copyright © 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  13. Embedding Patient Education in Mobile Platform for Patients With Heart Failure: Theory-Based Development and Beta Testing.

    PubMed

    Athilingam, Ponrathi; Osorio, Richard E; Kaplan, Howard; Oliver, Drew; O'neachtain, Tara; Rogal, Philip J

    2016-02-01

    Health education is an important component of multidisciplinary disease management of heart failure. The educational information given at the time of discharge after hospitalization or at initial diagnosis is often overwhelming to patients and is often lost or never consulted again. Therefore, the aim of this developmental project was to embed interactive heart failure education in a mobile platform. A patient-centered approach, grounded on several learning theories including Mayer's Cognitive Theory of Multimedia Learning, Sweller's Cognitive Load, Instructional Design Approach, and Problem-Based Learning, was utilized to develop and test the mobile app. Ten heart failure patients, who attended an outpatient heart failure clinic, completed beta testing. A validated self-confidence questionnaire was utilized to assess patients' confidence in using the mobile app. All participants (100%) reported moderate to extreme confidence in using the app, 95% were very likely to use the app, 100% reported the design was easy to navigate, and content on heart failure was appropriate. Having the information accessible on their mobile phone was reported as a positive, like a health coach by all patients. Clinicians and nurses validated the content. Thus, embedding health education in a mobile app is proposed in promoting persistent engagement to improve health outcomes.

  14. Psychometrics of the PHQ-9 as a measure of depressive symptoms in patients with heart failure.

    PubMed

    Hammash, Muna H; Hall, Lynne A; Lennie, Terry A; Heo, Seongkum; Chung, Misook L; Lee, Kyoung Suk; Moser, Debra K

    2013-10-01

    Depression in patients with heart failure commonly goes undiagnosed and untreated. The Patient Health Questionnaire-9 (PHQ-9) is a simple, valid measure of depressive symptoms that may facilitate clinical assessment. It has not been validated in patients with heart failure. To test the reliability, and concurrent and construct validity of the PHQ-9 in patients with heart failure. A total of 322 heart failure patients (32% female, 61 ± 12 years, 56% New York Heart Association class III/IV) completed the PHQ-9, the Beck Depression Inventory-II (BDI-II), and the Control Attitudes Scale (CAS). A Cronbach's alpha of .83 supported the internal consistency reliability of the PHQ-9 in this sample. Inter-item correlations (range .22-.66) and item-total correlations (except item 9) supported the homogeneity of the PHQ-9. A Spearman's rho of .80 (p < .001) between the PHQ-9 and the BDI-II supported concurrent validity, as did the agreement between the PHQ-9 and the BDI-II (Kappa = 0.64, p < .001). At a cut-off score of 10, the PHQ-9 was 70% sensitive and 92% specific in identifying depressive symptoms, using the BDI-II scores as the criterion for comparison. Differences in PHQ-9 scores by level of perceived control measured by the CAS (t(318) = -5.05, p < .001) supported construct validity. The PHQ-9 is a reliable, valid measure of depressive symptoms in patients with heart failure.
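
    The internal-consistency figure reported above is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below computes it for an invented 9-item response matrix; the data are placeholders, not the study's.

```python
# Hedged sketch of Cronbach's alpha from an item-by-respondent score matrix.
# The 9-item, six-respondent response matrix below is invented.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items[i] = list of responses to item i (same respondents, same order)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical PHQ-9-style responses (items scored 0-3) for six respondents.
items = [
    [0, 1, 2, 3, 1, 2], [0, 1, 3, 3, 1, 2], [1, 1, 2, 2, 0, 3],
    [0, 0, 2, 3, 1, 2], [0, 1, 1, 3, 2, 2], [1, 2, 2, 3, 1, 1],
    [0, 1, 2, 2, 1, 2], [0, 0, 1, 3, 1, 3], [0, 1, 2, 3, 0, 2],
]
print(round(cronbach_alpha(items), 2))
```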

  15. Verification and Validation of Adaptive and Intelligent Systems with Flight Test Results

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Larson, Richard R.

    2009-01-01

    F-15 IFCS project goals are: a) Demonstrate control approaches that can efficiently optimize aircraft performance in both normal and failure conditions ([A] and [B] failures); b) Advance neural-network-based flight control technology for new aerospace system designs with a pilot in the loop. Gen II objectives include: a) Implement and fly a direct adaptive neural-network-based flight controller; b) Demonstrate the ability of the system to adapt to simulated system failures: 1) suppress transients associated with the failure; 2) re-establish sufficient control and handling of the vehicle for safe recovery; c) Provide flight experience for development of verification and validation processes for flight-critical neural network software.

  16. Effectiveness of symptom validity measures in identifying cognitive and behavioral symptom exaggeration in adult attention deficit hyperactivity disorder.

    PubMed

    Marshall, Paul; Schroeder, Ryan; O'Brien, Jeffrey; Fischer, Rebecca; Ries, Adam; Blesi, Brita; Barker, Jessica

    2010-10-01

    This study examines the effectiveness of symptom validity measures to detect suspect effort in cognitive testing and invalid completion of ADHD behavior rating scales in 268 adults referred for ADHD assessment. Patients were diagnosed with ADHD based on cognitive testing, behavior rating scales, and clinical interview. Suspect effort was diagnosed by at least two of the following: failure on embedded and free-standing SVT measures, a score > 2 SD below the ADD population average on tests, failure on an ADHD behavior rating scale validity scale, or a major discrepancy between reported and observed ADHD behaviors. A total of 22% of patients engaged in symptom exaggeration. The Word Memory Test immediate recall and consistency scores (both 64%), TOVA omission errors (63%) and reaction time variability (54%), CAT-A infrequency scale (58%), and b Test (47%) had good sensitivity as well as at least 90% specificity. Such measures should be used to help avoid false positive diagnoses of ADHD.

  17. Validation Study of Unnotched Charpy and Taylor-Anvil Impact Experiments using Kayenta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamojjala, Krishna; Lacy, Jeffrey; Chu, Henry S.

    2015-03-01

    Validation of a single computational model with multiple available strain-to-failure fracture theories is presented through experimental tests and numerical simulations of the standardized unnotched Charpy and Taylor-anvil impact tests, both run using the same material model (Kayenta). Unnotched Charpy tests are performed on rolled homogeneous armor steel. The fracture patterns using Kayenta’s various failure options that include aleatory uncertainty and scale effects are compared against the experiments. Other quantities of interest include the average value of the absorbed energy and bend angle of the specimen. Taylor-anvil impact tests are performed on Ti6Al4V titanium alloy. The impact speeds of the specimen are 321 m/s and 393 m/s. The goal of the numerical work is to reproduce the damage patterns observed in the laboratory. For the numerical study, the Johnson-Cook failure model is used as the ductile fracture criterion, and aleatory uncertainty is applied to rate-dependence parameters to explore its effect on the fracture patterns.

  18. Construct validity of the Heart Failure Screening Tool (Heart-FaST) to identify heart failure patients at risk of poor self-care: Rasch analysis.

    PubMed

    Reynolds, Nicholas A; Ski, Chantal F; McEvedy, Samantha M; Thompson, David R; Cameron, Jan

    2018-02-14

    The aim of this study was to psychometrically evaluate the Heart Failure Screening Tool (Heart-FaST) via: (1) examination of internal construct validity; (2) testing of scale function in accordance with design; and (3) recommendation for changes, if items are not well adjusted, to improve its psychometric credentials. Self-care is vital to the management of heart failure. The Heart-FaST may provide a prospective assessment of risk, regarding the likelihood that patients with heart failure will engage in self-care. Psychometric validation of the Heart-FaST was conducted using Rasch analysis. The Heart-FaST was administered to 135 patients (median age = 68, IQR = 59-78 years; 105 males) enrolled in a multidisciplinary heart failure management program. The Heart-FaST is a nurse-administered tool for screening patients with heart failure at risk of poor self-care. A Rasch analysis of responses was conducted, testing the data against Rasch model expectations, including whether items serve as unbiased, non-redundant indicators of risk, whether they measure a single construct, and whether the rating scales operate as intended. The results showed that the data met Rasch model expectations after rescoring or deleting items due to poor discrimination, disordered thresholds, differential item functioning, or response dependence. There was no evidence of multidimensionality, which supports the use of total scores from the Heart-FaST as indicators of risk. Aggregate scores from this modified screening tool rank heart failure patients according to their "risk of poor self-care", demonstrating that the Heart-FaST items constitute a meaningful scale to identify heart failure patients at risk of poor engagement in heart failure self-care. © 2018 John Wiley & Sons Ltd.
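
    For readers unfamiliar with the Rasch framework referenced above, the sketch below shows the dichotomous Rasch model that underlies such analyses: the probability that a respondent at location theta endorses an item of difficulty b. The Heart-FaST analysis used a polytomous extension and dedicated software, so this is only a conceptual illustration with hypothetical values.

        import math

        def rasch_probability(theta: float, b: float) -> float:
            """Dichotomous Rasch model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        # A respondent located above an item's difficulty endorses it with probability > 0.5
        print(rasch_probability(theta=1.0, b=0.2))  # ~0.69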

  19. In-Flight Validation of a Pilot Rating Scale for Evaluating Failure Transients in Electronic Flight Control Systems

    NASA Technical Reports Server (NTRS)

    Kalinowski, Kevin F.; Tucker, George E.; Moralez, Ernesto, III

    2006-01-01

    Engineering development and qualification of a Research Flight Control System (RFCS) for the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A has motivated the development of a pilot rating scale for evaluating failure transients in fly-by-wire flight control systems. The RASCAL RFCS includes a highly-reliable, dual-channel Servo Control Unit (SCU) to command and monitor the performance of the fly-by-wire actuators and protect against the effects of erroneous commands from the flexible, but single-thread Flight Control Computer. During the design phase of the RFCS, two piloted simulations were conducted on the Ames Research Center Vertical Motion Simulator (VMS) to help define the required performance characteristics of the safety monitoring algorithms in the SCU. Simulated failures, including hard-over and slow-over commands, were injected into the command path, and the aircraft response and safety monitor performance were evaluated. A subjective Failure/Recovery Rating (F/RR) scale was developed as a means of quantifying the effects of the injected failures on the aircraft state and the degree of pilot effort required to safely recover the aircraft. A brief evaluation of the rating scale was also conducted on the Army/NASA CH-47B variable stability helicopter to confirm that the rating scale was likely to be equally applicable to in-flight evaluations. Following the initial research flight qualification of the RFCS in 2002, a flight test effort was begun to validate the performance of the safety monitors and to validate their design for the safe conduct of research flight testing. Simulated failures were injected into the SCU, and the F/RR scale was applied to assess the results. The results validate the performance of the monitors, and indicate that the Failure/Recovery Rating scale is a very useful tool for evaluating failure transients in fly-by-wire flight control systems.

  20. Design and implementation of a novel mechanical testing system for cellular solids.

    PubMed

    Nazarian, Ara; Stauber, Martin; Müller, Ralph

    2005-05-01

    Cellular solids constitute an important class of engineering materials encompassing both man-made and natural constructs. Materials such as wood, cork, coral, and cancellous bone are examples of cellular solids. The structural analysis of cellular solid failure has been limited to 2D sections to illustrate global fracture patterns. Due to the inherent destructiveness of 2D methods, dynamic assessment of fracture progression has not been possible. Image-guided failure assessment (IGFA), a noninvasive technique to analyze 3D progressive bone failure, has been developed utilizing stepwise microcompression in combination with time-lapsed microcomputed tomographic imaging (microCT). This method allows for the assessment of fracture progression in the plastic region, where much of the structural deformation/energy absorption is encountered in a cellular solid. Therefore, the goal of this project was to design and fabricate a novel micromechanical testing system to validate the effectiveness of the stepwise IGFA technique compared to classical continuous mechanical testing, using a variety of engineered and natural cellular solids. In our analysis, we found stepwise compression to be a valid approach for IGFA with high precision and accuracy comparable to classical continuous testing. Therefore, this approach complements the conventional mechanical testing methods by providing visual insight into the failure propagation mechanisms of cellular solids. (c) 2005 Wiley Periodicals, Inc.

  1. Psychometric properties of the Symptom Status Questionnaire-Heart Failure.

    PubMed

    Heo, Seongkum; Moser, Debra K; Pressler, Susan J; Dunbar, Sandra B; Mudd-Martin, Gia; Lennie, Terry A

    2015-01-01

    Many patients with heart failure (HF) experience physical symptoms, poor health-related quality of life (HRQOL), and high rates of hospitalization. Physical symptoms are associated with HRQOL and are major antecedents of hospitalization. However, reliable and valid physical symptom instruments have not been established. Therefore, this study examined the psychometric properties of the Symptom Status Questionnaire-Heart Failure (SSQ-HF) in patients with HF. Data on symptoms using the SSQ-HF were collected from 249 patients (mean age 61 years, 67% male, 45% in New York Heart Association functional class III/IV). Internal consistency reliability was assessed using Cronbach's α. Item homogeneity was assessed using item-total and interitem correlations. Construct validity was assessed using factor analysis and by testing hypotheses on known relationships. Data on depressive symptoms (Beck Depression Inventory II), HRQOL (Minnesota Living With Heart Failure Questionnaire), and event-free survival were collected to test known relationships. Internal consistency reliability was supported: Cronbach's α was .80. Item-total correlation coefficients and interitem correlation coefficients were acceptable. Factor analysis supported the construct validity of the instrument. More severe symptoms were associated with more depressive symptoms, poorer HRQOL, and higher risk of hospitalization, emergency department visits, or death, after controlling for covariates. The findings of this study support the reliability and validity of the SSQ-HF. Clinicians and researchers can use this instrument to assess physical symptoms in patients with HF.
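
    The internal consistency statistic reported here, Cronbach's α, can be computed directly from a respondents-by-items matrix using the standard definition (k items, the sum of item variances, and the variance of the total score). The sketch below is a generic illustration on hypothetical data, not the study's analysis code.

        import numpy as np

        def cronbach_alpha(items) -> float:
            """Cronbach's alpha: k / (k - 1) * (1 - sum(item variances) / var(total score))."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
            total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        # Hypothetical 5 respondents x 4 items
        responses = [[1, 2, 2, 1], [3, 3, 4, 3], [2, 2, 3, 2], [4, 4, 4, 3], [1, 1, 2, 1]]
        print(round(cronbach_alpha(responses), 2))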

  2. Investigation of Spiral Bevel Gear Condition Indicator Validation Via AC-29-2C Using Damage Progression Tests

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.

    2014-01-01

    This report documents the results of spiral bevel gear rig tests performed under a NASA Space Act Agreement with the Federal Aviation Administration (FAA) to support validation and demonstration of rotorcraft Health and Usage Monitoring Systems (HUMS) for maintenance credits via FAA Advisory Circular (AC) 29-2C, Section MG-15, Airworthiness Approval of Rotorcraft (HUMS) (Ref. 1). The overarching goal of this work was to determine a method to validate condition indicators in the lab that better represent their response to faults in the field. Using existing in-service helicopter HUMS flight data from faulted spiral bevel gears as a case study to better understand the differences between the two systems, and taking advantage of the availability of the NASA Glenn Spiral Bevel Gear Fatigue Rig, a plan was put in place to design, fabricate, and test comparable gear sets with comparable failure modes within the constraints of the test rig. The research objectives of the rig tests were to evaluate the capability of detecting gear surface pitting fatigue and other generated failure modes on spiral bevel gear teeth using gear condition indicators currently used in fielded HUMS. Nineteen final design gear sets were tested. Tables were generated for each test, summarizing the failure modes observed on the gear teeth during each inspection interval, color coded by damage mode based on inspection photos. Gear condition indicators (CI) Figure of Merit 4 (FM4), Root Mean Square (RMS), +/- 1 Sideband Index (SI1) and +/- 3 Sideband Index (SI3) were plotted along with rig operational parameters. Statistical tables of the means and standard deviations were calculated within inspection intervals for each CI. As testing progressed, it became clear that certain condition indicators were more sensitive to a specific component and failure mode. These tests were clustered together for further analysis. Maintenance actions during testing were also documented. Correlation coefficients were calculated between each CI, component, damage state, and torque. Results showed that test rig and gear design, type of fault, and data acquisition can affect CI performance. FM4, SI1, and SI3 can be used to detect macro pitting on two or more gear or pinion teeth as long as it is detected before the damage progresses to other components or transitions to another failure mode. The sensitivity of RMS to system and operational conditions limits its reliability for systems that are not maintained at steady state. Failure modes that occurred due to scuffing or fretting were challenging to detect with current gear diagnostic tools, since the damage is distributed across all the gear and pinion teeth, smearing the impacting signatures typically used to differentiate between healthy and damaged tooth contact. This is one of three final reports published on the results of this project. In the second report, damage modes experienced in the field will be mapped to the failure modes created in the test rig. The helicopter CI data will then be re-processed with the same analysis techniques applied to spiral bevel rig test data. In the third report, results from the rig and helicopter data analysis will be correlated. Observations, findings, and lessons learned using sub-scale rig failure progression tests to validate helicopter gear condition indicators will be presented.
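
    Two of the condition indicators discussed above have compact textbook definitions that are easy to sketch: RMS of the vibration signal and FM4, a normalized fourth moment (kurtosis) of the difference signal left after the regular gear-mesh components are removed. The difference-signal construction below is deliberately simplified (the spike-plus-noise input is hypothetical); real HUMS implementations filter in the order or frequency domain.

        import numpy as np

        def rms(signal) -> float:
            """Root mean square of a vibration signal."""
            x = np.asarray(signal, dtype=float)
            return float(np.sqrt(np.mean(x ** 2)))

        def fm4(difference_signal) -> float:
            """FM4 = N * sum((d - mean)^4) / (sum((d - mean)^2))^2.

            Values near 3 indicate a healthy, Gaussian-like difference signal;
            localized tooth damage produces impulses that drive FM4 upward.
            """
            d = np.asarray(difference_signal, dtype=float)
            centered = d - d.mean()
            return float(d.size * np.sum(centered ** 4) / np.sum(centered ** 2) ** 2)

        # Hypothetical difference signal: broadband noise plus one impact-like spike
        rng = np.random.default_rng(0)
        d = rng.normal(0.0, 1.0, 1024)
        d[512] += 8.0
        print(rms(d), fm4(d))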

  3. Scalable File Systems for High Performance Computing Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.

  4. Development of wheelchair caster testing equipment and preliminary testing of caster models

    PubMed Central

    Mhatre, Anand; Ott, Joseph

    2017-01-01

    Background Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives To develop and demonstrate the feasibility of a caster system test method. Method Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion The new caster test system has the capability to incorporate necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that results in failure modes common during wheelchair use in LREs. PMID:29062762

  5. A unified bond theory, probabilistic meso-scale modeling, and experimental validation of deformed steel rebar in normal strength concrete

    NASA Astrophysics Data System (ADS)

    Wu, Chenglin

    Bond between deformed rebar and concrete is affected by rebar deformation pattern, concrete properties, concrete confinement, and rebar-concrete interfacial properties. Two distinct groups of bond models were traditionally developed based on the dominant effects of concrete splitting and near-interface shear-off failures. Their accuracy highly depended upon the test data sets selected in analysis and calibration. In this study, a unified bond model is proposed and developed based on an analogy to the indentation problem around the rib front of deformed rebar. This mechanics-based model can take into account the combined effect of concrete splitting and interface shear-off failures, resulting in average bond strengths for all practical scenarios. To understand the fracture process associated with bond failure, a probabilistic meso-scale model of concrete is proposed and its sensitivity to interface and confinement strengths are investigated. Both the mechanical and finite element models are validated with the available test data sets and are superior to existing models in prediction of average bond strength (< 6% error) and crack spacing (< 6% error). The validated bond model is applied to derive various interrelations among concrete crushing, concrete splitting, interfacial behavior, and the rib spacing-to-height ratio of deformed rebar. It can accurately predict the transition of failure modes from concrete splitting to rebar pullout and predict the effect of rebar surface characteristics as the rib spacing-to-height ratio increases. Based on the unified theory, a global bond model is proposed and developed by introducing bond-slip laws, and validated with testing of concrete beams with spliced reinforcement, achieving a load capacity prediction error of less than 26%. The optimal rebar parameters and concrete cover in structural designs can be derived from this study.

  6. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.

  7. Prognostics of Power Electronics, Methods and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    Failure of electronic devices is a concern for future electric aircraft that will see an increase in electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components are of key importance. DC-DC power converters are power electronics systems employed typically as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and a physics-inspired degradation model, with focus on key components like electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.

  8. Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, J. Lynn

    2017-05-31

    The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.

  9. A systematic review of validated methods for identifying acute respiratory failure using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and to estimate their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
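
    A claims-based case-identification algorithm of the kind reviewed here reduces, at its simplest, to flagging encounters whose diagnosis codes match a target list. The sketch below illustrates that idea with the two ICD-9 codes cited in the abstract; the claim-record structure and field names are hypothetical.

        # ICD-9 codes cited in the review: 518.8 ("other diseases of lung")
        # and 518.81 ("acute respiratory failure").
        ARF_CODES = {"518.8", "518.81"}

        def flag_arf(claim: dict) -> bool:
            """Return True if any diagnosis code on the claim matches the ARF code list."""
            return any(code in ARF_CODES for code in claim.get("dx_codes", []))

        # Hypothetical claim records
        claims = [
            {"id": 1, "dx_codes": ["428.0", "518.81"]},
            {"id": 2, "dx_codes": ["401.9"]},
        ]
        print([c["id"] for c in claims if flag_arf(c)])  # -> [1]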

  10. Investigation on bending failure to characterize crashworthiness of 6xxx-series aluminium sheet alloys with bending-tension test procedure

    NASA Astrophysics Data System (ADS)

    Henn, Philipp; Liewald, Mathias; Sindel, Manfred

    2018-05-01

    As lightweight design as well as crash performance are crucial to future car body design, exact material characterisation is important to use materials at their full potential and reach maximum efficiency. Within the scope of this paper, the potential of a newly established bending-tension test procedure to characterise material crashworthiness is investigated. In this test setup for the determination of material failure, a buckling-bending test is coupled with a subsequent tensile test. If the prior bending load is critical, tensile strength and elongation in the subsequent tensile test are dramatically reduced. The new test procedure therefore offers an applicable definition of failure, as the incapacity for energy consumption in subsequent phases of the crash represents failure of a component. In addition, the correlation of the loading condition with actual crash scenarios (buckling and free bending) is improved compared to the three-point bending test. The potential of the newly established bending-tension test procedure to characterise material crashworthiness is investigated in this experimental study on two aluminium sheet alloys. Experimental results are validated against existing ductility characterisation from the edge compression test.

  11. Failure analysis on optical fiber on swarm flight payload

    NASA Astrophysics Data System (ADS)

    Bourcier, Frédéric; Fratter, Isabelle; Teyssandier, Florent; Barenes, Magali; Dhenin, Jérémie; Peyriguer, Marie; Petre-Bordenave, Romain

    2017-11-01

    Failure analysis on optical components is usually carried out on standard testing devices, such as optical/electronic microscopes and spectrometers, using isolated but representative samples. Such analyses are not contactless and not totally non-invasive, so they cannot be used easily on flight models. Furthermore, for late payload or satellite integration/validation phases with tight schedule constraints, it may be necessary to carry out a failure analysis directly on the flight hardware, in a cleanroom.

  12. Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans

    DTIC Science & Technology

    2015-09-30

    animals in human care will be performed to test and validate this approach. The cadaver trials will enable controlled testing to failure or with both...quantitative metrics and analysis tools to assess the impact of a tag on the animal. Here we will present: 1) the characterization of the mechanical...fine scale motion analysis for swimming animals. APPROACH: Our approach is divided into four subtasks: Task 1: Forces and failure modes

  13. Aircraft control surface failure detection and isolation using the OSGLR test. [orthogonal series generalized likelihood ratio

    NASA Technical Reports Server (NTRS)

    Bonnice, W. F.; Motyka, P.; Wagner, E.; Hall, S. R.

    1986-01-01

    The performance of the orthogonal series generalized likelihood ratio (OSGLR) test in detecting and isolating commercial aircraft control surface and actuator failures is evaluated. A modification to incorporate age-weighting which significantly reduces the sensitivity of the algorithm to modeling errors is presented. The steady-state implementation of the algorithm based on a single linear model valid for a cruise flight condition is tested using a nonlinear aircraft simulation. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection and isolation performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling on dynamic pressure and flap deflection is examined. Based on this testing, the OSGLR algorithm should be capable of detecting control surface failures that would affect the safe operation of a commercial aircraft. Isolation may be difficult if there are several surfaces which produce similar effects on the aircraft. Extending the algorithm over the entire operating envelope of a commercial aircraft appears feasible.

  14. Construction and Validation of a Questionnaire about Heart Failure Patients' Knowledge of Their Disease

    PubMed Central

    Bonin, Christiani Decker Batista; dos Santos, Rafaella Zulianello; Ghisi, Gabriela Lima de Melo; Vieira, Ariany Marques; Amboni, Ricardo; Benetti, Magnus

    2014-01-01

    Background The lack of tools to measure heart failure patients' knowledge about their syndrome when participating in rehabilitation programs demonstrates the need for specific recommendations regarding the amount or content of information required. Objectives To develop and validate a questionnaire to assess heart failure patients' knowledge about their syndrome when participating in cardiac rehabilitation programs. Methods The tool was developed based on the Coronary Artery Disease Education Questionnaire and applied to 96 patients with heart failure, with a mean age of 60.22 ± 11.6 years, 64% being men. Reproducibility was obtained via the intraclass correlation coefficient, using the test-retest method. Internal consistency was assessed by use of Cronbach's alpha, and construct validity, by use of exploratory factor analysis. Results The final version of the tool had 19 questions arranged in ten areas of importance for patient education. The proposed questionnaire had a clarity index of 8.94 ± 0.83. The intraclass correlation coefficient was 0.856, and Cronbach's alpha, 0.749. Factor analysis revealed five factors associated with the knowledge areas. Comparing the final scores with the characteristics of the population evidenced that low educational level and low income are significantly associated with low levels of knowledge. Conclusion The instrument has satisfactory clarity and validity indices, and can be used to assess the heart failure patients' knowledge about their syndrome when participating in cardiac rehabilitation programs. PMID:24652054

  15. Lifetime evaluation of large format CMOS mixed signal infrared devices

    NASA Astrophysics Data System (ADS)

    Linder, A.; Glines, Eddie

    2015-09-01

    New large-scale foundry processes continue to produce reliable products, and devices built on these processes use industry best practices to screen for failure mechanisms and validate their long lifetimes. Failure-in-Time (FIT) analysis, in conjunction with foundry qualification information, can be used to evaluate large-format device lifetimes; this analysis is a helpful tool when zero-failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications remain the industry-accepted methods.
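
    One conventional way to bound a failure rate when a life test ends with zero failures, in the spirit of the Failure-in-Time (FIT) analysis mentioned above, is the chi-square upper confidence bound on the rate for a given number of accumulated device-hours. The sketch below applies that standard formula; the device-hour count and confidence level are hypothetical, and no acceleration factor is applied.

        from scipy.stats import chi2

        def fit_upper_bound(device_hours: float, failures: int = 0, confidence: float = 0.60) -> float:
            """Upper confidence bound on the failure rate, in FIT (failures per 1e9 device-hours).

            lambda_upper = chi2.ppf(confidence, 2 * failures + 2) / (2 * device_hours)
            """
            lam = chi2.ppf(confidence, 2 * failures + 2) / (2.0 * device_hours)
            return lam * 1e9

        # Hypothetical: 77 devices run for 1000 hours each with zero failures
        print(round(fit_upper_bound(device_hours=77_000.0), 1))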

  16. Fatigue Failure of Space Shuttle Main Engine Turbine Blades

    NASA Technical Reports Server (NTRS)

    Swanson, Gregrory R.; Arakere, Nagaraj K.

    2000-01-01

    Experimental validation of finite element modeling of single crystal turbine blades is presented. Experimental results from uniaxial high cycle fatigue (HCF) test specimens and full scale Space Shuttle Main Engine test firings with the High Pressure Fuel Turbopump Alternate Turbopump (HPFTP/AT) provide the data used for the validation. The conclusions show the significant contribution of the crystal orientation within the blade on the resulting life of the component, that the analysis can predict this variation, and that experimental testing demonstrates it.

  17. Integrated Resilient Aircraft Control Project Full Scale Flight Validation

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.

    2009-01-01

    Objective: Provide validation of adaptive control law concepts through full scale flight evaluation. Technical Approach: a) Engage failure mode - destabilizing or frozen surface. b) Perform formation flight and air-to-air tracking tasks. Evaluate adaptive algorithm: a) Stability metrics. b) Model following metrics. Full scale flight testing provides an ability to validate different adaptive flight control approaches. Full scale flight testing adds credence to NASA's research efforts. A sustained research effort is required to remove the road blocks and provide adaptive control as a viable design solution for increased aircraft resilience.

  18. Examining the Potential for Gender Bias in the Prediction of Symptom Validity Test Failure by MMPI-2 Symptom Validity Scale Scores

    ERIC Educational Resources Information Center

    Lee, Tayla T. C.; Graham, John R.; Sellbom, Martin; Gervais, Roger O.

    2012-01-01

    Using a sample of individuals undergoing medico-legal evaluations (690 men, 519 women), the present study extended past research on potential gender biases for scores of the Symptom Validity (FBS) scale of the Minnesota Multiphasic Personality Inventory-2 by examining score- and item-level differences between men and women and determining the…

  19. Cross-Cultural Adaptation and Psychometric Testing of the Brazilian Version of the Self-Care of Heart Failure Index Version 6.2

    PubMed Central

    Ávila, Christiane Wahast; Riegel, Barbara; Pokorski, Simoni Chiarelli; Camey, Suzi; Silveira, Luana Claudia Jacoby; Rabelo-Silva, Eneida Rejane

    2013-01-01

    Objective. To adapt and evaluate the psychometric properties of the Brazilian version of the SCHFI v 6.2. Methods. With the approval of the original author, we conducted a complete cross-cultural adaptation of the instrument (translation, synthesis, back translation, synthesis of back translation, expert committee review, and pretesting). The adapted version was named Brazilian version of the self-care of heart failure index v 6.2. The psychometric properties assessed were face validity and content validity (by expert committee review), construct validity (convergent validity and confirmatory factor analysis), and reliability. Results. Face validity and content validity were indicative of semantic, idiomatic, experimental, and conceptual equivalence. Convergent validity was demonstrated by a significant though moderate correlation (r = −0.51) on comparison with equivalent question scores of the previously validated Brazilian European heart failure self-care behavior scale. Confirmatory factor analysis supported the original three-factor model as having the best fit, although similar results were obtained for inadequate fit indices. The reliability of the instrument, as expressed by Cronbach's alpha, was 0.40, 0.82, and 0.93 for the self-care maintenance, self-care management, and self-care confidence scales, respectively. Conclusion. The SCHFI v 6.2 was successfully adapted for use in Brazil. Nevertheless, further studies should be carried out to improve its psychometric properties. PMID:24163765

  20. Clinical decision making in response to performance validity test failure in a psychiatric setting.

    PubMed

    Marcopulos, Bernice A; Caillouet, Beth A; Bailey, Christopher M; Tussey, Chriscelyn; Kent, Julie-Ann; Frederick, Richard

    2014-01-01

    This study examined the clinical utility of a performance validity test (PVT) for screening consecutive referrals (N = 436) to a neuropsychology service at a state psychiatric hospital treating both civilly committed and forensic patients. We created a contingency table with Test of Memory Malingering (TOMM) pass/fail (355/81) and secondary gain present/absent (181/255) to examine pass rates associated with patient demographic, clinical and forensic status characteristics. Of the 81 failed PVTs, 48 had secondary gain defined as active criminal legal charges; 33 failed PVTs with no secondary gain. These individuals tended to be older, female, Caucasian, and civilly committed compared with the group with secondary gain who failed. From estimations of TOMM False Positive Rate and True Positive Rate we estimated base rates of neurocognitive malingering for our clinical population using the Test Validation Summary (TVS; Frederick & Bowden, 2009 ). Although PVT failure is clearly more common in a group with secondary gain (31%), there were a number of false positives (11%). Clinical ratings of patients without gain who failed suggested cognitive deficits, behavioral issues, and inattention. Low scores on PVTs in the absence of secondary gain provide useful information on test engagement and can inform clinical decisions about testing.
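
    The base-rate estimation step described above can be illustrated with the standard mixing relation: the failure rate observed in a group is a weighted average of the true positive rate (among patients feigning deficits) and the false positive rate (among genuine responders), so the base rate can be recovered algebraically. The sketch below shows that relation only; it is not the Test Validation Summary procedure itself, and the TPR/FPR values are hypothetical.

        def estimated_base_rate(observed_fail_rate: float, tpr: float, fpr: float) -> float:
            """Solve observed = BR * TPR + (1 - BR) * FPR for the base rate BR."""
            if tpr <= fpr:
                raise ValueError("TPR must exceed FPR for the estimate to be meaningful")
            br = (observed_fail_rate - fpr) / (tpr - fpr)
            return min(max(br, 0.0), 1.0)  # clamp to a valid proportion

        # Hypothetical: 31% observed TOMM failure in the secondary-gain group,
        # assuming TPR = 0.75 and FPR = 0.05
        print(round(estimated_base_rate(0.31, tpr=0.75, fpr=0.05), 2))  # ~0.37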

  1. Verification of the Multi-Axial, Temperature and Time Dependent (MATT) Failure Criterion

    NASA Technical Reports Server (NTRS)

    Richardson, David E.; Macon, David J.

    2005-01-01

    An extensive test and analytical effort has been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to characterize the failure behavior of two epoxy adhesives (TIGA 321 and EA946). As part of this effort, a general failure model, the "Multi-Axial, Temperature, and Time Dependent" or MATT failure criterion, was developed. In the initial development of this failure criterion, tests were conducted to provide validation of the theory under a wide range of test conditions. The purpose of this paper is to present additional verification of the MATT failure criterion under new loading conditions for the adhesives TIGA 321 and EA946. In many cases, the loading conditions involve an extrapolation from the conditions under which the material models were originally developed. Testing was conducted using three loading conditions: multi-axial tension, torsional shear, and non-uniform tension in a bondline condition. Tests were conducted at constant and cyclic loading rates ranging over four orders of magnitude. Tests were conducted under environmental conditions of primary interest to the RSRM program. The temperature range was not extreme, but the loading rates were extreme (varying by four orders of magnitude). It should be noted that the testing was conducted at temperatures below the glass transition temperature of the TIGA 321 adhesive; for EA946, however, the testing was conducted at temperatures that bracketed the glass transition temperature.

  2. Validity testing and neuropsychology practice in the VA healthcare system: results from a recent practitioner survey.

    PubMed

    Young, J Christopher; Roper, Brad L; Arentsen, Timothy J

    2016-05-01

    A survey of neuropsychologists in the Veterans Health Administration examined symptom/performance validity test (SPVT) practices and estimated base rates for patient response bias. Invitations were emailed to 387 psychologists employed within the Veterans Affairs (VA), identified as likely practicing neuropsychologists, resulting in 172 respondents (44.4% response rate). Practice areas varied, with 72% at least partially practicing in general neuropsychology clinics and 43% conducting VA disability exams. Mean estimated failure rates were 23.0% for clinical outpatient, 12.9% for inpatient, and 39.4% for disability exams. Failure rates were the highest for mTBI and PTSD referrals. Failure rates were positively correlated with the number of cases seen and frequency and number of SPVT use. Respondents disagreed regarding whether one (45%) or two (47%) failures are required to establish patient response bias, with those administering more measures employing the more stringent criterion. Frequency of the use of specific SPVTs is reported. Base rate estimates for SPVT failure in VA disability exams are comparable to those in other medicolegal settings. However, failure in routine clinical exams is much higher in the VA than in other settings, possibly reflecting the hybrid nature of the VA's role in both healthcare and disability determination. Generally speaking, VA neuropsychologists use SPVTs frequently and eschew pejorative terms to describe their failure. Practitioners who require only one SPVT failure to establish response bias may overclassify patients. Those who use few or no SPVTs may fail to identify response bias. Additional clinical and theoretical implications are discussed.

  3. Damage tolerance modeling and validation of a wireless sensory composite panel for a structural health monitoring system

    NASA Astrophysics Data System (ADS)

    Talagani, Mohamad R.; Abdi, Frank; Saravanos, Dimitris; Chrysohoidis, Nikos; Nikbin, Kamran; Ragalini, Rose; Rodov, Irena

    2013-05-01

    The paper proposes the diagnostic and prognostic modeling and test validation of a Wireless Integrated Strain Monitoring and Simulation System (WISMOS). The effort verifies a hardware and web-based software tool that is able to evaluate and optimize sensorized aerospace composite structures for the purpose of Structural Health Monitoring (SHM). The tool is an extension of an existing SHM system suite based on a diagnostic-prognostic system (DPS) methodology. The goal of the extended SHM-DPS is to apply multi-scale nonlinear physics-based progressive failure analyses to the "as-is" structural configuration to determine residual strength, remaining service life, and future inspection intervals and maintenance procedures. The DPS solution meets the JTI Green Regional Aircraft (GRA) goals towards low-weight, durable, and reliable commercial aircraft. It takes advantage of the methodologies developed within the European Clean Sky JTI project WISMOS, with the capability to transmit, store, and process strain data from a network of wireless sensors (e.g., strain gages, FBGA) and to utilize a DPS-based methodology, based on multi-scale progressive failure analysis (MS-PFA), to determine structural health and to advise with respect to condition-based inspection and maintenance. As part of the validation of the diagnostic-prognostic system, carbon/epoxy ASTM coupons were fabricated and tested to extract the mechanical properties. Subsequently, two composite stiffened panels were manufactured, instrumented, and tested under compressive loading: 1) an undamaged stiffened buckling panel; and 2) a damaged stiffened buckling panel including an initial diamond cut. Next, numerical finite element models of the two panels were developed and analyzed under test conditions using multi-scale progressive failure analysis (an extension of FEM) to evaluate the damage/fracture evolution process, as well as to identify the contributing failure modes. The comparisons between predictions and test results were within 10% accuracy.

  4. Failure analysis of energy storage spring in automobile composite brake chamber

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Wei, Qing; Hu, Xiaofeng

    2015-02-01

    This paper takes the energy storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of energy-storage-spring-induced parking brake failure was constructed using the fault tree analysis method. Next, a parking brake failure model for the energy storage spring was established by analyzing the working principle of the composite brake chamber. Finally, working load and push rod stroke data measured on a comprehensive valve test bed were used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.

  5. Fatigue life prediction of liquid rocket engine combustor with subscale test verification

    NASA Astrophysics Data System (ADS)

    Sung, In-Kyung

    Reusable rocket systems such as the Space Shuttle introduced a new era in propulsion system design for economic feasibility. Practical reusable systems require an order of magnitude increase in life. To achieve this, improved methods are needed to assess failure mechanisms and to predict the life cycles of rocket combustors. A general goal of the research was to demonstrate the use of a subscale rocket combustor prototype in a cost-effective test program. Life-limiting factors and metal behaviors under repeated loads were surveyed and reviewed. The life prediction theories are presented, with an emphasis on studies that used subscale test hardware for model validation. From this review, low cycle fatigue (LCF) and creep-fatigue interaction (ratcheting) were identified as the main life-limiting factors of the combustor. Several life prediction methods, such as conventional and advanced viscoplastic models, were used to predict life cycles due to low cycle thermal stress, transient effects, and creep rupture damage. Creep-fatigue interaction and cyclic hardening were also investigated. A prediction method based on 2D beam theory was modified using 3D plate deformation theory to provide an extended prediction method. For experimental validation, two small-scale annular plug nozzle thrusters were designed, built, and tested. The test article was composed of a water-cooled liner, plug annular nozzle, and 200 psia precombustor that used decomposed hydrogen peroxide as the oxidizer and JP-8 as the fuel. The first combustor was tested cyclically at the Advanced Propellants and Combustion Laboratory at Purdue University. Testing was stopped after 140 cycles due to an unpredicted failure mechanism: an increasing hot spot at the location where failure had been predicted. A second combustor was designed to avoid the previous failure; however, it was over-pressurized and deformed beyond repair during a cold-flow test. The test results are discussed and compared to the analytical and numerical predictions. A detailed comparison was not performed, however, due to the lack of test data resulting from the failure of the test article. Some theoretical and experimental aspects, such as the fin effect and rounded corners, were found to reduce the discrepancy between predictions and test results.

  6. A new kid on the block: The Memory Validity Profile (MVP) in children with neurological conditions.

    PubMed

    Brooks, Brian L; Fay-McClymont, Taryn B; MacAllister, William S; Vasserman, Marsha; Sherman, Elisabeth M S

    2018-06-06

    Determining the validity of obtained data is an inherent part of a neuropsychological assessment. The purpose of this study was to investigate the failure rate of the Memory Validity Profile (MVP) in a large clinical sample of children and adolescents with neurological diagnoses. Data were obtained from 261 consecutive patients (mean age = 12.0, SD = 3.9, range = 5-19) who were referred for a neuropsychological assessment in a tertiary care pediatric hospital and were administered the MVP. In this sample, 4.6% of youth failed the MVP. Mean administration time for the MVP was 7.4 min, although time to complete was not associated with failure rates. Failure rates remained relatively consistent at approximately 5% across age ranges, diagnoses, and psychomotor processing speed abilities. Having very low, below normal, or above normal intellectual abilities did not alter the failure rate on the MVP. However, those with intellectual disability (i.e., IQ < 70) had a higher failure rate of 12% on the MVP Total Score, but only 6% on the MVP Visual portion. Failure rates on the MVP were associated with lower scores on memory tests. This study provides support for using the MVP in children as young as 5 years with neurological diagnoses.

  7. Predictions of structural integrity of steam generator tubes under normal operating, accident, an severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  8. Cadaveric study validating in vitro monitoring techniques to measure the failure mechanism of glenoid implants against clinical CT.

    PubMed

    Junaid, Sarah; Gregory, Thomas; Fetherston, Shirley; Emery, Roger; Amis, Andrew A; Hansen, Ulrich

    2018-03-23

    Definite glenoid implant loosening is identifiable on radiographs; however, identifying early loosening still eludes clinicians. Methods to monitor glenoid loosening in vitro have not been validated against clinical imaging. This study investigates the correlation between in vitro measures and CT images. Ten cadaveric scapulae were implanted with a pegged glenoid implant and fatigue tested to failure. Each scapula was cyclically loaded superiorly and CT scanned every 20,000 cycles until failure to monitor progressive radiolucent lines. Superior and inferior rim displacements were also measured. A finite element (FE) model of one scapula was used to analyze the interfacial stresses at the implant/cement and cement/bone interfaces. All ten implants failed inferiorly at the implant-cement interface, two also failed at the cement-bone interface inferiorly, and three showed superior failure. Failure occurred at 80,966 ± 53,729 (mean ± SD) cycles. CT scans confirmed failure of the fixation and, in most cases, failure was observed in the scans either before or together with visual failure. Significant correlations were found between inferior rim displacement, vertical head displacement, and failure of the glenoid implant. The FE model showed peak tensile stresses inferiorly and high compressive stresses superiorly, corroborating the experimental findings. In vitro monitoring methods correlated with failure progression in clinical CT images, possibly indicating their capacity to detect loosening earlier for earlier clinical intervention if needed. Their use in detecting failure non-destructively for implant development and testing is also valuable. The study highlights failure at the implant-cement interface, and early signs of failure are identifiable in CT images. © 2018 The Authors. Journal of Orthopaedic Research® Published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society.

  9. Tests of the Construct Validity of Occupational Stress Measures with College Students: Failure to Support Discriminant Validity.

    ERIC Educational Resources Information Center

    Meier, Scott T.

    1991-01-01

    Examined correlations among stress, anxiety, and depression scales in 129 college students, as well as ability of measures of depression and anxiety to add to predictive power of occupational stress for recognition memory task and self-reported physical symptoms. Results indicated that stress, depression, and anxiety measures were moderately to…

  10. Validation of the Oudega diagnostic decision rule for diagnosing deep vein thrombosis in frail older out-of-hospital patients.

    PubMed

    Schouten, Henrike J; Koek, Huiberdina L; Oudega, Ruud; van Delden, Johannes J M; Moons, Karel G M; Geersing, Geert-Jan

    2015-02-01

    We aimed to validate the Oudega diagnostic decision rule, which was developed and validated among younger primary care patients, to rule out deep vein thrombosis (DVT) in frail older outpatients. In older patients (>60 years, either community dwelling or residing in nursing homes) with clinically suspected DVT, physicians recorded the score on the Oudega rule and the d-dimer test result. DVT was confirmed with a composite reference standard including ultrasonography examination and 3-month follow-up. The proportion of patients with a very low probability of DVT according to the Oudega rule (efficiency), and the proportion of patients with symptomatic venous thromboembolism during 3 months of follow-up within this 'very low risk' group (failure rate), were calculated. DVT occurred in 164 (47%) of the 348 study participants (mean age 81 years, 85% residing in nursing homes). The probability of DVT was very low in 69 patients (Oudega score ≤3 points plus a normal d-dimer test; efficiency 20%), of whom four had non-fatal DVT (failure rate 5.8%; 2.3-14%). With a simple revised version of the Oudega rule for older suspected patients, 43 patients had a low risk of DVT (12% of the total population), of whom only one had DVT (failure rate 2.3%; 0.4-12%). In older suspected patients, application of the original Oudega rule to exclude DVT resulted in a higher failure rate compared with previous studies. A revised and simplified Oudega strategy specifically developed for elderly suspected patients resulted in a lower failure rate, though at the expense of lower efficiency. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
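
    The two performance measures used in this validation, efficiency and failure rate, are simple proportions: efficiency is the share of all suspected patients classified as very low risk (Oudega score ≤3 plus a normal d-dimer), and the failure rate is the share of that very-low-risk group who nonetheless have venous thromboembolism during follow-up. The sketch below computes both; the record layout and the toy cohort are hypothetical.

        def efficiency_and_failure_rate(patients, score_cutoff=3):
            """patients: list of dicts with keys 'oudega_score', 'ddimer_normal', 'vte'."""
            low_risk = [p for p in patients
                        if p["oudega_score"] <= score_cutoff and p["ddimer_normal"]]
            efficiency = len(low_risk) / len(patients)
            failure_rate = (sum(1 for p in low_risk if p["vte"]) / len(low_risk)) if low_risk else 0.0
            return efficiency, failure_rate

        # Hypothetical cohort of four suspected patients
        cohort = [
            {"oudega_score": 2, "ddimer_normal": True,  "vte": False},
            {"oudega_score": 1, "ddimer_normal": True,  "vte": True},
            {"oudega_score": 5, "ddimer_normal": False, "vte": True},
            {"oudega_score": 3, "ddimer_normal": False, "vte": False},
        ]
        print(efficiency_and_failure_rate(cohort))  # -> (0.5, 0.5)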

  11. Development and validation of a risk score for hospitalization for heart failure in patients with Type 2 diabetes mellitus.

    PubMed

    Yang, Xilin; Ma, Ronald C; So, Wing-Yee; Kong, Alice P; Ko, Gary T; Ho, Chun-Shun; Lam, Christopher W; Cockram, Clive S; Tong, Peter C; Chan, Juliana C

    2008-04-22

    There are no risk scores available for predicting heart failure in Type 2 diabetes mellitus (T2DM). Based on the Hong Kong Diabetes Registry, this study aimed to develop and validate a risk score for predicting heart failure requiring hospitalization in T2DM. A total of 7067 Hong Kong Chinese patients with diabetes, without a history of heart failure and without a history or clinical evidence of coronary heart disease at baseline, were analyzed. The subjects were followed up for a median period of 5.5 years. Data were randomly and evenly assigned to a training dataset and a test dataset. Sex-stratified Cox proportional hazards regression was used to obtain predictors of HF-related hospitalization in the training dataset. Calibration was assessed using the Hosmer-Lemeshow test and discrimination was examined using the area under the receiver operating characteristic curve (aROC) in the test dataset. During the follow-up, 274 patients developed heart failure events that needed hospitalization. Age, body mass index (BMI), spot urinary albumin to creatinine ratio (ACR), HbA1c, and blood haemoglobin (Hb) at baseline, and coronary heart disease (CHD) during follow-up, were predictors of HF-related hospitalization in the training dataset. HF-related hospitalization risk score = 0.0709 × age (years) + 0.0627 × BMI (kg/m²) + 0.1363 × HbA1c (%) + 0.9915 × log10(1 + ACR) (mg/mmol) - 0.3606 × blood Hb (g/dL) + 0.8161 × CHD during follow-up (1 if yes, 0 if no). The 5-year probability of HF-related hospitalization = 1 - S0(5)^exp{0.9744 × (risk score - 2.3961)}, where S0(5) = 0.9888 for men and 0.9809 for women. The predicted and observed 5-year probabilities of HF-related hospitalization were similar (p > 0.20) and the adjusted aROC was 0.920 for 5 years of follow-up. The risk score had adequate performance. Further validation in other cohorts of patients with T2DM is needed before clinical use.
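
    The published score and survival expression translate directly into code. The sketch below implements the coefficients quoted in the abstract and reads "1 - S0(5)^exp{0.9744 × (risk score - 2.3961)}" as the usual Cox-model survival formula; the function interface and the example patient are illustrative, not the authors' reference implementation.

        import math

        def hf_risk_score(age_years, bmi, hba1c_pct, acr_mg_mmol, hb_g_dl, chd_followup):
            """Risk score for HF-related hospitalization in T2DM, using the abstract's coefficients."""
            return (0.0709 * age_years
                    + 0.0627 * bmi
                    + 0.1363 * hba1c_pct
                    + 0.9915 * math.log10(1.0 + acr_mg_mmol)
                    - 0.3606 * hb_g_dl
                    + 0.8161 * (1 if chd_followup else 0))

        def five_year_probability(score, male):
            """5-year probability of HF-related hospitalization: 1 - S0(5)**exp(0.9744 * (score - 2.3961))."""
            s0_5 = 0.9888 if male else 0.9809
            return 1.0 - s0_5 ** math.exp(0.9744 * (score - 2.3961))

        # Hypothetical patient: 65-year-old man, BMI 27, HbA1c 8%, ACR 3 mg/mmol, Hb 13 g/dL, no CHD
        score = hf_risk_score(65, 27, 8.0, 3.0, 13.0, False)
        print(round(score, 3), round(five_year_probability(score, male=True), 4))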

  12. Flight Test of an Adaptive Controller and Simulated Failure/Damage on the NASA NF-15B

    NASA Technical Reports Server (NTRS)

    Buschbacher, Mark; Maliska, Heather

    2006-01-01

    The method of flight-testing the Intelligent Flight Control System (IFCS) Second Generation (Gen-2) project on the NASA NF-15B is herein described. The Gen-2 project objective includes flight-testing a dynamic inversion controller augmented by a direct adaptive neural network to demonstrate performance improvements in the presence of simulated failure/damage. The Gen-2 objectives as implemented on the NASA NF-15B created challenges for software design, structural loading limitations, and flight test operations. Simulated failure/damage is introduced by modifying control surface commands, therefore requiring structural loads measurements. Flight-testing began with the validation of a structural loads model. Flight-testing of the Gen-2 controller continued, using test maneuvers designed in a sequenced approach. Success would clear the new controller with respect to dynamic response, simulated failure/damage, and with adaptation on and off. A handling qualities evaluation was conducted on the capability of the Gen-2 controller to restore aircraft response in the presence of a simulated failure/damage. Control room monitoring of loads sensors, flight dynamics, and controller adaptation, in addition to postflight data comparison to the simulation, ensured a safe methodology of buildup testing. Flight-testing continued without major incident to accomplish the project objectives, successfully uncovering strengths and weaknesses of the Gen-2 control approach in flight.

  13. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Test act system validation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.

  14. Failure analysis on false call probe pins of microprocessor test equipment

    NASA Astrophysics Data System (ADS)

    Tang, L. W.; Ong, N. R.; Mohamad, I. S. B.; Alcain, J. B.; Retnasamy, V.

    2017-09-01

    A study was conducted to investigate failure analysis of false-call probe pins in microprocessor test modules. The 'health condition' of a probe pin is determined by its resistance value. A test module, powered at 5 V from an Arduino UNO and using the four-wire ohm measurement method, was implemented in this study to measure the resistance of the probe pins of a microprocessor. Probe pins from a scrapped computer motherboard were used as the test samples. The functionality of the test module was validated against a pre-measurement experiment performed with VEE Pro software. The experimental work demonstrated that the implemented test module is capable of identifying a probe pin's 'health condition' based on the measured resistance value.
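
    The health-classification logic described above can be illustrated with a short Python sketch. The four-wire measurement yields a resistance from the sensed voltage drop and the forced current; the pass/fail threshold and the sample readings below are hypothetical, not values reported in the study.

        # Minimal sketch of pin "health" classification from a four-wire measurement:
        # resistance is computed from the sensed voltage drop and the forced current,
        # so lead and contact resistance of the force wires drop out. The 1-ohm pass
        # threshold and the sample readings are hypothetical.
        def pin_resistance(v_sense_volts, i_force_amps):
            return v_sense_volts / i_force_amps

        def pin_health(resistance_ohms, threshold_ohms=1.0):
            return "healthy" if resistance_ohms <= threshold_ohms else "degraded"

        readings = [(0.012, 0.050), (0.140, 0.050), (0.004, 0.050)]  # (V_sense, I_force)
        for v, i in readings:
            r = pin_resistance(v, i)
            print(f"R = {r:.3f} ohm -> {pin_health(r)}")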

  15. Optimization of Artificial Neural Network using Evolutionary Programming for Prediction of Cascading Collapse Occurrence due to the Hidden Failure Effect

    NASA Astrophysics Data System (ADS)

    Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.

    2018-03-01

    This paper presents Evolutionary Programming (EP) as a means of optimizing the training parameters of an Artificial Neural Network (ANN) for predicting the occurrence of cascading collapse due to the effect of protection system hidden failure. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the number of neurons in the first and second hidden layers, are selected by the EP-ANN. The IEEE 14-bus system was used as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the correlation coefficient (R).
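
    The following Python sketch illustrates the general idea of using Evolutionary Programming to select ANN training parameters (learning rate, momentum rate, and two hidden-layer sizes) by minimizing a validation MSE. The fitness function is a synthetic stand-in for training the network on the hidden-failure data; all constants are illustrative.

        # Toy sketch of Evolutionary Programming over ANN training parameters
        # (learning rate, momentum, neurons in two hidden layers). The fitness
        # function below is a synthetic stand-in for "train the network and
        # return validation MSE"; all constants are illustrative.
        import random

        random.seed(0)

        def fitness(params):
            lr, mom, n1, n2 = params
            # Stand-in for validation MSE after training; lower is better.
            return (lr - 0.05) ** 2 + (mom - 0.8) ** 2 + 0.001 * abs(n1 - 12) + 0.001 * abs(n2 - 6)

        def mutate(params):
            lr, mom, n1, n2 = params
            return (max(1e-4, lr + random.gauss(0, 0.02)),
                    min(0.99, max(0.0, mom + random.gauss(0, 0.05))),
                    max(1, int(round(n1 + random.gauss(0, 2)))),
                    max(1, int(round(n2 + random.gauss(0, 2)))))

        population = [(random.uniform(0.001, 0.3), random.uniform(0.1, 0.95),
                       random.randint(2, 30), random.randint(2, 30)) for _ in range(20)]

        for generation in range(50):
            offspring = [mutate(p) for p in population]
            # (mu + lambda) selection: keep the best 20 of parents plus offspring.
            population = sorted(population + offspring, key=fitness)[:20]

        best = population[0]
        print("best (lr, momentum, n_hidden1, n_hidden2):", best, "MSE:", round(fitness(best), 5))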

  16. Spring performance tester for miniature extension springs

    DOEpatents

    Salzbrenner, Bradley; Boyce, Brad

    2017-05-16

    A spring performance tester and method of testing a spring are disclosed that have improved accuracy and precision over prior-art spring testers. The tester can perform static and cyclic testing. The spring tester can provide validation for product acceptance as well as test for cyclic degradation of springs, such as the change in the spring rate and fatigue failure.

  17. Real-time sensor data validation

    NASA Technical Reports Server (NTRS)

    Bickmore, Timothy W.

    1994-01-01

    This report describes the status of an on-going effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining if each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.
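
    A minimal Python sketch of the combination described above: a sensor value is predicted from other sensors through a known relation, the cross-check fails when the residual exceeds a tolerance, and a Bayesian update adjusts the belief that the sensor is valid. The relation, tolerance, and likelihoods below are invented for illustration and are not the engine model used in the report.

        # Minimal sketch of analytical redundancy with a simple Bayesian update.
        # A sensor's value is predicted from other sensors via a known relation
        # (here an invented linear one); the cross-check fails when the residual
        # exceeds a tolerance, and each check result updates P(sensor valid).
        def cross_check(predicted, measured, tolerance):
            return abs(predicted - measured) <= tolerance

        def update_validity(prior_valid, check_passed,
                            p_pass_given_valid=0.98, p_pass_given_failed=0.10):
            """One Bayesian update of P(sensor valid) from a single cross-check."""
            if check_passed:
                num = p_pass_given_valid * prior_valid
                den = num + p_pass_given_failed * (1 - prior_valid)
            else:
                num = (1 - p_pass_given_valid) * prior_valid
                den = num + (1 - p_pass_given_failed) * (1 - prior_valid)
            return num / den

        # Example: predict chamber pressure from pump speed (invented relation).
        pump_speed, chamber_pressure = 3000.0, 195.0
        predicted_pressure = 0.065 * pump_speed       # hypothetical empirical relation
        belief = 0.99                                  # prior that the sensor is valid
        passed = cross_check(predicted_pressure, chamber_pressure, tolerance=10.0)
        belief = update_validity(belief, passed)
        print(f"check passed: {passed}, P(valid) = {belief:.3f}")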

  18. Fail Safe, High Temperature Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Minihan, Thomas; Palazzolo, Alan; Kim, Yeonkyu; Lei, Shu-Liang; Kenny, Andrew; Na, Uhn Joo; Tucker, Randy; Preuss, Jason; Hunt, Andrew; Carter, Bart

    2002-01-01

    This paper contributes to the magnetic bearing literature in two distinct areas: high temperature and redundant actuation. Design considerations and test results are given for the first published combined 538 C (1000 F) high speed rotating test performance of a magnetic bearing. Secondly, a significant extension of the flux isolation based, redundant actuator control algorithm is proposed to eliminate the prior deficiency of changing position stiffness after failure. The benefit of the novel extension was not experimentally demonstrated due to a high active stiffness requirement. In addition, test results are given for actuator failure tests at 399 C (750 F), 12,500 rpm. Finally, simulation results are presented confirming the experimental data and validating the redundant control algorithm.

  19. Word Memory Test Performance Across Cognitive Domains, Psychiatric Presentations, and Mild Traumatic Brain Injury.

    PubMed

    Rowland, Jared A; Miskey, Holly M; Brearly, Timothy W; Martindale, Sarah L; Shura, Robert D

    2017-05-01

    The current study addressed two aims: (i) determine how Word Memory Test (WMT) performance relates to test performance across numerous cognitive domains and (ii) evaluate how current psychiatric disorders or mild traumatic brain injury (mTBI) history affects performance on the WMT after excluding participants with poor symptom validity. Participants were 235 Iraq and Afghanistan-era veterans (mean age = 35.5 years) who completed a comprehensive neuropsychological battery. Participants were divided into two groups based on WMT performance (Pass = 193, Fail = 42). Tests were grouped into cognitive domains and an average z-score was calculated for each domain. Significant differences were found between those who passed and those who failed the WMT on the memory, attention, executive function, and motor output domain z-scores. WMT failure was associated with a larger performance decrement in the memory domain than the sensation or visuospatial-construction domains. Participants with a current psychiatric diagnosis or mTBI history were significantly more likely to fail the WMT, even after removing participants with poor symptom validity. Results suggest that the WMT is most appropriate for assessing validity in the domains of attention, executive function, motor output and memory, with little relationship to performance in domains of sensation or visuospatial-construction. Comprehensive cognitive batteries would benefit from inclusion of additional performance validity tests in these domains. Additionally, symptom validity did not explain higher rates of WMT failure in individuals with a current psychiatric diagnosis or mTBI history. Further research is needed to better understand how these conditions may affect WMT performance. Published by Oxford University Press 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  20. Primary care REFerral for EchocaRdiogram (REFER) in heart failure: a diagnostic accuracy study

    PubMed Central

    Taylor, Clare J; Roalfe, Andrea K; Iles, Rachel; Hobbs, FD Richard; Barton, P; Deeks, J; McCahon, D; Cowie, MR; Sutton, G; Davis, RC; Mant, J; McDonagh, T; Tait, L

    2017-01-01

    Background Symptoms of breathlessness, fatigue, and ankle swelling are common in general practice but deciding which patients are likely to have heart failure is challenging. Aim To evaluate the performance of a clinical decision rule (CDR), with or without N-Terminal pro-B type natriuretic peptide (NT-proBNP) assay, for identifying heart failure. Design and setting Prospective, observational, diagnostic validation study of patients aged >55 years, presenting with shortness of breath, lethargy, or ankle oedema, from 28 general practices in England. Method The outcome was test performance of the CDR and natriuretic peptide test in determining a diagnosis of heart failure. The reference standard was an expert consensus panel of three cardiologists. Results Three hundred and four participants were recruited, with 104 (34.2%; 95% confidence interval [CI] = 28.9 to 39.8) having a confirmed diagnosis of heart failure. The CDR+NT-proBNP had a sensitivity of 90.4% (95% CI = 83.0 to 95.3) and specificity 45.5% (95% CI = 38.5 to 52.7). NT-proBNP level alone with a cut-off <400 pg/ml had sensitivity 76.9% (95% CI = 67.6 to 84.6) and specificity 91.5% (95% CI = 86.7 to 95.0). At the lower cut-off of NT-proBNP <125 pg/ml, sensitivity was 94.2% (95% CI = 87.9 to 97.9) and specificity 49.0% (95% CI = 41.9 to 56.1). Conclusion At the low threshold of NT-proBNP <125 pg/ml, natriuretic peptide testing alone was better than a validated CDR+NT-proBNP in determining which patients presenting with symptoms went on to have a diagnosis of heart failure. The higher NT-proBNP threshold of 400 pg/ml may mean more than one in five patients with heart failure are not appropriately referred. Guideline natriuretic peptide thresholds may need to be revised. PMID:27919937

  1. SILHIL Replication of Electric Aircraft Powertrain Dynamics and Inner-Loop Control for V&V of System Health Management Routines

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Teubert, Christopher Allen; Cuong Chi, Quach; Hogge, Edward; Vazquez, Sixto; Goebel, Kai; George, Vachtsevanos

    2013-01-01

    Software-in-the-loop and Hardware-in-the-loop testing of failure prognostics and decision making tools for aircraft systems will facilitate more comprehensive and cost-effective testing than what is practical to conduct with flight tests. A framework is described for the offline recreation of dynamic loads on simulated or physical aircraft powertrain components based on a real-time simulation of airframe dynamics running on a flight simulator, an inner-loop flight control policy executed by either an autopilot routine or a human pilot, and a supervisory fault management control policy. The creation of an offline framework for verifying and validating supervisory failure prognostics and decision making routines is described for the example of battery charge depletion failure scenarios onboard a prototype electric unmanned aerial vehicle.

  2. The Autonomic Symptom Profile: a new instrument to assess autonomic symptoms

    NASA Technical Reports Server (NTRS)

    Suarez, G. A.; Opfer-Gehrking, T. L.; Offord, K. P.; Atkinson, E. J.; O'Brien, P. C.; Low, P. A.

    1999-01-01

    OBJECTIVE: To develop a new specific instrument called the Autonomic Symptom Profile to measure autonomic symptoms and test its validity. BACKGROUND: Measuring symptoms is important in the evaluation of quality of life outcomes. There is no validated, self-completed questionnaire on the symptoms of patients with autonomic disorders. METHODS: The questionnaire is 169 items concerning different aspects of autonomic symptoms. The Composite Autonomic Symptom Scale (COMPASS) with item-weighting was established; higher scores indicate more or worse symptoms. Autonomic function tests were performed to generate the Composite Autonomic Scoring Scale (CASS) and to quantify autonomic deficits. We compared the results of the COMPASS with the CASS derived from the Autonomic Reflex Screen to evaluate validity. RESULTS: The instrument was tested in 41 healthy controls (mean age 46.6 years), 33 patients with nonautonomic peripheral neuropathies (mean age 59.5 years), and 39 patients with autonomic failure (mean age 61.1 years). COMPASS scores correlated well with the CASS, demonstrating an acceptable level of content and criterion validity. The mean (+/-SD) overall COMPASS score was 9.8 (+/-9) in controls, 25.9 (+/-17.9) in the patients with nonautonomic peripheral neuropathies, and 52.3 (+/-24.2) in the autonomic failure group. Scores of symptoms of orthostatic intolerance and secretomotor dysfunction best predicted the CASS on multiple stepwise regression analysis. CONCLUSIONS: We describe a questionnaire that measures autonomic symptoms and present evidence for its validity. The instrument shows promise in assessing autonomic symptoms in clinical trials and epidemiologic studies.

  3. Posttest analysis of the 1:6-scale reinforced concrete containment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pfeiffer, P.A.; Kennedy, J.M.; Marchertas, A.H.

    A prediction of the response of the Sandia National Laboratories 1:6-scale reinforced concrete containment model test was made by Argonne National Laboratory. ANL along with nine other organizations performed a detailed nonlinear response analysis of the 1:6-scale model containment subjected to overpressurization in the fall of 1986. The two-dimensional code TEMP-STRESS and the three-dimensional NEPTUNE code were utilized (1) to predict the global response of the structure, (2) to identify global failure sites and the corresponding failure pressures and (3) to identify some local failure sites and pressure levels. A series of axisymmetric models was studied with the two-dimensional computer program TEMP-STRESS. The comparison of these pretest computations with test data from the containment model has provided a test for the capability of the respective finite element codes to predict global failure modes, and hence serves as a validation of these codes. Only the two-dimensional analyses will be discussed in this paper. 3 refs., 10 figs.

  4. Handling Qualities of Model Reference Adaptive Controllers with Varying Complexity for Pitch-Roll Coupled Failures

    NASA Technical Reports Server (NTRS)

    Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan

    2011-01-01

    Three model reference adaptive controllers (MRAC) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed that, with a roll-to-pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a control system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of varying complexity.

  5. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  6. Evaluation of tools used to measure calcium and/or dairy consumption in adults.

    PubMed

    Magarey, Anthea; Baulderstone, Lauren; Yaxley, Alison; Markow, Kylie; Miller, Michelle

    2015-05-01

    To identify and critique tools for the assessment of Ca and/or dairy intake in adults, in order to ascertain the most accurate and reliable tools available. A systematic review of the literature was conducted using defined inclusion and exclusion criteria. Articles reporting on originally developed tools or testing the reliability or validity of existing tools that measure Ca and/or dairy intake in adults were included. Author-defined criteria for reporting reliability and validity properties were applied. Studies conducted in Western countries. Adults. Thirty papers, utilising thirty-six tools assessing intake of dairy, Ca or both, were identified. Reliability testing was conducted on only two dairy and five Ca tools, with results indicating that only one dairy and two Ca tools were reliable. Validity testing was conducted for all but four Ca-only tools. There was high reliance in validity testing on lower-order tests such as correlation and failure to differentiate between statistical and clinically meaningful differences. Results of the validity testing suggest one dairy and five Ca tools are valid. Thus one tool was considered both reliable and valid for the assessment of dairy intake and only two tools proved reliable and valid for the assessment of Ca intake. While several tools are reliable and valid, their application across adult populations is limited by the populations in which they were tested. These results indicate a need for tools that assess Ca and/or dairy intake in adults to be rigorously tested for reliability and validity.

  7. The Hyper-X Flight Systems Validation Program

    NASA Technical Reports Server (NTRS)

    Redifer, Matthew; Lin, Yohan; Bessent, Courtney Amos; Barklow, Carole

    2007-01-01

    For the Hyper-X/X-43A program, the development of a comprehensive validation test plan played an integral part in the success of the mission. The goal was to demonstrate hypersonic propulsion technologies by flight testing an airframe-integrated scramjet engine. Preparation for flight involved both verification and validation testing. By definition, verification is the process of assuring that the product meets design requirements; whereas validation is the process of assuring that the design meets mission requirements for the intended environment. This report presents an overview of the program with emphasis on the validation efforts. It includes topics such as hardware-in-the-loop, failure modes and effects, aircraft-in-the-loop, plugs-out, power characterization, antenna pattern, integration, combined systems, captive carry, and flight testing. Where applicable, test results are also discussed. The report provides a brief description of the flight systems onboard the X-43A research vehicle and an introduction to the ground support equipment required to execute the validation plan. The intent is to provide validation concepts that are applicable to current, follow-on, and next generation vehicles that share the hybrid spacecraft and aircraft characteristics of the Hyper-X vehicle.

  8. A chi-square goodness-of-fit test for non-identically distributed random variables: with application to empirical Bayes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conover, W.J.; Cox, D.D.; Martz, H.F.

    1997-12-01

    When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
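
    A simplified Python sketch of the kind of check described above, assuming a hypothesized beta prior for binomial data with varying sample sizes: a randomized probability-integral transform is uniform under the null hypothesis, so binned counts can be compared against uniform expectations with a chi-square statistic. The data and prior parameters are simulated, and the approach is a stand-in for, not a reproduction of, the tests developed in the report.

        # Chi-square goodness-of-fit check of a beta prior for binomial data with
        # varying sample sizes, via a randomized probability-integral transform (PIT):
        # under H0 the PIT values are Uniform(0,1), so binned counts can be tested
        # against uniform expectations. Data and prior parameters are invented.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        a, b = 2.0, 8.0                       # hypothesized beta prior
        n = rng.integers(10, 200, size=60)    # varying binomial sample sizes
        p = rng.beta(a, b, size=60)
        x = rng.binomial(n, p)                # observed failure counts

        # Randomized PIT: u_i = F(x_i - 1) + V_i * f(x_i) is Uniform(0,1) under H0.
        u = np.array([stats.betabinom.cdf(xi - 1, ni, a, b)
                      + rng.uniform() * stats.betabinom.pmf(xi, ni, a, b)
                      for xi, ni in zip(x, n)])

        k = 6                                 # number of equal-probability bins
        observed, _ = np.histogram(u, bins=np.linspace(0, 1, k + 1))
        expected = np.full(k, len(u) / k)
        chi2 = ((observed - expected) ** 2 / expected).sum()
        pvalue = stats.chi2.sf(chi2, df=k - 1)   # prior parameters hypothesized, not fitted
        print(f"chi-square = {chi2:.2f}, p = {pvalue:.3f}")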

  9. External validity of two nomograms for predicting distant brain failure after radiosurgery for brain metastases in a bi-institutional independent patient cohort.

    PubMed

    Prabhu, Roshan S; Press, Robert H; Boselli, Danielle M; Miller, Katherine R; Lankford, Scott P; McCammon, Robert J; Moeller, Benjamin J; Heinzerling, John H; Fasola, Carolina E; Patel, Kirtesh R; Asher, Anthony L; Sumrall, Ashley L; Curran, Walter J; Shu, Hui-Kuo G; Burri, Stuart H

    2018-03-01

    Patients treated with stereotactic radiosurgery (SRS) for brain metastases (BM) are at increased risk of distant brain failure (DBF). Two nomograms have been recently published to predict individualized risk of DBF after SRS. The goal of this study was to assess the external validity of these nomograms in an independent patient cohort. The records of consecutive patients with BM treated with SRS at Levine Cancer Institute and Emory University between 2005 and 2013 were reviewed. Three validation cohorts were generated based on the specific nomogram or recursive partitioning analysis (RPA) entry criteria: Wake Forest nomogram (n = 281), Canadian nomogram (n = 282), and Canadian RPA (n = 303) validation cohorts. Freedom from DBF at 1-year in the Wake Forest study was 30% compared with 50% in the validation cohort. The validation c-index for both the 6-month and 9-month freedom from DBF Wake Forest nomograms was 0.55, indicating poor discrimination ability, and the goodness-of-fit test for both nomograms was highly significant (p < 0.001), indicating poor calibration. The 1-year actuarial DBF in the Canadian nomogram study was 43.9% compared with 50.9% in the validation cohort. The validation c-index for the Canadian 1-year DBF nomogram was 0.56, and the goodness-of-fit test was also highly significant (p < 0.001). The validation accuracy and c-index of the Canadian RPA classification was 53% and 0.61, respectively. The Wake Forest and Canadian nomograms for predicting risk of DBF after SRS were found to have limited predictive ability in an independent bi-institutional validation cohort. These results reinforce the importance of validating predictive models in independent patient cohorts.

  10. Derivation and experimental verification of clock synchronization theory

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1994-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst case conditions.

  11. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques like expert systems and qualitative modelling are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
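
    The source-identification step can be sketched as follows in Python: edges of the fault propagation digraph carry (min, max) propagation-time intervals, and a failure mode is a candidate source if every observed alarm is reachable from it with a cumulative time window consistent with some common failure time. The graph, node names, and alarm times are invented for illustration.

        # Minimal sketch of source identification on a fault propagation digraph.
        # Edges carry (min, max) propagation-time intervals; a candidate source is a
        # node from which every alarmed node is reachable within windows consistent
        # with a single (unknown) failure time. Graph and alarm times are invented.
        GRAPH = {  # node -> list of (successor, t_min, t_max) in seconds
            "pump_fail":    [("low_flow", 1, 3), ("low_pressure", 2, 5)],
            "valve_stuck":  [("low_flow", 0, 2)],
            "low_flow":     [("overheat", 4, 8)],
            "low_pressure": [],
            "overheat":     [],
        }

        def windows_from(source):
            """Cumulative (t_min, t_max) reachability windows from a source node."""
            reach = {source: (0, 0)}
            frontier = [source]
            while frontier:
                node = frontier.pop()
                lo, hi = reach[node]
                for succ, tmin, tmax in GRAPH[node]:
                    if succ not in reach:   # only the first discovered path is kept here
                        reach[succ] = (lo + tmin, hi + tmax)
                        frontier.append(succ)
            return reach

        def candidate_sources(alarms):
            """alarms: alarmed node -> observed alarm time (seconds, arbitrary origin)."""
            candidates = []
            for source in GRAPH:
                reach = windows_from(source)
                if not all(node in reach for node in alarms):
                    continue
                # Consistent if some failure time t0 satisfies, for every alarm n,
                # reach[n][0] <= alarms[n] - t0 <= reach[n][1] (interval intersection).
                t0_low = max(alarms[n] - reach[n][1] for n in alarms)
                t0_high = min(alarms[n] - reach[n][0] for n in alarms)
                if t0_low <= t0_high:
                    candidates.append(source)
            return candidates

        print(candidate_sources({"low_flow": 10.0, "low_pressure": 12.0}))  # ['pump_fail']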

  12. Analysis, testing, and evaluation of faulted and unfaulted Wye, Delta, and open Delta connected electromechanical actuators

    NASA Technical Reports Server (NTRS)

    Nehl, T. W.; Demerdash, N. A.

    1983-01-01

    Mathematical models capable of simulating the transient, steady state, and faulted performance characteristics of various brushless dc machine-PSA (power switching assembly) configurations were developed. These systems are intended for possible future use as primemovers in EMAs (electromechanical actuators) for flight control applications. These machine-PSA configurations include wye, delta, and open-delta connected systems. The research performed under this contract was initially broken down into the following six tasks: development of mathematical models for various machine-PSA configurations; experimental validation of the model for failure modes; experimental validation of the mathematical model for shorted turn-failure modes; tradeoff study; and documentation of results and methodology.

  13. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    PubMed

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, nearing or at chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator for malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why the term performance validity testing (PVT) may be preferable to SVT are reviewed. Advances in neuroimaging techniques may be key in better understanding the meaning of border-zone SVT failure. The review demonstrates the problems of rigid interpretation of established cut-scores. A better understanding of how certain types of neurological, neuropsychiatric and/or even test conditions may affect SVT performance is needed.

  14. An Efficient Implementation of Fixed Failure-Rate Ratio Test for GNSS Ambiguity Resolution.

    PubMed

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-06-23

    Ambiguity Resolution (AR) plays a vital role in precise GNSS positioning. Correctly-fixed integer ambiguities can significantly improve the positioning solution, while incorrectly-fixed integer ambiguities can bring large positioning errors and, therefore, should be avoided. The ratio test is an extensively used test to validate the fixed integer ambiguities. To choose proper critical values of the ratio test, the Fixed Failure-rate Ratio Test (FFRT) has been proposed, which generates critical values according to user-defined tolerable failure rates. This contribution provides easy-to-implement fitting functions to calculate the critical values. With a massive Monte Carlo simulation, the functions for many different tolerable failure rates are provided, which enriches the choices of critical values for users. Moreover, the fitting functions for the fix rate are also provided, which for the first time allows users to evaluate the conditional success rate, i.e., the success rate once the integer candidates are accepted by FFRT. The superiority of FFRT over the traditional ratio test regarding controlling the failure rate and preventing unnecessary false alarms is shown by a simulation and a real data experiment. In the real data experiment with a baseline of 182.7 km, FFRT achieved much higher fix rates (up to 30% higher) and the same level of positioning accuracy from fixed solutions as compared to the traditional critical value.
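
    A Python sketch of the FFRT acceptance step, under the assumption that the critical value is obtained from a lookup parameterized by the tolerated failure rate and the model strength. The critical-value function below is a made-up placeholder for the paper's fitting functions, not their published coefficients.

        # Sketch of the Fixed Failure-rate Ratio Test (FFRT) acceptance step: the fixed
        # integer solution is accepted only when the ratio of the best to the second-best
        # squared-norm residual does not exceed a critical value chosen so that the
        # failure rate stays below a tolerated level. The critical-value function is a
        # made-up placeholder, not the paper's fitting functions.
        def critical_value(n_ambiguities, ils_failure_rate, tolerated_failure_rate=0.001):
            # Hypothetical lookup: a weaker model (higher ILS failure rate) gives a
            # smaller (stricter) critical value.
            base = {0.001: 0.7, 0.01: 0.5}.get(tolerated_failure_rate, 0.7)
            return max(0.1, base - 2.0 * ils_failure_rate - 0.01 * n_ambiguities)

        def ffrt_accept(best_sq_norm, second_sq_norm, n_ambiguities, ils_failure_rate,
                        tolerated_failure_rate=0.001):
            ratio = best_sq_norm / second_sq_norm          # <= 1 by construction
            mu = critical_value(n_ambiguities, ils_failure_rate, tolerated_failure_rate)
            return ratio <= mu, ratio, mu

        accepted, ratio, mu = ffrt_accept(best_sq_norm=1.8, second_sq_norm=6.5,
                                          n_ambiguities=8, ils_failure_rate=0.02)
        print(f"ratio = {ratio:.3f}, critical value = {mu:.3f}, accept fix: {accepted}")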

  15. Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors

    NASA Technical Reports Server (NTRS)

    Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele

    2010-01-01

    This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.

  16. Does an inter-flaw length control the accuracy of rupture forecasting in geological materials?

    NASA Astrophysics Data System (ADS)

    Vasseur, Jérémie; Wadsworth, Fabian B.; Heap, Michael J.; Main, Ian G.; Lavallée, Yan; Dingwell, Donald B.

    2017-10-01

    Multi-scale failure of porous materials is an important phenomenon in nature and in material physics - from controlled laboratory tests to rockbursts, landslides, volcanic eruptions and earthquakes. A key unsolved research question is how to accurately forecast the time of system-sized catastrophic failure, based on observations of precursory events such as acoustic emissions (AE) in laboratory samples, or, on a larger scale, small earthquakes. Until now, the length scale associated with precursory events has not been well quantified, resulting in forecasting tools that are often unreliable. Here we test the hypothesis that the accuracy of the forecast failure time depends on the inter-flaw distance in the starting material. We use new experimental datasets for the deformation of porous materials to infer the critical crack length at failure from a static damage mechanics model. The style of acceleration of AE rate prior to failure, and the accuracy of forecast failure time, both depend on whether the cracks can span the inter-flaw length or not. A smooth inverse power-law acceleration of AE rate to failure, and an accurate forecast, occurs when the cracks are sufficiently long to bridge pore spaces. When this is not the case, the predicted failure time is much less accurate and failure is preceded by an exponential AE rate trend. Finally, we provide a quantitative and pragmatic correction for the systematic error in the forecast failure time, valid for structurally isotropic porous materials, which could be tested against larger-scale natural failure events, with suitable scaling for the relevant inter-flaw distances.
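
    The inverse-rate forecast mentioned above can be sketched briefly: when the precursor rate accelerates as an inverse power law, the reciprocal rate declines roughly linearly in time and its extrapolated zero-crossing estimates the failure time. The Python example below uses synthetic rates generated for illustration only.

        # Sketch of the classic inverse-rate failure forecast: if the AE (or seismic
        # event) rate accelerates toward failure as an inverse power law, 1/rate
        # plotted against time is roughly linear and its extrapolated zero-crossing
        # estimates the failure time. Synthetic rates for illustration only.
        import numpy as np

        t_failure_true = 100.0
        t = np.linspace(10, 95, 18)                   # observation times (s)
        rate = 50.0 / (t_failure_true - t)            # synthetic accelerating AE rate (1/s)

        inv_rate = 1.0 / rate
        slope, intercept = np.polyfit(t, inv_rate, 1)  # linear fit to 1/rate vs time
        t_failure_forecast = -intercept / slope        # zero-crossing of the fit
        print(f"forecast failure time: {t_failure_forecast:.1f} s (true: {t_failure_true} s)")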

  17. Performance and Symptom Validity Testing as a Function of Medical Board Evaluation in U.S. Military Service Members with a History of Mild Traumatic Brain Injury.

    PubMed

    Armistead-Jehle, Patrick; Cole, Wesley R; Stegman, Robert L

    2018-02-01

    The study was designed to replicate and extend previous findings demonstrating the high rates of invalid neuropsychological testing in military service members (SMs) with a history of mild traumatic brain injury (mTBI) assessed in the context of a medical evaluation board (MEB). Two hundred thirty-one active duty SMs (61 of whom were undergoing an MEB) underwent neuropsychological assessment. Performance validity (Word Memory Test) and symptom validity (MMPI-2-RF) test data were compared across those evaluated within disability (MEB) and clinical contexts. As with previous studies, there were significantly more individuals in an MEB context who failed performance (MEB = 57%, non-MEB = 31%) and symptom validity testing (MEB = 57%, non-MEB = 22%), and performance validity testing had a notable effect on cognitive test scores. Performance and symptom validity test failure rates did not vary as a function of the reason for disability evaluation when divided into behavioral versus physical health conditions. These data are consistent with past studies, and extend those studies by including symptom validity testing and investigating the effect of reason for MEB. This and previous studies demonstrate that more than 50% of SMs seen in the context of an MEB will fail performance validity tests and over-report on symptom validity measures. These results emphasize the importance of using both performance and symptom validity testing when evaluating SMs with a history of mTBI, especially if they are being seen for disability evaluations, in order to ensure the accuracy of cognitive and psychological test data. Published by Oxford University Press 2017. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  18. Matrix Dominated Failure of Fiber-Reinforced Composite Laminates Under Static and Dynamic Loading

    NASA Astrophysics Data System (ADS)

    Schaefer, Joseph Daniel

    Hierarchical material systems provide the unique opportunity to connect material knowledge to solving specific design challenges. Representing the quickest growing class of hierarchical materials in use, fiber-reinforced polymer composites (FRPCs) offer superior strength and stiffness-to-weight ratios, damage tolerance, and decreasing production costs compared to metals and alloys. However, the implementation of FRPCs has historically been fraught with inadequate knowledge of the material failure behavior due to incomplete verification of recent computational constitutive models and improper (or non-existent) experimental validation, which has severely slowed creation and development. Noted by the recent Materials Genome Initiative and the Worldwide Failure Exercise, current state of the art qualification programs endure a 20 year gap between material conceptualization and implementation due to the lack of effective partnership between computational coding (simulation) and experimental characterization. Qualification processes are primarily experiment driven; the anisotropic nature of composites predisposes matrix-dominant properties to be sensitive to strain rate, which necessitates extensive testing. To decrease the qualification time, a framework that practically combines theoretical prediction of material failure with limited experimental validation is required. In this work, the Northwestern Failure Theory (NU Theory) for composite lamina is presented as the theoretical basis from which the failure of unidirectional and multidirectional composite laminates is investigated. From an initial experimental characterization of basic lamina properties, the NU Theory is employed to predict the matrix-dependent failure of composites under any state of biaxial stress from quasi-static to 1000 s-1 strain rates. It was found that the number of experiments required to characterize the strain-rate-dependent failure of a new composite material was reduced by an order of magnitude, and the resulting strain-rate-dependence was applicable for a large class of materials. The presented framework provides engineers with the capability to quickly identify fiber and matrix combinations for a given application and determine the failure behavior over the range of practical loadings cases. The failure-mode-based NU Theory may be especially useful when partnered with computational approaches (which often employ micromechanics to determine constituent and constitutive response) to provide accurate validation of the matrix-dominated failure modes experienced by laminates during progressive failure.

  19. Genomic analysis of bone marrow failure and myelodysplastic syndromes reveals phenotypic and diagnostic complexity

    PubMed Central

    Zhang, Michael Y.; Keel, Siobán B.; Walsh, Tom; Lee, Ming K.; Gulsuner, Suleyman; Watts, Amanda C.; Pritchard, Colin C.; Salipante, Stephen J.; Jeng, Michael R.; Hofmann, Inga; Williams, David A.; Fleming, Mark D.; Abkowitz, Janis L.; King, Mary-Claire; Shimamura, Akiko

    2015-01-01

    Accurate and timely diagnosis of inherited bone marrow failure and inherited myelodysplastic syndromes is essential to guide clinical management. Distinguishing inherited from acquired bone marrow failure/myelodysplastic syndrome poses a significant clinical challenge. At present, diagnostic genetic testing for inherited bone marrow failure/myelodysplastic syndrome is performed gene-by-gene, guided by clinical and laboratory evaluation. We hypothesized that standard clinically-directed genetic testing misses patients with cryptic or atypical presentations of inherited bone marrow failure/myelodysplastic syndrome. In order to screen simultaneously for mutations of all classes in bone marrow failure/myelodysplastic syndrome genes, we developed and validated a panel of 85 genes for targeted capture and multiplexed massively parallel sequencing. In patients with clinical diagnoses of Fanconi anemia, genomic analysis resolved subtype assignment, including those of patients with inconclusive complementation test results. Eight out of 71 patients with idiopathic bone marrow failure or myelodysplastic syndrome were found to harbor damaging germline mutations in GATA2, RUNX1, DKC1, or LIG4. All 8 of these patients lacked classical clinical stigmata or laboratory findings of these syndromes and only 4 had a family history suggestive of inherited disease. These results reflect the extensive genetic heterogeneity and phenotypic complexity of bone marrow failure/myelodysplastic syndrome phenotypes. This study supports the integration of broad unbiased genetic screening into the diagnostic workup of children and young adults with bone marrow failure and myelodysplastic syndromes. PMID:25239263

  20. Measuring attention in very old adults using the Test of Everyday Attention.

    PubMed

    van der Leeuw, Guusje; Leveille, Suzanne G; Jones, Richard N; Hausdorff, Jeffrey M; McLean, Robert; Kiely, Dan K; Gagnon, Margaret; Milberg, William P

    2017-09-01

    There is a need for validated measures of attention for use in longitudinal studies of older populations. We studied 249 participants aged 80 to 101 years using the population-based MOBILIZE Boston Study. Four subscales of the Test of Everyday Attention (TEA), measuring attention switching and selective, sustained, and divided attention, were included, along with a neuropsychological battery of validated measures of multiple cognitive domains covering attention, executive function, and memory. The TEA has not previously been validated in persons aged 80 and older. Among participants who completed the TEA, scores on other attentional measures correlated strongly with TEA domains (R=.60-.70). Proportions of participants with incomplete TEA subscales ranged from 8% (selective attention) to 19% (attentional switching). Reasons for not completing TEA tests included failure to comprehend test instructions despite repetition and practice. These results demonstrate the challenges and potential value of the Test of Everyday Attention in studies of very old populations.

  1. Experimental Study On The Effect Of Micro-Cracks On Brazilian Tensile Strength

    NASA Astrophysics Data System (ADS)

    Wang, Xiangyu

    2015-12-01

    For coal mine ground control issues, it is necessary to propose a failure criteria accounting for the transversely isotropic behaviors of rocks. Hence, it is very helpful to provide experimental data for the validation of the failure criteria. In this paper, the method for preparing transversely isotropic specimens and the scheme of the Brazilian tensile strength test are presented. Results obtained from Brazilian split tests under dry and water-saturated conditions reflect the effect of the development direction β of the structural plane, such as the bedding fissure, on the tensile strength, ultimate displacement, failure mode, and the whole splitting process. The results show that the tensile strength decreases linearly with increasing β. The softening coefficient of the tensile strength shows a sinusoidal function. The values of the slope and inflection point for the curve vary at the different stages of the Brazilian test. The failure mode of the rock specimen presented in this paper generally coincides with the standard Brazilian splitting failure mode. Based on the test results, the major influencing factors for the Brazilian splitting strength are analyzed and a mathematical model for solving the Brazilian splitting strength is proposed. The findings in this paper would greatly benefit the coal mine ground control studies when the surrounding rocks of interest show severe transversely isotropic behaviors.
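
    For reference, the standard Brazilian test relation converts the peak load into an indirect tensile strength, sigma_t = 2P / (pi * D * t) for a disc of diameter D and thickness t. The Python sketch below applies it and then illustrates a linear decrease of strength with bedding angle; the reduction coefficient and the input values are hypothetical, not the paper's fitted model.

        # Standard Brazilian (splitting) test relation: sigma_t = 2P / (pi * D * t).
        # The linear reduction with bedding angle beta shown afterwards is a
        # hypothetical illustration of the trend reported above.
        import math

        def brazilian_tensile_strength(peak_load_n, diameter_m, thickness_m):
            return 2.0 * peak_load_n / (math.pi * diameter_m * thickness_m)   # Pa

        sigma_t0 = brazilian_tensile_strength(peak_load_n=15000, diameter_m=0.05, thickness_m=0.025)

        # Hypothetical linear decrease of strength with bedding-plane angle beta (degrees).
        for beta in (0, 30, 60, 90):
            sigma_t = sigma_t0 * (1.0 - 0.004 * beta)
            print(f"beta = {beta:2d} deg -> sigma_t = {sigma_t / 1e6:.2f} MPa")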

  2. The Implications of Symptom Validity Test Failure for Ability-Based Test Performance in a Pediatric Sample

    ERIC Educational Resources Information Center

    Kirkwood, Michael W.; Yeates, Keith Owen; Randolph, Christopher; Kirk, John W.

    2012-01-01

    If an examinee exerts inadequate effort to perform well during a psychological or neuropsychological exam, the resulting data will represent an inaccurate representation of the individual's true abilities and difficulties. In adult populations, methodologies to identify noncredible effort have grown exponentially in the last 2 decades. Though a…

  3. Super Learner Analysis of Electronic Adherence Data Improves Viral Prediction and May Provide Strategies for Selective HIV RNA Monitoring.

    PubMed

    Petersen, Maya L; LeDell, Erin; Schwab, Joshua; Sarovar, Varada; Gross, Robert; Reynolds, Nancy; Haberer, Jessica E; Goggin, Kathy; Golin, Carol; Arnsten, Julia; Rosen, Marc I; Remien, Robert H; Etoori, David; Wilson, Ira B; Simoni, Jane M; Erlen, Judith A; van der Laan, Mark J; Liu, Honghu; Bangsberg, David R

    2015-05-01

    Regular HIV RNA testing for all HIV-positive patients on antiretroviral therapy (ART) is expensive and has low yield since most tests are undetectable. Selective testing of those at higher risk of failure may improve efficiency. We investigated whether a novel analysis of adherence data could correctly classify virological failure and potentially inform a selective testing strategy. Multisite prospective cohort consortium. We evaluated longitudinal data on 1478 adult patients treated with ART and monitored using the Medication Event Monitoring System (MEMS) in 16 US cohorts contributing to the MACH14 consortium. Because the relationship between adherence and virological failure is complex and heterogeneous, we applied a machine-learning algorithm (Super Learner) to build a model for classifying failure and evaluated its performance using cross-validation. Application of the Super Learner algorithm to MEMS data, combined with data on CD4 T-cell counts and ART regimen, significantly improved classification of virological failure over a single MEMS adherence measure. Area under the receiver operating characteristic curve, evaluated on data not used in model fitting, was 0.78 (95% confidence interval: 0.75 to 0.80) and 0.79 (95% confidence interval: 0.76 to 0.81) for failure defined as single HIV RNA level >1000 copies per milliliter or >400 copies per milliliter, respectively. Our results suggest that 25%-31% of viral load tests could be avoided while maintaining sensitivity for failure detection at or above 95%, for a cost savings of $16-$29 per person-month. Our findings provide initial proof of concept for the potential use of electronic medication adherence data to reduce costs through behavior-driven HIV RNA testing.
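
    A Python sketch of the general approach, using scikit-learn's StackingClassifier as a stand-in for the Super Learner and synthetic features in place of the MEMS adherence summaries, CD4 counts, and regimen indicators; the cross-validated AUC is computed the same way regardless of the data source.

        # Sketch of the idea behind the Super Learner analysis: stack several candidate
        # learners on adherence-derived features and evaluate the cross-validated AUC
        # for classifying virological failure. StackingClassifier is a stand-in for the
        # Super Learner, and the features/labels are synthetic, not MACH14 data.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-ins for MEMS adherence summaries, CD4 count and regimen.
        X, y = make_classification(n_samples=1478, n_features=12, n_informative=6,
                                   weights=[0.85, 0.15], random_state=0)

        stack = StackingClassifier(
            estimators=[("logit", LogisticRegression(max_iter=1000)),
                        ("forest", RandomForestClassifier(n_estimators=200, random_state=0))],
            final_estimator=LogisticRegression(max_iter=1000), cv=5)

        auc = cross_val_score(stack, X, y, cv=5, scoring="roc_auc")
        print(f"cross-validated AUC: {auc.mean():.2f} (+/- {auc.std():.2f})")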

  4. Super learner analysis of electronic adherence data improves viral prediction and may provide strategies for selective HIV RNA monitoring

    PubMed Central

    Petersen, Maya L.; LeDell, Erin; Schwab, Joshua; Sarovar, Varada; Gross, Robert; Reynolds, Nancy; Haberer, Jessica E.; Goggin, Kathy; Golin, Carol; Arnsten, Julia; Rosen, Marc; Remien, Robert; Etoori, David; Wilson, Ira; Simoni, Jane M.; Erlen, Judith A.; van der Laan, Mark J.; Liu, Honghu; Bangsberg, David R

    2015-01-01

    Objective Regular HIV RNA testing for all HIV positive patients on antiretroviral therapy (ART) is expensive and has low yield since most tests are undetectable. Selective testing of those at higher risk of failure may improve efficiency. We investigated whether a novel analysis of adherence data could correctly classify virological failure and potentially inform a selective testing strategy. Design Multisite prospective cohort consortium. Methods We evaluated longitudinal data on 1478 adult patients treated with ART and monitored using the Medication Event Monitoring System (MEMS) in 16 United States cohorts contributing to the MACH14 consortium. Since the relationship between adherence and virological failure is complex and heterogeneous, we applied a machine-learning algorithm (Super Learner) to build a model for classifying failure and evaluated its performance using cross-validation. Results Application of the Super Learner algorithm to MEMS data, combined with data on CD4+ T cell counts and ART regimen, significantly improved classification of virological failure over a single MEMS adherence measure. Area under the ROC curve, evaluated on data not used in model fitting, was 0.78 (95% CI: 0.75, 0.80) and 0.79 (95% CI: 0.76, 0.81) for failure defined as single HIV RNA level >1000 copies/ml or >400 copies/ml, respectively. Our results suggest 25–31% of viral load tests could be avoided while maintaining sensitivity for failure detection at or above 95%, for a cost savings of $16–$29 per person-month. Conclusions Our findings provide initial proof-of-concept for the potential use of electronic medication adherence data to reduce costs through behavior-driven HIV RNA testing. PMID:25942462

  5. Derivation and validation of a simple clinical risk-model in heart failure based on 6 minute walk test performance and NT-proBNP status--do we need specificity for sex and beta-blockers?

    PubMed

    Frankenstein, L; Goode, K; Ingle, L; Remppis, A; Schellberg, D; Nelles, M; Katus, H A; Clark, A L; Cleland, J G F; Zugck, C

    2011-02-17

    It is unclear whether risk prediction strategies in chronic heart failure (CHF) need to be specific for sex or beta-blockers. We examined this problem and developed and validated the resulting risk models based on the 6-minute walk test and NT-proBNP. The derivation cohort comprised 636 German patients with systolic dysfunction. The models were validated against 676 British patients with similar aetiology. ROC curves for 1-year mortality identified cut-off values separately for each level of specificity (none, sex, beta-blocker, both). Patients were grouped according to the number of cut-offs met (groups I/II/III meeting 0/1/2 cut-offs). The widest separation between groups was achieved with sex- and beta-blocker-specific cut-offs. In the derivation population, 1-year mortality was 0%, 8%, 31% for groups I, II and III, respectively. In the validation population, 1-year rates in the three risk groups were 2%, 7%, 14%, respectively, after application of the same cut-offs. Risk stratification for CHF should perhaps take sex and beta-blocker usage into account. We derived and independently validated relevant risk models based on 6-minute walk tests and NT-proBNP. Specifying sex and use of beta-blockers identified three distinct sub-groups with widely differing prognosis. In clinical practice, it may be appropriate to tailor the intensity of follow-up and/or the treatment strategy according to the risk group. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
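
    The grouping logic can be sketched in a few lines of Python: each patient is assigned to risk group I, II, or III according to how many of the two cut-offs are met, with cut-offs specific to sex and beta-blocker use. The cut-off values in the table are placeholders, not the study's derived values.

        # Sketch of the grouping logic: count how many of the two cut-offs are met
        # (6-minute walk distance below its cut-off, NT-proBNP above its cut-off),
        # using sex- and beta-blocker-specific cut-offs. Cut-off values are placeholders.
        CUTOFFS = {  # (sex, on_beta_blocker) -> (walk distance in m, NT-proBNP in pg/ml)
            ("male", True): (300, 1500), ("male", False): (320, 1800),
            ("female", True): (280, 1400), ("female", False): (300, 1700),
        }

        def risk_group(sex, on_beta_blocker, walk_m, ntprobnp_pg_ml):
            walk_cut, bnp_cut = CUTOFFS[(sex, on_beta_blocker)]
            n_met = int(walk_m < walk_cut) + int(ntprobnp_pg_ml > bnp_cut)
            return {0: "I", 1: "II", 2: "III"}[n_met]

        print(risk_group("female", True, walk_m=250, ntprobnp_pg_ml=2500))   # -> "III"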

  6. FDIR Strategy Validation with the B Method

    NASA Astrophysics Data System (ADS)

    Sabatier, D.; Dellandrea, B.; Chemouil, D.

    2008-08-01

    In a formation flying satellite system, the FDIR strategy (Failure Detection, Isolation and Recovery) is paramount. When a failure occurs, satellites should be able to take appropriate reconfiguration actions to obtain the best possible results given the failure, ranging from avoiding satellite-to-satellite collision to continuing the mission without disturbance if possible. To achieve this goal, each satellite in the formation has an implemented FDIR strategy that governs how it detects failures (from tests or by deduction) and how it reacts (reconfiguration using redundant equipment, avoidance manoeuvres, etc.). The goal is to protect the satellites first and the mission as much as possible. In a project initiated by the CNES, ClearSy experimented with the B Method to validate the FDIR strategies, developed by Thales Alenia Space, of the inter-satellite positioning and communication devices that will be used for the SIMBOL-X (2 satellite configuration) and the PEGASE (3 satellite configuration) missions and potentially for other missions afterward. These radio frequency metrology sensor devices provide satellite positioning and inter-satellite communication in formation flying. This article presents the results of this experiment.

  7. Validation of bending tests by nanoindentation for micro-contact analysis of MEMS switches

    NASA Astrophysics Data System (ADS)

    Broue, Adrien; Fourcade, Thibaut; Dhennin, Jérémie; Courtade, Frédéric; Charvet, Pierre–Louis; Pons, Patrick; Lafontan, Xavier; Plana, Robert

    2010-08-01

    Research on contact characterization for microelectromechanical system (MEMS) switches has been driven by the necessity to reach a high-reliability level for micro-switch applications. One of the main failures observed during cycling of the devices is the increase of the electrical contact resistance. The key issue is the electromechanical behaviour of the materials used at the contact interface where the current flows through. Metal contact switches have a large and complex set of failure mechanisms according to the current level. This paper demonstrates the validity of a new methodology using a commercial nanoindenter coupled with electrical measurements on test vehicles specially designed to investigate the micro-scale contact physics. Dedicated validation tests and modelling are performed to assess the introduced methodology by analyzing the gold contact interface with 5 µm2 square bumps at various current levels. Contact temperature rise is measured, which affects the mechanical properties of the contact materials and modifies the contact topology. In addition, the data provide a better understanding of micro-contact behaviour related to the impact of current at low- to medium-power levels. This article was originally submitted for the special section 'Selected papers from the 20th Micromechanics Europe Workshop (MME 09) (Toulouse, France, 20-22 September 2009)', Journal of Micromechanics and Microengineering, volume 20, issue 6.

  8. Poor symptom and performance validity in regularly referred Hospital outpatients: Link with standard clinical measures, and role of incentives.

    PubMed

    Dandachi-FitzGerald, Brechje; van Twillert, Björn; van de Sande, Peter; van Os, Yindee; Ponds, Rudolf W H M

    2016-05-30

    We investigated the frequency of symptom validity test (SVT) failure and its clinical correlates in a large, heterogeneous sample of hospital outpatients referred for psychological assessment for clinical purposes. We studied patients (N=469), who were regularly referred for assessment to the psychology departments of five hospitals. Background characteristics, including information about incentives, were obtained with a checklist completed by the clinician. As a measure of over-reporting, the Structured Inventory of Malingered Symptomatology (SIMS) was administered to all patients. The Amsterdam Short-Term Memory test (ASTM), a cognitive underperformance measure, was only administered to patients who were referred for a neuropsychological assessment. Symptom over-reporting occurred in a minority of patients, ranging from 12% to 19% in the main diagnostic patient groups. Patients with morbid obesity had a low rate of over-reporting (1%). The SIMS was positively associated with levels of self-reported psychological symptoms. Cognitive underperformance occurred in 29.3% of the neuropsychological assessments. The ASTM was negatively associated with memory test performance. We found no association between SVT failure and financial incentives. Our results support the recommendation to routinely evaluate symptom validity in clinical assessments of hospital patients. The dynamics behind invalid symptom reporting need to be further elucidated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Real-Time Sensor Validation System Developed for Reusable Launch Vehicle Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.

    1997-01-01

    A real-time system for validating sensor health has been developed for the reusable launch vehicle (RLV) program. This system, which is part of the propulsion checkout and control system (PCCS), was designed for use in an integrated propulsion technology demonstrator testbed built by Rockwell International and located at the NASA Marshall Space Flight Center. Work on the sensor health validation system, a result of an industry-NASA partnership, was completed at the NASA Lewis Research Center, then delivered to Marshall for integration and testing. The sensor validation software performs three basic functions: it identifies failed sensors, it provides reconstructed signals for failed sensors, and it identifies off-nominal system transient behavior that cannot be attributed to a failed sensor. The code is initiated by host software before the start of a propulsion system test, and it is called by the host program every control cycle. The output is posted to global memory for use by other PCCS modules. Output includes a list indicating the status of each sensor (i.e., failed, healthy, or reconstructed) and a list of features that are not due to a sensor failure. If a sensor failure is found, the system modifies that sensor's data array by substituting a reconstructed signal, when possible, for use by other PCCS modules.
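
    The following is an illustrative sketch only, not the PCCS algorithm: it mimics the three basic functions listed above (flag failed sensors, substitute a reconstructed signal, and flag transients that cannot be attributed to a single sensor) for one set of redundant channels checked once per control cycle. The range and deviation limits are made-up parameters.

```python
import numpy as np

# Illustrative per-cycle sensor validation for redundant channels of one quantity.
def validate_sensors(readings, valid_range, max_dev):
    """readings: dict name -> value for redundant sensors of one measurement."""
    median = np.median(list(readings.values()))
    status, output = {}, {}
    for name, value in readings.items():
        lo, hi = valid_range
        if not (lo <= value <= hi) or abs(value - median) > max_dev:
            status[name] = "failed"
            output[name] = float(median)      # reconstructed signal
        else:
            status[name] = "healthy"
            output[name] = value
    # A feature not attributable to a single failed sensor: every channel is
    # implausible at once, suggesting an off-nominal system transient instead.
    transient = all(s == "failed" for s in status.values())
    return status, output, transient

status, output, transient = validate_sensors(
    {"p1": 101.2, "p2": 100.9, "p3": 250.0}, valid_range=(50, 150), max_dev=10)
print(status, output, transient)
```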

  10. Centrifuge Modeling of Rainfall Induced Slope Failure

    NASA Astrophysics Data System (ADS)

    Ling, H.; Wu, M.

    2006-12-01

    Rainfall induces slope failures and debris flows, which are considered among the major natural disasters. The scale of such failures is very large, and they cannot be studied easily in the laboratory. Traditionally, small-scale model tests are used to study such problems. Because the behavior of soil is affected by the stress level, the centrifuge modeling technique has been used to simulate full-scale earth structures more realistically. In this study, two series of tests were conducted on slopes in the centrifugal field with and without the presence of rainfall. The soil used was a mixture of sand and 15 percent fines. Slopes with an angle of 60 degrees were prepared at optimum water content in order to achieve the maximum density. In the first series of tests, three different slope heights of 10 cm, 15 cm and 20 cm were used. The gravity was increased gradually until slope failure in order to obtain the prototype failure height. The slope model was cut after the test in order to obtain the configuration of the failure surface. It was found that the slope geometry normalized by the height at failure provided unique results. Knowing the slope height or gravity at failure, the second series of tests with rainfall was conducted slightly below the critical height. That is, after attaining the desired gravity, the rainfall was induced in the centrifuge. Special nozzles were used and calibrated against different levels of gravity in order to obtain the desired rainfall intensity. Five different rainfall intensities were used on the 15-cm slopes at 80g and 60g, which corresponded to 12 m and 9 m slope heights, respectively. The duration until failure for different rainfall intensities was obtained. As in the first series of tests, the slope model was cut and investigated after the test. The results showed that the failure surface was not significantly affected by the rainfall; that is, the excess pore pressure induced by rainfall generated the slope failure. Prediction curves of rainfall intensity versus duration were obtained from the test results. Such curves are extremely useful for disaster management. This study indicated the feasibility of using the centrifuge modeling technique to simulate rainfall-induced slope failure. The results obtained may also be used for validating numerical tools.

  11. Neurodevelopmental and Cognitive Outcomes in Children With Intestinal Failure.

    PubMed

    Chesley, Patrick M; Sanchez, Sabrina E; Melzer, Lilah; Oron, Assaf P; Horslen, Simon P; Bennett, F Curt; Javid, Patrick J

    2016-07-01

    Recent advances in medical and surgical management have led to improved long-term survival in children with intestinal failure. Yet, limited data exist on their neurodevelopmental and cognitive outcomes. The aim of the present study was to measure neurodevelopmental outcomes in children with intestinal failure. Children enrolled in a regional intestinal failure program underwent prospective neurodevelopmental and psychometric evaluation using a validated scoring tool. Cognitive impairment was defined as a mental developmental index <70. Neurodevelopmental impairment was defined as cerebral palsy, visual or hearing impairment, or cognitive impairment. Univariate analyses were performed using the Wilcoxon rank-sum test. Data are presented as median (range). Fifteen children with a remnant bowel length of 18 (5-85) cm were studied at age 17 (12-67) months. Thirteen patients remained dependent on parenteral nutrition. Twelve (80%) subjects scored within the normal range on cognitive testing. Each child with cognitive impairment was noted to have additional risk factors independent of intestinal failure including cardiac arrest and extreme prematurity. On univariate analysis, cognitive impairment was associated with longer inpatient hospital stays, increased number of surgical procedures, and prematurity (P < 0.02). In total, 4 (27%) children demonstrated findings consistent with neurodevelopmental impairment. A majority of children with intestinal failure demonstrated normal neurodevelopmental and cognitive outcomes on psychometric testing. These data suggest that children with intestinal failure without significant comorbidity may be at low risk for long-term neurodevelopmental impairment.

  12. Membrane Accelerated Stress Test Development for Polymer Electrolyte Fuel Cell Durability Validated Using Field and Drive Cycle Testing

    DOE PAGES

    Mukundan, Rangachary; Baker, Andrew M.; Kusoglu, Ahmet; ...

    2018-03-01

    A combined chemical/mechanical accelerated stress test (AST) was developed for proton exchange membrane (PEM) fuel cells based on relative humidity cycling (RHC) between dry and saturated gases at open circuit voltage (OCV). Membrane degradation and failure were investigated using scanning electron microscopy and small- and wide-angle X-ray scattering. Changes to membrane thickness, hydrophilic domain spacing, and crystallinity were observed to be most similar between field-operated cells and OCV RHC ASTs, where local thinning and divot-type defects are the primary failure modes. While RHC in air also reproduces these failure modes, it is not aggressive enough to differentiate between different membrane types in >1,333 hours (55 days) of testing. Conversely, steady-state OCV tests result in significant ionomer morphology changes and global thinning, which do not replicate field degradation and failure modes. It is inferred that during the OCV RHC AST, the decay of the membrane's mechanical properties is accelerated such that materials can be evaluated in hundreds, instead of thousands, of hours, while replicating the degradation and failure modes of field operation; associated AST protocols are recommended as OCV RHC at 90°C for 500 hours with wet/dry cycle durations of 30s/45s and 2m/2m for automotive and bus operation, respectively.

  13. Membrane Accelerated Stress Test Development for Polymer Electrolyte Fuel Cell Durability Validated Using Field and Drive Cycle Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukundan, Rangachary; Baker, Andrew M.; Kusoglu, Ahmet

    A combined chemical/mechanical accelerated stress test (AST) was developed for proton exchange membrane (PEM) fuel cells based on relative humidity cycling (RHC) between dry and saturated gases at open circuit voltage (OCV). Membrane degradation and failure were investigated using scanning electron microscopy and small- and wide-angle X-ray scattering. Changes to membrane thickness, hydrophilic domain spacing, and crystallinity were observed to be most similar between field-operated cells and OCV RHC ASTs, where local thinning and divot-type defects are the primary failure modes. While RHC in air also reproduces these failure modes, it is not aggressive enough to differentiate between different membrane types in >1,333 hours (55 days) of testing. Conversely, steady-state OCV tests result in significant ionomer morphology changes and global thinning, which do not replicate field degradation and failure modes. It is inferred that during the OCV RHC AST, the decay of the membrane's mechanical properties is accelerated such that materials can be evaluated in hundreds, instead of thousands, of hours, while replicating the degradation and failure modes of field operation; associated AST protocols are recommended as OCV RHC at 90°C for 500 hours with wet/dry cycle durations of 30s/45s and 2m/2m for automotive and bus operation, respectively.

  14. Lightning Pin Injection Testing on MOSFETS

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Nguyen, Truong X.; Szatkowski, George N.; Koppen, Sandra V.; Mielnik, John J.; Vaughan, Roger K.; Wysocki, Philip F.; Celaya, Jose R.; Saha, Sankalita

    2009-01-01

    Lightning transients were pin-injected into metal-oxide-semiconductor field-effect transistors (MOSFETs) to induce fault modes. This report documents the test process and results, and provides a basis for subsequent lightning tests. MOSFETs may be present in DC-DC power supplies and electromechanical actuator circuits that may be used on board aircraft. Results show that unprotected MOSFET Gates are susceptible to failure, even when installed in systems in well-shielded and partially shielded locations. MOSFET Drains and Sources are significantly less susceptible. Device impedance decreased (current increased) after every failure. Such a failure mode may lead to cascading failures, as the damaged MOSFET may allow excessive current to flow through other circuitry. Preliminary assessments on a MOSFET subjected to 20-stroke pin-injection testing demonstrate that Breakdown Voltage, Leakage Current and Threshold Voltage characteristics show damage, while the device continues to meet manufacturer performance specifications. The purpose of this research is to develop validated tools, technologies, and techniques for automated detection, diagnosis and prognosis that enable mitigation of adverse events during flight, such as from lightning transients; and to understand the interplay between lightning-induced surges and aging (i.e., humidity, vibration, thermal stress, etc.) on component degradation.

  15. Fatigue of restorative materials.

    PubMed

    Baran, G; Boberick, K; McCool, J

    2001-01-01

    Failure due to fatigue manifests itself in dental prostheses and restorations as wear, fractured margins, delaminated coatings, and bulk fracture. Mechanisms responsible for fatigue-induced failure depend on material ductility: Brittle materials are susceptible to catastrophic failure, while ductile materials utilize their plasticity to reduce stress concentrations at the crack tip. Because of the expense associated with the replacement of failed restorations, there is a strong desire on the part of basic scientists and clinicians to evaluate the resistance of materials to fatigue in laboratory tests. Test variables include fatigue-loading mode and test environment, such as soaking in water. The outcome variable is typically fracture strength, and these data typically fit the Weibull distribution. Analysis of fatigue data permits predictive inferences to be made concerning the survival of structures fabricated from restorative materials under specified loading conditions. Although many dental-restorative materials are routinely evaluated, only limited use has been made of fatigue data collected in vitro: Wear of materials and the survival of porcelain restorations have been modeled by both fracture mechanics and probabilistic approaches. A need still exists for a clinical failure database and for the development of valid test methods for the evaluation of composite materials.
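
    As a minimal sketch of the Weibull treatment of fracture-strength data mentioned above, the snippet below fits a two-parameter Weibull distribution to a set of strengths and computes a survival probability; the strength values are invented for illustration, not data from this record.

```python
import numpy as np
from scipy import stats

# Invented fracture-strength data (MPa) for illustration only.
strengths = np.array([78, 85, 92, 96, 101, 105, 110, 118, 123, 131], float)

# Fit a two-parameter Weibull distribution (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus (shape) m = {shape:.2f}, characteristic strength = {scale:.1f} MPa")

# Probability of surviving a given applied stress under the fitted model.
applied = 90.0
survival = np.exp(-(applied / scale) ** shape)
print(f"Predicted survival probability at {applied} MPa: {survival:.2f}")
```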

  16. 14 CFR 33.70 - Engine life-limited parts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...

  17. 14 CFR 33.70 - Engine life-limited parts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...

  18. Flight Test Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The validation of adaptive controls has the potential to enhance safety in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  19. An accelerating precursor to predict "time-to-failure" in creep and volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Hao, Shengwang; Yang, Hang; Elsworth, Derek

    2017-09-01

    Real-time prediction by monitoring of the evolution of response variables is a central goal in predicting rock failure. A linear relation Ω̇/Ω̈ = C(t_f − t) has been developed to describe the time to failure, where Ω represents a response quantity, Ω̇ and Ω̈ are its first and second time derivatives, C is a constant, and t_f represents the failure time. Observations from laboratory creep failure experiments and precursors to volcanic eruptions are used to test the validity of the approach. Both cumulative and simple moving window techniques are developed to perform predictions and to illustrate the effects of data selection on the results. Laboratory creep failure experiments on granites show that the linear relation works well during the final approach to failure. For blind prediction, the simple moving window technique is preferred because it always uses the most recent data and excludes effects of early data deviating significantly from the predicted trend. When the predicted results show only small fluctuations, failure is imminent.
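
    A minimal sketch of the moving-window idea: under the relation Ω̇/Ω̈ = C(t_f − t), the ratio Ω̇/Ω̈ plotted against time is a straight line whose zero crossing is the failure time, so t_f can be estimated by a linear fit over the most recent data window. The signal below is synthetic and the window length is arbitrary; this is an illustration of the principle, not the authors' implementation.

```python
import numpy as np

# Synthetic accelerating signal for illustration: Omega-dot ~ (tf - t)^(-p),
# for which Omega-dot / Omega-double-dot = (tf - t)/p, a line reaching zero at tf.
tf_true, p = 100.0, 1.5
t = np.linspace(0.0, 95.0, 400)
rate = (tf_true - t) ** (-p)                  # response rate (Omega-dot)
accel = np.gradient(rate, t)                  # numerical Omega-double-dot
y = rate / accel                              # should follow ~ C * (tf - t)

# Simple moving-window prediction: fit a line to the latest N points and
# take its zero crossing as the predicted failure time.
N = 50
slope, intercept = np.polyfit(t[-N:], y[-N:], 1)
tf_predicted = -intercept / slope
print(f"true tf = {tf_true}, predicted tf = {tf_predicted:.1f}")
```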

  20. Impact of applying the more stringent validation criteria of the revised European Society of Hypertension International Protocol 2010 on earlier validation studies.

    PubMed

    Stergiou, George S; Karpettas, Nikos; Atkins, Neil; O'Brien, Eoin

    2011-04-01

    Since 2002, when the European Society of Hypertension International Protocol (ESH-IP) was published, it has become the preferred protocol for validating blood pressure monitors worldwide. In 2010, a revised version of the ESH-IP with more stringent criteria was published. This study assesses the impact of applying the revised ESH-IP criteria. A systematic literature review of ESH-IP studies reported between 2002 and 2010 was conducted. The impact of applying the ESH-IP 2010 criteria retrospectively on the data reported in these studies was investigated. The performance of the oscillometric devices in the last decade was also investigated on the basis of the ESH-IP criteria. Among 119 published studies, 112 with sufficient data were analyzed. According to ESH-IP 2002, the test device failed in 19 studies, whereas applying the ESH-IP 2010 criteria caused the test device to fail in 28 additional studies, increasing the failure rate from 17% to 42%. Of these 28 studies, in 20 (71%) the test device failed part 1 (accuracy per measurement) and in 22 (79%) part 2 (accuracy per subject). Most of the failures involved the '5 mmHg or less' criterion. In the last decade there has been a consistent trend toward improved performance of oscillometric devices assessed on the basis of the ESH-IP criteria. This retrospective analysis shows that the stricter revised ESH-IP 2010 criteria will noticeably increase the failure rate of devices being validated. Oscillometric devices are becoming more accurate, and the revised ESH-IP, by acknowledging this trend, will allow more accurate devices to enter the market.
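
    A hedged sketch of the per-measurement accuracy check underlying "part 1": count device-reference differences falling within 5, 10 and 15 mmHg and compare the counts against required thresholds. The required counts used below are placeholders, not the official ESH-IP 2002 or 2010 pass criteria, and the measurement data are simulated.

```python
import numpy as np

def part1_counts(device, reference):
    """Count absolute device-reference differences within 5, 10 and 15 mmHg."""
    diff = np.abs(np.asarray(device) - np.asarray(reference))
    return {band: int(np.sum(diff <= band)) for band in (5, 10, 15)}

def passes_part1(counts, required):
    """required: minimum counts per band; placeholder values, not the
    official ESH-IP thresholds."""
    return all(counts[band] >= required[band] for band in required)

rng = np.random.default_rng(0)
reference = rng.normal(130, 15, size=99)          # simulated reference readings
device = reference + rng.normal(0, 6, size=99)    # simulated device error

counts = part1_counts(device, reference)
print(counts, passes_part1(counts, required={5: 65, 10: 85, 15: 95}))
```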

  1. Instantaneous and controllable integer ambiguity resolution: review and an alternative approach

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-11-01

    In high-precision applications of Global Navigation Satellite Systems (GNSS), integer ambiguity resolution is the key step in realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Due to the correlation between ambiguities, it is impossible to control the failure rate with an analytical formula; hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, because of the characteristics of Monte Carlo sampling and look-up tables, a large amount of computation time is required if sufficient GNSS scenarios are included in the creation of the look-up table. This restricts the fixed failure rate approach to being a post-processing approach if a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. In addition, the method of creating the look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes, for the first time, an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas (the stronger the GNSS model, the less time is consumed); and (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments were implemented. The results show that IA estimators based on the iCON approach can realize controllable ambiguity resolution. Moreover, compared with the ratio test IA based on a look-up table, the difference test IA and IA least squares based on the iCON approach have, in most cases, higher success rates and better controllability of failure rates.
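
    The following is a deliberately simplified, one-dimensional sketch of the fixed failure rate principle discussed above: simulate float ambiguities around a known integer, apply an aperture (acceptance) test with a candidate threshold, and tune the threshold until the empirical failure rate (accepted but wrongly fixed) meets the target. The real problem involves correlated multi-dimensional ambiguities and estimators such as the ratio test; the noise level and target rate here are arbitrary.

```python
import numpy as np

# Simplified 1-D illustration of choosing a critical value by Monte Carlo.
# Float ambiguity = true integer + Gaussian noise; the integer estimate is
# obtained by rounding; a difference-test-like aperture accepts the fix only
# if the float value lies within `aperture` of the nearest integer.
rng = np.random.default_rng(1)
sigma = 0.35                    # hypothetical float ambiguity std. dev. (cycles)
target_failure_rate = 0.001     # required failure rate
samples = rng.normal(0.0, sigma, size=200_000)   # true integer taken as 0

def failure_rate(aperture):
    fixed = np.round(samples)
    accepted = np.abs(samples - fixed) <= aperture
    wrong = fixed != 0
    return np.mean(accepted & wrong)

# Binary search for the largest aperture meeting the target failure rate.
lo, hi = 0.0, 0.5
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if failure_rate(mid) <= target_failure_rate else (lo, mid)

print(f"critical aperture ~ {lo:.3f} cycles, "
      f"empirical failure rate = {failure_rate(lo):.5f}")
```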

  2. Temporal-varying failures of nodes in networks

    NASA Astrophysics Data System (ADS)

    Knight, Georgie; Cristadoro, Giampaolo; Altmann, Eduardo G.

    2015-08-01

    We consider networks in which random walkers are removed because of the failure of specific nodes. We interpret the rate of loss as a measure of the importance of nodes, a notion we denote as failure centrality. We show that the degree of the node is not sufficient to determine this measure and that, in a first approximation, the shortest loops through the node have to be taken into account. We propose approximations of the failure centrality which are valid for temporally varying failures, and we dwell on the possibility of externally changing the relative importance of nodes in a given network by exploiting the interference between the loops of a node and the cycles of the temporal pattern of failures. In the limit of long failure cycles we show analytically that the escape rate at a node is larger than that estimated from a stochastic failure with the same failure probability. We test our general formalism in two real-world networks (air-transportation and e-mail users) and show how communities lead to deviations from predictions for failures in hubs.
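
    As a purely illustrative sketch (not the paper's formalism), a node's failure centrality can be estimated by simulating a random walker and measuring the rate at which it is lost when that node periodically fails. The toy graph, failure period and re-injection rule below are made up.

```python
import numpy as np

# Toy undirected graph as an adjacency list (made-up example).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}

def loss_rate(failing_node, period, steps=50_000, seed=0):
    """Fraction of steps on which the walker is absorbed at `failing_node`,
    assuming the node fails on every `period`-th time step."""
    rng = np.random.default_rng(seed)
    node, lost = rng.choice(list(adj)), 0
    for t in range(steps):
        node = rng.choice(adj[node])          # one random-walk step
        if node == failing_node and t % period == 0:
            lost += 1
            node = rng.choice(list(adj))      # re-inject the walker
    return lost / steps

for n in adj:
    print(f"node {n}: estimated failure centrality = {loss_rate(n, period=3):.4f}")
```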

  3. Impact Testing of Aluminum 2024 and Titanium 6Al-4V for Material Model Development

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Revilock, Duane M.; Lerch, Bradley A.; Ruggeri, Charles R.

    2013-01-01

    One of the difficulties with developing and verifying accurate impact models is that parameters such as high strain rate material properties, failure modes, static properties, and impact test measurements are often obtained from a variety of different sources using different materials, with little control over consistency among the different sources. In addition there is often a lack of quantitative measurements in impact tests to which the models can be compared. To alleviate some of these problems, a project is underway to develop a consistent set of material property, impact test data and failure analysis for a variety of aircraft materials that can be used to develop improved impact failure and deformation models. This project is jointly funded by the NASA Glenn Research Center and the FAA William J. Hughes Technical Center. Unique features of this set of data are that all material property data and impact test data are obtained using identical material, the test methods and procedures are extensively documented and all of the raw data is available. Four parallel efforts are currently underway: Measurement of material deformation and failure response over a wide range of strain rates and temperatures and failure analysis of material property specimens and impact test articles conducted by The Ohio State University; development of improved numerical modeling techniques for deformation and failure conducted by The George Washington University; impact testing of flat panels and substructures conducted by NASA Glenn Research Center. This report describes impact testing which has been done on aluminum (Al) 2024 and titanium (Ti) 6Al-4vanadium (V) sheet and plate samples of different thicknesses and with different types of projectiles, one a regular cylinder and one with a more complex geometry incorporating features representative of a jet engine fan blade. Data from this testing will be used in validating material models developed under this program. The material tests and the material models developed in this program will be published in separate reports.

  4. FMEA of manual and automated methods for commissioning a radiotherapy treatment planning system.

    PubMed

    Wexler, Amy; Gu, Bruce; Goddu, Sreekrishna; Mutic, Maya; Yaddanapudi, Sridhar; Olsen, Lindsey; Harry, Taylor; Noel, Camille; Pawlicki, Todd; Mutic, Sasa; Cai, Bin

    2017-09-01

    To evaluate the level of risk involved in treatment planning system (TPS) commissioning using a manual test procedure, and to compare the associated process-based risk to that of an automated commissioning process (ACP) by performing an in-depth failure modes and effects analysis (FMEA). The authors collaborated to determine the potential failure modes of the TPS commissioning process using (a) approaches involving manual data measurement, modeling, and validation tests and (b) an automated process utilizing application programming interface (API) scripting, preloaded and premodeled standard radiation beam data, a digital heterogeneous phantom, and an automated commissioning test suite (ACTS). The severity (S), occurrence (O), and detectability (D) were scored for each failure mode, and the risk priority numbers (RPN) were derived based on the TG-100 scale. Failure modes were then analyzed and ranked based on RPN. The total number of failure modes, RPN scores and the top 10 failure modes with the highest risk were described and cross-compared between the two approaches. An RPN reduction analysis is also presented and used as another quantifiable metric to evaluate the proposed approach. The FMEA of the manual test procedure (MTP) resulted in 47 failure modes with an average RPN (RPN_ave) of 161 and average severity (S_ave) of 6.7. The highest-risk process, "Measurement Equipment Selection", had a maximum RPN (RPN_max) of 640. The FMEA of the ACP resulted in 36 failure modes with an RPN_ave of 73 and S_ave of 6.7. The highest-risk process, "EPID Calibration", had an RPN_max of 576. An FMEA of treatment planning commissioning tests using automation and standardization via API scripting, preloaded and pre-modeled standard beam data, and digital phantoms suggests that errors and risks may be reduced through the use of an ACP. © 2017 American Association of Physicists in Medicine.
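
    A minimal sketch of the RPN bookkeeping described above: score severity, occurrence and detectability for each failure mode, compute RPN = S x O x D, and rank. The failure modes and scores below are placeholders for illustration, not the study's data.

```python
# Placeholder failure modes and S/O/D scores for illustration only.
failure_modes = [
    ("Measurement equipment selection", 8, 8, 10),
    ("Beam data transfer to TPS",        7, 5,  6),
    ("Heterogeneity correction check",   6, 4,  5),
    ("EPID calibration",                 8, 9,  8),
]

scored = [(name, s * o * d) for name, s, o, d in failure_modes]   # RPN = S*O*D
for name, rpn in sorted(scored, key=lambda x: x[1], reverse=True):
    print(f"{name:35s} RPN = {rpn}")

rpn_values = [rpn for _, rpn in scored]
print(f"average RPN = {sum(rpn_values) / len(rpn_values):.0f}, "
      f"maximum RPN = {max(rpn_values)}")
```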

  5. Target Soil Impact Verification: Experimental Testing and Kayenta Constitutive Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broome, Scott Thomas; Flint, Gregory Mark; Dewers, Thomas

    2015-11-01

    This report details experimental testing and constitutive modeling of sandy soil deformation under quasi-static conditions. This is driven by the need to understand the constitutive response of soil to target/component behavior upon impact. An experimental and constitutive modeling program was followed to determine elastic-plastic properties and a compressional failure envelope of dry soil. One hydrostatic, one unconfined compressive stress (UCS), nine axisymmetric compression (ACS), and one uniaxial strain (US) test were conducted at room temperature. Elastic moduli, assuming isotropy, are determined from unload/reload loops and final unloading for all tests pre-failure and increase monotonically with mean stress. Very little modulus degradation was discernable from elastic results even when exposed to mean stresses above 200 MPa. The failure envelope and initial yield surface were determined from peak stresses and observed onset of plastic yielding from all test results. Soil elasto-plastic behavior is described using the Brannon et al. (2009) Kayenta constitutive model. As a validation exercise, the ACS-parameterized Kayenta model is used to predict response of the soil material under uniaxial strain loading. The resulting parameterized and validated Kayenta model is of high quality and suitable for modeling sandy soil deformation under a range of conditions, including that for impact prediction.

  6. Comparative Effects of Urocortins and Stresscopin on Cardiac Myocyte Contractility

    PubMed Central

    Makarewich, Catherine A.; Troupes, Constantine D.; Schumacher, Sarah M.; Gross, Polina; Koch, Walter J.; Crandall, David L.; Houser, Steven R.

    2015-01-01

    Rationale: There is a current need for development of new therapies for patients with heart failure. Objective: To test the effects of members of the Corticotropin-Releasing Factor (CRF) family of peptides on myocyte contractility to validate them as potential heart failure therapeutics. Methods and Results: Adult feline left ventricular myocytes (AFMs) were isolated and contractility was assessed in the presence and absence of CRF peptides Urocortin 2 (UCN2), Urocortin 3 (UCN3), Stresscopin (SCP), and the β-adrenergic agonist isoproterenol (Iso). An increase in fractional shortening and peak Ca2+ transient amplitude was seen in the presence of all CRF peptides. A decrease in Ca2+ decay rate (Tau) was also observed at all concentrations tested. cAMP generation was measured by ELISA in isolated AFMs in response to the CRF peptides and Iso and significant production was seen at all concentrations and time points tested. Conclusions: The CRF family of peptides effectively increases cardiac contractility and should be evaluated as potential novel therapeutics for heart failure patients. PMID:26231084

  7. Quantifying the added value of BNP in suspected heart failure in general practice: an individual patient data meta-analysis.

    PubMed

    Kelder, Johannes C; Cowie, Martin R; McDonagh, Theresa A; Hardman, Suzanna M C; Grobbee, Diederick E; Cost, Bernard; Hoes, Arno W

    2011-06-01

    Diagnosing early stages of heart failure with mild symptoms is difficult. B-type natriuretic peptide (BNP) has promising biochemical test characteristics, but its diagnostic yield on top of readily available diagnostic knowledge has not been sufficiently quantified in early stages of heart failure. To quantify the added diagnostic value of BNP for the diagnosis of heart failure in a population relevant to GPs and validate the findings in an independent primary care patient population. Individual patient data meta-analysis followed by external validation. The additional diagnostic yield of BNP above standard clinical information was compared with ECG and chest x-ray results. Derivation was performed on two existing datasets from Hillingdon (n=127) and Rotterdam (n=149) while the UK Natriuretic Peptide Study (n=306) served as validation dataset. Included were patients with suspected heart failure referred to a rapid-access diagnostic outpatient clinic. Case definition was according to the ESC guideline. Logistic regression was used to assess discrimination (with the c-statistic) and calibration. Of the 276 patients in the derivation set, 30.8% had heart failure. The clinical model (encompassing age, gender, known coronary artery disease, diabetes, orthopnoea, elevated jugular venous pressure, crackles, pitting oedema and S3 gallop) had a c-statistic of 0.79. Adding, respectively, chest x-ray results, ECG results or BNP to the clinical model increased the c-statistic to 0.84, 0.85 and 0.92. Neither ECG nor chest x-ray added significantly to the 'clinical plus BNP' model. All models had adequate calibration. The 'clinical plus BNP' diagnostic model performed well in an independent cohort with comparable inclusion criteria (c-statistic=0.91 and adequate calibration). Using separate cut-off values for 'ruling in' (typically implying referral for echocardiography) and for 'ruling out' heart failure--creating a grey zone--resulted in insufficient proportions of patients with a correct diagnosis. BNP has considerable diagnostic value in addition to signs and symptoms in patients suspected of heart failure in primary care. However, using BNP alone with the currently recommended cut-off levels is not sufficient to make a reliable diagnosis of heart failure.
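
    A hedged sketch of the added-value comparison described above: fit a logistic regression on clinical variables, refit with BNP added, and compare the c-statistics (ROC AUC) of the two models. The data below are simulated and the predictors are placeholders for the clinical model, so the numbers have no relation to the study's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated data standing in for the clinical predictors and log(BNP).
rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 4))                 # e.g. age, oedema, JVP, S3
log_bnp = rng.normal(size=n)
logit = 0.5 * clinical[:, 0] + 0.7 * clinical[:, 1] + 1.6 * log_bnp - 1.0
heart_failure = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def c_statistic(X):
    """Apparent (in-sample) c-statistic of a logistic model fitted on X."""
    model = LogisticRegression(max_iter=1000).fit(X, heart_failure)
    return roc_auc_score(heart_failure, model.predict_proba(X)[:, 1])

auc_clinical = c_statistic(clinical)
auc_with_bnp = c_statistic(np.column_stack([clinical, log_bnp]))
print(f"c-statistic clinical model: {auc_clinical:.2f}, "
      f"clinical + BNP: {auc_with_bnp:.2f}")
```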

  8. Validation of Commercial Fiber Optic Components for Aerospace Environments

    NASA Technical Reports Server (NTRS)

    Ott, Melanie N.

    2005-01-01

    Full qualification of commercial photonic parts, as defined by the Military specification system in the past, is not feasible. Due to changes in the photonic components industry and the Military specification system that NASA had relied upon so heavily in the past, an approach to technology validation of commercial-off-the-shelf parts had to be devised. This approach involves knowledge of system requirements, environmental requirements and failure modes of the particular components under consideration. Synthesizing the criteria together with the major known failure modes to formulate a test plan is an effective way of establishing knowledge-based "qualification". Although this does not provide the type of reliability assurance that the Military specification system did in the past, it is an approach that allows for increased risk mitigation. The information presented will introduce the audience to the technology validation approach that is currently applied at NASA for the usage of commercial-off-the-shelf (COTS) fiber optic components for space flight environments. The focus will be on how to establish technology validation criteria for commercial fiber products such that continued reliable performance is assured under the harsh environmental conditions of typical missions. The goal of this presentation is to provide the audience with an approach to formulating a COTS qualification test plan for these devices. Examples from past NASA missions will be discussed.

  9. Evolution of an interfacial crack on the concrete-embankment boundary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Antoun, Tarabay; Kanarska, Yuliya

    2013-07-10

    Failure of a dam can have subtle beginnings. A small crack or dislocation at the interface of the concrete dam and the surrounding embankment soil initiated by, for example, a seismic or an explosive event can lead to a catastrophic failure of the dam. The dam may 'self-rehabilitate' if a properly designed granular filter is engineered around the embankment. Currently, the design criteria for such filters have only been based on experimental studies. We demonstrate the numerical prediction of filter effectiveness at the soil grain scale. This joint LLNL-ERDC basic research project, funded by the Department of Homeland Security's Science and Technology Directorate (DHS S&T), consists of validating advanced high performance computer simulations of soil erosion and transport of grain- and dam-scale models to detailed centrifuge and soil erosion tests. Validated computer predictions highlight that a resilient filter is consistent with the current design specifications for dam filters. These predictive simulations, unlike the design specifications, can be used to assess filter success or failure under different soil or loading conditions and can lead to meaningful estimates of the timing and nature of full-scale dam failure.

  10. Failure mode prediction for composite structural insulated panels with MgO board facings

    NASA Astrophysics Data System (ADS)

    Smakosz, Łukasz; Kreja, Ireneusz

    2018-01-01

    Sandwich panels are readily used in civil engineering due to their high strength-to-weight ratio and the ease and speed of assembly. The idea of a sandwich section is to combine thin and durable facings with a lightweight core, and the choice of materials allows the desired behaviour to be obtained. The panels under consideration consist of MgO (magnesium oxide) board facings and an expanded polystyrene core and are characterized by immunity to biological corrosion, high thermal insulation and a relatively low environmental impact. Customizing the range of panels to meet market needs requires frequent size changes, leading to different failure modes, which are identified in a series of costly full-scale laboratory tests. A nonlinear numerical model was created with the use of the commercial ABAQUS code and a user-defined procedure, which is able to reproduce the observed failure mechanisms; its parameters were established on the basis of small-scale tests and numerical experiments. The model was validated by a comparison with the results of the full-scale bending and compression tests. The results obtained were in satisfactory agreement with the test data.

  11. The Control Attitudes Scale-Revised: psychometric evaluation in three groups of patients with cardiac illness.

    PubMed

    Moser, Debra K; Riegel, Barbara; McKinley, Sharon; Doering, Lynn V; Meischke, Hendrika; Heo, Seongkum; Lennie, Terry A; Dracup, Kathleen

    2009-01-01

    Perceived control is a construct with important theoretical and clinical implications for healthcare providers, yet practical application of the construct in research and clinical practice awaits development of an easily administered instrument to measure perceived control with evidence of reliability and validity. To test the psychometric properties of the Control Attitudes Scale-Revised (CAS-R) using a sample of 3,396 individuals with coronary heart disease, 513 patients with acute myocardial infarction, and 146 patients with heart failure. Analyses were done separately in each patient group. Reliability was assessed using Cronbach's alpha to determine internal consistency, and item homogeneity was assessed using item-total and interitem correlations. Validity was examined using principal component analysis and testing hypotheses about known associations. Cronbach's alpha values for the CAS-R in patients with coronary heart disease, acute myocardial infarction, and heart failure were all greater than .70. Item-total and interitem correlation coefficients for all items were acceptable in the groups. In factor analyses, the same single factor was extracted in all groups, and all items were loaded moderately or strongly to the factor in each group. As hypothesized in the final construct validity test, in all groups, patients with higher levels of perceived control had less depression and less anxiety compared with those of patients who had lower levels of perceived control. This study provides evidence of the reliability and validity of the 8-item CAS-R as a measure of perceived control in patients with cardiac illness and provides important insight into a key patient construct.
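
    A small sketch of the internal-consistency computation reported above: Cronbach's alpha from an item-score matrix, plus corrected item-total correlations. The response data are simulated for illustration, not taken from the CAS-R study.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, n_items) array of item scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                      # simulated construct
scores = latent + rng.normal(scale=0.8, size=(200, 8))  # 8 correlated items

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
# Corrected item-total correlation for each item (item vs. sum of the rest).
for j in range(scores.shape[1]):
    rest = scores.sum(axis=1) - scores[:, j]
    r = np.corrcoef(scores[:, j], rest)[0, 1]
    print(f"item {j + 1}: item-total r = {r:.2f}")
```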

  12. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
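
    An illustrative sketch (not the paper's Bayesian network) of the multistep evidence-selection idea: maintain a posterior over failure modes and, at each step, pick the diagnostic test whose result is expected to reduce the posterior entropy the most. The failure modes, tests and probabilities below are invented placeholders.

```python
import numpy as np

# Invented failure modes, prior probabilities, and P(test positive | mode).
modes = ["winding fault", "insulation aging", "core overheating"]
prior = np.array([0.3, 0.5, 0.2])
p_pos = {                       # each test's positive probability per mode
    "dissolved gas analysis": np.array([0.9, 0.6, 0.3]),
    "partial discharge test": np.array([0.8, 0.2, 0.1]),
    "oil temperature check":  np.array([0.2, 0.3, 0.9]),
}

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def posterior(prior, likelihood_pos, positive):
    like = likelihood_pos if positive else 1.0 - likelihood_pos
    post = prior * like
    return post / post.sum()

def best_next_test(prior, available):
    """Pick the test with the smallest expected posterior entropy."""
    def expected_entropy(test):
        p_yes = float((prior * p_pos[test]).sum())
        h_yes = entropy(posterior(prior, p_pos[test], True))
        h_no = entropy(posterior(prior, p_pos[test], False))
        return p_yes * h_yes + (1 - p_yes) * h_no
    return min(available, key=expected_entropy)

belief, remaining = prior, set(p_pos)
for observed in (True, False):                 # hypothetical test outcomes
    test = best_next_test(belief, remaining)
    belief = posterior(belief, p_pos[test], observed)
    remaining.discard(test)
    print(f"ran {test!r} -> belief {np.round(belief, 2)}")
print("most probable failure mode:", modes[int(np.argmax(belief))])
```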

  13. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    PubMed Central

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  14. Quasi-Static 3-Point Reinforced Carbon-Carbon Bend Test and Analysis for Shuttle Orbiter Wing Leading Edge Impact Damage Thresholds

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Sotiris, Kellas

    2006-01-01

    Static 3-point bend tests of Reinforced Carbon-Carbon (RCC) were conducted to failure to provide data for additional validation of an LS-DYNA RCC model suitable for predicting the threshold of impact damage to shuttle orbiter wing leading edges. LS-DYNA predictions correlated well with the average RCC failure load, and were good in matching the load vs. deflection. However, correlating the detectable damage using NDE methods with the cumulative damage parameter in LS-DYNA material model 58 was not readily achievable. The difficulty of finding internal RCC damage with NDE and the high sensitivity of the mat58 damage parameter to the load near failure made the task very challenging. In addition, damage mechanisms for RCC due to dynamic impact of debris such as foam and ice and damage mechanisms due to a static loading were, as expected, not equivalent.

  15. Early reading performance: a comparison of teacher-based and test-based assessments.

    PubMed

    Kenny, D T; Chekaluk, E

    1993-04-01

    An unresolved question in early screening is whether test-based or teacher-based assessments should form the basis of the classification of children at risk of educational failure. Available structured teacher rating scales are lacking in predictive validity, and teacher predictions of students likely to experience reading difficulties have yielded disappointing true positive rates, with teachers failing to identify the majority of severely disabled readers. For this study, three educational screening instruments were developed: (a) a single teacher rating, categorizing children into three levels of reading ability (advanced, average, poor); (b) a 15-item teacher questionnaire designed to measure students' cognitive and language ability, attentional and behavioral characteristics, and academic performance; and (c) a battery of language and reading tests that are predictive of, or correlate with, reading failure. The concurrent validity of each instrument was assessed in a sample of 312 Australian schoolchildren from kindergarten, Year 1, and Year 2. Students were assessed at the end of the 1989 school year after having completed 1, 2, or 3 years of schooling. The results suggest that the nature of the skills required for success in reading changes in the first 3 years of schooling. Both teachers and tests concur more closely as children progress through the elementary years and as the risk behavior (reading) becomes more accessible to direct measurement. Carefully focused teacher rating scales may be a cost-effective means of identifying children at risk of reading failure. Improved teacher rating scales should be developed and used to assist in the early screening process.

  16. A Thermal Runaway Failure Model for Low-Voltage BME Ceramic Capacitors with Defects

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2017-01-01

    The reliability of base metal electrode (BME) multilayer ceramic capacitors (MLCCs), which until recently were used mostly in commercial applications, has been improved substantially by using new materials and processes. Currently, the time to inception of intrinsic wear-out failures in high-quality capacitors is much greater than the mission duration in most high-reliability applications. However, in capacitors with defects, degradation processes might accelerate substantially and cause infant mortality failures. In this work, a physical model that relates the presence of defects to the reduction of breakdown voltages and decreasing times to failure has been suggested. The effect of the defect size has been analyzed using a thermal runaway model of failures. The adequacy of highly accelerated life testing (HALT) for predicting reliability at normal operating conditions and the limitations of voltage acceleration are considered. The applicability of the model to BME capacitors with cracks is discussed and validated experimentally.
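
    The record above uses its own thermal runaway model; purely as a generic illustration of how HALT results are extrapolated to use conditions, the sketch below applies the empirical voltage power-law and Arrhenius temperature acceleration commonly used for MLCC life testing. The exponent, activation energy and test/use conditions are placeholder values, not parameters from this work.

```python
import math

# Generic MLCC acceleration factor (voltage power law x Arrhenius term),
# shown only to illustrate extrapolation from HALT to use conditions.
def acceleration_factor(v_test, v_use, t_test_c, t_use_c, n=3.0, ea_ev=1.0):
    k_ev = 8.617e-5                       # Boltzmann constant, eV/K
    t_test, t_use = t_test_c + 273.15, t_use_c + 273.15
    voltage_term = (v_test / v_use) ** n
    thermal_term = math.exp((ea_ev / k_ev) * (1.0 / t_use - 1.0 / t_test))
    return voltage_term * thermal_term

# Placeholder HALT and use conditions.
af = acceleration_factor(v_test=50.0, v_use=6.3, t_test_c=125.0, t_use_c=55.0)
print(f"acceleration factor ~ {af:.0f}")
print(f"100 h at HALT conditions ~ {100 * af / 8760:.0f} years at use conditions")
```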

  17. Prediction of Composite Laminate Strength Properties Using a Refined Zigzag Plate Element

    NASA Technical Reports Server (NTRS)

    Barut, Atila; Madenci, Erdogan; Tessler, Alexander

    2013-01-01

    This study presents an approach that uses the refined zigzag element, RZE(2,2), in conjunction with progressive failure criteria to predict the ultimate strength of composite laminates based on only ply-level strength properties. The methodology involves four major steps: (1) Determination of accurate stress and strain fields under complex loading conditions using RZE(2,2)-based finite element analysis, (2) Determination of failure locations and failure modes using the commonly accepted Hashin's failure criteria, (3) Recursive degradation of the material stiffness, and (4) Non-linear incremental finite element analysis to obtain stress redistribution until global failure. The validity of this approach is established by considering the published test data and predictions for (1) strength of laminates under various off-axis loading, (2) strength of laminates with a hole under compression, and (3) strength of laminates with a hole under tension.

  18. The Effects of Fiber Orientation and Adhesives on Tensile Properties of Carbon Fiber Reinforced Polymer Matrix Composite with Embedded Nickel-Titanium Shape Memory Alloys

    NASA Technical Reports Server (NTRS)

    Quade, Derek J.; Jana, Sadhan C.; Morscher, Gregory N.; Kannan, Manigandan; McCorkle, Linda S.

    2017-01-01

    Nickel-titanium (NiTi) shape memory alloy (SMA) sections were embedded within carbon fiber reinforced polymer matrix composite (CFRPPMC) laminates and their tensile properties were evaluated with simultaneous monitoring of modal acoustic emissions. The test specimens were fabricated in three different layup configurations and two different thin film adhesives were applied to bond the SMA with the PMC. A trio of acoustic sensors were attached to the specimens during tensile testing to monitor the modal acoustic emission (AE) as the materials experienced mechanical failure. The values of ultimate tensile strengths, strains, and moduli were obtained. Cumulative AE energy of events and specimen failure location were determined. In conjunction, optical and scanning electron microscopy techniques were used to examine the break areas of the specimens. The analysis of AE data revealed failure locations within the specimens which were validated from the microscopic images. The placement of 90 deg plies in the outer ply gave the strongest acoustic signals during break as well as the cleanest break of the samples tested. Overlapping 0 deg ply layers surrounding the SMA was found to be the best scenario to prevent failure of the specimen itself.

  19. Progressive Damage and Failure Analysis of Composite Laminates

    NASA Astrophysics Data System (ADS)

    Joseph, Ashith P. K.

    Composite materials are widely used in various industries for making structural parts due to their higher strength-to-weight ratio, better fatigue life, corrosion resistance and material property tailorability. To fully exploit the capability of composites, it is required to know the load-carrying capacity of the parts made of them. Unlike metals, composites are orthotropic in nature and fail in a complex manner under various loading conditions, which makes them a hard problem to analyze. The lack of reliable and efficient failure analysis tools for composites has led industries to rely more on coupon and component level testing to estimate the design space. Due to the complex failure mechanisms, composite materials require a very large number of coupon level tests to fully characterize the behavior. This makes the entire testing process very time consuming and costly. The alternative is to use virtual testing tools which can predict the complex failure mechanisms accurately. This reduces the cost to only the associated computational expenses, yielding significant savings. Some of the most desired features in a virtual testing tool are: (1) Accurate representation of failure mechanisms: the failure progression predicted by the virtual tool must be the same as that observed in experiments; a tool has to be assessed based on the mechanisms it can capture. (2) Computational efficiency: the greatest advantages of virtual tools are the savings in time and money, and hence computational efficiency is one of the most needed features. (3) Applicability to a wide range of problems: structural parts are subjected to a variety of loading conditions including static, dynamic and fatigue conditions, and a good virtual testing tool should be able to make good predictions for all of them. The aim of this PhD thesis is to develop a computational tool which can model the progressive failure of composite laminates under different quasi-static loading conditions. The analysis tool is validated by comparing the simulations against experiments for a selected number of quasi-static loading cases.

  20. The Severe Heart Failure Questionnaire: Italian translation and linguistic validation.

    PubMed

    Scarinzi, C; Berchialla, P; Ghidina, M; Rozbowsky, P; Pilotto, L; Albanese, M C; Fioretti, P M; Gregori, D

    2008-12-01

    Quality of life (QoL) is an important outcome indicator for heart failure management. As the use of a validated questionnaire in a different cultural context can affect data interpretation, our main objective was the Italian translation and linguistic validation of the Severe Heart Failure Questionnaire (SHF) and its comparison with the MLHF (Minnesota Living with Heart Failure) Questionnaire. The SHF and "The Minnesota Living with Heart Failure Questionnaire" were translated. A consensus involving parallel back-translations was established among a group of cardiologists, psychologists and biostatisticians. The SHF and MLHF were both administered to a sample of 50 patients. The patients' median age was 63 years. ACE inhibitor therapy was administered in 88% of cases and beta-blockers in 56% of cases. The Italian version of the SHF correlates well with the MLHF for almost all domains, the exception being the life satisfaction domain of the SHF, and it represents a valid alternative for quality-of-life assessment in heart failure patients.

  1. Analysis of a Hybrid Wing Body Center Section Test Article

    NASA Technical Reports Server (NTRS)

    Wu, Hsi-Yung T.; Shaw, Peter; Przekop, Adam

    2013-01-01

    The hybrid wing body center section test article is an all-composite structure made of crown, floor, keel, bulkhead, and rib panels utilizing the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) design concept. The primary goal of this test article is to prove that PRSEUS components are capable of carrying combined loads that are representative of a hybrid wing body pressure cabin design regime. This paper summarizes the analytical approach, analysis results, and failure predictions of the test article. A global finite element model of composite panels, metallic fittings, mechanical fasteners, and the Combined Loads Test System (COLTS) test fixture was used to conduct linear structural strength and stability analyses to validate the specimen under the most critical combination of bending and pressure loading conditions found in the hybrid wing body pressure cabin. Local detail analyses were also performed at locations with high stress concentrations, at Tee-cap noodle interfaces with surrounding laminates, and at fastener locations with high bearing/bypass loads. Failure predictions for different composite and metallic failure modes were made, and nonlinear analyses were also performed to study the structural response of the test article under combined bending and pressure loading. This large-scale specimen test will be conducted at the COLTS facility at the NASA Langley Research Center.

  2. Scaling effects in the static and dynamic response of graphite-epoxy beam-columns. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.

    1990-01-01

    Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized here in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructure) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength, which was evident in the static tests, was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams. Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.

  3. Prediction of postoperative outcome after hepatectomy with a new bedside test for maximal liver function capacity.

    PubMed

    Stockmann, Martin; Lock, Johan F; Riecke, Björn; Heyne, Karsten; Martus, Peter; Fricke, Michael; Lehmann, Sina; Niehues, Stefan M; Schwabe, Michael; Lemke, Arne-Jörn; Neuhaus, Peter

    2009-07-01

    To validate the LiMAx test, a new bedside test for the determination of maximal liver function capacity based on (13)C-methacetin kinetics, and to investigate the diagnostic performance of different liver function tests and scores, including the LiMAx test, for the prediction of postoperative outcome after hepatectomy. Liver failure is a major cause of mortality after hepatectomy. Preoperative prediction of residual liver function has been limited so far. Sixty-four patients undergoing hepatectomy were analyzed in a prospective observational study. Volumetric analysis of the liver was carried out using preoperative computed tomography and intraoperative measurements. Perioperative factors associated with morbidity and mortality were analyzed. Cutoff values of the LiMAx test were evaluated by receiver operating characteristic (ROC) analysis. Residual LiMAx demonstrated an excellent linear correlation with residual liver volume (r = 0.94, P < 0.001) after hepatectomy. The multivariate analysis revealed LiMAx on postoperative day 1 as the only predictor of liver failure (P = 0.003) and mortality (P = 0.004). The AUROCs for the prediction of liver failure and liver failure-related death by the LiMAx test were both 0.99. Preoperative volume/function analysis combining CT volumetry and LiMAx allowed an accurate calculation of the remnant liver function capacity prior to surgery (r = 0.85, P < 0.001). Residual liver function is the major factor influencing the outcome of patients after hepatectomy and can be predicted preoperatively by a combination of LiMAx and CT volumetry.

  4. Simulating direct shear tests with the Bullet physics library: A validation study.

    PubMed

    Izadi, Ehsan; Bezuijen, Adam

    2018-01-01

    This study focuses on the possible uses of physics engines, and more specifically the Bullet physics library, to simulate granular systems. Physics engines are employed extensively in the video gaming, animation and movie industries to create physically plausible scenes. They are designed to deliver a fast, stable, and optimal simulation of certain systems such as rigid bodies, soft bodies and fluids. This study focuses exclusively on simulating granular media in the context of rigid body dynamics with the Bullet physics library. The first step was to validate the results of the simulations of direct shear testing on uniform-sized metal beads on the basis of laboratory experiments. The difference in the average angle of mobilized friction was found to be only 1.0°. In addition, a very close match was found between dilatancy in the laboratory samples and in the simulations. A comprehensive study was then conducted to determine the failure and post-failure mechanism. We conclude with the presentation of a simulation of a direct shear test on real soil which demonstrated that Bullet has all the capabilities needed to be used as software for simulating granular systems.
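
    For readers who want to experiment with rigid-body simulations of granular media, the pybullet bindings to the Bullet library expose the relevant pieces. The sketch below simply settles a small packing of frictional spheres under gravity; it illustrates the API only and is not the direct shear configuration used in the study.

```python
# Minimal pybullet sketch: settle a packing of frictional spheres under gravity
# (illustrative only; not the direct shear test configuration used in the study).
import pybullet as p

p.connect(p.DIRECT)                      # headless physics server
p.setGravity(0, 0, -9.81)
plane = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)   # static floor

sphere = p.createCollisionShape(p.GEOM_SPHERE, radius=0.005)   # 5 mm beads
beads = []
for i in range(5):
    for j in range(5):
        for k in range(5):
            bead = p.createMultiBody(baseMass=0.001,
                                     baseCollisionShapeIndex=sphere,
                                     basePosition=[0.012 * i, 0.012 * j, 0.02 + 0.012 * k])
            p.changeDynamics(bead, -1, lateralFriction=0.5, rollingFriction=0.01)
            beads.append(bead)

for _ in range(2000):                     # let the packing settle
    p.stepSimulation()

top_z = max(p.getBasePositionAndOrientation(b)[0][2] for b in beads)
print(f"settled packing height ~ {top_z:.3f} m")
p.disconnect()
```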

  5. Content validity of critical success factors for e-Government implementation in Indonesia

    NASA Astrophysics Data System (ADS)

    Napitupulu, D.; Syafrullah, M.; Rahim, R.; Amar, A.; Sucahyo, YG

    2018-05-01

    The purpose of this research is to validate the Critical Success Factors (CSFs) of e-Government implementation in Indonesia. e-Government initiatives are often conducted only to comply with regulation while ignoring quality. Defining CSFs will help government agencies to avoid failure of e-Government projects. A survey questionnaire was used to validate the CSF items based on expert judgment through two rounds of Delphi. The results showed that, of the 67 items in the instrument tested, 11 invalid items were deleted, leaving only 56 items with good content validity and internal reliability. Therefore, all 56 CSFs should be adopted by government agencies in Indonesia to support e-Government implementation.
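
    A common way to screen items for content validity from expert judgments, in the spirit of the procedure above, is Lawshe's content validity ratio (CVR). The sketch below is a generic illustration with a hypothetical expert panel; it is not necessarily the index or threshold used by the authors.

```python
# Generic content-validity screening sketch (Lawshe's CVR); illustrative only,
# not necessarily the index or cutoff used in the study.
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """CVR = (n_e - N/2) / (N/2), where n_e experts rate the item 'essential'."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical panel of 10 experts rating three candidate success factors
ratings = {"top management support": 10, "clear regulation": 9, "office decoration": 3}
for item, n_essential in ratings.items():
    cvr = content_validity_ratio(n_essential, n_experts=10)
    # 0.62 is the commonly tabulated critical value for a 10-expert panel (assumed here)
    print(f"{item}: CVR = {cvr:+.2f} -> {'retain' if cvr >= 0.62 else 'drop'}")
```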

  6. Improving the performance of a filling line based on simulation

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, M.; Bartkowiak, T.

    2016-08-01

    The paper describes the method of improving the performance of a filling line based on simulation. This study concerns a production line that is located in a manufacturing centre of an FMCG company. A discrete event simulation model was built using data provided by a maintenance data acquisition system. Two types of failures were identified in the system and were approximated using continuous statistical distributions. The model was validated taking into consideration line performance measures. A brief Pareto analysis of line failures was conducted to identify potential areas of improvement. Two improvement scenarios were proposed and tested via simulation. The outcomes of the simulations were the basis of a financial analysis. NPV and ROI values were calculated taking into account depreciation, profits, losses, the current CIT rate and inflation. A validated simulation model can be a useful tool in the maintenance decision-making process.
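
    The financial comparison of the improvement scenarios comes down to discounting the projected cash flows. The sketch below shows a generic NPV and ROI calculation with made-up figures; it does not reproduce the company's data, depreciation schedule, or tax treatment.

```python
# Generic NPV / ROI sketch for comparing improvement scenarios (made-up numbers;
# the study additionally accounted for depreciation, CIT and inflation).
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of cash flows, with cash_flows[0] at t=0 (the investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

investment = -50_000.0                       # scenario cost at t=0
yearly_gain = [18_000.0] * 5                 # extra profit from higher line availability
flows = [investment] + yearly_gain

value = npv(0.08, flows)                     # 8% discount rate (assumed)
roi = (sum(yearly_gain) + investment) / -investment
print(f"NPV = {value:,.0f}, simple ROI = {roi:.0%}")
```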

  7. Natriuretic peptide-guided management in heart failure.

    PubMed

    Chioncel, Ovidiu; Collins, Sean P; Greene, Stephen J; Ambrosy, Andrew P; Vaduganathan, Muthiah; Macarie, Cezar; Butler, Javed; Gheorghiade, Mihai

    2016-08-01

    Heart failure is a clinical syndrome that manifests from various cardiac and noncardiac abnormalities. Accordingly, rapid and readily accessible methods for diagnosis and risk stratification are invaluable for providing clinical care, deciding allocation of scarce resources, and designing selection criteria for clinical trials. Natriuretic peptides represent one of the most important diagnostic and prognostic tools available for the care of heart failure patients. Natriuretic peptide testing has the distinct advantage of objectivity, reproducibility, and widespread availability. The concept of tailoring heart failure management to achieve a target value of natriuretic peptides has been tested in various clinical trials and may be considered as an effective method for longitudinal biomonitoring and guiding escalation of heart failure therapies with overall favorable results. Although heart failure trials support efficacy and safety of natriuretic peptide-guided therapy as compared with usual care, the relationship between natriuretic peptide trajectory and clinical benefit has not been uniform across the trials, and certain subgroups have not shown robust benefit. Furthermore, the precise natriuretic peptide value ranges and time intervals of testing are still under investigation. If natriuretic peptides fail to decrease following intensification of therapy, further work is needed to clarify the optimal pharmacologic approach. Despite decreasing natriuretic peptide levels, some patients may present with other high-risk features (e.g. elevated troponin). A multimarker panel investigating multiple pathological processes will likely be an optimal alternative, but this will require prospective validation. Future research will be needed to clarify the type and magnitude of the target natriuretic peptide therapeutic response, as well as the duration of natriuretic peptide-guided therapy in heart failure patients.

  8. Subject specific finite element modeling of periprosthetic femoral fracture using element deactivation to simulate bone failure.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Walter, William L; Appleyard, Richard; Shi, Angela; Li, Qing; Ruys, Andrew J

    2015-06-01

    Subject-specific finite element (FE) modeling methodology could predict peri-prosthetic femoral fracture (PFF) for cementless hip arthroplasty in the early postoperative period. This study develops a methodology for subject-specific finite element modeling by using the element deactivation technique to simulate bone failure and validates it with experimental testing, thereby predicting peri-prosthetic femoral fracture in the early postoperative period. Material assignments for biphasic and triphasic models were undertaken. Failure modeling with the element deactivation feature available in ABAQUS 6.9 was used to simulate crack initiation and propagation in the bony tissue based upon a threshold of fracture strain. The crack mode for the biphasic models was very similar to the experimental testing crack mode, with a similar shape and path of the crack. The fracture load is sensitive to the friction coefficient at the implant-bone interface. The development of a novel technique to simulate bone failure by element deactivation of subject-specific finite element models could aid prediction of fracture load in addition to fracture risk characterization for PFF. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Design and evaluation of a failure detection and isolation algorithm for restructurable control systems

    NASA Technical Reports Server (NTRS)

    Weiss, Jerold L.; Hsu, John Y.

    1986-01-01

    The use of a decentralized approach to failure detection and isolation for use in restructurable control systems is examined. This work has produced: (1) A method for evaluating fundamental limits to FDI performance; (2) Application using flight recorded data; (3) A working control element FDI system with maximal sensitivity to critical control element failures; (4) Extensive testing on realistic simulations; and (5) A detailed design methodology involving parameter optimization (with respect to model uncertainties) and sensitivity analyses. This project has concentrated on detection and isolation of generic control element failures since these failures frequently lead to emergency conditions and since knowledge of remaining control authority is essential for control system redesign. The failures are generic in the sense that no temporal failure signature information was assumed. Thus, various forms of functional failures are treated in a unified fashion. Such a treatment results in a robust FDI system (i.e., one that covers all failure modes) but sacrifices some performance when detailed failure signature information is known, useful, and employed properly. It was assumed throughout that all sensors are validated (i.e., contain only in-spec errors) and that only the first failure of a single control element needs to be detected and isolated. The FDI system which has been developed will handle a class of multiple failures.

  10. Flight Approach to Adaptive Control Research

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate Maureen; Less, James L.; Larson, David Nils

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on a full-scale F-18 testbed. The testbed served as a full-scale vehicle to test and validate adaptive flight control research addressing technical challenges involved with reducing risk to enable safe flight in the presence of adverse conditions such as structural damage or control surface failures. This paper describes the research interface architecture, risk mitigations, flight test approach and lessons learned of adaptive controls research.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonzon, L. L.; Hente, D. B.; Kukreti, B. M.

    The seismic-fragility response of naturally-aged, nuclear station, safety-related batteries is of interest for two reasons: (1) to determine actual failure modes and thresholds; and (2) to determine the validity of using the electrical capacity of individual cells as an indicator of the end-of-life of a battery, given a seismic event. This report covers the first test series of an extensive program using 12-year old, lead-calcium, Gould NCX-2250 cells, from the James A. Fitzpatrick Nuclear Power Station operated by the New York Power Authority. Seismic tests with three cell configurations were performed using a triaxial shake table: single-cell tests, rigidly mounted; multi-cell (three) tests, mounted in a typical battery rack; and single-cell tests specifically aimed towards examining propagation of pre-existing case cracks. In general the test philosophy was to monitor the electrical properties including discharge capacity of cells through a graduated series of g-level step increases until either the shake-table limits were reached or until electrical failure of the cells occurred. Of nine electrically active cells, six failed during seismic testing over a range of imposed g-level loads in excess of a 1-g ZPA. Post-test examination revealed a common failure mode, the cracking at the abnormally brittle, positive lead bus-bar/post interface; further examination showed that the failure zone was extremely coarse grained and extensively corroded. Presently accepted accelerated-aging methods for qualifying batteries, per IEEE Std. 535-1979, are based on plate growth, but these naturally-aged 12-year old cells showed no significant plate growth.

  12. Regenerative braking failures in battery electric vehicles and their impact on the driver.

    PubMed

    Cocron, Peter; Neumann, Isabel; Kreußlein, Maria; Wanner, Daniel; Bierbach, Maxim; Krems, Josef F

    2018-09-01

    A unique feature of battery electric vehicles (BEV) is their regenerative braking system (RBS) to recapture kinetic energy in deceleration maneuvers. If such a system is triggered via the gas pedal, most deceleration maneuvers can be executed by just using this pedal. This impacts the driving task as different deceleration strategies can be applied. Previous research has indicated that an RBS failure leading to a suddenly reduced deceleration represents an adverse event for BEV drivers. In the present study, we investigated such a failure's impact on the driver's evaluation and behavior. We conducted an experiment on a closed-off test track using a modified BEV that could temporarily switch off the RBS. One half of the 44 participants in the study received information about an upcoming RBS failure whereas the other half did not. While 91% of the drivers receiving prior information noticed the RBS failure, only 48% recognized it in the "uninformed" group. In general, the failure and the perception of its occurrence influenced the driver's evaluation and behavior more than receiving prior information. Nevertheless, under the tested conditions, drivers kept control and were able to compensate for the RBS failure. As the participants drove quite simple maneuvers in our experiment, further studies are needed to validate our findings using more complex driving settings. Given that RBS failures could have severe consequences, appropriate information and warning strategies for drivers are necessary. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. A model of the human observer and decision maker

    NASA Technical Reports Server (NTRS)

    Wewerinke, P. H.

    1981-01-01

    The decision process is described in terms of classical sequential decision theory by considering the hypothesis that an abnormal condition has occurred by means of a generalized likelihood ratio test. For this, a sufficient statistic is provided by the innovation sequence which is the result of the perception and information processing submodel of the human observer. On the basis of only two model parameters, the model predicts the decision speed/accuracy trade-off and various attentional characteristics. A preliminary test of the model for single-variable failure detection tasks resulted in a very good fit of the experimental data. In a formal validation program, a variety of multivariable failure detection tasks was investigated and the predictive capability of the model was demonstrated.
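
    The decision mechanism described above, a generalized likelihood ratio test applied to the innovation sequence, can be sketched for the simple case of detecting an unknown constant bias in white Gaussian innovations. The window length, noise level, and threshold below are illustrative assumptions, not the fitted parameters of the observer model.

```python
# Sketch of a GLR detector for a mean shift in a white Gaussian innovation
# sequence (illustrative parameters; not the observer model's fitted values).
import numpy as np

def glr_mean_shift(innovations, sigma=1.0, window=20):
    """GLR statistic for an unknown constant bias appearing within the last `window` samples."""
    stats = np.zeros(len(innovations))
    for k in range(len(innovations)):
        start = max(0, k - window + 1)
        best = 0.0
        for j in range(start, k + 1):            # candidate failure onset times
            seg = innovations[j:k + 1]
            best = max(best, seg.sum() ** 2 / (2 * sigma**2 * len(seg)))
        stats[k] = best
    return stats

rng = np.random.default_rng(1)
nu = rng.normal(0, 1, 300)
nu[200:] += 0.8                                   # an instrument failure biases the innovations
g = glr_mean_shift(nu)
alarm = np.argmax(g > 10.0)                       # threshold chosen for illustration
print(f"failure declared at sample {alarm}")
```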

  14. Testing and analysis of flat and curved panels with multiple cracks

    NASA Technical Reports Server (NTRS)

    Broek, David; Jeong, David Y.; Thomson, Douglas

    1994-01-01

    An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase is comprised of a flat unstiffened panel series and curved stiffened and unstiffened panel series. The test specimens contained various configurations for initial damage. Static loading was applied to these specimens until ultimate failure, while loads and crack propagation were recorded. This data provides the basis for developing and validating methodologies for predicting linkup of multiple cracks, progression to failure, and overall residual strength. The results from twelve flat coupon and ten full scale curved panel tests are presented. In addition, an engineering analysis procedure was developed to predict multiple crack linkup. Reasonable agreement was found between predictions and actual test results for linkup and residual strength for both flat and curved panels. The results indicate that an engineering analysis approach has the potential to quantitatively assess the effect of multiple cracks in the arrest capability of an aircraft fuselage structure.

  15. Anovulatory and ovulatory infertility: results with simplified management.

    PubMed Central

    Hull, M G; Savage, P E; Bromham, D R

    1982-01-01

    A simplified scheme for the management of anovulatory and of ovulatory (usually called unexplained) infertility was evaluated in 244 women. Eighteen patients were excluded because of primary ovarian failure, 164 were treated for ovulatory failure, and 62 with ovulatory infertility remained untreated. Twenty-five patients had a properly validated negative postcoital test. In the remaining 201 patients the two-year conception rates were 96% in patients with amenorrhoea, 83% in those with oligomenorrhoea, 74% in those with luteal deficiency, and 88% in those with ovulatory infertility. Comparison with normal rates implied that amenorrhoea represents a pure form of ovulatory failure that is completely correctable whereas in other conditions unexplained factors also contribute to infertility though to a much smaller extent than was previously thought. PMID:6805656

  16. Risk prediction models for graft failure in kidney transplantation: a systematic review.

    PubMed

    Kaboré, Rémi; Haller, Maria C; Harambat, Jérôme; Heinze, Georg; Leffondré, Karen

    2017-04-01

    Risk prediction models are useful for identifying kidney recipients at high risk of graft failure, thus optimizing clinical care. Our objective was to systematically review the models that have been recently developed and validated to predict graft failure in kidney transplantation recipients. We used PubMed and Scopus to search for English, German and French language articles published in 2005-15. We selected studies that developed and validated a new risk prediction model for graft failure after kidney transplantation, or validated an existing model with or without updating the model. Data on recipient characteristics and predictors, as well as modelling and validation methods were extracted. In total, 39 articles met the inclusion criteria. Of these, 34 developed and validated a new risk prediction model and 5 validated an existing one with or without updating the model. The most frequently predicted outcome was graft failure, defined as dialysis, re-transplantation or death with functioning graft. Most studies used the Cox model. There was substantial variability in predictors used. In total, 25 studies used predictors measured at transplantation only, and 14 studies used predictors also measured after transplantation. Discrimination performance was reported in 87% of studies, while calibration was reported in 56%. Performance indicators were estimated using both internal and external validation in 13 studies, and using external validation only in 6 studies. Several prediction models for kidney graft failure in adults have been published. Our study highlights the need to better account for competing risks when applicable in such studies, and to adequately account for post-transplant measures of predictors in studies aiming at improving monitoring of kidney transplant recipients. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
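
    Since most of the reviewed models used Cox proportional hazards regression, a minimal sketch of developing such a model and checking its discrimination is given below using the lifelines package on synthetic data. The covariates and figures are illustrative assumptions, not a model from the review.

```python
# Minimal Cox model sketch for graft failure (synthetic data; illustrative only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "donor_age": rng.normal(45, 12, n),
    "cold_ischemia_h": rng.normal(14, 4, n),
    "time_years": rng.exponential(8, n),          # follow-up / failure time
    "graft_failure": rng.integers(0, 2, n),       # 1 = dialysis, re-transplant or death
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="graft_failure")
print(cph.summary[["coef", "exp(coef)", "p"]])
print("discrimination (concordance index):", round(cph.concordance_index_, 3))
```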

  17. Validation of PV-RPM Code in the System Advisor Model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine

    2017-04-01

    This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
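
    At its core, a reliability-performance model of this kind draws component failure and repair times from probability distributions over the simulation period. The sketch below simulates one hypothetical component with exponential failures and lognormal repairs; the distributions are assumed for illustration and are not the PV-RPM parameters fitted to the Arizona plant data.

```python
# Toy failure/repair sampler for one component over a simulation period
# (hypothetical distributions; not the PV-RPM parameters fitted to plant data).
import numpy as np

rng = np.random.default_rng(3)
horizon_h = 20 * 8760.0                  # 20-year simulation period, in hours
mtbf_h, repair_med_h = 45_000.0, 48.0    # assumed mean time between failures / median repair

def simulate_component(rng):
    t, events = 0.0, []
    while True:
        t += rng.exponential(mtbf_h)                     # time to next failure
        if t > horizon_h:
            return events
        downtime = rng.lognormal(np.log(repair_med_h), 0.5)
        events.append((t, downtime))
        t += downtime

runs = [simulate_component(rng) for _ in range(1000)]    # Monte Carlo over realizations
mean_failures = np.mean([len(r) for r in runs])
mean_downtime = np.mean([sum(d for _, d in r) for r in runs])
print(f"mean failures per 20 y: {mean_failures:.2f}, mean downtime: {mean_downtime:.0f} h")
```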

  18. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In case of weak models (i.e., low precision of data), however, the success rate of AR may be low, which may consequently introduce large errors to the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) is therefore proposed such that the baseline precision can be improved by fixing only a subset of ambiguities with high success rate. This contribution proposes a new PAR strategy, allowing the subset to be selected such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are supposed to obtain the highest success rate among those with the same subset size. The strategy is called Two-step Success Rate Criterion (TSRC) as it will first try to fix a relatively large subset with the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset will be fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computation effort and, more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For the medium- to long baselines, with dual-constellation GNSS the average improvement is around 20-30%.
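
    One standard way to pre-select candidate subsets by success rate is the integer bootstrapping success-rate bound computed from the conditional standard deviations of the decorrelated ambiguities. The sketch below evaluates that textbook formula for an assumed set of precisions; it illustrates the success-rate calculation only and is not the TSRC algorithm itself.

```python
# Bootstrapping success-rate bound for a candidate ambiguity subset
# (textbook formula with assumed precisions; not the TSRC algorithm itself).
import numpy as np
from scipy.stats import norm

def bootstrapped_success_rate(cond_std):
    """P_s = prod_i [ 2*Phi(1/(2*sigma_i)) - 1 ] over the conditional std. devs. (cycles)."""
    cond_std = np.asarray(cond_std)
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

full_set = [0.05, 0.08, 0.12, 0.25, 0.40]          # assumed conditional precisions, in cycles
subset = full_set[:3]                              # drop the two least precise ambiguities
print(f"full set : P_s = {bootstrapped_success_rate(full_set):.3f}")
print(f"subset   : P_s = {bootstrapped_success_rate(subset):.3f}")
```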

  19. Radiomic machine-learning classifiers for prognostic biomarkers of advanced nasopharyngeal carcinoma.

    PubMed

    Zhang, Bin; He, Xin; Ouyang, Fusheng; Gu, Dongsheng; Dong, Yuhao; Zhang, Lu; Mo, Xiaokai; Huang, Wenhui; Tian, Jie; Zhang, Shuixing

    2017-09-10

    We aimed to identify optimal machine-learning methods for radiomics-based prediction of local failure and distant failure in advanced nasopharyngeal carcinoma (NPC). We enrolled 110 patients with advanced NPC. A total of 970 radiomic features were extracted from MRI images for each patient. Six feature selection methods and nine classification methods were evaluated in terms of their performance. We applied the 10-fold cross-validation as the criterion for feature selection and classification. We repeated each combination for 50 times to obtain the mean area under the curve (AUC) and test error. We observed that the combination methods Random Forest (RF) + RF (AUC, 0.8464 ± 0.0069; test error, 0.3135 ± 0.0088) had the highest prognostic performance, followed by RF + Adaptive Boosting (AdaBoost) (AUC, 0.8204 ± 0.0095; test error, 0.3384 ± 0.0097), and Sure Independence Screening (SIS) + Linear Support Vector Machines (LSVM) (AUC, 0.7883 ± 0.0096; test error, 0.3985 ± 0.0100). Our radiomics study identified optimal machine-learning methods for the radiomics-based prediction of local failure and distant failure in advanced NPC, which could enhance the applications of radiomics in precision oncology and clinical practice. Copyright © 2017 Elsevier B.V. All rights reserved.
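
    The feature selection plus classifier combinations evaluated above can be reproduced in outline with scikit-learn. The pipeline below pairs a univariate filter with a random forest under repeated 10-fold cross-validation on synthetic data; it illustrates the evaluation scheme rather than the study's exact RF + RF combination or MRI features.

```python
# Sketch of a radiomics-style pipeline: feature selection + classifier under
# repeated 10-fold CV (synthetic data; not the study's exact methods or features).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=110, n_features=970, n_informative=20, random_state=0)

pipe = make_pipeline(SelectKBest(f_classif, k=30),          # univariate filter (assumed)
                     RandomForestClassifier(n_estimators=300, random_state=0))
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)   # 50 evaluations
auc = cross_val_score(pipe, X, y, scoring="roc_auc", cv=cv)
print(f"mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```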

  20. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  1. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2017-07-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times, one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (r = -.425), and the Wechsler Test of Adult Reading (r = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  2. Residual shear strength variability as a primary control on movement of landslides reactivated by earthquake-induced ground motion: Implications for coastal Oregon, U.S.

    USGS Publications Warehouse

    Schulz, William H.; Wang, Gonghui

    2014-01-01

    Most large seismogenic landslides are reactivations of preexisting landslides with basal shear zones in the residual strength condition. Residual shear strength often varies during rapid displacement, but the response of residual shear zones to seismic loading is largely unknown. We used a ring shear apparatus to perform simulated seismic loading tests, constant displacement rate tests, and tests during which shear stress was gradually varied on specimens from two landslides to improve understanding of coseismic landslide reactivation and to identify shear strength models valid for slow gravitational failure through rapid coseismic failure. The landslides we studied represent many along the Oregon, U.S., coast. Seismic loading tests resulted in (1) catastrophic failure involving unbounded displacement when stresses represented those for the existing landslides and (2) limited to unbounded displacement when stresses represented those for hypothetical dormant landslides, suggesting that coseismic landslide reactivation may be significant during future great earthquakes occurring near the Oregon Coast. Constant displacement rate tests indicated that shear strength decreased exponentially during the first few decimeters of displacement but increased logarithmically with increasing displacement rate when sheared at 0.001 cm s−1 or greater. Dynamic shear resistance estimated from shear strength models correlated well with stresses observed during seismic loading tests, indicating that displacement rate and amount primarily controlled failure characteristics. We developed a stress-based approach to estimate coseismic landslide displacement that utilizes the variable shear strength model. The approach produced results that compared favorably to observations made during seismic loading tests, indicating its utility for application to landslides.

  3. Production Maintenance Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason Gabler, David Skinner

    2005-11-01

    PMI is an XML framework for formulating tests of software and software environments which operate in a relatively push-button manner, i.e., can be automated, and that provide results that are readily consumable/publishable via RSS. Insofar as possible the tests are carried out in a manner congruent with real usage. PMI drives shell scripts via a perl program which is in charge of timing, validating each test, and controlling the flow through sets of tests. Testing in PMI is built up hierarchically. A suite of tests may start by testing basic functionalities (file system is writable, compiler is found and functions, shell environment behaves as expected, etc.) and work up to larger, more complicated activities (execution of parallel code, file transfers, etc.) At each step in this hierarchy a failure leads to generation of a text message or RSS that can be tagged as to who should be notified of the failure. There are two functionalities that PMI has been directed at: 1) regular and automated testing of multi-user environments and 2) version-wise testing of new software releases prior to their deployment in a production mode.

  4. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure along with the event's likelihood of occurrence contribute to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.
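
    The stochastic-sampling assessment described above rests on Latin hypercube sampling of the uncertain inputs. A minimal sketch with SciPy is shown below; the limit-state function and input ranges are invented for illustration.

```python
# Latin hypercube estimate of a failure probability (invented limit state and
# input ranges; illustrative of the sampling approach only).
from scipy.stats import qmc

def limit_state(load_kN, strength_kN):
    """Failure when the applied load exceeds the available strength."""
    return load_kN > strength_kN

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=10_000)                              # uniform samples in [0, 1)^2
samples = qmc.scale(u, l_bounds=[20.0, 30.0], u_bounds=[60.0, 80.0])

failures = limit_state(samples[:, 0], samples[:, 1])
print(f"estimated probability of failure ~ {failures.mean():.3f}")
```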

  5. NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test as of 736 kg of Propellant Throughput

    NASA Technical Reports Server (NTRS)

    Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2012-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is developing the next-generation solar-electric ion propulsion system with significant enhancements beyond the state-of-the-art NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) ion propulsion system to provide future NASA science missions with enhanced mission capabilities. A Long-Duration Test (LDT) was initiated in June 2005 to validate the thruster service life modeling and to qualify the thruster propellant throughput capability. The thruster has set electric propulsion records for the longest operating duration, highest propellant throughput, and most total impulse demonstrated. At the time of this publication, the NEXT LDT has surpassed 42,100 h of operation, processed more than 736 kg of xenon propellant, and demonstrated greater than 28.1 MN s total impulse. Thruster performance has been steady with negligible degradation. The NEXT thruster design has mitigated several lifetime limiting mechanisms encountered in the NSTAR design, including the NSTAR first failure mode, thereby drastically improving thruster capabilities. Component erosion rates and the progression of the predicted life-limiting erosion mechanism for the thruster compare favorably to pretest predictions based upon semi-empirical ion thruster models used in the thruster service life assessment. Service life model validation has been accomplished by the NEXT LDT. Assuming full-power operation until test article failure, the models and extrapolated erosion data predict penetration of the accelerator grid grooves after more than 45,000 hours of operation while processing over 800 kg of xenon propellant. Thruster failure due to degradation of the accelerator grid structural integrity is expected after groove penetration.

  7. STS-114 Engine Cut-off Sensor Anomaly Technical Consultation Report

    NASA Technical Reports Server (NTRS)

    Wilson, Timmy R.; Kichak, Robert A.; Ungar, Eugene K.; Cherney, Robert; Rickman, Steve L.

    2009-01-01

    The NESC consultation team participated in real-time troubleshooting of the Main Propulsion System (MPS) Engine Cutoff (ECO) sensor system failures during STS-114 launch countdown. The team assisted with External Tank (ET) thermal and ECO Point Sensor Box (PSB) circuit analyses, and made real-time inputs to the Space Shuttle Program (SSP) problem resolution teams. Several long-term recommendations resulted. One recommendation was to conduct cryogenic tests of the ECO sensors to validate, or disprove, the theory that variations in circuit impedance due to cryogenic effects on swaged connections within the sensor were the root cause of STS-114 failures.

  8. Development of Airport Surface Required Navigation Performance (RNP)

    NASA Technical Reports Server (NTRS)

    Cassell, Rick; Smith, Alex; Hicok, Dan

    1999-01-01

    The U.S. and international aviation communities have adopted the Required Navigation Performance (RNP) process for defining aircraft performance when operating in the en-route, approach, and landing phases of flight. RNP consists primarily of the following key parameters - accuracy, integrity, continuity, and availability. The processes and analytical techniques employed to define en-route, approach and landing RNP have been applied in the development of RNP for the airport surface. To validate the proposed RNP requirements, several methods were used. Operational and flight demonstration data were analyzed for conformance with proposed requirements, as were several aircraft flight simulation studies. The pilot failure risk component was analyzed through several hypothetical scenarios. Additional simulator studies are recommended to better quantify crew reactions to failures, as well as additional simulator and field testing to validate achieved accuracy performance. This research was performed in support of the NASA Low Visibility Landing and Surface Operations Programs.

  9. NASA-LaRc Flight-Critical Digital Systems Technology Workshop

    NASA Technical Reports Server (NTRS)

    Meissner, C. W., Jr. (Editor); Dunham, J. R. (Editor); Crim, G. (Editor)

    1989-01-01

    The outcome of a Flight-Critical Digital Systems Technology Workshop held at NASA-Langley, December 13 to 15, 1988, is documented. The purpose of the workshop was to elicit the aerospace industry's view of the issues which must be addressed for the practical realization of flight-critical digital systems. The workshop was divided into three parts: an overview session; three half-day meetings of seven working groups addressing aeronautical and space requirements, system design for validation, failure modes, system modeling, reliable software, and flight test; and a half-day summary of the research issues presented by the working group chairmen. Issues that generated the most consensus across the workshop were: (1) the lack of effective design and validation methods with support tools to enable engineering of highly-integrated, flight-critical digital systems, and (2) the lack of high-quality laboratory and field data on system failures, especially those due to the electromagnetic environment (EME).

  10. Score tests for independence in semiparametric competing risks models.

    PubMed

    Saïd, Mériem; Ghazzali, Nadia; Rivest, Louis-Paul

    2009-12-01

    A popular model for competing risks postulates the existence of a latent unobserved failure time for each risk. Assuming that these underlying failure times are independent is attractive since it allows standard statistical tools for right-censored lifetime data to be used in the analysis. This paper proposes simple independence score tests for the validity of this assumption when the individual risks are modeled using semiparametric proportional hazards regressions. It assumes that covariates are available, making the model identifiable. The score tests are derived for alternatives that specify that copulas are responsible for a possible dependency between the competing risks. The test statistics are constructed by adding to the partial likelihoods for the individual risks an explanatory variable for the dependency between the risks. A variance estimator is derived by writing the score function and the Fisher information matrix for the marginal models as stochastic integrals. Pitman efficiencies are used to compare test statistics. A simulation study and a numerical example illustrate the methodology proposed in this paper.

  11. Compression After Impact on Honeycomb Core Sandwich Panels with Thin Facesheets, Part 2: Analysis

    NASA Technical Reports Server (NTRS)

    Mcquigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.

    2012-01-01

    A two-part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part 2, the subject of the current paper, is focused on the analysis, which corresponds to the CAI testing described in Part 1. Of interest are sandwich panels with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of materials, which were identical with the exception of the density of the honeycomb core, were tested in Part 1. The results highlighted the need for analysis methods which take into account multiple failure modes. A finite element model (FEM) is developed here, in Part 2. A commercial implementation of the Multicontinuum Failure Theory (MCT) for progressive failure analysis (PFA) in composite laminates, Helius:MCT, is included in this model. The inclusion of PFA in the present model provided a new, unique ability to account for multiple failure modes. In addition, significant impact damage detail is included in the model. A sensitivity study, used to assess the effect of each damage parameter on overall analysis results, is included in an appendix. Analysis results are compared to the experimental results for each of the 32 CAI sandwich panel specimens tested to failure. The failure of each specimen is predicted using the high-fidelity, physics-based analysis model developed here, and the results highlight key improvements in the understanding of honeycomb core sandwich panel CAI failure. Finally, a parametric study highlights the strength benefits compared to mass penalty for various core densities.

  12. Flight Validation of a Metrics Driven L(sub 1) Adaptive Control

    NASA Technical Reports Server (NTRS)

    Dobrokhodov, Vladimir; Kitsios, Ioannis; Kaminer, Isaac; Jones, Kevin D.; Xargay, Enric; Hovakimyan, Naira; Cao, Chengyu; Lizarraga, Mariano I.; Gregory, Irene M.

    2008-01-01

    The paper addresses initial steps involved in the development and flight implementation of a new metrics-driven L1 adaptive flight control system. The work concentrates on (i) definition of appropriate control-driven metrics that account for control surface failures; (ii) tailoring the recently developed L1 adaptive controller to the design of adaptive flight control systems that explicitly address these metrics in the presence of control surface failures and dynamic changes under adverse flight conditions; (iii) development of a flight control system for implementation of the resulting algorithms onboard a small UAV; and (iv) conducting a comprehensive flight test program that demonstrates performance of the developed adaptive control algorithms in the presence of failures. As the initial milestone, the paper concentrates on the adaptive flight system setup and initial efforts addressing the ability of a commercial off-the-shelf AP with and without adaptive augmentation to recover from control surface failures.

  13. Rigging Test Bed Development for Validation of Multi-Stage Decelerator Extractions

    NASA Technical Reports Server (NTRS)

    Kenig, Sivan J.; Gallon, John C.; Adams, Douglas S.; Rivellini, Tommaso P.

    2013-01-01

    The Low Density Supersonic Decelerator project is developing new decelerator systems for Mars entry which would include testing with a Supersonic Flight Dynamics Test Vehicle. One of the decelerator systems being developed is a large supersonic ringsail parachute. Due to the configuration of the vehicle, it is not possible to deploy the parachute with a mortar, which would be the preferred method for a spacecraft in a supersonic flow. Alternatively, a multi-stage extraction process using a ballute as a pilot is being developed for the test vehicle. The Rigging Test Bed is a test venue being constructed to perform verification and validation of this extraction process. The test bed consists of a long pneumatic piston device capable of providing a constant force simulating the ballute drag force during the extraction events. The extraction tests will take place both inside a high-bay for frequent tests of individual extraction stages and outdoors using a mobile hydraulic crane for complete deployment tests from initial pack pull out to canopy extraction. These tests will measure line tensions and use photogrammetry to track motion of the elements involved. The resulting data will be used to verify packing and rigging, as well as to validate models and identify potential failure modes in order to finalize the design of the extraction system.

  14. Reliability, construct validity and determinants of 6-minute walk test performance in patients with chronic heart failure.

    PubMed

    Uszko-Lencer, Nicole H M K; Mesquita, Rafael; Janssen, Eefje; Werter, Christ; Brunner-La Rocca, Hans-Peter; Pitta, Fabio; Wouters, Emiel F M; Spruit, Martijn A

    2017-08-01

    In-depth analyses of the measurement properties of the 6-minute walk test (6MWT) in patients with chronic heart failure (CHF) are lacking. We investigated the reliability, construct validity, and determinants of the distance covered in the 6MWT (6MWD) in CHF patients. 337 patients were studied (median age 65 years, 70% male, ejection fraction 35%). Participants performed two 6MWTs on subsequent days. Demographics, anthropometrics, clinical data, ejection fraction, maximal exercise capacity, body composition, lung function, and symptoms of anxiety and depression were also assessed. Construct validity was assessed in terms of convergent, discriminant and known-groups validity. Stepwise linear regression was used. 6MWT was reliable (ICC=0.90, P<0.0001). The learning effect was 31 m (95% CI 27, 35 m). Older age (≥65 years), lower lung diffusing capacity (<80% predicted) and higher NYHA class (NYHA III) were associated with a lower likelihood of a meaningful increase in the second test (OR 0.45-0.56, P<0.05 for all). The best 6MWD had moderate-to-good correlations with peak exercise capacity (r_s = 0.54-0.69) and no-to-fair correlations with body composition, lung function, ejection fraction, and symptoms of anxiety and depression (r_s = 0.04-0.49). Patients with higher NYHA classes had lower 6MWD. 6MWD was independently associated with maximal power output during maximal exercise, estimated glomerular filtration rate and age (51.7% of the variability). 6MWT was found to be reliable and valid in patients with mild-to-moderate CHF. Maximal exercise capacity, renal function and age were significant determinants of the best 6MWD. These findings strengthen the clinical utility of the 6MWT in CHF. Copyright © 2017 Elsevier B.V. All rights reserved.
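
    Test-retest reliability of the kind reported above is usually quantified with an intraclass correlation coefficient. The sketch below computes one with the pingouin package on synthetic repeated 6MWD measurements, purely to illustrate the calculation; the values are not the study's data.

```python
# ICC sketch for test-retest 6MWD data (synthetic values; illustration only).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
true_6mwd = rng.normal(420, 80, 50)                      # 50 hypothetical patients
day1 = true_6mwd + rng.normal(0, 25, 50)
day2 = true_6mwd + 31 + rng.normal(0, 25, 50)            # ~31 m learning effect, as in the study

df = pd.DataFrame({
    "patient": np.tile(np.arange(50), 2),
    "day": np.repeat(["day1", "day2"], 50),
    "distance_m": np.concatenate([day1, day2]),
})
icc = pg.intraclass_corr(data=df, targets="patient", raters="day", ratings="distance_m")
print(icc[["Type", "ICC", "CI95%"]])
```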

  15. Content validation of the operational definitions of the nursing diagnoses of activity intolerance, excess fluid volume, and decreased cardiac output in patients with heart failure.

    PubMed

    de Souza, Vanessa; Zeitoun, Sandra Salloum; Lopes, Camila Takao; de Oliveira, Ana Paula Dias; Lopes, Juliana de Lima; de Barros, Alba Lucia Botura Leite

    2014-06-01

    To consensually validate the operational definitions of the nursing diagnoses activity intolerance, excessive fluid volume, and decreased cardiac output in patients with decompensated heart failure. Consensual validation was performed in two stages: analogy by similarity of defining characteristics, and development of operational definitions and validation with experts. A total of 38 defining characteristics were found. Operational definitions were developed and content-validated. One hundred percent agreement was achieved among the seven experts after five rounds. "Ascites" was added to the nursing diagnosis excessive fluid volume. The consensual validation improves interpretation of human response, grounding the selection of nursing interventions and contributing to improved nursing outcomes. The validated definitions support the assessment of patients with decompensated heart failure. © 2013 NANDA International.

  16. Application of a Probabilistic Sizing Methodology for Ceramic Structures

    NASA Astrophysics Data System (ADS)

    Rancurel, Michael; Behar-Lafenetre, Stephanie; Cornillon, Laurence; Leroy, Francois-Henri; Coe, Graham; Laine, Benoit

    2012-07-01

    Ceramics are increasingly used in the space industry to take advantage of their stability and high specific stiffness properties. Their brittle behaviour often leads to sizing them with increased safety factors applied to the maximum stresses, which results in oversized structures. This is inconsistent with the major driver in space architecture, the mass criterion. This paper presents a methodology to size ceramic structures based on their failure probability. Thanks to failure tests on samples, the Weibull law which characterizes the strength distribution of the material is obtained. A-value (Q0.0195%) and B-value (Q0.195%) are then assessed to take into account the limited number of samples. A knocked-down Weibull law that interpolates the A- and B-values is also obtained. Thanks to these two laws, a most-likely and a knocked-down prediction of failure probability are computed for complex ceramic structures. The application of this methodology and its validation by test are reported in the paper.
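
    The Weibull description of brittle strength links an applied stress to a failure probability through the modulus and the characteristic strength. The sketch below fits both parameters to hypothetical sample strengths and evaluates the resulting failure probability; no A- or B-value knock-down of the kind applied in the paper is included.

```python
# Weibull strength sketch: fit modulus and characteristic strength to sample data,
# then evaluate P_f(sigma) = 1 - exp(-(sigma/sigma_0)^m).
# Hypothetical strengths; no A/B-value knock-down is applied here.
import numpy as np
from scipy.stats import weibull_min

strengths_MPa = weibull_min.rvs(c=10.0, scale=300.0, size=30, random_state=5)  # fake test data

m, _, sigma0 = weibull_min.fit(strengths_MPa, floc=0.0)    # modulus m and scale sigma_0
print(f"Weibull modulus m ~ {m:.1f}, characteristic strength ~ {sigma0:.0f} MPa")

def failure_probability(sigma_MPa):
    return 1.0 - np.exp(-(sigma_MPa / sigma0) ** m)

for s in (150.0, 200.0, 250.0):
    print(f"P_f at {s:.0f} MPa ~ {failure_probability(s):.4f}")
```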

  17. Anchorage strength models for end-debonding predictions in RC beams strengthened with FRP composites

    NASA Astrophysics Data System (ADS)

    Nardini, V.; Guadagnini, M.; Valluzzi, M. R.

    2008-05-01

    The increase in the flexural capacity of RC beams obtained by externally bonding FRP composites to their tension side is often limited by the premature and brittle debonding of the external reinforcement. An in-depth understanding of this complex failure mechanism, however, has not yet been achieved. With specific regard to end-debonding failure modes, extensive experimental observations reported in the literature highlight the important distinction, often neglected in strength models proposed by researchers, between the peel-off and rip-off end-debonding types of failure. The peel-off failure is generally characterized by a failure plane located within the first few millimetres of the concrete cover, whilst the rip-off failure penetrates deeper into the concrete cover and propagates along the tensile steel reinforcement. A new rip-off strength model is described in this paper. The model proposed is based on the Chen and Teng peel-off model and relies upon additional theoretical considerations. The influence of the amount of the internal tensile steel reinforcement and the effective anchorage length of FRP are considered and discussed. The validity of the new model is analyzed further through comparisons with test results, findings of a numerical investigation, and a parametric study. The new rip-off strength model is assessed against a database comprising results from 62 beams tested by various researchers and is shown to yield less conservative results.

  18. Kidney Failure and ESRD in the Atherosclerosis Risk in Communities (ARIC) Study: Comparing Ascertainment of Treated and Untreated Kidney Failure in a Cohort Study

    PubMed Central

    Rebholz, Casey M.; Coresh, Josef; Ballew, Shoshana H.; McMahon, Blaithin; Whelton, Seamus P.; Selvin, Elizabeth; Grams, Morgan E.

    2015-01-01

    Background: Linkage to the US Renal Data System (USRDS) registry is commonly used to identify end-stage renal disease (ESRD) cases, or kidney failure treated with dialysis or transplantation, but it underestimates the total burden of kidney failure. This study validates a kidney failure definition that includes both kidney failure treated and not treated by dialysis or transplantation. It compares kidney failure risk factors and outcomes using this broader definition to USRDS-identified ESRD risk factors and outcomes. Study Design: Diagnostic test study with stratified random sampling of hospitalizations for chart review. Setting & Participants: Atherosclerosis Risk in Communities Study (N=11,530; chart review n=546). Index Test: USRDS-identified ESRD; treated or untreated kidney failure defined by USRDS-identified ESRD or International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)/ICD-10-CM code from hospitalization or death. Reference Test: For ESRD, determination of permanent dialysis or transplantation; for kidney failure, determination of permanent dialysis, transplantation, or eGFR <15 mL/min/1.73 m2. Results: Over 13 years' median follow-up, 508 kidney failure cases were identified, including 173 (34.1%) from the USRDS registry. ESRD and kidney failure incidence were 1.23 and 3.66 cases per 1,000 person-years in the overall population, and 1.35 and 6.59 cases per 1,000 person-years among participants older than 70 years, respectively. Other risk factor associations were similar between ESRD and kidney failure, except diabetes and albuminuria which were stronger for ESRD. Survival at 1 and 5 years was 74.0% and 24.0% for ESRD and 59.8% and 31.6% for kidney failure, respectively. Sensitivity and specificity were 88.0% and 97.3% comparing the kidney failure ICD-9-CM/ICD-10-CM code algorithm to chart review; for USRDS-identified ESRD, sensitivity and specificity were 94.9% and 100.0%. Limitations: Some medical charts were incomplete. Conclusions: A kidney failure definition including treated and untreated disease identifies more cases than linkage to the USRDS registry alone, particularly among older adults. Future studies might consider reporting both USRDS-identified ESRD and a more inclusive kidney failure definition. PMID:25773483
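
    The sensitivity and specificity quoted above come from comparing the code-based definition against chart review. The sketch below makes that calculation explicit for a made-up 2x2 table; the counts are not the study's data.

```python
# Sensitivity / specificity from a hypothetical validation 2x2 table
# (made-up counts; the study's figures came from stratified chart review).
true_pos, false_neg = 88, 12     # chart-confirmed kidney failure, split by algorithm result
true_neg, false_pos = 438, 8     # chart-confirmed no kidney failure, split by algorithm result

sensitivity = true_pos / (true_pos + false_neg)
specificity = true_neg / (true_neg + false_pos)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```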

  19. Wormhole Formation in RSRM Nozzle Joint Backfill

    NASA Technical Reports Server (NTRS)

    Stevens, J.

    2000-01-01

    The RSRM nozzle uses a barrier of RTV rubber upstream of the nozzle O-ring seals. Post flight inspection of the RSRM nozzle continues to reveal occurrence of "wormholes" into the RTV backfill. The term "wormholes", sometimes called "gas paths", indicates a gas flow path not caused by pre-existing voids, but by a little-understood internal failure mode of the material during motor operation. Fundamental understanding of the mechanics of the RSRM nozzle joints during motor operation, nonlinear viscoelastic characterization of the RTV backfill material, identification of the conditions that predispose the RTV to form wormholes, and screening of candidate replacement materials is being pursued by a joint effort between Thiokol Propulsion, NASA, and the Army Propulsion & Structures Directorate at Redstone Arsenal. The performance of the RTV backfill in the joint is controlled by the joint environment. Joint movement, which applies a tension and shear load on the material, coupled with the introduction of high pressure gas in combination create an environment that exceeds the capability of the material to withstand the wormhole effect. Little data exists to evaluate why the material fails under the modeled joint conditions, so an effort to characterize and evaluate the material under these conditions was undertaken. Viscoelastic property data from characterization testing will anchor structural analysis models. Data over a range of temperatures, environmental pressures, and strain rates was used to develop a nonlinear viscoelastic model to predict material performance, develop criteria for replacement materials, and quantify material properties influencing wormhole growth. Three joint simulation analogs were developed to analyze and validate joint thermal barrier (backfill) material performance. Two exploratory tests focus on detection of wormhole failure under specific motor operating conditions. A "validation" test system provides data to "validate" computer models and predictions. Finally, two candidate replacement materials are being screened and "validated" using the developed test systems.

  20. The Prognostic Accuracy of Suggested Predictors of Failure of Medical Management in Patients With Nontuberculous Spinal Epidural Abscess.

    PubMed

    Stratton, Alexandra; Faris, Peter; Thomas, Kenneth

    2018-05-01

    Retrospective cohort study. To test the external validity of the 2 published prediction criteria for failure of medical management in patients with spinal epidural abscess (SEA). Patients with SEA over a 10-year period at a tertiary care center were identified using ICD-10 (International Classification of Diseases, 10th Revision) diagnostic codes; electronic and paper charts were reviewed. The incidence of SEA and the proportion of patients with SEA that were treated medically were calculated. The rate of failure of medical management was determined. The published prediction models were applied to our data to determine how predictive they were of failure in our cohort. A total of 550 patients were identified using ICD-10 codes, 160 of whom had a magnetic resonance imaging-confirmed diagnosis of SEA. The incidence of SEA was 16 patients per year. Seventy-five patients were found to be intentionally managed medically and were included in the analysis. Thirteen of these 75 patients failed medical management (17%). Based on the published prediction criteria, 26% (Kim et al) and 45% (Patel et al) of our patients were expected to fail. Published prediction models for failure of medical management of SEA were not valid in our cohort. However, once calibrated to our cohort, Patel's model consisting of positive blood culture, presence of diabetes, white blood cells >12.5, and C-reactive protein >115 was the better model for our data.

  1. A systems approach to solder joint fatigue in spacecraft electronic packaging

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1991-01-01

    Differential expansion induced fatigue resulting from temperature cycling is a leading cause of solder joint failures in spacecraft. Achieving high reliability flight hardware requires that each element of the fatigue issue be addressed carefully. This includes defining the complete thermal-cycle environment to be experienced by the hardware, developing electronic packaging concepts that are consistent with the defined environments, and validating the completed designs with a thorough qualification and acceptance test program. This paper describes a systems approach to solder fatigue based principally on the fundamental log-strain versus log-cycles-to-failure behavior of fatigue. This behavior has been used to integrate diverse ground-test and flight operational thermal-cycle environments into a unified electronics design approach. Each element of the approach reflects both the mechanism physics that control solder fatigue and the practical realities of the hardware build, test, delivery, and application cycle.
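
    The log-strain versus log-cycles-to-failure behavior referenced above is commonly expressed as a power law, which allows several thermal-cycle environments to be folded into one cumulative-damage budget. The sketch below assumes an illustrative exponent, reference point, and Miner's-rule accumulation; these are generic fatigue conventions, not the paper's calibrated model.

      # Sketch: power-law (log-log linear) fatigue life plus Miner's-rule damage accumulation.
      # Exponent c and the reference point (eps_ref, n_ref) are illustrative assumptions.

      def cycles_to_failure(strain, eps_ref=0.01, n_ref=1000.0, c=2.0):
          """N = n_ref * (eps_ref / strain)**c, i.e., a straight line on log-log axes."""
          return n_ref * (eps_ref / strain) ** c

      def miner_damage(environments):
          """Accumulate damage over (strain_range, applied_cycles) pairs; failure when >= 1."""
          return sum(n / cycles_to_failure(eps) for eps, n in environments)

      # Made-up example: ground thermal-vacuum cycles plus on-orbit eclipse cycles.
      envs = [(0.012, 200), (0.004, 3000)]
      print(f"accumulated damage fraction = {miner_damage(envs):.2f}")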

  2. Interactive Digital e-Health Game for Heart Failure Self-Management: A Feasibility Study.

    PubMed

    Radhakrishnan, Kavita; Toprac, Paul; O'Hair, Matt; Bias, Randolph; Kim, Miyong T; Bradley, Paul; Mackert, Michael

    2016-12-01

    To develop and test the prototype of a serious digital game for improving community-dwelling older adults' heart failure (HF) knowledge and self-management behaviors. The serious game innovatively incorporates evidence-based HF guidelines with contemporary game technology. The study included three phases: development of the game prototype, its usability assessment, and evaluation of the game's functionality. Usability testing included researchers' usability assessment, followed by research personnel's observations of participants playing the game, and participants' completion of a usability survey. Next, in a pretest-post-test design, validated instruments-the Atlanta Heart Failure Knowledge Test and the Self Care for Heart Failure Index-were used to measure improvement in HF self-management knowledge and behaviors related to HF self-maintenance, self-management, and self-efficacy, respectively. A postgame survey assessed participants' perceptions of the game. During usability testing, with seven participants, 100%, 100%, and 86% found the game easy to play, enjoyable, and helpful for learning about HF, respectively. In the subsequent functionality testing, with 19 participants, 89% found the game interesting, enjoyable, and easy to play. Playing the game resulted in a significant improvement in HF self-management knowledge, a nonsignificant improvement in self-reported behaviors related to HF self-maintenance, and no difference in HF self-efficacy scores. Participants with lower education level and age preferred games to any other medium for receiving information. It is feasible to develop a serious digital game that community-dwelling older adults with HF find both satisfying and acceptable and that can improve their self-management knowledge.

  3. Interactive Digital e-Health Game for Heart Failure Self-Management: A Feasibility Study

    PubMed Central

    Toprac, Paul; O'Hair, Matt; Bias, Randolph; Kim, Miyong T.; Bradley, Paul; Mackert, Michael

    2016-01-01

    Abstract Objective: To develop and test the prototype of a serious digital game for improving community-dwelling older adults' heart failure (HF) knowledge and self-management behaviors. The serious game innovatively incorporates evidence-based HF guidelines with contemporary game technology. Materials and Methods: The study included three phases: development of the game prototype, its usability assessment, and evaluation of the game's functionality. Usability testing included researchers' usability assessment, followed by research personnel's observations of participants playing the game, and participants' completion of a usability survey. Next, in a pretest–post-test design, validated instruments—the Atlanta Heart Failure Knowledge Test and the Self Care for Heart Failure Index—were used to measure improvement in HF self-management knowledge and behaviors related to HF self-maintenance, self-management, and self-efficacy, respectively. A postgame survey assessed participants' perceptions of the game. Results: During usability testing, with seven participants, 100%, 100%, and 86% found the game easy to play, enjoyable, and helpful for learning about HF, respectively. In the subsequent functionality testing, with 19 participants, 89% found the game interesting, enjoyable, and easy to play. Playing the game resulted in a significant improvement in HF self-management knowledge, a nonsignificant improvement in self-reported behaviors related to HF self-maintenance, and no difference in HF self-efficacy scores. Participants with lower education level and age preferred games to any other medium for receiving information. Conclusion: It is feasible to develop a serious digital game that community-dwelling older adults with HF find both satisfying and acceptable and that can improve their self-management knowledge. PMID:27976955

  4. Postbuckling and Growth of Delaminations in Composite Plates Subjected to Axial Compression

    NASA Technical Reports Server (NTRS)

    Reeder, James R.; Chunchu, Prasad B.; Song, Kyongchan; Ambur, Damodar R.

    2002-01-01

    The postbuckling response and growth of circular delaminations in flat and curved plates are investigated as part of a study to identify the criticality of delamination locations through the laminate thickness. The experimental results from tests on delaminated plates are compared with finite element analysis results generated using shell models. The analytical prediction of delamination growth is obtained by assessing the strain energy release rate results from the finite element model and comparing them to a mixed-mode fracture toughness failure criterion. The analytical results for onset of delamination growth compare well with experimental results generated using a 3-dimensional displacement visualization system. The record of delamination progression measured in this study has resulted in a fully 3-dimensional test case with which progressive failure models can be validated.
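
    The mixed-mode fracture toughness comparison described above is often implemented with a criterion such as Benzeggagh-Kenane (B-K); the sketch below assumes that form with invented toughness values, which may differ from the specific criterion and material data used in the study.

      # Sketch: onset-of-growth check of strain energy release rates against a mixed-mode
      # criterion (B-K form). Toughness values and eta are illustrative assumptions.

      def bk_toughness(g_i, g_ii, g_ic=0.2, g_iic=0.6, eta=1.75):
          """Gc = GIc + (GIIc - GIc) * (GII/GT)**eta, all in kJ/m^2."""
          g_total = g_i + g_ii
          mode_mix = g_ii / g_total if g_total > 0.0 else 0.0
          return g_ic + (g_iic - g_ic) * mode_mix ** eta

      def growth_predicted(g_i, g_ii):
          """Delamination growth is predicted when the total energy release rate reaches Gc."""
          return (g_i + g_ii) >= bk_toughness(g_i, g_ii)

      print(growth_predicted(0.15, 0.10))  # G components as extracted from a shell model -> False here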

  5. Ductile Tearing of Thin Aluminum Plates Under Blast Loading. Predictions with Fully Coupled Models and Biaxial Material Response Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corona, Edmundo; Gullerud, Arne S.; Haulenbeek, Kimberly K.

    2015-06-01

    The work presented in this report concerns the response and failure of thin 2024-T3 aluminum alloy circular plates subjected to a blast load produced by the detonation of a nearby spherical charge. The plates were fully clamped around the circumference and the explosive charge was located centrally with respect to the plate. The principal objective was to conduct a numerical model validation study by comparing the results of predictions to experimental measurements of plate deformation and failure for charges with masses in the vicinity of the threshold between no tearing and tearing of the plates. Stereo digital image correlation data was acquired for all tests to measure the deflection and strains in the plates. The size of the virtual strain gage in the measurements, however, was relatively large, so the strain measurements have to be interpreted accordingly as lower bounds of the actual strains in the plate and of the severity of the strain gradients. A fully coupled interaction model between the blast and the deflection of the structure was considered. The results of the validation exercise indicated that the model predicted the deflection of the plates reasonably accurately, as well as the distribution of strain on the plate. The estimation of the threshold charge based on a critical value of equivalent plastic strain measured in a bulge test, however, was not accurate, despite efforts to determine the failure strain of the aluminum sheet under biaxial stress conditions. Further work is needed to be able to predict plate tearing with some degree of confidence. Given the current technology, at least one test under the actual blast conditions where the plate tears is needed to calibrate the value of equivalent plastic strain at which failure occurs in the numerical model. Once that has been determined, the question of the explosive mass value at the threshold could be addressed with more confidence.

  6. Evaluation of tools used to measure calcium and/or dairy consumption in children and adolescents.

    PubMed

    Magarey, Anthea; Yaxley, Alison; Markow, Kylie; Baulderstone, Lauren; Miller, Michelle

    2014-08-01

    To identify and critique tools that assess Ca and/or dairy intake in children to ascertain the most accurate and reliable tools available. A systematic review of the literature was conducted using defined inclusion and exclusion criteria. Articles were included on the basis that they reported on a tool measuring Ca and/or dairy intake in children in Western countries and reported on originally developed tools or tested the validity or reliability of existing tools. Defined criteria for reporting reliability and validity properties were applied. Studies in Western countries. Children. Eighteen papers reporting on two tools that assessed dairy intake, ten that assessed Ca intake and five that assessed both dairy and Ca were identified. An examination of tool testing revealed high reliance on lower-order tests such as correlation and failure to differentiate between statistical and clinically meaningful significance. Only half of the tools were tested for reliability and results indicated that only one Ca tool and one dairy tool were reliable. Validation studies showed acceptable levels of agreement (<100 mg difference) and/or sensitivity (62-83 %) and specificity (55-77 %) in three Ca tools. With reference to the testing methodology and results, no tools were considered both valid and reliable for the assessment of dairy intake and only one tool proved valid and reliable for the assessment of Ca intake. These results clearly indicate the need for development and rigorous testing of tools to assess Ca and/or dairy intake in children and adolescents.

  7. Effect of LEO cycling on 125 Ah advanced design IPV nickel-hydrogen flight cells. An update

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1991-01-01

    Validation testing of the NASA Lewis 125 Ah advanced design individual pressure vessel (IPV) nickel-hydrogen flight cells was conducted. Work consisted of characterization, storage, and cycle life testing. There was no capacity degradation after 52 days of storage with the cells in the discharged state, on open circuit, at 0 C, and at a hydrogen pressure of 14.5 psia. The catalyzed wall wick cells were cycled for over 11,000 cycles with no cell failures in the continuing test. One of the noncatalyzed wall wick cells failed.

  8. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.
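
    The sensitivity figures above reduce to agreement on identified failures between a candidate fit-test and whichever method is designated the reference. A minimal sketch with made-up paired pass/fail outcomes (not the study's data) shows how swapping the reference changes the apparent performance.

      # Sketch: sensitivity of a candidate fit-test relative to a chosen reference method.
      # sensitivity = (reference failures also flagged by the candidate) / (reference failures)

      def sensitivity(reference_fail, candidate_fail):
          """Both arguments are per-subject lists of booleans (True = inadequate fit)."""
          flagged = [c for r, c in zip(reference_fail, candidate_fail) if r]
          return sum(flagged) / len(flagged) if flagged else float("nan")

      # Made-up outcomes for six subjects.
      cnp    = [True, True, False, True, False, True]    # controlled negative pressure
      counts = [True, False, False, True, False, False]  # particle counting
      print(sensitivity(cnp, counts))  # particle counting judged against CNP -> 0.5
      print(sensitivity(counts, cnp))  # CNP judged against particle counting -> 1.0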

  9. A Zebrafish Heart Failure Model for Assessing Therapeutic Agents.

    PubMed

    Zhu, Xiao-Yu; Wu, Si-Qi; Guo, Sheng-Ya; Yang, Hua; Xia, Bo; Li, Ping; Li, Chun-Qi

    2018-03-20

    Heart failure is a leading cause of death, and the development of effective and safe therapeutic agents for heart failure has proven challenging. In this study, taking advantage of larval zebrafish, we developed a zebrafish heart failure model for drug screening and efficacy assessment. Zebrafish at 2 dpf (days postfertilization) were treated with verapamil at a concentration of 200 μM for 30 min, conditions determined to be optimal for model development. Tested drugs were administered into zebrafish either by direct soaking or by circulation microinjection. After treatment, zebrafish were randomly selected and subjected either to visual observation and image acquisition or to video recording under a Zebralab Blood Flow System. The therapeutic effects of drugs on zebrafish heart failure were quantified by measuring heart dilatation, venous congestion, cardiac output, and blood flow dynamics. All 8 human heart failure therapeutic drugs (LCZ696, digoxin, irbesartan, metoprolol, qiliqiangxin capsule, enalapril, shenmai injection, and hydrochlorothiazide) showed significant preventive and therapeutic effects on zebrafish heart failure (p < 0.05, p < 0.01, and p < 0.001) in the zebrafish model. The larval zebrafish heart failure model developed and validated in this study could be used for in vivo heart failure studies and for rapid screening and efficacy assessment of preventive and therapeutic drugs.

  10. Tensile failure properties of the perinatal, neonatal, and pediatric cadaveric cervical spine.

    PubMed

    Luck, Jason F; Nightingale, Roger W; Song, Yin; Kait, Jason R; Loyd, Andre M; Myers, Barry S; Bass, Cameron R Dale

    2013-01-01

    Biomechanical tensile testing of perinatal, neonatal, and pediatric cadaveric cervical spines to failure. To assess the tensile failure properties of the cervical spine from birth to adulthood. Pediatric cervical spine biomechanical studies have been few due to the limited availability of pediatric cadavers. Therefore, scaled data based on human adult and juvenile animal studies have been used to augment the limited pediatric cadaver data. Despite these efforts, substantial uncertainty remains in our understanding of pediatric cervical spine biomechanics. A total of 24 cadaveric osteoligamentous head-neck complexes, 20 weeks gestation to 18 years, were sectioned into segments (occiput-C2 [O-C2], C4-C5, and C6-C7) and tested in tension to determine axial stiffness, displacement at failure, and load-to-failure. Tensile stiffness-to-failure (N/mm) increased by age (O-C2: 23-fold, neonate: 22 ± 7, 18 yr: 504; C4-C5: 7-fold, neonate: 71 ± 14, 18 yr: 509; C6-C7: 7-fold, neonate: 64 ± 17, 18 yr: 456). Load-to-failure (N) increased by age (O-C2: 13-fold, neonate: 228 ± 40, 18 yr: 2888; C4-C5: 9-fold, neonate: 207 ± 63, 18 yr: 1831; C6-C7: 10-fold, neonate: 174 ± 41, 18 yr: 1720). Normalized displacement at failure (mm/mm) decreased by age (O-C2: 6-fold, neonate: 0.34 ± 0.076, 18 yr: 0.059; C4-C5: 3-fold, neonate: 0.092 ± 0.015, 18 yr: 0.035; C6-C7: 2-fold, neonate: 0.088 ± 0.019, 18 yr: 0.037). Cervical spine tensile stiffness-to-failure and load-to-failure increased nonlinearly, whereas normalized displacement at failure decreased nonlinearly, from birth to adulthood. Pronounced ligamentous laxity observed at younger ages in the O-C2 segment quantitatively supports the prevalence of spinal cord injury without radiographic abnormality in the pediatric population. This study provides important and previously unavailable data for validating pediatric cervical spine models, for evaluating current scaling techniques and animal surrogate models, and for the development of more biofidelic pediatric crash test dummies.

  11. Laser Indirect Shock Welding of Fine Wire to Metal Sheet.

    PubMed

    Wang, Xiao; Huang, Tao; Luo, Yapeng; Liu, Huixia

    2017-09-12

    The purpose of this paper is to present an advanced method for welding fine wire to metal sheet, namely laser indirect shock welding (LISW). This process uses silica gel as a driver sheet to accelerate the metal sheet toward the wire to obtain metallurgical bonding. A series of experiments were implemented to validate the welding ability of Al sheet/Cu wire and Al sheet/Ag wire. It was found that the use of a driver sheet can maintain high surface quality of the metal sheet. With increasing laser pulse energy, the bonding area of the sheet/wire increased and the welding interfaces were nearly flat. Energy dispersive spectroscopy (EDS) results show that intermetallic phases were absent and that only a short element diffusion layer, which would limit the formation of intermetallic phases, emerged at the welding interface. A tensile shear test was used to measure the mechanical strength of the welding joints. The influence of laser pulse energy on the tensile failure modes was investigated, and two failure modes, interfacial failure and failure through the wire, were observed. The nanoindentation test results indicate that as the distance to the welding interface decreased, the microhardness increased due to the plastic deformation becoming more severe.

  12. Warrior Injury Assessment Manikin (WIAMan) Lumbar Spine Model Validation: Development, Testing, and Analysis of Physical and Computational Models of the WIAMan Lumbar Spine Materials Demonstrator

    DTIC Science & Technology

    2016-08-01

    load. The 1 and 10 s-1 rate tests were run on a hydraulic high-rate Instron MTS (8821S), placed in a custom-designed tension fixture (Fig. 8) ... lateral compression prior to shear testing. The sides of the coupon rest on blocks at the bottom of the vice jaw to allow for travel of the center post ... mode of failure based on the lap shear testing. However, since the pretest spine survived all hits at the BRC speeds, it was decided to proceed with

  13. Assessment of Mudrock Brittleness with Micro-scratch Testing

    NASA Astrophysics Data System (ADS)

    Hernandez-Uribe, Luis Alberto; Aman, Michael; Espinoza, D. Nicolas

    2017-11-01

    Mechanical properties are essential for understanding natural and induced deformational behavior of geological formations. Brittleness characterizes energy dissipation rate and strain localization at failure. Brittleness has been investigated in hydrocarbon-bearing mudrocks in order to quantify the impact of hydraulic fracturing on the creation of complex fracture networks and surface area for reservoir drainage. Typical well logging correlations associate brittleness with carbonate content or dynamic elastic properties. However, an index of rock brittleness should involve actual rock failure and have a consistent method to quantify it. Here, we present a systematic method to quantify mudrock brittleness based on micro-mechanical measurements from the scratch test. Brittleness is formulated as the ratio of energy associated with brittle failure to the total energy required to perform a scratch. Soda lime glass and polycarbonate are used for comparison to identify failure in brittle and ductile mode and validate the developed method. Scratch testing results on mudrocks indicate that it is possible to use the recorded transverse force to estimate brittleness. Results show that tested samples rank as follows in increasing degree of brittleness: Woodford, Eagle Ford, Marcellus, Mancos, and Vaca Muerta. Eagle Ford samples show mixed ductile/brittle failure characteristics. There appears to be no definite correlation between micro-scratch brittleness and quartz or total carbonate content. Dolomite content shows a stronger correlation with brittleness than any other major mineral group. The scratch brittleness index correlates positively with increasing Young's modulus and decreasing Poisson's ratio, but shows deviations in rocks with distinct porosity and with stress-sensitive brittle/ductile behavior (Eagle Ford). The results of our study demonstrate that the micro-scratch test method can be used to investigate mudrock brittleness. The method is particularly useful for reservoir characterization methods that take advantage of drill cuttings or whenever large samples for triaxial testing or fracture mechanics testing cannot be recovered.
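
    As described above, the brittleness index is the ratio of the energy associated with brittle failure to the total energy of the scratch, both obtained from the recorded transverse force. The sketch below uses a made-up force trace and a simple drop-detection rule as stand-ins for the paper's data and exact formulation.

      # Sketch: scratch-test brittleness index = energy dissipated in brittle (abrupt force-drop)
      # segments divided by total scratch energy. Force trace and tagging rule are illustrative.
      import numpy as np

      def integrate(force, x):
          """Trapezoidal integral of transverse force over scratch distance (energy, N*mm)."""
          return float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(x)))

      def tag_brittle(force, drop=1.0):
          """Mark samples from each abrupt force drop until the force recovers to its pre-drop level."""
          brittle = np.zeros(force.shape, dtype=bool)
          recover_to = None
          for i in range(1, len(force)):
              if recover_to is None and force[i] < force[i - 1] - drop:
                  recover_to = force[i - 1]      # abrupt drop: enter a brittle segment
              if recover_to is not None:
                  brittle[i] = True
                  if force[i] >= recover_to:     # force recovered: segment ends
                      recover_to = None
          return brittle

      def brittleness_index(x, force):
          """Ratio of energy spent in brittle segments to the total energy of the scratch."""
          return integrate(np.where(tag_brittle(force), force, 0.0), x) / integrate(force, x)

      x = np.linspace(0.0, 10.0, 201)   # scratch distance, mm
      f = np.full_like(x, 5.0)          # transverse force, N
      f[60:72] -= 2.0                   # two abrupt load drops standing in for
      f[140:155] -= 2.5                 # brittle chipping events
      print(f"brittleness index ~ {brittleness_index(x, f):.2f}")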

  14. Diagnosis of Fanconi anemia in patients with bone marrow failure

    PubMed Central

    Pinto, Fernando O.; Leblanc, Thierry; Chamousset, Delphine; Le Roux, Gwenaelle; Brethon, Benoit; Cassinat, Bruno; Larghero, Jérôme; de Villartay, Jean-Pierre; Stoppa-Lyonnet, Dominique; Baruchel, André; Socié, Gérard; Gluckman, Eliane; Soulier, Jean

    2009-01-01

    Background Patients with bone marrow failure and undiagnosed underlying Fanconi anemia may experience major toxicity if given standard-dose conditioning regimens for hematopoietic stem cell transplant. Due to clinical variability and/or potential emergence of genetic reversion with hematopoietic somatic mosaicism, a straightforward Fanconi anemia diagnosis can be difficult to make, and diagnostic strategies combining different assays in addition to classical breakage tests in blood may be needed. Design and Methods We evaluated Fanconi anemia diagnosis on blood lymphocytes and skin fibroblasts from a cohort of 87 bone marrow failure patients (55 children and 32 adults) with no obvious full clinical picture of Fanconi anemia, by performing a combination of chromosomal breakage tests, FANCD2-monoubiquitination assays, a new flow cytometry-based mitomycin C sensitivity test in fibroblasts, and, when Fanconi anemia was diagnosed, complementation group and mutation analyses. The mitomycin C sensitivity test in fibroblasts was validated on control Fanconi anemia and non-Fanconi anemia samples, including other chromosomal instability disorders. Results When this diagnosis strategy was applied to the cohort of bone marrow failure patients, 7 Fanconi anemia patients were found (3 children and 4 adults). Classical chromosomal breakage tests in blood detected 4, but analyses on fibroblasts were necessary to diagnose 3 more patients with hematopoietic somatic mosaicism. Importantly, Fanconi anemia was excluded in all the other patients who were fully evaluated. Conclusions In this large cohort of patients with bone marrow failure our results confirmed that when any clinical/biological suspicion of Fanconi anemia remains after chromosome breakage tests in blood, based on physical examination, history or inconclusive results, then further evaluation including fibroblast analysis should be made. For that purpose, the flow-based mitomycin C sensitivity test here described proved to be a reliable alternative method to evaluate Fanconi anemia phenotype in fibroblasts. This global strategy allowed early and accurate confirmation or rejection of Fanconi anemia diagnosis with immediate clinical impact for those who underwent hematopoietic stem cell transplant. PMID:19278965

  15. Health status in patients with coexistent COPD and heart failure: a validation and comparison between the Clinical COPD Questionnaire and the Minnesota Living with Heart Failure Questionnaire

    PubMed Central

    Berkhof, Farida F; Metzemaekers, Leola; Uil, Steven M; Kerstjens, Huib AM; van den Berg, Jan WK

    2014-01-01

    Background Chronic obstructive pulmonary disease (COPD) and heart failure (HF) are both common diseases that coexist frequently. Patients with both diseases have worse stable state health status when compared with patients with one of these diseases. In many outpatient clinics, health status is monitored routinely in COPD patients using the Clinical COPD Questionnaire (CCQ) and in HF patients with the Minnesota Living with Heart Failure Questionnaire (MLHF-Q). This study validated and compared which questionnaire, ie, the CCQ or the MLHF-Q, is suited best for patients with coexistent COPD and HF. Methods Patients with both COPD and HF and aged ≥40 years were included. Construct validity, internal consistency, test–retest reliability, and agreement were determined. The Short-Form 36 was used as the external criterion. All questionnaires were completed at baseline. The CCQ and MLHF-Q were repeated after 2 weeks, together with a global rating of change. Results Fifty-eight patients were included, of whom 50 completed the study. Construct validity was acceptable. Internal consistency was adequate for CCQ and MLHF-Q total and domain scores, with a Cronbach’s alpha ≥0.70. Reliability was adequate for MLHF-Q and CCQ total and domain scores, and intraclass correlation coefficients were 0.70–0.90, except for the CCQ symptom score (intraclass correlation coefficient 0.42). The standard error of measurement on the group level was smaller than the minimal clinical important difference for both questionnaires. However, the standard error of measurement on the individual level was larger than the minimal clinical important difference. Agreement was acceptable on the group level and limited on the individual level. Conclusion CCQ and MLHF-Q were both valid and reliable questionnaires for assessment of health status in patients with coexistent COPD and HF on the group level, and hence for research. However, in clinical practice, on the individual level, the characteristics of both questionnaires were not as good. There is room for a questionnaire with good evaluative properties on the individual level, preferably tested in a setting of patients with COPD or HF, or both. PMID:25285000

  16. Comparison of credible patients of very low intelligence and non-credible patients on neurocognitive performance validity indicators.

    PubMed

    Smith, Klayton; Boone, Kyle; Victor, Tara; Miora, Deborah; Cottingham, Maria; Ziegler, Elizabeth; Zeller, Michelle; Wright, Matthew

    2014-01-01

    The purpose of this archival study was to identify performance validity tests (PVTs) and standard IQ and neurocognitive test scores which, singly or in combination, differentiate credible patients of low IQ (FSIQ ≤ 75; n = 55) from non-credible patients. We compared the credible participants against a sample of 74 non-credible patients who appeared to have been attempting to feign low intelligence specifically (FSIQ ≤ 75), as well as a larger non-credible sample (n = 383) unselected for IQ. The entire non-credible group scored significantly higher than the credible participants on measures of verbal crystallized intelligence/semantic memory and manipulation of overlearned information, while the credible group performed significantly better on many processing speed and memory tests. Additionally, credible women showed faster finger-tapping speeds than non-credible women. The credible group also scored significantly higher than the non-credible subgroup with low IQ scores on measures of attention, visual perceptual/spatial tasks, processing speed, verbal learning/list learning, and visual memory, and credible women continued to outperform non-credible women on finger tapping. When cut-offs were selected to maintain approximately 90% specificity in the credible group, sensitivity rates were highest for verbal and visual memory measures (i.e., TOMM trials 1 and 2; Warrington Words correct and time; Rey Word Recognition Test total; RAVLT Effort Equation, Trial 5, total across learning trials, short delay, recognition, and RAVLT/RO discriminant function; and Digit Symbol recognition), followed by select attentional PVT scores (i.e., b Test omissions and time to recite four digits forward). When failure rates were tabulated across the seven most sensitive scores, a cut-off of ≥ 2 failures was associated with 85.4% specificity and 85.7% sensitivity, while a cut-off of ≥ 3 failures resulted in 95.1% specificity and 66.0% sensitivity. Results are discussed in light of extant literature and directions for future research.

  17. The reliability and validity of Chinese version of SF36 v2 in aging patients with chronic heart failure.

    PubMed

    Dong, Aishu; Chen, Sisi; Zhu, Lianlian; Shi, Lingmin; Cai, Yueli; Zeng, Jingni; Guo, Wenjian

    2017-08-01

    Chronic heart failure (CHF), a major public health problem worldwide, seriously limits health-related quality of life (HRQOL). How to evaluate HRQOL in older patients with CHF remains a problem. To evaluate the reliability and validity of the Chinese version of the Medical Outcomes Study Short Form version 2 (SF-36v2) in CHF patients. From September 2012 to June 2014, we assessed QOL using the SF-36v2 in 171 aging participants with CHF in four cardiology departments. Convergent and discriminant validity, factorial validity, sensitivity among different NYHA classes and between different age groups, and reliability were determined using standard measurement methods. A total of 150 participants completed a structured questionnaire including general information and the Chinese SF-36v2; 132 questionnaires were considered valid, while 21 patients refused to take part. Twenty-five of the 50 participants invited to complete the 2-week test-retest questionnaires returned completed questionnaires. The internal consistency reliability (Cronbach's α) of the total SF-36v2 was 0.92 (range 0.74-0.93). All hypothesized item-subscale correlations showed satisfactory convergent and discriminant validity. Sensitivity was measured in different NYHA classes and age groups. Comparison of different NYHA classes showed statistical significance, but there was no significant difference between age groups. We confirmed the SF-36v2 as a valid instrument for evaluating HRQOL in Chinese CHF patients. Both reliability and validity were strongly satisfactory, but there was divergence in understanding subscales such as "social functioning" because of differing cultural backgrounds. The reliability, validity, and sensitivity of SF-36v2 in aging patients with CHF were acceptable.

  18. Standard intelligence tests are valid instruments for measuring the intellectual potential of urban children: comments on pitfalls in the measurement of intelligence.

    PubMed

    Sattler, J M

    1979-05-01

    Hardy, Welcher, Mellitis, and Kagan altered standard WISC administrative and scoring procedures and, from the resulting higher subtest scores, concluded that IQs based on standardized tests are inappropriate measures for inner-city children. Careful examination of their study reveals many methodological inadequacies and problematic interpretations. Three of these are as follows: (a) failure to use any external criterion to evaluate the validity of their testing-of-limits procedures; (b) the possibility of examiner and investigator bias; and (c) lack of any comparison group that might demonstrate that poor children would be helped more than others by the probes recommended. Their report creates misleading doubts about existing intelligence tests and does a disservice to inner-city children who need the benefits of the judicious use of diagnostic procedures, which include standardized intelligence tests. Consequently, their assertion concerning the inappropriateness of standardized test results for inner-city children is not only premature and misleading, but it is unwarranted as well.

  19. Monitoring of unstable slopes by MEMS tilting sensors and its application to early warning

    NASA Astrophysics Data System (ADS)

    Towhata, I.; Uchimura, T.; Seko, I.; Wang, L.

    2015-09-01

    The present paper addresses a newly developed early warning technology that can help mitigate slope failure disasters during heavy rains. Many studies have been carried out in recent times on early warning based on rainfall records. Although such rainfall criteria indicate the probability of disaster on a regional scale, they cannot easily judge the risk of particular slopes. This is because the rainfall intensity is spatially too variable to forecast, and early warning based on rainfall alone cannot take into account the effects of local geology, hydrology, and topography, which vary spatially as well. In this regard, the authors developed an alternative technology in which the slope displacement/deformation is monitored and an early warning is issued when a new criterion is satisfied. The new MEMS-based sensor monitors the tilting angle of an instrument embedded at a very shallow depth, and the record of the tilting angle corresponds to the lateral displacement at the slope surface. Thus, a rate of tilting that exceeds the criterion value implies an imminent slope failure. This technology has been validated against several slope failure events as well as against a field rainfall test. Those validations have made it possible to set the criterion value of the tilting rate at 0.1 degree/hour. The advantage of the MEMS tilting sensor lies in its low cost. Hence, it is possible to install many low-cost sensors over a suspected slope for which the precise extent of the mass that may fail during the next rainfall is unknown. In addition to the past validations, this paper also introduces a recent application to a failed slope on Izu Oshima Island, where a heavy rainfall-induced slope failure occurred in October 2013.
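
    The only quantitative element of the warning rule above is the tilting-rate criterion of 0.1 degree/hour; a minimal sketch of screening a sensor time series against it follows (the readings and sampling interval are invented).

      # Sketch: early-warning check against the tilting-rate criterion of 0.1 degree/hour.
      # Sensor readings and sampling interval are made up; only the 0.1 deg/h threshold is from the text.

      def warn_indices(tilt_deg, dt_hours, criterion_deg_per_hour=0.1):
          """Return sample indices at which the tilting rate meets or exceeds the criterion."""
          return [i for i in range(1, len(tilt_deg))
                  if (tilt_deg[i] - tilt_deg[i - 1]) / dt_hours >= criterion_deg_per_hour]

      readings = [0.00, 0.01, 0.02, 0.02, 0.05, 0.17, 0.35]   # tilt angle (deg), one sample per hour
      print(warn_indices(readings, dt_hours=1.0))              # -> [5, 6]: issue an early warning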

  20. Assessing Spoken Language Competence in Children with Selective Mutism: Using Parents as Test Presenters

    ERIC Educational Resources Information Center

    Klein, Evelyn R.; Armstrong, Sharon Lee; Shipon-Blum, Elisa

    2013-01-01

    Children with selective mutism (SM) display a failure to speak in select situations despite speaking when comfortable. The purpose of this study was to obtain valid assessments of receptive and expressive language in 33 children (ages 5 to 12) with SM. Because some children with SM will speak to parents but not a professional, another purpose was…

  1. Using ACT Subscores to Identify at Risk Students in Business Statistics and Principles of Management Courses

    ERIC Educational Resources Information Center

    Welborn, Cliff Alan; Lester, Don; Parnell, John

    2015-01-01

    The American College Test (ACT) is utilized to determine academic success in business core courses at a midlevel state university. Specifically, subscores are compared to subsequent student grades in selected courses. The results indicate that ACT Mathematics and English subscores are a valid predictor of success or failure of business students in…

  2. Markov Chains For Testing Redundant Software

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Sjogren, Jon A.

    1990-01-01

    Preliminary design developed for a validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. The approach takes into account the inertia of the controlled system, in the sense that it takes more than one failure of the control program to cause the controlled system to fail. The verification procedure consists of two steps: experimentation (numerical simulation) and computation, with a Markov model for each step.

  3. Damage tolerance of pressurized graphite/epoxy tape cylinders under uniaxial and biaxial loading. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Priest, Stacy Marie

    1993-01-01

    The damage tolerance behavior of internally pressurized, axially slit, graphite/epoxy tape cylinders was investigated. Specifically, the effects of axial stress, structural anisotropy, and subcritical damage were considered. In addition, the limitations of a methodology which uses coupon fracture data to predict cylinder failure were explored. This predictive methodology was previously shown to be valid for quasi-isotropic fabric and tape cylinders but invalid for structurally anisotropic (+/-45/90)(sub s) and (+/-45/0)(sub s) cylinders. The effects of axial stress and structural anisotropy were assessed by testing tape cylinders with (90/0/+/-45)(sub s), (+/-45/90)(sub s), and (+/-45/0)(sub s) layups in a uniaxial test apparatus, specially designed and built for this work, and comparing the results to previous tests conducted in biaxial loading. Structural anisotropy effects were also investigated by testing cylinders with the quasi-isotropic (0/+/-45/90)(sub s) layup which is a stacking sequence variation of the previously tested (90/0/+/-45)(sub s) layup with higher D(sub 16) and D(sub 26) terms but comparable D(sub 16) and D(sub 26) to D(sub 11) ratios. All cylinders tested and used for comparison are made from AS4/3501-6 graphite/epoxy tape and have a diameter of 305 mm. Cylinder slit lengths range from 12.7 to 50.8 mm. Failure pressures are lower for the uniaxially loaded cylinders in all cases. The smallest percent failure pressure decreases are observed for the (+/-45/90)(sub s) cylinders, while the greatest such decreases are observed for the (+/-45/0)(sub s) cylinders. The relative effects of the axial stress on the cylinder failure pressures do not correlate with the degree of structural coupling. The predictive methodology is not applicable for uniaxially loaded (+/-45/90)(sub s) and (+/-45/0)(sub s) cylinders, may be applicable for uniaxially loaded (90/0/+/-45)(sub s) cylinders, and is applicable for the biaxially loaded (90/0/+/-45)(sub s) and (0/+/-45/90)(sub s) cylinders. This indicates that the ratios of D(sub 16) and D(sub 26) to D(sub 11), as opposed to the absolute magnitudes of D(sub 16) and D(sub 26), may be important in the failure of these cylinders and in the applicability of the methodology. Discontinuities observed in the slit tip hoop strains for all the cylinders tested indicate that subcritical damage can play an important role in the failure of tape cylinders. This role varies with layup and loading condition and is likely coupled to the effects of structural anisotropy. Biaxial failure pressures may exceed the uniaxial values because the axial stress contributes to the formation of 0 deg ply splitting (accompanied by delamination) or similar stress-mitigating subcritical damage. The failure behavior of similar cylinders can also vary as a result of differences in the role of subcritical damage as observed for the case of a biaxially loaded (90/0/+/-45)(sub s) cylinder with a 12.7 mm slit. For this case, the methodology is valid when the initial coupon and cylinder fracture modes agree. However, the methodology underpredicts the failure pressure of the cylinder when a circumferential fracture path, suggestive of a 0 deg ply split, occurs at one slit tip. Thus, the failure behavior of some tape cylinders may be highly sensitive to the initial subcritical damage mechanism. 
Finite element analyses are recommended to determine how structural anisotropy and axial stress modify the slit tip stress states in cylinders from those found in flat plates since similarity of these stress states is a fundamental assumption of the current predictive methodology.

  4. Compression After Impact on Honeycomb Core Sandwich Panels With Thin Facesheets. Part 1; Experiments

    NASA Technical Reports Server (NTRS)

    McQuigg, Thomas D.; Kapania, Rakesh K.; Scotti, Stephen J.; Walker, Sandra P.

    2012-01-01

    A two part research study has been completed on the topic of compression after impact (CAI) of thin facesheet honeycomb core sandwich panels. The research has focused on both experiments and analysis in an effort to establish and validate a new understanding of the damage tolerance of these materials. Part one, the subject of the current paper, is focused on the experimental testing. Of interest are sandwich panels, with aerospace applications, which consist of very thin, woven S2-fiberglass (with MTM45-1 epoxy) facesheets adhered to a Nomex honeycomb core. Two sets of specimens, which were identical with the exception of the density of the honeycomb core, were tested. Static indentation and low velocity impact using a drop tower are used to study damage formation in these materials. A series of highly instrumented CAI tests was then completed. New techniques used to observe CAI response and failure include high speed video photography, as well as digital image correlation (DIC) for full-field deformation measurement. Two CAI failure modes, indentation propagation, and crack propagation, were observed. From the results, it can be concluded that the CAI failure mode of these panels depends solely on the honeycomb core density.

  5. Finite element analysis of steel fiber-reinforced concrete (SFRC): validation of experimental tensile capacity of dog-bone specimens

    NASA Astrophysics Data System (ADS)

    Islam, Md. Mashfiqul; Chowdhury, Md. Arman; Sayeed, Md. Abu; Hossain, Elsha Al; Ahmed, Sheikh Saleh; Siddique, Ashfia

    2014-09-01

    Finite element analyses are conducted to model the tensile capacity of steel fiber-reinforced concrete (SFRC). For this purpose, dog-bone specimens are cast and tested under direct, uniaxial tension. Two types of aggregates (brick and stone) are used to cast the SFRC and plain concrete. The fiber volume ratio is maintained at 1.5 %. A total of 8 dog-bone specimens are made and tested in a 1000-kN capacity digital universal testing machine (UTM). The strain data are gathered using the digital image correlation technique applied to high-definition images and high-speed video clips. The strain data are then synthesized with the load data obtained from the load cell of the UTM. The tensile capacity enhancement of brick SFRC is found to be 182-253 % compared with the control specimens, and for stone SFRC the enhancement is 157-268 %. Fibers are found to enhance the tensile capacity as well as the ductility of concrete, which helps prevent sudden brittle failure. The dog-bone specimens are modeled on the ANSYS 10.0 finite element platform and analyzed to model the tensile capacity of brick and stone SFRC. The SOLID65 element is used to model the SFRC as well as the plain concretes by calibrating the Poisson's ratio, modulus of elasticity, tensile strength, and stress-strain relationships, as well as the failure patterns and failure locations. This research provides information on the tensile capacity enhancement of SFRC made with both brick and stone aggregates, which will help the construction industry of Bangladesh introduce this engineering material in earthquake-resistant design. Finally, the finite element outputs are found to be in good agreement with the experimental tensile capacities, which validates the FE modeling.

  6. Modelling of Safety Instrumented Systems by using Bernoulli trials: towards the notion of odds on for SIS failures analysis

    NASA Astrophysics Data System (ADS)

    Cauffriez, Laurent

    2017-01-01

    This paper deals with the modeling of the random failure process of a Safety Instrumented System (SIS). It aims to identify the expected number of failures for a SIS during its lifecycle. Because a SIS is tested periodically, it is natural to apply Bernoulli trials to characterize the random failure process of a SIS and thus to verify whether the PFD (Probability of Failing Dangerously) obtained experimentally agrees with the theoretical one. Moreover, the notion of "odds on" found in Bernoulli theory allows engineers and scientists to easily determine the ratio between “outcomes with success: failure of SIS” and “outcomes with unsuccess: no failure of SIS” and to confirm that SIS failures occur sporadically. A stochastic P-temporised Petri net is proposed and serves as a reference model for describing the failure process of a 1oo1 SIS architecture. Simulations of this stochastic Petri net demonstrate that, during its lifecycle, the SIS is rarely in a state in which it cannot perform its mission. Experimental results are compared with Bernoulli trials in order to validate the power of Bernoulli trials for modeling the failure process of a SIS. The determination of the expected number of failures for a SIS during its lifecycle opens interesting research perspectives for engineers and scientists by complementing the notion of PFD.
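
    Viewing each proof-test interval as a Bernoulli trial, as the abstract proposes, leads directly to an expected failure count and an "odds on" ratio. The sketch below uses an assumed PFD and lifecycle length purely for illustration.

      # Sketch: expected number of dangerous SIS failures over a lifecycle, modeling each
      # proof-test interval as a Bernoulli trial. The PFD and lifecycle figures are illustrative.

      def expected_failures(p_fail_per_interval, n_intervals):
          """Mean failure count and per-trial odds for a Bernoulli process."""
          expected = n_intervals * p_fail_per_interval
          odds_on = p_fail_per_interval / (1.0 - p_fail_per_interval)  # failure : no-failure per trial
          return expected, odds_on

      pfd = 5e-3        # assumed average probability of failing dangerously per test interval
      intervals = 15    # e.g., a 15-year lifecycle with one proof test per year
      mean, odds = expected_failures(pfd, intervals)
      print(f"expected failures over lifecycle = {mean:.3f}, per-interval odds on = 1 : {1/odds:.0f}")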

  7. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue).

    PubMed

    Larrabee, Glenn J

    2014-01-01

    Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
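
    The false-positive argument here is essentially binomial: if each PVT had a common per-test false-positive rate in credible patients and failures were independent, the chance of failing at least 2 of 7 tests follows directly (independence is a simplifying assumption; correlated PVTs would change the figure).

      # Sketch: false-positive probability of failing >= k of n PVTs, assuming independent tests
      # with a common per-test false-positive rate (independence is a simplifying assumption).
      from math import comb

      def p_fail_at_least(k, n, p):
          """P(number of failed PVTs >= k) under a Binomial(n, p) model."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      # e.g., 7 PVTs each with a 10% false-positive rate in credible patients
      print(f"P(>=2 of 7 failed) = {p_fail_at_least(2, 7, 0.10):.3f}")   # ~0.150
      print(f"P(>=3 of 7 failed) = {p_fail_at_least(3, 7, 0.10):.3f}")   # ~0.026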

  8. Impact of Gate 99mTc DTPA GFR, Serum Creatinine and Urea in Diagnosis of Patients with Chronic Kidney Failure

    PubMed Central

    Miftari, Rame; Nura, Adem; Topçiu-Shufta, Valdete; Miftari, Valon; Murseli, Arbenita; Haxhibeqiri, Valdete

    2017-01-01

    Aim: The aim of this study was to determine the validity of 99mTc DTPA estimation of GFR for early detection of chronic kidney failure. Material and methods: There were 110 patients (54 males and 56 females) with kidney disease referred for evaluation of renal function at the UCC of Kosovo. Patients were divided into two groups. The first group included 30 patients with confirmed renal failure, whereas the second group included 80 patients with other renal diseases. Only patients with available results for creatinine, urea, and glucose in blood serum were included in the study. GFR was estimated using the Gate GFR DTPA method. Statistical data processing used the arithmetic average, Student's t-test, percentages, and the sensitivity, specificity, and accuracy of the test. Results: The average age of all patients was 36 years. The average age of the females was 37 and of the males 35. Patients with renal failure were significantly older than patients with other renal diseases (p<0.005). Renal failure was found in 30 patients (27.27%). The concentrations of urea and creatinine in the blood serum of patients with renal failure were significantly higher than in patients with other renal diseases (P< 0.00001). GFR in patients with renal failure was significantly lower than in patients with other renal diseases, 51.75 ml/min (p<0.00001). The sensitivity of uremia and creatininemia for detection of renal failure was 83.33%, whereas the sensitivity of 99mTc DTPA GFR was 100%. The specificity of uremia and creatininemia was 63%, whereas the specificity of 99mTc DTPA GFR was 47.5%. The diagnostic accuracy of blood urea and creatinine in detecting renal failure was 69%, whereas the diagnostic accuracy of 99mTc DTPA GFR was 61.8%. Conclusion: Gate 99mTc DTPA scintigraphy combined with biochemical tests is a very sensitive method for early detection of patients with chronic renal failure. PMID:28883673
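
    The reported sensitivity, specificity, and accuracy figures follow from a standard 2x2 confusion matrix. The counts in the sketch below are placeholders chosen only to be roughly consistent with the urea/creatinine percentages quoted above; they are not the study's tabulation.

      # Sketch: sensitivity, specificity, and diagnostic accuracy from a 2x2 confusion matrix.
      # The counts are placeholders, not the study's exact tabulation.

      def diagnostic_metrics(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)            # detected renal failure / all renal failure
          specificity = tn / (tn + fp)            # correctly excluded / all without renal failure
          accuracy = (tp + tn) / (tp + fn + tn + fp)
          return sensitivity, specificity, accuracy

      sens, spec, acc = diagnostic_metrics(tp=25, fn=5, tn=50, fp=30)
      print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")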

  9. Cross-validation of the Dot Counting Test in a large sample of credible and non-credible patients referred for neuropsychological testing.

    PubMed

    McCaul, Courtney; Boone, Kyle B; Ermshar, Annette; Cottingham, Maria; Victor, Tara L; Ziegler, Elizabeth; Zeller, Michelle A; Wright, Matthew

    2018-01-18

    To cross-validate the Dot Counting Test in a large neuropsychological sample. Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals. Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores. While the original E-score cut-off of ≥17 achieved excellent specificity (96.5%), it was associated with mediocre sensitivity (52.8%). However, the cut-off could be substantially lowered to ≥13.80, while still maintaining adequate specificity (≥90%), and raising sensitivity to 70.0%. Examination of non-credible subgroups revealed that Dot Counting Test sensitivity in feigned mild traumatic brain injury (mTBI) was 55.8%, whereas sensitivity was 90.6% in patients with non-credible cognitive dysfunction in the context of claimed psychosis, and 81.0% in patients with non-credible cognitive performance in depression or severe TBI. Thus, the Dot Counting Test may have a particular role in detection of non-credible cognitive symptoms in claimed psychiatric disorders. As an alternative to use of the E-score, failure on ≥1 of the cut-offs applied to individual Dot Counting Test scores (≥6.0″ for mean grouped dot counting time, ≥10.0″ for mean ungrouped dot counting time, and ≥4 errors) occurred in 11.3% of the credible sample, while nearly two-thirds (63.6%) of the non-credible sample failed one or more of these cut-offs. An E-score cut-off of 13.80, or failure on ≥1 individual score cut-offs, resulted in few false positive identifications in credible patients and achieved high sensitivity (64.0-70.0%), and therefore appears appropriate for use in identifying neurocognitive performance invalidity.
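
    The alternative decision rule described above, failure on one or more of three individual score cut-offs, is simple to express; the sketch below uses the cut-offs stated in the abstract, with invented score values.

      # Sketch: the individual-score decision rule from the abstract -- flag performance invalidity
      # when one or more cut-offs are met. Example score values are invented.

      def dot_counting_flag(mean_grouped_s, mean_ungrouped_s, errors):
          """Cut-offs from the abstract: >=6.0 s grouped, >=10.0 s ungrouped, >=4 errors."""
          failed = [mean_grouped_s >= 6.0, mean_ungrouped_s >= 10.0, errors >= 4]
          return sum(failed) >= 1

      print(dot_counting_flag(mean_grouped_s=4.2, mean_ungrouped_s=8.5, errors=1))   # False
      print(dot_counting_flag(mean_grouped_s=6.8, mean_ungrouped_s=9.0, errors=2))   # True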

  10. Acetaminophen Adducts Detected in Serum of Pediatric Patients With Acute Liver Failure.

    PubMed

    Alonso, Estella M; James, Laura P; Zhang, Song; Squires, Robert H

    2015-07-01

    Previous studies in patients with acute liver failure identified acetaminophen (APAP) protein adducts in the serum of 12% and 19% of children and adults, respectively, with acute liver failure of indeterminate etiology. This article details the testing of APAP adducts in a subset (n = 393) of patients with varied diagnoses in the Pediatric Acute Liver Failure Study Group (PALFSG). Serum samples were available from 393 participants included in the PALFSG registry. Adduct measurement was performed using validated methods. Participants were grouped by diagnostic category as known APAP overdose, known other diagnosis, and indeterminate etiology. Demographic and clinical characteristics and participant outcomes were compared by adduct status (positive or negative) within each group. APAP adduct testing was positive in 86% of participants with known APAP overdose, 6% with other known diagnoses, and 11% with an indeterminate cause of liver failure. Adduct-positive participants were noted to have marked elevation of serum alanine aminotransferase and aspartate aminotransferase coupled with total serum bilirubin that was significantly lower than in adduct-negative patients. In the indeterminate group, adduct-positive patients had different outcomes than adduct-negative patients (P = 0.03); spontaneous survival was 16 of 21 (76%) in adduct-positive patients versus 75 of 169 (44%) in adduct-negative patients. Prognosis did not vary by adduct status in patients with known diagnoses. Further study is needed to understand the relation of APAP exposure, as determined by the presence of APAP adducts, to the clinical phenotype and outcomes of children with acute liver failure.

  11. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  12. The prone bridge test: Performance, validity, and reliability among older and younger adults.

    PubMed

    Bohannon, Richard W; Steffl, Michal; Glenney, Susan S; Green, Michelle; Cashwell, Leah; Prajerova, Kveta; Bunn, Jennifer

    2018-04-01

    The prone bridge maneuver, or plank, has been viewed as a potential alternative to curl-ups for assessing trunk muscle performance. The purpose of this study was to assess prone bridge test performance, validity, and reliability among younger and older adults. Sixty younger (20-35 years old) and 60 older (60-79 years old) participants completed this study. Groups were evenly divided by sex. Participants completed surveys regarding physical activity and abdominal exercise participation. Height, weight, body mass index (BMI), and waist circumference were measured. On two occasions, 5-9 days apart, participants held a prone bridge until volitional exhaustion or until repeated technique failure. Validity was examined using data from the first session: convergent validity by calculating correlations between survey responses, anthropometrics, and prone bridge time, and known-groups validity by using an ANOVA comparing bridge times of younger and older adults and of men and women. Test-retest reliability was examined by using a paired t-test to compare prone bridge times for Session 1 and Session 2. Furthermore, an intraclass correlation coefficient (ICC) was used to characterize relative reliability and minimal detectable change (MDC95) was used to describe absolute reliability. The mean prone bridge time was 145.3 ± 71.5 s, and was positively correlated with physical activity participation (p ≤ 0.001) and negatively correlated with BMI and waist circumference (p ≤ 0.003). Younger participants had significantly longer plank times than older participants (p = 0.003). The ICC between testing sessions was 0.915. The prone bridge test is a valid and reliable measure for evaluating abdominal performance in both younger and older adults.
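
    The absolute-reliability quantities mentioned above (SEM and MDC95) follow from the ICC and the between-subject standard deviation via standard formulas. The sketch below plugs in the reported ICC of 0.915 and, as a simplification, the session-1 standard deviation of 71.5 s.

      # Sketch: standard error of measurement and minimal detectable change from test-retest data.
      # ICC (0.915) and SD (71.5 s) are taken from the abstract; using the session-1 SD is a simplification.
      from math import sqrt

      def sem_and_mdc95(sd_seconds, icc):
          sem = sd_seconds * sqrt(1.0 - icc)    # standard error of measurement
          mdc95 = 1.96 * sqrt(2.0) * sem        # minimal detectable change, 95% confidence
          return sem, mdc95

      sem, mdc = sem_and_mdc95(sd_seconds=71.5, icc=0.915)
      print(f"SEM ~ {sem:.1f} s, MDC95 ~ {mdc:.1f} s")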

  13. Pitfalls in Prediction Modeling for Normal Tissue Toxicity in Radiation Therapy: An Illustration With the Individual Radiation Sensitivity and Mammary Carcinoma Risk Factor Investigation Cohorts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbah, Chamberlain, E-mail: chamberlain.mbah@ugent.be; Department of Mathematical Modeling, Statistics, and Bioinformatics, Faculty of Bioscience Engineering, Ghent University, Ghent; Thierens, Hubert

    Purpose: To identify the main causes underlying the failure of prediction models for radiation therapy toxicity to replicate. Methods and Materials: Data were used from two German cohorts, Individual Radiation Sensitivity (ISE) (n=418) and Mammary Carcinoma Risk Factor Investigation (MARIE) (n=409), of breast cancer patients with similar characteristics and radiation therapy treatments. The toxicity endpoint chosen was telangiectasia. The LASSO (least absolute shrinkage and selection operator) logistic regression method was used to build a predictive model for a dichotomized endpoint (Radiation Therapy Oncology Group/European Organization for the Research and Treatment of Cancer score 0, 1, or ≥2). Internal areas under the receiver operating characteristic curve (inAUCs) were calculated by a naïve approach whereby the training data (ISE) were also used for calculating the AUC. Cross-validation was also applied to calculate the AUC within the same cohort, a second type of inAUC. Internal AUCs from cross-validation were calculated within ISE and MARIE separately. Models trained on one dataset (ISE) were applied to a test dataset (MARIE) and AUCs calculated (exAUCs). Results: Internal AUCs from the naïve approach were generally larger than inAUCs from cross-validation owing to overfitting the training data. Internal AUCs from cross-validation were also generally larger than the exAUCs, reflecting heterogeneity in the predictors between cohorts. The best models with largest inAUCs from cross-validation within both cohorts had a number of common predictors: hypertension, normalized total boost, and presence of estrogen receptors. Surprisingly, the effect (coefficient in the prediction model) of hypertension on telangiectasia incidence was positive in ISE and negative in MARIE. Other predictors were also not common between the 2 cohorts, illustrating that overcoming overfitting does not solve the problem of replication failure of prediction models completely. Conclusions: Overfitting and cohort heterogeneity are the 2 main causes of replication failure of prediction models across cohorts. Cross-validation and similar techniques (eg, bootstrapping) cope with overfitting, but the development of validated predictive models for radiation therapy toxicity requires strategies that deal with cohort heterogeneity.
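
    Illustrative sketch of the three AUC estimates contrasted above (a naïve inAUC computed on the training cohort itself, a cross-validated inAUC within the same cohort, and an exAUC on the other cohort), using scikit-learn's L1-penalized logistic regression as a stand-in for the LASSO procedure. The cohort matrices are synthetic placeholders for the ISE and MARIE data, and the regularization strength C is arbitrary.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_score

      def auc_comparison(X_train, y_train, X_test, y_test):
          """Return (naive inAUC, cross-validated inAUC, exAUC) for an L1 logistic model."""
          lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
          lasso.fit(X_train, y_train)
          # Naive inAUC: the training cohort is also used for evaluation (prone to overfitting).
          naive_in_auc = roc_auc_score(y_train, lasso.predict_proba(X_train)[:, 1])
          # Cross-validated inAUC within the training cohort.
          cv_in_auc = cross_val_score(lasso, X_train, y_train, cv=5, scoring="roc_auc").mean()
          # exAUC: the model trained on one cohort is applied to the other cohort.
          ex_auc = roc_auc_score(y_test, lasso.predict_proba(X_test)[:, 1])
          return naive_in_auc, cv_in_auc, ex_auc

      rng = np.random.default_rng(0)
      # Synthetic placeholder cohorts standing in for ISE (training) and MARIE (test).
      X_ise, X_marie = rng.normal(size=(418, 10)), rng.normal(size=(409, 10))
      y_ise = (X_ise[:, 0] + rng.normal(size=418) > 0).astype(int)
      y_marie = (0.3 * X_marie[:, 0] + rng.normal(size=409) > 0).astype(int)
      print(auc_comparison(X_ise, y_ise, X_marie, y_marie))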

  14. Infrared thermography based diagnosis of inter-turn fault and cooling system failure in three phase induction motor

    NASA Astrophysics Data System (ADS)

    Singh, Gurmeet; Naikan, V. N. A.

    2017-12-01

    Thermography has been widely used as a technique for anomaly detection in induction motors. The International Electrical Testing Association (NETA) has proposed guidelines for thermographic inspection of electrical systems and rotating equipment. These guidelines help in detecting an anomaly and estimating its severity; however, they focus only on the location of the hotspot rather than on diagnosing the fault. This paper addresses two such faults, namely the inter-turn fault and failure of the cooling system, both of which result in an increase of stator temperature. The present paper proposes two thermal profile indicators based on thermal analysis of IRT images. These indicators are in compliance with the NETA standard and help in correctly diagnosing an inter-turn fault and a failure of the cooling system. The work has been experimentally validated on induction motors in healthy and seeded-fault scenarios.

  15. Do heart and respiratory rate variability improve prediction of extubation outcomes in critically ill patients?

    PubMed Central

    2014-01-01

    Introduction Prolonged ventilation and failed extubation are associated with increased harm and cost. The added value of heart and respiratory rate variability (HRV and RRV) during spontaneous breathing trials (SBTs) to predict extubation failure remains unknown. Methods We enrolled 721 patients in a multicenter (12 sites), prospective, observational study, evaluating clinical estimates of risk of extubation failure, physiologic measures recorded during SBTs, HRV and RRV recorded before and during the last SBT prior to extubation, and extubation outcomes. We excluded 287 patients because of protocol or technical violations, or poor data quality. Measures of variability (97 HRV, 82 RRV) were calculated from electrocardiogram and capnography waveforms followed by automated cleaning and variability analysis using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software. Repeated randomized subsampling with training, validation, and testing was used to derive and compare predictive models. Results Of 434 patients with high-quality data, 51 (12%) failed extubation. Two HRV and eight RRV measures showed statistically significant association with extubation failure (P <0.0041, 5% false discovery rate). An ensemble average of five univariate logistic regression models using RRV during SBT, yielding a probability of extubation failure (called WAVE score), demonstrated optimal predictive capacity. With repeated random subsampling and testing, the model showed mean receiver operating characteristic area under the curve (ROC AUC) of 0.69, higher than heart rate (0.51), rapid shallow breathing index (RSBI; 0.61) and respiratory rate (0.63). After deriving a WAVE model based on all data, training-set performance demonstrated that the model increased its predictive power when applied to patients conventionally considered high risk: a WAVE score >0.5 in patients with RSBI >105 and perceived high risk of failure yielded a fold increase in risk of extubation failure of 3.0 (95% confidence interval (CI) 1.2 to 5.2) and 3.5 (95% CI 1.9 to 5.4), respectively. Conclusions Altered HRV and RRV (during the SBT prior to extubation) are significantly associated with extubation failure. A predictive model using RRV during the last SBT provided optimal accuracy of prediction in all patients, with improved accuracy when combined with clinical impression or RSBI. This model requires a validation cohort to evaluate accuracy and generalizability. Trial registration ClinicalTrials.gov NCT01237886. Registered 13 October 2010. PMID:24713049
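
    Illustrative sketch of the ensemble idea behind the WAVE score as described above: five univariate logistic regressions, each on a single respiratory-rate-variability measure, averaged into one probability of extubation failure. The RRV measures and outcome below are synthetic placeholders, and a real evaluation would use repeated subsampling with held-out test sets as in the study rather than the in-sample AUC printed here.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      def wave_like_score(rrv_features, outcome, top_k=5):
          """Average the predicted probabilities of univariate logistic models
          built on the first top_k respiratory-rate-variability measures."""
          probs = []
          for j in range(min(top_k, rrv_features.shape[1])):
              model = LogisticRegression().fit(rrv_features[:, [j]], outcome)
              probs.append(model.predict_proba(rrv_features[:, [j]])[:, 1])
          return np.mean(probs, axis=0)

      rng = np.random.default_rng(42)
      rrv = rng.normal(size=(434, 5))            # placeholder RRV measures during the SBT
      failed = (0.8 * rrv[:, 0] - 0.5 * rrv[:, 1] + rng.normal(size=434) > 1.5).astype(int)
      score = wave_like_score(rrv, failed)
      print("in-sample ROC AUC of averaged score:", round(roc_auc_score(failed, score), 2))
      print("patients flagged high risk (score > 0.5):", int((score > 0.5).sum()))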

  16. PIV Measurements of the CEV Hot Abort Motor Plume for CFD Validation

    NASA Technical Reports Server (NTRS)

    Wernet, Mark; Wolter, John D.; Locke, Randy; Wroblewski, Adam; Childs, Robert; Nelson, Andrea

    2010-01-01

    NASA's next manned launch platforms for missions to the Moon and Mars are the Orion and Ares systems. Many critical aspects of the launch system performance are being verified using computational fluid dynamics (CFD) predictions. The Orion Launch Abort Vehicle (LAV) consists of a tower mounted tractor rocket tasked with carrying the Crew Module (CM) safely away from the launch vehicle in the event of a catastrophic failure during the vehicle's ascent. Some of the predictions involving the launch abort system flow fields produced conflicting results, which required further investigation through ground test experiments. Ground tests were performed to acquire data from a hot supersonic jet in cross-flow for the purpose of validating CFD turbulence modeling relevant to the Orion Launch Abort Vehicle (LAV). Both 2-component axial plane Particle Image Velocimetry (PIV) and 3-component cross-stream Stereo Particle Image Velocimetry (SPIV) measurements were obtained on a model of an Abort Motor (AM). Actual flight conditions could not be simulated on the ground, so the highest temperature and pressure conditions that could be safely used in the test facility (nozzle pressure ratio 28.5 and a nozzle temperature ratio of 3) were used for the validation tests. These conditions are significantly different from those of the flight vehicle, but were sufficiently high to begin addressing the turbulence modeling issues that prompted the need for the validation tests.

  17. Development of an Input Suite for an Orthotropic Composite Material Model

    NASA Technical Reports Server (NTRS)

    Hoffarth, Canio; Shyamsunder, Loukham; Khaled, Bilal; Rajan, Subramaniam; Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Blankenhorn, Gunther

    2017-01-01

    An orthotropic three-dimensional material model suitable for use in modeling impact tests has been developed that has three major components: elastic and inelastic deformations, damage, and failure. The material model has been implemented as MAT213 into a special version of LS-DYNA and uses tabulated data obtained from experiments. The prominent features of the constitutive model are illustrated using a widely used aerospace composite, the T800S3900-2B[P2352W-19] BMS8-276 Rev-H unitape fiber-resin unidirectional composite. The input for the deformation model consists of experimental data from 12 distinct experiments at a known temperature and strain rate: tension and compression along all three principal directions, shear in all three principal planes, and off-axis tension or compression tests in all three principal planes, along with other material constants. There are additional inputs associated with the damage and failure models. The steps in using this model are illustrated: composite characterization tests, verification tests, and a validation test. The results show that the developed and implemented model is stable and yields acceptably accurate results.

  18. OvidSP Medline-to-PubMed search filter translation: a methodology for extending search filter range to include PubMed's unique content.

    PubMed

    Damarell, Raechel A; Tieman, Jennifer J; Sladek, Ruth M

    2013-07-02

    PubMed translations of OvidSP Medline search filters offer searchers improved ease of access. They may also facilitate access to PubMed's unique content, including citations for the most recently published biomedical evidence. Retrieving this content requires a search strategy comprising natural language terms ('textwords'), rather than Medical Subject Headings (MeSH). We describe a reproducible methodology that uses a validated PubMed search filter translation to create a textword-only strategy to extend retrieval to PubMed's unique heart failure literature. We translated an OvidSP Medline heart failure search filter for PubMed and established version equivalence in terms of indexed literature retrieval. The PubMed version was then run within PubMed to identify citations retrieved by the filter's MeSH terms (Heart failure, Left ventricular dysfunction, and Cardiomyopathy). It was then rerun with the same MeSH terms restricted to searching on title and abstract fields (i.e. as 'textwords'). Citations retrieved by the MeSH search but not the textword search were isolated. Frequency analysis of their titles/abstracts identified natural language alternatives for those MeSH terms that performed less effectively as textwords. These terms were tested in combination to determine the best performing search string for reclaiming this 'lost set'. This string, restricted to searching on PubMed's unique content, was then combined with the validated PubMed translation to extend the filter's performance in this database. The PubMed heart failure filter retrieved 6829 citations. Of these, 834 (12%) failed to be retrieved when MeSH terms were converted to textwords. Frequency analysis of the 834 citations identified five high frequency natural language alternatives that could improve retrieval of this set (cardiac failure, cardiac resynchronization, left ventricular systolic dysfunction, left ventricular diastolic dysfunction, and LV dysfunction). Together these terms reclaimed 157/834 (18.8%) of lost citations. MeSH terms facilitate precise searching in PubMed's indexed subset. They may, however, work less effectively as search terms prior to subject indexing. A validated PubMed search filter can be used to develop a supplementary textword-only search strategy to extend retrieval to PubMed's unique content. A PubMed heart failure search filter is available on the CareSearch website (http://www.caresearch.com.au) providing access to both indexed and non-indexed heart failure evidence.
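
    Illustrative sketch of the MeSH-versus-textword comparison described above, using Biopython's Entrez interface. The two query strings are heavily simplified placeholders (the validated filter is much longer), the e-mail address must be replaced with a real one, and NCBI caps the number of IDs returned per esearch call, so large result sets need the history server in practice.

      from Bio import Entrez

      Entrez.email = "your.name@example.org"     # placeholder; NCBI requires a real address

      def pmid_set(query, retmax=10000):
          """Return the set of PMIDs retrieved by a PubMed query string."""
          handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
          record = Entrez.read(handle)
          handle.close()
          return set(record["IdList"])

      # Simplified placeholder strategies; the validated filter contains many more terms.
      mesh_query = '"Heart Failure"[Mesh] OR "Ventricular Dysfunction, Left"[Mesh] OR "Cardiomyopathies"[Mesh]'
      textword_query = '"heart failure"[tiab] OR "left ventricular dysfunction"[tiab] OR cardiomyopathy[tiab]'

      mesh_hits = pmid_set(mesh_query)
      textword_hits = pmid_set(textword_query)

      # Citations retrieved via MeSH but lost when the same concepts are searched as textwords.
      lost_set = mesh_hits - textword_hits
      print(f"MeSH: {len(mesh_hits)}, textwords: {len(textword_hits)}, lost: {len(lost_set)}")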

  19. Everyday cognitive failure and depressive symptoms predict fatigue in sarcoidosis: A prospective follow-up study.

    PubMed

    Hendriks, Celine; Drent, Marjolein; De Kleijn, Willemien; Elfferich, Marjon; Wijnen, Petal; De Vries, Jolanda

    2018-05-01

    Fatigue is a major and disabling problem in sarcoidosis. Knowledge concerning correlates of the development of fatigue and possible interrelationships is lacking. A conceptual model of fatigue was developed and tested. Sarcoidosis outpatients (n = 292) of Maastricht University Medical Center completed questionnaires regarding trait anxiety, depressive symptoms, cognitive failure, dyspnea, social support, and small fiber neuropathy (SFN) at baseline. Fatigue was assessed at 6 and 12 months. Sex, age, and time since diagnosis were taken from medical records. Pathways were estimated by means of path analyses in AMOS. Everyday cognitive failure, depressive symptoms, symptoms suggestive of SFN, and dyspnea were positive predictors of fatigue. Fit indices of the model were good. The model validly explains variation in fatigue. Everyday cognitive failure and depressive symptoms were the most important predictors of fatigue. In addition to physical functioning, cognitive and psychological aspects should be included in the management of sarcoidosis patients. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Voltage Fluctuation in a Supercapacitor During a High-g Impact

    PubMed Central

    Dai, Keren; Wang, Xiaofeng; Yin, Yajiang; Hao, Chenglong; You, Zheng

    2016-01-01

    Supercapacitors (SCs) are a type of energy storage device with high power density and long lifecycles. They have widespread applications, such as powering electric vehicles and micro scale devices. Working stability is one of the most important properties of SCs, and it is of significant importance to investigate the operational characteristics of SCs working under extreme conditions, particularly during high-g acceleration. In this paper, the failure mechanism of SCs upon high-g impact is thoroughly studied. Through an analysis of the intrinsic reaction mechanism during the high-g impact, a multi-faceted physics model is established. Additionally, a multi-field coupled kinetics simulation of the SC failure during a high-g impact is presented. Experimental tests are conducted that confirm the validity of the proposed model. The key factors of failure, such as discharge currents and discharging levels, are analyzed and discussed. Finally, a possible design is proposed to avoid the failure of SCs upon high-g impact. PMID:27958309

  1. Mechanical characterization and modeling of the deformation and failure of the highly crosslinked RTM6 epoxy resin

    NASA Astrophysics Data System (ADS)

    Morelle, X. P.; Chevalier, J.; Bailly, C.; Pardoen, T.; Lani, F.

    2017-08-01

    The nonlinear deformation and fracture of RTM6 epoxy resin is characterized as a function of strain rate and temperature under various loading conditions involving uniaxial tension, notched tension, uniaxial compression, torsion, and shear. The parameters of the hardening law depend on the strain-rate and temperature. The pressure-dependency and hardening law, as well as four different phenomenological failure criteria, are identified using a subset of the experimental results. Detailed fractography analysis provides insight into the competition between shear yielding and maximum principal stress driven brittle failure. The constitutive model and a stress-triaxiality dependent effective plastic strain based failure criterion are readily introduced in the standard version of Abaqus, without the need for coding user subroutines, and can thus be directly used as an input in multi-scale modeling of fibre-reinforced composite material. The model is successfully validated against data not used for the identification and through the full simulation of the crack propagation process in the V-notched beam shear test.

  2. Degradation of Leakage Currents and Reliability Prediction for Tantalum Capacitors

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2016-01-01

    Two types of failures in solid tantalum capacitors, catastrophic and parametric, and their mechanisms are described. Analysis of voltage and temperature reliability acceleration factors reported in literature shows a wide spread of results and requires more investigation. In this work, leakage currents in two types of chip tantalum capacitors were monitored during highly accelerated life testing (HALT) at different temperatures and voltages. Distributions of degradation rates were approximated using a general log-linear Weibull model and yielded voltage acceleration constants B = 9.8 +/- 0.5 and 5.5. The activation energies were Ea = 1.65 eV and 1.42 eV. The model allows for conservative estimations of times to failure and was validated by long-term life test data. Parametric degradation and failures are reversible and can be annealed at high temperatures. The process is attributed to migration of charged oxygen vacancies that reduce the barrier height at the MnO2/Ta2O5 interface and increase injection of electrons from the MnO2 cathode. Analysis showed that the activation energy of the vacancies' migration is 1.1 eV.

  3. Flexural testing on carbon fibre laminates taking into account their different behaviour under tension and compression

    NASA Astrophysics Data System (ADS)

    Serna Moreno, M. C.; Romero Gutierrez, A.; Martínez Vicente, J. L.

    2016-07-01

    An analytical model has been derived for describing the results of three-point-bending tests in materials with different behaviour under tension and compression. The shift of the neutral plane and the damage initiation mode and its location have been defined. The validity of the equations has been reviewed by testing carbon fibre-reinforced polymers (CFRP), typically employed in different weight-critical applications. Both unidirectional and cross-ply laminates have been studied. The initial failure mode depends directly on the beam span-to-thickness ratio. Therefore, specimens with different thicknesses have been analysed for examining the damage initiation due to either the bending moment or the out-of-plane shear load. The experimental description of the damage initiation and evolution has been shown by means of optical microscopy. The good agreement between the analytical estimations and the experimental results shows the validity of the analytical model presented.

  4. Out-of-plane buckling of pantographic fabrics in displacement-controlled shear tests: experimental results and model validation

    NASA Astrophysics Data System (ADS)

    Barchiesi, Emilio; Ganzosch, Gregor; Liebold, Christian; Placidi, Luca; Grygoruk, Roman; Müller, Wolfgang H.

    2018-01-01

    Due to the latest advancements in 3D printing technology and rapid prototyping techniques, the production of materials with complex geometries has become more affordable than ever. Pantographic structures, because of their attractive features, both in dynamics and statics and both in elastic and inelastic deformation regimes, deserve to be thoroughly investigated with experimental and theoretical tools. Herein, experimental results relative to displacement-controlled large deformation shear loading tests of pantographic structures are reported. In particular, five differently sized samples are analyzed up to first rupture. Results show that the deformation behavior is strongly nonlinear, and the structures are capable of undergoing large elastic deformations without reaching complete failure. Finally, a cutting edge model is validated by means of these experimental results.

  5. Effect of KOH concentration on LEO cycle life of IPV nickel-hydrogen flight battery cells

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1990-01-01

    A breakthrough in low earth orbit (LEO) cycle life of individual pressure vessel (IPV) nickel hydrogen battery cells was reported. The cycle life of boiler plate cells containing 26 percent potassium hydroxide (KOH) electrolyte was about 40,000 LEO cycles compared to 3500 cycles for cells containing 31 percent KOH. The effect of KOH concentration on cycle life was studied. The cycle regime was a stressful accelerated LEO, which consisted of a 27.5 min charge followed by a 17.5 min discharge (2 x normal rate). The depth of discharge (DOD) was 80 percent. The cell temperature was maintained at 23 C. The next step is to validate these results using flight hardware and a real time LEO test. NASA Lewis has a contract with the Naval Weapons Support Center (NWSC), Crane, Indiana, to validate the boiler plate test results. Six 48 A-hr Hughes recirculation design IPV nickel-hydrogen flight battery cells are being evaluated. Three of the cells contain 26 percent KOH (test cells) and three contain 31 percent KOH (control cells). They are undergoing real time LEO cycle life testing. The cycle regime is a 90-min LEO orbit consisting of a 54-min charge followed by a 36-min discharge. The depth-of-discharge is 80 percent. The cell temperature is maintained at 10 C. The cells were cycled for over 8000 cycles in the continuing test. There were no failures for the cells containing 26 percent KOH. There were two failures, however, for the cells containing 31 percent KOH.

  6. Effect of KOH concentration on LEO cycle life of IPV nickel-hydrogen flight battery cells

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1990-01-01

    A breakthrough in the low-earth-orbit (LEO) cycle life of individual pressure vessel (IPV) nickel hydrogen battery cells is reported. The cycle life of boiler plate cells containing 26 percent potassium hydroxide (KOH) electrolyte was about 40,000 LEO cycles compared to 3500 cycles for cells containing 31 percent KOH. The effect of KOH concentration on cycle life was studied. The cycle regime was a stressful accelerated LEO, which consisted of a 27.5 min charge followed by a 17.5 min discharge (2 x normal rate). The depth of discharge (DOD) was 80 percent. The cell temperature was maintained at 23 C. The next step is to validate these results using flight hardware and real time LEO test. NASA Lewis has a contract with the Naval Weapons Support Center (NWSC), Crane, Indiana to validate the boiler plate test results. Six 48 A-hr Hughes recirculation design IPV nickel-hydrogen flight battery cells are being evaluated. Three of the cells contain 26 percent KOH (test cells) and three contain 31 percent KOH (control cells). They are undergoing real time LEO cycle life testing. The cycle regime is a 90-min LEO orbit consisting of a 54-min charge followed by a 36-min discharge. The depth-of-discharge is 80 percent. The cell temperature is maintained at 10 C. The cells were cycled for over 8000 cycles in the continuing test. There were no failures for the cells containing 26 percent KOH. There were two failures, however, for the cells containing 31 percent KOH.

  7. Characterization of Triaxial Braided Composite Material Properties for Impact Simulation

    NASA Technical Reports Server (NTRS)

    Roberts, Gary D.; Goldberg, Robert K.; Biniendak, Wieslaw K.; Arnold, William A.; Littell, Justin D.; Kohlman, Lee W.

    2009-01-01

    The reliability of impact simulations for aircraft components made with triaxial braided carbon fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Improvements to standard quasi-static test methods are needed to account for the large unit cell size and localized damage within the unit cell. The deformation and damage of a triaxial braided composite material was examined using standard quasi-static in-plane tension, compression, and shear tests. Some modifications to standard test specimen geometries are suggested, and methods for measuring the local strain at the onset of failure within the braid unit cell are presented. Deformation and damage at higher strain rates is examined using ballistic impact tests on 61- by 61-cm by 3.2-mm (24- by 24- by 0.125-in.) composite panels. Digital image correlation techniques were used to examine full-field deformation and damage during both quasi-static and impact tests. An impact analysis method is presented that utilizes both local and global deformation and failure information from the quasi-static tests as input for impact simulations. Improvements that are needed in test and analysis methods for better predictive capability are examined.

  8. Signal processing and neural network toolbox and its application to failure diagnosis and prognosis

    NASA Astrophysics Data System (ADS)

    Tu, Fang; Wen, Fang; Willett, Peter K.; Pattipati, Krishna R.; Jordan, Eric H.

    2001-07-01

    Many systems are comprised of components equipped with self-testing capability; however, if the system is complex, involves feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single cause or to multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs will be very helpful. The work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition to extract features for failure events from data collected by data sensors. Then we evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Network, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), Linear Discriminant Rule (LDR), Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the usage of neural networks for the prediction of residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
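
    Illustrative sketch of the cross-validated model comparison described above, using scikit-learn analogues of a few of the listed paradigms (a decision tree for C4.5, linear and quadratic discriminants, a multilayer perceptron, and a single-layer perceptron). The feature/fault-class data are synthetic placeholders, not output of the toolbox itself.

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                 QuadraticDiscriminantAnalysis)
      from sklearn.linear_model import Perceptron
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Placeholder feature/failure-class data standing in for extracted sensor features.
      X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                                 n_classes=3, random_state=0)

      models = {
          "decision tree (C4.5-like)": DecisionTreeClassifier(random_state=0),
          "linear discriminant (LDR)": LinearDiscriminantAnalysis(),
          "quadratic discriminant (QDR)": QuadraticDiscriminantAnalysis(),
          "multilayer perceptron (MLP)": MLPClassifier(hidden_layer_sizes=(32,),
                                                       max_iter=2000, random_state=0),
          "single layer perceptron (SLP)": Perceptron(random_state=0),
      }

      # N-fold cross-validation: report the mean percent error rate per paradigm.
      for name, model in models.items():
          accuracy = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
          print(f"{name}: {100 * (1 - accuracy):.1f}% error")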

  9. Self-care confidence may be more important than cognition to influence self-care behaviors in adults with heart failure: Testing a mediation model.

    PubMed

    Vellone, Ercole; Pancani, Luca; Greco, Andrea; Steca, Patrizia; Riegel, Barbara

    2016-08-01

    Cognitive impairment can reduce the self-care abilities of heart failure patients. Theory and preliminary evidence suggest that self-care confidence may mediate the relationship between cognition and self-care, but further study is needed to validate this finding. The aim of this study was to test the mediating role of self-care confidence between specific cognitive domains and heart failure self-care. Secondary analysis of data from a descriptive study. Three out-patient sites in Pennsylvania and Delaware, USA. A sample of 280 adults with chronic heart failure, 62 years old on average and mostly male (64.3%). Data on heart failure self-care and self-care confidence were collected with the Self-Care of Heart Failure Index 6.2. Data on cognition were collected by trained research assistants using a neuropsychological test battery measuring simple and complex attention, processing speed, working memory, and short-term memory. Sociodemographic data were collected by self-report. Clinical information was abstracted from the medical record. Mediation analysis was performed with structural equation modeling and indirect effects were evaluated with bootstrapping. Most participants had at least 1 impaired cognitive domain. In mediation models, self-care confidence consistently influenced self-care and totally mediated the relationship between simple attention and self-care and between working memory and self-care (comparative fit index range: .929-.968; root mean squared error of approximation range: .032-.052). Except for short-term memory, which had a direct effect on self-care maintenance, the other cognitive domains were unrelated to self-care. Self-care confidence appears to be an important factor influencing heart failure self-care even in patients with impaired cognition. As few studies have successfully improved cognition, interventions addressing confidence should be considered as a way to improve self-care in this population. Copyright © 2016 Elsevier Ltd. All rights reserved.
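
    Illustrative sketch of a bootstrapped indirect effect (cognition → self-care confidence → self-care) estimated with two ordinary least-squares regressions, as a simplified stand-in for the structural equation models used in the study; the data and effect sizes are synthetic placeholders.

      import numpy as np
      import statsmodels.api as sm

      def bootstrap_indirect_effect(x, mediator, y, n_boot=2000, seed=0):
          """Bootstrap the indirect effect a*b of x on y transmitted through the mediator."""
          rng = np.random.default_rng(seed)
          n, estimates = len(x), []
          for _ in range(n_boot):
              idx = rng.integers(0, n, n)
              # a-path: mediator regressed on x.
              a = sm.OLS(mediator[idx], sm.add_constant(x[idx])).fit().params[1]
              # b-path: y regressed on the mediator, controlling for x.
              exog = sm.add_constant(np.column_stack([mediator[idx], x[idx]]))
              b = sm.OLS(y[idx], exog).fit().params[1]
              estimates.append(a * b)
          lower, upper = np.percentile(estimates, [2.5, 97.5])
          return float(np.mean(estimates)), (float(lower), float(upper))

      rng = np.random.default_rng(1)
      cognition = rng.normal(size=280)                      # e.g., a working-memory score
      confidence = 0.4 * cognition + rng.normal(size=280)   # self-care confidence
      self_care = 0.6 * confidence + rng.normal(size=280)   # self-care behaviour
      effect, ci = bootstrap_indirect_effect(cognition, confidence, self_care)
      print(f"indirect effect = {effect:.2f}, 95% bootstrap CI = ({ci[0]:.2f}, {ci[1]:.2f})")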

  10. Clinical usefulness of the definitions for defining characteristics of activity intolerance, excess fluid volume and decreased cardiac output in decompensated heart failure: a descriptive exploratory study.

    PubMed

    de Souza, Vanessa; Zeitoun, Sandra Salloum; Lopes, Camila Takao; de Oliveira, Ana Paula Dias; Lopes, Juliana de Lima; de Barros, Alba Lucia Bottura Leite

    2015-09-01

    To assess the clinical usefulness of the operational definitions for the defining characteristics of the NANDA International nursing diagnoses, activity intolerance, decreased cardiac output and excess fluid volume, and the concomitant presence of those diagnoses in patients with decompensated heart failure. The content validity of the operational definitions for the defining characteristics of activity intolerance, excess fluid volume and decreased cardiac output has been previously validated by experts. Their clinical usefulness requires clinical validation. This was a descriptive exploratory study. Two expert nurses independently assessed 25 patients with decompensated heart failure for the presence or absence of 29 defining characteristics. Interrater reliability was analysed using the Kappa coefficient as a measure of clinical usefulness. Fisher's exact test was used to test the association of the defining characteristics of activity intolerance and excess fluid volume in the presence of decreased cardiac output, and the correlation between the three diagnoses. Assessments regarding the presence of all defining characteristics reached 100% agreement, except for anxiety. Five defining characteristics of excess fluid volume were significantly associated with the presence of decreased cardiac output. Concomitant presence of the three diagnoses occurred in 80% of the patients. However, there was no significant correlation between the three diagnoses. The operational definitions for the diagnoses had strong interrater reliability; therefore, they were considered clinically useful. Only five defining characteristics were representative of the association between excess fluid volume and decreased cardiac output. Therefore, excess fluid volume is related to decreased cardiac output, although these diagnoses are not necessarily associated with activity intolerance. The operational definitions may favour early recognition of the sequence of responses to decompensation, guiding the choice of common interventions to improve or resolve excess fluid volume and decreased cardiac output. © 2015 John Wiley & Sons Ltd.

  11. Laser Indirect Shock Welding of Fine Wire to Metal Sheet

    PubMed Central

    Wang, Xiao; Huang, Tao; Luo, Yapeng; Liu, Huixia

    2017-01-01

    The purpose of this paper is to present an advanced method for welding fine wire to metal sheet, namely laser indirect shock welding (LISW). This process uses silica gel as a driver sheet to accelerate the metal sheet toward the wire to obtain metallurgical bonding. A series of experiments was implemented to validate the welding ability of Al sheet/Cu wire and Al sheet/Ag wire. It was found that the use of a driver sheet can maintain high surface quality of the metal sheet. With increasing laser pulse energy, the bonding area of the sheet/wire increased and the welding interfaces were nearly flat. Energy dispersive spectroscopy (EDS) results show that intermetallic phases were absent and that only a short element-diffusion layer, which limits the formation of intermetallic phases, emerged at the welding interface. A tensile shear test was used to measure the mechanical strength of the welded joints. The influence of laser pulse energy on the tensile failure modes was investigated, and two failure modes, interfacial failure and failure through the wire, were observed. The nanoindentation test results indicate that as the distance to the welding interface decreased, the microhardness increased because the plastic deformation became more severe. PMID:28895900

  12. Strain Rate Dependent Material Model for Orthotropic Metals

    NASA Astrophysics Data System (ADS)

    Vignjevic, Rade

    2016-08-01

    In manufacturing processes, anisotropic metals are often exposed to loading at high strain rates in the range from 10^2 s^-1 to 10^6 s^-1 (e.g. stamping, cold spraying and explosive forming). These types of loading often involve generation and propagation of shock waves within the material. The material behaviour under such complex loading needs to be accurately modelled in order to optimise the manufacturing process and achieve appropriate properties of the manufactured component. The presented research is related to the development and validation of a thermodynamically consistent, physically based constitutive model for metals under high rate loading. The model is capable of modelling damage, failure, and the formation and propagation of shock waves in anisotropic metals. The model has two main parts: the strength part, which defines the material response to shear deformation, and an equation of state (EOS), which defines the material response to isotropic volumetric deformation [1]. The constitutive model was implemented into the transient nonlinear finite element code DYNA3D [2] and our in-house SPH code. Limited model validation was performed by simulating a number of high velocity material characterisation and validation impact tests. The new damage model was developed in the framework of configurational continuum mechanics and irreversible thermodynamics with internal state variables. The use of the multiplicative decomposition of the deformation gradient makes the model applicable to arbitrary plastic and damage deformations. To account for the physical mechanisms of failure, the concept of thermally activated damage initially proposed by Tuller and Bucher [3] and Klepaczko [4] was adopted as the basis for the new damage evolution model. This makes the proposed damage/failure model compatible with the Mechanical Threshold Stress (MTS) model (Follansbee and Kocks [5]; Chen and Gray [6]), which was used to control the evolution of flow stress during plastic deformation. In addition, the constitutive model is coupled with a vector shock equation of state, which allows for modelling of shock wave propagation in the orthotropic material. Parameters for the new constitutive model are typically derived on the basis of tensile tests (performed over a range of temperatures and strain rates), plate impact tests, and Taylor anvil tests. The model was applied to simulate explosively driven fragmentation, blast loading, and cold spraying impacts.

  13. Meso-modeling of Carbon Fiber Composite for Crash Safety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Shih-Po; Chen, Yijung; Zeng, Danielle

    2017-04-06

    In the conventional approach, the material properties for crash safety simulations are typically obtained from standard coupon tests, where the test results only provide the single-layer material properties used in crash simulations. However, the lay-up effects on the failure behaviors of the real structure were not considered in numerical simulations; hence, there was a discrepancy between the crash simulations and experimental tests. Consequently, an intermediate stage is required for accurate predictions, and some component tests are required to correlate the material models in this intermediate stage. In this paper, a Mazda Tube under high impact velocity is chosen as an example for the crash safety analysis. The tube consists of 24 layers of uni-directional (UD) carbon fiber composite material, in which 4 layers are perpendicular to, while the other layers are parallel to, the impact direction. An LS-DYNA meso-model was constructed with orthotropic material models accounting for the single-layer material behaviors. Between layers, a node-based tie-break contact was used for modeling the delamination of the composite material. Since the fiber directions are not single-oriented, the lay-up effect could be important. In the first numerical trial, premature material failure occurred due to the use of material parameters obtained directly from the coupon tests. Parametric studies were conducted to identify the cause of the numerical instability; the finding is that the material failure strength used in the numerical model needs to be enlarged to stabilize the model. A hypothesis was made to provide the foundation for enlarging the failure strength, and the corresponding experiments will be conducted to validate the hypothesis.

  14. Thermal barrier coating life prediction model development

    NASA Technical Reports Server (NTRS)

    Hillery, R. V.; Pilsner, B. H.; Mcknight, R. L.; Cook, T. S.; Hartle, M. S.

    1988-01-01

    This report describes work performed to determine the predominant modes of degradation of a plasma sprayed thermal barrier coating system and to develop and verify life prediction models accounting for these degradation modes. The primary TBC system consisted of a low pressure plasma sprayed NiCrAlY bond coat, an air plasma sprayed ZrO2-Y2O3 top coat, and a Rene' 80 substrate. The work was divided into 3 technical tasks. The primary failure mode to be addressed was loss of the zirconia layer through spalling. Experiments showed that oxidation of the bond coat is a significant contributor to coating failure. It was evident from the test results that the species of oxide scale initially formed on the bond coat plays a role in coating degradation and failure. It was also shown that elevated temperature creep of the bond coat plays a role in coating failure. An empirical model was developed for predicting the test life of specimens with selected coating, specimen, and test condition variations. In the second task, a coating life prediction model was developed based on the data from Task 1 experiments, results from thermomechanical experiments performed as part of Task 2, and finite element analyses of the TBC system during thermal cycles. The third and final task attempted to verify the validity of the model developed in Task 2. This was done by using the model to predict the test lives of several coating variations and specimen geometries, then comparing these predicted lives to experimentally determined test lives. It was found that the model correctly predicts trends, but that additional refinement is needed to accurately predict coating life.

  15. [The French translation and cultural adaptation of the SRI questionnaire. A questionnaire to assess health-related quality of life in patients with chronic respiratory failure and domiciliary ventilation].

    PubMed

    Cuvelier, A; Lamia, B; Molano, L-C; Muir, J-F; Windisch, W

    2012-05-01

    We performed the French translation and cross-cultural adaptation of the Severe Respiratory Insufficiency (SRI) questionnaire. Written and validated in German, this questionnaire evaluates health-related quality of life in patients treated with domiciliary ventilation for chronic respiratory failure. Four bilingual German-French translators and a linguist were recruited to produce translations and back-translations of the questionnaire constituted of 49 items in seven domains. Two successive versions were generated and compared to the original questionnaire. The difficulty of the translation and the naturalness were quantified for each item using a 1-10 scale and their equivalence to their original counterpart was graded from A to C. The translated questionnaire was finally tested in a pilot study, which included 15 representative patients. The difficulty of the first translation and the first back-translation was respectively quantified as 2.5 (range 1-5.5) and 1.5 (range 1-6) on the 10-point scale (P=0.0014). The naturalness and the equivalence of 8/49 items were considered as insufficient, which led to the production of a second translation and a second back-translation. The meanings of two items needed clarification during the pilot study. The French translation of the SRI questionnaire represents a new instrument for clinical research in patients treated with domiciliary ventilation for chronic respiratory failure. Its validity needs to be tested in a multicenter study. Copyright © 2012 SPLF. Published by Elsevier Masson SAS. All rights reserved.

  16. Performance degradation mechanisms and modes in terrestrial photovoltaic arrays and technology for their diagnosis

    NASA Technical Reports Server (NTRS)

    Noel, G. T.; Sliemers, F. A.; Derringer, G. C.; Wood, V. E.; Wilkes, K. E.; Gaines, G. B.; Carmichael, D. C.

    1978-01-01

    Accelerated life-prediction test methodologies have been developed for the validation of a 20-year service life for low-cost photovoltaic arrays. Array failure modes, relevant materials property changes, and primary degradation mechanisms are discussed as a prerequisite to identifying suitable measurement techniques and instruments. Measurements must provide sufficient confidence to permit selection among alternative designs and materials and to stimulate widespread deployment of such arrays. Furthermore, the diversity of candidate materials and designs, and the variety of potential environmental stress combinations, degradation mechanisms and failure modes require that combinations of measurement techniques be identified which are suitable for the characterization of various encapsulation system-cell structure-environment combinations.

  17. Some important considerations in the development of stress corrosion cracking test methods.

    NASA Technical Reports Server (NTRS)

    Wei, R. P.; Novak, S. R.; Williams, D. P.

    1972-01-01

    Discussion of some of the precautions that the development of fracture-mechanics-based test methods for studying stress corrosion cracking involves. Following a review of pertinent analytical fracture mechanics considerations and of basic test methods, the implications for stress corrosion cracking studies of the kinetics of crack growth, which determine time to failure and life, are examined. It is shown that the basic assumptions of the linear-elastic fracture mechanics analyses must be clearly recognized and satisfied in experimentation, and that the effects of incubation and nonsteady-state crack growth must also be properly taken into account in determining the crack growth kinetics, if valid data are to be obtained from fracture-mechanics-based test methods.

  18. Validating older adults' reports of less mind-wandering: An examination of eye movements and dispositional influences.

    PubMed

    Frank, David J; Nara, Brent; Zavagnin, Michela; Touron, Dayna R; Kane, Michael J

    2015-06-01

    The Control Failures × Concerns theory perspective proposes that mind-wandering occurs, in part, because of failures to inhibit distracting thoughts from entering consciousness (McVay & Kane, 2012). Despite older adults (OAs) exhibiting poorer inhibition, they report less mind-wandering than do young adults (YAs). Proposed explanations include (a) that OAs' thought reports are less valid due to an unawareness of, or reluctance to report, task-unrelated thoughts (TUTs) and (b) that dispositional factors protect OAs from mind-wandering. The primary goal of the current study was to test the validity of thought reports via eye-tracking. A secondary goal was to examine whether OAs' greater mindfulness (Splevins, Smith, & Simpson, 2009) or more positive mood (Carstensen, Isaacowitz, & Charles, 1999) protects them from TUTs. We found that eye movement patterns predicted OAs' TUT reports and YAs' task-related interference (TRI, or thoughts about one's performance) reports. Additionally, poor comprehension was associated with more TUTs in both age groups and more TRI in YAs. These results support the validity of OAs' thought reports. Concerning the second aim of the study, OAs' greater tendency to observe their surroundings (a facet of mindfulness) was related to increased TRI, and OAs' more positive mood and greater motivation partially mediated age differences in TUTs. OAs' reduced TUT reports appear to be genuine and potentially related to dispositional factors. (c) 2015 APA, all rights reserved.

  19. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can cause cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In return, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  20. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), including technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D), and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.
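
    Illustrative worked example of the RPN arithmetic described above (RPN = O x D x S, each factor scored 1-10) and of an improvement index computed as the ratio of RPN before to RPN after a corrective action. The failure modes and scores are invented for illustration, not the paper's data.

      # Each failure mode is scored 1-10 for occurrence (O), detectability (D) and severity (S).
      # The failure modes and scores below are hypothetical, not taken from the paper.
      failure_modes = {
          "wrong sample presented to NIR probe": {"O": 6, "D": 7, "S": 8},
          "spectral library out of date":        {"O": 3, "D": 5, "S": 9},
          "operator misreads match score":       {"O": 4, "D": 6, "S": 7},
      }

      def rpn(scores):
          """Risk Priority Number: RPN = O x D x S."""
          return scores["O"] * scores["D"] * scores["S"]

      # Rank failure modes by RPN to pick targets for corrective action.
      for name, scores in sorted(failure_modes.items(), key=lambda kv: rpn(kv[1]), reverse=True):
          print(f"{name}: RPN = {rpn(scores)}")

      # Improvement index after a corrective action = RPN before / RPN after.
      before = rpn({"O": 6, "D": 7, "S": 8})
      after = rpn({"O": 3, "D": 4, "S": 7})
      print(f"improvement index = {before / after:.1f}")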

  1. Reduced Physical Fitness in Patients With Heart Failure as a Possible Risk Factor for Impaired Driving Performance

    PubMed Central

    Alosco, Michael L.; Penn, Marc S.; Spitznagel, Mary Beth; Cleveland, Mary Jo; Ott, Brian R.

    2015-01-01

    OBJECTIVE. Reduced physical fitness secondary to heart failure (HF) may contribute to poor driving; reduced physical fitness is a known correlate of cognitive impairment and has been associated with decreased independence in driving. No study has examined the associations among physical fitness, cognition, and driving performance in people with HF. METHOD. Eighteen people with HF completed a physical fitness assessment, a cognitive test battery, and a validated driving simulator scenario. RESULTS. Partial correlations showed that poorer physical fitness was correlated with more collisions and stop signs missed and lower scores on a composite score of attention, executive function, and psychomotor speed. Cognitive dysfunction predicted reduced driving simulation performance. CONCLUSION. Reduced physical fitness in participants with HF was associated with worse simulated driving, possibly because of cognitive dysfunction. Larger studies using on-road testing are needed to confirm our findings and identify clinical interventions to maximize safe driving. PMID:26122681

  2. A pulse-controlled modified-burst test instrument for accident-tolerant fuel cladding

    DOE PAGES

    Cinbiz, M. Nedim; Brown, Nicholas R.; Terrani, Kurt A.; ...

    2017-06-03

    Pellet-cladding mechanical interaction due to thermal expansion of nuclear fuel pellets during a reactivity-initiated accident (RIA) is a potential mechanism for failure of nuclear fuel cladding. To investigate the mechanical behavior of cladding during an RIA, we developed a mechanical pulse-controlled modified burst test instrument that simulates transient events with a pulse width from 10 to 300 ms. This paper includes validation tests of unirradiated and prehydrided ZIRLO cladding tubes. A ZIRLO cladding sample with a hydrogen content of 168 wt. ppm showed ductile behavior and failed at the maximum limits of the test setup with hoop strain to failure greater than 9.2%. ZIRLO samples showed high resistance to failure even at very high hydrogen contents (1,466 wt. ppm). When the hydrogen content was increased to 1,554 wt. ppm, brittle-like behavior was observed at a hoop strain of 2.5%. Preliminary scoping tests at room temperature with FeCrAl tubes were conducted to imitate the pulse behavior of transient test reactors during integral tests. The preliminary FeCrAl tests are informative from the perspective of characterizing the test rig and supporting the design of integral tests for current and potentially accident tolerant cladding materials.

  3. A pulse-controlled modified-burst test instrument for accident-tolerant fuel cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinbiz, M. Nedim; Brown, Nicholas R.; Terrani, Kurt A.

    Pellet-cladding mechanical interaction due to thermal expansion of nuclear fuel pellets during a reactivity-initiated accident (RIA) is a potential mechanism for failure of nuclear fuel cladding. To investigate the mechanical behavior of cladding during an RIA, we developed a mechanical pulse-controlled modified burst test instrument that simulates transient events with a pulse width from 10 to 300 ms. This paper includes validation tests of unirradiated and prehydrided ZIRLO cladding tubes. A ZIRLO cladding sample with a hydrogen content of 168 wt. ppm showed ductile behavior and failed at the maximum limits of the test setup with hoop strain to failure greater than 9.2%. ZIRLO samples showed high resistance to failure even at very high hydrogen contents (1,466 wt. ppm). When the hydrogen content was increased to 1,554 wt. ppm, brittle-like behavior was observed at a hoop strain of 2.5%. Preliminary scoping tests at room temperature with FeCrAl tubes were conducted to imitate the pulse behavior of transient test reactors during integral tests. The preliminary FeCrAl tests are informative from the perspective of characterizing the test rig and supporting the design of integral tests for current and potentially accident tolerant cladding materials.

  4. Failure Characteristics of Granite Influenced by Sample Height-to-Width Ratios and Intermediate Principal Stress Under True-Triaxial Unloading Conditions

    NASA Astrophysics Data System (ADS)

    Li, Xibing; Feng, Fan; Li, Diyuan; Du, Kun; Ranjith, P. G.; Rostami, Jamal

    2018-05-01

    The failure modes and peak unloading strength of a typical hard rock, Miluo granite, were investigated using a true-triaxial test system, with particular attention to the sample height-to-width ratio (between 2 and 0.5) and the intermediate principal stress. The experimental results indicate that both the sample height-to-width ratio and the intermediate principal stress have an impact on the failure modes, peak strength, and severity of rockburst in hard rock under true-triaxial unloading conditions. For longer rectangular specimens, the transition of failure mode from shear to slabbing requires higher intermediate principal stress. With the decrease in sample height-to-width ratio, slabbing failure is more likely to occur under lower intermediate principal stress. For the same intermediate principal stress, the peak unloading strength monotonically increases with the decrease in sample height-to-width ratio. However, the curves of peak unloading strength as a function of intermediate principal stress for the different types of rock samples (with height-to-width ratios of 2, 1, and 0.5) all present an initial increase followed by a subsequent decrease. The curves fitted to octahedral shear stress as a function of mean effective stress also validate the applicability of the Mogi-Coulomb failure criterion for all considered rock sizes under true-triaxial unloading conditions, and the corresponding cohesion C and internal friction angle φ are calculated. The severity of strainburst of granite depends on the sample height-to-width ratio and the intermediate principal stress; therefore, different supporting strategies are recommended in deep tunneling projects and mining activities. Moreover, the comparison of test results for different σ2/σ3 also reveals that the minimum principal stress has little influence on the failure characteristics of granite during the true-triaxial unloading process.

  5. Health information systems: failure, success and improvisation.

    PubMed

    Heeks, Richard

    2006-02-01

    The generalised assumption of health information systems (HIS) success is questioned by a few commentators in the medical informatics field. They point to widespread HIS failure. The purpose of this paper was therefore to develop a better conceptual foundation for, and practical guidance on, health information systems failure (and success). Literature and case analysis plus pilot testing of developed model. Defining HIS failure and success is complex, and the current evidence base on HIS success and failure rates was found to be weak. Nonetheless, the best current estimate is that HIS failure is an important problem. The paper therefore derives and explains the "design-reality gap" conceptual model. This is shown to be robust in explaining multiple cases of HIS success and failure, yet provides a contingency that encompasses the differences which exist in different HIS contexts. The design-reality gap model is piloted to demonstrate its value as a tool for risk assessment and mitigation on HIS projects. It also throws into question traditional, structured development methodologies, highlighting the importance of emergent change and improvisation in HIS. The design-reality gap model can be used to address the problem of HIS failure, both as a post hoc evaluative tool and as a pre hoc risk assessment and mitigation tool. It also validates a set of methods, techniques, roles and competencies needed to support the dynamic improvisations that are found to underpin cases of HIS success.

  6. Validation of a Novel Molecular Host Response Assay to Diagnose Infection in Hospitalized Patients Admitted to the ICU With Acute Respiratory Failure.

    PubMed

    Koster-Brouwer, Maria E; Verboom, Diana M; Scicluna, Brendon P; van de Groep, Kirsten; Frencken, Jos F; Janssen, Davy; Schuurman, Rob; Schultz, Marcus J; van der Poll, Tom; Bonten, Marc J M; Cremer, Olaf L

    2018-03-01

    Discrimination between infectious and noninfectious causes of acute respiratory failure is difficult in patients admitted to the ICU after a period of hospitalization. Using a novel biomarker test (SeptiCyte LAB), we aimed to distinguish between infection and inflammation in this population. Nested cohort study. Two tertiary mixed ICUs in the Netherlands. Hospitalized patients with acute respiratory failure requiring mechanical ventilation upon ICU admission from 2011 to 2013. Patients having an established infection diagnosis or an evidently noninfectious reason for intubation were excluded. None. Blood samples were collected upon ICU admission. Test results were categorized into four probability bands (higher bands indicating higher infection probability) and compared with the infection plausibility as rated by post hoc assessment using strict definitions. Of 467 included patients, 373 (80%) were treated for a suspected infection at admission. Infection plausibility was classified as ruled out, undetermined, or confirmed in 135 (29%), 135 (29%), and 197 (42%) patients, respectively. Test results correlated with infection plausibility (Spearman's rho 0.332; p < 0.001). After exclusion of undetermined cases, positive predictive values were 29%, 54%, and 76% for probability bands 2, 3, and 4, respectively, whereas the negative predictive value for band 1 was 76%. Diagnostic discrimination of SeptiCyte LAB and C-reactive protein was similar (p = 0.919). Among hospitalized patients admitted to the ICU with clinical uncertainty regarding the etiology of acute respiratory failure, the diagnostic value of SeptiCyte LAB was limited.

  7. Longitudinally Jointed Edge-wise Compression Honeycomb Composite Sandwich Coupon Testing and FE Analysis: Three Methods of Strain Measurement, and Comparison

    NASA Technical Reports Server (NTRS)

    Farrokh, Babak; AbdulRahim, Nur Aida; Segal, Ken; Fan, Terry; Jones, Justin; Hodges, Ken; Mashni, Noah; Garg, Naman; Sang, Alex; Gifford, Dawn

    2013-01-01

    Three means (i.e., typical foil strain gages, fiber optic sensors, and a digital image correlation (DIC) system) were implemented to measure strains on the back and front surfaces of a longitudinally jointed curved test article subjected to edge-wise compression testing, at NASA Goddard Space Flight Center, according to ASTM C364. A pre-test finite element analysis (FEA) was conducted to assess the ultimate failure load and to predict the strain distribution pattern throughout the test coupon. The predicted strain pattern contours were then utilized as guidelines for installing the strain measurement instrumentation. The strain gages and fiber optic sensors were bonded on the specimen at locations with nearly the same strain values, as close to each other as possible, so that comparisons between the strains measured by the strain gages, the fiber optic sensors, and the DIC system are justified. The test article was loaded to failure (at approximately 38 kips), at a strain value of approximately 10,000 microstrain. As part of this study, the validity of the strains measured by the fiber optic sensors is examined against the strain gage and DIC data, and is also compared with the FEA predictions.

  8. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    NASA Astrophysics Data System (ADS)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters provides a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, a performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (Arrhenius model). The results show good fitting of the degradation model to the experimental data. Finally, we obtained reliability estimates for the SPTRs using the Weibull distribution. The proposed methodology makes it possible to estimate, in less than one year of testing, the reliability of SPTRs designed to operate for more than 10 years.
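
    The abstract names the Arrhenius acceleration model and a Weibull reliability fit but gives neither the activation energy nor the fitted parameters, so the following sketch only illustrates the general workflow: extrapolate a lifetime observed at an elevated temperature back to the use temperature, then evaluate a two-parameter Weibull reliability at the 10-year requirement. All numerical values are assumptions.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor of a stress temperature relative to the use temperature."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

# Assumed values (not from the paper): activation energy, pseudo-lifetime
# reached during the 60 degC accelerated run, and Weibull shape parameter.
ea_ev = 0.5
life_60c_h = 4000.0
af = arrhenius_acceleration_factor(ea_ev, t_use_c=20.0, t_stress_c=60.0)
life_use_h = life_60c_h * af
print(f"acceleration factor: {af:.1f}, extrapolated life at 20 degC: {life_use_h:.0f} h")
print(f"R(10 years) = {weibull_reliability(10 * 8760, eta=life_use_h, beta=2.0):.3f}")
```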

  9. NLP based congestive heart failure case finding: A prospective analysis on statewide electronic medical records.

    PubMed

    Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-12-01

    In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case finding algorithm is required to process both structured and unstructured electronic medical records (EMR) to allow complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both EMR-codified and natural language processing (NLP) found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE associated facilities, and blind-tested with the remaining facilities. The NLP based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codings, our case finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) was 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case finding algorithm was developed, tested and prospectively validated. The successful integration of the CHF case finding algorithm into the Maine HIE live system is expected to improve CHF care in Maine. Copyright © 2015. Published by Elsevier Ireland Ltd.

  10. Flight Testing an Iced Business Jet for Flight Simulation Model Validation

    NASA Technical Reports Server (NTRS)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam; Cooper, Jon

    2007-01-01

    A flight test of a business jet aircraft with various ice accretions was performed to obtain data to validate flight simulation models developed through wind tunnel tests. Three types of ice accretions were tested: pre-activation roughness, runback shapes that form downstream of the thermal wing ice protection system, and a wing ice protection system failure shape. The high-fidelity flight simulation models of this business jet aircraft were validated using a software tool called "Overdrive." Through comparisons of flight-extracted aerodynamic forces and moments to simulation-predicted forces and moments, the simulation models were successfully validated. Only minor adjustments in the simulation database were required to obtain an adequate match, signifying that the process used to develop the simulation models was successful. The simulation models were implemented in the NASA Ice Contamination Effects Flight Training Device (ICEFTD) to enable company pilots to evaluate the flight characteristics of the simulation models. By and large, the pilots confirmed good similarity in flight characteristics compared to the real airplane. However, pilots noted pitch-up tendencies at stall with the flaps extended that were not representative of the airplane and identified some differences in pilot forces. The elevator hinge moment model and the implementation of the control forces on the ICEFTD were identified as drivers in the pitch-up and control force issues, and will be an area for future work.

  11. Round-robin analysis of the behavior of a 1:6-scale reinforced concrete containment model pressurized to failure: Posttest evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clauss, D.B.

    A 1:6-scale model of a reinforced concrete containment building was pressurized incrementally to failure at a remote site at Sandia National Laboratories. The response of the model was recorded with more than 1000 channels of data (primarily strain and displacement measurements) at 37 discrete pressure levels. The primary objective of this test was to generate data that could be used to validate methods for predicting the performance of containment buildings subject to loads beyond their design basis. Extensive analyses were conducted before the test to predict the behavior of the model. Ten organizations in Europe and the US conducted independent analyses of the model and contributed to a report on the pretest predictions. Predictions included structural response at certain predetermined locations in the model as well as capacity and failure mode. This report discusses comparisons between the pretest predictions and the experimental results. Posttest evaluations that were conducted to provide additional insight into the model behavior are also described. The significance of the analysis and testing of the 1:6-scale model to performance evaluations of actual containments subject to beyond design basis loads is also discussed. 70 refs., 428 figs., 24 tabs.

  12. Experimental and Numerical Analysis of Triaxially Braided Composites Utilizing a Modified Subcell Modeling Approach

    NASA Technical Reports Server (NTRS)

    Cater, Christopher; Xiao, Xinran; Goldberg, Robert K.; Kohlman, Lee W.

    2015-01-01

    A combined experimental and analytical approach was used to characterize and model triaxially braided composites with a modified subcell modeling strategy. Tensile coupon tests were conducted on a [0deg/60deg/-60deg] braided composite at angles of 0deg, 30deg, 45deg, 60deg and 90deg relative to the axial tow of the braid. It was found that measured coupon strength varied significantly with the angle of the applied load and each coupon direction exhibited unique final failures. The subcell modeling approach, implemented into the finite element software LS-DYNA, was used to simulate the various tensile coupon test angles. The modeling approach was successful in predicting both the coupon strength and reported failure mode for the 0deg, 30deg and 60deg loading directions. The model over-predicted the strength in the 90deg direction; however, the experimental results show a strong influence of free edge effects on damage initiation and failure. In the absence of these local free edge effects, the subcell modeling approach showed promise as a viable and computationally efficient analysis tool for triaxially braided composite structures. Future work will focus on validation of the approach for predicting the impact response of the braided composite against flat panel impact tests.

  13. Validation of Different Combination of Three Reversing Half-Hitches Alternating Posts (RHAPs) Effects on Arthroscopic Knot Integrity.

    PubMed

    Chong, Alexander Cm; Prohaska, Daniel J; Bye, Brian P

    2017-05-01

    With the increasing use of arthroscopic techniques, the importance of knot tying has been examined. Previous literature has examined the effect of reversing half-hitches on alternating posts (RHAPs) on knot security. Separately, there has been research regarding different suture materials commonly used in the operating room. The specific aim of this study was to validate the effect of different stacked half-hitch configurations and different braided suture materials on arthroscopic knot integrity. Three different suture materials tied with five different RHAPs in arthroscopic knots were compared. A single load-to-failure test was performed and the mean ultimate clinical failure load was obtained. Significant knot holding strength improvement was found when one half-hitch was reversed as compared to the baseline knot. When two of the half-hitches were reversed, there was a greater improvement, with all knots having a mean ultimate clinical failure load greater than 150 newtons (N). Comparison of the suture materials demonstrated a higher mean ultimate clinical failure load when Force Fiber ® was used and at least one half-hitch was reversed. Knots tied with either Force Fiber ® or Orthocord ® showed 0% chance of knot slippage, while knots tied with FiberWire ® or braided fishing line had about 10 and 30% knot slippage chances, respectively. A significant effect was observed with regard to both the stacked half-hitch configuration and the suture materials used on knot loop and knot security. Caution should be used when tying three RHAPs in arthroscopic surgery, particularly with a standard knot pusher and arthroscopic cannulas. The findings of this study indicate the importance of three RHAPs in performing arthroscopic knot tying and provide evidence regarding the discrepancies in maximum clinical failure loads observed between orthopaedic surgeons, thereby leading to better surgical outcomes in the future.

  14. J-2X Abort System Development

    NASA Technical Reports Server (NTRS)

    Santi, Louis M.; Butas, John P.; Aguilar, Robert B.; Sowers, Thomas S.

    2008-01-01

    The J-2X is an expendable liquid hydrogen (LH2)/liquid oxygen (LOX) gas generator cycle rocket engine that is currently being designed as the primary upper stage propulsion element for the new NASA Ares vehicle family. The J-2X engine will contain abort logic that functions as an integral component of the Ares vehicle abort system. This system is responsible for detecting and responding to conditions indicative of impending Loss of Mission (LOM), Loss of Vehicle (LOV), and/or catastrophic Loss of Crew (LOC) failure events. As an earth orbit ascent phase engine, the J-2X is a high power density propulsion element with non-negligible risk of fast propagation rate failures that can quickly lead to LOM, LOV, and/or LOC events. Aggressive reliability requirements for manned Ares missions and the risk of fast propagating J-2X failures dictate the need for on-engine abort condition monitoring and autonomous response capability as well as traditional abort agents such as the vehicle computer, flight crew, and ground control not located on the engine. This paper describes the baseline J-2X abort subsystem concept of operations, as well as the development process for this subsystem. A strategy that leverages heritage system experience and responds to an evolving engine design as well as J-2X specific test data to support abort system development is described. The utilization of performance and failure simulation models to support abort system sensor selection, failure detectability and discrimination studies, decision threshold definition, and abort system performance verification and validation is outlined. The basis for abort false positive and false negative performance constraints is described. Development challenges associated with information shortfalls in the design cycle, abort condition coverage and response assessment, engine-vehicle interface definition, and abort system performance verification and validation are also discussed.

  15. Structural qualification testing and operational loading on a fiberglass rotor blade for the Mod-OA wind turbine

    NASA Technical Reports Server (NTRS)

    Sullivan, T. L.

    1983-01-01

    Fatigue tests were performed on full- and half-scale root end sections, first to qualify the root retention design, and second to induce failure. Test methodology and results are presented. Two operational blades were proof tested to design limit load to ascertain buckling resistance. Measurements of natural frequency, damping ratio, and deflection under load made on the operational blades are documented. The tests showed that all structural design requirements were met or exceeded. Blade loads measured during 3000 hr of field operation were close to those expected. The measured loads validated the loads used in the fatigue tests and gave high confidence in the ability of the blades to achieve design life.

  16. False-Positive Error Rates for Reliable Digit Span and Auditory Verbal Learning Test Performance Validity Measures in Amnestic Mild Cognitive Impairment and Early Alzheimer Disease.

    PubMed

    Loring, David W; Goldstein, Felicia C; Chen, Chuqing; Drane, Daniel L; Lah, James J; Zhao, Liping; Larrabee, Glenn J

    2016-06-01

    The objective is to examine failure on three embedded performance validity tests [Reliable Digit Span (RDS), Auditory Verbal Learning Test (AVLT) logistic regression, and AVLT recognition memory] in early Alzheimer disease (AD; n = 178), amnestic mild cognitive impairment (MCI; n = 365), and cognitively intact age-matched controls (n = 206). Neuropsychological tests scores were obtained from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI). RDS failure using a ≤7 RDS threshold was 60/178 (34%) for early AD, 52/365 (14%) for MCI, and 17/206 (8%) for controls. A ≤6 RDS criterion reduced this rate to 24/178 (13%) for early AD, 15/365 (4%) for MCI, and 7/206 (3%) for controls. AVLT logistic regression probability of ≥.76 yielded unacceptably high false-positive rates in both clinical groups [early AD = 149/178 (79%); MCI = 159/365 (44%)] but not cognitively intact controls (13/206, 6%). AVLT recognition criterion of ≤9/15 classified 125/178 (70%) of early AD, 155/365 (42%) of MCI, and 18/206 (9%) of control scores as invalid, which decreased to 66/178 (37%) for early AD, 46/365 (13%) for MCI, and 10/206 (5%) for controls when applying a ≤5/15 criterion. Despite high false-positive rates across individual measures and thresholds, combining RDS ≤ 6 and AVLT recognition ≤9/15 classified only 9/178 (5%) of early AD and 4/365 (1%) of MCI patients as invalid performers. Embedded validity cutoffs derived from mixed clinical groups produce unacceptably high false-positive rates in MCI and early AD. Combining embedded PVT indicators lowers the false-positive rate. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
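
    The failure (false-positive) rates quoted above follow directly from the reported counts; the short sketch below recomputes them for the two RDS cutoffs and for the combined RDS <= 6 plus AVLT recognition <= 9/15 rule.

```python
# Counts reported in the abstract: group sizes and number of scores classified
# as invalid at each cutoff.
group_sizes = {"early_AD": 178, "MCI": 365, "controls": 206}
rds_le_7 = {"early_AD": 60, "MCI": 52, "controls": 17}
rds_le_6 = {"early_AD": 24, "MCI": 15, "controls": 7}
combined = {"early_AD": 9, "MCI": 4}   # RDS <= 6 AND AVLT recognition <= 9/15

def rates(failures):
    """Failure rate (%) per group, rounded as in the abstract."""
    return {g: round(100 * n / group_sizes[g]) for g, n in failures.items()}

print("RDS <= 7 :", rates(rds_le_7))   # {'early_AD': 34, 'MCI': 14, 'controls': 8}
print("RDS <= 6 :", rates(rds_le_6))   # {'early_AD': 13, 'MCI': 4, 'controls': 3}
print("combined :", rates(combined))   # {'early_AD': 5, 'MCI': 1}
```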

  17. Assessment of compressive failure process of cortical bone materials using damage-based model.

    PubMed

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

    The main failure factors of cortical bone are aging or osteoporosis, accidents and high-energy trauma, and physiological activities. However, the mechanism of damage evolution coupled with a yield criterion is considered one of the unclear subjects in the failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests are performed on cortical bone specimens made of bovine femur in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test is prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model is validated by comparing the predicted and measured structural response, expressed as load versus compressive displacement, from simulation and experiment. FE results indicated that the compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, which coincided with the degradation of the structural compressive stiffness followed by a vast amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, is examined for different rates. Results show that using a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Dynamic Brazilian Test of Rock Under Intermediate Strain Rate: Pendulum Hammer-Driven SHPB Test and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Zhu, W. C.; Niu, L. L.; Li, S. H.; Xu, Z. H.

    2015-09-01

    The tensile strength of rock subjected to dynamic loading is important in many engineering applications such as rock drilling and blasting. Dynamic Brazilian tests of rock specimens were conducted with a split Hopkinson pressure bar (SHPB) driven by a pendulum hammer, in order to determine the indirect tensile strength of rock under an intermediate strain rate ranging from 5.2 to 12.9 s-1, which is achieved when the incident bar is impacted by the pendulum hammer at different velocities. The incident wave excited by the pendulum hammer is triangular in shape, featuring a long rise time, which is considered helpful for achieving a constant strain rate in the rock specimen. The dynamic indirect tensile strength of the rock increases with strain rate. The numerical simulator RFPA-Dynamics, well-recognized software for simulating rock failure under dynamic loading, is validated by reproducing the Brazilian test of rock when the incident stress wave retrieved at the incident bar is input as the boundary condition; it is then employed to study the Brazilian test of rock under higher strain rates. Based on the numerical simulation, the strain-rate dependency of the tensile strength and the failure pattern of the Brazilian disc specimen under intermediate strain rates are numerically simulated, and the associated failure mechanism is clarified. Material heterogeneity is deemed to be a reason for the strain-rate dependency of rock.

  19. Biaxial flexure strength determination of endodontically accessed ceramic restorations.

    PubMed

    Kelly, R D; Fleming, G J P; Hooi, P; Palin, W M; Addison, O

    2014-08-01

    To report analytic solutions capable of identifying failure stresses from the biaxial flexure testing of geometries representative of endodontic access cavities prepared through dental restorative materials. Solutions for the ring-on-ring biaxial flexure strength of annular discs with a central circular hole, supported peripherally by a knife-edge support and loaded evenly at the upper edge of the central hole, were derived using general expressions of deformations, moments and shears for flat plates of constant thickness. To validate the solutions, finite element analyses were performed. A three-dimensional one-quarter model of the test was generated using a linear P-code FEA software and the boundary conditions represented the experimental test configuration, whereby symmetry planes defined the full model. To enable comparison of the maximum principal stresses with experimentally derived data, three groups of nominally identical feldspathic ceramic disks (n=30) were fabricated. Specimens from Group A received a 4mm diameter representative endodontic access cavity and were tested in ring-on-ring. Group B and C specimens remained intact and were tested in ring-on-ring and ball-on-ring, respectively, to give insight into strength scaling effects. Fractography was used to confirm failure origins, and statistical analysis of fracture strength data was performed using one-way ANOVAs (P<0.05) and a Weibull approach. The developed analytical solutions were demonstrated to deviate <1% from the finite element prediction in the configuration studied. Fractography confirmed the failure origin of tested samples to coincide with the predicted stress maxima and the area where fracture is observed to originate clinically. Specimens from the three experimental groups A-C exhibited different strengths, which correlated with volume scaling effects on measured strength. The solutions provided will enable geometric and materials variables to be systematically studied and remove the need for load-to-failure 'crunch the crown' testing. Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  1. Implementation and Quality Control of Lung Cancer EGFR Genetic Testing by MALDI-TOF Mass Spectrometry in Taiwan Clinical Practice

    PubMed Central

    Su, Kang-Yi; Kao, Jau-Tsuen; Ho, Bing-Ching; Chen, Hsuan-Yu; Chang, Gee-Cheng; Ho, Chao-Chi; Yu, Sung-Liang

    2016-01-01

    Molecular diagnostics in cancer pharmacogenomics is indispensable for making targeted therapy decisions, especially in lung cancer. For routine clinical practice, a flexible testing platform and an implemented quality system are important for failure rate and turnaround time (TAT) reduction. We established and validated multiplex EGFR testing by MALDI-TOF MS according to the ISO15189 regulation and CLIA recommendations in Taiwan. In total, 8,147 cases from August 2011 to July 2015 were assayed and statistical characteristics were reported. The intra-run precision of EGFR mutation frequency was CV 2.15% (L858R) and 2.77% (T790M); the inter-run precision was CV 3.50% (L858R) and 2.84% (T790M). Accuracy tests using consensus reference biomaterials showed 100% concordance with the datasheet (public database). Both analytical sensitivity and specificity were 100% when taking Sanger sequencing as the gold-standard method for comparison. The EGFR mutation frequency of peripheral blood mononuclear cells for reference range determination was 0.002 ± 0.016% (95% CI: 0.000–0.036) (L858R) and 0.292 ± 0.289% (95% CI: 0.000–0.871) (T790M). The average TAT was 4.5 working days and the failure rate was less than 0.1%. In conclusion, this study provides a comprehensive report of lung cancer EGFR mutation detection from platform establishment and method validation to routine clinical practice. It may serve as a reference model for molecular diagnostics in cancer pharmacogenomics. PMID:27480787
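
    The intra- and inter-run precision figures are coefficients of variation; a minimal sketch of that calculation is shown below. The replicate mutation-frequency values are hypothetical, since only the resulting CVs are reported.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical replicate L858R mutation-frequency measurements (%) of a control
# sample within one run; the individual replicates are not given in the abstract.
l858r_replicates = [10.2, 10.5, 10.1, 10.6, 10.3]
print(f"intra-run CV = {coefficient_of_variation(l858r_replicates):.2f}%")
```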

  2. A contemporary approach to validity arguments: a practical guide to Kane's framework.

    PubMed

    Cook, David A; Brydges, Ryan; Ginsburg, Shiphra; Hatala, Rose

    2015-06-01

    Assessment is central to medical education and the validation of assessments is vital to their use. Earlier validity frameworks suffer from a multiplicity of types of validity or failure to prioritise among sources of validity evidence. Kane's framework addresses both concerns by emphasising key inferences as the assessment progresses from a single observation to a final decision. Evidence evaluating these inferences is planned and presented as a validity argument. We aim to offer a practical introduction to the key concepts of Kane's framework that educators will find accessible and applicable to a wide range of assessment tools and activities. All assessments are ultimately intended to facilitate a defensible decision about the person being assessed. Validation is the process of collecting and interpreting evidence to support that decision. Rigorous validation involves articulating the claims and assumptions associated with the proposed decision (the interpretation/use argument), empirically testing these assumptions, and organising evidence into a coherent validity argument. Kane identifies four inferences in the validity argument: Scoring (translating an observation into one or more scores); Generalisation (using the score[s] as a reflection of performance in a test setting); Extrapolation (using the score[s] as a reflection of real-world performance), and Implications (applying the score[s] to inform a decision or action). Evidence should be collected to support each of these inferences and should focus on the most questionable assumptions in the chain of inference. Key assumptions (and needed evidence) vary depending on the assessment's intended use or associated decision. Kane's framework applies to quantitative and qualitative assessments, and to individual tests and programmes of assessment. Validation focuses on evaluating the key claims, assumptions and inferences that link assessment scores with their intended interpretations and uses. The Implications and associated decisions are the most important inferences in the validity argument. © 2015 John Wiley & Sons Ltd.

  3. Estimating earthquake-induced failure probability and downtime of critical facilities.

    PubMed

    Porter, Keith; Ramer, Kyle

    2012-01-01

    Fault trees have long been used to estimate failure risk in earthquakes, especially for nuclear power plants (NPPs). One interesting application is that one can assess and manage the probability that two facilities - a primary and backup - would be simultaneously rendered inoperative in a single earthquake. Another is that one can calculate the probabilistic time required to restore a facility to functionality, and the probability that, during any given planning period, the facility would be rendered inoperative for any specified duration. A large new peer-reviewed library of component damageability and repair-time data for the first time enables fault trees to be used to calculate the seismic risk of operational failure and downtime for a wide variety of buildings other than NPPs. With the new library, seismic risk of both the failure probability and probabilistic downtime can be assessed and managed, considering the facility's unique combination of structural and non-structural components, their seismic installation conditions, and the other systems on which the facility relies. An example is offered of real computer data centres operated by a California utility. The fault trees were created and tested in collaboration with utility operators, and the failure probability and downtime results validated in several ways.
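
    A fault tree reduces component (basic-event) failure probabilities to a system failure probability through AND/OR gates. The sketch below shows the arithmetic for a single earthquake scenario under an independence assumption; a real analysis of co-located or commonly supplied facilities would need to model the correlation between events, and every probability here is an illustrative assumption rather than a value from the study.

```python
# Basic AND/OR gate arithmetic for independent events.
def or_gate(*probs):
    """Gate fails if any input fails: 1 - product of survival probabilities."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(*probs):
    """Gate fails only if all inputs fail."""
    fail = 1.0
    for p in probs:
        fail *= p
    return fail

# Assumed per-event failure probabilities for one earthquake scenario.
p_primary = or_gate(0.02, 0.10, 0.05)   # structural damage, utility power, cooling
p_backup = or_gate(0.01, 0.08, 0.04)    # corresponding events at the backup site

print(f"P(primary data centre inoperative)     = {p_primary:.3f}")
print(f"P(primary and backup both inoperative) = {and_gate(p_primary, p_backup):.4f}")
```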

  4. Is it possible to predict office hysteroscopy failure?

    PubMed

    Cobellis, Luigi; Castaldi, Maria Antonietta; Giordano, Valentino; De Franciscis, Pasquale; Signoriello, Giuseppe; Colacurci, Nicola

    2014-10-01

    The purpose of this study was to develop a clinical tool, the HFI (Hysteroscopy Failure Index), which gives criteria to predict hysteroscopic examination failure. This was a retrospective diagnostic test study, aimed at validating the HFI, set at the Department of Gynaecology, Obstetrics and Reproductive Science of the Second University of Naples, Italy. The HFI was applied to our database of 995 consecutive women who underwent office hysteroscopy to assess abnormal uterine bleeding (AUB), infertility, cervical polyps, and abnormal sonographic patterns (postmenopausal endometrial thickness of more than 5mm, endometrial hyperechogenic spots, irregular endometrial line, suspected uterine septa). Demographic characteristics, previous surgery, recurrent infections, sonographic data, estro-progestin use, IUD and menopausal status were collected. Receiver operating characteristic (ROC) curve analysis was used to assess the ability of the model to identify failed hysteroscopies, expressed as the number of correctly identified failures (true positives) divided by the total number of failed hysteroscopies (true positives + false negatives). Positive and negative likelihood ratios with 95% CI were calculated. The HFI score was able to predict office hysteroscopy failure in 76% of cases. Moreover, the positive likelihood ratio was 11.37 (95% CI: 8.49-15.21), and the negative likelihood ratio was 0.33 (95% CI: 0.27-0.41). The Hysteroscopy Failure Index was able to retrospectively predict office hysteroscopy failure. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
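
    For reference, the likelihood ratios reported above are simple functions of sensitivity and specificity; a minimal sketch is given below. The sensitivity/specificity pair used is an assumption chosen only to give values of a similar magnitude, since the abstract does not report the underlying pair.

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Assumed sensitivity/specificity, not values reported in the study.
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.70, specificity=0.94)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # roughly the order reported (11.37 and 0.33)
```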

  5. Thermal runaway detection of cylindrical 18650 lithium-ion battery under quasi-static loading conditions

    NASA Astrophysics Data System (ADS)

    Sheikh, Muhammad; Elmarakbi, Ahmed; Elkady, Mustafa

    2017-12-01

    This paper focuses on state of charge (SOC) dependent mechanical failure analysis of an 18650 lithium-ion battery to detect signs of thermal runaway. Quasi-static loading conditions are used with four test protocols (rod, circular punch, three-point bend and flat plate) to analyse the propagation of mechanical failures and failure-induced temperature changes. Finite element analysis (FEA) is used to model a single battery cell with a concentric layered formation that represents the complete cell. The numerical simulation model is built with solid elements, with the steel casing and all layers following the same formation, and a fine mesh is used for all layers. Experimental work is also performed to analyse the deformation of the 18650 lithium-ion cell, and the numerical simulation model is validated against the experimental results. Cell deformation is used to mimic the onset of thermal runaway, and various thermal runaway detection strategies are employed in this work, including force-displacement, voltage-temperature, stress-strain, SOC dependency and separator failure. Results show that a cell can reach severe conditions even with no fracture or rupture; these conditions may be slow to develop, but they can lead to catastrophic failures. The numerical simulation technique proves useful in predicting initial battery failures, and the results are in good correlation with the experimental results.

  6. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling the detection not only of technical risks but also of risks related to human failures. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted using this probabilistic modification of FMEA. With this probabilistic modification of FMEA, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
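
    A minimal sketch of the contrast described above: the traditional RPN multiplies categorical occurrence, detection and severity scores, whereas the probabilistic variant estimates, for each failure mode, the relative frequency with which it occurs and escapes detection. The failure modes and all numbers below are hypothetical, and severity is kept categorical as in the abstract.

```python
# Each entry: (failure mode, severity score 1-10, P(occurrence per run), P(detection)).
failure_modes = [
    ("wrong reference spectrum loaded", 8, 0.002, 0.95),
    ("operator skips system-suitability", 6, 0.010, 0.80),
    ("instrument drift not flagged", 7, 0.005, 0.60),
]

print(f"{'failure mode':36s}{'P(undetected)':>14s}{'per 10,000 runs':>17s}")
for name, severity, p_occ, p_det in failure_modes:
    p_undetected = p_occ * (1.0 - p_det)      # occurs AND escapes detection
    print(f"{name:36s}{p_undetected:14.5f}{p_undetected * 1e4:17.1f}")

# Frequency of at least one undetected failure per run for the whole procedure,
# assuming the failure modes are independent.
p_none = 1.0
for _, _, p_occ, p_det in failure_modes:
    p_none *= 1.0 - p_occ * (1.0 - p_det)
print(f"P(any undetected failure per run) = {1.0 - p_none:.5f}")
```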

  7. Implementation of an Adaptive Controller System from Concept to Flight Test

    NASA Technical Reports Server (NTRS)

    Larson, Richard R.; Burken, John J.; Butler, Bradley S.; Yokum, Steve

    2009-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) is conducting ongoing flight research using adaptive controller algorithms. A highly modified McDonnell-Douglas NF-15B airplane called the F-15 Intelligent Flight Control System (IFCS) is used to test and develop these algorithms. Modifications to this airplane include adding canards and changing the flight control systems to interface a single-string research controller processor for neural network algorithms. Research goals include demonstration of revolutionary control approaches that can efficiently optimize aircraft performance in both normal and failure conditions and advancement of neural-network-based flight control technology for new aerospace system designs. This report presents an overview of the processes utilized to develop adaptive controller algorithms during a flight-test program, including a description of initial adaptive controller concepts and a discussion of modeling formulation and performance testing. Design finalization led to integration with the system interfaces, verification of the software, validation of the hardware to the requirements, design of failure detection, development of safety limiters to minimize the effect of erroneous neural network commands, and creation of flight test control room displays to maximize human situational awareness; these are also discussed.

  8. Psychometrics of the Zarit Burden Interview in Caregivers of Patients With Heart Failure.

    PubMed

    Al-Rawashdeh, Sami Y; Lennie, Terry A; Chung, Misook L

    Identification of family caregivers who are burdened by the caregiving experience is vital to prevention of poor outcomes associated with caregiving. The Zarit Burden Interview (ZBI), a well-known measure of caregiving burden in caregivers of patients with dementia, has been used without being validated in caregivers of patients with heart failure (HF). The purpose of this study is to examine the reliability and validity of the ZBI in caregivers of patients with HF. A total of 124 primary caregivers of patients with HF completed survey questionnaires. Caregiving burden was measured by the ZBI. Reliability was examined using Cronbach's α and item-total/item-item correlations. Convergent validity was examined using correlations with the Oberst Caregiving Burden Scale. Construct validity was demonstrated by exploratory factor analysis and known hypothesis testing (ie, the hypothesis of the association between caregiving burden and depressive symptoms). Cronbach's α for the ZBI was .921. The ZBI had good item-total (r = 0.395-0.764) and item-item (mean r = 0.365) correlations. Significant correlations between the ZBI and the Oberst Caregiving Burden Scale (r = 0.466 for the caregiving time subscale and 0.583 for the caregiving task difficulty subscale; P < .001 for both) supported convergent validity. Four factors were identified (ie, consequences of caregiving, patient's dependence, exhaustion with caregiving and uncertainty, and guilt and fear for the patient's future) using factor analysis, which are consistent with previous studies. Caregivers with high burden scores had significantly higher depressive symptoms than did caregivers with lower burden scores (7.0 ± 6.8 vs 3.1 ± 4.3; P < .01). The findings provide evidence that the ZBI is a reliable and valid measure for assessing burden in caregivers of patients with HF.
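
    Cronbach's α, the internal-consistency statistic reported above, is computed from the item variances and the variance of the total score. The sketch below shows the calculation on a small set of hypothetical Likert responses; the real ZBI has 22 items and the study's α of .921 came from 124 caregivers.

```python
import statistics

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

    `items` is a list of k items, each a list of scores from the same respondents.
    """
    k = len(items)
    n_respondents = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n_respondents)]
    sum_item_var = sum(statistics.variance(item) for item in items)
    return k / (k - 1) * (1.0 - sum_item_var / statistics.variance(totals))

# Hypothetical responses (0-4 Likert) from five caregivers to four items.
items = [
    [2, 3, 1, 4, 2],
    [1, 3, 2, 4, 1],
    [2, 2, 1, 3, 2],
    [3, 4, 2, 4, 2],
]
print(f"alpha = {cronbach_alpha(items):.3f}")
```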

  9. Experimental strength of restorations with fibre posts at different stages, with and without using a simulated ligament.

    PubMed

    Pérez-González, A; González-Lluch, C; Sancho-Bru, J L; Rodríguez-Cervantes, P J; Barjau-Escribano, A; Forner-Navarro, L

    2012-03-01

    The aim of this study was to analyse the strength and failure mode of teeth restored with fibre posts under retention and flexural-compressive loads at different stages of the restoration and to analyse whether including a simulated ligament in the experimental setup has any effect on the strength or the failure mode. Thirty human maxillary central incisors were distributed in three different groups to be restored with simulation of different restoration stages (1: only post, 2: post and core, 3: post-core and crown), using Rebilda fibre posts. The specimens were inserted in resin blocks and loaded by means of a universal testing machine until failure under tension (stage 1) and 50º flexion (stages 2-3). Half the specimens in each group were restored using a simulated ligament between root dentine and resin block and the other half did not use this element. Failure in stage 1 always occurred at the post-dentine interface, with a mean failure load of 191·2 N. Failure in stage 2 was located mainly in the core or coronal dentine (mean failure load of 505·9 N). Failure in stage 3 was observed in the coronal dentine (mean failure load 397·4 N). Failure loads registered were greater than expected masticatory loads. Fracture modes were mostly reparable, thus indicating that this post is clinically valid at the different stages of restoration studied. The inclusion of the simulated ligament in the experimental system did not show a statistically significant effect on the failure load or the failure mode. © 2011 Blackwell Publishing Ltd.

  10. Modeling Dynamic Anisotropic Behaviour and Spall Failure in Commercial Aluminium Alloys AA7010

    NASA Astrophysics Data System (ADS)

    Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.

    2018-04-01

    This paper presents a finite strain constitutive model to predict the complex elastoplastic deformation behaviour, involving very high pressures and shockwaves, of orthotropic materials such as aluminium alloys. A previously published constitutive model is used as the starting point for the development in this work. The proposed formulation, which uses a new definition of the Mandel stress tensor to define Hill's yield criterion and a new shock equation of state (EOS) based on a generalised orthotropic pressure, is further enhanced with the Grady spall failure model to closely predict shockwave propagation and spall failure in the chosen commercial aluminium alloy. This hyperelastic-plastic constitutive model is implemented as a new material model in the UTHM version of the Lawrence Livermore National Laboratory (LLNL) DYNA3D code, named Material Type 92 (Mat92). The implementation of the new EOS of the generalised orthotropic pressure, including the spall failure model, is also discussed in this paper. The capability of the proposed constitutive model to capture the complex behaviour of the selected material is validated against a range of plate impact test data at impact velocities of 234, 450 and 895 m s-1.

  11. A Diagnostic Approach for Electro-Mechanical Actuators in Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Saxena, Abhinav; Bansal, Prasun; Goebel, Kai Frank; Stoelting, Paul; Curran, Simon

    2009-01-01

    Electro-mechanical actuators (EMA) are finding increasing use in aerospace applications, especially with the trend towards all-electric aircraft and spacecraft designs. However, electro-mechanical actuators still lack the knowledge base accumulated for other fielded actuator types, particularly with regard to fault detection and characterization. This paper presents a thorough analysis of some of the critical failure modes documented for EMAs and describes experiments conducted on detecting and isolating a subset of them. The list of failures has been prepared through an extensive Failure Modes, Effects and Criticality Analysis (FMECA) reference, literature review, and accessible industry experience. Methods for data acquisition and validation of algorithms on EMA test stands are described. A variety of condition indicators were developed that enabled detection, identification, and isolation among the various fault modes. A diagnostic algorithm based on an artificial neural network is shown to operate successfully using these condition indicators, and furthermore, the robustness of these diagnostic routines to sensor faults is demonstrated by showing their ability to distinguish between sensor faults and component failures. The paper concludes with a roadmap leading from this effort towards developing successful prognostic algorithms for electromechanical actuators.

  12. Anomaly Monitoring Method for Key Components of Satellite

    PubMed Central

    Fan, Linjun; Xiao, Weidong; Tang, Jun

    2014-01-01

    This paper presented a fault diagnosis method for key components of satellites, called the Anomaly Monitoring Method (AMM), which is made up of state estimation based on Multivariate State Estimation Techniques (MSET) and anomaly detection based on the Sequential Probability Ratio Test (SPRT). On the basis of failure analysis of lithium-ion batteries (LIBs), we divided the failure of LIBs into internal failure, external failure, and thermal runaway, and selected the electrolyte resistance (Re) and the charge transfer resistance (Rct) as the key parameters for state estimation. Then, using actual in-orbit telemetry data of the key parameters of LIBs, we obtained the actual residual value (RX) and healthy residual value (RL) of the LIBs based on the state estimation of MSET, and then, from the residual values (RX and RL) of the LIBs, we detected anomaly states based on the anomaly detection of SPRT. Lastly, we conducted an example application of AMM for LIBs and, according to the results, validated the feasibility and effectiveness of AMM by comparing it with the results of the threshold detection method (TDM). PMID:24587703
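
    The SPRT step accumulates a log-likelihood ratio over the MSET residuals and declares an anomaly or a healthy state once Wald's thresholds are crossed. A minimal sketch for a positive mean shift in Gaussian residuals is given below; the residual sequence and all parameters are illustrative assumptions, not telemetry from the paper.

```python
import math

def sprt_mean_shift(residuals, sigma, m0=0.0, m1=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT for a shift of the residual mean from m0 to m1 (Gaussian, known sigma)."""
    upper = math.log((1.0 - beta) / alpha)   # cross upward -> accept H1 (anomaly)
    lower = math.log(beta / (1.0 - alpha))   # cross downward -> accept H0 (healthy)
    llr = 0.0
    for r in residuals:
        llr += ((m1 - m0) / sigma ** 2) * (r - (m0 + m1) / 2.0)
        if llr >= upper:
            return "anomaly"
        if llr <= lower:
            return "healthy"
    return "continue sampling"

# Hypothetical residuals (actual value minus MSET estimate) for a parameter such as Rct.
residuals = [0.1, -0.2, 0.9, 1.2, 1.1, 1.4, 0.8, 1.3]
print(sprt_mean_shift(residuals, sigma=0.5))   # -> "anomaly"
```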

  13. WSEAT Shock Testing Margin Assessment Using Energy Spectra Final Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisemore, Carl; Babuska, Vit; Booher, Jason

    Several programs at Sandia National Laboratories have adopted energy spectra as a metric to relate the severity of mechanical insults to structural capacity. The purpose is to gain insight into the system's capability and reliability, and to quantify the ultimate margin between the normal operating envelope and the likely system failure point -- a system margin assessment. The fundamental concern with the use of energy metrics was that the applicability domain and implementation details were not completely defined for many problems of interest. The goal of this WSEAT project was to examine that domain of applicability and work out the necessary implementation details, and to provide experimental validation for the energy spectra based methods in the context of margin assessment as they relate to shock environments. The extensive test results showed that failure predictions using energy methods did not agree with failure predictions using S-N data. As a result, a modification to the energy methods was developed following the form of Basquin's equation to incorporate the power-law exponent for fatigue damage. This update to the energy-based framework brings the energy-based metrics into agreement with experimental data and historical S-N data.
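
    For context, Basquin's equation is the standard power-law relation between stress amplitude and cycles to failure; the modification referred to above borrows its exponent, though the exact way it enters the energy-spectrum metric is specific to the report and not reproduced here.

```latex
% Basquin's relation: stress amplitude versus reversals to failure
\sigma_a = \sigma_f' \, (2 N_f)^{b}
% \sigma_a: stress amplitude, \sigma_f': fatigue strength coefficient,
% N_f: cycles to failure, b: fatigue strength (power-law) exponent
```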

  14. OvidSP Medline-to-PubMed search filter translation: a methodology for extending search filter range to include PubMed's unique content

    PubMed Central

    2013-01-01

    Background: PubMed translations of OvidSP Medline search filters offer searchers improved ease of access. They may also facilitate access to PubMed’s unique content, including citations for the most recently published biomedical evidence. Retrieving this content requires a search strategy comprising natural language terms (‘textwords’), rather than Medical Subject Headings (MeSH). We describe a reproducible methodology that uses a validated PubMed search filter translation to create a textword-only strategy to extend retrieval to PubMed’s unique heart failure literature. Methods: We translated an OvidSP Medline heart failure search filter for PubMed and established version equivalence in terms of indexed literature retrieval. The PubMed version was then run within PubMed to identify citations retrieved by the filter’s MeSH terms (Heart failure, Left ventricular dysfunction, and Cardiomyopathy). It was then rerun with the same MeSH terms restricted to searching on title and abstract fields (i.e. as ‘textwords’). Citations retrieved by the MeSH search but not the textword search were isolated. Frequency analysis of their titles/abstracts identified natural language alternatives for those MeSH terms that performed less effectively as textwords. These terms were tested in combination to determine the best performing search string for reclaiming this ‘lost set’. This string, restricted to searching on PubMed’s unique content, was then combined with the validated PubMed translation to extend the filter’s performance in this database. Results: The PubMed heart failure filter retrieved 6829 citations. Of these, 834 (12%) failed to be retrieved when MeSH terms were converted to textwords. Frequency analysis of the 834 citations identified five high frequency natural language alternatives that could improve retrieval of this set (cardiac failure, cardiac resynchronization, left ventricular systolic dysfunction, left ventricular diastolic dysfunction, and LV dysfunction). Together these terms reclaimed 157/834 (18.8%) of lost citations. Conclusions: MeSH terms facilitate precise searching in PubMed’s indexed subset. They may, however, work less effectively as search terms prior to subject indexing. A validated PubMed search filter can be used to develop a supplementary textword-only search strategy to extend retrieval to PubMed’s unique content. A PubMed heart failure search filter is available on the CareSearch website (http://www.caresearch.com.au) providing access to both indexed and non-indexed heart failure evidence. PMID:23819658

  15. Acoustic emission evaluation of reinforced concrete bridge beam with graphite composite laminate

    NASA Astrophysics Data System (ADS)

    Johnson, Dan E.; Shen, H. Warren; Finlayson, Richard D.

    2001-07-01

    A test was conducted on August 1, 2000 at the FHwA Non-Destructive Evaluation Validation Center, sponsored by the New York State DOT, to evaluate a graphite composite laminate as an effective form of retrofit for a reinforced concrete bridge beam. One portion of this testing utilized Acoustic Emission monitoring for evaluation of the beam under test. Loading was applied to this beam using a two-point loading scheme at FHwA's facility. This load was applied in several incremental loadings until the failure of the graphite composite laminate took place. Each loading culminated in either visual crack location or large audible emissions from the beam. Between tests, external cracks were located visually and highlighted, and the graphite epoxy was checked for delamination. Acoustic Emission data were collected to locate cracking areas of the structure during the loading cycles. To collect these Acoustic Emission data, FHwA and NYSDOT utilized a Local Area Monitor, an Acoustic Emission instrument developed in a cooperative effort between FHwA and Physical Acoustics Corporation. Eight Acoustic Emission sensors were attached to the structure, four on each side, in a symmetrical fashion. As testing progressed and culminated with beam failure, Acoustic Emission data were gathered and correlated against time and test load. This paper discusses the analysis of these test data.

  16. Modeling Adhesive Anchors in a Discrete Element Framework

    PubMed Central

    Marcon, Marco; Vorel, Jan; Ninčević, Krešimir; Wan-Wendner, Roman

    2017-01-01

    In recent years, post-installed anchors have been widely used to connect structural members and to fix appliances to load-bearing elements. A bonded anchor typically denotes a threaded bar placed into a borehole filled with adhesive mortar. The high complexity of the problem, owing to the multiple materials and failure mechanisms involved, requires numerical support for the experimental investigation. A reliable model able to reproduce a system’s short-term behavior is needed before the development of a more complex framework for the subsequent investigation of the lifetime of fasteners subjected to various deterioration processes can commence. The focus of this contribution is the development and validation of such a model for bonded anchors under pure tension load. Compression, modulus, fracture and splitting tests are performed on standard concrete specimens. These serve for the calibration and validation of the concrete constitutive model. The behavior of the adhesive mortar layer is modeled with a stress-slip law, calibrated on a set of confined pull-out tests. The model validation is performed on tests with different configurations, comparing load-displacement curves, crack patterns and concrete cone shapes. A model sensitivity analysis and the evaluation of the bond stress and slippage along the anchor complete the study. PMID:28786964

  17. Factors affecting stress assisted corrosion cracking of carbon steel under industrial boiler conditions

    NASA Astrophysics Data System (ADS)

    Yang, Dong

    Failure of carbon steel boiler tubes from the waterside has been reported in utility boilers and industrial boilers for a long time. In industrial boilers, most waterside tube cracks are found near heavy attachment welds on the outer surface and are typically blunt, with multiple bulbous features indicating discontinuous growth. These types of tube failures are typically referred to as stress assisted corrosion (SAC). For recovery boilers in the pulp and paper industry, these failures are particularly important as any water leak inside the furnace can potentially lead to a smelt-water explosion. Metal properties, environmental variables, and stress conditions are the major factors influencing SAC crack initiation and propagation in carbon steel boiler tubes. Slow strain rate tests (SSRT) were conducted under boiler water conditions to study the effect of temperature, oxygen level, and stress conditions on crack initiation and propagation in SA-210 carbon steel samples machined out of boiler tubes. Heat treatments were also performed to develop various grain sizes and carbon contents in the carbon steel samples, and SSRTs were conducted on these samples to examine the effect of microstructural features on SAC cracking. Mechanisms of SAC crack initiation and propagation were proposed and validated based on interrupted slow strain rate tests (ISSRT). Water chemistry guidelines are provided to prevent SAC, and a fracture mechanics model is developed to predict SAC failure in industrial boiler tubes.

  18. The MMPI-2 Symptom Validity Scale (FBS) not influenced by medical impairment: a large sleep center investigation.

    PubMed

    Greiffenstein, Manfred F

    2010-06-01

    The Symptom Validity Scale (Minnesota Multiphasic Personality Inventory-2-FBS [MMPI-2-FBS]) is a standard MMPI-2 validity scale measuring overstatement of somatic distress and subjective disability. Some critics assert the MMPI-2-FBS misclassifies too many medically impaired persons as malingering symptoms. This study tests the assertion of malingering misclassification with a large sample of 345 medical inpatients undergoing sleep studies that standardly included MMPI-2 testing. The variables included standard MMPI-2 validity scales (Lie Scale [L], Infrequency Scale [F], K-Correction [K]; FBS), objective medical data (e.g., body mass index, pulse oximetry), and polysomnographic scores (e.g., apnea/hypopnea index). The results showed the FBS had no substantial or unique association with medical/sleep variables, produced false positive rates <20% (median = 9, range = 4-11), and male inpatients showed marginally higher failure rates than females. The MMPI-2-FBS appears to have acceptable specificity, because it did not misclassify as biased responders those medical patients with sleep problems, male or female, with primary gain only (reducing sickness). Medical impairment does not appear to be a major influence on deviant MMPI-2-FBS scores.

  19. Does True Neurocognitive Dysfunction Contribute to Minnesota Multiphasic Personality Inventory-2nd Edition-Restructured Form Cognitive Validity Scale Scores?

    PubMed

    Martin, Phillip K; Schroeder, Ryan W; Heinrichs, Robin J; Baade, Lyle E

    2015-08-01

    Previous research has demonstrated RBS and FBS-r to identify non-credible reporters of cognitive symptoms, but the extent that these scales might be influenced by true neurocognitive dysfunction has not been previously studied. The present study examined the relationship between these cognitive validity scales and neurocognitive performance across seven domains of cognitive functioning, both before and after controlling for PVT status in 120 individuals referred for neuropsychological evaluations. Variance in RBS, but not FBS-r, was significantly accounted for by neurocognitive test performance across most cognitive domains. After controlling for PVT status, however, relationships between neurocognitive test performance and validity scales were no longer significant for RBS, and remained non-significant for FBS-r. Additionally, PVT failure accounted for a significant proportion of the variance in both RBS and FBS-r. Results support both the convergent and discriminant validity of RBS and FBS-r. As neither scale was impacted by true neurocognitive dysfunction, these findings provide further support for the use of RBS and FBS-r in neuropsychological evaluations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Numerical Simulation and Experimental Validation of Failure Caused by Vibration of a Fan

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Han, Wu; Feng, Jianmei; Jia, Xiaohan; Peng, Xueyuan

    2017-08-01

    This paper presents the root cause analysis of an unexpected fracture that occurred on the blades of a motor fan used in a natural gas reciprocating compressor unit. A finite element model was established to investigate the natural frequencies and modal shapes of the fan, and a modal test was performed to verify the numerical results. The numerical results agreed well with the experimental data. The third order natural frequency was close to six times the excitation frequency, and the corresponding modal shape was a combination of bending and torsional vibration, which consequently contributed to low-order resonance and fracture failure of the fan. The torsional moment obtained by a torsional vibration analysis of the compressor shaft system was exerted on the numerical model of the fan to evaluate the dynamic stress response of the fan. The results showed that the stress concentration regions on the numerical model were consistent with the location of fractures on the fan. Based on the numerical simulation and experimental validation, some recommendations were given to improve the reliability of the motor fan.
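
    The failure mechanism described above is essentially a resonance coincidence between a natural frequency and an excitation harmonic. A minimal sketch of that kind of margin check, with all frequencies made up rather than taken from the cited analysis:

```python
# Hypothetical resonance-margin check: flag natural frequencies that fall close
# to harmonics of the excitation frequency. All values are illustrative only.
excitation_hz = 24.8                     # e.g., compressor running speed
natural_freqs_hz = [55.0, 92.0, 149.5]   # first three modes from an FE model
margin = 0.05                            # flag anything within 5% of a harmonic

for mode, fn in enumerate(natural_freqs_hz, start=1):
    for harmonic in range(1, 9):
        f_exc = harmonic * excitation_hz
        if abs(fn - f_exc) / f_exc < margin:
            print(f"mode {mode} ({fn} Hz) lies within {margin:.0%} of "
                  f"{harmonic}x excitation ({f_exc:.1f} Hz): possible resonance")
```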

  1. Differential sensitivity of the Response Bias Scale (RBS) and MMPI-2 validity scales to memory complaints.

    PubMed

    Gervais, Roger O; Ben-Porath, Yossef S; Wygant, Dustin B; Green, Paul

    2008-12-01

    The MMPI-2 Response Bias Scale (RBS) is designed to detect response bias in forensic neuropsychological and disability assessment settings. Validation studies have demonstrated that the scale is sensitive to cognitive response bias as determined by failure on the Word Memory Test (WMT) and other symptom validity tests. Exaggerated memory complaints are a common feature of cognitive response bias. The present study was undertaken to determine the extent to which the RBS is sensitive to memory complaints and how it compares in this regard to other MMPI-2 validity scales and indices. This archival study used MMPI-2 and Memory Complaints Inventory (MCI) data from 1550 consecutive non-head-injury disability-related referrals to the first author's private practice. ANOVA results indicated significant increases in memory complaints across increasing RBS score ranges with large effect sizes. Regression analyses indicated that the RBS was a better predictor of the mean memory complaints score than the F, F(B), and F(P) validity scales and the FBS. There was no correlation between the RBS and the CVLT, an objective measure of verbal memory. These findings suggest that elevated scores on the RBS are associated with over-reporting of memory problems, which provides further external validation of the RBS as a sensitive measure of cognitive response bias. Interpretive guidelines for the RBS are provided.

  2. Frailty Assessment in Heart Failure: an Overview of the Multi-domain Approach.

    PubMed

    McDonagh, Julee; Ferguson, Caleb; Newton, Phillip J

    2018-02-01

    The study aims (1) to provide a contemporary description of frailty assessment in heart failure and (2) to provide an overview of multi-domain frailty assessment in heart failure. Frailty assessment is an important predictive measure for mortality and hospitalisation in individuals with heart failure. To date, there are no frailty assessment instruments validated for use in heart failure. This has resulted in significant heterogeneity between studies regarding the assessment of frailty. The most common frailty assessment instrument used in heart failure is the Frailty Phenotype, which focuses on five physical domains of frailty; the appropriateness of a purely physical measure of frailty in individuals with heart failure, who frequently experience decreased exercise tolerance and shortness of breath, is yet to be determined. A limited number of studies have approached frailty assessment using a multi-domain view, which may be more clinically relevant in heart failure. There remains a lack of consensus regarding frailty assessment and an absence of a validated instrument in heart failure. Despite this, frailty continues to be assessed frequently, primarily for research purposes, using predominantly physical frailty measures. A more multidimensional view of frailty assessment using a multi-domain approach will likely be more sensitive in identifying at-risk patients.

  3. Computational Modeling and Experimental Validation of Shock Induced Damage in Woven E-Glass/Vinylester Laminates

    NASA Astrophysics Data System (ADS)

    Hufner, D. R.; Augustine, M. R.

    2018-05-01

    A novel experimental method was developed to simulate underwater explosion pressure pulses within a laboratory environment. An impact-based experimental apparatus was constructed, capable of generating pressure pulses with a basic character similar to underwater explosions, while also allowing the pulse to be tuned to different intensities. Having the capability to vary the shock impulse was considered essential to producing various levels of shock-induced damage without the need to modify the fixture. The experimental apparatus and test method are considered ideal for investigating the shock response of composite material systems and/or experimental validation of new material models. One such test program is presented herein, in which a series of E-glass/Vinylester laminates were subjected to a range of shock pulses that induced varying degrees of damage. Analysis-test correlations were performed using a rate-dependent constitutive model capable of representing anisotropic damage and ultimate yarn failure. Agreement between analytical predictions and experimental results was considered acceptable.

  4. A rational approach to legacy data validation when transitioning between electronic health record systems.

    PubMed

    Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A

    2016-09-01

    The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sample size for data validation can ensure consistent confidence levels while maximizing efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
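
    The record does not state which sampling formula was used; a minimal sketch of one common way to size a validation sample for a desired confidence level and error limit, using the normal approximation for a proportion with a finite-population correction (the population size and limits below are assumptions for illustration):

```python
import math
from statistics import NormalDist

def validation_sample_size(population, confidence=0.95, error_limit=0.05, p=0.5):
    """Normal-approximation sample size for estimating an error proportion to
    within +/- error_limit, with finite-population correction. p = 0.5 is the
    conservative (worst-case) assumption about the underlying error rate."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = (z ** 2) * p * (1 - p) / error_limit ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g., how many migrated records of one data type to review out of 250,000
print(validation_sample_size(250_000, confidence=0.95, error_limit=0.02))
```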

  5. Development of the Orion Crew-Service Module Umbilical Retention and Release Mechanism

    NASA Technical Reports Server (NTRS)

    Delap, Damon C.; Glidden, Joel Micah; Lamoreaux, Christopher

    2013-01-01

    The Orion CSM umbilical retention and release mechanism supports and protects all of the cross-module commodities between the spacecraft's crew and service modules. These commodities include explosive transfer lines, wiring for power and data, and flexible hoses for ground purge and life support systems. The mechanism employs a single separation interface which is retained with pyrotechnically actuated separation bolts and supports roughly two dozen electrical and fluid connectors. When module separation is commanded, either for nominal on-orbit CONOPS or in the event of an abort, the mechanism must release the separation interface and sever all commodity connections within milliseconds of command receipt. There are a number of unique and novel aspects of the design solution developed by the Orion mechanisms team. The design is highly modular and can easily be adapted to other vehicles/modules and alternate commodity sets. It will be flight tested during Orion's Exploration Flight Test 1 (EFT-1) in 2014, and the Orion team anticipates reuse of the design for all future missions. The design packages fluid, electrical, and ordnance disconnects in a single separation interface. It supports abort separations even in cases where aerodynamic loading prevents the deployment of the umbilical arm. Unlike the Apollo CSM umbilical which was a destructive separation device, the Orion design is resettable and flight units can be tested for separation performance prior to flight. Initial development testing of the mechanism's separation interface resulted in binding failures due to connector misalignments. The separation interface was redesigned with a robust linear guide system, and the connector separation and boom deployment were separated into two discretely sequenced events. These changes addressed the root cause of the binding failure by providing better control of connector alignment. The new design was tuned and validated analytically via Monte Carlo simulation. The analytical validation was followed by a repeat of the initial test suite plus test cases at thermal extremes and test cases with imposed mechanical failures demonstrating fault tolerance. The mechanism was then exposed to the qualification vibration environment. Finally, separation testing was performed at full speed with live ordnance. All tests of the redesigned mechanism resulted in successful separation of the umbilical interface with adequate force margins and timing. The test data showed good agreement with the predictions of the Monte Carlo simulation. The simulation proved invaluable due to the number of variables affecting the separation and the uncertainty associated with each. The simulation allowed for rapid assessment of numerous trades and contingency scenarios, and can be easily reconfigured for varying commodity sets and connector layouts.

  6. Thermal-Structural Analysis of PICA Tiles for Solar Tower Test

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Empey, Daniel M.; Squire, Thomas H.

    2009-01-01

    Thermal protection materials used in spacecraft heatshields are subjected to severe thermal and mechanical loading environments during re-entry into the Earth's atmosphere. In order to investigate the reliability of PICA tiles in the presence of high thermal gradients as well as mechanical loads, the authors designed and conducted solar-tower tests. This paper presents the design and analysis work for this test series. Coupled non-linear thermal-mechanical finite element analyses were conducted to estimate in-depth temperature distribution and stress contours for various cases. The first set of analyses, performed on an isolated PICA tile, showed that stresses generated during the tests were below the PICA allowable limit and should not lead to any catastrophic failure during the test. The test results were consistent with analytical predictions. The temperature distribution and magnitude of the measured strains were also consistent with predicted values. The second test series is designed to test the arrayed PICA tiles with various gap-filler materials. A nonlinear contact method is used to model the complex geometry with various tiles. The analyses for these coupons predict the stress contours in PICA and inside gap fillers. Suitable mechanical loads for this architecture will be predicted, which can be applied during the test to exceed the allowable limits and demonstrate failure modes. Thermocouple and strain-gauge data obtained from the solar tower tests will be used for subsequent analyses and validation of FEM models.

  7. Expression of FOXP3, CD68, and CD20 at Diagnosis in the Microenvironment of Classical Hodgkin Lymphoma Is Predictive of Outcome

    PubMed Central

    Greaves, Paul; Clear, Andrew; Coutinho, Rita; Wilson, Andrew; Matthews, Janet; Owen, Andrew; Shanyinde, Milensu; Lister, T. Andrew; Calaminici, Maria; Gribben, John G.

    2013-01-01

    Purpose The immune microenvironment is key to the pathophysiology of classical Hodgkin lymphoma (CHL). Twenty percent of patients experience failure of their initial treatment, and others receive excessively toxic treatment. Prognostic scores and biomarkers have yet to influence outcomes significantly. Previous biomarker studies have been limited by the extent of tissue analyzed, statistical inconsistencies, and failure to validate findings. We aimed to overcome these limitations by validating recently identified microenvironment biomarkers (CD68, FOXP3, and CD20) in a new patient cohort with a greater extent of tissue and by using rigorous statistical methodology. Patients and Methods Diagnostic tissue from 122 patients with CHL was microarrayed and stained, and positive cells were counted across 10 to 20 high-powered fields per patient by using an automated system. Two statistical analyses were performed: a categorical analysis with test/validation set-defined cut points and Kaplan-Meier estimated outcome measures of 5-year overall survival (OS), disease-specific survival (DSS), and freedom from first-line treatment failure (FFTF) and an independent multivariate analysis of absolute uncategorized counts. Results Increased CD20 expression confers superior OS. Increased FOXP3 expression confers superior OS, and increased CD68 confers inferior FFTF and OS. FOXP3 varies independently of CD68 expression and retains significance when analyzed as a continuous variable in multivariate analysis. A simple score combining FOXP3 and CD68 discriminates three groups: FFTF 93%, 62%, and 47% (P < .001), DSS 93%, 82%, and 63% (P = .03), and OS 93%, 82%, and 59% (P = .002). Conclusion We have independently validated CD68, FOXP3, and CD20 as prognostic biomarkers in CHL, and we demonstrate, to the best of our knowledge for the first time, that combining FOXP3 and CD68 may further improve prognostic stratification. PMID:23045593

  8. A study of unstable rock failures using finite difference and discrete element methods

    NASA Astrophysics Data System (ADS)

    Garvey, Ryan J.

    Case histories in mining have long described pillars or faces of rock failing violently with an accompanying rapid ejection of debris and broken material into the working areas of the mine. These unstable failures have resulted in large losses of life and collapses of entire mine panels. Modern mining operations take significant steps to reduce the likelihood of unstable failure; however, eliminating their occurrence is difficult in practice. Researchers over several decades have supplemented studies of unstable failures through the application of various numerical methods. The direction of the current research is to extend these methods and to develop improved numerical tools with which to study unstable failures in underground mining layouts. An extensive study is first conducted on the expression of unstable failure in discrete element and finite difference methods. Simulated uniaxial compressive strength tests are run on brittle rock specimens. Stable or unstable loading conditions are applied to the brittle specimens by a pair of elastic platens with ranging stiffnesses. Determinations of instability are established through stress and strain histories taken for the specimen and the system. Additional numerical tools are then developed for the finite difference method to analyze unstable failure in larger mine models. Instability identifiers are established for assessing the locations and relative magnitudes of unstable failure through measures of rapid dynamic motion. An energy balance is developed which calculates the excess energy released as a result of unstable equilibria in rock systems. These tools are validated through uniaxial and triaxial compressive strength tests and are extended to models of coal pillars and a simplified mining layout. The results of the finite difference simulations reveal that the instability identifiers and excess energy calculations provide a generalized methodology for assessing unstable failures within potentially complex mine models. These combined numerical tools may be applied in future studies to design primary and secondary supports in bump-prone conditions, evaluate retreat mining cut sequences, assess pillar de-stressing techniques, or perform back-analyses on unstable failures in select mining layouts.

  9. Preemployment physical evaluation.

    PubMed

    Jackson, A S

    1994-01-01

    There is a growing trend toward using preemployment tests to select employees for physically demanding jobs. Women are, in increasing numbers, entering physically demanding occupations that were traditionally dominated by men. Under current Federal employment law, it is illegal to disqualify an employee for a job because of race, color, religion, sex, national origin, and with the recent passage of the Americans with Disabilities Act (ADA), handicap. Because of gender differences in strength, body composition, and VO2max, preemployment tests for physically demanding jobs tend to screen out more females than males. Employers are using preemployment tests not only to enhance worker productivity, but also to minimize the threat of litigation for discriminatory hiring practices and to reduce the risk of musculoskeletal injuries. The primary ergonomic methods used in industry to reduce the risk of back injuries are preemployment testing and job redesign. When a test results in adverse impact, the validity of the test must be established. Validity in this context means that the test represents or predicts the applicant's capacity to perform the job. Criterion-related, content, and construct validation studies are the means used to establish validity. The validity of preemployment hiring practices for physically demanding jobs has been decided in the courts. The most common reason for ruling an employment practice invalid is the failure to show that the test measured important job behaviors. Much of this litigation has involved height and weight requirements for public safety jobs. The courts have generally ruled that using height and weight standards as criteria for employment is illegal because they were not job related. If fitness tests comprise part or all of the preemployment test, it is essential to demonstrate that the fitness component is related to job performance. Although there are many factors to consider when establishing a cut score, there is a growing trend toward establishing the cut score on the basis of the job's physical demands, defined by VO2max and strength. This literature is limited because most validation studies are not published. They more typically take the form of a technical report to the governmental agency or company that funded the project. There are published preemployment validation studies for outdoor telephone craft jobs involving pole-climbing tasks; firefighters; highway patrol officers; steel workers; underground coal miners; chemical plant workers; electrical transmission lineworkers; and various military jobs.

  10. Extended test of a xenon hollow cathode for a space plasma contactor

    NASA Technical Reports Server (NTRS)

    Sarver-Verhey, Timothy R.

    1994-01-01

    Implementation of a hollow cathode plasma contactor for charge control on the Space Station has required validation of long-life hollow cathodes. A test series of hollow cathodes and hollow cathode plasma contactors was initiated as part of the plasma contactor development program. An on-going wear-test of a hollow cathode has demonstrated cathode operation in excess of 4700 hours with small changes in operating parameters. The discharge experienced 4 shutdowns during the test, all of which were due to test facility failures or expellant replenishment. In all cases, the cathode was reignited at approximately 42 volts and resumed typical operation. This test represents the longest demonstrated stable operation of a high current (greater than 1A) xenon hollow cathode reported to date.

  11. Continuing life test of a xenon hollow cathode for a space plasma contactor

    NASA Technical Reports Server (NTRS)

    Sarver-Verhey, Timothy R.

    1994-01-01

    Implementation of a hollow cathode plasma contactor for charge control on the Space Station has required validation of long-life hollow cathodes. A test series of hollow cathodes and hollow cathode plasma contactors was initiated as part of the plasma contactor development program. An on-going wear-test of a hollow cathode has demonstrated cathode operation in excess of 10,000 hours with small changes in operating parameters. The discharge has experienced 10 shutdowns during the test, all of which were due to test facility failures or expellant replenishment. In all cases, the cathode was re-ignited at approximately 42 volts and resumed typical operation. This test represents the longest demonstrated stable operation of a high current (greater than 1 A) xenon hollow cathode reported to date.

  12. Spacecraft Testing Programs: Adding Value to the Systems Engineering Process

    NASA Technical Reports Server (NTRS)

    Britton, Keith J.; Schaible, Dawn M.

    2011-01-01

    Testing has long been recognized as a critical component of spacecraft development activities - yet many major systems failures may have been prevented with more rigorous testing programs. The question is why is more testing not being conducted? Given unlimited resources, more testing would likely be included in a spacecraft development program. Striking the right balance between too much testing and not enough has been a long-term challenge for many industries. The objective of this paper is to discuss some of the barriers, enablers, and best practices for developing and sustaining a strong test program and testing team. This paper will also explore the testing decision factors used by managers; the varying attitudes toward testing; methods to develop strong test engineers; and the influence of behavior, culture and processes on testing programs. KEY WORDS: Risk, Integration and Test, Validation, Verification, Test Program Development

  13. Experimental and Numerical Studies on the Formability of Materials in Hot Stamping and Cold Die Quenching Processes

    NASA Astrophysics Data System (ADS)

    Li, N.; Mohamed, M. S.; Cai, J.; Lin, J.; Balint, D.; Dean, T. A.

    2011-05-01

    Formability of steel and aluminium alloys in hot stamping and cold die quenching processes is studied in this research. Viscoplastic-damage constitutive equations are developed and determined from experimental data for the prediction of viscoplastic flow and ductility of the materials. The determined unified constitutive equations are then implemented into the commercial Finite Element code Abaqus/Explicit via a user defined subroutine, VUMAT. An FE process simulation model and numerical procedures are established for the modeling of hot stamping processes for a spherical part with a central hole. Different failure modes (failure takes place either near the central hole or in the mid span of the part) are obtained. To validate the simulation results, a test programme is developed, a test die set has been designed and manufactured, and tests have been carried out for the materials with different forming rates. It has been found that very close agreements between experimental and numerical process simulation results are obtained for the ranges of temperatures and forming rates carried out.

  14. Consistency of FMEA used in the validation of analytical procedures.

    PubMed

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define their own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified failure modes above the 90th percentile of RPN values as needing urgent corrective action, and failure modes falling between the 75th and 90th percentiles of RPN values as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action respectively, with two being commonly identified. Of the failure modes needing necessary corrective actions, about a third were commonly identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that FMEA always be carried out under the supervision of an experienced FMEA-facilitator and that the FMEA team has at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
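
    To make the RPN-and-percentile procedure concrete, here is a minimal sketch assuming illustrative severity/occurrence/detection rankings; the failure modes and scales below are invented for illustration and are not the two teams' actual data.

```python
import numpy as np

# Hypothetical failure modes with severity (S), occurrence (O), detection (D)
# rankings on 1-10 scales; values are illustrative, not the teams' data.
failure_modes = {
    "wrong mobile phase composition": (7, 4, 3),
    "column degradation undetected":  (6, 5, 6),
    "sample mix-up at weighing":      (9, 2, 7),
    "MS tune file out of date":       (5, 3, 4),
    "integration parameters changed": (4, 6, 2),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
values = np.array(list(rpn.values()))
p90, p75 = np.percentile(values, [90, 75])

for name, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    if score >= p90:
        action = "urgent corrective action"
    elif score >= p75:
        action = "necessary corrective action"
    else:
        action = "no action required"
    print(f"{name}: RPN={score} -> {action}")
```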

  15. Robust validation of approximate 1-matrix functionals with few-electron harmonium atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cioslowski, Jerzy, E-mail: jerzy@wmf.univ.szczecin.pl; Piris, Mario; Matito, Eduard

    2015-12-07

    A simple comparison between the exact and approximate correlation components U of the electron-electron repulsion energy of several states of few-electron harmonium atoms with varying confinement strengths provides a stringent validation tool for 1-matrix functionals. The robustness of this tool is clearly demonstrated in a survey of 14 known functionals, which reveals their substandard performance within different electron correlation regimes. Unlike spot-testing that employs dissociation curves of diatomic molecules or more extensive benchmarking against experimental atomization energies of molecules comprising some standard set, the present approach not only uncovers the flaws and patent failures of the functionals but, even more importantly, also allows for pinpointing their root causes. Since the approximate values of U are computed at exact 1-densities, the testing requires minimal programming and thus is particularly suitable for rapid screening of new functionals.

  16. A unified phase-field theory for the mechanics of damage and quasi-brittle failure

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying

    2017-06-01

    Being one of the most promising candidates for the modeling of localized failure in solids, so far the phase-field method has been applied only to brittle fracture with very few exceptions. In this work, a unified phase-field theory for the mechanics of damage and quasi-brittle failure is proposed within the framework of thermodynamics. Specifically, the crack phase-field and its gradient are introduced to regularize the sharp crack topology in a purely geometric context. The energy dissipation functional due to crack evolution and the stored energy functional of the bulk are characterized by a crack geometric function of polynomial type and an energetic degradation function of rational type, respectively. Standard arguments of thermodynamics then yield the macroscopic balance equation coupled with an extra evolution law of gradient type for the crack phase-field, governed by the aforesaid constitutive functions. The classical phase-field models for brittle fracture are recovered as particular examples. More importantly, the constitutive functions optimal for quasi-brittle failure are determined such that the proposed phase-field theory converges to a cohesive zone model for a vanishing length scale. Those general softening laws frequently adopted for quasi-brittle failure, e.g., linear, exponential, hyperbolic and Cornelissen et al. (1986) ones, etc., can be reproduced or fit with high precision. Except for the internal length scale, all the other model parameters can be determined from standard material properties (i.e., Young's modulus, failure strength, fracture energy and the target softening law). Some representative numerical examples are presented for the validation. It is found that both the internal length scale and the mesh size have little influence on the overall global responses, so long as the former can be well resolved by a sufficiently fine mesh. In particular, for the benchmark tests of concrete, the numerical results for the load versus displacement curve and crack paths both agree well with the experimental data, showing the validity of the proposed phase-field theory for the modeling of damage and quasi-brittle failure in solids.
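
    For readers unfamiliar with the constitutive ingredients named above, the sketch below writes out the classical brittle (AT2-type) special case of the crack geometric function and the energetic degradation function, which the unified theory generalizes to polynomial and rational families respectively; these are the standard brittle forms, not the quasi-brittle functions derived in the paper.

```python
import numpy as np

# Classical AT2-type phase-field ingredients (the special case the unified
# theory generalizes): crack geometric function alpha(d), degradation omega(d).
def alpha(d):
    return d ** 2                  # quadratic crack geometric function (AT2)

def omega(d, k=1e-9):
    return (1.0 - d) ** 2 + k      # quadratic degradation with small residual k

def crack_surface_density(d, grad_d, length_scale):
    """Regularized crack surface density gamma = (alpha(d)/l + l*|grad d|^2)/c0,
    with c0 = 4 * integral_0^1 sqrt(alpha(s)) ds = 2 for the AT2 choice above."""
    c0 = 2.0
    return (alpha(d) / length_scale + length_scale * grad_d ** 2) / c0

d = np.linspace(0.0, 1.0, 5)
print(omega(d))  # stiffness degrades from ~1 (intact) to ~0 (fully cracked)
```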

  17. Is the Sørensen test valid to assess muscle fatigue of the trunk extensor muscles?

    PubMed

    Demoulin, Christophe; Boyer, Mathieu; Duchateau, Jacques; Grosdent, Stéphanie; Jidovtseff, Boris; Crielaard, Jean-Michel; Vanderthommen, Marc

    2016-01-01

    Very few studies have quantified the degree of fatigue, characterized by the decline in the maximal voluntary contraction (MVC) force of the trunk extensors, induced by the widely used Sørensen test. The aim of this study was to measure the degree of fatigue of the trunk extensor muscles induced by the Sørensen test. Eighty young healthy subjects were randomly divided into a control group (CG) and an experimental group (EG), each including equal numbers of men and women. The EG performed an isometric MVC of the trunk extensors (pre-fatigue test) followed by the Sørensen test, the latter being immediately followed by another MVC (post-fatigue test). The CG performed only the pre- and post-fatigue tests without any exertion in between. The comparison of the pre- and post-fatigue tests revealed a significant (P < 0.05) decrease in MVC force normalized by body mass (-13%) in the EG, whereas a small increase occurred in the CG (+2.7%, P = 0.001). This study shows that the Sørensen test performed until failure in a young healthy population results in a reduced ability of the trunk extensor muscles to generate maximal force, and indicates that this test is valid for the assessment of fatigue in trunk extensor muscles.

  18. Combining the test of memory malingering trial 1 with behavioral responses improves the detection of effort test failure.

    PubMed

    Denning, John Henry

    2014-01-01

    Validity measures derived from the Test of Memory Malingering Trial 1 (TOMM1) and errors across the first 10 items of TOMM1 (TOMMe10) may be further enhanced by combining these scores with "embedded" behavioral responses while patients complete these measures. In a sample of nondemented veterans (n = 151), five possible behavioral responses observed during completion of the first 10 items of the TOMM were combined with TOMM1 and TOMMe10 to assess any increased sensitivity in predicting Medical Symptom Validity Test (MSVT) performance. Both TOMM1 and TOMMe10 alone were highly accurate overall in predicting MSVT performance (TOMM1 [area under the curve (AUC)] = .95, TOMMe10 [AUC] = .92). The combination of TOMM measures and behavioral responses did not increase overall accuracy rates; however, when specificity was held at approximately 90%, there was a slight increase in sensitivity (+7%) for both TOMM measures when combined with the number of "point and name" responses. Examples are provided demonstrating that at a given TOMM score (TOMM1 or TOMMe10), with an increase in "point and name" responses, there is an incremental increase in the probability of failing the MSVT. Exploring the utility of combining freestanding or embedded validity measures with behavioral features during test administration should be encouraged.

  19. Reducing the Risk of Human Space Missions with INTEGRITY

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Dillon-Merill, Robin L.; Tri, Terry O.; Henninger, Donald L.

    2003-01-01

    The INTEGRITY Program will design and operate a test bed facility to help prepare for future beyond-LEO missions. The purpose of INTEGRITY is to enable future missions by developing, testing, and demonstrating advanced human space systems. INTEGRITY will also implement and validate advanced management techniques including risk analysis and mitigation. One important way INTEGRITY will help enable future missions is by reducing their risk. A risk analysis of human space missions is important in defining the steps that INTEGRITY should take to mitigate risk. This paper describes how a Probabilistic Risk Assessment (PRA) of human space missions will help support the planning and development of INTEGRITY to maximize its benefits to future missions. PRA is a systematic methodology to decompose the system into subsystems and components, to quantify the failure risk as a function of the design elements and their corresponding probability of failure. PRA provides a quantitative estimate of the probability of failure of the system, including an assessment and display of the degree of uncertainty surrounding the probability. PRA provides a basis for understanding the impacts of decisions that affect safety, reliability, performance, and cost. Risks with both high probability and high impact are identified as top priority. The PRA of human missions beyond Earth orbit will help indicate how the risk of future human space missions can be reduced by integrating and testing systems in INTEGRITY.
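
    As a toy illustration of the decomposition idea described above (not the actual INTEGRITY PRA, whose subsystems and failure probabilities are not given in the record), the sketch below propagates hypothetical component failure probabilities through simple series and redundant structures, assuming independent failures.

```python
# Toy probabilistic risk sketch: combine hypothetical component failure
# probabilities; the numbers and architecture are illustrative only.
def series(*p_fail):
    """System fails if any element fails (independent failures assumed)."""
    p_ok = 1.0
    for p in p_fail:
        p_ok *= (1.0 - p)
    return 1.0 - p_ok

def parallel(*p_fail):
    """Redundant elements: system fails only if all elements fail."""
    p = 1.0
    for q in p_fail:
        p *= q
    return p

p_life_support = series(1e-3, parallel(5e-3, 5e-3))    # one string plus a redundant pair
p_power        = parallel(2e-3, 2e-3)                   # two redundant strings
p_mission_loss = series(p_life_support, p_power, 5e-4)  # plus an irreducible term

print(f"P(mission-critical failure) ~ {p_mission_loss:.2e}")
```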

  20. Atherosclerotic renal artery stenosis: epidemiology, cardiovascular outcomes, and clinical prediction rules.

    PubMed

    Zoccali, Carmine; Mallamaci, Francesca; Finocchiaro, Pietro

    2002-11-01

    Atherosclerotic renal artery stenosis is the most common primary disease of the renal arteries, and it is associated with two major clinical syndromes, ischemic renal disease and hypertension. The prevalence of this disease in the population is undefined because there is no simple and reliable test that can be applied on a large scale. Renal artery involvement in patients with coronary heart disease and/or heart failure is frequent, and it may influence cardiovascular outcomes and survival in these patients. Suspecting renal arterial stenosis in patients with recurrent episodes of pulmonary edema is justified by observations showing that about one third of elderly patients with heart failure display atherosclerotic renal disease. Whether interventions aimed at restoring arterial patency may reduce the high mortality in patients with heart failure is still unclear because, to date, no prospective study has been carried out in these patients. Increased awareness of the need for cost containment has renewed the interest in clinical cues for suspecting renovascular hypertension. In this regard, the DRASTIC study constitutes an important attempt at validating clinical prediction rules. In this study, a clinical rule was derived that predicted renal artery stenosis as efficiently as renal scintigraphy (sensitivity: clinical rule, 65% versus scintigraphy, 72%; specificity: 87% versus 92%). When tested in a systematic and quantitative manner, clinical findings can perform as accurately as more complex tests in the detection of renal artery stenosis.

  1. Validity and reliability of the Hexoskin® wearable biometric vest during maximal aerobic power testing in elite cyclists.

    PubMed

    Elliot, Catherine A; Hamlin, Michael J; Lizamore, Catherine A

    2017-07-28

    The purpose of this study was to investigate the validity and reliability of the Hexoskin® vest for measuring respiration and heart rate (HR) in elite cyclists during a progressive test to exhaustion. Ten male elite cyclists (age 28.8 ± 12.5 yr, height 179.3 ± 6.0 cm, weight 73.2 ± 9.1 kg, V̇O2max 60.7 ± 7.8 mL·kg⁻¹·min⁻¹; mean ± SD) conducted a maximal aerobic cycle ergometer test using a ramped protocol (starting at 100 W with 25 W increments each minute to failure) on two separate occasions over a 3-4 day period. Compared to the criterion measure (Metamax 3B), the Hexoskin® vest showed mainly small typical errors (1.3-6.2%) for HR and breathing frequency (f), but larger typical errors (9.5-19.6%) for minute ventilation (V̇E) during the progressive test to exhaustion. The typical error indicating the reliability of the Hexoskin® vest at moderate intensity exercise between tests was small for HR (2.6-2.9%) and f (2.5-3.2%) but slightly larger for V̇E (5.3-7.9%). We conclude that the Hexoskin® vest is sufficiently valid and reliable for measurements of HR and f in elite athletes during high intensity cycling, but the calculated V̇E value the Hexoskin® vest produces during such exercise should be used with caution due to the lower validity and reliability of this variable.
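
    The "typical error" statistic quoted above is commonly computed as the standard deviation of the paired differences divided by the square root of two, expressed as a percentage of the mean; a minimal sketch with made-up heart-rate pairs (not the study's data):

```python
import numpy as np

# Typical error of measurement between two devices, as a percentage of the mean.
# The readings below are made-up heart-rate pairs, not data from the cited study.
hexoskin = np.array([142.0, 151.0, 160.0, 168.0, 175.0, 181.0])
criterion = np.array([140.0, 150.0, 162.0, 167.0, 177.0, 180.0])

diff = hexoskin - criterion
typical_error = np.std(diff, ddof=1) / np.sqrt(2)
typical_error_pct = 100 * typical_error / np.mean((hexoskin + criterion) / 2)
print(f"typical error = {typical_error:.2f} bpm ({typical_error_pct:.1f}%)")
```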

  2. Evaluating the Medical Symptom Validity Test (MSVT) in a Sample of Veterans Between the Ages of 18 to 64.

    PubMed

    Reslan, Summar; Axelrod, Bradley N

    2017-01-01

    The purpose of the current study was to compare three potential profiles of the Medical Symptom Validity Test (MSVT; Pass, Genuine Memory Impairment Profile [GMIP], and Fail) on other freestanding and embedded performance validity tests (PVTs). Notably, a quantitatively computed version of the GMIP was utilized in this investigation. Data obtained from veterans referred for a neuropsychological evaluation in a metropolitan Veterans Affairs medical center were included (N = 494). Individuals aged 65 and older were excluded to avoid including individuals with dementia in this investigation. The sample revealed 222 (45%) in the Pass group. Of the 272 who failed the easy subtests of the MSVT, 221 (81%) met quantitative criteria for the GMIP and 51 (19%) were classified as Fail. The Pass group failed fewer freestanding and embedded PVTs and obtained higher raw scores on all PVTs than both GMIP and Fail groups. The differences in performances of the GMIP and Fail groups were minimal. Specifically, GMIP protocols failed fewer freestanding PVTs than the Fail group; failure on embedded PVTs did not differ between GMIP and Fail. The MSVT GMIP incorporates the presence of clinical correlates of disability to assist with this distinction, but future research should consider performances on other freestanding measures of performance validity to differentiate cognitive impairment from invalidity.

  3. Translation and linguistic validation of the Composite Autonomic Symptom Score COMPASS 31.

    PubMed

    Pierangeli, Giulia; Turrini, Alessandra; Giannini, Giulia; Del Sorbo, Francesca; Calandra-Buonaura, Giovanna; Guaraldi, Pietro; Bacchi Reggiani, Maria Letizia; Cortelli, Pietro

    2015-10-01

    The aim of our study was to translate and linguistically validate the Composite Autonomic Symptom Score COMPASS 31. COMPASS 31 is a self-assessment instrument including 31 items assessing six domains of autonomic function: orthostatic intolerance, vasomotor, secretomotor, gastrointestinal, bladder, and pupillomotor functions. The questionnaire was created by the Autonomic group of the Mayo Clinic from two previous versions: the Autonomic Symptom Profile (ASP), composed of 169 items, and the subsequent COMPASS, with 72 items selected from the ASP. We translated the questionnaire by means of a standardized forward and back-translation procedure. Thirty-six subjects (25 patients with autonomic failure of different aetiologies and 11 healthy controls) filled in the COMPASS 31 twice, 4 ± 1 weeks apart, once in Italian and once in English, in a randomized order. Test-retest analysis showed a significant correlation between the Italian and English versions for the total score. The evaluation of single domains, by means of Pearson correlation when applicable or by means of the Spearman test, showed a significant correlation between the English and Italian COMPASS 31 versions for all clinical domains except the vasomotor domain, owing to a lack of scoring in that domain. The comparison between the patients with autonomic failure and the healthy control group showed significantly higher total scores in patients with respect to controls, confirming the high sensitivity of COMPASS 31 in revealing autonomic symptoms.

  4. Development and validation of rear impact computer simulation model of an adult manual transit wheelchair with a seated occupant.

    PubMed

    Salipur, Zdravko; Bertocci, Gina

    2010-01-01

    It has been shown that ANSI WC19 transit wheelchairs that are crashworthy in frontal impact exhibit catastrophic failures in rear impact and may not be able to provide stable seating support and thus occupant protection for the wheelchair occupant. Thus far only limited sled test and computer simulation data have been available to study rear impact wheelchair safety. Computer modeling can be used as an economic and comprehensive tool to gain critical knowledge regarding wheelchair integrity and occupant safety. This study describes the development and validation of a computer model simulating an adult wheelchair-seated occupant subjected to a rear impact event. The model was developed in MADYMO and validated rigorously using the results of three similar sled tests conducted to specifications provided in the draft ISO/TC 173 standard. Outcomes from the model can provide critical wheelchair loading information to wheelchair and tiedown manufacturers, resulting in safer wheelchair designs for rear impact conditions. (c) 2009 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Development and validation of a prognostic score to predict mortality in patients with acute-on-chronic liver failure.

    PubMed

    Jalan, Rajiv; Saliba, Faouzi; Pavesi, Marco; Amoros, Alex; Moreau, Richard; Ginès, Pere; Levesque, Eric; Durand, Francois; Angeli, Paolo; Caraceni, Paolo; Hopf, Corinna; Alessandria, Carlo; Rodriguez, Ezequiel; Solis-Muñoz, Pablo; Laleman, Wim; Trebicka, Jonel; Zeuzem, Stefan; Gustot, Thierry; Mookerjee, Rajeshwar; Elkrief, Laure; Soriano, German; Cordoba, Joan; Morando, Filippo; Gerbes, Alexander; Agarwal, Banwari; Samuel, Didier; Bernardi, Mauro; Arroyo, Vicente

    2014-11-01

    Acute-on-chronic liver failure (ACLF) is a frequent syndrome (30% prevalence), characterized by acute decompensation of cirrhosis, organ failure(s) and high short-term mortality. This study develops and validates a specific prognostic score for ACLF patients. Data from 1349 patients included in the CANONIC study were used. First, a simplified organ function scoring system (CLIF Consortium Organ Failure score, CLIF-C OFs) was developed to diagnose ACLF using data from all patients. Subsequently, in 275 patients with ACLF, CLIF-C OFs and two other independent predictors of mortality (age and white blood cell count) were combined to develop a specific prognostic score for ACLF (CLIF Consortium ACLF score [CLIF-C ACLFs]). A concordance index (C-index) was used to compare the discrimination abilities of CLIF-C ACLF, MELD, MELD-sodium (MELD-Na), and Child-Pugh (CPs) scores. The CLIF-C ACLFs was validated in an external cohort and assessed for sequential use. The CLIF-C ACLFs showed a significantly higher predictive accuracy than MELDs, MELD-Nas, and CPs, reducing (19-28%) the corresponding prediction error rates at all main time points after ACLF diagnosis (28, 90, 180, and 365 days) in both the CANONIC and the external validation cohort. CLIF-C ACLFs computed at 48 h, 3-7 days, and 8-15 days after ACLF diagnosis predicted the 28-day mortality significantly better than at diagnosis. The CLIF-C ACLFs at ACLF diagnosis is superior to the MELDs and MELD-Nas in predicting mortality. The CLIF-C ACLFs is a clinically relevant, validated scoring system that can be used sequentially to stratify the risk of mortality in ACLF patients. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
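
    The record names the ingredients of the score (the CLIF-C organ failure score, age, and white-cell count) but not the fitted coefficients. The sketch below only illustrates how such a composite is typically assembled from those three inputs; the weights are placeholders and are explicitly not the published CLIF-C ACLF coefficients.

```python
import math

def composite_aclf_like_score(of_score, age_years, wbc_10e9_per_l,
                              w_of=0.3, w_age=0.04, w_wbc=0.6, offset=-2.0):
    """Generic ACLF-style composite: a weighted sum of an organ-failure score,
    age, and log white-cell count, rescaled to a convenient range. The weights
    here are placeholders, NOT the published CLIF-C ACLF coefficients."""
    raw = (w_of * of_score + w_age * age_years
           + w_wbc * math.log(wbc_10e9_per_l) + offset)
    return 10.0 * raw

print(composite_aclf_like_score(of_score=10, age_years=58, wbc_10e9_per_l=14.2))
```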

  6. Status of the NASA's Evolutionary Xenon Thruster (NEXT) Long-Duration Test After 30,352 Hours of Operation

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.

    2010-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is tasked with significantly improving and extending the capabilities of the current state-of-the-art NSTAR thruster. The service life capability of the NEXT ion thruster is being assessed by thruster wear testing and life-modeling of critical thruster components, such as the ion optics and cathodes. The NEXT Long-Duration Test (LDT) was initiated to validate and qualify the NEXT thruster propellant throughput capability. The NEXT thruster completed the primary goal of the LDT, namely to demonstrate the project qualification throughput of 450 kg by the end of calendar year 2009. The NEXT LDT has demonstrated 30,352 hr of operation and processed 490 kg of xenon throughput, surpassing the NSTAR Extended Life Test hours demonstrated and more than double the throughput demonstrated by the NSTAR flight-spare. Thruster performance changes have been consistent with a priori predictions. Thruster erosion has been minimal and consistent with the thruster service life assessment, which predicts the first failure mode at greater than 750 kg throughput. The life-limiting failure mode for NEXT is predicted to be loss of structural integrity of the accelerator grid due to erosion by charge-exchange ions.

  7. NASA's Evolutionary Xenon Thruster (NEXT) Component Verification Testing

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.; Pinero, Luis R.; Sovey, James S.

    2009-01-01

    Component testing is a critical facet of the comprehensive thruster life validation strategy devised by NASA's Evolutionary Xenon Thruster (NEXT) program. Component testing to date has consisted of long-duration high voltage propellant isolator and high-cycle heater life validation testing. The high voltage propellant isolator, a heritage design, will be operated under different environmental conditions in the NEXT ion thruster, requiring verification testing. The life test of two NEXT isolators was initiated at comparable voltage and pressure conditions but at a higher temperature than measured for the NEXT prototype-model thruster. To date the NEXT isolators have accumulated 18,300 h of operation. Measurements indicate a negligible increase in leakage current over the testing duration to date. NEXT 1/2 in. heaters, whose manufacturing and control processes have heritage, were selected for verification testing based upon the change in physical dimensions resulting in a higher operating voltage as well as potential differences in thermal environment. The heater fabrication processes, developed for the International Space Station (ISS) plasma contactor hollow cathode assembly, were utilized with modification of heater dimensions to accommodate a larger cathode. Cyclic testing of five 1/2 in. diameter heaters was initiated to validate these modified fabrication processes while retaining high-reliability heaters. To date, two of the heaters have been cycled to 10,000 cycles and suspended to preserve hardware. Three of the heaters have been cycled to failure, giving a B10 life of 12,615 cycles, approximately 6,000 more cycles than the established qualification B10 life of the ISS plasma contactor heaters.
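
    A B10 life such as the one quoted above is conventionally read off a fitted Weibull distribution as the cycle count by which 10% of units are expected to have failed. A minimal sketch assuming three hypothetical cycles-to-failure values and a maximum-likelihood Weibull fit (the actual heater failure cycles are not given in the record):

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical cycles-to-failure for three heaters (illustrative values only).
failures = np.array([14500.0, 17800.0, 21300.0])

# Maximum-likelihood Weibull fit with the location parameter fixed at zero.
shape, _, scale = weibull_min.fit(failures, floc=0)

# B10 life: cycles at which 10% of the population is expected to have failed.
b10 = scale * (-np.log(0.9)) ** (1.0 / shape)
print(f"Weibull shape={shape:.2f}, scale={scale:.0f}, B10 life={b10:.0f} cycles")
```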

  8. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes that could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to prevent failures from developing. The Hazard Score was calculated for each failure mode, those with a score ≥ 8 were chosen, and failure modes with a Severity Index of 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, the suggested preventive measure, and the strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress notes or treatment order (paper); prescription for an incorrect patient; transcription error by nursing staff or pharmacist; and error in preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and their causes. It has allowed us to analyse the effects on the safety of the process, and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.

  9. Rodent heart failure models do not reflect the human circulating microRNA signature in heart failure.

    PubMed

    Vegter, Eline L; Ovchinnikova, Ekaterina S; Silljé, Herman H W; Meems, Laura M G; van der Pol, Atze; van der Velde, A Rogier; Berezikov, Eugene; Voors, Adriaan A; de Boer, Rudolf A; van der Meer, Peter

    2017-01-01

    We recently identified a set of plasma microRNAs (miRNAs) that are downregulated in patients with heart failure in comparison with control subjects. To better understand their meaning and function, we sought to validate these circulating miRNAs in 3 different well-established rat and mouse heart failure models, and correlated the miRNAs to parameters of cardiac function. The previously identified let-7i-5p, miR-16-5p, miR-18a-5p, miR-26b-5p, miR-27a-3p, miR-30e-5p, miR-199a-3p, miR-223-3p, miR-423-3p, miR-423-5p and miR-652-3p were measured by means of quantitative real time polymerase chain reaction (qRT-PCR) in plasma samples of 8 homozygous TGR(mREN2)27 (Ren2) transgenic rats and 8 (control) Sprague-Dawley rats, 6 mice with angiotensin II-induced heart failure (AngII) and 6 control mice, and 8 mice with ischemic heart failure and 6 controls. Circulating miRNA levels were compared between the heart failure animals and healthy controls. Ren2 rats, AngII mice and mice with ischemic heart failure showed clear signs of heart failure, exemplified by increased left ventricular and lung weights, elevated end-diastolic left ventricular pressures, increased expression of cardiac stress markers and reduced left ventricular ejection fraction. All miRNAs were detectable in plasma from rats and mice. No significant differences were observed between the circulating miRNAs in heart failure animals when compared to the healthy controls (all P>0.05) and no robust associations with cardiac function could be found. The previous observation that miRNAs circulate in lower levels in human patients with heart failure could not be validated in well-established rat and mouse heart failure models. These results question the translation of data on human circulating miRNA levels to experimental models, and vice versa the validity of experimental miRNA data for human heart failure.

  10. Perforation of thin aluminum alloy plates by blunt projectiles: An experimental and numerical investigation

    NASA Astrophysics Data System (ADS)

    Wei, G.; Zhang, W.

    2014-04-01

    Reducing armor weight has become a research focus for armored materials. Due to its high strength-to-density ratio, aluminum alloy has become a potential light armored material. In this study, both lab-scale ballistic tests and finite element simulations were used to examine the ballistic resistance of aluminum alloy targets. Blunt high-strength steel projectiles of 12.7 mm diameter were launched by a light gas gun against 3.3 mm thick 7A04 aluminum alloy plates at velocities of 90-170 m/s. The ballistic limit velocity was obtained. Plugging failure and obvious structural deformation of the targets were observed. Corresponding 2D finite element simulations were conducted with ABAQUS/EXPLICIT combined with material performance testing. The validity of the numerical simulations was verified by comparison with the experimental results. Detailed analysis of the failure modes and characteristics of the targets was carried out, combined with the numerical simulations, to reveal the target damage mechanism.

  11. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but these only make the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
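
    The record does not spell out the adaptive laws used. The sketch below shows the simplest style of model-reference adaptive augmentation it alludes to, a scalar Lyapunov/gradient-rule controller on a first-order plant; every numerical value and the plant itself are illustrative assumptions, not the flight experiment's design.

```python
import numpy as np

# Minimal scalar model-reference adaptive controller (Lyapunov-rule form).
# Plant:      xdot  = a*x + b*u        (a, b unknown to the controller)
# Reference:  xmdot = am*xm + bm*r     (desired closed-loop behaviour)
# Control:    u = theta_x*x + theta_r*r, with gradient-type update laws.
# All numerical values are illustrative, not from the cited flight experiment.
a, b   = 1.0, 3.0
am, bm = -4.0, 4.0
gamma  = 2.0                      # adaptation gain
dt, T  = 0.001, 20.0

x, xm = 0.0, 0.0
theta_x, theta_r = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0           # square-wave reference
    e = x - xm                                      # tracking error
    u = theta_x * x + theta_r * r
    # Euler integration of the plant, reference model, and adaptive laws
    x  += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    theta_x += dt * (-gamma * e * x * np.sign(b))   # sign(b) assumed known
    theta_r += dt * (-gamma * e * r * np.sign(b))

print(f"final gains: theta_x={theta_x:.2f} (ideal {(am - a) / b:.2f}), "
      f"theta_r={theta_r:.2f} (ideal {bm / b:.2f})")
```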

  12. Porting Initiation and Failure to Linked Cheetah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitello, P; Souers, P C

    2007-07-18

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size, but does not work for other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At a high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate-sticks, corner-turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration.
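
    The qualitative behaviour described for the rate model, no reaction below a pressure threshold, a slow turn-on that produces initiation, a rapid jump over a narrow pressure band, and a constant rate at detonation pressures, can be illustrated with a simple piecewise function; the thresholds and rates below are made up and are not the calibrated model from the report.

```python
def reaction_rate(pressure_gpa):
    """Illustrative piecewise pressure-dependent burn rate (1/us); the
    thresholds and rates are made up, not the calibrated Tarantula values."""
    p_on, p_jump_lo, p_jump_hi = 2.0, 9.0, 11.0   # GPa
    slow_rate, fast_rate = 0.05, 5.0              # 1/us
    if pressure_gpa < p_on:
        return 0.0                                 # no reaction below threshold
    if pressure_gpa < p_jump_lo:                   # slow turn-on: initiation regime
        return slow_rate * (pressure_gpa - p_on) / (p_jump_lo - p_on)
    if pressure_gpa < p_jump_hi:                   # rapid jump over a narrow band
        frac = (pressure_gpa - p_jump_lo) / (p_jump_hi - p_jump_lo)
        return slow_rate + frac * (fast_rate - slow_rate)
    return fast_rate                               # constant at detonation pressures

for p in (1.0, 5.0, 10.0, 20.0):
    print(f"P = {p:4.1f} GPa -> rate = {reaction_rate(p):.3f} 1/us")
```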

  13. Piloted simulation study of the effects of an automated trim system on flight characteristics of a light twin-engine airplane with one engine inoperative

    NASA Technical Reports Server (NTRS)

    Stewart, E. C.; Brown, P. W.; Yenni, K. R.

    1986-01-01

    A simulation study was conducted to investigate the piloting problems associated with failure of an engine on a generic light twin-engine airplane. A primary piloting problem for a light twin-engine airplane after an engine failure is maintaining precise control of the airplane in the presence of large steady control forces. To address this problem, a simulated automatic trim system which drives the trim tabs as an open-loop function of propeller slipstream measurements was developed. The simulated automatic trim system was found to greatly increase the controllability in asymmetric powered flight without having to resort to complex control laws or an irreversible control system. However, the trim-tab control rates needed to produce the dramatic increase in controllability may require special design consideration for automatic trim system failures. Limited measurements obtained in full-scale flight tests confirmed the fundamental validity of the proposed control law.

  14. Large Area Nondestructive Evaluation of a Fatigue Loaded Composite Structure

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Burke, Eric R.; Horne, Michael R.; Madaras, Eric I.

    2016-01-01

    Large area nondestructive evaluation (NDE) inspections are required for fatigue testing of composite structures to track damage initiation and growth. Of particular interest is the progression of damage leading to ultimate failure to validate damage progression models. In this work, passive thermography and acoustic emission NDE were used to track damage growth up to failure of a composite three-stringer panel. Fourteen acoustic emission sensors were placed on the composite panel. The signals from the array were acquired simultaneously and allowed for acoustic emission location. In addition, real time thermal data of the composite structure were acquired during loading. Details are presented on the mapping of the acoustic emission locations directly onto the thermal imagery to confirm areas of damage growth leading to ultimate failure. This required synchronizing the acoustic emission and thermal data with the applied loading. In addition, the processing of the thermal imagery, which included contrast enhancement, removal of optical barrel distortion, and correction of angular rotation before mapping the acoustic event locations, is discussed.

  15. Efficient selective screening for heart failure in elderly men and women from the community: A diagnostic individual participant data meta-analysis

    PubMed Central

    Kievit, Rogier F; Hoes, Arno W; Bots, Michiel L; van Riet, Evelien ES; van Mourik, Yvonne; Bertens, Loes CM; Boonman-de Winter, Leandra JM; den Ruijter, Hester M; Rutten, Frans H

    2018-01-01

    Background: Prevalence of undetected heart failure in older individuals is high in the community, with patients being at increased risk of morbidity and mortality due to the chronic and progressive nature of this complex syndrome. An essential, yet currently unavailable, strategy to pre-select candidates eligible for echocardiography to confirm or exclude heart failure would identify patients earlier, enable targeted interventions and prevent disease progression. The aim of this study was therefore to develop and validate such a model that can be implemented clinically. Methods and results: Individual patient data from four primary care screening studies were analysed. From 1941 participants >60 years old, 462 were diagnosed with heart failure, according to criteria of the European Society of Cardiology heart failure guidelines. Prediction models were developed in each cohort followed by cross-validation, omitting each of the four cohorts in turn. The model consisted of five independent predictors: age, history of ischaemic heart disease, exercise-related shortness of breath, body mass index and a laterally displaced/broadened apex beat, with no significant interaction with sex. The c-statistic ranged from 0.70 (95% confidence interval (CI) 0.64–0.76) to 0.82 (95% CI 0.78–0.87) at cross-validation and the calibration was reasonable with Observed/Expected ratios ranging from 0.86 to 1.15. The clinical model improved with the addition of N-terminal pro B-type natriuretic peptide with the c-statistic increasing from 0.76 (95% CI 0.70–0.81) to 0.89 (95% CI 0.86–0.92) at cross-validation. Conclusion: Easily obtainable patient characteristics can select older men and women from the community who are candidates for echocardiography to confirm or refute heart failure. PMID:29327942
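
    A minimal sketch of the cross-validation scheme described above (leave one cohort out in turn), using synthetic data and hypothetical variable names rather than the pooled screening studies:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import LeaveOneGroupOut

      rng = np.random.default_rng(42)
      n = 1941
      X = np.column_stack([
          rng.integers(60, 90, n),        # age
          rng.binomial(1, 0.3, n),        # history of ischaemic heart disease
          rng.binomial(1, 0.4, n),        # exercise-related shortness of breath
          rng.normal(27, 4, n),           # body mass index
          rng.binomial(1, 0.1, n),        # displaced/broadened apex beat
      ])
      y = rng.binomial(1, 1 / (1 + np.exp(-(0.05 * (X[:, 0] - 70) + X[:, 1] + X[:, 2] - 2))))
      cohort = rng.integers(0, 4, n)      # which of the four screening studies

      # Develop on three cohorts, validate on the held-out cohort, repeated four times
      for train, test in LeaveOneGroupOut().split(X, y, groups=cohort):
          model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
          auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
          print(f"held-out cohort {cohort[test][0]}: c-statistic = {auc:.2f}")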

  16. Efficient selective screening for heart failure in elderly men and women from the community: A diagnostic individual participant data meta-analysis.

    PubMed

    Kievit, Rogier F; Gohar, Aisha; Hoes, Arno W; Bots, Michiel L; van Riet, Evelien Es; van Mourik, Yvonne; Bertens, Loes Cm; Boonman-de Winter, Leandra Jm; den Ruijter, Hester M; Rutten, Frans H

    2018-03-01

    Background: Prevalence of undetected heart failure in older individuals is high in the community, with patients being at increased risk of morbidity and mortality due to the chronic and progressive nature of this complex syndrome. An essential, yet currently unavailable, strategy to pre-select candidates eligible for echocardiography to confirm or exclude heart failure would identify patients earlier, enable targeted interventions and prevent disease progression. The aim of this study was therefore to develop and validate such a model that can be implemented clinically. Methods and results: Individual patient data from four primary care screening studies were analysed. From 1941 participants >60 years old, 462 were diagnosed with heart failure, according to criteria of the European Society of Cardiology heart failure guidelines. Prediction models were developed in each cohort followed by cross-validation, omitting each of the four cohorts in turn. The model consisted of five independent predictors: age, history of ischaemic heart disease, exercise-related shortness of breath, body mass index and a laterally displaced/broadened apex beat, with no significant interaction with sex. The c-statistic ranged from 0.70 (95% confidence interval (CI) 0.64-0.76) to 0.82 (95% CI 0.78-0.87) at cross-validation and the calibration was reasonable with Observed/Expected ratios ranging from 0.86 to 1.15. The clinical model improved with the addition of N-terminal pro B-type natriuretic peptide with the c-statistic increasing from 0.76 (95% CI 0.70-0.81) to 0.89 (95% CI 0.86-0.92) at cross-validation. Conclusion: Easily obtainable patient characteristics can select older men and women from the community who are candidates for echocardiography to confirm or refute heart failure.

  17. Full-Scaled Advanced Systems Testbed: Ensuring Success of Adaptive Control Research Through Project Lifecycle Risk Mitigation

    NASA Technical Reports Server (NTRS)

    Pavlock, Kate M.

    2011-01-01

    The National Aeronautics and Space Administration's Dryden Flight Research Center completed flight testing of adaptive controls research on the Full-Scale Advanced Systems Testbed (FAST) in January of 2011. The research addressed technical challenges involved with reducing risk in an increasingly complex and dynamic national airspace. Specific challenges lie with the development of validated, multidisciplinary, integrated aircraft control design tools and techniques to enable safe flight in the presence of adverse conditions such as structural damage, control surface failures, or aerodynamic upsets. The testbed is an F-18 aircraft serving as a full-scale vehicle to test and validate adaptive flight control research and lends significant confidence to the development, maturation, and acceptance process of incorporating adaptive control laws into follow-on research and the operational environment. The experimental systems integrated into FAST were designed to allow for flexible yet safe flight test evaluation and validation of modern adaptive control technologies and revolve around two major hardware upgrades: the modification of Production Support Flight Control Computers (PSFCC) and integration of two fourth-generation Airborne Research Test Systems (ARTS). Post-hardware integration verification and validation provided the foundation for safe flight test of Nonlinear Dynamic Inversion and Model Reference Aircraft Control adaptive control law experiments. To ensure flight test success in terms of cost, schedule, and test results, emphasis on risk management was incorporated into early stages of design and flight test planning and continued through the execution of each flight test mission. Specific consideration was made to incorporate safety features within the hardware and software to alleviate user demands, as well as into test processes and training to reduce human-factor impacts on safe and successful flight test. This paper describes the research configuration, experiment functionality, overall risk mitigation, flight test approach and results, and lessons learned from the adaptive controls research on the Full-Scale Advanced Systems Testbed.

  18. A Unified Constitutive Model for Subglacial Till, Part I: The Disturbed State Concept

    NASA Astrophysics Data System (ADS)

    Jenson, J. W.; Desai, C. S.; Clark, P. U.; Contractor, D. N.; Sane, S. M.; Carlson, A. E.

    2006-12-01

    Classical plasticity models such as Mohr-Coulomb may not adequately represent the full range of possible motion and failure in tills underlying ice sheets. Such models assume that deformations are initially elastic, and that when a peak or failure stress level is reached the system experiences sudden failure, after which the stress remains constant and the deformations can tend to infinite magnitudes. However, theory suggests that the actual behavior of deforming materials, including granular materials such as glacial till, can involve plastic or irreversible strains almost from the beginning, in which localized zones of microcracking and "failure" can be distributed over the material element. As the loading increases, and with associated plastic and creep deformations, the distributed failure zones coalesce. When the extent of such coalesced zones reaches critical values of stresses and strains, the critical condition (failure) can occur in the till, which would cause associated movements of the ice sheet. Failure or collapse then may occur at much larger strain levels. Classical models (e.g., Mohr-Coulomb) may therefore not be able to fully and realistically characterize deformation behavior and the gradual developments of localized failures tending to the global failure and movements. We present and propose the application of the Disturbed State Concept (DSC), a unified model that incorporates the actual pre- and post-failure behavior, for characterizing the behavior of subglacial tills. In this presentation (Part I), we describe the DSC and propose its application to subglacial till. Part II (Desai et al.) describes our application of the DSC with laboratory testing, model calibration, and validations to evaluate the mechanical properties of two regionally significant Pleistocene tills.

  19. Structural Testing of the Blade Reliability Collaborative Effect of Defect Wind Turbine Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desmond, M.; Hughes, S.; Paquette, J.

    Two 8.3-meter (m) wind turbine blades intentionally constructed with manufacturing flaws were tested to failure at the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL) south of Boulder, Colorado. Two blades were tested; one blade was manufactured with a fiberglass spar cap and the second blade was manufactured with a carbon fiber spar cap. Test loading primarily consisted of flap fatigue loading of the blades, with one quasi-static ultimate load case applied to the carbon fiber spar cap blade. Results of the test program were intended to provide the full-scale test data needed for validation of model and coupon test results of the effect of defects in wind turbine blade composite materials. Testing was part of the Blade Reliability Collaborative (BRC) led by Sandia National Laboratories (SNL). The BRC seeks to develop a deeper understanding of the causes of unexpected blade failures (Paquette 2012), and to develop methods to enable blades to survive to their expected operational lifetime. Recent work in the BRC includes examining and characterizing flaws and defects known to exist in wind turbine blades from manufacturing processes (Riddle et al. 2011). Recent results from reliability databases show that wind turbine rotor blades continue to be a leading contributor to turbine downtime (Paquette 2012).

  20. Measuring the effect of inter-study variability on estimating prediction error.

    PubMed

    Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T; Wang, Yuliang; Geman, Donald; Price, Nathan D

    2014-01-01

    The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in "batch-effects") and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Here we quantify the impact of these combined "study-effects" on a disease signature's predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when "sufficient" diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings.
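
    A hedged sketch contrasting the two validation schemes compared above, with synthetic data standing in for the expression studies (sample sizes, effect sizes, and the form of the study effect are assumptions):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import StratifiedKFold, LeaveOneGroupOut, cross_val_score

      rng = np.random.default_rng(3)
      n_studies, n_per, n_genes = 8, 60, 200
      study = np.repeat(np.arange(n_studies), n_per)
      y = rng.binomial(1, 0.5, n_studies * n_per)
      X = rng.normal(size=(n_studies * n_per, n_genes))
      X[:, 0] += 1.0 * y                            # phenotype signal in one feature
      X += rng.normal(size=(n_studies, 1))[study]   # additive "study effect" shared within a study

      clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
      # RCV: random folds mix samples from all studies in training and testing
      rcv = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0),
                            scoring="roc_auc")
      # ISV: an entire study is held out for testing in each fold
      isv = cross_val_score(clf, X, y, groups=study, cv=LeaveOneGroupOut(), scoring="roc_auc")
      print(f"RCV mean AUC = {rcv.mean():.2f}, ISV mean AUC = {isv.mean():.2f}")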

  1. A new instrument to measure quality of life of heart failure family caregivers.

    PubMed

    Nauser, Julie A; Bakas, Tamilyn; Welch, Janet L

    2011-01-01

    Family caregivers of heart failure (HF) patients experience poor physical and mental health leading to poor quality of life. Although several quality-of-life measures exist, they are often too generic to capture the unique experience of this population. The purpose of this study was to evaluate the psychometric properties of the Family Caregiver Quality of Life (FAMQOL) Scale that was designed to assess the physical, psychological, social, and spiritual dimensions of quality of life among caregivers of HF patients. Psychometric testing of the FAMQOL with 100 HF family caregivers was conducted using item analysis, Cronbach α, intraclass correlation, factor analysis, and hierarchical multiple regression guided by a conceptual model. Caregivers were predominately female (89%), white, (73%), and spouses (62%). Evidence of internal consistency reliability (α=.89) was provided for the FAMQOL, with item-total correlations of 0.39 to 0.74. Two-week test-retest reliability was supported by an intraclass correlation coefficient of 0.91. Using a 1-factor solution and principal axis factoring, loadings ranged from 0.31 to 0.78, with 41% of the variance explained by the first factor (eigenvalue=6.5). With hierarchical multiple regression, 56% of the FAMQOL variance was explained by model constructs (F8,91=16.56, P<.001). Criterion-related validity was supported by correlations with SF-36 General (r=0.45, P<.001) and Mental (r=0.59, P<.001) Health subscales and Bakas Caregiving Outcomes Scale (r=0.73, P<.001). Evidence of internal and test-retest reliability and construct and criterion validity was provided for physical, psychological, and social well-being subscales. The 16-item FAMQOL is a brief, easy-to-administer instrument that has evidence of reliability and validity in HF family caregivers. Physical, psychological, and social well-being can be measured with 4-item subscales. The FAMQOL scale could serve as a valuable measure in research, as well as an assessment tool to identify caregivers in need of intervention.
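
    As a small illustrative sketch (not the study's code; the simulated responses are assumptions), the internal-consistency statistic reported above can be computed directly from an items-by-respondents matrix:

      import numpy as np

      def cronbach_alpha(items: np.ndarray) -> float:
          """items: 2-D array, rows = respondents, columns = scale items."""
          k = items.shape[1]                          # number of items (16 for the FAMQOL)
          item_vars = items.var(axis=0, ddof=1)       # variance of each item
          total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
          return k / (k - 1) * (1 - item_vars.sum() / total_var)

      # Example with simulated 5-point responses from 100 caregivers; real scale items
      # would be positively correlated, which is what drives alpha upward.
      rng = np.random.default_rng(0)
      fake_responses = rng.integers(1, 6, size=(100, 16))
      print(round(cronbach_alpha(fake_responses), 2))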

  2. Predicting Early Mortality After Hip Fracture Surgery: The Hip Fracture Estimator of Mortality Amsterdam.

    PubMed

    Karres, Julian; Kieviet, Noera; Eerenberg, Jan-Peter; Vrouenraets, Bart C

    2018-01-01

    Early mortality after hip fracture surgery is high and preoperative risk assessment for the individual patient is challenging. A risk model could identify patients in need of more intensive perioperative care, provide insight in the prognosis, and allow for risk adjustment in audits. This study aimed to develop and validate a risk prediction model for 30-day mortality after hip fracture surgery: the Hip fracture Estimator of Mortality Amsterdam (HEMA). Data on 1050 consecutive patients undergoing hip fracture surgery between 2004 and 2010 were retrospectively collected and randomly split into a development cohort (746 patients) and validation cohort (304 patients). Logistic regression analysis was performed in the development cohort to determine risk factors for the HEMA. Discrimination and calibration were assessed in both cohorts using the area under the receiver operating characteristic curve (AUC), the Hosmer-Lemeshow goodness-of-fit test, and by stratification into low-, medium- and high-risk groups. Nine predictors for 30-day mortality were identified and used in the final model: age ≥85 years, in-hospital fracture, signs of malnutrition, myocardial infarction, congestive heart failure, current pneumonia, renal failure, malignancy, and serum urea >9 mmol/L. The HEMA showed good discrimination in the development cohort (AUC = 0.81) and the validation cohort (AUC = 0.79). The Hosmer-Lemeshow test indicated no lack of fit in either cohort (P > 0.05). The HEMA is based on preoperative variables and can be used to predict the risk of 30-day mortality after hip fracture surgery for the individual patient. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
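
    A hedged sketch of the kind of discrimination and calibration checks described above (synthetic data and a simple tertile-based observed-versus-expected comparison, a coarse analogue of the Hosmer-Lemeshow idea, not the HEMA dataset or analysis code):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 1050
      X = rng.normal(size=(n, 9))                # nine preoperative predictors (illustrative)
      y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2.5))))

      dev, val = slice(0, 746), slice(746, None)     # random development/validation split
      model = LogisticRegression(max_iter=1000).fit(X[dev], y[dev])
      p_val = model.predict_proba(X[val])[:, 1]
      print("validation AUC:", round(roc_auc_score(y[val], p_val), 2))

      # Calibration check: observed vs expected event rates in predicted-risk tertiles
      bins = np.quantile(p_val, [0, 1 / 3, 2 / 3, 1.0])
      groups = np.digitize(p_val, bins[1:-1])
      for g in range(3):
          mask = groups == g
          print(f"risk group {g}: observed={y[val][mask].mean():.2f} expected={p_val[mask].mean():.2f}")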

  3. An anisotropic elastoplastic constitutive formulation generalised for orthotropic materials

    NASA Astrophysics Data System (ADS)

    Mohd Nor, M. K.; Ma'at, N.; Ho, C. S.

    2018-03-01

    This paper presents a finite strain constitutive model to predict a complex elastoplastic deformation behaviour that involves very high pressures and shockwaves in orthotropic materials using an anisotropic Hill's yield criterion by means of the evolving structural tensors. The yield surface of this hyperelastic-plastic constitutive model is aligned uniquely within the principal stress space due to the combination of the Mandel stress tensor and a new generalised orthotropic pressure. The formulation is developed in the isoclinic configuration and allows for a unique treatment for elastic and plastic orthotropy. An isotropic hardening is adopted to define the evolution of plastic orthotropy. The important feature of the proposed hyperelastic-plastic constitutive model is the introduction of anisotropic effect in the Mie-Gruneisen equation of state (EOS). The formulation is further combined with the Grady spall failure model to predict spall failure in the materials. The proposed constitutive model is implemented as a new material model in the Lawrence Livermore National Laboratory (LLNL)-DYNA3D code of UTHM's version, named Material Type 92 (Mat92). The combination of the proposed stress tensor decomposition and the Mie-Gruneisen EOS requires some modifications in the code to reflect the formulation of the generalised orthotropic pressure. The validation approach is also presented in this paper for guidance purposes. The ψ tensor used to define the alignment of the adopted yield surface is first validated. This is continued with an internal validation of the elastic isotropic, elastic orthotropic and elastic-plastic orthotropic behaviour of the proposed formulation before a comparison against a range of plate impact test data at impact velocities of 234, 450 and 895 m/s is performed. A good agreement is obtained in each test.
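
    For context, the Mie-Gruneisen EOS referred to above is shown here in its textbook general form (not the paper's anisotropic extension), relating pressure to density and specific internal energy through a reference curve:

      p(\rho, e) = p_{\mathrm{ref}}(\rho) + \Gamma(\rho)\,\rho\,\bigl[e - e_{\mathrm{ref}}(\rho)\bigr]

    where p_ref and e_ref are the pressure and energy on a reference curve (typically the shock Hugoniot) and \Gamma is the Gruneisen parameter. The model above replaces the isotropic pressure entering this relation with the generalised orthotropic pressure.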

  4. Confidence in outcome estimates from systematic reviews used in informed consent.

    PubMed

    Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette

    2016-12-01

    Evidence-based dentistry now guides informed consent in which clinicians are obliged to provide patients with the most current, best evidence, or best estimates of outcomes, of regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet, clinicians require that the estimates provided by systematic reviews be verified as to their validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal oral health care treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Classification and regression tree analysis of acute-on-chronic hepatitis B liver failure: Seeing the forest for the trees.

    PubMed

    Shi, K-Q; Zhou, Y-Y; Yan, H-D; Li, H; Wu, F-L; Xie, Y-Y; Braddock, M; Lin, X-Y; Zheng, M-H

    2017-02-01

    At present, there is no ideal model for predicting the short-term outcome of patients with acute-on-chronic hepatitis B liver failure (ACHBLF). This study aimed to establish and validate a prognostic model by using classification and regression tree (CART) analysis. A total of 1047 patients from two separate medical centres with suspected ACHBLF were screened in the study and served as the derivation cohort and validation cohort, respectively. CART analysis was applied to predict the 3-month mortality of patients with ACHBLF. The accuracy of the CART model was tested using the area under the receiver operating characteristic curve, which was compared with the model for end-stage liver disease (MELD) score and a new logistic regression model. CART analysis identified four variables as prognostic factors of ACHBLF: total bilirubin, age, serum sodium and INR, and three distinct risk groups: low risk (4.2%), intermediate risk (30.2%-53.2%) and high risk (81.4%-96.9%). The new logistic regression model was constructed with four independent factors, including age, total bilirubin, serum sodium and prothrombin activity, by multivariate logistic regression analysis. The performance of the CART model (0.896) was similar to that of the logistic regression model (0.914, P=.382) and exceeded that of the MELD score (0.667, P<.001). The results were confirmed in the validation cohort. We have developed and validated a novel CART model superior to MELD for predicting three-month mortality of patients with ACHBLF. Thus, the CART model could facilitate medical decision-making and provide clinicians with a validated practical bedside tool for ACHBLF risk stratification. © 2016 John Wiley & Sons Ltd.
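
    A minimal sketch of the approach described above, assuming a data frame with the four identified predictors and a 3-month mortality label (all column names, distributions, and coefficients are invented for illustration, not the study's data):

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      n = 1047
      df = pd.DataFrame({
          "total_bilirubin": rng.gamma(4.0, 60.0, n),   # umol/L, illustrative
          "age": rng.integers(25, 75, n),
          "serum_sodium": rng.normal(135, 5, n),        # mmol/L
          "inr": rng.gamma(3.0, 0.6, n),
      })
      risk = 0.01 * df.total_bilirubin + 0.04 * df.age - 0.05 * df.serum_sodium + 0.8 * df.inr
      died_3m = (risk + rng.normal(0, 1, n) > np.quantile(risk, 0.6)).astype(int)

      # A shallow CART yields the kind of readable risk strata described above
      cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(df, died_3m)
      print(export_text(cart, feature_names=list(df.columns)))
      print("AUROC:", round(roc_auc_score(died_3m, cart.predict_proba(df)[:, 1]), 3))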

  6. Fatigue lifetime prediction of a reduced-diameter dental implant system: Numerical and experimental study.

    PubMed

    Duan, Yuanyuan; Gonzalez, Jorge A; Kulkarni, Pratim A; Nagy, William W; Griggs, Jason A

    2018-06-16

    To validate the fatigue lifetime of a reduced-diameter dental implant system predicted by three-dimensional finite element analysis (FEA) by testing physical implant specimens using an accelerated lifetime testing (ALT) strategy with the apparatus specified by ISO 14801. A commercially-available reduced-diameter titanium dental implant system (Straumann Standard Plus NN) was digitized using a micro-CT scanner. Axial slices were processed using an interactive medical image processing software (Mimics) to create 3D models. FEA analysis was performed in ABAQUS, and fatigue lifetime was predicted using fe-safe ® software. The same implant specimens (n=15) were tested at a frequency of 2Hz on load frames using apparatus specified by ISO 14801 and ALT. Multiple step-stress load profiles with various aggressiveness were used to improve testing efficiency. Fatigue lifetime statistics of physical specimens were estimated in a reliability analysis software (ALTA PRO). Fractured specimens were examined using SEM with fractographic technique to determine the failure mode. FEA predicted lifetime was within the 95% confidence interval of lifetime estimated by experimental results, which suggested that FEA prediction was accurate for this implant system. The highest probability of failure was located at the root of the implant body screw thread adjacent to the simulated bone level, which also agreed with the failure origin in physical specimens. Fatigue lifetime predictions based on finite element modeling could yield similar results in lieu of physical testing, allowing the use of virtual testing in the early stages of future research projects on implant fatigue. Copyright © 2018 The Academy of Dental Materials. Published by Elsevier Inc. All rights reserved.

  7. Developmental validation of a Cannabis sativa STR multiplex system for forensic analysis.

    PubMed

    Howard, Christopher; Gilmore, Simon; Robertson, James; Peakall, Rod

    2008-09-01

    A developmental validation study based on recommendations of the Scientific Working Group on DNA Analysis Methods (SWGDAM) was conducted on a multiplex system of 10 Cannabis sativa short tandem repeat loci. Amplification of the loci in four multiplex reactions was tested across DNA from dried root, stem, and leaf sources, and DNA from fresh, frozen, and dried leaf tissue with a template DNA range of 10.0-0.01 ng. The loci were amplified and scored consistently for all DNA sources when DNA template was in the range of 10.0-1.0 ng. Some allelic dropout and PCR failure occurred in reactions with lower template DNA amounts. Overall, amplification was best using 10.0 ng of template DNA from dried leaf tissue indicating that this is the optimal source material. Cross species amplification was observed in Humulus lupulus for three loci but there was no allelic overlap. This is the first study following SWGDAM validation guidelines to validate short tandem repeat markers for forensic use in plants.

  8. Utility of the response bias scale (RBS) and other MMPI-2 validity scales in predicting TOMM performance.

    PubMed

    Whitney, Kriscinda A; Davis, Jeremy J; Shepard, Polly H; Herman, Steven M

    2008-01-01

    The present study represents a replication and extension of the original Response Bias Scale (RBS) validation study. In addition to examining the relationship between the Test of Memory Malingering (TOMM), RBS, and several other well-researched Minnesota Multiphasic Personality Inventory 2 (MMPI-2) validity scales (i.e., F, Fb, Fp, and the Fake Bad Scale), the present study also included the recently developed Infrequency Post-Traumatic Stress Disorder Scale and the Henry-Heilbronner Index (HHI) of the MMPI-2. Findings from this retrospective data analysis (N=46) demonstrated the superiority of the RBS, and to a certain extent the HHI, over other MMPI-2 validity scales in predicting TOMM failure within the outpatient Veterans Affairs population. Results of the current study confirm the clinical utility of the RBS and suggest that, particularly if the MMPI-2 is an existing part of the neuropsychological assessment, examination of RBS scores is an efficient means of detecting negative response bias.

  9. Validation and Potential Mechanisms of Red Cell Distribution Width as a Prognostic Marker in Heart Failure

    PubMed Central

    ALLEN, LARRY A.; FELKER, G. MICHAEL; MEHRA, MANDEEP R.; CHIONG, JUN R.; DUNLAP, STEPHANIE H.; GHALI, JALAL K.; LENIHAN, DANIEL J.; OREN, RON M.; WAGONER, LYNNE E.; SCHWARTZ, TODD A.; ADAMS, KIRKWOOD F.

    2014-01-01

    Background: Adverse outcomes have recently been linked to elevated red cell distribution width (RDW) in heart failure. Our study sought to validate the prognostic value of RDW in heart failure and to explore the potential mechanisms underlying this association. Methods and Results: Data from the Study of Anemia in a Heart Failure Population (STAMINA-HFP) registry, a prospective, multicenter cohort of ambulatory patients with heart failure supported multivariable modeling to assess relationships between RDW and outcomes. The association between RDW and iron metabolism, inflammation, and neurohormonal activation was studied in a separate cohort of heart failure patients from the United Investigators to Evaluate Heart Failure (UNITE-HF) Biomarker registry. RDW was independently predictive of outcome (for each 1% increase in RDW, hazard ratio for mortality 1.06, 95% CI 1.01-1.12; hazard ratio for hospitalization or mortality 1.06; 95% CI 1.02-1.10) after adjustment for other covariates. Increasing RDW correlated with decreasing hemoglobin, increasing interleukin-6, and impaired iron mobilization. Conclusions: Our results confirm previous observations that RDW is a strong, independent predictor of adverse outcome in chronic heart failure and suggest elevated RDW may indicate inflammatory stress and impaired iron mobilization. These findings encourage further research into the relationship between heart failure and the hematologic system. PMID:20206898

  10. Validation and potential mechanisms of red cell distribution width as a prognostic marker in heart failure.

    PubMed

    Allen, Larry A; Felker, G Michael; Mehra, Mandeep R; Chiong, Jun R; Dunlap, Stephanie H; Ghali, Jalal K; Lenihan, Daniel J; Oren, Ron M; Wagoner, Lynne E; Schwartz, Todd A; Adams, Kirkwood F

    2010-03-01

    Adverse outcomes have recently been linked to elevated red cell distribution width (RDW) in heart failure. Our study sought to validate the prognostic value of RDW in heart failure and to explore the potential mechanisms underlying this association. Data from the Study of Anemia in a Heart Failure Population (STAMINA-HFP) registry, a prospective, multicenter cohort of ambulatory patients with heart failure supported multivariable modeling to assess relationships between RDW and outcomes. The association between RDW and iron metabolism, inflammation, and neurohormonal activation was studied in a separate cohort of heart failure patients from the United Investigators to Evaluate Heart Failure (UNITE-HF) Biomarker registry. RDW was independently predictive of outcome (for each 1% increase in RDW, hazard ratio for mortality 1.06, 95% CI 1.01-1.12; hazard ratio for hospitalization or mortality 1.06; 95% CI 1.02-1.10) after adjustment for other covariates. Increasing RDW correlated with decreasing hemoglobin, increasing interleukin-6, and impaired iron mobilization. Our results confirm previous observations that RDW is a strong, independent predictor of adverse outcome in chronic heart failure and suggest elevated RDW may indicate inflammatory stress and impaired iron mobilization. These findings encourage further research into the relationship between heart failure and the hematologic system. Copyright (c) 2010 Elsevier Inc. All rights reserved.
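
    For illustration of the reported effect size only (not the STAMINA-HFP analysis; the lifelines package, its summary column names, and the simulated data are assumptions), a Cox model of this form can be fit as follows:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(5)
      n = 1000
      rdw = rng.normal(15, 2, n)                          # red cell distribution width (%)
      hazard = 0.02 * np.exp(0.06 * (rdw - 15))           # roughly HR 1.06 per 1% RDW increase
      time = rng.exponential(1 / hazard)
      event = time < 3.0                                  # administrative censoring at 3 years
      df = pd.DataFrame({"rdw": rdw, "time": np.minimum(time, 3.0), "event": event.astype(int)})

      cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])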

  11. Sexual Modes Questionnaire (SMQ): Translation and Psychometric Properties of the Italian Version of the Automatic Thought Scale.

    PubMed

    Nimbi, Filippo Maria; Tripodi, Francesca; Simonelli, Chiara; Nobre, Pedro

    2018-03-01

    The Sexual Modes Questionnaire (SMQ) is a validated and widely used tool to assess the association among negative automatic thoughts, emotions, and sexual response during sexual activity in men and women. The aim was to test the psychometric characteristics of the Italian version of the SMQ focusing on the Automatic Thoughts subscale (SMQ-AT). After linguistic translation, the psychometric properties (internal consistency, construct, and discriminant validity) were evaluated. 1,051 participants (425 men and 626 women; 776 healthy and 275 from clinical groups complaining about sexual problems) participated in the present study. Two confirmatory factor analyses were conducted to test the fit of the original factor structures of the SMQ versions. In addition, two principal component analyses were performed to highlight two new factorial structures that were further validated with confirmatory factor analyses. Cronbach α and composite reliability were used as internal consistency measures, and comparisons between clinical and control groups were run to test the discriminant validity for the male and female versions. The associations with emotions and sexual functioning measures are also reported. Principal component analyses identified 5 factors in the male version: erection concerns thoughts, lack of erotic thoughts, age- and body-related thoughts, negative thoughts toward sex, and worries about partner's evaluation and failure anticipation thoughts. In the female version 6 factors were found: sexual abuse thoughts, lack of erotic thoughts, low self-body image thoughts, failure and disengagement thoughts, sexual passivity and control, and partner's lack of affection. Confirmatory factor analysis supported the adequacy of the factor structure for men and women. Moreover, the SMQ showed a strong association with emotional response and sexual functioning, differentiating between clinical and control groups. This measure is useful to evaluate patients and design interventions focused on negative automatic thoughts during sexual activity and to develop multicultural research. This study reports on the translation and validation of the Italian version of a clinically useful and widely used measure (assessing automatic thoughts during sexual activity). Limits regarding sampling technique and use of the Automatic Thoughts subscale are discussed in the article. The present findings support the validity and the internal consistency of the Italian version of the SMQ-AT and allow the assessment of negative automatic thoughts during sexual activity for clinical and research purposes. Nimbi FM, Tripodi F, Simonelli C, Nobre P. Sexual Modes Questionnaire (SMQ): Translation and Psychometric Properties of the Italian Version of the Automatic Thought Scale. J Sex Med 2018;15:396-409. Copyright © 2018 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.

  12. Ultra Barrier Topsheet Film for Flexible Photovoltaics with 3M Company

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Funkenbusch, Arnie; Ruth, Charles

    2014-12-30

    In this DOE sponsored program, 3M achieved the critical UBT features to enable durable flexible high efficiency modules to be produced by a range of customers who have now certified the 3M UBT and are actively developing said flexible modules. The specific objectives and accomplishments of the work under this program were: Scale-up the current Generation-1 UBT from 12” width, as made on 3M’s pilot line, to 1+meter width full-scale manufacturing, while maintaining baseline performance metrics (see table below); This objective was fully met; Validate service life of Generation-1 UBT for the 25+ year lifetime demanded by the photovoltaic market; Aggressive testing revealed potential failure modes in the Gen 1 UBT. Deficiencies were identified and corrective action taken in the Gen 2 UBT; Develop a Generation-2 UBT on the pilot line, targeting improved performance relative to baseline, including higher %T (percent transmission), lower water vapor transmission rate (WVTR) with targets based on what the technology needs for 25 year lifetime, proven lifetime of 25 years in solar module construction in the field, and lower cost; Testing of UBT Gen 2 under a wide range of conditions presented in this report failed to reveal any failure mode. Therefore UBT Gen 2 is known to be highly durable. 3M will continue to test towards statistically validating a 25 year lifetime under 3M funding; Transfer Generation-2 UBT from the pilot line to the full-scale manufacturing line within three years; and This objective was fully met.

  13. Automated extraction of ejection fraction for quality measurement using regular expressions in Unstructured Information Management Architecture (UIMA) for heart failure.

    PubMed

    Garvin, Jennifer H; DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K

    2012-01-01

    Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veteran Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). An EF value of <40% can be accurately identified in VA echocardiogram reports. An automated information extraction system can be used to accurately extract EF for quality measurement.
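
    An illustrative regular-expression sketch in the spirit of the system described above (the pattern, helper function, and report text are simplified assumptions, not the validated VA rule set):

      import re

      EF_PATTERN = re.compile(
          r"(?:ejection\s+fraction|LVEF|EF)[^0-9%]{0,20}"    # the concept, then short filler text
          r"(\d{1,2})(?:\s*-\s*(\d{1,2}))?\s*%",             # a value or a range like 35-40%
          re.IGNORECASE,
      )

      def extract_ef(report: str):
          """Return the lowest EF percentage mentioned in the report, or None if absent."""
          values = []
          for match in EF_PATTERN.finditer(report):
              values.extend(int(g) for g in match.groups() if g)
          return min(values) if values else None

      report = "Moderately reduced systolic function. Estimated ejection fraction 35-40 %."
      ef = extract_ef(report)
      print(ef, "-> meets EF<40% criterion" if ef is not None and ef < 40 else "")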

  14. Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy.

    PubMed

    Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E

    2015-06-01

    Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge, its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA, while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
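
    A small sketch of the risk priority number ranking described above (the failure modes and scores are invented examples, not the study's FMEA ratings):

      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          description: str
          severity: int       # 1-10
          occurrence: int     # 1-10
          detectability: int  # 1-10 (10 = hardest to detect)

          @property
          def rpn(self) -> int:
              # Risk priority number = severity x occurrence x detectability
              return self.severity * self.occurrence * self.detectability

      modes = [
          FailureMode("Wrong CT dataset selected for planning", 8, 3, 6),
          FailureMode("Target contour transferred to wrong image set", 9, 2, 7),
          FailureMode("Couch parameters not updated after replan", 6, 4, 5),
      ]
      for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(f"RPN={m.rpn:4d}  {m.description}")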

  15. Validation of Heart Failure Events in the Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) Participants Assigned to Doxazosin and Chlorthalidone

    PubMed Central

    Piller, Linda B; Davis, Barry R; Cutler, Jeffrey A; Cushman, William C; Wright, Jackson T; Williamson, Jeff D; Leenen, Frans HH; Einhorn, Paula T; Randall, Otelio S; Golden, John S; Haywood, L Julian

    2002-01-01

    Background: The Antihypertensive and Lipid Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) is a randomized, double-blind, active-controlled trial designed to compare the rate of coronary heart disease events in high-risk hypertensive participants initially randomized to a diuretic (chlorthalidone) versus each of three alternative antihypertensive drugs: alpha-adrenergic blocker (doxazosin), ACE-inhibitor (lisinopril), and calcium-channel blocker (amlodipine). Combined cardiovascular disease risk was significantly increased in the doxazosin arm compared to the chlorthalidone arm (RR 1.25; 95% CI, 1.17–1.33; P < .001), with a doubling of heart failure (fatal, hospitalized, or non-hospitalized but treated) (RR 2.04; 95% CI, 1.79–2.32; P < .001). Questions about heart failure diagnostic criteria led to steps to validate these events further. Methods and Results: Baseline characteristics (age, race, sex, blood pressure) did not differ significantly between treatment groups (P < .05) for participants with heart failure events. Post-event pharmacologic management was similar in both groups and generally conformed to accepted heart failure therapy. Central review of a small sample of cases showed high adherence to ALLHAT heart failure criteria. Of 105 participants with quantitative ejection fraction measurements provided (67% by echocardiogram, 31% by catheterization), 29/46 (63%) from the chlorthalidone group and 41/59 (70%) from the doxazosin group were at or below 40%. Two-year heart failure case-fatalities (22% and 19% in the doxazosin and chlorthalidone groups, respectively) were as expected and did not differ significantly (RR 0.96; 95% CI, 0.67–1.38; P = 0.83). Conclusion: Results of the validation process supported findings of increased heart failure in the ALLHAT doxazosin treatment arm compared to the chlorthalidone treatment arm. PMID:12459039

  16. Development and Evaluation of TiAl Sheet Structures for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Draper, S. L.; Krause, D.; Lerch, B.; Locci, I. E.; Doehnert, B.; Nigam, R.; Das, G.; Sickles, P.; Tabernig, B.; Reger, N.

    2007-01-01

    A cooperative program between the National Aeronautics and Space Administration (NASA), the Austrian Space Agency (ASA), Pratt & Whitney, Engineering Evaluation and Design, and Plansee AG was undertaken to determine the feasibility of achieving significant weight reduction of hypersonic propulsion system structures through the utilization of TiAl. A trade study defined the weight reduction potential of TiAl technologies as 25 to 35 percent compared to the baseline Ni-base superalloy for a stiffener structure in an inlet, combustor, and nozzle section of a hypersonic scramjet engine (ref. 1). A scramjet engine inlet cowl flap was designed, along with a representative subelement, using design practices unique to TiAl. A sub-element was fabricated and tested to assess fabricability and structural performance and validate the design system. The TiAl alloy selected was Plansee's third generation alloy Gamma Met PX (Plansee AG ), a high temperature, high strength gamma-TiAl alloy with high Nb content (refs. 2 and 3). Characterization of Gamma Met PX sheet, including tensile, creep, and fatigue testing was performed. Additionally, design-specific coupons were fabricated and tested in order to improve subelement test predictions. Based on the sheet characterization and results of the coupon tests, the subelement failure location and failure load were accurately predicted.

  17. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network with a flight critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of real time adaptive neural networks used in a recent adaptive flight control system and to evaluate the performance of the on-line trained neural networks. The tools will help in certification from the FAA and will help in the successful deployment of neural network based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller and the results are discussed.

  18. Failure prediction of thin beryllium sheets used in spacecraft structures

    NASA Technical Reports Server (NTRS)

    Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.

    1991-01-01

    The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a high order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. These experiments and a complementary ultrasonic investigation are described in detail. Finally, validity of the criterion and newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. Numerical simulation incrementally applies loads to a structural component that is being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.
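
    For reference, the standard plane-stress form of the criterion named above is shown here (the study's six-dimensional surface generalises it); failure is predicted when

      F_1\sigma_1 + F_2\sigma_2 + F_{11}\sigma_1^2 + F_{22}\sigma_2^2 + F_{66}\tau_{12}^2 + 2F_{12}\sigma_1\sigma_2 \ge 1,

      F_1 = \frac{1}{X_t} - \frac{1}{X_c}, \quad F_{11} = \frac{1}{X_t X_c}, \quad F_2 = \frac{1}{Y_t} - \frac{1}{Y_c}, \quad F_{22} = \frac{1}{Y_t Y_c}, \quad F_{66} = \frac{1}{S^2},

    where X_t, X_c, Y_t, Y_c are the tensile and compressive strengths in the two in-plane material directions (taken as positive magnitudes), S is the shear strength, and the interaction coefficient F_12 is typically fit from a biaxial test.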

  19. Ambulatory heart rate range predicts mode-specific mortality and hospitalisation in chronic heart failure

    PubMed Central

    Cubbon, Richard M; Ruff, Naomi; Groves, David; Eleuteri, Antonio; Denby, Christine; Kearney, Lorraine; Ali, Noman; Walker, Andrew M N; Jamil, Haqeel; Gierula, John; Gale, Chris P; Batin, Phillip D; Nolan, James; Shah, Ajay M; Fox, Keith A A; Sapsford, Robert J; Witte, Klaus K; Kearney, Mark T

    2016-01-01

    Objective: We aimed to define the prognostic value of the heart rate range during a 24 h period in patients with chronic heart failure (CHF). Methods: Prospective observational cohort study of 791 patients with CHF associated with left ventricular systolic dysfunction. Mode-specific mortality and hospitalisation were linked with ambulatory heart rate range (AHRR; calculated as maximum minus minimum heart rate using 24 h Holter monitor data, including paced and non-sinus complexes) in univariate and multivariate analyses. Findings were then corroborated in a validation cohort of 408 patients with CHF with preserved or reduced left ventricular ejection fraction. Results: After a mean 4.1 years of follow-up, increasing AHRR was associated with reduced risk of all-cause, sudden, non-cardiovascular and progressive heart failure death in univariate analyses. After accounting for characteristics that differed between groups above and below median AHRR using multivariate analysis, AHRR remained strongly associated with all-cause mortality (HR 0.991/bpm increase in AHRR (95% CI 0.999 to 0.982); p=0.046). AHRR was not associated with the risk of any non-elective hospitalisation, but was associated with heart-failure-related hospitalisation. AHRR was modestly associated with the SD of normal-to-normal beats (R2=0.2; p<0.001) and with peak exercise-test heart rate (R2=0.33; p<0.001). Analysis of the validation cohort revealed AHRR to be associated with all-cause and mode-specific death as described in the derivation cohort. Conclusions: AHRR is a novel and readily available prognosticator in patients with CHF, which may reflect autonomic tone and exercise capacity. PMID:26674986
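
    A minimal sketch of the exposure variable defined above, computed from a 24 h Holter heart-rate series (the synthetic signal and the median split are illustrative; the prognostic modelling itself is not reproduced):

      import numpy as np

      rng = np.random.default_rng(11)
      # Simulated minute-by-minute heart rate over 24 h: circadian swing plus noise
      minutes = np.arange(24 * 60)
      heart_rate = 70 + 15 * np.sin(2 * np.pi * minutes / (24 * 60)) + rng.normal(0, 4, minutes.size)

      ahrr = heart_rate.max() - heart_rate.min()   # ambulatory heart rate range (bpm)
      print(f"AHRR = {ahrr:.1f} bpm")

      # In the cohort analysis, patients above and below the median AHRR were compared
      cohort_ahrr = rng.normal(55, 15, 791)
      high_range = cohort_ahrr > np.median(cohort_ahrr)
      print("patients above median AHRR:", int(high_range.sum()))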

  20. Technology Insertion (TI)/Industrial Process Improvement (IPI) Task Order Number 1. Quick Fix Plan for WR-ALC, 7 RCC’s

    DTIC Science & Technology

    1989-09-25

    Orders and test specifications. Some mandatory replacement of high failure items are directed by Technical Orders to extend MTBF. Precision bearing and...Experience is very high but natural attrition is reducing the numbers faster than training is furnishing younger mechanics. Surge conditions would be...model validation run output revealed that utilization of equipment is very low and manpower is high . Based on this analysis and the brainstorming

  1. Ceramic applications in turbine engines. [for improved component performance and reduced fuel usage

    NASA Technical Reports Server (NTRS)

    Hudson, M. S.; Janovicz, M. A.; Rockwood, F. A.

    1980-01-01

    Ceramic material characterization; testing of ceramic nozzle vanes, turbine tip shrouds, and regenerator disks at 36 C above the baseline engine TIT; and the design, analysis, fabrication, and development activities are described. The design of ceramic components for the next generation engine to be operated at 2070 F was completed. Coupons simulating the critical 2070 F rotor blade were hot spin tested to failure with sufficient margin to qualify sintered silicon nitride and sintered silicon carbide, validating both the attachment design and the finite element strength predictions. Progress made in increasing strength, minimizing variability, and developing nondestructive evaluation techniques is reported.

  2. Accelerated life-test methods and results for implantable electronic devices with adhesive encapsulation.

    PubMed

    Huang, Xuechen; Denprasert, Petcharat May; Zhou, Li; Vest, Adriana Nicholson; Kohan, Sam; Loeb, Gerald E

    2017-09-01

    We have developed and applied new methods to estimate the functional life of miniature, implantable, wireless electronic devices that rely on non-hermetic, adhesive encapsulants such as epoxy. A comb pattern board with a high density of interdigitated electrodes (IDE) could be used to detect incipient failure from water vapor condensation. Inductive coupling of an RF magnetic field was used to provide DC bias and to detect deterioration of an encapsulated comb pattern. Diodes in the implant converted part of the received energy into DC bias on the comb pattern. The capacitance of the comb pattern forms a resonant circuit with the inductor by which the implant receives power. Any moisture affects both the resonant frequency and the Q-factor of the resonance of the circuitry, which was detected wirelessly by its effects on the coupling between two orthogonal RF coils placed around the device. Various defects were introduced into the comb pattern devices to demonstrate sensitivity to failures and to correlate these signals with visual inspection of failures. Optimized encapsulation procedures were validated in accelerated life tests of both comb patterns and a functional neuromuscular stimulator under development. Strong adhesive bonding between epoxy and electronic circuitry proved to be necessary and sufficient to predict 1 year packaging reliability of 99.97% for the neuromuscular stimulator.
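
    The wireless moisture detection described above relies on the standard resonance relation, shown here for context (the actual coil inductance and comb capacitance are whatever the implant provides):

      f_0 = \frac{1}{2\pi\sqrt{LC}}

    so any condensed moisture that raises the comb-pattern capacitance C (water has a high permittivity) lowers f_0, and the added dielectric and leakage losses lower the resonance Q; both effects are observable through the coupling between the two orthogonal RF coils.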

  3. Effect of stacking angles on mechanical properties and damage propagation of plain woven carbon fiber laminates

    NASA Astrophysics Data System (ADS)

    Zhuang, Weimin; Ao, Wenhong

    2018-03-01

    Damage propagation induced failure is a predominant damage mechanism. This study is aimed at assessing the damage state and damage propagation induced failure of woven carbon fiber/epoxy laminates with different stacking angles subjected to quasi-static tensile and bending loads. Different stages of the damage process and damage behavior under the bending load are investigated by Scanning Electron Microscopy (SEM). The woven carbon fiber/epoxy laminates which are stacked at six different angles (0°, 15°, 30°, 45°, 60°, 75°) with eight plies have been analyzed: [0]8, [15]8, [30]8, [45]8, [60]8, [75]8. Three-point bending tests and quasi-static tensile tests are used in validating the woven carbon fiber/epoxy laminates’ mechanical properties. Furthermore, the damage propagation and failure modes observed under flexural loading are correlated with flexural force and load-displacement behaviour respectively for the laminates. The experimental results have indicated that the [45]8 laminate exhibits the best flexural performance in terms of energy absorption due to its pseudo-ductile behaviour, but its tensile strength and flexural strength are drastically decreased compared to the [0]8 laminate. Finally, SEM micrographs of specimens and fracture surfaces are used to reveal the different types of damage of the laminates with different stacking angles.

  4. Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects

    NASA Technical Reports Server (NTRS)

    Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.

    2002-01-01

    Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
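
    The limited failure population (cure) structure described above can be written, in a generic form rather than the paper's exact parameterisation, as

      S(t) = \pi + (1 - \pi)\, S_0(t), \qquad S_0(t) = 1 - \Phi\!\left(\frac{\ln t - \mu - b_i}{\sigma}\right),

    where \pi is the probability that a subject is immune to Grade IV VGE, S_0 is a lognormal survival function for susceptible subjects, \Phi is the standard normal CDF, and b_i is a subject-level random effect that induces correlation among an individual's repeated exposures. Interval censoring enters the likelihood through differences of the form S(t_{\mathrm{lower}}) - S(t_{\mathrm{upper}}).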

  5. Health assessment of cooling fan bearings using wavelet-based filtering.

    PubMed

    Miao, Qiang; Tang, Chao; Liang, Wei; Pecht, Michael

    2012-12-24

    As commonly used forced convection air cooling devices in electronics, cooling fans are crucial for guaranteeing the reliability of electronic systems. In a cooling fan assembly, fan bearing failure is a major failure mode that causes excessive vibration, noise, reduction in rotation speed, locked rotor, failure to start, and other problems; therefore, it is necessary to conduct research on the health assessment of cooling fan bearings. This paper presents a vibration-based fan bearing health evaluation method using comblet filtering and exponentially weighted moving average. A new health condition indicator (HCI) for fan bearing degradation assessment is proposed. In order to collect the vibration data for validation of the proposed method, a cooling fan accelerated life test was conducted to simulate the lubricant starvation of fan bearings. A comparison between the proposed method and methods in previous studies (i.e., root mean square, kurtosis, and fault growth parameter) was carried out to assess the performance of the HCI. The analysis results suggest that the HCI can identify incipient fan bearing failures and describe the bearing degradation process. Overall, the work presented in this paper provides a promising method for fan bearing health evaluation and prognosis.
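
    The record above does not spell out the comblet-filter details, so the sketch below only illustrates the exponentially weighted moving average step: a per-record vibration feature (plain RMS here, standing in for the filtered feature) is smoothed into a slowly evolving health indicator. The signal, feature choice and smoothing constant are assumptions, not values from the paper.

      import numpy as np

      def ewma(values, lam=0.2):
          # z_t = lam * x_t + (1 - lam) * z_{t-1}; smaller lam = heavier smoothing
          z = np.empty(len(values), dtype=float)
          z[0] = values[0]
          for t in range(1, len(values)):
              z[t] = lam * values[t] + (1.0 - lam) * z[t - 1]
          return z

      rng = np.random.default_rng(0)
      # Synthetic vibration records whose noise level grows as the bearing degrades
      records = [rng.normal(0.0, 0.1 + 0.002 * k, size=4096) for k in range(200)]
      rms = np.array([np.sqrt(np.mean(r ** 2)) for r in records])  # per-record feature
      hci = ewma(rms, lam=0.1)  # smoothed health condition indicator
      print("first/last indicator values:", hci[0].round(3), hci[-1].round(3))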

  6. In Vitro Hepatic Trans-Differentiation of Human Mesenchymal Stem Cells Using Sera from Congestive/Ischemic Liver during Cardiac Failure

    PubMed Central

    Bishi, Dillip Kumar; Mathapati, Santosh; Cherian, Kotturathu Mammen; Guhathakurta, Soma; Verma, Rama Shanker

    2014-01-01

    Cellular therapy for end-stage liver failure using human mesenchymal stem cell (hMSC)-derived hepatocytes is a potential alternative to liver transplantation. Hepatic trans-differentiation of hMSCs is routinely accomplished by induction with commercially available recombinant growth factors, which has limited clinical applicability. In the present study, we have evaluated the potential of sera from cardiac-failure-associated congestive/ischemic liver patients for hepatic trans-differentiation of hMSCs. Results from such experiments were confirmed through morphological changes and expression of hepatocyte-specific markers at the molecular and cellular levels. Furthermore, the process of mesenchymal-to-epithelial transition during hepatic trans-differentiation of hMSCs was confirmed by elevated expression of E-Cadherin and down-regulation of Snail. The functionality of hMSC-derived hepatocytes was validated by various liver function tests such as albumin synthesis, urea release, glycogen accumulation and the presence of a drug-inducible cytochrome P450 system. Based on these findings, we conclude that sera from congestive/ischemic liver during cardiac failure support a liver-specific microenvironment for effective hepatic trans-differentiation of hMSCs in vitro. PMID:24642599

  7. Health Assessment of Cooling Fan Bearings Using Wavelet-Based Filtering

    PubMed Central

    Miao, Qiang; Tang, Chao; Liang, Wei; Pecht, Michael

    2013-01-01

    As commonly used forced convection air cooling devices in electronics, cooling fans are crucial for guaranteeing the reliability of electronic systems. In a cooling fan assembly, fan bearing failure is a major failure mode that causes excessive vibration, noise, reduction in rotation speed, locked rotor, failure to start, and other problems; therefore, it is necessary to conduct research on the health assessment of cooling fan bearings. This paper presents a vibration-based fan bearing health evaluation method using comblet filtering and exponentially weighted moving average. A new health condition indicator (HCI) for fan bearing degradation assessment is proposed. In order to collect the vibration data for validation of the proposed method, a cooling fan accelerated life test was conducted to simulate the lubricant starvation of fan bearings. A comparison between the proposed method and methods in previous studies (i.e., root mean square, kurtosis, and fault growth parameter) was carried out to assess the performance of the HCI. The analysis results suggest that the HCI can identify incipient fan bearing failures and describe the bearing degradation process. Overall, the work presented in this paper provides a promising method for fan bearing health evaluation and prognosis. PMID:23262486

  8. Evolution of Exchangeable Copper and Relative Exchangeable Copper through the Course of Wilson's Disease in the Long Evans Cinnamon Rat

    PubMed Central

    Schmitt, Françoise; Podevin, Guillaume; Poupon, Joël; Roux, Jérôme; Legras, Pierre; Trocello, Jean-Marc; Woimant, France; Laprévote, Olivier; NGuyen, Tuan Huy; Balkhi, Souleiman El

    2013-01-01

    Background: Wilson's disease (WD) is an inherited disorder of copper metabolism leading to liver failure and/or neurological impairment. Its diagnosis often remains difficult even with genetic testing. Relative exchangeable copper (REC) has recently been described as a reliable serum diagnostic marker for WD. Methodology/Principal Findings: The aim of this study was to validate the use of REC in the Long Evans Cinnamon (LEC) rat, an animal model for WD, and to study its relevance under different conditions in comparison with conventional markers. Two groups of LEC rats and one group of Long-Evans (LE) rats were clinically and biologically monitored from 6 to 28 weeks of age. One group of LEC rats was given copper-free food. The other groups had normal food. Blood samples were collected each month and different serum markers for WD (namely ceruloplasmin oxidase activity, exchangeable copper (CuEXC), total serum copper and REC) and acute liver failure (serum transaminases and bilirubinemia) were tested. Every LEC rat under normal food developed acute liver failure (ALF), with 40% global mortality. Serum transaminases and bilirubinemia along with total serum copper and exchangeable copper levels increased with the onset of acute liver failure. A correlation was observed between CuEXC values and the severity of ALF. Cut-off values were different between young and adult rats and evolved because of age and/or liver failure. Only REC, with values >19%, was able to discriminate LEC groups from the LE control group at every time point in the study. REC sensitivity and specificity reached 100% in adult rats. Conclusions/Significance: REC appears to be independent of demographic or clinical data in LEC rats. It is a very simple and reliable blood test for the diagnosis of copper toxicosis owing to a lack of ATP7B function. CuEXC can be used as an accurate biomarker of copper overload. PMID:24358170

  9. Dynamic TIMI Risk Score for STEMI

    PubMed Central

    Amin, Sameer T.; Morrow, David A.; Braunwald, Eugene; Sloan, Sarah; Contant, Charles; Murphy, Sabina; Antman, Elliott M.

    2013-01-01

    Background: Although there are multiple methods of risk stratification for ST-elevation myocardial infarction (STEMI), this study presents a prospectively validated method for reclassification of patients based on in-hospital events. A dynamic risk score provides an initial risk stratification and reassessment at discharge. Methods and Results: The dynamic TIMI risk score for STEMI was derived in ExTRACT-TIMI 25 and validated in TRITON-TIMI 38. Baseline variables were from the original TIMI risk score for STEMI. New variables were major clinical events occurring during the index hospitalization. Each variable was tested individually in a univariate Cox proportional hazards regression. Variables with P<0.05 were incorporated into a full multivariable Cox model to assess the risk of death at 1 year. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The dynamic score included the development of in-hospital MI, arrhythmia, major bleed, stroke, congestive heart failure, recurrent ischemia, and renal failure. The C-statistic produced by the dynamic score in the derivation database was 0.76, with a net reclassification improvement (NRI) of 0.33 (P<0.0001) from the inclusion of dynamic events to the original TIMI risk score. In the validation database, the C-statistic was 0.81, with a NRI of 0.35 (P=0.01). Conclusions: This score is a prospectively derived, validated means of estimating 1-year mortality of STEMI at hospital discharge and can serve as a clinically useful tool. By incorporating events during the index hospitalization, it can better define risk and help to guide treatment decisions. PMID:23525425

  10. Dynamic TIMI risk score for STEMI.

    PubMed

    Amin, Sameer T; Morrow, David A; Braunwald, Eugene; Sloan, Sarah; Contant, Charles; Murphy, Sabina; Antman, Elliott M

    2013-01-29

    Although there are multiple methods of risk stratification for ST-elevation myocardial infarction (STEMI), this study presents a prospectively validated method for reclassification of patients based on in-hospital events. A dynamic risk score provides an initial risk stratification and reassessment at discharge. The dynamic TIMI risk score for STEMI was derived in ExTRACT-TIMI 25 and validated in TRITON-TIMI 38. Baseline variables were from the original TIMI risk score for STEMI. New variables were major clinical events occurring during the index hospitalization. Each variable was tested individually in a univariate Cox proportional hazards regression. Variables with P<0.05 were incorporated into a full multivariable Cox model to assess the risk of death at 1 year. Each variable was assigned an integer value based on the odds ratio, and the final score was the sum of these values. The dynamic score included the development of in-hospital MI, arrhythmia, major bleed, stroke, congestive heart failure, recurrent ischemia, and renal failure. The C-statistic produced by the dynamic score in the derivation database was 0.76, with a net reclassification improvement (NRI) of 0.33 (P<0.0001) from the inclusion of dynamic events to the original TIMI risk score. In the validation database, the C-statistic was 0.81, with a NRI of 0.35 (P=0.01). This score is a prospectively derived, validated means of estimating 1-year mortality of STEMI at hospital discharge and can serve as a clinically useful tool. By incorporating events during the index hospitalization, it can better define risk and help to guide treatment decisions.
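
    A minimal sketch of the score-construction step described above: each retained in-hospital event receives an integer weight derived from its odds ratio, and a patient's dynamic score is the sum of the weights of the events observed. The odds ratios and the rounding rule below (points proportional to the log odds ratio) are hypothetical placeholders, not the published TIMI values.

      import math

      # Hypothetical odds ratios for in-hospital events (not the published values)
      odds_ratios = {
          "in_hospital_MI": 2.1,
          "arrhythmia": 1.8,
          "major_bleed": 2.4,
          "stroke": 3.0,
          "congestive_heart_failure": 2.7,
          "recurrent_ischemia": 1.6,
          "renal_failure": 2.9,
      }

      # Integer points proportional to log(OR); an OR of about 1.5 earns one point
      points = {event: max(1, round(math.log(orr) / math.log(1.5)))
                for event, orr in odds_ratios.items()}

      def dynamic_score(events_present):
          # Sum the integer points for the events observed during the index stay
          return sum(points[e] for e in events_present)

      print(points)
      print("score:", dynamic_score(["arrhythmia", "renal_failure"]))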

  11. Traceability validation of a high speed short-pulse testing method used in LED production

    NASA Astrophysics Data System (ADS)

    Revtova, Elena; Vuelban, Edgar Moreno; Zhao, Dongsheng; Brenkman, Jacques; Ulden, Henk

    2017-12-01

    Industrial processes for LED (light-emitting diode) production include testing of LED light output performance. Most of these processes are monitored and controlled by measuring LEDs optically, electrically and thermally with high-speed short-pulse measurement methods. However, these methods are not standardized, and so much of the information is proprietary that it is impossible for third parties, such as NMIs, to trace and validate them. These techniques are known to have traceability issues and metrological inadequacies. Partly as a result, the claimed performance specifications of LEDs are often overstated, which leads to customer dissatisfaction and a large percentage of failures of LEDs in daily use. In this research a traceable setup is developed to validate one of the high-speed testing techniques, investigate its inadequacies and work out the traceability issues. A well-characterised short square pulse of 25 ms is applied to chip-on-board (CoB) LED modules to investigate the light output and colour content. We conclude that the short-pulse method is very efficient provided that a well-defined electrical current pulse is applied and the stabilization time of the device is accurately determined a priori. No colour shift is observed. The largest contributors to the measurement uncertainty are a poorly defined current pulse and an inaccurate calibration factor.

  12. A Comparison of the Fagerström Test for Cigarette Dependence and Cigarette Dependence Scale in a Treatment-Seeking Sample of Pregnant Smokers

    PubMed Central

    Singleton, Edward G.; Heishman, Stephen J.

    2016-01-01

    Introduction: Valid and reliable brief measures of cigarette dependence are essential for research purposes and effective clinical care. Two widely used brief measures of cigarette dependence are the six-item Fagerström Test for Cigarette Dependence (FTCD) and the five-item Cigarette Dependence Scale (CDS-5). Their respective metric characteristics among pregnant smokers have not yet been studied. Methods: This was a secondary analysis of data from pregnant smokers (N = 476) enrolled in a smoking cessation study. We assessed internal consistency and reliability, and examined correlations between the instruments and smoking-related behaviors for construct validity. We evaluated predictive validity by testing how well the measures predict abstinence 2 weeks after the quit date. Results: Cronbach’s alpha coefficient for the CDS-5 was 0.62 and for the FTCD 0.55. The measures were strongly correlated with each other, although the FTCD, but not the CDS-5, was associated with saliva cotinine concentration. The FTCD, CDS-5, craving to smoke, and withdrawal symptoms failed to predict smoking status 2 weeks following the quit date. Conclusions: Suboptimal reliability estimates and failure to predict short-term smoking call into question the value of including either of the brief measures in studies that aim to explain the obstacles to smoking cessation during pregnancy. PMID:25995159
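
    For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from a respondents-by-items matrix, as in the short sketch below; the response data are synthetic.

      import numpy as np

      def cronbach_alpha(items):
          # items: 2-D array, rows = respondents, columns = questionnaire items
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)      # variance of each item
          total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(200, 1))
      responses = latent + rng.normal(scale=1.0, size=(200, 6))  # six noisy items
      print("alpha:", round(cronbach_alpha(responses), 2))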

  13. Development of an Electronic Medical Record Based Alert for Risk of HIV Treatment Failure in a Low-Resource Setting

    PubMed Central

    Puttkammer, Nancy; Zeliadt, Steven; Balan, Jean Gabriel; Baseman, Janet; Destiné, Rodney; Domerçant, Jean Wysler; France, Garilus; Hyppolite, Nathaelf; Pelletier, Valérie; Raphael, Nernst Atwood; Sherr, Kenneth; Yuhas, Krista; Barnhart, Scott

    2014-01-01

    Background: The adoption of electronic medical record systems in resource-limited settings can help clinicians monitor patients' adherence to HIV antiretroviral therapy (ART) and identify patients at risk of future ART failure, allowing resources to be targeted to those most at risk. Methods: Among adult patients enrolled on ART from 2005–2013 at two large, public-sector hospitals in Haiti, ART failure was assessed after 6–12 months on treatment, based on the World Health Organization's immunologic and clinical criteria. We identified models for predicting ART failure based on ART adherence measures and other patient characteristics. We assessed performance of candidate models using area under the receiver operating curve, and validated results using a randomly-split data sample. The selected prediction model was used to generate a risk score, and its ability to differentiate ART failure risk over a 42-month follow-up period was tested using stratified Kaplan Meier survival curves. Results: Among 923 patients with CD4 results available during the period 6–12 months after ART initiation, 196 (21.2%) met ART failure criteria. The pharmacy-based proportion of days covered (PDC) measure performed best among five possible ART adherence measures at predicting ART failure. Average PDC during the first 6 months on ART was 79.0% among cases of ART failure and 88.6% among cases of non-failure (p<0.01). When additional information including sex, baseline CD4, and duration of enrollment in HIV care prior to ART initiation were added to PDC, the risk score differentiated between those who did and did not meet failure criteria over 42 months following ART initiation. Conclusions: Pharmacy data are most useful for new ART adherence alerts within iSanté. Such alerts offer potential to help clinicians identify patients at high risk of ART failure so that they can be targeted with adherence support interventions, before ART failure occurs. PMID:25390044

  14. Development of an electronic medical record based alert for risk of HIV treatment failure in a low-resource setting.

    PubMed

    Puttkammer, Nancy; Zeliadt, Steven; Balan, Jean Gabriel; Baseman, Janet; Destiné, Rodney; Domerçant, Jean Wysler; France, Garilus; Hyppolite, Nathaelf; Pelletier, Valérie; Raphael, Nernst Atwood; Sherr, Kenneth; Yuhas, Krista; Barnhart, Scott

    2014-01-01

    The adoption of electronic medical record systems in resource-limited settings can help clinicians monitor patients' adherence to HIV antiretroviral therapy (ART) and identify patients at risk of future ART failure, allowing resources to be targeted to those most at risk. Among adult patients enrolled on ART from 2005-2013 at two large, public-sector hospitals in Haiti, ART failure was assessed after 6-12 months on treatment, based on the World Health Organization's immunologic and clinical criteria. We identified models for predicting ART failure based on ART adherence measures and other patient characteristics. We assessed performance of candidate models using area under the receiver operating curve, and validated results using a randomly-split data sample. The selected prediction model was used to generate a risk score, and its ability to differentiate ART failure risk over a 42-month follow-up period was tested using stratified Kaplan Meier survival curves. Among 923 patients with CD4 results available during the period 6-12 months after ART initiation, 196 (21.2%) met ART failure criteria. The pharmacy-based proportion of days covered (PDC) measure performed best among five possible ART adherence measures at predicting ART failure. Average PDC during the first 6 months on ART was 79.0% among cases of ART failure and 88.6% among cases of non-failure (p<0.01). When additional information including sex, baseline CD4, and duration of enrollment in HIV care prior to ART initiation were added to PDC, the risk score differentiated between those who did and did not meet failure criteria over 42 months following ART initiation. Pharmacy data are most useful for new ART adherence alerts within iSanté. Such alerts offer potential to help clinicians identify patients at high risk of ART failure so that they can be targeted with adherence support interventions, before ART failure occurs.
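
    The sketch below illustrates the pharmacy-based proportion of days covered (PDC) measure highlighted above, together with a simple check of its discrimination for a binary failure label via the area under the ROC curve. The fill records, the six-month window and the labels are synthetic, and scikit-learn is assumed to be available.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def pdc(fill_days, supply_days, window_days=182):
          # Fraction of days in the window covered by at least one dispensed supply
          covered = np.zeros(window_days, dtype=bool)
          for start, supply in zip(fill_days, supply_days):
              covered[start:min(start + supply, window_days)] = True
          return covered.mean()

      # Two synthetic patients over roughly the first 6 months on ART
      p1 = pdc(fill_days=[0, 30, 60, 95, 130, 160], supply_days=[30] * 6)
      p2 = pdc(fill_days=[0, 45, 120], supply_days=[30] * 3)
      print("PDC patient 1:", round(p1, 2), "| PDC patient 2:", round(p2, 2))

      # Discrimination of PDC for a synthetic failure label (1 = ART failure);
      # lower adherence should mean higher risk, hence the sign flip
      pdc_values = np.array([0.95, 0.88, 0.52, 0.79, 0.40, 0.91, 0.66, 0.85])
      failed = np.array([0, 0, 1, 0, 1, 0, 1, 0])
      print("AUC:", round(roc_auc_score(failed, -pdc_values), 2))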

  15. Intramedullary nailing in opening wedge high tibial osteotomy-in vitro test for validation of a method of fixation.

    PubMed

    Burchard, Rene; Katerla, Denise; Hammer, Marina; Pahlkötter, Anke; Soost, Christian; Dietrich, Gerhard; Ohrndorf, Arne; Richter, Wolfgang; Lengsfeld, Markus; Christ, Hans-Jürgen; Graw, Jan Adriaan; Fritzen, Claus-Peter

    2018-02-01

    Opening wedge high tibial osteotomy (HTO) as a treatment for unicompartmental osteoarthritis of the knee can significantly relieve pain and prevent, or at least delay, an early joint replacement. The fixation of the osteotomy has undergone development and refinement in recent years. The angle-stable plate fixator is currently one of the most commonly used plates in HTOs. The angular-stable fixation between the screws and the plate offers high primary stability to retain the correction with early weight-bearing protocols. This surgical technique is performed as a standard of care and is generally well tolerated by patients. Nevertheless, some studies observed that many patients complained about discomfort related to the implant. Therefore, the stability of two different intramedullary nails, a short implant used in humeral fractures and a long device used in tibial fractures, was investigated as an alternative fixation technique for stabilization in valgus HTOs. The plate fixator was defined as the reference standard. Nine synthetic tibia models were osteotomized in a standardized manner and stabilized with one of the fixation devices. Axial compression was applied using a dedicated testing machine, and two protocols were performed: a multi-step fatigue test and a load-to-failure test. Overall motion and medial and lateral displacements were documented. Fractures always occurred at the lateral cortex. Axial cyclic loading up to 800 N was tolerated by all implants without failure. The tibia nail provided the highest fatigue strength under the load-to-failure conditions. The results suggest that intramedullary nailing might be used as an alternative concept in HTO.

  16. Peridynamics for failure and residual strength prediction of fiber-reinforced composites

    NASA Astrophysics Data System (ADS)

    Colavito, Kyle

    Peridynamics is a reformulation of classical continuum mechanics that utilizes integral equations in place of partial differential equations to remove the difficulty in handling discontinuities, such as cracks or interfaces, within a body. Damage is included within the constitutive model; initiation and propagation can occur without resorting to special crack growth criteria necessary in other commonly utilized approaches. Predicting damage and residual strengths of composite materials involves capturing complex, distinct and progressive failure modes. The peridynamic laminate theory correctly predicts the load redistribution in general laminate layups in the presence of complex failure modes through the use of multiple interaction types. This study presents two approaches to obtain the critical peridynamic failure parameters necessary to capture the residual strength of a composite structure. The validity of both approaches is first demonstrated by considering the residual strength of isotropic materials. The peridynamic theory is used to predict the crack growth and final failure load in both a diagonally loaded square plate with a center crack, as well as a four-point shear specimen subjected to asymmetric loading. This study also establishes the validity of each approach by considering composite laminate specimens in which each failure mode is isolated. Finally, the failure loads and final failure modes are predicted in a laminate with various hole diameters subjected to tensile and compressive loads.

  17. Changes in the endurance shuttle walk test in COPD patients with chronic respiratory failure after pulmonary rehabilitation: the minimal important difference obtained with anchor- and distribution-based method.

    PubMed

    Altenburg, Wytske A; Duiverman, Marieke L; Ten Hacken, Nick H T; Kerstjens, Huib A M; de Greef, Mathieu H G; Wijkstra, Peter J; Wempe, Johan B

    2015-02-19

    Although the endurance shuttle walk test (ESWT) has proven to be responsive to change in exercise capacity after pulmonary rehabilitation (PR) for COPD, the minimal important difference (MID) has not yet been established. We aimed to establish the MID of the ESWT in patients with severe COPD and chronic hypercapnic respiratory failure following PR. Data were derived from a randomized controlled trial investigating the value of noninvasive positive pressure ventilation added to PR. Fifty-five patients with stable COPD, GOLD stage IV, with chronic respiratory failure were included (mean (SD) FEV1 31.1 (12.0) % pred, age 62 (9) y). MID estimates of the ESWT in seconds, percentage and meters change were calculated with anchor-based and distribution-based methods. Six-minute walking distance (6MWD), peak work rate on bicycle ergometry (Wpeak) and the Chronic Respiratory Questionnaire (CRQ) were used as anchors, and Cohen's effect size was used as the distribution-based method. The estimated MID of the ESWT with the different anchors ranged from 186-199 s, 76-82% and 154-164 m. Using the distribution-based method the MID was 144 s, 61% and 137 m. Estimates of the MID for the ESWT after PR showed only small differences using different anchors in patients with COPD and chronic respiratory failure. Therefore we recommend using a range of 186-199 s, 76-82% or 154-164 m as the MID of the ESWT in COPD patients with chronic respiratory failure. Further research in larger populations should elucidate whether this cut-off value is also valid in other COPD populations and with other interventions. ClinicalTrials.Gov (ID NCT00135538).
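
    A compact sketch of the two MID estimation strategies named above: a distribution-based estimate taken as half the standard deviation of the change scores (one common operationalization of Cohen's effect size) and an anchor-based estimate from a linear regression of ESWT change on anchor change, evaluated at the anchor's own MID. All numbers, including the anchor's MID, are synthetic assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 55
      eswt_change = rng.normal(180.0, 290.0, n)  # change in ESWT time (s), synthetic
      anchor_change = 0.02 * eswt_change + rng.normal(0.0, 4.0, n)  # synthetic anchor

      # Distribution-based MID: 0.5 * SD of the change scores
      mid_distribution = 0.5 * eswt_change.std(ddof=1)

      # Anchor-based MID: predicted ESWT change at the anchor's assumed MID
      slope, intercept = np.polyfit(anchor_change, eswt_change, 1)
      anchor_mid = 3.0  # assumed MID of the anchor on its own scale
      mid_anchor = intercept + slope * anchor_mid

      print("distribution-based MID (s):", round(mid_distribution))
      print("anchor-based MID (s):", round(mid_anchor))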

  18. Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks

    NASA Astrophysics Data System (ADS)

    Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.

    2015-12-01

    A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We also can evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
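
    The inundation-based validation metrics mentioned above reduce to simple cell counts once the observed (A) and simulated (B) flow footprints are available as boolean rasters, as in the sketch below; the positive predictive value P(A|B), the negative predictive value P(not A|not B) and the Jaccard fitness coefficient are shown. The example grids are synthetic.

      import numpy as np

      def validation_metrics(observed, simulated):
          A, B = observed.astype(bool), simulated.astype(bool)
          p_a_given_b = (A & B).sum() / B.sum()         # P(A|B): hit rate of predicted cells
          p_na_given_nb = (~A & ~B).sum() / (~B).sum()  # P(not A | not B)
          jaccard = (A & B).sum() / (A | B).sum()       # Jaccard fitness coefficient
          return p_a_given_b, p_na_given_nb, jaccard

      rng = np.random.default_rng(3)
      observed = rng.random((100, 100)) < 0.2
      simulated = observed ^ (rng.random((100, 100)) < 0.05)  # mostly right, some errors
      ppv, npv, jac = validation_metrics(observed, simulated)
      print(f"P(A|B)={ppv:.2f}  P(not A|not B)={npv:.2f}  Jaccard={jac:.2f}")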

  19. Do complaints of everyday cognitive failures in high schizotypy relate to emotional working memory deficits in the lab?

    PubMed

    Carrigan, Nicole; Barkus, Emma; Ong, Adriel; Wei, Maryann

    2017-10-01

    Individuals high on schizotypy complain of increased cognitive failures in everyday life. However, the neuropsychological performance of this group does not consistently indicate underlying ability deficits. It is possible that current neuropsychological tests lack ecological validity. Given the increased affective reactivity of high schizotypes, they may be more sensitive to emotional content interfering with cognitive ability. This study sought to explore whether an affective n-back working memory task would elicit impaired performance in schizotypy, echoing complaints concerning real world cognition. 127 healthy participants completed self-report measures of schizotypy and cognitive failures and an affective n-back working memory task. This task was varied across three levels of load (1- to 3-back) and four types of stimulus emotion (neutral, fearful, happy, sad). Differences between high (n=39) and low (n=48) schizotypy groups on performance outcomes of hits and false alarms were examined, with emotion and load as within-groups variables. As expected, high schizotypes reported heightened vulnerability to cognitive failures. They also demonstrated a relative working memory impairment for emotional versus neutral stimuli, whereas low schizotypes did not. High schizotypes performed most poorly in response to fearful stimuli. For false alarms, there was an interaction between schizotypy, load, and emotion, such that high schizotypy was associated with deficits in response to fearful stimuli only at higher levels of task difficulty. Inclusion of self-reported cognitive failures did not account for this. These findings suggest that the "gap" between subjective and objective cognition in schizotypy may reflect the heightened emotional demands associated with cognitive functioning in the real world, although other factors also seem to play a role. There is a need to improve the ecological validity of objective assessments, whilst also recognizing that self-reported cognitive failures tap into a range of factors difficult to assess in the lab, including emotion. Cognitive interventions for at-risk individuals will likely be more beneficial if they address emotional processing alongside other aspects of cognition. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A simple landslide susceptibility analysis for hazard and risk assessment in developing countries

    NASA Astrophysics Data System (ADS)

    Guinau, M.; Vilaplana, J. M.

    2003-04-01

    In recent years, a number of techniques and methodologies have been developed for mitigating natural disasters. The complexity of these methodologies and the scarcity of material and data series justify the need for simple methodologies to obtain the necessary information for minimising the effects of catastrophic natural phenomena. The work with polygonal maps using a GIS allowed us to develop a simple methodology, which was applied in an area of 473 km2 in the Departamento de Chinandega (NW Nicaragua). This area was severely affected by a large number of landslides (mainly debris flows), triggered by the Hurricane Mitch rainfalls in October 1998. With the aid of aerial photography interpretation at 1:40,000 scale, enlarged to 1:20,000, and detailed field work, a landslide map at 1:10,000 scale was constructed. The failure zones of landslides were digitized in order to obtain a failure zone digital map. A terrain unit digital map, in which a series of physical-environmental terrain factors are represented, was also used. Dividing the studied area into two zones (A and B) with homogeneous physical and environmental characteristics allows us to develop the proposed methodology and to validate it. In zone A, the failure zone digital map is superimposed onto the terrain unit digital map to establish the relationship between the different terrain factors and the failure zones. The numerical expression of this relationship enables us to classify the terrain by its landslide susceptibility. In zone B, this numerical relationship was employed to obtain a landslide susceptibility map, obviating the need for a failure zone map. The validity of the methodology can be tested in this area by using the degree of superposition of the susceptibility map and the failure zone map. The implementation of the methodology in tropical countries with physical and environmental characteristics similar to those of the study area allows us to carry out a landslide susceptibility analysis in areas where landslide records do not exist. This analysis is essential to landslide hazard and risk assessment, which is necessary to determine the actions for mitigating landslide effects, e.g. land planning, emergency aid actions, etc.
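
    A schematic version of the zone-A calibration step described above: overlay the failure-zone raster on the terrain-unit raster, compute a failure density for each terrain-unit class, and reuse those densities as susceptibility weights in zone B. The class labels, grid sizes and densities are synthetic stand-ins, not values from the Chinandega study.

      import numpy as np

      rng = np.random.default_rng(4)
      # Zone A: terrain-unit class per cell and a boolean failure-zone raster (synthetic)
      units_a = rng.integers(0, 5, size=(200, 200))            # five terrain-unit classes
      failures_a = rng.random((200, 200)) < (units_a / 20.0)   # higher classes fail more often

      # Failure density per class = failed cells / total cells in that class
      density = {c: failures_a[units_a == c].mean() for c in range(5)}

      # Zone B: apply the zone-A densities as a per-cell susceptibility score
      units_b = rng.integers(0, 5, size=(200, 200))
      susceptibility_b = np.vectorize(density.get)(units_b)
      print({c: round(float(d), 3) for c, d in density.items()})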

  1. Test Facility Simulation Results for Aerospace Loss-of-Lubrication of Spur Gears

    NASA Technical Reports Server (NTRS)

    Handschuh, Robert F.; Gargano, Lucas J.

    2014-01-01

    Prior to receiving airworthiness certification, extensive testing is required during the development of rotary wing aircraft drive systems. Many of these tests are conducted to demonstrate the drive system's ability to operate at extreme conditions, beyond those called for in the normal to maximum power operating range. One of the most extreme tests is referred to as the loss-of-lubrication or run-dry test. During this test, the drive system is expected to last at least 30 min without failure while the primary lubrication system is disabled for predetermined, scripted flight conditions. Failure of this test can lead to a partial redesign of the drive system or the addition of an emergency lubrication system. Either of these solutions can greatly increase the aircraft drive system cost and weight and extend the schedule for obtaining airworthiness certification. Recent work at NASA Glenn Research Center focused on performing tests, in a relevant aerospace environment, to simulate the behavior of spur gears under loss-of-lubrication conditions. Tests were conducted using a test facility that was used in the past for spur gear contact fatigue testing. A loss-of-lubrication test is initiated by shutting off the single into-mesh lubricating jet. The test proceeds until the gears fail and can no longer deliver the applied torque. The observed failures are typically gear teeth that, due to the high tooth temperatures, have plastically deformed and no longer mesh. The effects of several different variables on gear tooth condition during loss-of-lubrication, such as gear pitch, materials, shrouding, lubrication condition, and emergency supplied mist lubrication, were examined in earlier testing at NASA. Recent testing has focused on newer aerospace gear steels and embedding thermocouples in the shrouding to measure the temperature of the air-oil mixture flung off the gear teeth. Along with the instrumented shrouding, an instrumented spur gear was also tested. The instrumented spur gear had five thermocouples installed at different locations on the gear tooth and web. The data from these two types of measurements provided important information as to the thermal environment during the loss-of-lubrication event. These data are necessary to validate ongoing modeling efforts.

  2. Automatic Integration Testbeds validation on Open Science Grid

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Thapa, S.; Gardner, R.; Potekhin, M.

    2011-12-01

    A recurring challenge in deploying high quality production middleware is the extent to which realistic testing occurs before release of the software into the production environment. We describe here an automated system for validating releases of the Open Science Grid software stack that leverages the (pilot-based) PanDA job management system developed and used by the ATLAS experiment. The system was motivated by a desire to subject the OSG Integration Testbed to more realistic validation tests, in particular tests that resemble, to every extent possible, the actual job workflows used by the experiments, exercising job scheduling at the compute element (CE), use of the worker node execution environment, transfer of data to/from the local storage element (SE), etc. The context is that candidate releases of OSG compute and storage elements can be tested by injecting large numbers of synthetic jobs varying in complexity and coverage of services tested. The native capabilities of the PanDA system can thus be used to define jobs, monitor their execution, and archive the resulting run statistics, including success and failure modes. A repository of generic workflows and job types to measure various metrics of interest has been created. A command-line toolset has been developed so that testbed managers can quickly submit "VO-like" jobs into the system when newly deployed services are ready for testing. A system for automatic submission has been crafted to send jobs to integration testbed sites, collecting the results in a central service and generating regular reports on performance and reliability.

  3. Experimental validation of Critical Temperature-Pressure theory of scuffing

    NASA Astrophysics Data System (ADS)

    Lee, Si C.; Chen, Huanliang

    1995-07-01

    A series of experiments was conducted to validate a newly developed theory of scuffing. The Critical Temperature-Pressure (CTP) theory is based on the physisorption behavior of lubricants and is capable of predicting the onset of scuffing failures over a wide range of operating conditions, including contacts operating in the boundary lubrication and in the partial elastohydrodynamic lubrication (EHL) regimes. According to the CTP theory, failures occur when the contact temperature exceeds a certain critical value which is a function of the lubricant pressure generated by the hydrodynamic action of the EHL contact. A special device capable of simulating the ambient conditions of partial EHL conjunctions (contact temperature, pressure, and lubricant pressure) was constructed. A ball-on-flat type wear tester was placed inside a pressure vessel, completely immersed in a highly pressurized bath of mineral oil. The temperature of the flat specimen was gradually increased while the ball was slowly traversed. At a certain critical temperature, the friction force abruptly jumped, indicating the onset of lubrication breakdown. This experiment was repeated for several levels of hydrostatic pressure and the corresponding critical temperatures were obtained. The test results showed excellent correlation with the newly developed CTP theory.

  4. RIA simulation tests using driver tube for ATF cladding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinbiz, Mahmut N.; Brown, N. R.; Lowden, R. R.

    Pellet-cladding mechanical interaction (PCMI) is a potential failure mechanism for accident-tolerant fuel (ATF) cladding candidates during a reactivity-initiated accident (RIA). This report summarizes Fiscal Year (FY) 2017 research activities that were undertaken to evaluate the PCMI-like hoop-strain-driven mechanical response of ATF cladding candidates. To achieve various RIA-like conditions, a modified-burst test (MBT) device was developed to produce different mechanical pulses. The calibration of the MBT instrument was accomplished by performing mechanical tests on unirradiated Generation-I iron-chromium-aluminum (FeCrAl) alloy samples. Shakedown tests were also conducted in both FY 2016 and FY 2017 using unirradiated hydrided ZIRLO™ tube samples. This milestone report focuses on testing of ATF materials, but the benchmark tests with hydrided ZIRLO™ tube samples are documented in a recent journal article. For the calibration and benchmark tests, the hoop strain was monitored using strain gauges attached to the sample surface in the hoop direction. A novel digital image correlation (DIC) system composed of a single high-speed camera and an array of six mirrors was developed for the MBT instrument to better resolve the failure behavior of samples and to provide useful data for validation of high-fidelity modeling and simulation tools. The DIC system enables a 360° view of a sample's outer surface. This feature was added to the instrument to determine the precise failure location on a sample's surface for strain predictions. The DIC system was tested on several silicon carbide fiber/silicon carbide matrix (SiC/SiC) composite tube samples at various pressurization rates of the driver tube (which correspond to the strain rates for the samples). The hoop strains for various loading conditions were determined for the SiC/SiC composite tube samples. Future work is planned to enhance understanding of the failure behavior of the ATF cladding candidates, age-hardened FeCrAl alloys and SiC/SiC composites, in detail during RIA conditions, informed by the computational studies performed under the US Department of Energy Office of Nuclear Energy Advanced Fuels Campaign. The testing instrument and the new DIC system will be further developed to reach different stress-state conditions and to perform tests at elevated temperatures.

  5. Nickel-Hydrogen Cell Testing Experience, NASA/Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Rao, Gopalakrishna M.

    1999-01-01

    The objectives of the project were to test the Nickel-Hydrogen Cell to: (1) verify the aerospace cell flight worthiness, (2) elucidate the aerospace cell thermal behavior, (3) develop the aerospace battery assembly design(s) and in-orbit battery management plan(s), and (4) understand the aerospace cell failure mechanism. The tests included LEO and GEO life cycle tests, calorimetric analysis, destructive physical analysis, and special tests. Charts show the mission profile cycling data and the stress cycling data. The test data comply with the mission requirements, validating the flight worthiness of the batteries. The nominal stress and mission profile cycling performance tests show a charge voltage as high as 1.60 V and a recharge ratio greater than 1.05. It is apparent that the electrochemical signatures alone do not provide conclusive proof of nickel precharge. The researchers recommend gas and positive plate analyses for further confirmation.

  6. Fracture Probability of MEMS Optical Devices for Space Flight Applications

    NASA Technical Reports Server (NTRS)

    Fettig, Rainer K.; Kuhn, Jonathan L.; Moseley, S. Harvey; Kutyrev, Alexander S.; Orloff, Jon

    1999-01-01

    A bending fracture test specimen design is presented for thin elements used in optical devices for space flight applications. The specimen design is insensitive to load position, avoids end effect complications, and can be used to measure strength of membranes less than 2 microns thick. The theoretical equations predicting stress at failure are presented, and a detailed finite element model is developed to validate the equations for this application. An experimental procedure using a focused ion beam machine is outlined, and results from preliminary tests of 1.9 microns thick single crystal silicon are presented. These tests are placed in the context of a methodology for the design and evaluation of mission critical devices comprised of large arrays of cells.

  7. Forward Skirt Structural Testing on the Space Launch System (SLS) Program

    NASA Technical Reports Server (NTRS)

    Lohrer, J. D.; Wright, R. D.

    2016-01-01

    Structural testing was performed to evaluate heritage forward skirts from the Space Shuttle program for use on the Space Launch System (SLS) program. One forward skirt is located in each solid rocket booster. Heritage forward skirts are aluminum 2219 welded structures. Loads are applied at the forward skirt thrust post and ball assembly. Testing was needed because SLS ascent loads are roughly 40% higher than Space Shuttle loads. Testing objectives were to determine margins of safety, demonstrate reliability, and validate analytical models. Two forward skirts were structurally tested using the test configuration. The test stand applied loads to the thrust post. Four hydraulic actuators were used to apply axial load and two hydraulic actuators were used to apply radial and tangential loads. The first test was referred to as FSTA-1 (Forward Skirt Structural Test Article) and was performed in April/May 2014. The purpose of FSTA-1 was to verify the ultimate capability of the forward skirt subjected to ascent ultimate loads. Testing consisted of two liftoff load cases taken to 100% limit load followed by an ascent load case taken to 110% limit load. The forward skirt was unloaded to no load after each test case. Lastly, the forward skirt was tested to 140% limit and then to failure using the ascent loads. The second test was referred to as FSTA-2 and performed in July/August of 2014. The purpose of FSTA-2 was to verify the ultimate capability of the forward skirt subjected to liftoff ultimate loads. Testing consisted of six liftoff load cases taken to 100% limit load followed by the six liftoff cases taken to 140% limit load. Two ascent load cases were then tested to 100% limit load. The forward skirt was unloaded to no load after each test case. Lastly, the forward skirt was tested to 140% limit and then to failure using the ascent loads. The forward skirts on FSTA-1 and FSTA-2 successfully carried all applied liftoff and ascent load cases. Both FSTA-1 and FSTA-2 were tested to failure by increasing the ascent loads. Failure occurred in the forward skirt thrust post radius. The forward skirts on FSTA-1 and FSTA-2 had nearly identical failure modes. FSTA-1 failed at 1.72 times limit load and FSTA-2 failed at 1.62 times limit load. This difference is primarily attributed to variation in material properties in the thrust post region. Test data were obtained from strain gages, deflection gages, ARAMIS digital strain measurement, acoustic emissions, and high-speed video. Strain gage data and ARAMIS strain were compared to finite element (FE) analysis predictions. Both the forward skirt and tooling were modeled. This allows the analysis to simulate the loading as close as possible to actual test configuration. FSTA-1 and FSTA-2 were instrumented with over 200 strain gages to ensure all possible failure modes could be captured. However, it turned out that three gages provided critical strain data. One was located in the post bore and two on the post radius. More gages were not specified due to space limitations and the desire to not interfere with the use of the ARAMIS system on the post radius. Measured strains were compared to analysis results for the load cycle to failure. Note that FSTA-1 gages were lost before failure was reached. FSTA-2 gages made it to the failure load but one of the radius gages was lost before testing began. This gage was not replaced because of the time and cost associated with disassembly of the test structure. Correlation to analysis was excellent for FSTA-1. 
FSTA-2 was not quite as good because there was more residual strain from previous load cycles: FSTA-2 was loaded and unloaded with 12 liftoff cases and two ascent cases before the skirt was taken to failure, whereas FSTA-1 had only two liftoff cases and one ascent case before its failure run. The ARAMIS system was used to determine strain at the post radius by processing digital images of a speckled paint pattern recorded by digital cameras. In the ARAMIS strain results for FSTA-2 just prior to failure, a high-strain location developed near the left side; this high strain compares well with the analysis prediction for both FSTA-1 and FSTA-2. The strain at this location was also plotted versus limit load, and both FSTA-1 and FSTA-2 showed excellent correlation between ARAMIS and analysis strains. Acoustic emission (AE) sensors were used to monitor for damage formation that may occur during testing (e.g., crack formation and growth or propagation). AE was very important because, after disassembly of FSTA-1, a crack was observed in the ball fitting radius. The ball fitting did not crack on FSTA-2. AE data were used to reconstruct when the crack occurred. In the AE energy versus time plot for FSTA-1, the energy increased considerably at 850 seconds (152% limit load), indicating a crack could have formed at this point. The only visual evidence found that could have corresponded to this was the crack that initiated in the ball fitting. The cracks in the forward skirt aluminum structure would likely have produced lower AE energy due to the lower modulus, and all cracks found after failure correlated to times after the initial crack in the post radius. This was verified by high-speed cameras used to record the failure.

  8. Design and Testing of CPAS Main Deployment Bag Energy Modulator

    NASA Technical Reports Server (NTRS)

    Mollmann, Catherine

    2017-01-01

    During the developmental testing program for CPAS (Capsule Parachute Assembly System), the parachute system for the NASA Orion Crew Module, simulation revealed that high loads may be experienced by the pilot risers during the most severe deployment conditions. As the role of the pilot parachutes is to deploy the main parachutes, these high loads introduced the possibility of main deployment failure. In order to mitigate these high loads, a set of energy modulators was incorporated between the pilot riser and the main deployment bag. An extensive developmental program was implemented to ensure the adequacy of these energy modulators. After initial design comparisons, the energy modulator design was validated through slow-speed joint tests as well as through high-speed bungee tests. This paper documents the design, development, and results of multiple tests completed on the final design.

  9. Longitudinally Jointed Edge-Wise Compression Honeycomb Composite Sandwich Coupon Testing and FE Analysis: Three Methods of Strain Measurement and Comparison

    NASA Technical Reports Server (NTRS)

    Farrokh, Babak; Rahim, Nur Aida Abul; Segal, Ken; Fan, Terry; Jones, Justin; Hodges, Ken; Mashni, Noah; Garg, Naman; Sang, Alex

    2013-01-01

    Three distinct strain measurement methods (i.e., foil resistance strain gages, fiber optic strain sensors, and a three-dimensional digital image photogrammetry system that gives full-field strain and displacement measurements) were implemented to measure strains on the back and front surfaces of a longitudinally jointed curved test article subjected to edge-wise compression testing, at NASA Goddard Space Flight Center, according to ASTM C364. A pre-test finite element analysis (FEA) was conducted to assess the ultimate failure load and predict the strain distribution pattern throughout the test coupon. The predicted strain pattern contours were then utilized as guidelines for installing the strain measurement instrumentation. The foil resistance strain gages and fiber optic strain sensors were bonded to the specimen at locations with nearly the same analytically predicted strain values, and as close as possible to each other, so that comparisons between the strains measured by the strain gages, the fiber optic sensors, and the three-dimensional digital image photogrammetric system are relevant. The test article was loaded to failure at 167 kN, at a compressive strain of 10,000 microstrain. As a part of this study, the validity of the strains measured by the fiber optic sensors is examined against the foil resistance strain gage and three-dimensional digital image photogrammetric data, and comprehensive comparisons are made with FEA predictions.

  10. Accelerated Aging in Electrolytic Capacitors for Prognostics

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank

    2012-01-01

    The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems and the use of accelerated aging techniques for prognostics algorithm development. Results of thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian-based implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small sample size constraint. The RUL estimation results are consistent throughout the validation tests when comparing relative accuracy and prediction error. It has been observed that the inability of the model to represent the change in degradation behavior observed at the end of the test data is consistent throughout the validation tests, indicating the need for a more detailed degradation model or the use of an algorithm that could estimate model parameters on-line. Based on the observed degradation process under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported. The current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
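
    A minimal sketch, under assumed noise levels and a linear degradation model, of the discrete Kalman filter health-state estimation and the threshold-based RUL extrapolation described above. The failure threshold (percentage capacitance loss), the measurement sequence and all noise covariances are synthetic assumptions.

      import numpy as np

      # State x = [capacitance loss (%), loss rate (% per hour)], linear degradation model
      dt = 1.0
      F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
      H = np.array([[1.0, 0.0]])             # only the loss itself is measured
      Q = np.diag([1e-4, 1e-6])              # assumed process noise
      R = np.array([[0.05]])                 # assumed measurement noise

      x = np.array([0.0, 0.05])              # initial guess: no loss, 0.05 %/h
      P = np.eye(2)

      rng = np.random.default_rng(5)
      true_loss = 0.08 * np.arange(200)      # synthetic truth: 0.08 %/h degradation
      measurements = true_loss + rng.normal(0.0, 0.2, 200)

      for z in measurements:
          # Predict
          x = F @ x
          P = F @ P @ F.T + Q
          # Update
          y = z - H @ x                      # innovation
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
          x = x + K @ y
          P = (np.eye(2) - K @ H) @ P

      threshold = 20.0                       # assumed end-of-life: 20 % capacitance loss
      rul_hours = max(0.0, (threshold - x[0]) / x[1])
      print(f"estimated loss {x[0]:.2f} %, rate {x[1]:.3f} %/h, RUL ~ {rul_hours:.0f} h")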

  11. Publication bias and the failure of replication in experimental psychology.

    PubMed

    Francis, Gregory

    2012-12-01

    Replication of empirical findings plays a fundamental role in science. Among experimental psychologists, successful replication enhances belief in a finding, while a failure to replicate is often interpreted to mean that one of the experiments is flawed. This view is wrong. Because experimental psychology uses statistics, empirical findings should appear with predictable probabilities. In a misguided effort to demonstrate successful replication of empirical findings and avoid failures to replicate, experimental psychologists sometimes report too many positive results. Rather than strengthen confidence in an effect, too much successful replication actually indicates publication bias, which invalidates entire sets of experimental findings. Researchers cannot judge the validity of a set of biased experiments because the experiment set may consist entirely of type I errors. This article shows how an investigation of the effect sizes from reported experiments can test for publication bias by looking for too much successful replication. Simulated experiments demonstrate that the publication bias test is able to discriminate biased experiment sets from unbiased experiment sets, but it is conservative about reporting bias. The test is then applied to several studies of prominent phenomena that highlight how publication bias contaminates some findings in experimental psychology. Additional simulated experiments demonstrate that using Bayesian methods of data analysis can reduce (and in some cases, eliminate) the occurrence of publication bias. Such methods should be part of a systematic process to remove publication bias from experimental psychology and reinstate the important role of replication as a final arbiter of scientific findings.
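
    The core of the test described above can be sketched in a few lines: estimate the power of each reported experiment from its effect size and sample size, multiply the powers to obtain the probability that every experiment in the set would reach significance, and flag the set when that probability is implausibly low. The normal-approximation power formula, the 0.1 criterion, and the effect sizes and sample sizes below are simplifying assumptions for illustration.

      import numpy as np
      from scipy.stats import norm

      def power_two_sample(d, n_per_group, alpha=0.05):
          # Normal approximation to the power of a two-sided, two-sample t test
          z_crit = norm.ppf(1.0 - alpha / 2.0)
          noncentrality = d * np.sqrt(n_per_group / 2.0)
          return norm.sf(z_crit - noncentrality) + norm.cdf(-z_crit - noncentrality)

      # Synthetic "experiment set" in which every study was reported as significant
      effect_sizes = np.array([0.45, 0.50, 0.40, 0.55, 0.48])
      n_per_group = np.array([20, 25, 22, 18, 24])

      powers = power_two_sample(effect_sizes, n_per_group)
      p_all_significant = np.prod(powers)
      print("estimated powers:", powers.round(2))
      verdict = "possible publication bias" if p_all_significant < 0.1 else "no flag"
      print("P(all significant):", round(float(p_all_significant), 3), "->", verdict)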

  12. Full-Scale Crash Test and Finite Element Simulation of a Composite Prototype Helicopter

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.; Fasanella, Edwin L.; Boitnott, Richard L.; Lyle, Karen H.

    2003-01-01

    A full-scale crash test of a prototype composite helicopter was performed at the Impact Dynamics Research Facility at NASA Langley Research Center in 1999 to obtain data for validation of a finite element crash simulation. The helicopter was the flight test article built by Sikorsky Aircraft during the Advanced Composite Airframe Program (ACAP). The composite helicopter was designed to meet the stringent Military Standard (MIL-STD-1290A) crashworthiness criteria and was outfitted with two crew and two troop seats and four anthropomorphic dummies. The test was performed at 38-ft/s vertical and 32.5-ft/s horizontal velocity onto a rigid surface. An existing modal-vibration model of the Sikorsky ACAP helicopter was converted into a model suitable for crash simulation. A two-stage modeling approach was implemented and an external user-defined subroutine was developed to represent the complex landing gear response. The crash simulation was executed with a nonlinear, explicit transient dynamic finite element code. Predictions of structural deformation and failure, the sequence of events, and the dynamic response of the airframe structure were generated and the numerical results were correlated with the experimental data to validate the simulation. The test results, the model development, and the test-analysis correlation are described.

  13. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
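
    One plausible realization of the distribution-to-experiment comparison mentioned above is a two-sample test between the simulated ensemble of absorbed-energy values and repeated experimental measurements. The Kolmogorov-Smirnov test below is a stand-in for the unspecified statistical methods, and all data are synthetic.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(6)
      # Synthetic absorbed impact energy (J): simulation ensemble vs. experiments
      simulated = rng.normal(loc=12.0, scale=1.5, size=500)    # parameter-uncertainty ensemble
      experimental = rng.normal(loc=12.4, scale=1.2, size=12)  # repeated physical tests

      stat, p_value = ks_2samp(simulated, experimental)
      print(f"KS statistic = {stat:.2f}, p = {p_value:.2f}")
      # A very small p-value would suggest the ensemble does not cover the experiments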

  14. Failures in Hybrid Microcircuits During Environmental Testing. History Cases

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander

    2008-01-01

    The purpose of this viewgraph presentation is to discuss failures in hermetic hybrids observed at the GSFC PA Lab during environmental stress testing. The cases discussed are: Case I, substrate metallization failures during thermal cycling (TC); Case II, flex lid-induced failure; Case III, hermeticity failures during TC; Case IV, die metallization cracking during TC, including the question of how many test cycles and parts are necessary; Case V, wire bond failures after life test; and Case VI, failures caused by Au/In IMC growth.

  15. Practical Application of a Subscale Transport Aircraft for Flight Research in Control Upset and Failure Conditions

    NASA Technical Reports Server (NTRS)

    Cunningham, Kevin; Foster, John V.; Morelli, Eugene A.; Murch, Austin M.

    2008-01-01

    Over the past decade, the goal of reducing the fatal accident rate of large transport aircraft has resulted in research aimed at the problem of aircraft loss-of-control. Starting in 1999, the NASA Aviation Safety Program initiated research that included vehicle dynamics modeling, system health monitoring, and reconfigurable control systems focused on flight regimes beyond the normal flight envelope. In recent years, there has been an increased emphasis on adaptive control technologies for recovery from control upsets or failures including damage scenarios. As part of these efforts, NASA has developed the Airborne Subscale Transport Aircraft Research (AirSTAR) flight facility to allow flight research and validation, and system testing for flight regimes that are considered too risky for full-scale manned transport airplane testing. The AirSTAR facility utilizes dynamically-scaled vehicles that enable the application of subscale flight test results to full scale vehicles. This paper describes the modeling and simulation approach used for AirSTAR vehicles that supports the goals of efficient, low-cost and safe flight research in abnormal flight conditions. Modeling of aerodynamics, controls, and propulsion will be discussed as well as the application of simulation to flight control system development, test planning, risk mitigation, and flight research.

  16. Implementation of an Adaptive Controller System from Concept to Flight Test

    NASA Technical Reports Server (NTRS)

    Larson, Richard R.; Burken, John J.; Butler, Bradley S.

    2009-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) is conducting ongoing flight research using adaptive controller algorithms. A highly modified McDonnell-Douglas NF-15B airplane called the F-15 Intelligent Flight Control System (IFCS) was used for these algorithms. This airplane has been modified by the addition of canards and by changing the flight control systems to interface a single-string research controller processor for neural network algorithms. Research goals included demonstrating revolutionary control approaches that can efficiently optimize aircraft performance in both normal and failure conditions and advancing neural-network-based flight control technology for new aerospace system designs. Before the NF-15B IFCS airplane was certified for flight test, however, certain processes needed to be completed. This paper presents an overview of these processes, including a description of the initial adaptive controller concepts followed by a discussion of modeling formulation and performance testing. Upon design finalization, the next steps were: integration with the system interfaces, verification of the software, validation of the hardware to the requirements, design of failure detection, development of safety limiters to minimize the effect of erroneous neural network commands, and creation of flight test control room displays to maximize human situational awareness.

  17. NASA Double Asteroid Redirection Test (Dart) Trajectory Validation and Robustness

    NASA Technical Reports Server (NTRS)

    Sarli, Bruno V.; Ozimek, Martin T.; Atchison, Justin A.; Englander, Jacob A.; Barbee, Brent W.

    2017-01-01

    The Double Asteroid Redirection Test (DART) mission will be the first to test the concept of a kinetic impactor. Several studies have been made on asteroid redirection and impact mitigation; however, to date no mission has tested the proposed concepts. An impact study on a representative body allows the measurement of the effects on the target's orbit and physical structure. With this goal, DART's objective is to verify the effectiveness of the kinetic impact concept for planetary defense. The spacecraft uses solar electric propulsion to escape Earth, fly by (138971) 2001 CB21 for an impact rehearsal, and impact the secondary body of the (65803) Didymos system. This work focuses on the interplanetary trajectory design part of the mission with the validation of the baseline trajectory, performance comparison to other mission objectives, and assessment of the baseline robustness to missed thrust events. Results show good performance of the selected trajectory for different mission objectives: latest possible escape date, maximum kinetic energy on impact, shortest possible time of flight, and use of an Earth swing-by. The baseline trajectory was shown to be robust to missed thrust events, with a 1% fuel margin being enough to recover the mission from failures lasting more than 14 days.

  18. Development and Implementation of a Hardware In-the-Loop Test Bed for Unmanned Aerial Vehicle Control Algorithms

    NASA Technical Reports Server (NTRS)

    Nyangweso, Emmanuel; Bole, Brian

    2014-01-01

    Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
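    As a hedged sketch of the kind of end-of-discharge prediction such a test bed supports (not the EDGE 540T algorithm itself), the code below extrapolates a measured voltage trend under constant load to a cutoff voltage; the telemetry values and cutoff are assumptions made for illustration.

    ```python
    import numpy as np

    def predict_end_of_discharge(time_s, voltage_v, cutoff_v=18.0):
        """Fit a linear trend to recent voltage samples and extrapolate to the
        cutoff voltage; returns the predicted end-of-discharge time in seconds."""
        slope, intercept = np.polyfit(time_s, voltage_v, 1)
        if slope >= 0:
            return float("inf")  # no downward trend yet
        return (cutoff_v - intercept) / slope

    # Placeholder telemetry: time (s) and pack voltage (V) under load.
    t = np.array([0, 60, 120, 180, 240], dtype=float)
    v = np.array([24.8, 24.1, 23.5, 22.8, 22.2])
    print(f"Predicted end of discharge at ~{predict_end_of_discharge(t, v):.0f} s")
    ```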

  19. Analysis of flexural strength and contact pressure after simulated chairside adjustment of pressed lithium disilicate glass-ceramic.

    PubMed

    Ramadhan, Ali; Thompson, Geoffrey A; Maroulakos, Georgios; Berzins, David

    2018-04-30

    Research evaluating load-to-failure of pressed lithium disilicate glass-ceramic (LDGC) with a clinically validated test after adjustment and repair procedures is scarce. The purpose of this in vitro study was to investigate the effect of the simulated chairside adjustment of the intaglio surface of monolithic pressed LDGC and procedures intended to repair damage. A total of 423 IPS e.max Press (Ivoclar Vivadent AG) disks (15 mm diameter, 1 mm height) were used in the study. The material was tested by using an equibiaxial loading arrangement (n≥30/group) and a contact pressure test (n≥20/group). Specimens were assigned to 1 of 14 groups. One-half was assigned to the equibiaxial load test and the other half underwent contact pressure testing. Testing was performed in 2 parts, before glazing and after glazing. Before-glazing specimens were devested and entered in the test protocol, while after-glazing specimens were devested and glazed before entering the test protocol. Equibiaxial flexure test specimens were placed on a ring-on-ring apparatus and loaded until failure. Contact pressure specimens were cemented to epoxy resin blocks with a resin cement and loaded with a 50-mm diameter hemisphere until failure. Tests were performed on a universal testing machine with a crosshead speed of 0.5 mm/min. Weibull statistics and likelihood ratio contour plots determined intergroup differences (95% confidence bounds). Before glazing, the equibiaxial flexural strength test and the Weibull and likelihood ratio contour plots demonstrated a significantly higher failure strength for 1EC (188 MPa) than that of the damaged and/or repaired groups. Glazing following diamond-adjustment (1EGG) was the most beneficial post-damage procedure (176 MPa). Regarding the contact pressure test, the Weibull and likelihood ratio contour plots revealed no significant difference between the 1PC (98 MPa) and 1PGG (98 MPa) groups. Diamond-adjustment, without glazing (1EG and 1PG), resulted in the next-to-lowest equibiaxial flexure strength and the lowest contact pressure. After glazing, the strength of all the groups, when subjected to glazing following devesting, increased in comparison with corresponding groups in the before-glazing part of the study. A glazing treatment improved the mechanical properties of diamond-adjusted IPS e.max Press disks when evaluated by equibiaxial flexure and contact pressure tests. Copyright © 2018 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
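    Weibull statistics of the kind reported above can be reproduced with standard tools; the sketch below fits a two-parameter Weibull distribution to a set of hypothetical flexural strength values (the data are placeholders, not the study's measurements).

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical equibiaxial flexural strengths (MPa) for one specimen group.
    strengths = np.array([162, 170, 175, 181, 186, 190, 194, 199, 205, 212], dtype=float)

    # Two-parameter fit: location fixed at zero, returning the shape parameter
    # (Weibull modulus) and the scale (characteristic strength).
    shape, loc, scale = weibull_min.fit(strengths, floc=0)
    print(f"Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.0f} MPa")
    ```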

  20. Constitutive law for thermally-activated plasticity of recrystallized tungsten

    NASA Astrophysics Data System (ADS)

    Zinovev, Aleksandr; Terentyev, Dmitry; Dubinko, Andrii; Delannay, Laurent

    2017-12-01

    A physically-based constitutive law relevant for the ITER-specification tungsten grade in the as-recrystallized state is proposed. The material exhibits stages III and IV of plastic deformation, in which the hardening rate does not drop to zero as the applied stress increases. Unlike the classical Kocks-Mecking model, which is valid at stage III, the strain hardening decreases asymptotically, resembling a hyperbolic function. The material parameters are fitted to tensile test data by requiring that the strain and stress at the onset of diffuse necking (uniform elongation and ultimate tensile strength, respectively) as well as the yield stress be reproduced. The model is then validated in the temperature range 300-600 °C with the help of finite element analysis of the tensile tests, which confirms that the experimental engineering curves are reproduced up to the onset of diffuse necking, beyond which the development of ductile damage accelerates material failure. This temperature range represents the low-temperature application window for tungsten as a divertor material in the fusion reactor ITER.
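    For orientation, a minimal sketch of the contrast drawn above, with symbols chosen here purely for illustration: in a Kocks-Mecking (Voce-type) description the hardening rate vanishes at a saturation stress, whereas the behavior reported for recrystallized tungsten keeps a small, asymptotically decaying but nonzero hardening rate into stage IV.

    ```latex
    % Kocks--Mecking / Voce stage-III hardening saturates at a stress \sigma_s:
    \frac{d\sigma}{d\varepsilon^{p}} = \theta_0\!\left(1 - \frac{\sigma}{\sigma_s}\right)
    % Illustrative hyperbolic form (assumed here) that decays asymptotically toward
    % a small stage-IV rate \theta_{IV} instead of vanishing:
    \frac{d\sigma}{d\varepsilon^{p}} = \theta_{IV} + \frac{\theta_0}{1 + \varepsilon^{p}/\varepsilon_0}
    ```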

  1. Gender and age related predictive value of walk test in heart failure: do anthropometrics matter in clinical practice?

    PubMed

    Frankenstein, L; Remppis, A; Graham, J; Schellberg, D; Sigg, C; Nelles, M; Katus, H A; Zugck, C

    2008-07-21

    The six-minute walk test (6 WT) is a valid and reliable predictor of morbidity and mortality in chronic heart failure (CHF) patients, frequently used as an endpoint or target in clinical trials. As opposed to spiroergometry, improvement of its prognostic accuracy by correction for height, weight, age and gender has not yet been attempted comprehensively despite known influences of these parameters. We recorded the 6 WT of 1035 CHF patients attending our clinic from 1995 to 2005. The 1-year prognostic value of the 6 WT was calculated, alone and after correction for height, weight, BMI and/or age. Analysis was performed on the entire cohort, on males and females separately, and stratified according to BMI (<25, 25-30 and >30 kg/m(2)). The 6 WT weakly correlated with age (r=-0.32; p<0.0001), height (r=0.2; p<0.0001), and weight (r=0.11; p<0.001), but not with BMI (r=0.01; p=ns). The 6 WT was a strong predictor of 1-year mortality in both genders, both as a single and as an age-corrected parameter. Parameters derived from correction of the 6 WT for height, weight or BMI did not improve the prognostic value in univariate analysis for either gender. Comparison of the receiver operating characteristic curves showed no significant gain in prognostic accuracy from any derived variable, either for males or females. The six-minute walk test is a valid tool for risk prediction in both male and female CHF patients. In both genders, correcting 6 WT distance for height, weight or BMI alone, or adjusting for age, does not increase the prognostic power of this tool.

  2. Automated extraction of ejection fraction for quality measurement using regular expressions in Unstructured Information Management Architecture (UIMA) for heart failure

    PubMed Central

    DuVall, Scott L; South, Brett R; Bray, Bruce E; Bolton, Daniel; Heavirland, Julia; Pickard, Steve; Heidenreich, Paul; Shen, Shuying; Weir, Charlene; Samore, Matthew; Goldstein, Mary K

    2012-01-01

    Objectives: Left ventricular ejection fraction (EF) is a key component of heart failure quality measures used within the Department of Veterans Affairs (VA). Our goals were to build a natural language processing system to extract the EF from free-text echocardiogram reports to automate measurement reporting and to validate the accuracy of the system using a comparison reference standard developed through human review. This project was a Translational Use Case Project within the VA Consortium for Healthcare Informatics. Materials and methods: We created a set of regular expressions and rules to capture the EF using a random sample of 765 echocardiograms from seven VA medical centers. The documents were randomly assigned to two sets: a set of 275 used for training and a second set of 490 used for testing and validation. To establish the reference standard, two independent reviewers annotated all documents in both sets; a third reviewer adjudicated disagreements. Results: System test results for document-level classification of EF of <40% had a sensitivity (recall) of 98.41%, a specificity of 100%, a positive predictive value (precision) of 100%, and an F measure of 99.2%. System test results at the concept level had a sensitivity of 88.9% (95% CI 87.7% to 90.0%), a positive predictive value of 95% (95% CI 94.2% to 95.9%), and an F measure of 91.9% (95% CI 91.2% to 92.7%). Discussion: An EF value of <40% can be accurately identified in VA echocardiogram reports. Conclusions: An automated information extraction system can be used to accurately extract EF for quality measurement. PMID:22437073
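    A hedged, simplified illustration of the regular-expression approach (not the VA system's actual patterns or rules): the sketch below captures a numeric ejection fraction from free-text echocardiogram phrasing and flags values below 40%. The pattern and sample report text are assumptions for illustration only.

    ```python
    import re

    # Simplified pattern: matches phrases like "EF 35%", "ejection fraction is 55 %",
    # or "LVEF of 40-45%". Real systems use many more patterns plus negation rules.
    EF_PATTERN = re.compile(
        r"(?:LV\s*)?(?:EF|ejection fraction)\s*(?:is|of|=|:)?\s*(\d{1,2})(?:\s*-\s*\d{1,2})?\s*%",
        re.IGNORECASE,
    )

    def extract_ef(report_text: str):
        """Return the first ejection fraction found in the report, or None."""
        match = EF_PATTERN.search(report_text)
        return int(match.group(1)) if match else None

    report = "Normal LV size. Ejection fraction is 35%. Mild mitral regurgitation."
    ef = extract_ef(report)
    print(ef, "-> qualifies for EF<40% measure" if ef is not None and ef < 40 else "")
    ```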

  3. Quality assurance and quality control for autonomously collected geoscience data

    NASA Astrophysics Data System (ADS)

    Versteeg, R. J.; Richardson, A.; Labrecque, D.

    2006-12-01

    The growing interest in processes, coupled with the reduction in cost and complexity of sensors which allow for continuous data collection and transmission, is giving rise to vast amounts of semi-autonomously collected data. Such data is typically collected from a range of physical and chemical sensors and transmitted - either at the time of collection, or periodically as a collection of measurements - to a central server. Such setups can collect vast amounts of data. In cases where power is not an issue, one datapoint can be collected every minute, resulting in tens of thousands of data points per month per sensor. Especially in cases in which multiple sensors are deployed, it is infeasible to examine each individual datapoint for each individual sensor, and users typically will look at aggregates of such data on a periodic (once a week to once every few months) basis. Such aggregates (and the time lag between data collection and data evaluation) will impact the ability to rapidly identify and resolve data issues. Thus, there is a need to integrate data QA/QC rules and procedures in the data collection process. These should be implemented such that data is analyzed for compliance the moment it arrives at the server, and that any issues with this data result in notification of cognizant personnel. Typical issues encountered in the field range from complete system failure (no data arrives at all), to complete sensor failure (data is collected but is meaningless), to partial sensor failure (the sensor gives erratic readings or starts to exhibit a bias), to partial power loss (the system collects and transmits data only intermittently). We have implemented a suite of such rules and tests as part of the INL-developed performance monitoring system. These rules are invoked as part of a data QA/QC workflow and result in quality indicators for each datapoint as well as user alerts in case of issues. Tests applied to the data include tests on individual datapoints, tests on suites of datapoints, and tests applied over the whole dataset. Examples of tests include: did the data arrive on time, is the received data in a valid format, are all measurements present, is the data within a valid range, was the data collected at appropriate time intervals, are the statistics of the data changing over time, and was the data collected within an appropriate instrument calibration window? This approach, which is executed automatically on all data, provides data end users with confidence and auditability regarding the quality and usability of autonomously collected data.
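    A minimal sketch of such automated checks, assuming a simple record layout and hypothetical thresholds (not the INL system's actual rules): each incoming datapoint receives a list of quality flags, and any non-"ok" flag could trigger a notification to cognizant personnel.

    ```python
    from datetime import datetime, timedelta

    EXPECTED_INTERVAL = timedelta(minutes=1)      # assumed collection cadence
    VALID_RANGE = (-10.0, 60.0)                   # assumed valid sensor range

    def qc_flags(value, timestamp, previous_timestamp):
        """Return quality-control flags for a single incoming datapoint."""
        flags = []
        if not isinstance(value, (int, float)):
            flags.append("invalid_format")
        elif not (VALID_RANGE[0] <= value <= VALID_RANGE[1]):
            flags.append("out_of_range")
        if previous_timestamp is not None:
            gap = timestamp - previous_timestamp
            if gap > 2 * EXPECTED_INTERVAL:
                flags.append("late_or_missing_data")
        return flags or ["ok"]

    print(qc_flags(72.3, datetime(2006, 7, 1, 12, 5), datetime(2006, 7, 1, 12, 0)))
    # ['out_of_range', 'late_or_missing_data']
    ```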

  4. Designing a Technology Enhanced Practice for Home Nursing Care of Patients with Congestive Heart Failure

    PubMed Central

    Casper, Gail R.; Karsh, Ben-Tzion; K.L., Calvin; Carayon, Pascale; Grenier, Anne-Sophie; Sebern, Margaret; Burke, Laura J.; Brennan, Patricia F.

    2005-01-01

    This paper describes the process we used to design the HeartCare website to support Technology Enhanced Practice (TEP) for home care nurses engaged in providing care for patients with Congestive Heart Failure (CHF). Composed of communication, information, and self-monitoring functions, the HeartCare website is aimed at supporting best practice nursing care for these patients. Its unique focus is professional practice, thus the scope of this project is greater and more abstract than those focusing on a task or set of activities. A modified macroergonomic analysis, design work system analysis, and focus groups utilizing participatory design methodology were undertaken to characterize the nursing practice model. Design of the HeartCare website required synthesizing the extant practice model and the agency’s evidence-based heart failure protocols, identifying aspects of practice that could be enhanced by supporting technology, and delineation of functional requirements of the Enhanced HeartCare technology. Validation and refinement of the website and planning for user training activities will be accomplished through a two-stage usability testing strategy. PMID:16779013

  5. Congestive Heart Failure Cardiopoietic Regenerative Therapy (CHART-1) trial design.

    PubMed

    Bartunek, Jozef; Davison, Beth; Sherman, Warren; Povsic, Thomas; Henry, Timothy D; Gersh, Bernard; Metra, Marco; Filippatos, Gerasimos; Hajjar, Roger; Behfar, Atta; Homsy, Christian; Cotter, Gad; Wijns, William; Tendera, Michal; Terzic, Andre

    2016-02-01

    Cardiopoiesis is a conditioning programme that aims to upgrade the cardioregenerative aptitude of patient-derived stem cells through lineage specification. Cardiopoietic stem cells tested initially for feasibility and safety exhibited signs of clinical benefit in patients with ischaemic heart failure (HF) warranting definitive evaluation. Accordingly, CHART-1 is designed as a large randomized, sham-controlled multicentre study aimed to validate cardiopoietic stem cell therapy. Patients (n = 240) with chronic HF secondary to ischaemic heart disease, reduced LVEF (<35%), and at high risk for recurrent HF-related events, despite optimal medical therapy, will be randomized 1:1 to receive 600 × 10(6) bone marrow-derived and lineage-directed autologous cardiopoietic stem cells administered via a retention-enhanced intramyocardial injection catheter or a sham procedure. The primary efficacy endpoint is a hierarchical composite of mortality, worsening HF, Minnesota Living with Heart Failure Questionnaire score, 6 min walk test, LV end-systolic volume, and LVEF at 9 months. The secondary efficacy endpoint is the time to cardiovascular death or worsening HF at 12 months. Safety endpoints include mortality, readmissions, aborted sudden deaths, and serious adverse events at 12 and 24 months. The CHART-1 clinical trial is powered to examine the therapeutic impact of lineage-directed stem cells as a strategy to achieve cardiac regeneration in HF populations. On completion, CHART-1 will offer a definitive evaluation of the efficacy and safety of cardiopoietic stem cells in the treatment of chronic ischaemic HF. NCT01768702. © 2015 The Authors European Journal of Heart Failure © 2015 European Society of Cardiology.

  6. Oxygen sensor signal validation for the safety of the rebreather diver.

    PubMed

    Sieber, Arne; L'abbate, Antonio; Bedini, Remo

    2009-03-01

    In electronically controlled, closed-circuit rebreather diving systems, the partial pressure of oxygen inside the breathing loop is controlled with three oxygen sensors, a microcontroller and a solenoid valve - critical components that may fail. State-of-the-art detection of sensor failure, based on a voting algorithm, may fail under circumstances where two or more sensors show the same but incorrect values. The present paper details a novel rebreather controller that offers true sensor-signal validation, thus allowing efficient and reliable detection of sensor failure. The core components of this validation system are two additional solenoids, which allow an injection of oxygen or diluent gas directly across the sensor membrane.
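    A hedged sketch of why simple voting can be defeated (illustrative only, not the controller described above): with three sensors, taking the median rejects a single outlier, but if two cells drift to the same wrong reading the median follows them, which is exactly the case the solenoid-based signal validation is meant to catch.

    ```python
    def voted_ppo2(readings):
        """Median vote over three oxygen-sensor readings (bar)."""
        return sorted(readings)[1]

    # One failed sensor: voting works, a value near the true ~1.30 bar wins.
    print(voted_ppo2([1.31, 1.29, 0.70]))   # -> 1.29

    # Two current-limited sensors stuck at the same low value: voting fails and
    # the loop would be driven toward a dangerously high true PPO2.
    print(voted_ppo2([1.31, 0.70, 0.70]))   # -> 0.70
    ```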

  7. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes that part of these models' failures can be attributed to the random nature of the debugging data given to them as input, and it poses the problem of correcting this defect as an area of future research.

  8. Orthogonal series generalized likelihood ratio test for failure detection and isolation. [for aircraft control

    NASA Technical Reports Server (NTRS)

    Hall, Steven R.; Walker, Bruce K.

    1990-01-01

    A new failure detection and isolation algorithm for linear dynamic systems is presented. This algorithm, the Orthogonal Series Generalized Likelihood Ratio (OSGLR) test, is based on the assumption that the failure modes of interest can be represented by truncated series expansions. This assumption leads to a failure detection algorithm with several desirable properties. Computer simulation results are presented for the detection of the failures of actuators and sensors of a C-130 aircraft. The results show that the OSGLR test generally performs as well as the GLR test in terms of time to detect a failure and is more robust to failure mode uncertainty. However, the OSGLR test is also somewhat more sensitive to modeling errors than the GLR test.
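    For orientation, a hedged sketch of a plain GLR bias-failure detector on a scalar residual sequence (a simplification, not the OSGLR algorithm with its series-expansion failure modes): the statistic compares the hypothesis of an unknown constant bias over a window against the zero-mean no-failure hypothesis. The threshold and noise values are assumptions.

    ```python
    import numpy as np

    def glr_bias_statistic(residuals, sigma):
        """GLR statistic for an unknown constant bias over the window,
        against the zero-mean no-failure hypothesis (known noise std sigma)."""
        r = np.asarray(residuals, dtype=float)
        return len(r) * r.mean() ** 2 / (2.0 * sigma ** 2)

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 0.1, size=50)    # sensor residuals, no failure
    failed = rng.normal(0.08, 0.1, size=50)    # residuals with a small bias

    threshold = 5.0                            # assumed detection threshold
    for name, window in [("healthy", healthy), ("failed", failed)]:
        stat = glr_bias_statistic(window, sigma=0.1)
        print(name, round(stat, 2), "ALARM" if stat > threshold else "ok")
    ```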

  9. Creep rupture behavior of unidirectional advanced composites

    NASA Technical Reports Server (NTRS)

    Yeow, Y. T.

    1980-01-01

    A 'material modeling' methodology for predicting the creep rupture behavior of unidirectional advanced composites is proposed. In this approach the parameters (obtained from short-term tests) required to make the predictions are the three principal creep compliance master curves and their corresponding quasi-static strengths tested at room temperature (22 C). Using these parameters in conjunction with a failure criterion, creep rupture envelopes can be generated for any combination of in-plane loading conditions and ambient temperature. The analysis was validated experimentally for one composite system, the T300/934 graphite-epoxy system. This was done by performing short-term creep tests (to generate the principal creep compliance master curves with the time-temperature superposition principle) and relatively long-term creep rupture tensile tests of off-axis specimens at 180 C. Good to reasonable agreement between experimental and analytical results is observed.

  10. Using Modeling and Simulation to Predict Operator Performance and Automation-Induced Complacency With Robotic Automation: A Case Study and Empirical Validation.

    PubMed

    Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M

    2015-09-01

    The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.

  11. Herbal and Dietary Supplement Induced Liver Injury

    PubMed Central

    de Boer, Ynto S.; Sherker, Averell H.

    2016-01-01

    Summary The increase in the use of herbal and dietary supplements (HDS) over the last decades has been accompanied with an increase in the reports of HDS associated hepatotoxicity. The spectrum of HDS induced liver injury is diverse and the outcome may vary from transient liver test elevations to fulminant hepatic failure resulting in death or requiring liver transplantation. There are no validated standardized tools to establish the diagnosis, but some HDS products do have a typical clinical signature that may help to identify HDS induced liver injury. PMID:27842768

  12. Static Properties of Fibre Metal Laminates

    NASA Astrophysics Data System (ADS)

    Hagenbeek, M.; van Hengel, C.; Bosker, O. J.; Vermeeren, C. A. J. R.

    2003-07-01

    In this article a brief overview of the static properties of Fibre Metal Laminates is given. Starting with the stress-strain relation, an effective calculation tool for uniaxial stress-strain curves is given; the method is valid for all Glare types. The Norris failure model is described in combination with a Metal Volume Fraction approach, leading to a useful tool for predicting allowable blunt notch strength. The Volume Fraction approach is also useful in the case of the shear yield strength of Fibre Metal Laminates; shear yield properties are measured using the Iosipescu test.
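    As a hedged sketch of the Metal Volume Fraction idea referred to above (notation chosen here for illustration, not taken from the article), a laminate property such as blunt notch or shear yield strength is estimated by weighting the metal and fibre-layer contributions by the fraction of the laminate thickness occupied by metal:

    ```latex
    \mathrm{MVF} = \frac{\sum t_{\mathrm{al}}}{t_{\mathrm{laminate}}}, \qquad
    \sigma_{\mathrm{laminate}} \approx \mathrm{MVF}\,\sigma_{\mathrm{metal}}
      + \left(1 - \mathrm{MVF}\right)\sigma_{\mathrm{fibre\ layers}}
    ```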

  13. Instrumentation for In-Flight SSME Rocket Engine Plume Spectroscopy

    NASA Technical Reports Server (NTRS)

    Madzsar, George C.; Bickford, Randall L.; Duncan, David B.

    1994-01-01

    This paper describes instrumentation that is under development for an in-flight demonstration of a plume spectroscopy system on the space shuttle main engine. The instrumentation consists of a nozzle mounted optical probe for observation of the plume, and a spectrometer for identification and quantification of plume content. This instrumentation, which is intended for use as a diagnostic tool to detect wear and incipient failure in rocket engines, will be validated by a hardware demonstration on the Technology Test Bed engine at the Marshall Space Flight Center.

  14. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
    The weightings of the prior belief and the information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighted more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
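    A minimal sketch of the variance-weighted updating described above, using a normal-normal conjugate pair as a stand-in for the report's actual Type I/II/III transition machinery; all numbers are placeholders chosen for illustration.

    ```python
    import numpy as np

    def update_normal(prior_mean, prior_var, data, data_var):
        """Posterior mean/variance for a normal prior and normal likelihood with
        known observation variance; the weights are the inverse variances."""
        data = np.asarray(data, dtype=float)
        n = data.size
        post_var = 1.0 / (1.0 / prior_var + n / data_var)
        post_mean = post_var * (prior_mean / prior_var + data.sum() / data_var)
        return post_mean, post_var

    # Prior RUL estimate from fleet reliability data (Type I style): 1000 h, sd 300 h.
    # Condition-based RUL estimates from monitoring (Type III style): noisier, sd 200 h.
    observations = [650.0, 700.0, 640.0]
    mean, var = update_normal(1000.0, 300.0**2, observations, 200.0**2)
    print(f"Posterior RUL ~ {mean:.0f} h (sd {var**0.5:.0f} h)")
    # With more data the posterior is pulled toward the observations and tightens.
    ```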

  15. A Simple, Evidence-Based Approach to Help Guide Diagnosis of Heart Failure with Preserved Ejection Fraction.

    PubMed

    Reddy, Yogesh N V; Carter, Rickey E; Obokata, Masaru; Redfield, Margaret M; Borlaug, Barry A

    2018-05-23

    Background: Diagnosis of heart failure with preserved ejection fraction (HFpEF) is challenging in euvolemic patients with dyspnea, and no evidence-based criteria are available. We sought to develop and then validate non-invasive diagnostic criteria that could be used to estimate the likelihood that HFpEF is present among patients with unexplained dyspnea in order to guide further testing. Methods: Consecutive patients with unexplained dyspnea referred for invasive hemodynamic exercise testing were retrospectively evaluated. Diagnosis of HFpEF (case) or non-cardiac dyspnea (control) was ascertained by invasive hemodynamic exercise testing. Logistic regression was performed to evaluate the ability of clinical findings to discriminate cases from controls. A scoring system was developed and then validated in a separate test cohort. Results: The derivation cohort included 414 consecutive patients (267 HFpEF and 147 controls, HFpEF prevalence 64%). The test cohort included 100 consecutive patients (61 HFpEF, prevalence 61%). Obesity, atrial fibrillation, age >60 years, treatment with 2 or more antihypertensives, echocardiographic E/e' ratio >9, and echocardiographic pulmonary artery systolic pressure >35 mmHg were selected as the final set of predictive variables. A weighted score based on these six variables was used to create a composite score (H2FPEF score) ranging from 0 to 9. The odds of HFpEF doubled for each 1-unit score increase (OR 1.98 [1.74-2.30], p<0.0001), with an AUC of 0.841 (p<0.0001). The H2FPEF score was superior to a currently used algorithm based upon expert consensus (increase in AUC of +0.169 [+0.120 to +0.217], p<0.0001). Performance in the independent test cohort was maintained (AUC 0.886, p<0.0001). Conclusions: The H2FPEF score, which relies upon simple clinical characteristics and echocardiography, enables discrimination of HFpEF from non-cardiac causes of dyspnea and can assist in determination of the need for further diagnostic testing in the evaluation of patients with unexplained exertional dyspnea.

  16. Space Shuttle Solid Rocket Booster Decelerator Subsystem Drop Test 3 - Anatomy of a failure

    NASA Technical Reports Server (NTRS)

    Runkle, R. E.; Woodis, W. R.

    1979-01-01

    A test failure dramatically points out a design weakness or the limits of the material in the test article. In a low budget test program, with a very limited number of tests, a test failure sparks supreme efforts to investigate, analyze, and/or explain the anomaly and to improve the design such that the failure will not recur. The third air drop of the Space Shuttle Solid Rocket Booster Recovery System experienced such a dramatic failure. On air drop 3, the 54-ft drogue parachute was totally destroyed 0.7 sec after deployment. The parachute failure investigation, based on analysis of drop test data and supporting ground element test results is presented. Drogue design modifications are also discussed.

  17. Limited improvement of incorporating primary circulating prostate cells with the CAPRA score to predict biochemical failure-free outcome of radical prostatectomy for prostate cancer.

    PubMed

    Murray, Nigel P; Aedo, Socrates; Fuentealba, Cynthia; Jacob, Omar; Reyes, Eduardo; Novoa, Camilo; Orellana, Sebastian; Orellana, Nelson

    2016-10-01

    To establish a prediction model for early biochemical failure based on the Cancer of the Prostate Risk Assessment (CAPRA) score, the presence or absence of primary circulating prostate cells (CPC), and the number of primary CPCs (nCPC) detected per 8 ml blood sample before surgery. A prospective single-center study of men who underwent radical prostatectomy as monotherapy for prostate cancer. Clinical-pathological findings were used to calculate the CAPRA score. Before surgery, blood was taken for CPC detection; mononuclear cells were obtained using differential gel centrifugation, and CPCs were identified using immunocytochemistry. A CPC was defined as a cell expressing prostate-specific antigen and P504S, and the presence or absence of CPCs and the number of cells detected per 8 ml blood sample were registered. Patients were followed up for up to 5 years; biochemical failure was defined as a prostate-specific antigen >0.2 ng/ml. The validity of the CAPRA score was calibrated using partial validation, and fractional polynomial Cox proportional hazards regression was used to build 3 models, which underwent decision curve analysis (DCA) to determine their predictive value with respect to biochemical failure. A total of 267 men participated, mean age 65.80 years, and after 5 years of follow-up the biochemical failure-free survival was 67.42%. The model using the CAPRA score showed a hazard ratio (HR) of 5.76 between low- and high-risk groups, the model using CPC showed a HR of 26.84 between positive and negative groups, and the combined model showed a HR of 4.16 for the CAPRA score and 19.93 for CPC. Using the continuous variable nCPC, there was no improvement in the predictive value of the model compared with the model using a positive-negative result of CPC detection. The combined CAPRA-nCPC model showed an improvement in predictive performance for biochemical failure using Harrell's C concordance test and a net benefit on DCA in comparison with either model used separately. Although the use of a combined CAPRA-nCPC model improves the prediction of biochemical failure in patients undergoing radical prostatectomy for prostate cancer, the improvement is minimal. The use of the presence or absence of primary CPCs alone did not predict aggressive disease or biochemical failure. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Validation of RNAi Silencing Efficiency Using Gene Array Data shows 18.5% Failure Rate across 429 Independent Experiments.

    PubMed

    Munkácsy, Gyöngyi; Sztupinszki, Zsófia; Herman, Péter; Bán, Bence; Pénzváltó, Zsófia; Szarvas, Nóra; Győrffy, Balázs

    2016-09-27

    No independent cross-validation of the success rate of studies utilizing small interfering RNA (siRNA) for gene silencing has been completed before. To assess the influence of experimental parameters like cell line, transfection technique, validation method, and type of control, these must be evaluated across a large set of studies. We utilized gene chip data published for siRNA experiments to assess the success rate and to compare the methods used in these experiments. We searched NCBI GEO for samples with whole transcriptome analysis before and after gene silencing and evaluated the efficiency for the target and off-target genes using the array-based expression data. The Wilcoxon signed-rank test was used to assess silencing efficacy, and Kruskal-Wallis tests and Spearman rank correlation were used to evaluate study parameters. Altogether, 1,643 samples representing 429 experiments published in 207 studies were evaluated. The fold change (FC) of down-regulation of the target gene was above 0.7 in 18.5% of experiments and above 0.5 in 38.7%. Silencing efficiency was lowest in MCF7 and highest in SW480 cells (FC = 0.59 and FC = 0.30, respectively, P = 9.3E-06). Studies utilizing Western blot for validation performed better than those with quantitative polymerase chain reaction (qPCR) or microarray (FC = 0.43, FC = 0.47, and FC = 0.55, respectively, P = 2.8E-04). There was no correlation between type of control, transfection method, publication year, and silencing efficiency. Although gene silencing is a robust feature successfully cross-validated in the majority of experiments, efficiency remained insufficient in a significant proportion of studies. Selection of the cell line model and the validation method had the highest influence on silencing proficiency.
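    A hedged sketch of the per-experiment efficacy test named above, applied to hypothetical paired expression values for a target gene before and after transfection (placeholder data, not values from the GEO samples):

    ```python
    import numpy as np
    from scipy.stats import wilcoxon

    # Hypothetical normalized expression of the target gene in matched
    # control vs. siRNA-treated samples across replicate arrays.
    control = np.array([1.00, 0.96, 1.05, 0.98, 1.02, 0.94])
    silenced = np.array([0.46, 0.51, 0.44, 0.55, 0.48, 0.50])

    stat, p_value = wilcoxon(control, silenced, alternative="greater")
    fold_change = (silenced / control).mean()
    print(f"Wilcoxon p = {p_value:.3f}, mean FC = {fold_change:.2f}")
    # An FC well below 0.7 would count as successful silencing in the study's terms.
    ```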

  19. [Development and Validation of the Academic Resilience Inventory for Nursing Students in Taiwan].

    PubMed

    Li, Cheng-Chieh; Wei, Chi-Fang; Tung, Yuk-Ying

    2017-10-01

    Failure to cope with learning pressures has been shown to influence the learning achievement and professional performance of nursing students. In order to enable nursing students to adapt successfully to their academic stress, it is essential to explore their academic resilience in the process of learning. To develop the Academic Resilience Inventory for Nursing Students (ARINS) and to test its reliability and validity. A total of 611 nursing students in central and southern Taiwan were recruited as participants. We divided the sample into two subsamples randomly using R software. The first sample was used to conduct item analysis and exploratory factor analysis. The other sample was used to conduct confirmatory factor analysis (CFA), cross-validation, and criterion-related validity testing. There are 15 items in the ARINS, with cognitive maturity, emotional regulation, and help-seeking behavior used as the measurement indicators of academic resilience in nursing students. The goodness-of-fit indices indicate that the model fit the data well in the CFA, with good convergent and discriminant validity. Criterion-related validity was supported by the correlations among the ARINS, learning performance and attitude, hope and optimism, and depression. The ARINS has good reliability and validity and is a suitable measure of academic resilience in nursing students. It is helpful for nursing students to examine their academic stress and coping efficacy in the learning process.

  20. Role of failure-mechanism identification in accelerated testing

    NASA Technical Reports Server (NTRS)

    Hu, J. M.; Barker, D.; Dasgupta, A.; Arora, A.

    1993-01-01

    Accelerated life testing techniques provide a short-cut method to investigate the reliability of electronic devices with respect to certain dominant failure mechanisms that occur under normal operating conditions. However, accelerated tests have often been conducted without knowledge of the failure mechanisms and without ensuring that the test accelerated the same mechanism as that observed under normal operating conditions. This paper summarizes common failure mechanisms in electronic devices and packages and investigates possible failure mechanism shifting during accelerated testing.
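    For thermally activated mechanisms, the usual quantitative link between test and use conditions is an Arrhenius acceleration factor; the expression below is standard background rather than a result of the paper, and it only holds when the accelerated stress excites the same failure mechanism as field use.

    ```latex
    AF = \frac{t_{\mathrm{use}}}{t_{\mathrm{test}}}
       = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\mathrm{use}}} - \frac{1}{T_{\mathrm{test}}}\right)\right]
    ```

    Here E_a is the activation energy of the dominant failure mechanism, k_B is Boltzmann's constant, and both temperatures are absolute; a shift in mechanism during the accelerated test invalidates the extrapolation.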

  1. Modeling Composite Laminate Crushing for Crash Analysis

    NASA Technical Reports Server (NTRS)

    Fleming, David C.; Jones, Lisa (Technical Monitor)

    2002-01-01

    Crash modeling of composite structures remains limited in application and has not been effectively demonstrated as a predictive tool. While the global response of composite structures may be well modeled, when composite structures act as energy-absorbing members through direct laminate crushing the modeling accuracy is greatly reduced. The most efficient composite energy absorbing structures, in terms of energy absorbed per unit mass, are those that absorb energy through a complex progressive crushing response in which fiber and matrix fractures on a small scale dominate the behavior. Such failure modes simultaneously include delamination of plies, failure of the matrix to produce fiber bundles, and subsequent failure of fiber bundles either in bending or in shear. In addition, the response may include the significant action of friction, both internally (between delaminated plies or fiber bundles) or externally (between the laminate and the crushing surface). A figure shows the crushing damage observed in a fiberglass composite tube specimen, illustrating the complexity of the response. To achieve a finite element model of such complex behavior is an extremely challenging problem. A practical crushing model based on detailed modeling of the physical mechanisms of crushing behavior is not expected in the foreseeable future. The present research describes attempts to model composite crushing behavior using a novel hybrid modeling procedure. Experimental testing is done in support of the modeling efforts, and a test specimen is developed to provide data for validating laminate crushing models.

  2. Fracture toughness versus micro-tensile bond strength testing of adhesive-dentin interfaces.

    PubMed

    De Munck, Jan; Luehrs, Anne-Katrin; Poitevin, André; Van Ende, Annelies; Van Meerbeek, Bart

    2013-06-01

    To assess interfacial fracture toughness of different adhesive approaches and compare to a standard micro-tensile bond-strength (μTBS) test. Chevron-notched beam fracture toughness (CNB) was measured following a modified ISO 24370 standard. Composite bars with dimensions of 3.0×4.0×25 mm were prepared, with the adhesive-dentin interface in the middle. At the adhesive-dentin interface, a chevron notch was prepared using a 0.15 mm thin diamond blade mounted in a water-cooled diamond saw. Each specimen was loaded until failure in a 4-point bend test setup and the fracture toughness was calculated according to the ISO specifications. Similarly, adhesive-dentin micro-specimens (1.0×1.0×8-10 mm) were stressed in tensile until failure to determine the μTBS. A positive correlation (r(2)=0.64) was observed between CNB and μTBS, which however was only nearly statistically significant, mainly due to the dissimilar outcome of Scotchbond Universal (3M ESPE). While few μTBS specimens failed at the adhesive-dentin interface, almost all CNB specimens failed interfacially at the notch tip. Weibull moduli for interfacial fracture toughness were much higher than for μTBS (3.8-11.5 versus 2.7-4.8, respectively), especially relevant with regard to early failures. Although the ranking of the adhesives on their bonding effectiveness tested using CNB and μTBS corresponded well, the outcome of CNB appeared more reliable and less variable. Fracture toughness measurement is however more laborious and requires specific equipment. The μTBS nevertheless appeared to remain a valid method to assess bonding effectiveness in a versatile way. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  3. Significance testing of clinical data using virus dynamics models with a Markov chain Monte Carlo method: application to emergence of lamivudine-resistant hepatitis B virus.

    PubMed Central

    Burroughs, N J; Pillay, D; Mutimer, D

    1999-01-01

    Bayesian analysis using a virus dynamics model is demonstrated to facilitate hypothesis testing of patterns in clinical time-series. Our Markov chain Monte Carlo implementation demonstrates that the viraemia time-series observed in two sets of hepatitis B patients on antiviral (lamivudine) therapy, chronic carriers and liver transplant patients, are significantly different, overcoming clinical trial design differences that question the validity of non-parametric tests. We show that lamivudine-resistant mutants grow faster in transplant patients than in chronic carriers, which probably explains the differences in emergence times and failure rates between these two sets of patients. Incorporation of dynamic models into Bayesian parameter analysis is of general applicability in medical statistics. PMID:10643081
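    A minimal Metropolis-Hastings sketch in the spirit of the analysis above, but fitting only a single exponential viral decay rate to log-viraemia data rather than the authors' full virus dynamics model; the data, prior, and noise model are placeholder assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical log10 viraemia measurements at days 0..6 under therapy.
    days = np.arange(7.0)
    log_v = np.array([8.0, 7.6, 7.1, 6.8, 6.3, 5.9, 5.6])

    def log_posterior(decay, v0=8.0, sigma=0.2):
        """Gaussian likelihood of log-linear decay with an implicit flat positive prior."""
        if decay <= 0:
            return -np.inf
        predicted = v0 - decay * days
        return -0.5 * np.sum((log_v - predicted) ** 2) / sigma**2

    # Random-walk Metropolis over the decay rate (log10 per day).
    samples, current = [], 0.3
    for _ in range(5000):
        proposal = current + rng.normal(0.0, 0.05)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
            current = proposal
        samples.append(current)

    posterior = np.array(samples[1000:])   # drop burn-in
    print(f"decay rate ~ {posterior.mean():.2f} +/- {posterior.std():.2f} log10/day")
    ```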

  4. Adaptive Flight Control Research at NASA

    NASA Technical Reports Server (NTRS)

    Motter, Mark A.

    2008-01-01

    A broad overview of current adaptive flight control research efforts at NASA is presented, as well as some more detailed discussion of selected specific approaches. The stated objective of the Integrated Resilient Aircraft Control Project, one of NASA's Aviation Safety programs, is to advance the state-of-the-art of adaptive controls as a design option to provide enhanced stability and maneuverability margins for safe landing in the presence of adverse conditions such as actuator or sensor failures. Under this project, a number of adaptive control approaches are being pursued, including neural networks and multiple models. Validation of all the adaptive control approaches will use not only traditional methods such as simulation, wind tunnel testing and manned flight tests, but will be augmented with recently developed capabilities in unmanned flight testing.

  5. Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation

    NASA Astrophysics Data System (ADS)

    Hao, Hongxia; Zhou, Zhiguo; Wang, Jing

    2017-03-01

    Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomics-driven methods provide a new insight to quantitatively explore underlying information from PET images. However, it is still a challenging problem to automatically extract clinically meaningful features for prognosis. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) by using sparse representation. The proposed method does not need precalculated features and can learn intrinsically distinctive features contributing to classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary pair learning based sparse representation. Tumor heterogeneity is initially captured through an anisotropic kernel and represented as a set of concatenated vectors, which forms the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary pair learning based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated by SABR at our institute. Experimental results show that the proposed approach can achieve an area under the receiver operating characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using five-fold cross validation.

  6. Does the 6-minute walk test predict the prognosis in patients with NYHA class II or III chronic heart failure?

    PubMed

    Roul, G; Germain, P; Bareiss, P

    1998-09-01

    We prospectively evaluated the potential of the 6-minute walk test compared with peak VO2 in predicting outcome of patients with New York Heart Association (NYHA) class II or III heart failure. Patients with a history of heart failure caused by systolic dysfunction were included. The combined final outcome (death or hospitalization for heart failure) was used as the judgment criterion. One hundred twenty-one patients (age 59+/-11 years; left ventricular ejection fraction 29.6%+/-13%) were included and followed for 1.53+/-0.98 years. Patients were separated into two groups according to outcome: group 1 (G1, 74 patients), without events, and group 2 (G2, 47 patients), who reached the combined end point. Peak VO2 was clearly different between G1 and G2 (18.5+/-4 vs. 13.9+/-4 ml/kg/min, p=0.0001) but not the distance walked (448+/-92 vs 410+/-126 m; p=0.084, not significant). Survival analysis showed that unlike peak VO2, the distance covered was barely distinguishable between the groups (p < 0.08). However, receiver operating characteristic curves revealed that the best performances for the 6-minute walk test were obtained for subjects walking < or =300 m. These patients had a worse prognosis than those walking farther (p=0.013). In this subset of patients, there was a significant correlation between distance covered and peak VO2 (r=0.65, p=0.011). Thus it appears that the more severely affected patients have a daily activity level relatively close to their maximal exercise capacity. Nevertheless, the 300 m threshold suggested by this study needs to be validated in an independent population. A distance walked in 6 minutes < or =300 m can predict outcome. Moreover, in these cases there is a significant correlation between the 6-minute walk test and peak VO2 demonstrating the potential of this simple procedure as a first-line screening test for this subset of patients.

  7. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Current characterization requires large-scale testing to obtain steady detonation wave data for analysis, owing to the relatively thick reaction zones. Use of a microwave interferometer applied to small-scale confined transient experiments is being implemented to allow for time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.

  8. Assessment of Intralaminar Progressive Damage and Failure Analysis Using an Efficient Evaluation Framework

    NASA Technical Reports Server (NTRS)

    Hyder, Imran; Schaefer, Joseph; Justusson, Brian; Wanthal, Steve; Leone, Frank; Rose, Cheryl

    2017-01-01

    Reducing the timeline for development and certification of composite structures has been a long-standing objective of the aerospace industry. This timeline can be further exacerbated when attempting to integrate new fiber-reinforced composite materials due to the large amount of testing required at every level of design. Computational progressive damage and failure analysis (PDFA) attempts to mitigate this effect; however, new PDFA methods have been slow to be adopted in industry since material model evaluation techniques have not been fully defined. This study presents an efficient evaluation framework which uses a piecewise verification and validation (V&V) approach for PDFA methods. Specifically, the framework is applied to evaluate PDFA research codes within the context of intralaminar damage. Methods are incrementally taken through various V&V exercises specifically tailored to study PDFA intralaminar damage modeling capability. Finally, methods are evaluated against a defined set of success criteria to highlight successes and limitations.

  9. UAS-Systems Integration, Validation, and Diagnostics Simulation Capability

    NASA Technical Reports Server (NTRS)

    Buttrill, Catherine W.; Verstynen, Harry A.

    2014-01-01

    As part of the Phase 1 efforts of NASA's UAS-in-the-NAS Project a task was initiated to explore the merits of developing a system simulation capability for UAS to address airworthiness certification requirements. The core of the capability would be a software representation of an unmanned vehicle, including all of the relevant avionics and flight control system components. The specific system elements could be replaced with hardware representations to provide Hardware-in-the-Loop (HWITL) test and evaluation capability. The UAS Systems Integration and Validation Laboratory (UAS-SIVL) was created to provide a UAS-systems integration, validation, and diagnostics hardware-in-the-loop simulation capability. This paper discusses how SIVL provides a robust and flexible simulation framework that permits the study of failure modes, effects, propagation paths, criticality, and mitigation strategies to help develop safety, reliability, and design data that can assist with the development of certification standards, means of compliance, and design best practices for civil UAS.

  10. Spacecraft Parachute Recovery System Testing from a Failure Rate Perspective

    NASA Technical Reports Server (NTRS)

    Stewart, Christine E.

    2013-01-01

    Spacecraft parachute recovery systems, especially those with a parachute cluster, require testing to identify and reduce failures. This is especially important when the spacecraft in question is human-rated. The recent effort to make spaceflight affordable has increased the importance of determining a minimum testing requirement. The number of tests required to achieve a mature design, with a relatively constant failure rate, can be estimated from a review of previous complex spacecraft recovery systems. Examination of Apollo parachute testing and of Shuttle Solid Rocket Booster recovery chute operation clarifies the point at which each of those systems reached maturity, as well as the risks inherent in not performing a sufficient number of tests prior to operation with humans on board. When looking at complex parachute systems used in spaceflight landing systems, a pattern emerges: a minimum amount of testing is required to wring out the failure modes and reduce the failure rate of the parachute system to a level acceptable for human spaceflight. Driving the failure rate down to an acceptable level requires not only a sufficient number of system-level tests but also the ability to update the design as failure modes are found. In addition, sufficient data and imagery are necessary to identify incipient failure modes or to determine failure causes when a system failure occurs. To demonstrate the need for sufficient system-level testing before an acceptable failure rate is reached, the Apollo Earth Landing System (ELS) test program and the Shuttle Solid Rocket Booster Recovery System failure history are examined, and experiences from the Orion Capsule Parachute Assembly System are noted.
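
    The abstract's idea that a system approaches a relatively constant failure rate after enough tests and design updates can be illustrated with a standard Crow-AMSAA (power-law) reliability-growth fit. This model choice and every number below are illustrative assumptions, not data or methods from the cited programs:

        import numpy as np

        # Illustrative assumption: the cumulative test number at which each of eight
        # failures occurred, in a campaign of 120 system-level tests.
        t_i = np.array([2., 5., 9., 16., 27., 45., 70., 100.])
        T = 120.0
        n = len(t_i)

        # Crow-AMSAA / power-law model: expected cumulative failures N(T) = lam * T**beta.
        # Time-terminated maximum-likelihood estimates:
        beta = n / np.sum(np.log(T / t_i))
        lam = n / T**beta

        # Instantaneous failure rate per test at the end of the campaign.
        rate_now = lam * beta * T**(beta - 1.0)
        print(f"beta = {beta:.2f} (beta < 1 indicates reliability growth)")
        print(f"current failure rate ~ {rate_now:.3f} failures/test "
              f"(~1 failure every {1.0/rate_now:.0f} tests)")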

  11. A maximum entropy fracture model for low and high strain-rate fracture in Tin-Silver-Copper alloys

    NASA Astrophysics Data System (ADS)

    Chan, Dennis K.

    SnAgCu solder alloys exhibit significant rate-dependent constitutive behavior, and solder joints made of these alloys exhibit failure modes that are also rate-dependent. Solder joints are an integral part of microelectronic packages and are subjected to a wide variety of loading conditions, ranging from thermo-mechanical fatigue to impact loading. Consequently, there is a need for a non-empirical, rate-dependent failure theory that can accurately predict fracture in these solder joints. In the present thesis, various failure models are first reviewed; these models are typically empirical or rest on limiting assumptions, such as purely elastic behavior, that are not valid for solder joints. The development and validation of a maximum entropy fracture model (MEFM) valid for low strain-rate fracture in SnAgCu solders is then presented. To this end, work on characterizing SnAgCu solder behavior at low strain rates, using a specially designed tester to estimate parameters for constitutive models, is presented. Next, the maximum entropy fracture model is reviewed. This failure model uses a single damage accumulation parameter and relates the risk of fracture to accumulated inelastic dissipation. A methodology is presented to extract this model parameter through a custom-built microscale mechanical tester for Sn3.8Ag0.7Cu solder. The single parameter is used to numerically simulate fracture in two solder joints with entirely different geometries, and the simulations are compared to experimentally observed fracture in the same packages. Following the low strain-rate fracture simulations, the constitutive behavior of solder alloys across nine decades of strain rate, measured through MTS compression tests and split-Hopkinson bar tests, is presented. Preliminary work on using orthogonal machining as a novel technique for material characterization at high strain rates is also presented. The data from the MTS compression and split-Hopkinson bar tests are used to demonstrate the localization of stress to the interface of solder joints at high strain rates. The MEFM is further extended to predict failure in brittle materials, which allows fracture prediction within intermetallic compounds (IMCs) in solder joints. It has been observed experimentally that the failure mode shifts from the bulk solder to the IMC layer with increasing loading rate, and the extended MEFM allows prediction of the fracture mode within the solder joint under different loading conditions. A fracture model capable of predicting failure modes at higher strain rates is necessary as mobile electronics become ubiquitous: mobile devices are prone to being dropped, which can induce loading rates within solder joints that are much larger than those experienced under thermo-mechanical fatigue. A range of possible damage accumulation parameters for Cu6Sn5 is determined for the MEFM, and a value within this range is used to demonstrate the increasing likelihood of IMC fracture in solder joints at larger loading rates. The thesis concludes with remarks on ongoing work, including determining a more accurate damage accumulation parameter for the Cu6Sn5 IMC and using machining as a technique for extracting failure parameters for the MEFM.
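
    The abstract describes relating fracture risk to accumulated inelastic dissipation through a single damage accumulation parameter. As a hedged illustration only, the sketch below assumes a simple exponential form p_f = 1 - exp(-k*W), where W is the accumulated inelastic dissipation density and k plays the role of the damage accumulation parameter; the thesis's actual functional form may differ, and the parameter value and load history are invented:

        import numpy as np

        k = 2.0e-2          # assumed damage accumulation parameter, 1/(mJ/mm^3)

        # Synthetic cyclic load history: inelastic strain increment and flow stress per cycle.
        cycles = 500
        dE_inelastic = np.full(cycles, 4.0e-4)   # inelastic strain increment per cycle
        sigma_flow = np.full(cycles, 35.0)       # flow stress, MPa

        # Accumulated inelastic dissipation density (MPa * strain = mJ/mm^3).
        W = np.cumsum(sigma_flow * dE_inelastic)

        p_fracture = 1.0 - np.exp(-k * W)
        print(f"dissipation after {cycles} cycles: {W[-1]:.2f} mJ/mm^3, "
              f"fracture probability: {p_fracture[-1]:.2f}")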

  12. [Prefrontal clinical symptoms in daily living: screening assessment by means of the short Prefrontal Symptoms Inventory (PSI-20)].

    PubMed

    Pedrero-Pérez, Eduardo J; Ruiz-Sánchez de León, José M; Morales-Alonso, Sara; Pedrero-Aguilar, Jara; Fernández-Méndez, Laura M

    2015-05-01

    Estimation of everyday symptoms of frontal dysfunction is considered essential for endowing neuropsychological assessments with ecological validity. The questionnaires available today were constructed to estimate executive problems in daily life in populations with neurological damage; instruments are needed that measure these behaviours in the general population or in clinical populations with mild or moderate impairment. The aim was to examine the factorial validity of the short version of the Prefrontal Symptoms Inventory and to find evidence of its concurrent validity. Three samples were obtained: the first from the Internet (n = 504); the second from a non-clinical population by means of paper and pencil (n = 1,257); and the third from patients being treated for substance addiction (n = 602). An unrestricted (exploratory) factor analysis was performed on the first sample, and the resulting structure was submitted to confirmatory factor analysis in the other two samples. The three-factor structure that was found was confirmed with excellent fit indices in both samples. Evidence of concurrent validity was found with quality of life and mental health tests. We propose a short questionnaire for detecting failures of prefrontal origin in daily living that improves on the psychometric qualities of similar tests, which are oriented towards severe neurological pathologies. The structural stability of the test supports its use in the general population, for the early detection of cognitive impairment, and in clinical populations with mild or moderate deterioration. A set of criteria is proposed for use in interpreting the results.

  13. Three-dimensional finite element analysis of the shear bond test.

    PubMed

    DeHoff, P H; Anusavice, K J; Wang, Z

    1995-03-01

    The purpose of this study was to use finite element analyses to model the planar shear bond test and to evaluate the effects of modulus values, bonding agent thickness, and loading conditions on the stress distribution in the dentin adjacent to the bonding agent-dentin interface. All calculations were performed with the ANSYS finite element program. The planar shear bond test was modeled as a cylinder of resin-based composite bonded to a cylindrical dentin substrate. The effects of material, geometry, and loading variables were determined primarily with a three-dimensional structural element; several runs were also made using an axisymmetric element with harmonic loading and a plane strain element to determine whether two-dimensional analyses yield valid results. Stress calculations using three-dimensional finite element analyses confirmed the presence of large stress concentration effects for all stress components at the bonding agent-dentin interface near the application of the load. The maximum vertical shear stress generally occurs approximately 0.3 mm below the loading site and then decreases sharply in all directions. The stresses reach relatively uniform conditions within about 0.5 mm of the loading site and then increase again as the lower region of the interface is approached. Calculations using various loading conditions indicated that a wire-loop method of loading leads to smaller stress concentration effects, but a shear bond strength determined by dividing the failure load by the cross-sectional area grossly underestimates the true interfacial bond strength. Most dental researchers use tensile and shear bond tests to predict the effects of process and material variables on the clinical performance of bonding systems, but no evidence has yet shown that bond strength is relevant to clinical performance. A critical factor in assessing the usefulness of bond tests is a thorough understanding of the stress states that cause failure in the test, followed by an assessment of whether these stress states also exist in the clinical situation. Finite element analyses can help to answer this question, but much additional work is needed to identify the failure modes in service and to relate these failures to particular loading conditions. The present study represents only a first step in understanding the stress states in the planar shear bond test.
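
    For reference, the "shear bond strength" the abstract describes, failure load divided by cross-sectional area, is the simple nominal stress computed below; the diameter and load are illustrative assumptions, and, as the study argues, this nominal value can grossly underestimate the true interfacial strength because of stress concentrations near the loading site:

        import math

        # Nominal shear bond strength = failure load / bonded cross-sectional area.
        # Values are illustrative assumptions, not data from the study.
        diameter_mm = 4.0            # bonded cylinder diameter
        failure_load_N = 180.0       # load at debond

        area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
        nominal_strength_MPa = failure_load_N / area_mm2   # N/mm^2 == MPa
        print(f"nominal shear bond strength: {nominal_strength_MPa:.1f} MPa")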

  14. Sample features associated with success rates in population-based EGFR mutation testing.

    PubMed

    Shiau, Carolyn J; Babwah, Jesse P; da Cunha Santos, Gilda; Sykes, Jenna R; Boerner, Scott L; Geddie, William R; Leighl, Natasha B; Wei, Cuihong; Kamel-Reid, Suzanne; Hwang, David M; Tsao, Ming-Sound

    2014-07-01

    Epidermal growth factor receptor (EGFR) mutation testing has become critical in the treatment of patients with advanced non-small-cell lung cancer. This study involves a large cohort and epidemiologically unselected series of EGFR mutation testing for patients with nonsquamous non-small-cell lung cancer in a North American population to determine sample-related factors that influence success in clinical EGFR testing. Data from consecutive cases of Canadian province-wide testing at a centralized diagnostic laboratory for a 24-month period were reviewed. Samples were tested for exon-19 deletion and exon-21 L858R mutations using a validated polymerase chain reaction method with 1% to 5% detection sensitivity. From 2651 samples submitted, 2404 samples were tested with 2293 samples eligible for analysis (1780 histology and 513 cytology specimens). The overall test-failure rate was 5.4% with overall mutation rate of 20.6%. No significant differences in the failure rate, mutation rate, or mutation type were found between histology and cytology samples. Although tumor cellularity was significantly associated with test-success or mutation rates in histology and cytology specimens, respectively, mutations could be detected in all specimen types. Significant rates of EGFR mutation were detected in cases with thyroid transcription factor (TTF)-1-negative immunohistochemistry (6.7%) and mucinous component (9.0%). EGFR mutation testing should be attempted in any specimen, whether histologic or cytologic. Samples should not be excluded from testing based on TTF-1 status or histologic features. Pathologists should report the amount of available tumor for testing. However, suboptimal samples with a negative EGFR mutation result should be considered for repeat testing with an alternate sample.

  15. Self-control depletion and nicotine deprivation as precipitants of smoking cessation failure: A human laboratory model.

    PubMed

    Heckman, Bryan W; MacQueen, David A; Marquinez, Nicole S; MacKillop, James; Bickel, Warren K; Brandon, Thomas H

    2017-04-01

    The need to understand potential precipitants of smoking relapse is exemplified by relapse rates as high as 95%. The Self-Control Strength model, which proposes that self-control is dependent upon limited resources and susceptible to fatigue, may offer insight into relapse processes. The current study tested the hypothesis that self-control depletion (SCD), produced from engagement in emotional suppression, would serve as a novel antecedent for cessation failure, as indexed by a validated laboratory analogue of smoking lapse and relapse. We also examined whether SCD effects interacted with those of a well-established relapse precipitant (i.e., nicotine deprivation). Craving and behavioral economic indices (delay discounting and demand) were tested as hypothesized mechanisms for increased cessation failure. Ultimately, a moderated mediation model was used to test nicotine deprivation as a hypothesized moderator of SCD effects. We used a 2 × 2 (12-hr deprivation vs. no deprivation; SCD vs. no SCD) factorial between-subjects design (N = 128 smokers). The primary hypothesis of the study was supported, as SCD increased lapse behavior (p = .04). Nicotine deprivation significantly increased craving, cigarette demand, delay discounting, and lapse behavior. No main effects were found for SCD on putative mediators (i.e., craving, demand, and discounting), but the SCD and deprivation manipulations interacted upon craving (p = .04). The moderated mediation model was significant. SCD was found to increase craving among nicotine deprived smokers, which mediated effects on lapse behavior. SCD appears to play an important role in smoking relapse and may be a viable target for intervention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Self-Control Depletion and Nicotine Deprivation as Precipitants of Smoking Cessation Failure: A Human Laboratory Model

    PubMed Central

    Heckman, Bryan W.; MacQueen, David A.; Marquinez, Nicole S.; MacKillop, James; Bickel, Warren K.; Brandon, Thomas H.

    2017-01-01

    Objective The need to understand potential precipitants of smoking relapse is exemplified by relapse rates as high as 95%. The Self-Control Strength model, which proposes that self-control is dependent upon limited resources and susceptible to fatigue, may offer insight into relapse processes. The current study tested the hypothesis that self-control depletion (SCD), produced from engagement in emotional suppression, would serve as a novel antecedent for cessation failure, as indexed by a validated laboratory analogue of smoking lapse and relapse. We also examined whether SCD effects interacted with those of a well-established relapse precipitant (i.e., nicotine deprivation). Craving and behavioral economic indices (delay discounting and demand) were tested as hypothesized mechanisms for increased cessation failure. Ultimately, a moderated mediation model was used to test nicotine deprivation as a hypothesized moderator of SCD effects. Method We used a 2 × 2 (12-hour deprivation vs. no deprivation; SCD vs. no SCD) factorial between-subjects design (N = 128 smokers). Results The primary hypothesis of the study was supported, as SCD increased lapse behavior (p = .04). Nicotine deprivation significantly increased craving, cigarette demand, delay discounting, and lapse behavior. No main effects were found for SCD on putative mediators (i.e., craving, demand, discounting), but the SCD and deprivation manipulations interacted upon craving (p = .04). The moderated mediation model was significant. SCD was found to increase craving among nicotine deprived smokers, which mediated effects on lapse behavior. Conclusions SCD appears to play an important role in smoking relapse and may be a viable target for intervention. PMID:28333537

  17. A failure of conflict to modulate dual-stream processing may underlie the formation and maintenance of delusions.

    PubMed

    Speechley, W J; Murray, C B; McKay, R M; Munz, M T; Ngan, E T C

    2010-03-01

    Dual-stream information processing proposes that reasoning is composed of two interacting processes: a fast, intuitive system (Stream 1) and a slower, more logical process (Stream 2). In non-patient controls, divergence of these streams may result in the experience of conflict, modulating decision-making towards Stream 2, and initiating a more thorough examination of the available evidence. In delusional schizophrenia patients, a failure of conflict to modulate decision-making towards Stream 2 may reduce the influence of contradictory evidence, resulting in a failure to correct erroneous beliefs. Delusional schizophrenia patients and non-patient controls completed a deductive reasoning task requiring logical validity judgments of two-part conditional statements. Half of the statements were characterized by a conflict between logical validity (Stream 2) and content believability (Stream 1). Patients were significantly worse than controls in determining the logical validity of both conflict and non-conflict conditional statements. This between groups difference was significantly greater for the conflict condition. The results are consistent with the hypothesis that delusional schizophrenia patients fail to use conflict to modulate towards Stream 2 when the two streams of reasoning arrive at incompatible judgments. This finding provides encouraging preliminary support for the Dual-Stream Modulation Failure model of delusion formation and maintenance. 2009 Elsevier Masson SAS. All rights reserved.

  18. Using the Folstein Mini Mental State Exam (MMSE) to explore methodological issues in cognitive aging research.

    PubMed

    Monroe, Todd; Carter, Michael

    2012-09-01

    Cognitive scales are used frequently in geriatric research and practice. These instruments are constructed with underlying assumptions that are part of their validation process. A common measurement scale used in older adults is the Folstein Mini Mental State Exam (MMSE). The MMSE was designed to screen for cognitive impairment and is used often in geriatric research. This paper has three aims. Aim one was to explore four potential threats to validity in the use of the MMSE: (1) administering the exam without meeting the underlying assumptions, (2) not reporting that the underlying assumptions were assessed prior to test administration, (3) use of variable and inconsistent cut-off scores for the determination of the presence of cognitive impairment, and (4) failure to adjust the scores based on the demographic characteristics of the tested subject. Aim two was to conduct a literature search to determine whether the assumptions of (1) education level assessment, (2) sensory assessment, and (3) language fluency were being met and clearly reported in published research using the MMSE. Aim three was to provide recommendations to minimize threats to validity in research studies that use cognitive scales such as the MMSE. We found inconsistencies in published work in reporting whether or not subjects met the assumptions that underlie a reliable and valid MMSE score. These inconsistencies can pose threats to the reliability of exam results. Fourteen of the 50 studies reviewed reported inclusion of all three of these assumptions. Inconsistencies in reporting the inclusion of the underlying assumptions for a reliable score could mean that subjects were not appropriate to be tested with the MMSE or that an appropriate administration of the MMSE was not clearly reported. Thus, the research literature could contain threats to both validity and reliability based on misuse of, or improperly reported use of, the MMSE. Six recommendations are provided to minimize these threats in future research.

  19. Evaluation of 2 cognitive abilities tests in a dual-task environment

    NASA Technical Reports Server (NTRS)

    Vidulich, M. A.; Tsang, P. S.

    1986-01-01

    Most real-world operators are required to perform multiple tasks simultaneously. In some cases, such as flying a high-performance aircraft or troubleshooting a failing nuclear power plant, the operator's ability to time-share or "process in parallel" can be driven to extremes. This has created interest in selection tests of cognitive abilities. Two tests that have been suggested are the Dichotic Listening Task and the Cognitive Failures Questionnaire. Correlations between these test results and time-sharing performance were obtained and the validity of these tests was examined. The primary task was a tracking task with dynamically varying bandwidth, performed either alone or concurrently with either another tracking task or a spatial transformation task. The results were: (1) an unexpected negative correlation was detected between the two tests; (2) the lack of correlation between either test and task performance made the predictive utility of the test scores appear questionable; and (3) pilots made more errors on the Dichotic Listening Task than college students.

  20. Kidney Failure and ESRD in the Atherosclerosis Risk in Communities (ARIC) Study: Comparing Ascertainment of Treated and Untreated Kidney Failure in a Cohort Study.

    PubMed

    Rebholz, Casey M; Coresh, Josef; Ballew, Shoshana H; McMahon, Blaithin; Whelton, Seamus P; Selvin, Elizabeth; Grams, Morgan E

    2015-08-01

    Linkage to the US Renal Data System (USRDS) registry commonly is used to identify end-stage renal disease (ESRD) cases, or kidney failure treated with dialysis or transplantation, but it underestimates the total burden of kidney failure. This study validates a kidney failure definition that includes both kidney failure treated and not treated by dialysis or transplantation. It compares kidney failure risk factors and outcomes using this broader definition with USRDS-identified ESRD risk factors and outcomes. Diagnostic test study with stratified random sampling of hospitalizations for chart review. Atherosclerosis Risk in Communities Study (n=11,530; chart review, n=546). USRDS-identified ESRD; treated or untreated kidney failure defined by USRDS-identified ESRD or International Classification of Diseases, Ninth or Tenth Revision, Clinical Modification (ICD-9-CM/ICD-10-CM) code for hospitalization or death. For ESRD, determination of permanent dialysis therapy or transplantation; for kidney failure, determination of permanent dialysis therapy, transplantation, or estimated glomerular filtration rate < 15 mL/min/1.73 m(2). During 13 years' median follow-up, 508 kidney failure cases were identified, including 173 (34.1%) from the USRDS registry. ESRD and kidney failure incidence were 1.23 and 3.66 cases per 1,000 person-years in the overall population and 1.35 and 6.59 cases per 1,000 person-years among participants older than 70 years, respectively. Other risk-factor associations were similar between ESRD and kidney failure, except diabetes and albuminuria, which were stronger for ESRD. Survivals at 1 and 5 years were 74.0% and 24.0% for ESRD and 59.8% and 31.6% for kidney failure, respectively. Sensitivity and specificity were 88.0% and 97.3% comparing the kidney failure ICD-9-CM/ICD-10-CM code algorithm to chart review; for USRDS-identified ESRD, sensitivity and specificity were 94.9% and 100.0%. Some medical charts were incomplete. A kidney failure definition including treated and untreated disease identifies more cases than linkage to the USRDS registry alone, particularly among older adults. Future studies might consider reporting both USRDS-identified ESRD and a more inclusive kidney failure definition. Copyright © 2015 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
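
    The sensitivity and specificity quoted for the ICD-code algorithm versus chart review follow from a standard 2x2 confusion-table calculation. The counts below are invented, chosen only so the arithmetic reproduces the quoted 88.0% and 97.3%; they are not the study's chart-review tallies:

        # Sensitivity/specificity of a case-identification algorithm vs. chart review.
        # Counts are illustrative assumptions, not the ARIC chart-review data.
        tp, fn = 88, 12     # algorithm-positive / -negative among chart-confirmed cases
        tn, fp = 360, 10    # algorithm-negative / -positive among chart-confirmed non-cases

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")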

  1. Individual prediction of heart failure among childhood cancer survivors.

    PubMed

    Chow, Eric J; Chen, Yan; Kremer, Leontien C; Breslow, Norman E; Hudson, Melissa M; Armstrong, Gregory T; Border, William L; Feijen, Elizabeth A M; Green, Daniel M; Meacham, Lillian R; Meeske, Kathleen A; Mulrooney, Daniel A; Ness, Kirsten K; Oeffinger, Kevin C; Sklar, Charles A; Stovall, Marilyn; van der Pal, Helena J; Weathers, Rita E; Robison, Leslie L; Yasui, Yutaka

    2015-02-10

    To create clinically useful models that incorporate readily available demographic and cancer treatment characteristics to predict individual risk of heart failure among 5-year survivors of childhood cancer. Survivors in the Childhood Cancer Survivor Study (CCSS) free of significant cardiovascular disease 5 years after cancer diagnosis (n = 13,060) were observed through age 40 years for the development of heart failure (ie, requiring medications or heart transplantation or leading to death). Siblings (n = 4,023) established the baseline population risk. An additional 3,421 survivors from Emma Children's Hospital (Amsterdam, the Netherlands), the National Wilms Tumor Study, and the St Jude Lifetime Cohort Study were used to validate the CCSS prediction models. Heart failure occurred in 285 CCSS participants. Risk scores based on selected exposures (sex, age at cancer diagnosis, and anthracycline and chest radiotherapy doses) achieved an area under the curve of 0.74 and concordance statistic of 0.76 at or through age 40 years. Validation cohort estimates ranged from 0.68 to 0.82. Risk scores were collapsed to form statistically distinct low-, moderate-, and high-risk groups, corresponding to cumulative incidences of heart failure at age 40 years of 0.5% (95% CI, 0.2% to 0.8%), 2.4% (95% CI, 1.8% to 3.0%), and 11.7% (95% CI, 8.8% to 14.5%), respectively. In comparison, siblings had a cumulative incidence of 0.3% (95% CI, 0.1% to 0.5%). Using information available to clinicians soon after completion of childhood cancer therapy, individual risk for subsequent heart failure can be predicted with reasonable accuracy and discrimination. These validated models provide a framework on which to base future screening strategies and interventions. © 2014 by American Society of Clinical Oncology.
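
    The CCSS models build a risk score from sex, age at cancer diagnosis, anthracycline dose, and chest radiotherapy dose and report its discrimination as an area under the curve. The published point assignments are not reproduced here, so the sketch below uses invented weights and simulated survivors purely to show how such a points-based score is computed and its discrimination checked; it assumes scikit-learn is available:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 5000

        # Invented survivor characteristics (not CCSS data).
        female = rng.integers(0, 2, n)
        age_dx = rng.integers(0, 21, n)                    # age at cancer diagnosis, years
        anthracycline = rng.choice([0, 100, 250, 400], n)  # cumulative dose, mg/m^2
        chest_rt = rng.choice([0, 20, 35], n)              # chest radiotherapy dose, Gy

        # Invented point assignments, for illustration only (not the published weights).
        score = female + (age_dx < 5) + 3 * (anthracycline >= 250) + 2 * (chest_rt >= 20)

        # Invented outcome: heart-failure risk rises with the score.
        heart_failure = rng.random(n) < 0.01 * (1 + score)

        print(f"AUC of the points-based score: {roc_auc_score(heart_failure, score):.2f}")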

  2. Neural Network-Based Sensor Validation for Turboshaft Engines

    NASA Technical Reports Server (NTRS)

    Moller, James C.; Litt, Jonathan S.; Guo, Ten-Huei

    1998-01-01

    Sensor failure detection, isolation, and accommodation using a neural network approach is described. An auto-associative neural network is configured to perform dimensionality reduction on the sensor measurement vector and provide estimated sensor values. The sensor validation scheme is applied in a simulation of the T700 turboshaft engine in closed loop operation. Performance is evaluated based on the ability to detect faults correctly and maintain stable and responsive engine operation. The set of sensor outputs used for engine control forms the network input vector. Analytical redundancy is verified by training networks of successively smaller bottleneck layer sizes. Training data generation and strategy are discussed. The engine maintained stable behavior in the presence of sensor hard failures. With proper selection of fault determination thresholds, stability was maintained in the presence of sensor soft failures.
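
    A minimal sketch of the idea, assuming scikit-learn and synthetic data rather than T700 engine signals: an auto-associative network (inputs and targets are the same sensor vector) with a small bottleneck layer is trained on healthy data, and a sensor whose reconstruction residual exceeds a fault-determination threshold is flagged and replaced by its estimate. The channel count, threshold, and fault injection are assumptions:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)

        # Synthetic "healthy" data: 5 correlated sensor channels (stand-ins for engine sensors).
        latent = rng.normal(size=(2000, 2))
        mix = rng.normal(size=(2, 5))
        X = latent @ mix + 0.01 * rng.normal(size=(2000, 5))

        # Auto-associative network: the bottleneck hidden layer forces dimensionality reduction.
        net = MLPRegressor(hidden_layer_sizes=(8, 2, 8), activation="tanh",
                           max_iter=3000, random_state=0)
        net.fit(X, X)

        # Inject a hard failure on sensor 3 of a new sample and inspect the residuals.
        x = latent[:1] @ mix
        x_faulty = x.copy()
        x_faulty[0, 3] += 5.0                       # stuck/biased sensor
        estimate = net.predict(x_faulty)
        residual = np.abs(x_faulty - estimate)[0]

        threshold = 0.5                             # fault-determination threshold (assumed)
        for i, r in enumerate(residual):
            status = "FAULT -> substitute estimate" if r > threshold else "ok"
            print(f"sensor {i}: residual {r:.2f} {status}")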

  3. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    NASA Astrophysics Data System (ADS)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Gaining an understanding of degradation mechanisms and their characterization are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on design of accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
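
    As a hedged illustration of what an acceleration factor model for a damp-heat-driven mechanism can look like (the paper's actual model form and constants are not reproduced here), the sketch below uses the widely cited Peck humidity/temperature model; every numeric value is an assumption:

        import math

        def peck_acceleration_factor(rh_test, t_test_c, rh_use, t_use_c, n=2.5, ea_ev=0.7):
            """Peck model: AF = (RH_test/RH_use)**n * exp(Ea/k * (1/T_use - 1/T_test))."""
            k_ev = 8.617e-5                      # Boltzmann constant, eV/K
            t_test, t_use = t_test_c + 273.15, t_use_c + 273.15
            return (rh_test / rh_use) ** n * math.exp(ea_ev / k_ev * (1.0 / t_use - 1.0 / t_test))

        # Damp heat test (85 C / 85% RH) vs. an assumed field condition (30 C / 60% RH).
        af = peck_acceleration_factor(85, 85, 60, 30)
        hours_in_field = 1000 * af               # 1000 h of damp heat, scaled to field exposure
        print(f"acceleration factor ~ {af:.0f}; "
              f"1000 test hours ~ {hours_in_field/8760:.1f} field years")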

  4. Refusal to participate in heart failure studies: do age and gender matter?

    PubMed Central

    Harrison, Jordan M; Jung, Miyeon; Lennie, Terry A; Moser, Debra K; Smith, Dean G; Dunbar, Sandra B; Ronis, David L; Koelling, Todd M; Giordani, Bruno; Riley, Penny L; Pressler, Susan J

    2018-01-01

    Aims and objectives The objective of this retrospective study was to evaluate reasons heart failure patients decline study participation, to inform interventions to improve enrollment. Background Failure to enrol older heart failure patients (age > 65) and women in studies may lead to sampling bias, threatening study validity. Design This study was a retrospective analysis of refusal data from four heart failure studies that enrolled 788 patients in four states. Methods Chi-Square and a pooled t-test were computed to analyse refusal data (n = 300) obtained from heart failure patients who were invited to participate in one of the four studies but declined. Results Refusal reasons from 300 patients (66% men, mean age 65.33) included: not interested (n = 163), too busy (n = 64), travel burden (n = 50), too sick (n = 38), family problems (n = 14), too much commitment (n = 13) and privacy concerns (n = 4). Chi-Square analyses showed no differences in frequency of reasons (p > 0.05) between men and women. Patients who refused were older, on average, than study participants. Conclusions Some reasons were patient-dependent; others were study-dependent. With 'not interested' as the most common reason, cited by over 50% of patients who declined, recruitment measures should be targeted at stimulating patients' interest. Additional efforts may be needed to recruit older participants. However, reasons for refusal were consistent regardless of gender. Relevance to clinical practice Heart failure researchers should proactively approach a greater proportion of women and patients over age 65. With no gender differences in type of reasons for refusal, similar recruitment strategies can be used for men and women. However, enrolment of a representative proportion of women in heart failure studies has proven elusive and may require significant effort from researchers. Employing strategies to stimulate interest in studies is essential for recruiting heart failure patients, who overwhelmingly cited lack of interest as the top reason for refusal. PMID:26914834

  5. USE OF BROMOERGOCRYPTINE IN THE VALIDATION OF PROTOCOLS FOR THE ASSESSMENT OF MECHANISMS OF EARLY PREGNANCY LOSS IN THE RAT

    EPA Science Inventory

    Validated protocols for evaluating maternally mediated mechanisms of early pregnancy failure in rodents are needed for use in the risk assessment process. To supplement previous efforts in the validation of a panel of protocols assembled for this purpose, bromoergocryptine (BEC) ...

  6. The role of low cognitive effort and negative symptoms in neuropsychological impairment in schizophrenia.

    PubMed

    Strauss, Gregory P; Morra, Lindsay F; Sullivan, Sara K; Gold, James M

    2015-03-01

    Two experiments were conducted to examine whether insufficient effort, negative symptoms (e.g., avolition, anhedonia), and psychological variables (e.g., anhedonia and perception of low cognitive resources) predict generalized neurocognitive impairment in individuals with schizophrenia (SZ). In Experiment 1, participants included 97 individuals with SZ and 63 healthy controls (CN) who completed the Victoria Symptom Validity Test (VSVT), the MATRICS Consensus Cognitive Battery (MCCB), and self-report anhedonia questionnaires. In Experiment 2, participants included 46 individuals with SZ and 33 CN who completed Green's Word Memory Test (WMT), the MCCB, and self-reports of anhedonia, defeatist performance beliefs, and negative expectancy appraisals. Results indicated that a low proportion of individuals with SZ failed effort testing (1.0% Experiment 1; 15.2% Experiment 2); however, global neurocognitive impairment was significantly predicted by low effort and negative symptoms. Findings indicate that low effort does not threaten the validity of neuropsychological test results in the majority of individuals with schizophrenia; however, effort testing may be useful in SZ patients with severe negative symptoms who may be more likely to put forth insufficient effort due to motivational problems. Although the base rate of failure is relatively low, it may be beneficial to screen for insufficient effort in SZ and exclude individuals who fail effort testing from pharmacological or cognitive remediation trials. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  7. NASA Lewis advanced IPV nickel-hydrogen technology

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Britton, Doris L.

    1993-01-01

    Individual pressure vessel (IPV) nickel-hydrogen technology was advanced at NASA Lewis and under Lewis contracts. Some of the advancements are as follows: to use 26 percent potassium hydroxide electrolyte to improve cycle life and performance, to modify the state of the art cell design to eliminate identified failure modes and further improve cycle life, and to develop a lightweight nickel electrode to reduce battery mass, hence reduce launch and/or increase satellite payload. A breakthrough in the LEO cycle life of individual pressure vessel nickel-hydrogen battery cells was reported. The cycle life of boiler plate cells containing 26 percent KOH electrolyte was about 40,000 accelerated LEO cycles at 80 percent DOD compared to 3,500 cycles for cells containing 31 percent KOH. Results of the boiler plate cell tests have been validated at NWSC, Crane, Indiana. Forty-eight ampere-hour flight cells containing 26 and 31 percent KOH have undergone real time LEO cycle life testing at an 80 percent DOD, 10 C. The three cells containing 26 percent KOH failed on the average at cycle 19,500. The three cells containing 31 percent KOH failed on the average at cycle 6,400. Validation testing of NASA Lewis 125 Ah advanced design IPV nickel-hydrogen flight cells is also being conducted at NWSC, Crane, Indiana under a NASA Lewis contract. This consists of characterization, storage, and cycle life testing. There was no capacity degradation after 52 days of storage with the cells in the discharged state, on open circuit, 0 C, and a hydrogen pressure of 14.5 psia. The catalyzed wall wick cells have been cycled for over 22,694 cycles with no cell failures in the continuing test. All three of the non-catalyzed wall wick cells failed (cycles 9,588; 13,900; and 20,575). Cycle life test results of the Fibrex nickel electrode has demonstrated the feasibility of an improved nickel electrode giving a higher specific energy nickel-hydrogen cell. A nickel-hydrogen boiler plate cell using an 80 mil thick, 90 percent porous Fibrex nickel electrode has been cycled for 10,000 cycles at 40 percent DOD.

  8. The Caregiver Burden Questionnaire for Heart Failure (CBQ-HF): face and content validity

    PubMed Central

    2013-01-01

    Background A new caregiver burden questionnaire for heart failure (CBQ-HF v1.0) was developed based on previously conducted qualitative interviews with HF caregivers and with input from HF clinical experts. Version 1.0 of the CBQ-HF included 41 items measuring the burden associated with caregiving in the following domains: physical, emotional/psychological, social, and impact on caregiver’s life. Following initial development, the next stage was to evaluate caregivers’ understanding of the questionnaire items and their conceptual relevance. Methods To evaluate the face and content validity of the new questionnaire, cognitive interviews were conducted with caregivers of heart failure patients. The cognitive interviews included a “think aloud” exercise as the patient completed the CBQ-HF, followed by more specific probing questions to better understand caregivers’ understanding, interpretation and the relevance of the instructions, items, response scales and recall period. Results Eighteen caregivers of heart failure patients were recruited. The mean age of the caregivers was 50 years (SD = 10.2). Eighty-three percent of caregivers were female and most commonly the patient was either a spouse (44%) or a parent (28%). Among the patients 55% were NYHA Class 2 and 45% were NYHA Class 3 or 4. The caregiver cognitive interviews demonstrated that the CBQ-HF was well understood, relevant and consistently interpreted. From the initial 41 item questionnaire, fifteen items were deleted due to conceptual overlap and/or item redundancy. The final 26-item CBQ-HF (v3.0) uses a 5-point Likert severity scale, assessing 4 domains of physical, emotional/psychological, social and lifestyle burdens using a 4-week recall period. Conclusions The CBQ-HF (v3.0) is a comprehensive and relevant measure of subjective caregiver burden with strong content validity. This study has established that the CBQ-HF (v3.0) has strong face and content validity and should be valuable as an outcomes measure to help understand and monitor the relationship between patient heart failure severity and caregiver burden. A Translatability AssessmentSM of the measure has since been performed confirming the cultural appropriateness of the measure and psychometric validation is planned for the future to further explore the reliability, and validity of the new questionnaire in a larger caregiver sample. PMID:23706131

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  10. Validation techniques for fault emulation of SRAM-based FPGAs

    DOE PAGES

    Quinn, Heather; Wirthlin, Michael

    2015-08-07

    A variety of fault emulation systems have been created to study the effect of single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies, for verifying the effectiveness of mitigation techniques, for understanding error signatures and failure modes in FPGAs, and for failure rate estimation. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation system does not mimic the radiation environment, it will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.
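
    One simple check in the spirit of the abstract's validation goal is to ask whether fault injection produces the same distribution of error signatures as a radiation (beam) test, for example with a chi-square test over signature categories. The categories and counts below are invented for illustration and are not drawn from the paper; the sketch assumes SciPy:

        from scipy.stats import chi2_contingency

        # Counts of observed error signatures (invented for illustration).
        # Rows: fault-emulation campaign vs. accelerated radiation (beam) test.
        signatures = ["silent data corruption", "output mismatch", "hang", "config readback error"]
        counts = [
            [412, 231, 57, 19],   # fault emulation
            [389, 247, 49, 25],   # beam test
        ]

        chi2, p, dof, _ = chi2_contingency(counts)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
        print("distributions are consistent" if p > 0.05 else "distributions differ")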

  11. Taming Test Anxiety: The Activation of Failure-Related Concepts Enhances Cognitive Test Performance of Test-Anxious Students

    ERIC Educational Resources Information Center

    Tempel, Tobias; Neumann, Roland

    2016-01-01

    We investigated processes underlying performance decrements of highly test-anxious persons. Three experiments contrasted conditions that differed in the degree of activation of concepts related to failure. Participants memorized a list of words either containing words related to failure or containing no words related to failure in Experiment 1. In…

  12. 49 CFR Appendix D to Part 230 - Civil Penalty Schedule

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... the boiler: 1,000 / 2,000
    230.36 Hydrostatic testing of boilers:
      (a) Failure to perform hydrostatic test of boiler as required: 1,500 / 3,000
      (b) Failure to properly perform hydrostatic test: 1,500 / 3,000
      (c) Failure to properly inspect boiler after conducting hydrostatic test above MAWP: 1,500 / 3,000
    230.37 Failure...

  13. Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.

    Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development in the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.

  14. Validation of a Spanish translation of the CLOX for use in Hispanic samples: the Hispanic EPESE study.

    PubMed

    Royall, Donald R; Espino, David V; Polk, Marsha J; Verdeja, Regina; Vale, Sandra; Gonzales, Hector; Palmer, Raymond R; Markides, Kyriakos P

    2003-02-01

    Clock drawing tests (CDT) appear to be less vulnerable to linguistic, cultural, or educational bias than traditional dementia screening instruments. We investigated a Spanish language translation of CLOX, an executive CDT, in a community sample of Hispanic elders. In-home CLOX evaluations of 1309 Mexican-American elders were reviewed. Both CLOX1 (an executive CDT) and CLOX2 (a constructional CDT) showed good internal consistency (Cronbach's alpha; both alpha = 0.82). Cultural-demographic variables had little effect on CLOX scores. Although language had a significant effect on CLOX1 failure rates, this was not mediated by age, education, acculturation, or income. These results suggest that the Spanish CLOX can be validly administered to community-based Hispanic elder samples regardless of education or acculturation. Copyright 2003 John Wiley & Sons, Ltd.
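
    For reference, the internal-consistency statistic quoted here (and for the questionnaires elsewhere in this listing) is Cronbach's alpha, computed from item and total-score variances as sketched below; the score matrix is invented:

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        # Invented item scores for 8 subjects on a 4-item scale.
        scores = np.array([
            [3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 4, 4, 5],
            [1, 2, 1, 2],
            [3, 3, 4, 3],
            [5, 4, 5, 4],
            [2, 3, 2, 2],
            [4, 5, 4, 4],
        ])
        print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")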

  15. IRAC Full-Scale Flight Testbed Capabilities

    NASA Technical Reports Server (NTRS)

    Lee, James A.; Pahle, Joseph; Cogan, Bruce R.; Hanson, Curtis E.; Bosworth, John T.

    2009-01-01

    Overview: Provide validation of adaptive control law concepts through full-scale flight evaluation in a representative avionics architecture. Develop an understanding of the aircraft dynamics of current vehicles in damaged and upset conditions. Real-world conditions include: a) turbulence, sensor noise, and feedback biases; and b) coupling between the pilot and the adaptive system. Simulated damage includes 1) "B" matrix (surface) failures and 2) "A" matrix failures. Evaluate robustness of control systems to anticipated and unanticipated failures.

  16. Gender-Related and Age-Related Differences in Implantable Defibrillator Recipients: Results From the Pacemaker and Implantable Defibrillator Leads Survival Study ("PAIDLESS").

    PubMed

    Feldman, Alyssa M; Kersten, Daniel J; Chung, Jessica A; Asheld, Wilbur J; Germano, Joseph; Islam, Shahidul; Cohen, Todd J

    2015-12-01

    The purpose of this study was to investigate the influences of gender and age on defibrillator lead failure and patient mortality. The specific influences of gender and age on defibrillator lead failure have not previously been investigated. This study analyzed the differences in gender and age in relation to defibrillator lead failure and mortality of patients in the Pacemaker and Implantable Defibrillator Leads Survival Study ("PAIDLESS"). PAIDLESS includes all patients at Winthrop University Hospital who underwent defibrillator lead implantation between February 1, 1996 and December 31, 2011. Male and female patients were compared within each age decile, beginning at 15 years old, to analyze lead failure and patient mortality. Statistical analyses were performed using Wilcoxon rank-sum test, Fisher's exact test, Kaplan-Meier analysis, and multivariable Cox regression models. P<.05 was considered statistically significant. No correction for multiple comparisons was performed for the subgroup analyses. A total of 3802 patients (2812 men and 990 women) were included in the analysis. The mean age was 70 ± 13 years (range, 15-94 years). Kaplan-Meier analysis found that between 45 and 54 years of age, leads implanted in women failed significantly faster than in men (P=.03). Multivariable Cox regression models were built to validate this finding, and they confirmed that male gender was an independent protective factor of lead failure in the 45 to 54 years group (for male gender: HR, 0.37; 95% confidence interval, 0.14-0.96; P=.04). Lead survival time for women in this age group was 13.4 years (standard error, 0.6), while leads implanted in men of this age group survived 14.7 years (standard error, 0.3). Although there were significant differences in lead failure, no differences in mortality between the genders were found for any ages or within each decile. This study is the first to compare defibrillator lead failure and patient mortality in relation to gender and age deciles at a single large implanting center. Within the 45 to 54 years group, leads implanted in women failed faster than in men. Male gender was found to be an independent protective factor in lead survival. This study emphasizes the complex interplay between gender and age with respect to implantable defibrillator lead failure and mortality.

  17. Validation test of 125 Ah advanced design IPV nickel-hydrogen flight cells

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1993-01-01

    An update of validation test results confirming the advanced design nickel-hydrogen cell is presented. An advanced 125 Ah individual pressure vessel Ni-H cell was designed. The primary function of the advanced cell is to store and deliver energy for long-term LEO spacecraft missions. The new features of this design are: (1) use of 26 percent rather than 31 percent KOH electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack to accommodate Ni electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of Ni electrode expansion. Six 125 Ah flight cells based on this design were fabricated; the catalyzed wall wick cells have been cycled for over 19,000 cycles with no cell failures in the continuing test. Two of the noncatalyzed wall wick cells failed (cycles 9588 and 13,900).

  18. Ambulatory heart rate range predicts mode-specific mortality and hospitalisation in chronic heart failure.

    PubMed

    Cubbon, Richard M; Ruff, Naomi; Groves, David; Eleuteri, Antonio; Denby, Christine; Kearney, Lorraine; Ali, Noman; Walker, Andrew M N; Jamil, Haqeel; Gierula, John; Gale, Chris P; Batin, Phillip D; Nolan, James; Shah, Ajay M; Fox, Keith A A; Sapsford, Robert J; Witte, Klaus K; Kearney, Mark T

    2016-02-01

    We aimed to define the prognostic value of the heart rate range during a 24 h period in patients with chronic heart failure (CHF). Prospective observational cohort study of 791 patients with CHF associated with left ventricular systolic dysfunction. Mode-specific mortality and hospitalisation were linked with ambulatory heart rate range (AHRR; calculated as maximum minus minimum heart rate using 24 h Holter monitor data, including paced and non-sinus complexes) in univariate and multivariate analyses. Findings were then corroborated in a validation cohort of 408 patients with CHF with preserved or reduced left ventricular ejection fraction. After a mean 4.1 years of follow-up, increasing AHRR was associated with reduced risk of all-cause, sudden, non-cardiovascular and progressive heart failure death in univariate analyses. After accounting for characteristics that differed between groups above and below median AHRR using multivariate analysis, AHRR remained strongly associated with all-cause mortality (HR 0.991/bpm increase in AHRR (95% CI 0.999 to 0.982); p=0.046). AHRR was not associated with the risk of any non-elective hospitalisation, but was associated with heart-failure-related hospitalisation. AHRR was modestly associated with the SD of normal-to-normal beats (R(2)=0.2; p<0.001) and with peak exercise-test heart rate (R(2)=0.33; p<0.001). Analysis of the validation cohort revealed AHRR to be associated with all-cause and mode-specific death as described in the derivation cohort. AHRR is a novel and readily available prognosticator in patients with CHF, which may reflect autonomic tone and exercise capacity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
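
    The exposure itself is straightforward to derive from a 24 h Holter recording as the abstract defines it: maximum minus minimum heart rate over the recording, including paced and non-sinus complexes. The synthetic heart-rate trace below is an assumption; the hazard ratio per bpm is the one reported in the abstract:

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic 24 h heart-rate series sampled once per minute (not patient data):
        # a circadian swing plus noise, dipping at night and rising with daytime activity.
        minutes = np.arange(24 * 60)
        hr = 70 + 15 * np.sin(2 * np.pi * (minutes / 1440 - 0.3)) + rng.normal(0, 4, minutes.size)

        ahrr = hr.max() - hr.min()        # ambulatory heart rate range, bpm
        print(f"AHRR = {ahrr:.0f} bpm")

        # The study reports an adjusted hazard ratio of 0.991 per 1 bpm increase in AHRR
        # for all-cause mortality; e.g. the relative hazard for a 20 bpm wider range:
        print(f"relative hazard for +20 bpm AHRR ~ {0.991 ** 20:.2f}")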

  19. MO-G-BRE-09: Validating FMEA Against Incident Learning Data: A Study in Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Cao, N; Young, L

    2014-06-15

    Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA analysis of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10 and RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.
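
    The RPN ranking described above is simply the product of the three 1-10 ratings. A minimal sketch, with invented failure modes and ratings rather than those from the study:

        # Risk Priority Number = severity x occurrence x detectability (each rated 1-10).
        # Failure modes and ratings are invented examples, not the study's data.
        failure_modes = [
            ("wrong CT dataset imported",        9, 2, 4),
            ("incorrect prescription dose",     10, 1, 3),
            ("couch parameters not transferred", 6, 4, 5),
            ("target contour on wrong series",   8, 3, 6),
        ]

        ranked = sorted(
            ((name, s * o * d) for name, s, o, d in failure_modes),
            key=lambda x: x[1], reverse=True,
        )
        for name, rpn in ranked:
            print(f"RPN {rpn:4d}  {name}")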

  20. An End-To-End Test of A Simulated Nuclear Electric Propulsion System

    NASA Technical Reports Server (NTRS)

    VanDyke, Melissa; Hrbud, Ivana; Goddfellow, Keith; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    The Safe Affordable Fission Engine (SAFE) test series addresses Phase I Space Fission Systems issues, in particular non-nuclear testing and system integration issues, leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, a conversion system, and a thruster, where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.

  1. Real-Time Smart Grids Control for Preventing Cascading Failures and Blackout using Neural Networks: Experimental Approach for N-1-1 Contingency

    NASA Astrophysics Data System (ADS)

    Zarrabian, Sina; Belkacemi, Rabie; Babalola, Adeniyi A.

    2016-12-01

    In this paper, a novel intelligent control is proposed based on Artificial Neural Networks (ANN) to mitigate cascading failure (CF) and prevent blackout in smart grid systems after N-1-1 contingency condition in real-time. The fundamental contribution of this research is to deploy the machine learning concept for preventing blackout at early stages of its occurrence and to make smart grids more resilient, reliable, and robust. The proposed method provides the best action selection strategy for adaptive adjustment of generators' output power through frequency control. This method is able to relieve congestion of transmission lines and prevent consecutive transmission line outage after N-1-1 contingency condition. The proposed ANN-based control approach is tested on an experimental 100 kW test system developed by the authors to test intelligent systems. Additionally, the proposed approach is validated on the large-scale IEEE 118-bus power system by simulation studies. Experimental results show that the ANN approach is very promising and provides accurate and robust control by preventing blackout. The technique is compared to a heuristic multi-agent system (MAS) approach based on communication interchanges. The ANN approach showed more accurate and robust response than the MAS algorithm.

  2. Reliability Programs for Nonelectronic Designs. Volume 2

    DTIC Science & Technology

    1983-04-01

    afforded. Differences between critical and minor failures must be defined in the RFP so that the test need not be stopped for minor failures. However... not be afforded. Specialized test plans must be developed for nonelectronic equipment. First, differences between critical and minor failures must be... determined prior to initiating the test program so that the test need not be stopped for minor failures. Second, although the test must be interrupted

  3. Silky bent grass resistance to herbicides: one year of monitoring in Belgium.

    PubMed

    Henriet, F; Bodson, B; Morales, R Meza

    2013-01-01

    Silky bent grass (Apera spica-venti (L.) P. Beauv.) is a common weed of cereal crops widely spread in Northern and Eastern Europe (Germany, Czech Republic,...), Northern Asia, Siberia and Canada. Up to now, no case of resistance has been detected in Belgium, but some chemical weeding failures have been observed in fields in Wallonia. During summer 2011, 37 seed samples of Apera spica-venti were collected in Wallonia and submitted to resistance tests under controlled conditions. Three modes of action were tested: acetyl coenzyme-A carboxylase inhibitors (pinoxaden and cycloxydim), acetolactate synthase inhibitors (mesosulfuron+iodosulfuron, pyroxsulam and sulfometuron) and photosynthesis inhibitors (isoproturon). One susceptible standard population was included in the test in order to validate it and to permit classification of the wild populations according to the "R" rating system developed by Moss et al (2007). Most populations were susceptible, but some showed resistance to at least one of the three tested modes of action.

  4. NASA Double Asteroid Redirection Test (DART) Trajectory Validation and Robustness

    NASA Technical Reports Server (NTRS)

    Sarli, Bruno V.; Ozimek, Martin T.; Atchison, Justin A.; Englander, Jacob A.; Barbee, Brent W.

    2017-01-01

    The Double Asteroid Redirection Test (DART) mission will be the first to test the concept of a kinetic impactor. Several studies have been made on asteroid redirection and impact mitigation; however, to date no mission has tested the proposed concepts. An impact study on a representative body allows the measurement of the effects on the target's orbit and physical structure. With this goal, DART's objective is to verify the effectiveness of the kinetic impact concept for planetary defense. The spacecraft uses solar electric propulsion to escape Earth, fly by (138971) 2001 CB21 for an impact rehearsal, and impact Didymos-B, the secondary body of the binary (65803) Didymos system. This work focuses on the heliocentric transfer design part of the mission, with validation of the baseline trajectory, a performance comparison to other mission objectives, and an assessment of the baseline's robustness to missed-thrust events. Results show good performance of the selected trajectory against the different mission objectives: latest possible escape date, maximum kinetic energy on impact, shortest possible time of flight, and use of an Earth swing-by. The baseline trajectory was shown to be robust to a missed thrust, with a 1% fuel margin being enough to recover the mission for failures of more than 14 days.

  5. Crack Growth Simulation and Residual Strength Prediction in Airplane Fuselages

    NASA Technical Reports Server (NTRS)

    Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1999-01-01

    The objectives were to create a capability to simulate curvilinear crack growth and ductile tearing in aircraft fuselages subjected to widespread fatigue damage and to validate it with tests. The analysis methodology and software program (FRANC3D/STAGS) developed herein allow engineers to maintain aging aircraft economically, while ensuring continuous airworthiness, and to design more damage-tolerant aircraft for the next generation. Simulations of crack growth in fuselages were described. The crack tip opening angle (CTOA) fracture criterion, obtained from laboratory tests, was used to predict the fracture behavior of fuselage panel tests. Geometrically nonlinear, elastic-plastic, thin-shell finite element crack growth analyses were conducted. Comparisons of stress distributions, multiple stable crack growth history, and residual strength between measured and predicted results were made to assess the validity of the methodology. Incorporation of residual plastic deformations and tear strap failure was essential for accurate residual strength predictions. Issues related to predicting crack trajectory in fuselages were also discussed. A directional criterion, including T-stress and fracture toughness orthotropy, was developed. Curvilinear crack growth was simulated in coupon and fuselage panel tests. Both T-stress and fracture toughness orthotropy were essential to predict the observed crack paths. Flapping of fuselages was predicted. Measured and predicted results agreed reasonably well.

  6. Annual Report Nuclear Energy Research and Development Program Nuclear Energy Research Initiative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, LM

    2003-02-13

    NERI Project No. 2000-0109 began in August 2000 and has three tasks. The first project year addressed Task 1, namely development of nonlinear prognostication for critical equipment in nuclear power facilities. That work is described in the first year's annual report (ORNLTM-2001/195). The current (second) project year (FY02) addresses Task 2, while the third project year will address Tasks 2-3. This report describes the work for the second project year, spanning August 2001 through August 2002, including status of the tasks, issues and concerns, cost performance, and a status summary of tasks. The objective of the second project year's work is a compelling demonstration of the nonlinear prognostication algorithm using much more data. The guidance from Dr. Madeline Feltus (DOE/NE-20) is that it would be preferable to show forewarning of failure for different kinds of nuclear-grade equipment, as opposed to many different failure modes from one piece of equipment. Long-term monitoring of operational utility equipment is possible in principle, but is not practically feasible for the following reason. Time and funding constraints for this project do not allow us to monitor the many machines (thousands) that would be necessary to obtain even a few failure sequences, due to low failure rates (<10^-3/year) in the operational environment. Moreover, the ONLY way to guarantee a controlled failure sequence is to seed progressively larger faults in the equipment or to overload the equipment for accelerated tests. Both of these approaches are infeasible for operational utility machinery, but are straightforward in a test environment. Our subcontractor has provided such test sequences. Thus, we have revised Tasks 2.1-2.4 to analyze archival test data from such tests. The second phase of our work involves validation of the nonlinear prognostication over the second and third years of the proposed work. Recognizing the inherent limitations outlined in the previous paragraph, Dr. Feltus urged Oak Ridge National Laboratory (ORNL) to contact other researchers for additional data from other test equipment. Consequently, we have revised the work plan for Tasks 2.1-2.2, with corresponding changes to the work plan as shown in the Status Summary of NERI Tasks. The revised tasks are as follows: Task 2.1--ORNL will obtain test data from a subcontractor and other researchers for various test equipment. This task includes development of a test plan or a description of the historical testing, as appropriate: test facility, equipment to be tested, choice of failure mode(s), testing protocol, data acquisition equipment, and resulting data from the test sequence. ORNL will analyze these data for quality, and subsequently via the nonlinear paradigm for prognostication. Task 2.2--ORNL will evaluate the prognostication capability of the nonlinear paradigm. The comparison metrics for reliability of the predictions will include the true positives, true negatives, and the forewarning times. Task 2.3--ORNL will improve the nonlinear paradigm as appropriate, in accord with the results of Tasks 2.1-2.2, to maximize the rate of true positive and true negative indications of failure. Maximal forewarning time is also highly desirable. Task 2.4--ORNL will develop advanced algorithms for phase-space distribution function (PS-DF) pattern change recognition, based on the results of Task 2.3. This implementation will provide a capability for automated prognostication as part of maintenance decision-making.
Appendix A provides a detailed description of the analysis methods, which include conventional statistics, traditional nonlinear measures, and ORNL's patented nonlinear PSDM. The body of this report focuses on the results of this analysis.

  7. Summary: Experimental validation of real-time fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform, since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no substitute for results based on actual measurements and experimentation. Such results are essential for developing a rational basis for the evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, and this is a difficult, if not impossible, task for a complex system. Also, to set up such experiments for measurements, physical hardware must exist. On the other hand, a simulation approach allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  8. NREL Begins On-Site Validation of Drivetrain Gearbox and Bearings | News |

    Science.gov Websites

    Drivetrain failure often leads to higher-than-expected operations and maintenance costs for the wind industry. The on-site validation of the drivetrain gearbox and bearings is expected to last through the spring.

  9. SASSYS pretest analysis of the THORS-SHRS experiments. [LMFBR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordner, G.L.; Dunn, F.E.

    The THORS Facility at ORNL was recently modified to allow the testing of two parallel 19-pin simulated fueled subassemblies under natural circulation conditions similar to those that might occur during a partial failure of the shutdown heat removal system (SHRS) of a liquid-metal fast breeder reactor. The planned experimental program included a series of tests at various inlet plenum temperatures to determine boiling threshold power levels and the power range for stable boiling during natural circulation operation. Pretest calculations were performed at ANL, which supplement those carried out at ORNL for the purposes of validating the SASSYS model in the natural circulation regime and of providing data which would be useful in planning the experiments.

  10. Potential surrogate endpoints for prostate cancer survival: analysis of a phase III randomized trial.

    PubMed

    Ray, Michael E; Bae, Kyounghwa; Hussain, Maha H A; Hanks, Gerald E; Shipley, William U; Sandler, Howard M

    2009-02-18

    The identification of surrogate endpoints for prostate cancer-specific survival may shorten the length of clinical trials for prostate cancer. We evaluated distant metastasis and general clinical treatment failure as potential surrogates for prostate cancer-specific survival by use of data from the Radiation Therapy and Oncology Group 92-02 randomized trial. Patients (n = 1554 randomly assigned and 1521 evaluable for this analysis) with locally advanced prostate cancer had been treated with 4 months of neoadjuvant and concurrent androgen deprivation therapy with external beam radiation therapy and then randomly assigned to no additional therapy (control arm) or 24 additional months of androgen deprivation therapy (experimental arm). Data from landmark analyses at 3 and 5 years for general clinical treatment failure (defined as documented local disease progression, regional or distant metastasis, initiation of androgen deprivation therapy, or a prostate-specific antigen level of 25 ng/mL or higher after radiation therapy) and/or distant metastasis were tested as surrogate endpoints for prostate cancer-specific survival at 10 years by use of Prentice's four criteria. All statistical tests were two-sided. At 3 years, 1364 patients were alive and contributed data for analysis. Both distant metastasis and general clinical treatment failure at 3 years were consistent with all four of Prentice's criteria for being surrogate endpoints for prostate cancer-specific survival at 10 years. At 5 years, 1178 patients were alive and contributed data for analysis. Although prostate cancer-specific survival was not statistically significantly different between treatment arms at 5 years (P = .08), both endpoints were consistent with Prentice's remaining criteria. Distant metastasis and general clinical treatment failure at 3 years may be candidate surrogate endpoints for prostate cancer-specific survival at 10 years. These endpoints, however, must be validated in other datasets.

  11. Robot-assisted training for heart failure patients - a small pilot study.

    PubMed

    Schoenrath, Felix; Markendorf, Susanne; Brauchlin, Andreas Emil; Frank, Michelle; Wilhelm, Markus Johannes; Saleh, Lanja; Riener, Robert; Schmied, Christian Marc; Falk, Volkmar

    2015-12-01

    The objective of this study was to assess robot-assisted gait therapy with the Lokomat® system in heart failure patients. Patients (n = 5) with stable heart failure and a left ventricular ejection fraction of less than 45% completed a four-week aerobic training period with three training sessions per week and an integrated dynamic resistance training of the lower limbs. Patients underwent testing of cardiac and inflammatory biomarkers. A cardiopulmonary exercise test, a quality-of-life score and an evaluation of muscular strength (peak quadriceps force) were performed. No adverse events occurred. The combined training resulted in an improvement in peak work rate (range: 6% to 36%) and peak quadriceps force (range: 3% to 80%) in all participants. Peak oxygen consumption (range: –3% to +61%) increased in three, and oxygen pulse (range: –7% to +44%) in four of five patients. The quality of life assessment indicated better well-being in all participants. NT-proBNP (+233 to –733 ng/ml) and the inflammatory biomarkers (hsCRP and IL6) decreased in four of five patients (IL6: +0.5 to –2 mg/l, hsCRP: +0.2 to –6.5 mg/l). Robot-assisted gait therapy with the Lokomat® system is feasible in heart failure patients and was safe in this trial. The combined aerobic and resistance training intervention with augmented feedback resulted in benefits in exercise capacity, muscle strength and quality of life, as well as an improvement of cardiac (NT-proBNP) and inflammatory (IL6, hsCRP) biomarkers. Results can only be considered preliminary and need further validation in larger studies. (ClinicalTrials.gov number, NCT 02146196)

  12. Validity Evidence for a Serious Game to Assess Performance on Critical Pediatric Emergency Medicine Scenarios.

    PubMed

    Gerard, James M; Scalzo, Anthony J; Borgman, Matthew A; Watson, Christopher M; Byrnes, Chelsie E; Chang, Todd P; Auerbach, Marc; Kessler, David O; Feldman, Brian L; Payne, Brian S; Nibras, Sohail; Chokshi, Riti K; Lopreiato, Joseph O

    2018-06-01

    We developed a first-person serious game, PediatricSim, to teach and assess performance on seven critical pediatric scenarios (anaphylaxis, bronchiolitis, diabetic ketoacidosis, respiratory failure, seizure, septic shock, and supraventricular tachycardia). In the game, players are placed in the role of a code leader and direct patient management by selecting from various assessment and treatment options. The objective of this study was to obtain supportive validity evidence for the PediatricSim game scores. Game content was developed by 11 subject matter experts and followed the American Heart Association's 2011 Pediatric Advanced Life Support Provider Manual and other authoritative references. Sixty subjects with three different levels of experience were enrolled to play the game. Before game play, subjects completed a 40-item written pretest of knowledge. Game scores were compared between subject groups using scoring rubrics developed for the scenarios. Validity evidence was established and interpreted according to Messick's framework. Content validity was supported by a game development process that involved expert experience, focused literature review, and pilot testing. Subjects rated the game favorably for engagement, realism, and educational value. Interrater agreement on game scoring was excellent (intraclass correlation coefficient = 0.91, 95% confidence interval = 0.89-0.9). Game scores were higher for attendings, followed by residents and then medical students (P < 0.01), with large effect sizes (1.6-4.4) for each comparison. There was a very strong, positive correlation between game and written test scores (r = 0.84, P < 0.01). These findings contribute validity evidence for PediatricSim game scores to assess knowledge of pediatric emergency medicine resuscitation.

  13. The Chelsea critical care physical assessment tool (CPAx): validation of an innovative new tool to measure physical morbidity in the general adult critical care population; an observational proof-of-concept pilot study.

    PubMed

    Corner, E J; Wood, H; Englebretsen, C; Thomas, A; Grant, R L; Nikoletou, D; Soni, N

    2013-03-01

    To develop a scoring system to measure physical morbidity in critical care - the Chelsea Critical Care Physical Assessment Tool (CPAx). The development process was iterative involving content validity indices (CVI), a focus group and an observational study of 33 patients to test construct validity against the Medical Research Council score for muscle strength, peak cough flow, Australian Therapy Outcome Measures score, Glasgow Coma Scale score, Bloomsbury sedation score, Sequential Organ Failure Assessment score, Short Form 36 (SF-36) score, days of mechanical ventilation and inter-rater reliability. Trauma and general critical care patients from two London teaching hospitals. Users of the CPAx felt that it possessed content validity, giving a final CVI of 1.00 (P<0.05). Construct validation data showed moderate to strong significant correlations between the CPAx score and all secondary measures, apart from the mental component of the SF-36 which demonstrated weak correlation with the CPAx score (r=0.024, P=0.720). Reliability testing showed internal consistency of α=0.798 and inter-rater reliability of κ=0.988 (95% confidence interval 0.791 to 1.000) between five raters. This pilot work supports proof of concept of the CPAx as a measure of physical morbidity in the critical care population, and is a cogent argument for further investigation of the scoring system. Copyright © 2012 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.

  14. Validation test of advanced technology for IPV nickel-hydrogen flight cells: Update

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1992-01-01

    Individual pressure vessel (IPV) nickel-hydrogen technology was advanced at NASA Lewis and under Lewis contracts with the intention of improving cycle life and performance. One advancement was to use 26 percent potassium hydroxide (KOH) electrolyte to improve cycle life. Another advancement was to modify the state-of-the-art cell design to eliminate identified failure modes. The modified design is referred to as the advanced design. A breakthrough in the low-earth-orbit (LEO) cycle life of IPV nickel-hydrogen cells has been previously reported. The cycle life of boiler plate cells containing 26 percent KOH electrolyte was about 40,000 LEO cycles compared to 3,500 cycles for cells containing 31 percent KOH. The boiler plate test results are in the process of being validated using flight hardware and real time LEO testing at the Naval Weapons Support Center (NWSC), Crane, Indiana under a NASA Lewis Contract. An advanced 125 Ah IPV nickel-hydrogen cell was designed. The primary function of the advanced cell is to store and deliver energy for long-term, LEO spacecraft missions. The new features of this design are: (1) use of 26 percent rather than 31 percent KOH electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while still maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack (state-of-the-art) to accommodate nickel electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are: extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of nickel electrode expansion. The advanced cell design is in the process of being validated using real time LEO cycle life testing at NWSC, Crane, Indiana. An update of validation test results confirming this technology is presented.

  15. Fundamental Research on Percussion Drilling: Improved rock mechanics analysis, advanced simulation technology, and full-scale laboratory investigations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael S. Bruno

    This report summarizes the research efforts on the DOE supported research project Percussion Drilling (DE-FC26-03NT41999), which is to significantly advance the fundamental understandings of the physical mechanisms involved in combined percussion and rotary drilling, and thereby facilitate more efficient and lower cost drilling and exploration of hard-rock reservoirs. The project has been divided into multiple tasks: literature reviews, analytical and numerical modeling, full scale laboratory testing and model validation, and final report delivery. Literature reviews document the history, pros and cons, and rock failure physics of percussion drilling in the oil and gas industries. Based on the current understandings, a conceptual drilling model is proposed for modeling efforts. Both analytical and numerical approaches are deployed to investigate drilling processes such as drillbit penetration with compression, rotation and percussion, rock response with stress propagation, damage accumulation and failure, and debris transportation inside the annulus after disintegration from the rock. For rock mechanics modeling, a dynamic numerical tool has been developed to describe rock damage and failure, including rock crushing by compressive bit load, rock fracturing by both shearing and tensile forces, and rock weakening by repetitive compression-tension loading. Besides multiple failure criteria, the tool also includes a damping algorithm to dissipate oscillation energy and a fatigue/damage algorithm to update rock properties during each impact. From the model, Rate of Penetration (ROP) and rock failure history can be estimated. For cuttings transport in the annulus, a 3D numerical particle flow model has been developed with the aid of analytical approaches. The tool can simulate cuttings movement at particle scale under laminar or turbulent fluid flow conditions and evaluate the efficiency of cuttings removal. To calibrate the modeling efforts, a series of full-scale fluid hammer drilling tests, as well as single impact tests, have been designed and executed. Both Berea sandstone and Mancos shale samples are used. In single impact tests, three impacts are sequentially loaded at the same rock location to investigate rock response to repetitive loadings. The crater depth and width are measured as well as the displacement and force in the rod and the force in the rock. Various pressure differences across the rock-indentor interface (i.e. bore pressure minus pore pressure) are used to investigate the pressure effect on rock penetration. For hammer drilling tests, an industrial fluid hammer is used to drill under both underbalanced and overbalanced conditions. Besides calibrating the modeling tool, the data and cuttings collected from the tests indicate several other important applications. For example, different rock penetrations during single impact tests may reveal why a fluid hammer behaves differently with diverse rock types and under various pressure conditions at the hole bottom. On the other hand, the shape of the cuttings from fluid hammer tests, compared to those from traditional rotary drilling methods, may help to identify the dominant failure mechanism that percussion drilling relies on. If so, encouraging such a failure mechanism may improve hammer performance. The project is summarized in this report.
Instead of compiling the information contained in the previous quarterly or other technical reports, this report focuses on the descriptions of tasks, findings, and conclusions, as well as the efforts to promote percussion drilling technologies to industry, including site visits, presentations, and publications. As a part of the final deliverables, the 3D numerical model for rock mechanics is also attached.

  16. Formal Specification and Validation of a Hybrid Connectivity Restoration Algorithm for Wireless Sensor and Actor Networks †

    PubMed Central

    Imran, Muhammad; Zafar, Nazir Ahmad

    2012-01-01

    Maintaining inter-actor connectivity is extremely crucial in mission-critical applications of Wireless Sensor and Actor Networks (WSANs), as actors have to quickly plan optimal coordinated responses to detected events. Failure of a critical actor partitions the inter-actor network into disjoint segments besides leaving a coverage hole, and thus hinders the network operation. This paper presents a Partitioning detection and Connectivity Restoration (PCR) algorithm to tolerate critical actor failure. As part of pre-failure planning, PCR determines critical/non-critical actors based on localized information and designates each critical node with an appropriate backup (preferably non-critical). The pre-designated backup detects the failure of its primary actor and initiates a post-failure recovery process that may involve coordinated multi-actor relocation. To prove the correctness, we construct a formal specification of PCR using Z notation. We model the WSAN topology as a dynamic graph and transform PCR to the corresponding formal specification using Z notation. The formal specification is analyzed and validated using the Z/Eves tool. Moreover, we simulate the specification to quantitatively analyze the efficiency of PCR. Simulation results confirm the effectiveness of PCR and show that it outperforms contemporary schemes found in the literature.
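
    The abstract does not give the algorithm's internals, but an actor whose failure partitions the inter-actor network corresponds, in graph terms, to a cut vertex (articulation point). The sketch below flags such nodes with networkx on an invented topology; it illustrates only the criticality test that PCR's pre-failure planning relies on, not the localized PCR algorithm itself.

```python
# Hedged illustration: identify actors whose failure would partition the
# inter-actor network (cut vertices / articulation points). The topology is
# hypothetical; PCR itself uses localized information rather than a global view.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("A1", "A2"), ("A2", "A3"), ("A3", "A4"),   # chain A1-A2-A3-A4: removing A2, A3, or A4 disconnects it
    ("A4", "A5"), ("A5", "A6"), ("A6", "A4"),   # cycle A4-A5-A6: A5 and A6 have redundant paths, so they are not critical
])

critical = set(nx.articulation_points(g))
for actor in sorted(g.nodes):
    role = "critical" if actor in critical else "non-critical"
    print(f"{actor}: {role}")
```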

  17. First overpower tests of metallic IFR [Integral Fast Reactor] fuel in TREAT [Transient Reactor Test Facility]: Data and analysis from tests M5, M6, and M7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, T. H.; Robinson, W. R.; Holland, J. W.

    1989-12-01

    Results and analyses of margin to cladding failure and pre-failure axial expansion of metallic fuel are reported for TREAT in-pile transient overpower tests M5--M7. These are the first such tests on reference binary and ternary alloy fuel of the Integral Fast Reactor (IFR) concept with burnup ranging from 1 to 10 at. %. In all cases, test fuel was subjected to an exponential power rise on an 8 s period until either incipient or actual cladding failure was achieved. Objectives, designs and methods are described with emphasis on developments unique to metal fuel safety testing. The resulting database for cladding failure threshold and prefailure fuel expansion is presented. The nature of the observed cladding failure and resultant fuel dispersals is described. Simple models of cladding failures and pre-failure axial expansions are described and compared with experimental results. Reported results include: temperature, flow, and pressure data from test instrumentation; fuel motion diagnostic data principally from the fast neutron hodoscope; and test remains described from both destructive and non-destructive post-test examination. 24 refs., 144 figs., 17 tabs.

  18. Cascading failure in the wireless sensor scale-free networks

    NASA Astrophysics Data System (ADS)

    Liu, Hao-Ran; Dong, Ming-Ru; Yin, Rong-Rong; Han, Li

    2015-05-01

    In practical wireless sensor networks (WSNs), a cascading failure caused by a single failed node has a serious impact on network performance. In this paper, we study in depth the cascading failure of scale-free topologies in WSNs. Firstly, a cascading failure model for scale-free topology in WSNs is studied. By analyzing the influence of the node load on cascading failure, the critical load that triggers a large-scale cascading failure is obtained. Then, based on the critical load, a control method for cascading failure is presented. In addition, simulation experiments are performed to validate the effectiveness of the control method. The results show that the control method can effectively prevent cascading failure. Project supported by the Natural Science Foundation of Hebei Province, China (Grant No. F2014203239), the Autonomous Research Fund of Young Teacher in Yanshan University (Grant No. 14LGB017) and Yanshan University Doctoral Foundation, China (Grant No. B867).
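
    The abstract does not reproduce the paper's load model, so the sketch below uses a generic load-redistribution cascade on a scale-free graph (fail a node, shift its load onto surviving neighbors, fail any node pushed past its capacity), which is the standard setting in which such a critical load is defined. The load, capacity, and tolerance parameters are illustrative assumptions, not the paper's.

```python
# Hedged sketch of a load-redistribution cascading failure on a scale-free
# network. Initial load = node degree, capacity = (1 + alpha) * load; a failing
# node shifts its load evenly onto surviving neighbors. Not the paper's model.
import networkx as nx

def cascade_size(g, seed_node, alpha=0.2):
    load = {n: float(g.degree(n)) for n in g.nodes}
    capacity = {n: (1.0 + alpha) * load[n] for n in g.nodes}
    failed = {seed_node}
    frontier = [seed_node]
    while frontier:
        nxt = []
        for node in frontier:
            neighbors = [m for m in g.neighbors(node) if m not in failed]
            if not neighbors:
                continue
            share = load[node] / len(neighbors)   # redistribute the failed node's load
            for m in neighbors:
                if m in failed:
                    continue
                load[m] += share
                if load[m] > capacity[m]:         # overloaded neighbor fails next round
                    failed.add(m)
                    nxt.append(m)
        frontier = nxt
    return len(failed)

g = nx.barabasi_albert_graph(200, 2, seed=1)      # scale-free test topology
hub = max(g.nodes, key=g.degree)
print("nodes failed after removing the highest-degree node:", cascade_size(g, hub))
```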

  19. A Manual Control Test for the Detection and Deterrence of Impaired Drivers

    NASA Technical Reports Server (NTRS)

    Stein, A. C.; Allen, R. W.; Jex, H. R.

    1984-01-01

    A brief manual control test and a decision strategy were developed, laboratory tested, and field validated which provide a means for detecting human operator impairment from alcohol or other drugs. The test requires the operator to stabilize progressively unstable controlled element dynamics. Control theory and experimental data verify that the human operator's control ability on this task is constrained by basic cybernetic characteristics, and that task performance is reliably affected by impairment effects on these characteristics. Assessment of human operator control ability is determined by a statistically based decision strategy. The operator is allowed several chances to exceed a preset pass criterion. Procedures are described for setting the pass criterion based on individual ability and a desired unimpaired failure rate. These procedures were field tested with apparatus installed in automobiles that were designed to discourage drunk drivers from operating their vehicles. This test program demonstrated that the control task and detection strategy could be applied in a practical setting to screen human operators for impairment in their basic cybernetic skills.

  20. End-to-End Demonstrator of the Safe Affordable Fission Engine (SAFE) 30: Power Conversion and Ion Engine Operation

    NASA Technical Reports Server (NTRS)

    Hrbud, Ivana; VanDyke, Melissa; Houts, Mike; Goodfellow, Keith; Schafer, Charles (Technical Monitor)

    2001-01-01

    The Safe Affordable Fission Engine (SAFE) test series addresses Phase 1 Space Fission Systems issues, in particular non-nuclear testing and system integration issues leading to the testing and non-nuclear demonstration of a 400-kW fully integrated flight unit. The first part of the SAFE 30 test series demonstrated operation of the simulated nuclear core and heat pipe system. Experimental data acquired in a number of different test scenarios will validate existing computational models, demonstrate system flexibility (fast start-ups, multiple start-ups/shut-downs), and simulate predictable failure modes and operating environments. The objective of the second part is to demonstrate an integrated propulsion system consisting of a core, conversion system, and a thruster, where the system converts thermal heat into jet power. This end-to-end system demonstration sets a precedent for ground testing of nuclear electric propulsion systems. The paper describes the SAFE 30 end-to-end system demonstration and its subsystems.

  1. Measurement of Plastic Stress and Strain for Analytical Method Verification (MSFC Center Director's Discretionary Fund Project No. 93-08)

    NASA Technical Reports Server (NTRS)

    Price, J. M.; Steeve, B. E.; Swanson, G. R.

    1999-01-01

    The analytical prediction of stress, strain, and fatigue life at locations experiencing local plasticity is full of uncertainties. Much of this uncertainty arises from the material models and their use in the numerical techniques used to solve plasticity problems. Experimental measurements of actual plastic strains would allow the validity of these models and solutions to be tested. This memorandum describes how experimental plastic residual strain measurements were used to verify the results of a thermally induced plastic fatigue failure analysis of a space shuttle main engine fuel pump component.

  2. Highly reliable oxide VCSELs for datacom applications

    NASA Astrophysics Data System (ADS)

    Aeby, Ian; Collins, Doug; Gibson, Brian; Helms, Christopher J.; Hou, Hong Q.; Lou, Wenlin; Bossert, David J.; Wang, Charlie X.

    2003-06-01

    In this paper we describe the processes and procedures that have been developed to ensure high reliability for Emcore's 850 nm oxide-confined GaAs VCSELs. Evidence from ongoing accelerated life testing and other reliability studies confirming that this process yields reliable products is discussed. We present the data and analysis techniques used to determine the activation energy and acceleration factors for the dominant wear-out failure mechanisms for our devices, as well as our estimated MTTF of greater than 2 million use hours. We conclude with a summary of internal verification and field return rate validation data.
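
    Accelerated life testing of this kind is commonly reduced to use conditions with an Arrhenius acceleration factor, AF = exp[(Ea/k)(1/T_use - 1/T_stress)]. The sketch below shows that standard calculation; the activation energy, temperatures, and stressed-condition MTTF are placeholders, not Emcore's values.

```python
# Arrhenius acceleration factor between a stress temperature and a use
# temperature: AF = exp[(Ea / k) * (1/T_use - 1/T_stress)], temperatures in K.
# Ea, temperatures, and the stressed MTTF below are illustrative placeholders.
import math

K_BOLTZMANN_EV = 8.617e-5          # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(ea_ev=0.7, t_use_c=40.0, t_stress_c=85.0)
mttf_stress_hours = 1.0e4          # hypothetical MTTF observed under stress
print(f"acceleration factor: {af:.1f}")
print(f"projected use-condition MTTF: {af * mttf_stress_hours:.2e} hours")
```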

  3. Applications of the Petri net to simulate, test, and validate the performance and safety of complex, heterogeneous, multi-modality patient monitoring alarm systems.

    PubMed

    Sloane, E B; Gelhot, V

    2004-01-01

    This research is motivated by the rapid pace of medical device and information system integration. Although the ability to interconnect many medical devices and information systems may help improve patient care, there is no way to detect if incompatibilities between one or more devices might cause critical events such as patient alarms to go unnoticed or cause one or more of the devices to become stuck in a disabled state. Petri net tools allow automated testing of all possible states and transitions between devices and/or systems to detect potential failure modes in advance. This paper describes an early research project to use Petri nets to simulate and validate a multi-modality central patient monitoring system. A free Petri net tool, HPSim, is used to simulate two wireless patient monitoring networks: one with 44 heart monitors and a central monitoring system and a second version that includes an additional 44 wireless pulse oximeters. In the latter Petri net simulation, a potentially dangerous heart arrhythmia and pulse oximetry alarms were detected.

  4. Porting Initiation and Failure into Linked CHEETAH

    NASA Astrophysics Data System (ADS)

    Souers, Clark; Vitello, Peter

    2007-06-01

    Linked CHEETAH is a thermo-chemical code coupled to a 2-D hydrocode. Initially, a quadratic pressure-dependent kinetic rate was used, which worked well in modeling prompt detonation of explosives of large size but does not capture other aspects of explosive behavior. The variable-pressure Tarantula reactive flow rate model was developed with JWL++ in order to also describe failure and initiation, and we have moved this model into Linked CHEETAH. The model works by turning on only above a pressure threshold, where a slow turn-on creates initiation. At a higher pressure, the rate suddenly leaps to a large value over a small pressure range. A slowly failing cylinder will see a rapidly declining rate, which pushes it quickly into failure. At high pressure, the detonation rate is constant. A sequential validation procedure is used, which includes metal-confined cylinders, rate sticks, corner turning, initiation and threshold, gap tests and air gaps. The size (diameter) effect is central to the calibration. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
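
    The qualitative behavior described, no reaction below a pressure threshold, a slow turn-on just above it, a steep jump over a narrow pressure band, and a constant rate at detonation pressures, can be pictured with a simple piecewise rate function. The thresholds and rate values below are arbitrary placeholders, not the calibrated Tarantula parameters.

```python
# Illustrative piecewise pressure-dependent reaction rate with the qualitative
# shape described for the Tarantula model: zero below a threshold, slow turn-on,
# sharp rise over a narrow pressure band, then a constant high-pressure rate.
# All pressures (GPa) and rates (1/us) are arbitrary placeholders.

def reaction_rate(p_gpa):
    p_on, p_jump, p_full = 2.0, 8.0, 10.0     # turn-on, jump start, jump end
    r_slow, r_fast = 0.05, 5.0                # slow turn-on and detonation rates
    if p_gpa < p_on:
        return 0.0
    if p_gpa < p_jump:                        # gradual turn-on region (initiation)
        return r_slow * (p_gpa - p_on) / (p_jump - p_on)
    if p_gpa < p_full:                        # steep rise over a small pressure range
        frac = (p_gpa - p_jump) / (p_full - p_jump)
        return r_slow + frac * (r_fast - r_slow)
    return r_fast                             # constant rate at detonation pressures

for p in (1.0, 4.0, 9.0, 12.0):
    print(f"P = {p:4.1f} GPa -> rate = {reaction_rate(p):.3f} 1/us")
```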

  5. A Unified Constitutive Model for Subglacial Till, Part II: Laboratory Tests, Disturbed State Modeling, and Validation for Two Subglacial Tills

    NASA Astrophysics Data System (ADS)

    Desai, C. S.; Sane, S. M.; Jenson, J. W.; Contractor, D. N.; Carlson, A. E.; Clark, P. U.

    2006-12-01

    This presentation, which is complementary to Part I (Jenson et al.), describes the application of the Disturbed State Concept (DSC) constitutive model to define the behavior of the deforming sediment (till) underlying glaciers and ice sheets. The DSC includes elastic, plastic, and creep strains, and microstructural changes leading to degradation, failure, and sometimes strengthening or healing. Here, we describe comprehensive laboratory experiments conducted on samples of two regionally significant tills deposited by the Laurentide Ice Sheet: the Tiskilwa Till and Sky Pilot Till. The tests are used to determine the parameters to calibrate the DSC model, which is validated with respect to the laboratory tests by comparing the predictions with test data used to find the parameters, and also comparing them with independent tests not used to find the parameters. Discussion of the results also includes comparison of the DSC model with the classical Mohr-Coulomb model, which has been commonly used for glacial tills. A numerical procedure based on finite element implementation of the DSC is used to simulate an idealized field problem, and its predictions are discussed. Based on these analyses, the unified DSC model is proposed to provide an improved model for subglacial tills compared to other models used commonly, and thus to provide the potential for improved predictions of ice sheet movements.

  6. Coupled Hydro-Mechanical Constitutive Model for Vegetated Soils: Validation and Applications

    NASA Astrophysics Data System (ADS)

    Switala, Barbara Maria; Veenhof, Rick; Wu, Wei; Askarinejad, Amin

    2016-04-01

    It is well known that the presence of vegetation influences the stability of a slope. However, the quantitative assessment of this contribution remains challenging. It is essential to develop a numerical model which combines mechanical root reinforcement and root water uptake, and allows modelling of rainfall-induced landslides on vegetated slopes. Therefore, a novel constitutive formulation is proposed, which is based on the modified Cam-clay model for unsaturated soils. Mechanical root reinforcement is modelled by introducing a new constitutive parameter, which governs the evolution of the Cam-clay failure surface with the degree of root reinforcement. Evapotranspiration is modelled in terms of the root water uptake, defined as a sink term in the water flow continuity equation. The original concept is extended to different shapes of the root architecture in three dimensions and combined with the mechanical model. The model is implemented in the research finite element code Comes-Geo and in the commercial software Abaqus. The formulation is tested by performing a series of numerical examples, which allow validation of the concept. The direct shear test and the triaxial test are modelled in order to test the performance of the mechanical part of the model. In order to validate the hydrological part of the constitutive formulation, evapotranspiration from a vegetated box is simulated and compared with the experimental results. The obtained numerical results exhibit good agreement with the experimental data. The implemented model is capable of reproducing the results of basic geotechnical laboratory tests. Moreover, the constitutive formulation can be used to model rainfall-induced landslides on vegetated slopes, taking into account the most important factors influencing slope stability (root reinforcement and evapotranspiration).

  7. Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model: A Web-based program designed to evaluate the cost-effectiveness of disease management programs in heart failure.

    PubMed

    Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C

    2015-11-01

    Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
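
    The incremental cost-effectiveness ratio the model reports is the difference in mean cost divided by the difference in mean effectiveness (for example, quality-adjusted life-years) between the intervention and control cohorts. A minimal sketch with entirely synthetic cohort draws standing in for the model's simulated pairs:

```python
# Minimal ICER sketch over two simulated cohorts: ICER = (C1 - C0) / (E1 - E0).
# Costs and QALYs are synthetic placeholders, not outputs of the TEAM-HF model.
import random

random.seed(0)
n = 10_000
cost_control = [random.gauss(30_000, 8_000) for _ in range(n)]   # usual care
cost_program = [random.gauss(33_500, 8_000) for _ in range(n)]   # disease management
qaly_control = [random.gauss(2.10, 0.60) for _ in range(n)]
qaly_program = [random.gauss(2.35, 0.60) for _ in range(n)]

mean = lambda xs: sum(xs) / len(xs)
icer = (mean(cost_program) - mean(cost_control)) / (mean(qaly_program) - mean(qaly_control))
print(f"ICER: ${icer:,.0f} per QALY gained")
```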

  8. A Comparison of the Fagerström Test for Cigarette Dependence and Cigarette Dependence Scale in a Treatment-Seeking Sample of Pregnant Smokers.

    PubMed

    Berlin, Ivan; Singleton, Edward G; Heishman, Stephen J

    2016-04-01

    Valid and reliable brief measures of cigarette dependence are essential for research purposes and effective clinical care. Two widely-used brief measures of cigarette dependence are the six-item Fagerström Test for Cigarette Dependence (FTCD) and five-item Cigarette Dependence Scale (CDS-5). Their respective metric characteristics among pregnant smokers have not yet been studied. This was a secondary analysis of data of pregnant smokers (N = 476) enrolled in a smoking cessation study. We assessed internal consistency, reliability, and examined correlations between the instruments and smoking-related behaviors for construct validity. We evaluated predictive validity by testing how well the measures predict abstinence 2 weeks after quit date. Cronbach's alpha coefficient for the CDS-5 was 0.62 and for the FTCD 0.55. Measures were strongly correlated with each other, although FTCD, but not CDS-5, was associated with saliva cotinine concentration. The FTCD, CDS-5, craving to smoke, and withdrawal symptoms failed to predict smoking status 2 weeks following the quit date. Suboptimal reliability estimates and failure to predict short-term smoking call into question the value of including either of the brief measures in studies that aim to explain the obstacles to smoking cessation during pregnancy. © The Author 2015. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
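
    For reference, the internal-consistency statistic reported above, Cronbach's alpha, is computed from item-level responses as alpha = k/(k-1) x (1 - sum of item variances / variance of the total score). The sketch below applies that formula to made-up responses for a five-item scale; it does not use the study data.

```python
# Cronbach's alpha from item-level data:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# Responses below are fabricated for illustration (6 respondents, 5 items).
import numpy as np

items = np.array([
    [1, 2, 2, 3, 2],
    [0, 1, 1, 2, 1],
    [2, 3, 2, 3, 3],
    [1, 1, 2, 2, 2],
    [3, 3, 3, 4, 3],
    [0, 0, 1, 1, 1],
], dtype=float)                      # rows = respondents, columns = items

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1.0 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```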

  9. Shearography for Non-destructive Inspection with applications to BAT Mask Tile Adhesive Bonding and Specular Surface Honeycomb Panels

    NASA Technical Reports Server (NTRS)

    Lysak, Daniel B.

    2003-01-01

    The applicability of shearography techniques for non-destructive evaluation in two unique application areas is examined. In the first application, shearography is used to evaluate the quality of the adhesive bonds holding lead tiles to the BAT gamma ray mask for the NASA Swift program. Using vibration excitation, the more poorly bonded tiles are readily identifiable in the shearography image. A quantitative analysis is presented that compares the shearography results with a destructive pull test measuring the force at bond failure. The second application is to evaluate the bonding between the skin and core of a honeycomb structure with a specular (mirror-like) surface. In standard shearography techniques, the object under test must have a diffuse surface to generate the speckle patterns in laser light, which are then sheared. A novel configuration using the specular surface as a mirror to image speckles from a diffuser is presented, opening up the use of shearography to a new class of objects that could not have been examined with the traditional approach. This new technique readily identifies large-scale bond failures in the panel, demonstrating the validity of this approach.

  10. NASA's Evolutionary Xenon Thruster (NEXT) Project Qualification Propellant Throughput Milestone: Performance, Erosion, and Thruster Service Life Prediction After 450 kg

    NASA Technical Reports Server (NTRS)

    Herman, Daniel A.

    2010-01-01

    NASA's Evolutionary Xenon Thruster (NEXT) program is tasked with significantly improving and extending the capabilities of the current state-of-the-art NSTAR thruster. The service life capability of the NEXT ion thruster is being assessed by thruster wear testing and life modeling of critical thruster components, such as the ion optics and cathodes. The NEXT Long-Duration Test (LDT) was initiated to validate and qualify the NEXT thruster propellant throughput capability. The NEXT thruster completed the primary goal of the LDT, namely to demonstrate the project qualification throughput of 450 kg by the end of calendar year 2009. The NEXT LDT has demonstrated 28,500 hr of operation and processed 466 kg of xenon throughput--more than double the throughput demonstrated by the NSTAR flight spare. Thruster performance changes have been consistent with a priori predictions. Thruster erosion has been minimal and consistent with the thruster service life assessment, which predicts the first failure mode at greater than 750 kg throughput. The life-limiting failure mode for NEXT is predicted to be loss of structural integrity of the accelerator grid due to erosion by charge-exchange ions.

  11. A Multisite, Randomized Controlled Clinical Trial of Computerized Cognitive Remediation Therapy for Schizophrenia.

    PubMed

    Gomar, Jesús J; Valls, Elia; Radua, Joaquim; Mareca, Celia; Tristany, Josep; del Olmo, Francisco; Rebolleda-Gil, Carlos; Jañez-Álvarez, María; de Álvaro, Francisco J; Ovejero, María R; Llorente, Ana; Teixidó, Cristina; Donaire, Ana M; García-Laredo, Eduardo; Lazcanoiturburu, Andrea; Granell, Luis; Mozo, Cristina de Pablo; Pérez-Hernández, Mónica; Moreno-Alcázar, Ana; Pomarol-Clotet, Edith; McKenna, Peter J

    2015-11-01

    The effectiveness of cognitive remediation therapy (CRT) for the neuropsychological deficits seen in schizophrenia is supported by meta-analysis. However, a recent methodologically rigorous trial had negative findings. In this study, 130 chronic schizophrenic patients were randomly assigned to computerized CRT, an active computerized control condition (CC) or treatment as usual (TAU). Primary outcome measures were 2 ecologically valid batteries of executive function and memory, rated under blind conditions; other executive and memory tests and a measure of overall cognitive function were also employed. Carer ratings of executive and memory failures in daily life were obtained before and after treatment. Computerized CRT was found to produce improvement on the training tasks, but this did not transfer to gains on the primary outcome measures and most other neuropsychological tests in comparison to either CC or TAU conditions. Nor did the intervention result in benefits on carer ratings of daily life cognitive failures. According to this study, computerized CRT is not effective in schizophrenia. The use of both active and passive CCs suggests that nature of the control group is not an important factor influencing results. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center.

  12. A new qualitative acoustic emission parameter based on Shannon's entropy for damage monitoring

    NASA Astrophysics Data System (ADS)

    Chai, Mengyu; Zhang, Zaoxiao; Duan, Quan

    2018-02-01

    An important objective of acoustic emission (AE) non-destructive monitoring is to accurately identify approaching critical damage and to avoid premature failure by means of the evolution of AE parameters. One major drawback of most parameters, such as count and rise time, is that they are strongly dependent on the threshold and other settings employed in the AE data acquisition system. This may prevent the recorded parameters from faithfully reflecting the original waveforms generated by AE sources and consequently make it difficult to accurately identify critical damage and early failure. In this investigation, a new qualitative AE parameter based on Shannon's entropy, i.e., AE entropy, is proposed for damage monitoring. Since it derives from the uncertainty of the amplitude distribution of each AE waveform, it is independent of the threshold and other time-driven parameters and can characterize the original micro-structural deformations. A fatigue crack growth test on CrMoV steel and a three-point bending test on a ductile material are conducted to validate the feasibility and effectiveness of the proposed parameter. The results show that the new parameter, compared to AE amplitude, is more effective in discriminating the different damage stages and identifying the critical damage.
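
    The proposed parameter is the Shannon entropy of each waveform's amplitude distribution. A minimal version of that computation (histogram the sampled amplitudes, then take -sum p log p) is sketched below on a synthetic burst; the binning and normalization choices are assumptions, since the paper's exact settings are not given in the abstract.

```python
# Sketch of an AE-entropy-style measure: Shannon entropy of the amplitude
# distribution of a single waveform. The waveform is synthetic and the binning
# choice (64 bins) is an assumption; the paper's exact settings are not stated.
import numpy as np

def amplitude_entropy(waveform, bins=64):
    counts, _ = np.histogram(waveform, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins so the log is defined
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2048)
burst = np.exp(-5.0 * t) * np.sin(2.0 * np.pi * 150.0 * t)   # decaying AE-like burst
noise = 0.05 * rng.standard_normal(t.size)

print(f"entropy of burst + noise: {amplitude_entropy(burst + noise):.2f} bits")
print(f"entropy of noise only:    {amplitude_entropy(noise):.2f} bits")
```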

  13. Predictors of memory performance among Taiwanese postmenopausal women with heart failure.

    PubMed

    Chou, Cheng-Chen; Pressler, Susan J; Giordani, Bruno

    2014-09-01

    There are no studies describing the nature of memory deficits among women with heart failure (HF). The aims of this study were to examine memory performance among Taiwanese women with HF compared with age- and education-matched healthy women, and to evaluate factors that explain memory performance in women with HF. Seventy-six women with HF and 64 healthy women were recruited in Taiwan. Women completed working, verbal, and visual memory tests; HF severity was collected from the medical records. Women with HF performed significantly worse than healthy women on tests of working memory and verbal memory. Among women with HF, older age explained poorer working memory, and older age, higher HF severity, more comorbidities, and systolic HF explained poorer verbal memory. Menopausal symptoms were not associated with memory performance. Results of the study validate findings of memory loss in HF patients from the United States and Europe in a culturally different sample of women. Working memory and verbal memory were worse in Taiwanese women with HF compared with healthy participants. Studies are needed to determine mechanisms of memory deficits in these women and develop interventions to improve memory. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Eigentumors for prediction of treatment failure in patients with early-stage breast cancer using dynamic contrast-enhanced MRI: a feasibility study

    NASA Astrophysics Data System (ADS)

    Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.

    2017-08-01

    We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up, based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-) MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage and selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performance. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73 for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P < 0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P < 0.0001) and the triple-negative subtype tumors (P = 0.0039), but not for tumors of the HER2 subtype (P = 0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
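
    The pipeline described, principal components retaining 90% of the variance, LASSO-style selection, logistic regression, and cross-validated ROC analysis, can be sketched generically with scikit-learn. The synthetic feature matrix below merely stands in for the washin/washout volumes, and folding selection and classification into one L1-penalized logistic step is a simplifying assumption, not the authors' implementation.

```python
# Hedged sketch of an eigentumor-style pipeline: PCA keeping components that
# explain 90% of the variance, then an L1-penalized logistic regression for
# sparse selection and classification, scored by cross-validated ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=500, n_informative=20,
                           random_state=0)       # stand-in for voxel features

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.90, svd_solver="full"),   # keep 90% of the variance
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)

auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```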

  15. Development and evaluation of a composite risk score to predict kidney transplant failure.

    PubMed

    Moore, Jason; He, Xiang; Shabir, Shazia; Hanvesakul, Rajesh; Benavente, David; Cockwell, Paul; Little, Mark A; Ball, Simon; Inston, Nicholas; Johnston, Atholl; Borrows, Richard

    2011-05-01

    Although risk factors for kidney transplant failure are well described, prognostic risk scores to estimate risk in prevalent transplant recipients are limited. Development and validation of risk-prediction instruments. The development data set included 2,763 prevalent patients more than 12 months posttransplant enrolled into the LOTESS (Long Term Efficacy and Safety Surveillance) Study. The validation data set included 731 patients who underwent transplant at a single UK center. Estimated glomerular filtration rate (eGFR) and other risk factors were evaluated using Cox regression. Scores for death-censored and overall transplant failure were based on the summed hazard ratios for baseline predictor variables. Predictive performance was assessed using calibration (Hosmer-Lemeshow statistic), discrimination (C statistic), and clinical reclassification (net reclassification improvement) compared with eGFR alone. In the development data set, 196 patients died and another 225 experienced transplant failure. eGFR, recipient age, race, serum urea and albumin levels, declining eGFR, and prior acute rejection predicted death-censored transplant failure. eGFR, recipient age, sex, serum urea and albumin levels, and declining eGFR predicted overall transplant failure. In the validation data set, 44 patients died and another 101 experienced transplant failure. The weighted scores comprising these variables showed adequate discrimination and calibration for death-censored (C statistic, 0.83; 95% CI, 0.75-0.91; Hosmer-Lemeshow χ(2)P = 0.8) and overall (C statistic, 0.70; 95% CI, 0.64-0.77; Hosmer-Lemeshow χ(2)P = 0.5) transplant failure. However, the scores failed to reclassify risk compared with eGFR alone (net reclassification improvements of 7.6% [95% CI, -0.2 to 13.4; P = 0.09] and 4.3% [95% CI, -2.7 to 11.8; P = 0.3] for death-censored and overall transplant failure, respectively). Retrospective analysis of predominantly cyclosporine-treated patients; limited study size and categorization of variables may limit power to detect effect. Although the scores performed well regarding discrimination and calibration, clinically relevant risk reclassification over eGFR alone was not evident, emphasizing the stringent requirements for such scores. Further studies are required to develop and refine this process. Copyright © 2011 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
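
    The scoring approach, summing hazard-ratio-based weights for baseline predictors and then checking discrimination, can be illustrated in a much-simplified form. The predictors, weights, and outcomes below are invented, and an ordinary AUC stands in for a censoring-aware C statistic.

```python
# Simplified sketch of a summed-hazard-ratio risk score and its discrimination.
# Predictor weights and patient data are invented for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

# hypothetical hazard-ratio weights for binary baseline risk factors
weights = {"low_egfr": 2.1, "older_age": 1.4, "low_albumin": 1.7, "prior_rejection": 1.9}

rng = np.random.default_rng(0)
n = 500
X = {name: rng.integers(0, 2, n) for name in weights}           # 0/1 risk factors
score = sum(hr * X[name] for name, hr in weights.items())       # summed-HR score

# simulate transplant failure with probability increasing in the score
p_fail = 1.0 / (1.0 + np.exp(-(score - score.mean())))
failure = rng.random(n) < p_fail

print(f"discrimination (AUC) of the composite score: {roc_auc_score(failure, score):.2f}")
```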

  16. NASA Structural Analysis Report on the American Airlines Flight 587 Accident - Local Analysis of the Right Rear Lug

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S; Glaessgen, Edward H.; Mason, Brian H; Krishnamurthy, Thiagarajan; Davila, Carlos G

    2005-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. From the analyses conducted and presented in this paper, the following conclusions were drawn. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  17. Structural Analysis of the Right Rear Lug of American Airlines Flight 587

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Glaessgen, Edward H.; Mason, Brian H.; Krishnamurthy, Thiagarajan; Davila, Carlos G.

    2006-01-01

    A detailed finite element analysis of the right rear lug of the American Airlines Flight 587 - Airbus A300-600R was performed as part of the National Transportation Safety Board's failure investigation of the accident that occurred on November 12, 2001. The loads experienced by the right rear lug are evaluated using global models of the vertical tail, local models near the right rear lug, and a global-local analysis procedure. The right rear lug was analyzed using two modeling approaches. In the first approach, solid-shell type modeling is used, and in the second approach, layered-shell type modeling is used. The solid-shell and the layered-shell modeling approaches were used in progressive failure analyses (PFA) to determine the load, mode, and location of failure in the right rear lug under loading representative of an Airbus certification test conducted in 1985 (the 1985-certification test). Both analyses were in excellent agreement with each other on the predicted failure loads, failure mode, and location of failure. The solid-shell type modeling was then used to analyze both a subcomponent test conducted by Airbus in 2003 (the 2003-subcomponent test) and the accident condition. Excellent agreement was observed between the analyses and the observed failures in both cases. The moment, Mx (moment about the fuselage longitudinal axis), has a significant effect on the failure load of the lugs. Higher absolute values of Mx give lower failure loads. The predicted load, mode, and location of the failure of the 1985-certification test, 2003-subcomponent test, and the accident condition are in very good agreement. This agreement suggests that the 1985-certification and 2003-subcomponent tests represent the accident condition accurately. The failure mode of the right rear lug for the 1985-certification test, 2003-subcomponent test, and the accident load case is identified as a cleavage-type failure. For the accident case, the predicted failure load for the right rear lug from the PFA is greater than 1.98 times the limit load of the lugs.

  18. Comprehensive analysis of cochlear implant failure: usefulness of clinical symptom-based algorithm combined with in situ integrity testing.

    PubMed

    Yamazaki, Hiroshi; O'Leary, Stephen; Moran, Michelle; Briggs, Robert

    2014-04-01

    Accurate diagnosis of cochlear implant failures is important for management; however, appropriate strategies to assess possible device failures are not always clear. The purpose of this study is to understand the correlation between the causes of device failure and both the presenting clinical symptoms and the results of in situ integrity testing, and to propose effective strategies for diagnosis of device failure. Retrospective case review. Cochlear implant center at a tertiary referral hospital. Twenty-seven cases with suspected device failure of Cochlear Nucleus systems (excluding CI512 failures) on the basis of deterioration in auditory perception from January 2000 to September 2012 in the Melbourne cochlear implant clinic. Clinical presentations and types of abnormalities on in situ integrity testing were compared with modes of device failure detected by returned device analysis. Sudden deterioration in auditory perception was always observed in cases with "critical damage": fracture of either the integrated circuit or most or all of the electrode wires. Subacute or gradually progressive deterioration in auditory perception was significantly associated with a more limited number of broken electrode wires. Cochlear implant-mediated auditory and nonauditory symptoms were significantly associated with an insulation problem. An algorithm based on the time course of deterioration in auditory perception and cochlear implant-mediated auditory and nonauditory symptoms was developed on the basis of these retrospective analyses, to help predict the mode of device failure. In situ integrity testing, which included close monitoring of device function in routine programming sessions as well as repeating the manufacturer's integrity test battery, was sensitive enough to detect malfunction in all suspected device failures, and each mode of device failure showed a characteristic abnormality on in situ integrity testing. Our clinical manifestation-based algorithm combined with in situ integrity testing may be useful for accurate diagnosis and appropriate management of device failure. Close monitoring of device function in routine programming sessions, as well as repeating the manufacturer's integrity test battery, is important if the initial in situ integrity testing is inconclusive, because objective evidence of failure in the implanted device is essential to recommend explantation/reimplantation.
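
    The reported associations suggest a simple symptom-based triage, sketched below as hypothetical Python; it is not the published algorithm, only an illustration of how the time course of deterioration and device-mediated symptoms map onto suspected failure modes.

```python
# Hypothetical triage sketch based on the associations reported above:
# sudden loss -> critical damage; subacute/gradual loss -> a limited number of
# broken electrode wires; device-mediated symptoms -> insulation problem.
# This is an illustration, not the published algorithm.
def suspected_failure_mode(time_course: str, device_mediated_symptoms: bool) -> str:
    if device_mediated_symptoms:
        return "insulation problem suspected; perform in situ integrity testing"
    if time_course == "sudden":
        return "critical damage suspected (IC or many wires); perform integrity testing"
    if time_course in ("subacute", "gradual"):
        return "limited number of broken electrode wires suspected; perform integrity testing"
    return "inconclusive; keep monitoring device function in routine programming sessions"

print(suspected_failure_mode("sudden", device_mediated_symptoms=False))
print(suspected_failure_mode("gradual", device_mediated_symptoms=True))
```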

  19. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.
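
    The ranking idea (failure occurrence rate from QA records, severity from simulated dosimetric errors, and risk that grows with the interval between tests) can be illustrated with a small sketch; all test names and numbers below are placeholders, not the paper's data or its exact risk-priority-number formula.

```python
# Sketch of the quantitative ranking idea: each QA test gets an occurrence rate
# (from QA records) and a severity (from simulated dosimetric impact), and a
# composite risk that shrinks as the test is performed more often.
# All names and numbers are placeholders, not values from the paper.

qa_tests = {
    # test name: (failures per year, severity score 1-10)
    "output":                     (6.0, 5),
    "lasers":                     (5.0, 4),
    "imaging_vs_tx_isocenter":    (1.5, 9),
    "imaging_repositioning":      (1.0, 8),
    "optical_distance_indicator": (0.8, 3),
    "jaws_vs_light_field":        (0.5, 3),
}

def relative_risk(occurrence_per_year, severity, tests_per_year):
    """Expected exposure to an undetected failure scales with the test interval."""
    return occurrence_per_year * severity / tests_per_year

for name, (occ, sev) in sorted(qa_tests.items(),
                               key=lambda kv: -relative_risk(*kv[1], tests_per_year=250)):
    daily = relative_risk(occ, sev, tests_per_year=250)   # ~250 treatment days/year
    weekly = relative_risk(occ, sev, tests_per_year=52)
    print(f"{name:28s} daily risk {daily:6.3f}   weekly risk {weekly:6.3f}")
```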

  20. An improved method for testing tension properties of fiber-reinforced polymer rebar

    NASA Astrophysics Data System (ADS)

    Yuan, Guoqing; Ma, Jian; Dong, Guohua

    2010-03-01

    We conducted a series of tests to measure the tensile strength and modulus of elasticity of fiber-reinforced polymer (FRP) rebar. In these tests, the ends of each rebar specimen were embedded in steel tubes filled with expansive cement, and the rebar was loaded by gripping the tubes with a conventional fixture during the tensile tests. However, most specimens failed at the ends, where the section changed abruptly. Numerical simulations of the stress field at the bar ends, performed in ANSYS, revealed that these unexpected failure modes were caused by the test setup: the abrupt change of section induced a stress concentration, so the test results had to be regarded as invalid. An improved testing method is developed in this paper to avoid this issue. A transition part was added between the free segment of the rebar and the tube, which effectively eliminates the stress concentration and thus yields more accurate values for the properties of FRP rebar. The validity of the proposed method was demonstrated by both experimental tests and numerical analysis.

  1. An improved method for testing tension properties of fiber-reinforced polymer rebar

    NASA Astrophysics Data System (ADS)

    Yuan, Guoqing; Ma, Jian; Dong, Guohua

    2009-12-01

    We conducted a series of tests to measure the tensile strength and modulus of elasticity of fiber-reinforced polymer (FRP) rebar. In these tests, the ends of each rebar specimen were embedded in steel tubes filled with expansive cement, and the rebar was loaded by gripping the tubes with a conventional fixture during the tensile tests. However, most specimens failed at the ends, where the section changed abruptly. Numerical simulations of the stress field at the bar ends, performed in ANSYS, revealed that these unexpected failure modes were caused by the test setup: the abrupt change of section induced a stress concentration, so the test results had to be regarded as invalid. An improved testing method is developed in this paper to avoid this issue. A transition part was added between the free segment of the rebar and the tube, which effectively eliminates the stress concentration and thus yields more accurate values for the properties of FRP rebar. The validity of the proposed method was demonstrated by both experimental tests and numerical analysis.

  2. Carbon Fiber Strand Tensile Failure Dynamic Event Characterization

    NASA Technical Reports Server (NTRS)

    Johnson, Kenneth L.; Reeder, James

    2016-01-01

    Few, if any, clear and detailed visual images of carbon fiber strand failures under tension are available to researchers for determining mechanisms, sequences of events, and the different types of failure modes. This makes discussion of the physics of failure difficult. It was also desired to find out whether the test article-to-test rig interface (the grip) played a part in some failures. These failures have nothing to do with stress rupture failure and thus represent a source of waste for the larger 13-00912 investigation into that specific failure type. Being able to identify or mitigate any competing failure modes would improve the value of the 13-00912 test data. The beginnings of a solution to these problems lie in obtaining images of strand failures useful for understanding the physics of failure and the events leading up to failure. Necessary steps include identifying imaging techniques that yield useful data and using those techniques to home in on where in a strand, and when in the sequence of events, imaging data should be obtained.

  3. Numerical and experimental analysis of an in-scale masonry cross-vault prototype up to failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Michela; Calderini, Chiara; Lagomarsino, Sergio

    2015-12-31

    A heterogeneous full-3D non-linear FE approach is validated against experimental results obtained on an in-scale masonry cross vault assembled with dry joints and subjected to various loading conditions consisting of imposed displacement combinations at the abutments. The FE model relies on a discretization of the blocks into a few rigid, infinitely resistant parallelepiped elements interacting through planar four-noded interfaces, where all the deformation (elastic and inelastic) occurs. The investigated response mechanisms of the vault are the in-plane shear distortion and the longitudinal opening and closing mechanism at the abutments. After validation of the approach on the experimentally tested cross vault, a sensitivity analysis is conducted on the same geometry, but at real scale, varying the mechanical properties of the mortar joints, in order to furnish useful hints for safety assessment, especially in the presence of seismic action.

  4. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach, based on coherence functions and statistical theory, for sensor validation in a harsh environment. Using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. The diagnostic approach and the data processing methodology discussed provide a single number that conveys this information. This number, calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, which demonstrates the robustness of the technique.
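
    A minimal sketch of the aligned/unaligned coherence comparison, using scipy on synthetic broadband signals; the sampling rate, block lengths, and the plain Welch t-test stand in for the paper's procedure and are assumptions, as is the omission of Yuen's robust test.

```python
# Sketch of the aligned/unaligned coherence comparison: coherence between two
# sensor channels is estimated on time-aligned records and on deliberately
# shifted (unaligned) records, and the two sets of mean coherence values are
# compared with a two-sample test. Signals are synthetic broadband stand-ins.
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(2)
fs = 10_000.0
n = 40_000
common = rng.normal(size=n)                   # shared broadband pressure signal
x = common + 0.5 * rng.normal(size=n)         # sensor 1
y = common + 0.5 * rng.normal(size=n)         # sensor 2

def mean_coherence(a, b, nperseg=1024):
    _, cxy = signal.coherence(a, b, fs=fs, nperseg=nperseg)
    return cxy.mean()

blocks = range(0, n - 8192, 8192)
aligned = [mean_coherence(x[i:i + 8192], y[i:i + 8192]) for i in blocks]
y_shifted = np.roll(y, 4096)                  # destroy the time alignment
unaligned = [mean_coherence(x[i:i + 8192], y_shifted[i:i + 8192]) for i in blocks]

# Welch two-sample t-test on the means (the paper also applies Yuen's robust test)
tstat, p = stats.ttest_ind(aligned, unaligned, equal_var=False)
print(f"aligned {np.mean(aligned):.3f}, unaligned {np.mean(unaligned):.3f}, p = {p:.2g}")
```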

  5. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling sediment transport

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for simulation of sediment displacement in erodible bed. The coupled framework consists of two separate incompressible modules: (a) granular module, (b) fluid module. The granular module considers a friction based rheology model to calculate deviatoric stress components from pressure. The module is validated for Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside porous domain. An interaction force pair containing fluid pressure, viscous term and drag force acts as a bridge between two different flow modules. The coupled model is validated against three dambreak flow cases with different initial conditions of movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenario.

  6. Water Impact Test and Simulation of a Composite Energy Absorbing Fuselage Section

    NASA Technical Reports Server (NTRS)

    Fasanella, Edwin L.; Jackson, Karen E.; Sparks, Chad; Sareen, Ashish

    2003-01-01

    In March 2002, a 25-ft/s vertical drop test of a composite fuselage section was conducted onto water. The purpose of the test was to obtain experimental data characterizing the structural response of the fuselage section during water impact for comparison with two previous drop tests that were performed onto a rigid surface and soft soil. For the drop test, the fuselage section was configured with ten 100-lb. lead masses, five per side, that were attached to seat rails mounted to the floor. The fuselage section was raised to a height of 10-ft. and dropped vertically into a 15-ft. diameter pool filled to a depth of 3.5-ft. with water. Approximately 70 channels of data were collected during the drop test at a 10-kHz sampling rate. The test data were used to validate crash simulations of the water impact that were developed using the nonlinear, explicit transient dynamic codes, MSC.Dytran and LS-DYNA. The fuselage structure was modeled using shell and solid elements with a Lagrangian mesh, and the water was modeled with both Eulerian and Lagrangian techniques. The fluid-structure interactions were executed using the fast general coupling in MSC.Dytran and the Arbitrary Lagrange-Euler (ALE) coupling in LS-DYNA. Additionally, the smooth particle hydrodynamics (SPH) meshless Lagrangian technique was used in LS-DYNA to represent the fluid. The simulation results were correlated with the test data to validate the modeling approach. Additional simulation studies were performed to determine how changes in mesh density, mesh uniformity, fluid viscosity, and failure strain influence the test-analysis correlation.

  7. Factors Related to the Academic Success and Failure of College Football Players: The Case of the Mental Dropout.

    ERIC Educational Resources Information Center

    Lang, Gale; And Others

    1988-01-01

    Examines variables used to predict the academic success or failure of college football players. Valid predictors include the following: (1) high school grades; (2) repeating a year in school; (3) feelings towards school; (4) discipline history; (5) mother's education; and (6) high school background. (FMW)

  8. 40 CFR 49.10711 - Federal Implementation Plan for the Astaris-Idaho LLC Facility (formerly owned by FMC Corporation...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... section, consistent with any averaging period specified for averaging the results of monitoring. Fugitive... beneficial. Monitoring malfunction means any sudden, infrequent, not reasonably preventable failure of the monitoring to provide valid data. Monitoring failures that are caused in part by poor maintenance or careless...

  9. 40 CFR 49.10711 - Federal Implementation Plan for the Astaris-Idaho LLC Facility (formerly owned by FMC Corporation...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... section, consistent with any averaging period specified for averaging the results of monitoring. Fugitive... beneficial. Monitoring malfunction means any sudden, infrequent, not reasonably preventable failure of the monitoring to provide valid data. Monitoring failures that are caused in part by poor maintenance or careless...

  10. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
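
    The dynamic time warping classification mentioned here can be sketched as a nearest-template classifier; the DTW implementation below is the textbook dynamic-programming form, and the synthetic waveforms and fault labels are illustrative assumptions, not the paper's simulated bearing data.

```python
# Sketch of DTW-based fault classification: vibration signatures are compared by
# dynamic-time-warping distance and a new signal takes the label of its nearest
# training template. Waveforms and labels below are synthetic stand-ins.
import numpy as np

def dtw_distance(a, b):
    """Textbook O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(waveform, templates):
    """1-nearest-neighbour by DTW distance; templates maps label -> waveform."""
    return min(templates, key=lambda label: dtw_distance(waveform, templates[label]))

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
templates = {
    "healthy":    np.sin(2 * np.pi * 5 * t),
    "race_fault": np.sin(2 * np.pi * 5 * t) * (1 + 0.8 * (np.sin(2 * np.pi * 30 * t) > 0.9)),
}
measured = templates["race_fault"] + 0.1 * rng.normal(size=t.size)
print("predicted condition:", classify(measured, templates))
```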

  11. Failure analysis of a tool steel torque shaft

    NASA Technical Reports Server (NTRS)

    Reagan, J. R.

    1981-01-01

    A low-design-load drive shaft used to deliver power from an experimental exhaust heat recovery system to the crankshaft of an experimental diesel truck engine failed during highway testing. An independent testing laboratory analyzed the failure by routine metallography and attributed the failure to fatigue induced by a banded microstructure. Visual examination by NASA of the failed shaft, combined with knowledge of the torsional load that it carried, pointed to a 100 percent ductile failure with no evidence of fatigue. Scanning electron microscopy confirmed this. Torsional test specimens were produced from pieces of the failed shaft, and torsional overload testing produced failures identical to the one that had occurred in the truck engine. This pointed to a failure caused by a high overload; although the microstructure was defective, it was not the cause of the failure.

  12. Multiaxial Creep-Fatigue and Creep-Ratcheting Failures of Grade 91 and Haynes 230 Alloys Toward Addressing Design Issues of Gen IV Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Tasnim; Lissenden, Cliff; Carroll, Laura

    The proposed research will develop systematic sets of uniaxial and multiaxial experimental data at very high temperatures (850-950°C) for Alloy 617. The loading histories to be prescribed in the experiments will induce creep-fatigue and creep-ratcheting failure mechanisms. These experimental responses will be scrutinized in order to quantify the influences of temperature and creep on fatigue and ratcheting failures. A unified constitutive model (UCM) will be developed and validated against these experimental responses. The improved UCM will be incorporated into the widely used commercial finite element software package ANSYS. The modified ANSYS will be validated so that it can be used for evaluating the very high temperature ASME-NH design-by-analysis methodology for Alloy 617, thereby addressing the ASME-NH design code issues.

  13. Estimation procedures to measure and monitor failure rates of components during thermal-vacuum testing

    NASA Technical Reports Server (NTRS)

    Williams, R. E.; Kruger, R.

    1980-01-01

    Estimation procedures are described for measuring component failure rates, for comparing the failure rates of two different groups of components, and for formulating confidence intervals for testing hypotheses (based on failure rates) that the two groups perform similarly or differently. Appendix A contains an example of an analysis in which these methods are applied to investigate the characteristics of two groups of spacecraft components. The estimation procedures are adaptable to system level testing and to monitoring failure characteristics in orbit.
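
    A hedged sketch of the kind of estimation described (exact chi-square confidence limits for a Poisson failure rate and a conditional binomial test for comparing two groups of components); the failure counts and exposure hours below are invented for illustration.

```python
# Sketch of the estimation described: a Poisson failure rate with exact
# chi-square confidence limits, and a conditional (binomial) test for comparing
# the rates of two component groups. All counts and hours are illustrative.
from scipy import stats

def rate_ci(failures, exposure_hours, conf=0.95):
    """Point estimate and exact chi-square CI for a Poisson failure rate."""
    alpha = 1.0 - conf
    lo = stats.chi2.ppf(alpha / 2, 2 * failures) / (2 * exposure_hours) if failures else 0.0
    hi = stats.chi2.ppf(1 - alpha / 2, 2 * failures + 2) / (2 * exposure_hours)
    return failures / exposure_hours, (lo, hi)

def compare_rates(f1, t1, f2, t2):
    """Conditional test: given f1 + f2 failures, f1 is binomial with p = t1/(t1+t2)."""
    return stats.binomtest(f1, f1 + f2, t1 / (t1 + t2)).pvalue

rate_a, ci_a = rate_ci(failures=12, exposure_hours=8000)
rate_b, ci_b = rate_ci(failures=4, exposure_hours=6000)
print(f"group A: {rate_a:.2e} failures/h, 95% CI ({ci_a[0]:.2e}, {ci_a[1]:.2e})")
print(f"group B: {rate_b:.2e} failures/h, 95% CI ({ci_b[0]:.2e}, {ci_b[1]:.2e})")
print("p-value for equal failure rates:", round(compare_rates(12, 8000, 4, 6000), 3))
```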

  14. A systematic review of cognitive failures in daily life: Healthy populations.

    PubMed

    Carrigan, Nicole; Barkus, Emma

    2016-04-01

    Cognitive failures are minor errors in thinking reported by clinical and non-clinical individuals during everyday life. It is not yet clear how subjectively-reported cognitive failures relate to objective neuropsychological ability. We aimed to consolidate the definition of cognitive failures, outline evidence for the relationship with objective cognition, and develop a unified model of factors that increase cognitive failures. We conducted a systematic review of cognitive failures, identifying 45 articles according to the PRISMA statement. Failures were defined as reflecting proneness to errors in 'real world' planned thought and action. Vulnerability to failures was not consistently associated with objective cognitive performance. A range of stable and variable factors were linked to increased risk of cognitive failures. We conclude that cognitive failures measure real world cognitive capacity rather than pure 'unchallenged' ability. Momentary state may interact with predisposing trait factors to increase the likelihood of failures occurring. Inclusion of self-reported cognitive failures in objective cognitive research will increase the translational relevance of ability into more ecologically valid aspects of real world functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. GeneXpert HIV-1 quant assay, a new tool for scale up of viral load monitoring in the success of ART programme in India.

    PubMed

    Kulkarni, Smita; Jadhav, Sushama; Khopkar, Priyanka; Sane, Suvarna; Londhe, Rajkumar; Chimanpure, Vaishali; Dhilpe, Veronica; Ghate, Manisha; Yelagate, Rajendra; Panchal, Narayan; Rahane, Girish; Kadam, Dilip; Gaikwad, Nitin; Rewari, Bharat; Gangakhedkar, Raman

    2017-07-21

    Recent WHO guidelines identify virologic monitoring for diagnosing and confirming ART failure. In view of this, validation and scale up of point of care viral load technologies is essential in resource limited settings. A systematic validation of the GeneXpert® HIV-1 Quant assay (a point-of-care technology) in view of scaling up HIV-1 viral load in India to monitor the success of national ART programme was carried out. Two hundred nineteen plasma specimens falling in nine viral load ranges (<40 to >5 L copies/ml) were tested by the Abbott m2000rt Real Time and GeneXpert HIV-1 Quant assays. Additionally, 20 seronegative, 16 stored specimens and 10 spiked controls were also tested. Statistical analysis was done using Stata/IC and sensitivity, specificity, PPV, NPV and % misclassification rates were calculated as per DHSs/AISs, WHO, NACO cut-offs for virological failure. The GeneXpert assay compared well with the Abbott assay with a higher sensitivity (97%), specificity (97-100%) and concordance (91.32%). The correlation between two assays (r = 0.886) was statistically significant (p < 0.01), the linear regression showed a moderate fit (R² = 0.784) and differences were within limits of agreement. Reproducibility showed an average variation of 4.15 and 3.52% while Lower limit of detection (LLD) and Upper limit of detection (ULD) were 42 and 1,740,000 copies/ml respectively. The misclassification rates for three viral load cut offs were not statistically different (p = 0.736). All seronegative samples were negative and viral loads of the stored samples showed a good fit (R² = 0.896 to 0.982). The viral load results of GeneXpert HIV-1 Quant assay compared well with Abbott HIV-1 m2000 Real Time PCR; suggesting its use as a Point of care assay for viral load estimation in resource limited settings. Its ease of performance and rapidity will aid in timely diagnosis of ART failures, integrated HIV-TB management and will facilitate the UNAIDS 90-90-90 target.
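
    The comparison statistics reported (sensitivity and specificity at a virological-failure cut-off, linear regression of paired log10 viral loads, and limits of agreement) can be sketched as follows; the paired measurements are synthetic, and the 1000 copies/ml cut-off is the WHO virological-failure threshold, used here only for illustration.

```python
# Sketch of the assay-comparison statistics: sensitivity/specificity at a
# virological-failure cut-off, linear fit of paired log10 viral loads, and
# Bland-Altman limits of agreement. Paired measurements below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_log_vl = rng.uniform(1.5, 6.5, 219)
abbott = true_log_vl + rng.normal(0.0, 0.15, true_log_vl.size)      # reference assay
genexpert = true_log_vl + rng.normal(0.05, 0.20, true_log_vl.size)  # assay under test

cutoff = np.log10(1000)            # WHO virological-failure threshold, copies/ml
ref_fail = abbott >= cutoff
new_fail = genexpert >= cutoff
sensitivity = np.mean(new_fail[ref_fail])
specificity = np.mean(~new_fail[~ref_fail])

slope, intercept, r, *_ = stats.linregress(abbott, genexpert)
diff = genexpert - abbott
loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"R^2 {r**2:.3f}, bias {diff.mean():+.3f} log10, limits of agreement "
      f"({loa[0]:+.3f}, {loa[1]:+.3f})")
```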

  16. Disturbed State constitutive modeling of two Pleistocene tills

    NASA Astrophysics Data System (ADS)

    Sane, S. M.; Desai, C. S.; Jenson, J. W.; Contractor, D. N.; Carlson, A. E.; Clark, P. U.

    2008-02-01

    The Disturbed State Concept (DSC) provides a general approach for constitutive modeling of deforming materials. Here, we briefly explain the DSC and present the results of laboratory tests on two regionally significant North American tills, along with the results of a numerical simulation to predict the behavior of one of the tills in an idealized physical system. Laboratory shear tests showed that plastic strain starts almost from the beginning of loading, and that failure and resulting motion begin at a critical disturbance, when about 85% of the mass has reached the fully adjusted or critical state. Specimens of both tills exhibited distributed strain, deforming into barrel shapes without visible shear planes. DSC parameters obtained from shear and creep tests were validated by comparing model predictions against test data used to find the parameters, as well as against data from independent tests. The DSC parameters from one of the tills were applied in a finite-element simulation to predict gravity-induced motion for a 5000-m long, 100-m thick slab of ice coupled to an underlying 1.5-m thick layer of till set on a 4° incline, with pore-water pressure in the till at 90% of the load. The simulation predicted that in the middle segment of the till layer (i.e., from x=2000 to 3000 m) the induced (computed) shear stress, strain, and disturbance increase gradually with the applied shear stress. Induced shear stress peaks at ~60 kPa. The critical disturbance, at which failure occurs, is observed after the peak shear stress, at an induced shear stress of ~23 kPa and shear strain of ~0.75 in the till. Calculated horizontal displacement over the height of the entire till section at the applied shear stress of 65 kPa is ~4.5 m. We note that the numerical prediction of critical disturbance, when the displacement shows a sharp change in rate, compares very well with the occurrence of critical disturbance observed in the laboratory triaxial tests, when a sharp change in the rate of strain occurs. This implies that the failure and concomitant initiation of motion occur near the residual state, at large strains. In contrast to the Mohr-Coulomb model, which predicts failure and motion at very small (elastic) strain, the DSC thus predicts failure and initiation of motion after the till has undergone considerable (plastic) strain. These results suggest that subglacial till may be able to sustain stress in the vicinity of 20 kPa even after the motion begins. They also demonstrate the potential of the DSC to model not only local behavior, including potential "sticky spot" mechanisms, but also global behavior for soft-bedded ice.

  17. Cardiopulmonary exercise testing and prognosis in heart failure due to systolic left ventricular dysfunction: a validation study of the European Society of Cardiology Guidelines and Recommendations (2008) and further developments.

    PubMed

    Corrà, Ugo; Giordano, Andrea; Mezzani, Alessandro; Gnemmi, Marco; Pistono, Massimo; Caruso, Roberto; Giannuzzi, Pantaleo

    2012-02-01

    The study aims were to validate the cardiopulmonary exercise testing (CPET) parameters recommended by the European Society of Cardiology 2008 Guidelines for risk assessment in heart failure (HF) (ESC-predictors) and to verify the predictive role of 11 supplementary CPET (S-predictors) parameters. We followed 749 HF patients for cardiovascular death and urgent heart transplantation for 3 years: 139 (19%) patients had cardiac events. ESC-predictors - peak oxygen consumption (VO2), slope of minute ventilation vs carbon dioxide production (VE/VCO2) and exertional oscillatory ventilation - were all related to outcome at univariate and multivariable analysis. The ESC/2008 prototype based on ESC-predictors presented a Harrell's C concordance index of 0.725, with a likelihood χ² of 98.31. S-predictors - predicted peak VO2, peak oxygen pulse, peak respiratory exchange ratio, peak circulatory power, peak VE/VCO2, VE/VCO2 slope normalized by peak VO2, VO2 efficiency slope, ventilatory anaerobic threshold detection, peak end-tidal CO2 partial pressure, peak heart rate, and peak systolic arterial blood pressure (SBP) - were all linked to outcome at univariate analysis. When individually added to the ESC/2008 prototype, only peak SBP and peak O2 pulse significantly improved the model discrimination ability: the ESC + peak SBP prototype had a Harrell's C index of 0.750 and reached the highest likelihood χ² (127.16, p < 0.0001). We evaluated the longest list of CPET prognostic parameters yet studied in HF: ESC-predictors were independent predictors of cardiovascular events, and the ESC prototype showed a convincing predictive capacity, whereas none of the 11 S-predictors enhanced the prognostic performance, except peak SBP.
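
    A sketch of the model-comparison step (refitting a Cox model with one supplementary predictor and comparing Harrell's C and the likelihood-ratio chi-square), using the lifelines package on synthetic data; the predictor names, follow-up times, and event indicators are illustrative assumptions, not the study data.

```python
# Sketch of the model-comparison step: a Cox model with the ESC predictors is
# refit with one supplementary predictor (peak SBP) and Harrell's C and the
# likelihood-ratio chi-square are compared. Data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 749
df = pd.DataFrame({
    "peak_vo2": rng.normal(14, 4, n),          # ml/kg/min (illustrative)
    "ve_vco2_slope": rng.normal(34, 7, n),
    "eov": rng.integers(0, 2, n),              # exertional oscillatory ventilation
    "peak_sbp": rng.normal(150, 25, n),        # mmHg (illustrative)
    "time": rng.exponential(30, n),            # months to event or censoring
    "event": rng.integers(0, 2, n),            # 1 = cardiovascular death / urgent HTx
})

esc_cols = ["peak_vo2", "ve_vco2_slope", "eov", "time", "event"]
base = CoxPHFitter().fit(df[esc_cols], duration_col="time", event_col="event")
extended = CoxPHFitter().fit(df, duration_col="time", event_col="event")

print("ESC model C index:", round(base.concordance_index_, 3))
print("ESC + peak SBP C index:", round(extended.concordance_index_, 3))
print("likelihood-ratio chi2 for adding peak SBP:",
      round(2 * (extended.log_likelihood_ - base.log_likelihood_), 2))
```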

  18. Investigating failure behavior and origins under supposed "shear bond" loading.

    PubMed

    Sultan, Hassam; Kelly, J Robert; Kazemi, Reza B

    2015-07-01

    This study evaluated failure behavior when resin-composite cylinders bonded to dentin fractured under traditional "shear" testing. Failure was assessed by the scaling of failure loads with changes in cylinder radius and by fracture surface analysis. Three stress models were examined, covering failure by bonded area, flat-on-cylinder contact, and bending of a uniformly loaded cantilevered beam. Nine 2-mm occlusal dentin discs for each radius tested were embedded in resin and bonded to resin-composite cylinders; radii (mm) = 0.79375, 1.5875, 2.38125, 3.175. Samples were "shear" tested at 1.0 mm/min. Following testing, disks were finished with silicon carbide paper (240-600 grit) to remove residual composite debris and tested again using different radii. Failure stresses were calculated for "shear", flat-on-cylinder contact, and bending of a uniformly loaded cantilevered beam. Stress equations and constants were evaluated for each model. Fracture-surface analysis was performed. Failure stresses calculated for flat-on-cylinder contact scaled best with its radius relationship. Stress equation constants were constant for failure originating from the outside surface of the loaded cylinders, but not for the bonded-surface-area or cantilevered-beam models. Contact failure stresses were constant over all specimen sizes. Fractography reinforced that failures originated from the loaded cylinder surface and were unrelated to the bonded surface area. "Shear bond" testing does not appear to test the bonded interface. Load/area "stress" calculations have no physical meaning. While failure is related to contact stresses, the mechanism(s) likely involve non-linear damage accumulation, which may only indirectly be influenced by the interface. Copyright © 2015 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
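
    The scaling argument can be illustrated with a short sketch: if a stress model is correct, failure loads should scale with the model's power of the radius (r^2 for the bonded-area "shear" model), so a log-log fit of load against radius discriminates between models. The failure loads below are invented for illustration; only the radii match the study.

```python
# Sketch of the scaling check: if a stress model is right, its computed failure
# stress is constant across cylinder radii, i.e. failure load scales with the
# model's power of the radius (r^2 for the bonded-area "shear" model).
# The loads below are invented; only the radii match the study.
import numpy as np

radii = np.array([0.79375, 1.5875, 2.38125, 3.175])      # mm, as in the study
failure_loads = np.array([55.0, 160.0, 330.0, 540.0])     # N (illustrative)

# Fit failure load ~ radius^k on a log-log scale
k, log_c = np.polyfit(np.log(radii), np.log(failure_loads), 1)
print(f"observed scaling exponent k = {k:.2f}")

# Bonded-area model predicts k = 2 (load = stress * pi * r^2); nominal "shear"
# stresses that drift with radius argue against the bonded-area interpretation.
shear_stress = failure_loads / (np.pi * radii**2)
print("nominal shear stresses (MPa):", np.round(shear_stress, 2))
```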

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather; Wirthlin, Michael

    A variety of fault emulation systems have been created to study single-event effects (SEEs) in static random access memory (SRAM) based field-programmable gate arrays (FPGAs). These systems are useful for augmenting radiation-hardness assurance (RHA) methodologies: verifying the effectiveness of mitigation techniques, understanding error signatures and failure modes in FPGAs, and estimating failure rates. For radiation effects researchers, it is important that these systems properly emulate how SEEs manifest in FPGAs. If the fault emulation system does not mimic the radiation environment, the system will generate erroneous data and incorrect predictions of the behavior of the FPGA in a radiation environment. Validation determines whether the emulated faults are reasonable analogs to the radiation-induced faults. In this study we present methods for validating fault emulation systems and provide several examples of validated FPGA fault emulation systems.

  20. Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks

    PubMed Central

    Mahjoub, Reem K.; Elleithy, Khaled

    2017-01-01

    The actor nodes are the spine of wireless sensor and actor networks (WSANs); they collaborate to perform a specific task in an unverified and uneven environment. There is therefore a possibility of a high failure rate in such unfriendly scenarios due to several factors, such as power consumption of devices, electronic circuit failure, software errors in nodes, or physical impairment of the actor nodes, as well as inter-actor connectivity problems. It is thus extremely important to discover the failure of a cut-vertex actor and the resulting network disjointness in order to improve the Quality of Service (QoS). In this paper, we propose an Efficient Actor Recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node. In addition, it replaces the critical node with a backup node prior to complete node failure, which helps balance the network performance. The packets are handled using a Network Integration and Message Forwarding (NIMF) algorithm that determines the source of packet forwarding: either actor or sensor. This decision-making capability of the algorithm controls the packet-forwarding rate to maintain the network for a longer time. Furthermore, to handle the routing strategy properly, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm is deployed to decide the priority of the packets to be forwarded based on the significance of the information available in the packet. To validate the effectiveness of the proposed EAR paradigm, the proposed algorithms were tested using OMNET++ simulation. PMID:28420102
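
    The critical-node (cut-vertex) detection step at the heart of NMCND can be sketched with a graph library: an articulation point is an actor whose failure disconnects the inter-actor network. The topology below is illustrative and networkx is assumed; this is not the paper's implementation.

```python
# Sketch of cut-vertex (critical actor) detection: in the inter-actor
# connectivity graph, an articulation point is a node whose failure disconnects
# the network, so it is the node to back up before it fails.
# Uses networkx; the topology is illustrative only.
import networkx as nx

# Illustrative inter-actor links: actors 3 and 4 bridge two clusters
G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)])

critical = set(nx.articulation_points(G))
print("critical (cut-vertex) actors:", critical)

# Sanity check: removing a critical actor disconnects the remaining network
for node in sorted(critical):
    H = G.copy()
    H.remove_node(node)
    print(f"remove actor {node}: network connected = {nx.is_connected(H)}")
```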
