Sample records for reliable causing continual

  1. Heroic Reliability Improvement in Manned Space Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    System reliability can be significantly improved by a strong continued effort to identify and remove all the causes of actual failures. Newly designed systems often have unexpectedly high failure rates, which can be reduced by successive design improvements until the final operational system has an acceptable failure rate. There are many causes of failures and many ways to remove them. New systems may have poor specifications, design errors, or mistaken operations concepts. Correcting unexpected problems as they occur can produce large early gains in reliability. Improved technology in materials, components, and design approaches can increase reliability. Reliability growth is achieved by repeatedly operating the system until it fails, identifying the failure cause, and fixing the problem. The failure rate reduction that can be obtained depends on the number and the failure rates of the correctable failures. Under the strong assumption that the failure causes can be removed, the decline in overall failure rate can be predicted. If a failure occurs at the rate of lambda per unit time, the expected time before the failure occurs and can be corrected is 1/lambda, the Mean Time Before Failure (MTBF). Finding and fixing a less frequent failure with the rate of lambda/2 per unit time requires twice as long, a time of 1/(2 lambda). Cutting the failure rate in half therefore requires doubling the test and redesign time while finding and eliminating the failure causes. Reducing the failure rate significantly requires a heroic reliability improvement effort.
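The failure-rate arithmetic described above can be sketched in a few lines (a minimal illustration, assuming exponential failure times and that each observed failure cause, once found, is fully removed; `time_to_halve` is a name chosen here, not from the paper):

```python
def time_to_halve(lam, halvings):
    """Total expected test time to successively find and fix failure
    causes at rates lam, lam/2, lam/4, ...; each step's expected wait
    is the MTBF 1/rate, so each halving doubles the wait."""
    total = 0.0
    rate = lam
    for _ in range(halvings):
        total += 1.0 / rate   # expected time to observe this failure
        rate /= 2.0           # after the fix, the remaining rate is halved
    return total

# Halving the rate 3 times from lambda = 1 per 1000 hours costs
# 1000 + 2000 + 4000 = 7000 hours of test and redesign time.
print(time_to_halve(1 / 1000, 3))  # 7000.0
```

The geometric growth of the wait at each halving is the quantitative core of the "heroic effort" claim: every further factor-of-two improvement costs as much test time as all previous improvements combined.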

  2. Reliability Growth of Tactical Coolers at CMC Electronics Cincinnati: 1/5-Watt Cooler Test Report

    NASA Astrophysics Data System (ADS)

    Kuo, D. T.; Lody, T. D.

    2004-06-01

    CMC Electronics Cincinnati (CMC) is conducting a reliability growth program to extend the life of tactical Stirling-cycle cryocoolers. The continuous product improvement process consists of testing production coolers to failure, determining the root cause, incorporating improvements, and verifying them. The most recent life data for the 1/5-Watt Cooler (Model B512B) are presented with a discussion of leading root causes and potential improvements. The mean time to failure (MTTF) of the coolers was found to be 22,552 hours, with the root causes of failure attributed to the accumulation of methane and carbon dioxide in the cooler and the wear of the piston.

  3. Continuation of down-hole geophysical testing for rock sockets.

    DOT National Transportation Integrated Search

    2013-11-01

    Site characterization for the design of deep foundations is crucial for ensuring a reliable and economic substructure design, as unanticipated site conditions can cause significant problems and disputes during construction. Traditional invasive explo...

  4. A study on reliability of power customer in distribution network

    NASA Astrophysics Data System (ADS)

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin

    2017-05-01

    The existing power supply reliability index system is oriented to the power system without considering the actual electricity availability on the customer side. In addition, it cannot reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of power customers is defined and its evaluation requirements are derived. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of power customers in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has shown that the reliability index system and evaluation method for power customers are reasonable and effective.
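The entropy weighting mentioned above can be illustrated with the classical entropy weight method (a sketch under simplifying assumptions; the paper's "improved" entropy method and its punishment weighting principle are not specified in the abstract):

```python
import math

def entropy_weights(matrix):
    """Classical entropy weight method (sketch only).
    matrix[i][j] is the positive value of index j for customer i;
    indexes whose values are more spread out carry more information
    and therefore receive larger weights."""
    m = len(matrix)
    n = len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [x / s for x in col]
        # Shannon entropy, normalized to [0, 1] by log(m)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1 - e)
    total = sum(divergence)
    return [d / total for d in divergence]

# A uniform column carries no information and gets ~zero weight:
w = entropy_weights([[1.0, 10], [1.0, 30], [1.0, 20]])  # ~[0, 1]
```

The weights sum to one, so they can be combined directly with the seven customer indexes and two contrast indexes into a single composite reliability score.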

  5. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
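A reliability growth model of the kind described can be sketched with the Crow-AMSAA power law N(t) = lam * t**beta, where beta < 1 indicates a decreasing failure rate and beta near 1 a constant rate (an illustrative least-squares fit on synthetic times; not the paper's specific model, and Crow-AMSAA practice typically uses maximum likelihood):

```python
import math

def growth_exponent(failure_times):
    """Least-squares fit of beta in N(t) = lam * t**beta from cumulative
    failure times, regressing log N against log t.
    beta < 1 suggests reliability growth (shuttle-like behavior);
    beta ~ 1 suggests a constant failure rate (ISS-like behavior)."""
    xs = [math.log(t) for t in failure_times]
    ys = [math.log(i + 1) for i in range(len(failure_times))]  # N(t_i) = i + 1
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Doubling inter-failure gaps (10, 20, 40, 80, 160) show growth: beta < 1.
# Constant gaps (10, 20, 30, 40, 50) give beta = 1: constant failure rate.
b_growth = growth_exponent([10, 30, 70, 150, 310])
b_const = growth_exponent([10, 20, 30, 40, 50])
```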

  6. Detection and Prevention of Arrhythmias During Space Flight

    NASA Technical Reports Server (NTRS)

    Pillai, Dilip; Rosenbaum, David; Liszka, Kathy; York, David; Mackin, Michael; Lichter, Michael

    2004-01-01

    Objectives of this research include: determine if orthogonal lead sets can correct artifactual ECG changes caused by microgravity-induced alterations in cardiac position; determine if markers of susceptibility to SCD (TWA and QT restitution) can be reliably measured during space flight; and determine the effects of continuous microgravity on markers of susceptibility to SCD.

  7. Continuous- and discrete-time stimulus sequences for high stimulus rate paradigm in evoked potential studies.

    PubMed

    Wang, Tao; Huang, Jiang-hua; Lin, Lin; Zhan, Chang'an A

    2013-01-01

    To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using high stimulus rate (HSR) paradigm, it is critical to design the stimulus sequences of appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequence for HSR paradigm, which is independent of the discrete electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of the continuous-time stimulus sequences and the impacts of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that the continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of experimental systems, and their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for HSR paradigm and by revealing the relationship between the reliability of AEPs and sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in traditional manner for the HSR paradigm.
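The reported reciprocal relationship between AEP recovery error and sampling frequency can be illustrated with a simple least-squares fit of err ≈ a/fs + b (synthetic data; the coefficients and exact functional form here are invented for illustration, not taken from the paper):

```python
def fit_reciprocal(fs, err):
    """Least-squares fit of err ~ a * (1/fs) + b by linearizing in
    u = 1/fs, the reciprocal form reported for AEP error versus
    sampling frequency."""
    u = [1.0 / f for f in fs]
    n = len(u)
    mu, me = sum(u) / n, sum(err) / n
    a = (sum((ui - mu) * (ei - me) for ui, ei in zip(u, err))
         / sum((ui - mu) ** 2 for ui in u))
    return a, me - a * mu

# Data generated exactly as err = 50/fs + 0.2 is recovered exactly:
fs = [250, 500, 1000, 2000]
err = [50.0 / f + 0.2 for f in fs]
a, b = fit_reciprocal(fs, err)  # a ~ 50, b ~ 0.2
```

Under such a fit, the error floor b is what remains even at high sampling frequencies, which is consistent with the paper's point that continuous-time sequence design, not just faster sampling, improves AEP reliability.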

  8. Temperature and nutrient effects on periphyton associated bacterial communities in continuous flow-through estuarine mesocosms

    EPA Science Inventory

    Nutrient pollution is a leading cause of water quality impairments and degraded aquatic ecosystem condition. Reliable and reproducible indicators of ecosystem condition are needed to help manage nutrient pollution. The diatom component of periphyton has been used as a water qua...

  9. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
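The attenuation and its correction can be sketched with a toy simulation (assuming the classical additive measurement-error model; the correction divides the naive slope by the reliability ratio, taken here as known rather than estimated from a reliability study):

```python
import random

# Regression dilution under x_obs = x_true + e: the naive slope is
# attenuated by the reliability ratio ICC = var(x_true) / var(x_obs).
# All numbers below are illustrative.
random.seed(1)
n = 10000
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [x + random.gauss(0, 1) for x in x_true]  # error variance 1
y = [2.0 * x for x in x_true]                     # true slope 2.0

def slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
            / sum((u - mx) ** 2 for u in xs))

naive = slope(x_obs, y)       # attenuated toward 2.0 * ICC = 1.0
icc = 1.0 / (1.0 + 1.0)       # var_true / (var_true + var_error)
corrected = naive / icc       # close to the true slope 2.0
```

In practice the ICC is unknown and must itself be estimated from repeated measurements, which is exactly why the article emphasizes the design and size of the reliability study.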

  10. A reliable sewage quality abnormal event monitoring system.

    PubMed

    Li, Tianling; Winnel, Melissa; Lin, Hao; Panther, Jared; Liu, Chang; O'Halloran, Roger; Wang, Kewen; An, Taicheng; Wong, Po Keung; Zhang, Shanqing; Zhao, Huijun

    2017-09-15

    With the closing of the water loop through purified recycled water, wastewater becomes part of the source water, requiring a reliable wastewater quality monitoring system (WQMS) to manage the wastewater source and mitigate potential health risks. However, the development of a reliable WQMS is fatally constrained by severe contamination and biofouling of sensors due to the hostile analytical environment of wastewaters, especially raw sewage, which challenges the limits of existing sensing technologies. In this work, we report a technological solution that enables the development of a WQMS for real-time abnormal event detection with high reliability and practicality. A vectored high-flow hydrodynamic self-cleaning approach and a dual-sensor self-diagnostic concept are adopted for the WQMS to effectively counter critical sensor failure issues caused by contamination and biofouling and to ensure the integrity of the sensing data. The performance of the WQMS has been evaluated over a 3-year trial period at different sewage catchment sites across three Australian states. It has been demonstrated that the developed WQMS is capable of continuously operating in raw sewage for a prolonged period of up to 24 months without maintenance or failure, signifying its high reliability and practicality. The demonstrated capability of the WQMS to reliably acquire real-time wastewater quality information advances the development of effective wastewater source management systems. The reported self-cleaning and self-diagnostic concepts should be applicable to other online water quality monitoring systems, opening a new way to counter the common reliability and stability issues caused by sensor contamination and biofouling. Copyright © 2017 Elsevier Ltd. All rights reserved.
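The dual-sensor self-diagnostic concept can be sketched as a cross-check between two co-located sensors measuring the same quantity (an illustration only; the `diagnose` function, its thresholds, and the fault logic are assumptions, not the system's actual algorithm):

```python
# Two co-located sensors should track each other; sustained disagreement
# flags a probable sensor fault (fouling/contamination) rather than a
# real water-quality event, which both sensors would see together.

def diagnose(pair_readings, rel_tol=0.15, fault_run=3):
    """pair_readings: list of (sensor_a, sensor_b) samples over time.
    Returns 'fault' if the sensors disagree beyond rel_tol for
    fault_run consecutive samples, else 'ok'."""
    run = 0
    for a, b in pair_readings:
        ref = max(abs(a), abs(b), 1e-9)  # guard against division by zero
        if abs(a - b) / ref > rel_tol:
            run += 1
            if run >= fault_run:
                return "fault"
        else:
            run = 0
    return "ok"

print(diagnose([(7.1, 7.0), (7.2, 7.1), (7.0, 7.2)]))  # ok
print(diagnose([(7.1, 4.0), (7.2, 3.9), (7.0, 3.5)]))  # fault
```

The design point is that a genuine abnormal event moves both sensors together, so persistent divergence is evidence against the data's integrity rather than against the water.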

  11. Salmonella chronic carriage: epidemiology, diagnosis and gallbladder persistence

    PubMed Central

    Gunn, John S.; Marshall, Joanna M.; Baker, Stephen; Dongol, Sabina; Charles, Richelle C.; Ryan, Edward T.

    2014-01-01

    Typhoid (enteric fever) remains a major cause of morbidity and mortality worldwide, causing over 21 million new infections annually, with the majority of deaths occurring in young children. As typhoid fever-causing Salmonella have no known environmental reservoir, the chronic, asymptomatic carrier state is thought to be a key feature of continued maintenance of the bacterium within human populations. In spite of the importance of this disease to public health, our understanding of the molecular mechanisms that catalyze carriage, as well as our ability to reliably identify and treat the Salmonella carrier state, have only recently begun to advance. PMID:25065707

  12. Multi-version software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1989-01-01

    A number of experimental and theoretical issues associated with the practical use of multi-version software to provide run-time tolerance to software faults were investigated. A specialized tool was developed and evaluated for measuring testing coverage for a variety of metrics. The tool was used to collect information on the relationships between software faults and the coverage provided by the testing process as measured by different metrics (including data flow metrics). Considerable correlation was found between the coverage provided by some higher metrics and the elimination of faults in the code. Back-to-back testing continued to be used as an efficient mechanism for the removal of uncorrelated faults and common-cause faults of variable span. Work also continued on software reliability estimation methods based on non-random sampling and on the relationship between software reliability and the code coverage provided through testing. New fault tolerance models were formulated. Simulation studies of the Acceptance Voting and Multi-stage Voting algorithms were completed, and it was found that these two schemes for software fault tolerance are superior in many respects to some commonly used schemes. Particularly encouraging are the safety properties of the Acceptance Voting scheme.
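The multi-version idea can be sketched with a simple exact-match majority voter (illustrative only; the Acceptance Voting scheme studied in the paper additionally applies an acceptance test to each version's output before voting, which is omitted here):

```python
from collections import Counter

def majority_vote(outputs):
    """Exact-match majority voter over N independently developed
    versions. Returns the agreed value, or None if no strict
    majority exists (the vote fails safe rather than guessing)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

print(majority_vote([42, 42, 41]))  # 42: one faulty version is outvoted
print(majority_vote([1, 2, 3]))     # None: no majority, vote fails
```

Common-cause faults are exactly the case this simple voter cannot handle: if two of three versions share a fault and agree on the same wrong answer, the majority is wrong, which is why the paper studies correlated faults and alternative voting schemes.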

  13. Cuff for Blood-Vessel Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Shimizu, M.

    1982-01-01

    Pressure within a blood vessel is measured by a new cufflike device without penetration of the vessel. The device continuously monitors blood pressure for up to 6 months or longer without harming the vessel. It is especially useful for vessels smaller than 4 or 5 millimeters in diameter. Invasive methods damage the vessel wall, disturb blood flow, and cause clotting. They do not always give reliable pressure measurements over prolonged periods.

  14. The Modified Reasons for Smoking Scale: factorial structure, validity and reliability in pregnant smokers.

    PubMed

    De Wilde, Katrien Sophie; Tency, Inge; Boudrez, Hedwig; Temmerman, Marleen; Maes, Lea; Clays, Els

    2016-06-01

    Smoking during pregnancy can cause several maternal and neonatal health risks, yet a considerable number of pregnant women continue to smoke. The objectives of this study were to test the factorial structure, validity and reliability of the Dutch version of the Modified Reasons for Smoking Scale (MRSS) in a sample of smoking pregnant women and to understand reasons for continued smoking during pregnancy. A longitudinal design was used. Data from 97 pregnant smokers were collected during prenatal consultations. Structural equation modelling was performed to assess the construct validity of the MRSS: an exploratory factor analysis was conducted, followed by a confirmatory factor analysis. Test-retest reliability (<16 weeks and 32-34 weeks of pregnancy) and internal consistency were assessed using the intraclass correlation coefficient and Cronbach's alpha, respectively. To verify concurrent validity, Mann-Whitney U-tests were performed examining associations between the MRSS subscales and nicotine dependence, daily consumption, depressive symptoms and intention to quit. We found a factorial structure for the MRSS of 11 items within five subscales, in order of importance: tension reduction, addiction, pleasure, habit and social function. Results for internal consistency and test-retest reliability were good to acceptable. There were significant associations of nicotine dependence with tension reduction and addiction, and of daily consumption with addiction and habit. Validity and reliability of the MRSS were shown in a sample of pregnant smokers. Tension reduction was the most important reason for continued smoking, followed by pleasure and addiction. Although the score for nicotine dependence was low, addiction was an important reason for continued smoking during pregnancy; therefore, nicotine replacement therapy could be considered. Half of the respondents experienced depressive symptoms. Hence, it is important to identify those women who need more specialized care, which can include not only smoking cessation counselling but also treatment for depression. © 2016 John Wiley & Sons, Ltd.
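The internal-consistency analysis mentioned in this abstract can be illustrated by computing Cronbach's alpha directly from item scores (toy data, not the MRSS items):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items[j] is the list of scores on item j across respondents."""
    k = len(items)
    n = len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Perfectly correlated items give alpha ~ 1:
a_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
# Imperfectly correlated items give 0 < alpha < 1:
a_partial = cronbach_alpha([[1, 2, 3], [1, 2, 4]])
```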

  15. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing large numbers of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity analysis and testing combined with Bayesian statistical analysis.
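One common Bayesian approach of the kind alluded to is a Beta-Binomial update that combines a prior reliability estimate with test results (a sketch only; the paper's similarity-based method is not reproduced here, and the prior and test counts below are invented):

```python
def posterior_reliability(prior_a, prior_b, successes, failures):
    """Beta(a, b) prior on the per-demand success probability,
    updated with Binomial test results; returns the posterior mean.
    The prior can encode engineering analysis or similarity to
    previously fielded hardware."""
    a = prior_a + successes
    b = prior_b + failures
    return a / (a + b)

# Optimistic prior Beta(9, 1) (mean 0.9) plus 18 successes, 2 failures:
print(posterior_reliability(9, 1, 18, 2))  # 27/30 = 0.9
```

The appeal over classical demonstration testing is visible in the formula: prior engineering evidence enters as pseudo-counts, so fewer physical tests are needed to reach a given posterior confidence.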

  16. 78 FR 21879 - Improving 9-1-1 Reliability; Reliability and Continuity of Communications Networks, Including...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... maps? What are the public safety and homeland security implications of public disclosure of key network... 13-33] Improving 9-1-1 Reliability; Reliability and Continuity of Communications Networks, Including... improve the reliability and resiliency of the Nation's 9-1-1 networks. The Notice of Proposed Rulemaking...

  17. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Reliability criteria. 75.42 Section 75...) CONTINUOUS EMISSION MONITORING Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous emission monitoring system, the owner or operator shall...

  18. Epidemiological Study of Mild Traumatic Brain Injury Sequelae Caused by Blast Exposure During Operations Iraqi Freedom and Enduring Freedom

    DTIC Science & Technology

    2012-10-01

    Principal Investigator September, 2008 20 (see: VCU sub-award) David X. Cifu, MD Co-Investigator September, 2008 5 (VCU sub-award) Jessica...proven reliability: Wechsler Test of Adult Reading (WTAR, pre-morbid IQ estimate),(Mathias, Bowden, Bigler, & Rosenfeld, 2007) Conners Continuous...II (CVLT-II) (learning and working memory),(Vanderploeg et al., 2005) Wechsler Adult Intelligence Scale III (WAIS-III) items: Digit Symbol Coding

  19. The reliability of cause-of-death coding in The Netherlands.

    PubMed

    Harteloh, Peter; de Bruin, Kim; Kardaun, Jan

    2010-08-01

    Cause-of-death statistics are a major source of information for epidemiological research or policy decisions. Information on the reliability of these statistics is important for interpreting trends in time or differences between populations. Variations in coding the underlying cause of death could hinder the attribution of observed differences to determinants of health. Therefore we studied the reliability of cause-of-death statistics in The Netherlands. We performed a double coding study. Death certificates from the month of May 2005 were coded again in 2007. Each death certificate was coded manually by four coders. Reliability was measured by calculating agreement between coders (intercoder agreement) and by calculating the consistency of each individual coder in time (intracoder agreement). Our analysis covered 10,833 death certificates. The intercoder agreement of four coders on the underlying cause of death was 78%. In 2.2% of the cases, coders agreed on a change of the code assigned in 2005. The (mean) intracoder agreement of four coders was 89%. Agreement was associated with the specificity of the ICD-10 code (chapter, three digits, four digits), the age of the deceased, the number of coders and the number of diseases reported on the death certificate. The reliability of cause-of-death statistics turned out to be high (>90%) for major causes of death such as cancers and acute myocardial infarction. For chronic diseases, such as diabetes and renal insufficiency, reliability was low (<70%). The reliability of cause-of-death statistics varies by ICD-10 code/chapter. A statistical office should provide coders with (additional) rules for coding diseases with a low reliability and evaluate these rules regularly. Users of cause-of-death statistics should exercise caution when interpreting causes of death with a low reliability.
Studies of reliability should take into account the number of coders involved and the number of codes on a death certificate.
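The intercoder agreement reported above is a raw percentage; a standard chance-corrected complement for a pair of coders is Cohen's kappa (the ICD-10-style codes below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders assigning one code per case:
    observed agreement corrected for the agreement expected by
    chance from each coder's marginal code frequencies."""
    n = len(codes_a)
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n  # observed
    ca, cb = Counter(codes_a), Counter(codes_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)           # by chance
    return (po - pe) / (1 - pe)

# 4 of 6 cases agree; chance-corrected agreement is lower than 4/6:
a = ["I21", "C34", "E14", "I21", "N18", "C34"]
b = ["I21", "C34", "E14", "I25", "N18", "J44"]
print(cohens_kappa(a, b))  # 0.6
```

Raw percent agreement overstates reliability when a few codes dominate the certificates, which is one reason kappa-style statistics are preferred in double-coding studies like this one.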

  20. Reliability analysis of instrument design of noninvasive bone marrow disease detector

    NASA Astrophysics Data System (ADS)

    Su, Yu; Li, Ting; Sun, Yunlong

    2016-02-01

    Bone marrow is an important hematopoietic organ, and bone marrow lesions (BMLs) may cause a variety of complications with a high death rate and short survival time. Early detection and follow-up care are particularly important, but current diagnosis methods rely on bone marrow biopsy/puncture, which has significant limitations: it is invasive, complex to operate, high risk, and discontinuous. A non-invasive, safe, easily operated, continuous monitoring technology is highly needed. We therefore designed a device for detecting bone marrow lesions based on near-infrared spectroscopy. We then fully tested its reliability, including sensitivity, specificity, signal-to-noise ratio (SNR), and stability. Here, we report this sequence of reliability test experiments, the experimental results, and the subsequent data analysis. The instrument was shown to be very sensitive, distinguishing concentration differences of less than 0.002, with good linearity, stability, and a high SNR. These reliability-test data support the promise of our novel instrument for clinical diagnosis and surgery guidance in the detection of BMLs.

  1. Engineering report on the OAO-2 Wisconsin experiment package

    NASA Technical Reports Server (NTRS)

    Bendell, C. B.

    1972-01-01

    The continued useful operation of the OAO-2 Wisconsin Experiment Package (WEP) for almost three years after its December 1968 launch is evidence of a superior engineering accomplishment. Reliability features of the experiment concept and design which have contributed to its long life are presented. Data anomalies and partial failures are summarized along with conclusions regarding their causes. The thermal, vacuum and radiation effects of the space environment are shown to be minimal and quite localized within the WEP.

  2. Seeking order amidst chaos: a systematic review of classification systems for causes of stillbirth and neonatal death, 2009-2014.

    PubMed

    Leisher, Susannah Hopkins; Teoh, Zheyi; Reinebrant, Hanna; Allanson, Emma; Blencowe, Hannah; Erwich, Jan Jaap; Frøen, J Frederik; Gardosi, Jason; Gordijn, Sanne; Gülmezoglu, A Metin; Heazell, Alexander E P; Korteweg, Fleurisca; Lawn, Joy; McClure, Elizabeth M; Pattinson, Robert; Smith, Gordon C S; Tunçalp, Ӧzge; Wojcieszek, Aleena M; Flenady, Vicki

    2016-10-05

    Each year, about 5.3 million babies die in the perinatal period. Understanding of causes of death is critical for prevention, yet there is no globally acceptable classification system. Instead, many disparate systems have been developed and used. We aimed to identify all systems used or created between 2009 and 2014, with their key features, including extent of alignment with the International Classification of Diseases (ICD) and variation in features by region, to inform the World Health Organization's development of a new global approach to classifying perinatal deaths. A systematic literature review (CINAHL, EMBASE, Medline, Global Health, and PubMed) identified published and unpublished studies and national reports describing new classification systems or modifications of existing systems for causes of perinatal death, or that used or tested such systems, between 2009 and 2014. Studies reporting ICD use only were excluded. Data were independently double-extracted (except from non-English publications). Subgroup analyses explored variation by extent and region. Eighty-one systems were identified as new, modifications of existing systems, or having been used between 2009 and 2014, with an average of ten systems created/modified each year. Systems had widely varying characteristics: (i) comprehensiveness (40 systems classified both stillbirths and neonatal deaths); (ii) extent of use (systems were created in 28 countries and used in 40; 17 were created for national use; 27 were widely used); (iii) accessibility (three systems available in e-format); (iv) underlying cause of death (64 systems required a single cause of death); (v) reliability (10 systems tested for reliability, with overall Kappa scores ranging from .35-.93); and (vi) ICD alignment (17 systems used ICD codes). Regional databases were not searched, so system numbers may be underestimated. Some non-differential misclassification of systems was possible. 
The plethora of systems in use, and continuing system development, hamper international efforts to improve understanding of causes of death. Recognition of the features of currently used systems, combined with a better understanding of the drivers of continued system creation, may help the development of a truly effective global system.

  3. Solar powered oxygen systems in remote health centers in Papua New Guinea: a large scale implementation effectiveness trial.

    PubMed

    Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa'avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa

    2017-06-01

    Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG), and hypoxaemia is the major complication causing death in childhood pneumonia. Hypoxaemia is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low- and middle-income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult-to-access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by a lack of reliable power, staff training, and other basic services. We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation as the considerations and steps involved have wider implications for health systems in other countries.
The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post-intervention. Taking a continuous quality improvement approach can be transformational for remote health services.

  4. Strategy for continuous improvement in IC manufacturability, yield, and reliability

    NASA Astrophysics Data System (ADS)

    Dreier, Dean J.; Berry, Mark; Schani, Phil; Phillips, Michael; Steinberg, Joe; DePinto, Gary

    1993-01-01

    Continual improvements in yield, reliability and manufacturability measure a fab and ultimately result in Total Customer Satisfaction. A new organizational and technical methodology for continuous defect reduction has been established in a formal feedback loop, which relies on yield and reliability, failed bit map analysis, analytical tools, inline monitoring, cross functional teams and a defect engineering group. The strategy requires the fastest detection, identification and implementation of possible corrective actions. Feedback cycle time is minimized at all points to improve yield and reliability and reduce costs, essential for competitiveness in the memory business. Payoff was a 9.4X reduction in defectivity and a 6.2X improvement in reliability of 256 K fast SRAMs over 20 months.

  5. Effectiveness of different approaches to disseminating traveler information on travel time reliability.

    DOT National Transportation Integrated Search

    2014-01-01

    The second Strategic Highway Research Program (SHRP 2) Reliability program aims to improve trip time reliability by reducing the frequency and effects of events that cause travel times to fluctuate unpredictably. Congestion caused by unreliable, or n...

  6. The Reliability and Validity of Discrete and Continuous Measures of Psychopathology: A Quantitative Review

    ERIC Educational Resources Information Center

    Markon, Kristian E.; Chmielewski, Michael; Miller, Christopher J.

    2011-01-01

    In 2 meta-analyses involving 58 studies and 59,575 participants, we quantitatively summarized the relative reliability and validity of continuous (i.e., dimensional) and discrete (i.e., categorical) measures of psychopathology. Overall, results suggest an expected 15% increase in reliability and 37% increase in validity through adoption of a…

  7. A solid criterion based on strict LMI without invoking equality constraint for stabilization of continuous singular systems.

    PubMed

    Zhang, Xuefeng; Chen, YangQuan

    2017-11-01

    The paper considers the stabilization of linear continuous singular systems using strict linear matrix inequalities (LMIs) without invoking equality constraints, and proposes a complete and effective LMI formulation. The criterion is a necessary and sufficient condition; its feasible solutions can be obtained directly with the LMI toolbox, making it much more tractable and reliable in numerical simulation than existing results, which involve positive semi-definite LMIs with equality constraints. The most important property of the proposed criterion is that it overcomes the invalidity caused by the singularity of Ω = PE^T + SQ in the stabilization of singular systems. Two counterexamples are presented to illustrate the disadvantages of existing stabilization conditions for continuous singular systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
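The paper's singular-system criterion needs a semidefinite-programming solver, but the classical Lyapunov stability test it generalizes (for ordinary, non-singular systems) can be sketched with plain linear algebra. This is an illustrative analogue only, not the authors' strict-LMI condition:

```python
import numpy as np

def lyapunov_solve(A, Q):
    """Solve A P + P A^T = -Q as a linear system via Kronecker products
    (row-major vec convention: vec(A X) = kron(A, I) vec(X),
    vec(X A^T) = kron(I, A) vec(X))."""
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

def is_hurwitz(A):
    """Classical Lyapunov test: A is stable (Hurwitz) iff the solution P
    of A P + P A^T = -I is symmetric positive definite."""
    A = np.asarray(A, dtype=float)
    P = lyapunov_solve(A, np.eye(A.shape[0]))
    P = (P + P.T) / 2.0  # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(P) > 0))
```

For singular systems (E singular in E x' = A x), this test no longer applies directly, which is precisely the gap the paper's strict-LMI formulation addresses.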

  8. Unconventional Rotor Power Response to Yaw Error Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreck, S. J.; Schepers, J. G.

    Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.

  9. Effects of solar flares on the ionosphere of Mars.

    PubMed

    Mendillo, Michael; Withers, Paul; Hinson, David; Rishbeth, Henry; Reinisch, Bodo

    2006-02-24

    All planetary atmospheres respond to the enhanced x-rays and ultraviolet (UV) light emitted from the Sun during a flare. Yet only on Earth are observations so continuous that the consequences of these essentially unpredictable events can be measured reliably. Here, we report observations of solar flares, causing up to 200% enhancements to the ionosphere of Mars, as recorded by the Mars Global Surveyor in April 2001. Modeling the altitude dependence of these effects requires that relative enhancements in the soft x-ray fluxes far exceed those in the UV.

  10. Reliability of Task-Based fMRI for Preoperative Planning: A Test-Retest Study in Brain Tumor Patients and Healthy Controls

    PubMed Central

    Morrison, Melanie A.; Churchill, Nathan W.; Cusimano, Michael D.; Schweizer, Tom A.; Das, Sunit; Graham, Simon J.

    2016-01-01

    Background Functional magnetic resonance imaging (fMRI) continues to develop as a clinical tool for patients with brain cancer, offering data that may directly influence surgical decisions. Unfortunately, routine integration of preoperative fMRI has been limited by concerns about reliability. Many pertinent studies have been undertaken involving healthy controls, but work involving brain tumor patients has been limited. To develop fMRI fully as a clinical tool, it will be critical to examine these reliability issues among patients with brain tumors. The present work is the first to extensively characterize differences in activation map quality between brain tumor patients and healthy controls, including the effects of tumor grade and the chosen behavioral testing paradigm on reliability outcomes. Method Test-retest data were collected for a group of low-grade (n = 6) and high-grade glioma (n = 6) patients, and for matched healthy controls (n = 12), who performed motor and language tasks during a single fMRI session. Reliability was characterized by the spatial overlap and displacement of brain activity clusters, BOLD signal stability, and the laterality index. Significance testing was performed to assess differences in reliability between the patients and controls, and low-grade and high-grade patients; as well as between different fMRI testing paradigms. Results There were few significant differences in fMRI reliability measures between patients and controls. Reliability was significantly lower when comparing high-grade tumor patients to controls, or to low-grade tumor patients. The motor task produced more reliable activation patterns than the language tasks, as did the rhyming task in comparison to the phonemic fluency task. Conclusion In low-grade glioma patients, fMRI data are as reliable as in healthy control subjects. For high-grade glioma patients, further investigation is required to determine the underlying causes of reduced reliability. 
To maximize reliability outcomes, testing paradigms should be carefully selected to generate robust activation patterns. PMID:26894279

  11. 78 FR 70163 - Communication of Operational Information between Natural Gas Pipelines and Electric Transmission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-22

    ... a continued high level of reliability of interstate natural gas pipelines and that this will, in turn, ensure a continued high level of reliability of the electric transmission grid.\\40\\ Consumers...

  12. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  13. Demonstrating the Safety and Reliability of a New System or Spacecraft: Incorporating Analyses and Reviews of the Design and Processing in Determining the Number of Tests to be Conducted

    NASA Technical Reports Server (NTRS)

    Vesely, William E.; Colon, Alfredo E.

    2010-01-01

    Design Safety/Reliability is associated with the probability of no failure-causing faults existing in a design. Confidence in the non-existence of failure-causing faults is increased by performing tests with no failure. Reliability-Growth testing requirements are based on initial assurance and fault detection probability. Using binomial tables generally gives too many required tests compared to reliability-growth requirements. Reliability-Growth testing requirements are based on reliability principles and factors and should be used.
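For context, the standard zero-failure binomial calculation (the approach the abstract argues demands too many tests compared with reliability-growth-based requirements) is a one-liner; the numbers below are illustrative, not from the paper:

```python
import math

def zero_failure_tests(reliability: float, confidence: float) -> int:
    """Smallest n such that n consecutive failure-free tests demonstrate
    `reliability` at the given confidence level: reliability**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# e.g. demonstrating 0.99 reliability at 90% confidence requires 230 failure-free tests
n = zero_failure_tests(0.99, 0.90)
```

The steep growth of n as the required reliability approaches 1 is what makes pure demonstration testing impractical and motivates crediting design analyses and reviews, as the abstract suggests.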

  14. Solar powered oxygen systems in remote health centers in Papua New Guinea: a large scale implementation effectiveness trial

    PubMed Central

    Duke, Trevor; Hwaihwanje, Ilomo; Kaupa, Magdalynn; Karubi, Jonah; Panauwe, Doreen; Sa’avu, Martin; Pulsan, Francis; Prasad, Peter; Maru, Freddy; Tenambo, Henry; Kwaramb, Ambrose; Neal, Eleanor; Graham, Hamish; Izadnegahdar, Rasa

    2017-01-01

    Background Pneumonia is the largest cause of child deaths in Papua New Guinea (PNG). Hypoxaemia is the major complication causing death in childhood pneumonia, and is also a major factor in deaths from many other common conditions, including bronchiolitis, asthma, sepsis, malaria, trauma, perinatal problems, and obstetric emergencies. A reliable source of oxygen therapy can reduce mortality from pneumonia by up to 35%. However, in low and middle income countries throughout the world, improved oxygen systems have not been implemented at large scale in remote, difficult to access health care settings, and oxygen is often unavailable at smaller rural hospitals or district health centers which serve as the first point of referral for childhood illnesses. These hospitals are hampered by lack of reliable power, staff training and other basic services. Methods We report the methodology of a large implementation effectiveness trial involving sustainable and renewable oxygen and power systems in 36 health facilities in remote rural areas of PNG. The methodology is a before-and-after evaluation involving continuous quality improvement and a health systems approach. We describe this model of implementation because the considerations and steps involved have wider implications for health systems in other countries. 
Results The implementation steps include: defining the criteria for where such an intervention is appropriate, assessment of power supplies and power requirements, the optimal design of a solar power system, specifications for oxygen concentrators and other oxygen equipment that will function in remote environments, installation logistics in remote settings, the role of oxygen analyzers in monitoring oxygen concentrator performance, the engineering capacity required to sustain a program at scale, clinical guidelines and training on oxygen equipment and the treatment of children with severe respiratory infection and other critical illnesses, program costs, and measurement of processes and outcomes to support continuous quality improvement. Conclusions This study will evaluate the feasibility and sustainability issues in improving oxygen systems and providing reliable power on a large scale in remote rural settings in PNG, and the impact of this on child mortality from pneumonia over 3 years post–intervention. Taking a continuous quality improvement approach can be transformational for remote health services. PMID:28567280
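The "assessment of power supplies and power requirements" step can be illustrated with a back-of-envelope solar sizing calculation; the load, sun-hours, and derating figures here are hypothetical placeholders, not values from the study:

```python
def solar_array_watts(load_w: float, sun_hours: float = 5.0,
                      derating: float = 0.7) -> float:
    """Peak array wattage needed to run a continuous load from solar plus
    battery storage: daily energy demand divided by usable sun-hours,
    derated for panel, battery, and wiring losses (all figures hypothetical)."""
    daily_wh = load_w * 24.0
    return daily_wh / (sun_hours * derating)

# hypothetical 300 W continuous oxygen-concentrator load
array_w = solar_array_watts(300.0)
```

Even this crude estimate shows why a continuous concentrator load requires an array several times its nameplate wattage once storage losses and limited sun-hours are accounted for.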

  15. Software Fault Tolerance: A Tutorial

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2000-01-01

    Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.

  16. Unconventional Rotor Power Response to Yaw Error Variations

    DOE PAGES

    Schreck, S. J.; Schepers, J. G.

    2014-12-16

    Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.

  17. Extrasystoles: side effect of kangaroo care?

    PubMed

    Kluthe, Christof; Wauer, Roland R; Rüdiger, Mario

    2004-09-01

    To present a previously unpublished reason for an arrhythmic electrocardiogram (ECG) recording during kangaroo care in a preterm infant. Case report. Preterm infant. A preterm infant exhibited cardiac arrhythmia on the ECG monitor during kangaroo care, leading to interruption of kangarooing. Arrhythmia disappeared after placing the baby back into the incubator. The most likely reasons for arrhythmia were excluded. However, arrhythmia reappeared upon continuation of kangaroo care. ECG monitoring revealed the reason for the monitoring error. ECG monitoring during kangaroo care can produce errors because of superimposed electrical activity from the parent. Oxygen saturation represents a more reliable method of monitoring during kangaroo care.

  18. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…

  19. High-power UV-LED degradation: Continuous and cycled working condition influence

    NASA Astrophysics Data System (ADS)

    Arques-Orobon, F. J.; Nuñez, N.; Vazquez, M.; Segura-Antunez, C.; González-Posadas, V.

    2015-09-01

    High-power (HP) UV-LEDs can replace UV lamps for real-time fluoro-sensing applications by allowing portable and autonomous systems. However, HP UV-LEDs are not a mature technology, and there are still open issues regarding their performance evolution over time. This paper presents a reliability study of 3 W UV-LEDs, with special focus on LED degradation under two working conditions: continuous and cycled (30 s ON and 30 s OFF). Accelerated life tests are developed to evaluate the influence of temperature and electrical working conditions on high-power LED degradation; the predominant failure mechanism is degradation of the package. An analysis that includes dynamic thermal and optical HP UV-LED measurements has been performed. Static thermal and stress simulation analysis with the finite element method (FEM) identifies the causes of package degradation. Accelerated life test results prove that HP UV-LEDs working in the cycled condition perform better than those working in the continuous condition.
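Accelerated life tests of this kind are commonly analyzed with the Arrhenius temperature-acceleration model; a minimal sketch follows, where the activation energy and temperatures are hypothetical examples, not the paper's fitted values:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor: how much faster a thermally activated
    failure mechanism proceeds at the stress temperature than at the use
    temperature, given an activation energy in eV."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use_k - 1.0 / t_stress_k))

# hypothetical: Ea = 0.7 eV, 25 C use vs. 85 C stress
af = arrhenius_af(0.7, 25.0, 85.0)
```

An acceleration factor near 100 for these illustrative inputs shows why a few thousand stress hours can stand in for years of field operation, provided the stress does not introduce failure mechanisms absent at use conditions.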

  20. Thermographic In-Situ Process Monitoring of the Electron Beam Melting Technology used in Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinwiddie, Ralph Barton; Dehoff, Ryan R; Lloyd, Peter D

    2013-01-01

    Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of the component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges of continuously imaging inside the chamber during the EBM process. Here, the utilization of a continuously moving Mylar film canister is described. Results will be presented related to in-situ process monitoring and how this technique results in improved mechanical properties and reliability of the process.

  1. 76 FR 23812 - Reliability and Continuity of Communications Networks, Including Broadband Technologies; Effects...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... FEDERAL COMMUNICATIONS COMMISSION [PS Docket Nos. 11-60 and 10-92; ET Docket No. 06-119] Reliability and Continuity of Communications Networks, Including Broadband Technologies; Effects on Broadband Communications Networks of Damage or Failure of Network Equipment or Severe Overload; Independent Panel Reviewing...

  2. A systematic review of statistical methods used to test for reliability of medical instruments measuring continuous variables.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor

    2013-06-01

    Reliability measures precision, or the extent to which test results can be replicated. This is the first ever systematic review to identify statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by comparison of means (8 studies, 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. This study finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue, and to be able to correctly perform analysis in reliability studies.
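The statistic the review found most often, the intra-class correlation coefficient, is computed from ANOVA mean squares. A minimal sketch of the one-way random-effects, single-measure form ICC(1,1) follows (echoing the review's point that the ICC type, and ideally a confidence interval, should always be reported):

```python
import numpy as np

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects, single-measure ICC(1,1) from ANOVA mean
    squares; `ratings` has shape (n_subjects, k_measurements)."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    # between-subject and within-subject mean squares
    ms_between = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Other ICC forms (two-way models, average-measure versions) give different values on the same data, which is why unreported ICC types, as in most of the reviewed studies, make results hard to interpret.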

  3. Aerospace Safety Advisory Panel

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of the Panel's activities are presented in a set of findings and recommendations. Highlighted here are both improvements in NASA's safety and reliability activities and specific areas where additional gains might be realized. One area of particular concern involves the curtailment or elimination of Space Shuttle safety and reliability enhancements. Several findings and recommendations address this area of concern, reflecting the opinion that safety and reliability enhancements are essential to the continued successful operation of the Space Shuttle. It is recommended that a comprehensive and continuing program of safety and reliability improvements in all areas of Space Shuttle hardware/software be considered an inherent component of ongoing Space Shuttle operations.

  4. Reliability Evaluation for Clustered WSNs under Malware Propagation

    PubMed Central

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C.; Yu, Shui; Cao, Qiying

    2016-01-01

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node’s MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN. PMID:27294934
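The parallel-serial-parallel evaluation mentioned above can be sketched with elementary reliability algebra; the node counts and per-node reliabilities below are hypothetical stand-ins for the MTTF-derived values in the paper:

```python
def parallel(rs):
    """Reliability of a parallel group: it fails only if every member fails."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

def series(rs):
    """Reliability of a series chain: it works only if every stage works."""
    p = 1.0
    for r in rs:
        p *= r
    return p

# hypothetical clustered WSN: nodes in parallel within a cluster,
# clusters in series along a route, redundant routes in parallel to the sink
node_r = 0.9
cluster_r = parallel([node_r] * 3)   # 3 nodes per cluster
route_r = series([cluster_r] * 4)    # 4 clusters per route
wsn_r = parallel([route_r] * 2)      # 2 redundant routes
```

With these illustrative numbers, redundancy inside clusters and across routes lifts the network reliability well above that of any single route, which is the qualitative effect the paper's evaluation quantifies.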

  5. Reliability Evaluation for Clustered WSNs under Malware Propagation.

    PubMed

    Shen, Shigen; Huang, Longjun; Liu, Jianhua; Champion, Adam C; Yu, Shui; Cao, Qiying

    2016-06-10

    We consider a clustered wireless sensor network (WSN) under epidemic-malware propagation conditions and solve the problem of how to evaluate its reliability so as to ensure efficient, continuous, and dependable transmission of sensed data from sensor nodes to the sink. Facing the contradiction between malware intention and continuous-time Markov chain (CTMC) randomness, we introduce a strategic game that can predict malware infection in order to model a successful infection as a CTMC state transition. Next, we devise a novel measure to compute the Mean Time to Failure (MTTF) of a sensor node, which represents the reliability of a sensor node continuously performing tasks such as sensing, transmitting, and fusing data. Since clustered WSNs can be regarded as parallel-serial-parallel systems, the reliability of a clustered WSN can be evaluated via classical reliability theory. Numerical results show the influence of parameters such as the true positive rate and the false positive rate on a sensor node's MTTF. Furthermore, we validate the method of reliability evaluation for a clustered WSN according to the number of sensor nodes in a cluster, the number of clusters in a route, and the number of routes in the WSN.

  6. [Intraoperative monitoring of oxygen tissue pressure: Applications in vascular neurosurgery].

    PubMed

    Arikan, Fuat; Vilalta, Jordi; Torne, Ramon; Chocron, Ivette; Rodriguez-Tesouro, Ana; Sahuquillo, Juan

    2014-01-01

    Ischemic lesions related to surgical procedures are a major cause of postoperative morbidity in patients with cerebral vascular disease. There are different systems of neuromonitoring to detect intraoperative ischemic events, including intraoperative monitoring of oxygen tissue pressure (PtiO2). The aim of this article was to describe, through the discussion of 4 cases, the usefulness of intraoperative PtiO2 monitoring during vascular neurosurgery. In presenting these cases, we demonstrate that monitoring PtiO2 is a reliable way to detect early ischemic events during surgical procedures. Continuous monitoring of PtiO2 in an area at risk allows the surgeon to resolve the cause of the ischemic event before it evolves to an established cerebral infarction. Copyright © 2014 Sociedad Española de Neurocirugía. Published by Elsevier España. All rights reserved.

  7. Updated Reliability Evaluation of V730 Transmission

    DOT National Transportation Integrated Search

    1983-11-01

    This report culminates a two-year review of factors concerning the reliability of the Detroit Diesel Allison V730 automatic three-speed transmission for urban transit buses. This report is a continuing examination of the transmission's reliability. M...

  8. Looking forward and back to relapse: implications for research and practice.

    PubMed

    Connors, G J; Longabaugh, R; Miller, W R

    1996-12-01

    In this commentary, the three principal investigators of the Relapse Replication and Extension Project (RREP) reflect on clinical and research implications of study findings from the three collaborating sites. A primary purpose of RREP was to study the reliability and validity of a taxonomy of relapse antecedents originally proposed by Marlatt two decades ago. Under the best of research conditions, with extensive training and practice, it was difficult to achieve reliability of coding with the original three-level system, although with only two levels of classification more reasonable albeit variable reliability was found. Modifications may improve the taxonomy's reliability, but RREP data indicate that a more appropriate strategy is to measure possible antecedents of relapse by continuous scales such as those provided by Annis, Heather and Litman. There is reasonably consistent evidence for two common antecedents of relapse: negative emotional states, and positive emotional states in a social context. Antecedents of relapse show only modest consistency within individuals from one occasion to the next. The causes to which clients attribute relapses may exert a significant effect on future drinking episodes. Stable and internal attributions, such as are commonly associated with a dispositional disease model, may serve to perpetuate relapse. From the RREP studies, the availability of coping skills appears to be a potent protective factor, and ineffective coping a consistent predictor of relapse. Implications for clinical research and practice are considered.

  9. PV Module Reliability Workshop | Photovoltaic Research | NREL

    Science.gov Websites

    The 2015 PV Module Reliability Workshop (PVMRW), held Tuesday, February 24, 2015 (Chair: Michael Kempe), continued in the tradition of this annual workshop. Presentation topics included an aging PV system in Quebec, Canada (Alex Bradley, Tanya Dhir, Yves Poissant) and solar panel design factors.

  10. Arthritis of the thumb and digits: current concepts.

    PubMed

    Bernstein, Richard A

    2015-01-01

    Osteoarthritis of the hand continues to be a problem in an aging population and affects the proximal and distal interphalangeal, metacarpophalangeal, and carpometacarpal joints in the hands. Heberden nodes develop in the distal interphalangeal joints and typically present as a deformed and enlarged joint and can cause pain. Surgery rarely is necessary because functional difficulties are uncommon; however, there may be problems if the metacarpophalangeal and proximal interphalangeal joints are involved because cartilage destruction generates pain and causes weakness and motion loss. Implant arthroplasty typically can improve pain but does not reliably improve range of motion, and complication and revision rates are substantial. Arthrodesis continues as a treatment for digital osteoarthritis, but the surgeon must balance the risks of complications with the benefits of improved patient outcomes. The opposable thumb, which is critical for hand dexterity and strength, can be severely disabled by basal joint arthritis. The complex architecture of the basal joint continues to be defined by its relationship to the surrounding bony and ligamentous anatomy and its effect on the trapeziometacarpal joint. Nonsurgical treatment may be beneficial, but surgical options, including arthroscopy, osteotomy, and arthroplasty, should be considered if nonsurgical management fails. Prosthetic arthroplasty has a historically poor record; therefore, trapeziectomy remains the hallmark of current reconstructive techniques. Ligament reconstruction and tendon interposition arthroplasty are the most commonly performed surgical procedures, but hematoma distraction arthroplasty and various methods of suspensionplasty also are currently used.

  11. Techniques to evaluate the importance of common cause degradation on reliability and safety of nuclear weapons.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    2011-05-01

    As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons that could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety, to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population with between 100 and 5000 items we applied the screening process and conclude the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations that are determined to be of concern using the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.

  12. Arrhythmia during extracorporeal shock wave lithotripsy.

    PubMed

    Zeng, Z R; Lindstedt, E; Roijer, A; Olsson, S B

    1993-01-01

    A prospective study of arrhythmia during extracorporeal shock wave lithotripsy (ESWL) was performed in 50 patients, using an EDAP LT01 piezoelectric lithotriptor. The 12-lead standard ECG was recorded continuously for 10 min before and during treatment. One or more atrial and/or ventricular ectopic beats occurred during ESWL in 15 cases (30%). The occurrence of arrhythmia was similar during right-sided and left-sided treatment. One patient developed multifocal ventricular premature beats and ventricular bigeminy; another had cardiac arrest for 13.5 s. It was found that various irregularities of the heart rhythm can be caused even by treatment with a lithotriptor using piezoelectric energy to create the shock wave. No evidence was found, however, that the shock wave itself rather than vagal activation and the action of sedo-analgesia was the cause of the arrhythmia. For patients with severe underlying heart disease and a history of complex arrhythmia, we suggest that the ECG be monitored during treatment. In other cases, we have found continuous monitoring of oxygen saturation and pulse rate with a pulse oximeter to be perfectly reliable for raising the alarm when depression of respiration and vaso-vagal reactions occur.

  13. Detecting persons concealed in a vehicle

    DOEpatents

    Tucker, Jr., Raymond W.

    2005-03-29

    An improved method for detecting the presence of humans or animals concealed within a vehicle uses a combination of the continuous wavelet transform and a ratio-based energy calculation to determine whether the motion detected using seismic sensors placed on the vehicle is due to the presence of a heartbeat within the vehicle or is the result of motion caused by external factors such as the wind. The method performs well in the presence of light to moderate ambient wind levels, producing far fewer false alarm indications. The new method significantly improves the range of ambient environmental conditions under which human presence detection systems can reliably operate.
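
    The general idea of combining a continuous wavelet transform with a ratio-based energy statistic can be sketched as follows. This is a generic illustration only: the patent does not publish its wavelet, scale set, or threshold, so the Ricker wavelet, scale values, and ratio definition here are all assumptions.

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def heartbeat_energy_ratio(signal, scales):
    """Energy of CWT coefficients at heartbeat-like scales, relative to
    total signal energy. Scale choices and any decision threshold are
    illustrative assumptions, not taken from the patent."""
    signal = np.asarray(signal, float)
    total = np.sum(signal ** 2) + 1e-12  # guard against an all-zero trace
    band = 0.0
    for a in scales:
        w = ricker(int(10 * a), a)
        coeffs = np.convolve(signal, w, mode="same")  # one row of the CWT
        band += np.sum(coeffs ** 2)
    return band / (len(scales) * total)
```

    A detector of this kind would compare the ratio against a calibrated threshold, with wind-driven motion tending to spread energy across scales rather than concentrating it at heartbeat periods.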

  14. The Role of Phase Changes in TiO2/Pt/TiO2 Filaments

    NASA Astrophysics Data System (ADS)

    Bíró, Ferenc; Hajnal, Zoltán; Dücső, Csaba; Bársony, István

    2018-04-01

    This work analyses the role of phase changes in TiO2/Pt/TiO2 layer stacks for micro-heater applications with regard to their stability and reliable operation. The polycrystalline Pt layer wrapped in a TiO2 adhesion layer underwent continuous recrystallisation in self-heating operation, causing a drift in the resistance (R) versus temperature (T) performance. Simultaneously, the TiO2 adhesion layer also deteriorates at high temperature through phase changes from amorphous to anatase and rutile crystallite formation, which not only influences the Pt diffusion in different migration phenomena, but also reduces the cross section of the Pt heater wire. Thorough scanning electron microscopy, energy dispersive spectroscopy, cross-sectional transmission electron microscopy (XTEM) and electron beam diffraction analysis of the structures operated at increasing temperature revealed the elemental structural processes leading to the instabilities and the accelerated degradation, resulting in rapid breakdown of the heater wire. Based on stability and reliability criteria, the conditions for safe operation of these layer structures could be determined.

  15. Test-retest reliability of a continuous glucose monitoring system in individuals with type 2 diabetes.

    PubMed

    Terada, Tasuku; Loehr, Sarah; Guigard, Emmanuel; McCargar, Linda J; Bell, Gordon J; Senior, Peter; Boulé, Normand G

    2014-08-01

    This study determined the test-retest reliability of a continuous glucose monitoring system (CGMS) (iPro™2; Medtronic, Northridge, CA) under standardized conditions in individuals with type 2 diabetes (T2D). Fourteen individuals with T2D spent two nonconsecutive days in a calorimetry unit. On both days, meals, medication, and exercise were standardized. Glucose concentrations were measured continuously by CGMS, from which daily mean glucose concentration (GLU(mean)), time spent in hyperglycemia (t(>10.0 mmol/L)), and meal, exercise, and nocturnal mean glucose concentrations, as well as glycemic variability (SD(w), percentage coefficient of variation [%cv(w)], mean amplitude of glycemic excursions [MAGEc, MAGE(ave), and MAGE(abs.gos)], and continuous overlapping net glycemic action [CONGA(n)]) were estimated. Absolute and relative reliabilities were investigated using the coefficient of variation (CV) and intraclass correlation, respectively. Relative reliability ranged from 0.77 to 0.95 (P<0.05) for GLU(mean) and meal, exercise, and nocturnal glycemia, with CV ranging from 3.9% to 11.7%. Despite significant relative reliability (R=0.93; P<0.01), t(>10.0 mmol/L) showed a larger CV (54.7%). Among the different glycemic variability measures, a significant between-day difference was observed in MAGEc, MAGE(ave), CONGA6, and CONGA12. The remaining measures (i.e., SD(w), %cv(w), MAGE(abs.gos), and CONGA1-4) indicated no between-day differences and significant relative reliability. In individuals with T2D, CGMS-estimated glycemic profiles were characterized by high relative and absolute reliability for both daily and shorter-term measurements as represented by GLUmean and meal, exercise, and nocturnal glycemia. Among the different methods to calculate glycemic variability, our results showed SD(w), %cv(w), MAGE(abs.gos), and CONGAn with n ≤ 4 were reliable measures. These results suggest the usefulness of CGMS in clinical trials utilizing repeated measures.
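
    The absolute-reliability statistic used in studies like this, the within-subject coefficient of variation for two repeated measurements, can be computed as below. This is a standard textbook formula, not necessarily the exact variant used in the paper.

```python
import numpy as np

def within_subject_cv(day1, day2):
    """Within-subject coefficient of variation (%) for paired test-retest
    measurements on two days (a common absolute-reliability measure)."""
    x = np.asarray(day1, float)
    y = np.asarray(day2, float)
    sd = np.abs(x - y) / np.sqrt(2)   # SD of two repeated values
    mean = (x + y) / 2                # per-subject mean of the two days
    # root-mean-square of per-subject CVs, expressed as a percentage
    return 100 * np.sqrt(np.mean((sd / mean) ** 2))
```

    Identical measurements on both days give a CV of 0%; a subject measured at 10 and 12 mmol/L on the two days contributes a CV of about 12.9%.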

  16. Using Multivariate Generalizability Theory to Assess the Effect of Content Stratification on the Reliability of a Performance Assessment

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Clauser, Brian E.; Swanson, David B.

    2010-01-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates…

  17. Cause-and-effect mapping of critical events.

    PubMed

    Graves, Krisanne; Simmons, Debora; Galley, Mark D

    2010-06-01

    Health care errors are routinely reported in the scientific and public press and have become a major concern for most Americans. In learning to identify and analyze errors, health care can develop some of the skills of a learning organization, including the concept of systems thinking. Modern experts in improving quality have been working in other high-risk industries since the 1920s, making structured organizational changes through various frameworks for quality methods, including continuous quality improvement and total quality management. When using these tools, it is important to understand systems thinking and the concept of processes within an organization. Within these frameworks of improvement, several tools can be used in the analysis of errors. This article introduces a robust tool with a broad analytical view consistent with systems thinking, called CauseMapping (ThinkReliability, Houston, TX, USA), which can be used to systematically analyze the process and the problem at the same time. Copyright 2010 Elsevier Inc. All rights reserved.

  18. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable, the...

  19. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable, the...

  20. Minimum Control Requirements for Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Boulange, Richard; Jones, Harry

    2002-01-01

    Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can be and are adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these control systems are as reliable as the hardware they control, there are no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous, safe, and reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".

  1. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model, inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
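
    A simple special case of this model can be simulated directly. The sketch below uses the Jelinski-Moranda form, where each remaining fault contributes an identical hazard phi, so the next inter-failure time is exponential with rate (remaining faults) x phi; the paper's model is more general, allowing non-identical per-fault rates.

```python
import random

def simulate_debugging(n_faults, phi, seed=0):
    """Simulate inter-failure times under random debugging: after each
    failure one fault is removed, so the failure rate steps down from
    n_faults*phi toward phi. Later gaps are therefore longer on average."""
    rng = random.Random(seed)
    times = []
    for remaining in range(n_faults, 0, -1):
        times.append(rng.expovariate(remaining * phi))
    return times  # inter-failure times, in debugging order
```

    With 10 faults and phi = 0.1, the first gap averages 1/(10 x 0.1) = 1 time unit while the last averages 1/0.1 = 10, illustrating why demonstrating ultra-high reliability by testing alone takes so long.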

  2. Lifetime evaluation of large format CMOS mixed signal infrared devices

    NASA Astrophysics Data System (ADS)

    Linder, A.; Glines, Eddie

    2015-09-01

    New large scale foundry processes continue to produce reliable products. These new large scale devices continue to use industry best practice to screen for failure mechanisms and validate their long lifetime. The Failure-in-Time analysis in conjunction with foundry qualification information can be used to evaluate large format device lifetimes. This analysis is a helpful tool when zero failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry accepted methods.

  3. Eye of the Beholder: Stage Entrance Behavior and Facial Expression Affect Continuous Quality Ratings in Music Performance

    PubMed Central

    Waddell, George; Williamon, Aaron

    2017-01-01

    Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other “non-musical” features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the “inappropriate” stage entrance made judgments significantly more quickly than those viewing the “appropriate” entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. 
These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time. PMID:28487662

  4. Tiger in the fault tree jungle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, P.

    1976-01-01

    There is yet little evidence of serious efforts to apply formal reliability analysis methods to evaluate, or even to identify, potential common-mode failures (CMF) of reactor safeguard systems. The prospects for event logic modeling in this regard are examined by the primitive device of reviewing actual CMF experience in terms of what the analyst might have perceived a priori. Further insights into the probability and risk aspects of CMFs are sought through consideration of three key likelihood factors: (1) prior probability of the cause ever existing, (2) opportunities for removing the cause, and (3) probability that a CMF cause will be activated by conditions associated with a real system challenge. It was concluded that the principal needs for formal logical discipline in the endeavor to decrease CMF-related risks are to discover and to account for strong ''energetic'' dependency couplings that could arise in the major accidents usually classed as ''hypothetical.'' This application would help focus research, design and quality assurance efforts to cope with major CMF causes. But without extraordinary challenges to the reactor safeguard systems, there must continue to be virtually no statistical evidence pertinent to that class of failure dependencies.

  5. EMT-defibrillation: a recipe for saving lives.

    PubMed

    Paris, P M

    1988-05-01

    Sudden cardiac death is the number-one cause of death in this country. It has long been known that most of these deaths occur outside of the hospital, therefore necessitating an approach to the problem involving prehospital care. The development of advanced life support emergency medical systems has had a dramatic impact on improving survival in selected communities. Most of the country continues to see little result because of our inability to provide timely defibrillation. Automatic external defibrillators now provide a safe, reliable, proven method to increase the number of "saves" in rural, urban, and suburban communities. This new tool, if widely used, will allow us to save scores of "hearts too good to die."

  6. Continuous estimates on the earthquake early warning magnitude by use of the near-field acceleration records

    NASA Astrophysics Data System (ADS)

    Li, Jun; Jin, Xing; Wei, Yongxiang; Zhang, Hongcai

    2013-10-01

    In this article, seismic records from Japan's Kik-net are selected to measure the acceleration, displacement, and effective peak acceleration of each record within a certain time after the P-wave arrival; a continuous estimate of the earthquake early warning magnitude is then obtained through statistical analysis, and the Wenchuan earthquake record is used to check the method. The results show that the reliability of the early warning magnitude continuously increases as more seismic information becomes available. The largest residual occurs when acceleration is used to fit the magnitude, which may be caused by the rich high-frequency components and large dispersion of peak values in acceleration records. Using the effective peak acceleration and peak displacement effectively reduces the influence of these high-frequency components and clearly reduces the dispersion of the magnitude estimates, but peak displacement is easily affected by long-period drift. Among the components, the residual enlargement phenomenon is least evident in the vertical direction, so this article recommends the vertical effective peak acceleration as the preferred input for estimating the early warning magnitude. Checking the method against the Wenchuan strong-motion record shows that it quickly, stably, and accurately estimates the early warning magnitude of that earthquake, indicating that the method is fully applicable to earthquake early warning.

  7. Common Cause Failures and Ultra Reliability

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A common cause failure occurs when several failures have the same origin. Common cause failures are either common event failures, where the cause is a single external event, or common mode failures, where two systems fail in the same way for the same reason. Common mode failures can occur at different times because of a design defect or a repeated external event. Common event failures reduce the reliability of on-line redundant systems but not of systems using off-line spare parts. Common mode failures reduce the dependability of both systems using off-line spare parts and systems using on-line redundancy.
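
    A standard way to quantify how common cause failure erodes redundancy is the beta-factor model, in which a fraction beta of each unit's failure rate is assumed to be common cause. The abstract does not specify a model, so the sketch below is a generic illustration of the effect, not the paper's method.

```python
import math

def duplex_unreliability(lam, beta, t):
    """Probability that an on-line redundant pair fails within mission time t,
    under the beta-factor model: each unit has total failure rate lam, of
    which a fraction beta is a shared common cause event."""
    p_ind = 1 - math.exp(-(1 - beta) * lam * t)  # one unit, independent part
    p_ccf = 1 - math.exp(-beta * lam * t)        # shared common cause event
    # pair fails if both units fail independently, or the common cause strikes
    return 1 - (1 - p_ind ** 2) * (1 - p_ccf)
```

    With lam = 0.001/h and t = 1000 h, the pair's failure probability rises from about 0.40 with no common cause (beta = 0) to about 0.63 when every failure is common cause (beta = 1), i.e. the redundancy benefit disappears.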

  8. Stop Blaming Disasters on Forces Beyond Our Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matalucci, R.V.

    1999-04-09

    As we enter the new millennium, let us recognize that the losses resulting from natural or malevolent events that cause major property damage, severe injuries, and unnecessary death are not always due to forces beyond our control. We can prevent these losses by changing the way we think and act about design and construction projects. New tools, technologies, and techniques can improve structural safety, security, and reliability and protect owners, occupants, and users against loss and casualties. Hurricane Mitch, the African embassy bombings, the ice storms in Canada and the northeastern US last winter, the Oklahoma City bombing, flooding and earthquakes in California, tornadoes and flooding in Florida, and wildfires in the Southwest are threats to the safety and security of the public and the reliability of our constructed environment. Today's engineering design community must recognize these threats and address them in our standards, building codes, and designs. We know that disasters will continue to strike and we must reduce their impact on the public. We must demand and create innovative solutions that assure a higher level of structural performance when disasters strike.

  9. Do Decapod Crustaceans Have Nociceptors for Extreme pH?

    PubMed Central

    Puri, Sakshi; Faulkes, Zen

    2010-01-01

    Background Nociception is the physiological detection of noxious stimuli. Because of its obvious importance, nociception is expected to be widespread across animal taxa and to trigger robust behaviours reliably. Nociception in invertebrates, such as crustaceans, is poorly studied. Methodology/Principal Findings Three decapod crustacean species were tested for nociceptive behaviour: Louisiana red swamp crayfish (Procambarus clarkii), white shrimp (Litopenaeus setiferus), and grass shrimp (Palaemonetes sp.). Applying sodium hydroxide, hydrochloric acid, or benzocaine to the antennae caused no change in behaviour in the three species compared to controls. Animals did not groom the stimulated antenna, and there was no difference in movement of treated individuals and controls. Extracellular recordings of antennal nerves in P. clarkii revealed continual spontaneous activity, but no neurons that were reliably excited by the application of concentrated sodium hydroxide or hydrochloric acid. Conclusions/Significance Previously reported responses to extreme pH are either not consistently evoked across species or were mischaracterized as nociception. There was no behavioural or physiological evidence that the antennae contained specialized nociceptors that responded to pH. PMID:20422026

  10. Reliability Standards of Complex Engineering Systems

    NASA Astrophysics Data System (ADS)

    Galperin, E. M.; Zayko, V. A.; Gorshkalev, P. A.

    2017-11-01

    Production and manufacture play an important role in today's society. Industrial production is nowadays characterized by increasingly complex communications between its parts, so the problem of preventing accidents at a large industrial enterprise becomes especially relevant. In these circumstances, the reliability of enterprise functioning is of particular importance: potential damage caused by an accident at such an enterprise may lead to substantial material losses and, in some cases, even loss of human life. In terms of their reliability, industrial facilities (objects) are divided into simple and complex. Simple objects are characterized by only two conditions: operable and non-operable. A complex object exists in more than two conditions, and the main characteristic here is the stability of its operation. This paper develops a reliability indicator combining set-theoretic methodology and a state space method, both widely used to analyze dynamically developing probability processes. The research also introduces a set of reliability indicators for complex technical systems.

  11. Reliability Considerations for Ultra- Low Power Space Applications

    NASA Technical Reports Server (NTRS)

    White, Mark; Johnston, Allan

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub- micron region and ULP devices are sought after. Technology trends, ULP microelectronics, scaling and performance tradeoffs, reliability considerations, and spacecraft environments will be presented from a ULP perspective for space applications.

  12. Transportation reliability and trip satisfaction.

    DOT National Transportation Integrated Search

    2012-10-01

    Travel delays and associated costs have become a major problem in Michigan over the : past several decades as congestion has continued to increase, creating significant negative : impacts on travel reliability on many roadways throughout the State. T...

  13. Large Area Silicon Sheet by EFG. [quality control and productivity of edge-defined film-fed growth of ribbons

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Influences on ribbon quality which might be caused by the various materials of construction used in the growth furnace were assessed. At the present level of ribbon quality, which has produced 8.5% to 9.5% efficient solar cells, no particular influence of any furnace part was detected. The experiments led to the suspicion that the general environment and the somewhat unoptimized materials handling procedures might be responsible for the current variations in ribbon quality and that, therefore, continued work with this furnace under more stringent environmental conditions and operating procedures could perhaps improve materials quality to some extent. The work on the multiple furnace was continued with two multiple growth runs being performed. In these runs, the melt replenishment system performed poorly, and extensive modifications to it were designed to make reliable melt feeding for the growth of five ribbons possible. Additional characterization techniques for wide ribbons, stress measurements, and growth dynamics experiments are reported.

  14. Semi-continuous anaerobic digestion of extruded OFMSW: Process performance and energetics evaluation.

    PubMed

    Mu, Lan; Zhang, Lei; Zhu, Kongyun; Ma, Jiao; Li, Aimin

    2018-01-01

    Recently, extrusion press treatment has shown promising advantages for effectively separating the organic fraction of municipal solid waste (OFMSW) from mixed MSW, which is critical for its subsequent high-efficiency treatment. In this study, an extruded OFMSW obtained from a demonstration MSW treatment plant was characterized and submitted to a series of semi-continuous anaerobic experiments to examine its biodegradability and process stability. The results indicated that the extruded OFMSW was a desirable substrate with a high biochemical methane potential (BMP), balanced nutrients and reliable stability. For increasing organic loading rates (OLRs), feeding higher volatile solid (VS) contents in the feedstock was much better than shortening the hydraulic retention times (HRTs), although excessively high contents caused low biodegradability due to mass transfer limitations. In the energetics evaluation, a high electricity output of 129.19-156.37 kWh/ton raw MSW was obtained, which was further improved by co-digestion with food waste. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Scientific assessment of accuracy, skill and reliability of ocean probabilistic forecast products.

    NASA Astrophysics Data System (ADS)

    Wei, M.; Rowley, C. D.; Barron, C. N.; Hogan, P. J.

    2016-02-01

    As ocean operational centers increasingly adopt and generate probabilistic forecast products, with their valuable forecast uncertainties, objectively assessing and measuring these complex products for their customers is challenging. The first challenge is how to deal with the huge amount of data from the ensemble forecasts. The second is how to describe the scientific quality of probabilistic products. In fact, accuracy, skill, reliability, and resolution are different attributes of a probabilistic forecast system. We briefly introduce some of the fundamental metrics such as the Reliability Diagram, Reliability, Resolution, Brier Score (BS), Brier Skill Score (BSS), Ranked Probability Score (RPS), Ranked Probability Skill Score (RPSS), Continuous Ranked Probability Score (CRPS), and Continuous Ranked Probability Skill Score (CRPSS). The values and significance of these metrics are demonstrated for forecasts from the US Navy's regional ensemble system with different numbers of ensemble members. The advantages and differences of these metrics are studied and clarified.
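
    Two of the metrics named above have simple closed forms. The Brier score is the mean squared error of probability forecasts against binary outcomes, and the Brier skill score compares it to a reference forecast (climatology by default). The sketch below shows these standard definitions; it is not code from the paper.

```python
import numpy as np

def brier_score(p, o):
    """Brier score: mean squared difference between forecast probabilities p
    and binary outcomes o (0 or 1). Lower is better; 0 is a perfect forecast."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    return np.mean((p - o) ** 2)

def brier_skill_score(p, o, p_ref=None):
    """BSS = 1 - BS / BS_ref. The default reference is climatology (the
    outcome base rate). Positive values beat the reference; 1 is perfect."""
    o = np.asarray(o, float)
    ref = np.full_like(o, o.mean()) if p_ref is None else p_ref
    return 1 - brier_score(p, o) / brier_score(ref, o)
```

    A forecast of 0.5 everywhere against a 50% base rate scores BS = 0.25 and BSS = 0, i.e. no skill over climatology.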

  16. Modelling utility-scale wind power plants. Part 2: Capacity credit

    NASA Astrophysics Data System (ADS)

    Milligan, Michael R.

    2000-10-01

    As the worldwide use of wind turbine generators in utility-scale applications continues to increase, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the second in a two-part series that addresses modelling approaches and results that were obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This second article focuses on wind plant capacity credit as measured with power system reliability indices. Reliability-based methods of measuring capacity credit are compared with wind plant capacity factor. The relationship between capacity credit and accurate wind forecasting is also explored. Published in 2000 by John Wiley & Sons, Ltd.

  17. Digital Sensor Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Ken D.; Quinn, Edward L.; Mauck, Jerry L.

    The nuclear industry has been slow to incorporate digital sensor technology into nuclear plant designs due to concerns with digital qualification issues. However, the benefits of digital sensor technology for nuclear plant instrumentation are substantial in terms of accuracy and reliability. This paper, which refers to a final report issued in 2013, demonstrates these benefits in direct comparisons of digital and analog sensor applications. Improved accuracy results from the superior operating characteristics of digital sensors. These include improvements in sensor accuracy and drift and other related parameters which reduce total loop uncertainty and thereby increase safety and operating margins. An example instrument loop uncertainty calculation for a pressure sensor application is presented to illustrate these improvements. This is a side-by-side comparison of the instrument loop uncertainty for both an analog and a digital sensor in the same pressure measurement application. Similarly, improved sensor reliability is illustrated with a sample calculation for determining the probability of failure on demand, an industry standard reliability measure. This looks at equivalent analog and digital temperature sensors to draw the comparison. The results confirm substantial reliability improvement with the digital sensor, due in large part to the ability to continuously monitor the health of a digital sensor such that problems can be immediately identified and corrected. This greatly reduces the likelihood of a latent failure condition of the sensor at the time of a design basis event. Notwithstanding the benefits of digital sensors, there are certain qualification issues that are inherent with digital technology and these are described in the report. One major qualification impediment for digital sensor implementation is software common cause failure (SCCF).

  18. Pyrotechnic system failures: Causes and prevention

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.

    1988-01-01

    Although pyrotechnics have successfully accomplished many critical mechanical spacecraft functions, such as ignition, severance, jettisoning and valving (excluding propulsion), failures continue to occur. Provided is a listing of 84 failures of pyrotechnic hardware with completed design over a 23-year period, compiled informally by experts from every NASA Center, as well as the Air Force Space Division and the Naval Surface Warfare Center. Analyses are presented as to when and where these failures occurred, their technical source or cause, followed by the reasons why and how these kinds of failures persist. The major contributor is a fundamental lack of understanding of the functional mechanisms of pyrotechnic devices and systems, followed by not recognizing pyrotechnics as an engineering technology, insufficient manpower with hands-on experience, too few test facilities, and inadequate guidelines and specifications for design, development, qualification and acceptance. Recommendations are made on both a managerial and technical basis to prevent failures, increase reliability, improve existing and future designs, and develop the technology to meet future requirements.

  19. Structural Testing of the Blade Reliability Collaborative Effect of Defect Wind Turbine Blades

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desmond, M.; Hughes, S.; Paquette, J.

    Two 8.3-meter (m) wind turbine blades intentionally constructed with manufacturing flaws were tested to failure at the National Wind Technology Center (NWTC) at the National Renewable Energy Laboratory (NREL) south of Boulder, Colorado. Two blades were tested; one blade was manufactured with a fiberglass spar cap and the second blade was manufactured with a carbon fiber spar cap. Test loading primarily consisted of flap fatigue loading of the blades, with one quasi-static ultimate load case applied to the carbon fiber spar cap blade. Results of the test program were intended to provide the full-scale test data needed for validation of model and coupon test results of the effect of defects in wind turbine blade composite materials. Testing was part of the Blade Reliability Collaborative (BRC) led by Sandia National Laboratories (SNL). The BRC seeks to develop a deeper understanding of the causes of unexpected blade failures (Paquette 2012), and to develop methods to enable blades to survive to their expected operational lifetime. Recent work in the BRC includes examining and characterizing flaws and defects known to exist in wind turbine blades from manufacturing processes (Riddle et al. 2011). Recent results from reliability databases show that wind turbine rotor blades continue to be a leading contributor to turbine downtime (Paquette 2012).

  20. Inter-rater reliability of the PIPES tool: validation of a surgical capacity index for use in resource-limited settings.

    PubMed

    Markin, Abraham; Barbero, Roxana; Leow, Jeffrey J; Groen, Reinou S; Perlman, Greg; Habermann, Elizabeth B; Apelgren, Keith N; Kushner, Adam L; Nwomeh, Benedict C

    2014-09-01

    In response to the need for a simple, rapid means of quantifying surgical capacity in low-resource settings, Surgeons OverSeas (SOS) developed the personnel, infrastructure, procedures, equipment and supplies (PIPES) tool. The present investigation assessed the inter-rater reliability of the PIPES tool. As part of a government assessment of surgical services in Santa Cruz, Bolivia, the PIPES tool was translated into Spanish and applied in interviews with physicians at 31 public hospitals. An additional interview was conducted with nurses at a convenience sample of 25 of these hospitals. Physician and nurse responses were then compared to generate an estimate of reliability. For dichotomous survey items, inter-rater reliability between physicians and nurses was assessed using Cohen's kappa statistic and percent agreement. The Pearson correlation coefficient was used to assess agreement for continuous items. Cohen's kappa was 0.46 for the infrastructure, 0.43 for the procedures, 0.26 for the equipment, and 0 for the supplies sections. The median correlation coefficient was 0.91 for continuous items. Correlation was 0.79 for the PIPES index, and ranged from 0.32 to 0.98 for continuous response items. Reliability of the PIPES tool was moderate for the infrastructure and procedures sections, fair for the equipment section, and poor for the supplies section when comparing surgeons' responses to nurses' responses, an extremely rigorous test of reliability. These results indicate that the PIPES tool is an effective measure of surgical capacity but that the equipment and supplies sections may need to be revised.
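
The record above summarizes agreement with Cohen's kappa for dichotomous items. As a minimal illustrative sketch (with hypothetical ratings, not the SOS study data), two-rater kappa can be computed like this:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over paired categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement under independence of the two raters.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no answers from a physician and a nurse at 10 hospitals.
physician = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
nurse     = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
print(round(cohens_kappa(physician, nurse), 2))  # moderate agreement
```

Kappa corrects raw percent agreement for the agreement two independent raters would reach by chance, which is why a section can show high percent agreement yet near-zero kappa.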

  1. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement is causing aging and declining reliability in power facilities, and the aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetimes. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness according to a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable, leveled future cash outflow can maintain reliability by lowering the percentage of replacements caused by fatal failures.
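
The renewal-theory forecast described above can be sketched with a small Monte Carlo simulation. This is a hedged illustration under an assumed normal lifetime distribution and invented fleet ages; it does not reproduce the paper's life-cycle cost model or priority formula:

```python
import random

def expected_replacements(ages, mean_life, sd, horizon, trials=2000, seed=1):
    """Monte Carlo renewal sketch: expected number of replacements across the
    fleet in each future year, for facilities of the given current ages."""
    rng = random.Random(seed)
    per_year = [0.0] * horizon
    for _ in range(trials):
        for age in ages:
            # Time until the current (already aged) unit fails...
            t = max(0.0, rng.gauss(mean_life, sd) - age)
            # ...then the renewal process restarts with each fresh lifetime.
            while t < horizon:
                per_year[int(t)] += 1
                t += max(1.0, rng.gauss(mean_life, sd))
    return [count / trials for count in per_year]

# Hypothetical fleet: 60 units installed 30-40 years ago, 40-year mean life.
ages = [30, 32, 34, 36, 38, 40] * 10
profile = expected_replacements(ages, mean_life=40, sd=8, horizon=20)
print([round(x, 1) for x in profile[:5]])
```

A planner would compare such a profile against a leveled annual budget to see in which years unfunded replacements (and hence fatal-failure-driven replacements) would accumulate.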

  2. Child safety driver assistant system and its acceptance.

    PubMed

    Quendler, Elisabeth; Diskus, Christian; Pohl, Alfred; Buchegger, Thomas; Beranek, Ernst; Boxberger, Josef

    2009-01-01

    Farming machinery incidents frequently cause the injury and death of children on farms worldwide. The two main causes of this problem are the driver's view being restricted by construction and/or environmental factors and insufficient risk awareness by children and parents. It is difficult to separate working and living areas on family farms, and the adult supervision necessary to avoid work accidents is often lacking. For this reason, additional preventive measures are required to reduce the number of crushing incidents. Electronic tools that deliver information about the presence of children in the blind spots surrounding vehicles and their attached machines can be very effective. Such an electronic device must cover all security gaps around operating agricultural vehicles and their attached machines, ensure collision-free stopping in risk situations, and be inexpensive. Wireless sensor networks and electrical near-field electronic components are suited to the development of low-cost wireless detection devices. For reliable detection in a versatile environment, children must continuously wear a low-power ("slumbering") transponder. This means that children and adults must have a high acceptance of the device, which can be improved by easy usability, design, and service quality. The developed demonstrator achieved detection distances of up to 40 m in the far field and 2.5 m in the near field. Far-field sensor detection weaknesses identified by user-friendliness tests are false alarms in farmyards and around buildings. The detection distance and reliability of the near-field sensor varied with the design of the attached machines' metallic components.

  3. Update on the Human Broad Tapeworm (Genus Diphyllobothrium), Including Clinical Relevance

    PubMed Central

    Scholz, Tomáš; Garcia, Hector H.; Kuchta, Roman; Wicht, Barbara

    2009-01-01

    Summary: Tapeworms (Cestoda) continue to be an important cause of morbidity in humans worldwide. Diphyllobothriosis, a human disease caused by tapeworms of the genus Diphyllobothrium, is the most important fish-borne zoonosis caused by a cestode parasite. Up to 20 million humans are estimated to be infected worldwide. Besides humans, definitive hosts of Diphyllobothrium include piscivorous birds and mammals, which represent a significant zoonotic reservoir. The second intermediate hosts include both freshwater and marine fish, especially anadromous species such as salmonids. The zoonosis occurs most commonly in countries where the consumption of raw or marinated fish is a frequent practice. Due to the increasing popularity of dishes utilizing uncooked fish, numerous cases of human infections have appeared recently, even in the most developed countries. As many as 14 valid species of Diphyllobothrium can cause human diphyllobothriosis, with D. latum and D. nihonkaiense being the most important pathogens. In this paper, all taxa from humans reported are reviewed, with brief information on their life history and their current distribution. Data on diagnostics, epidemiology, clinical relevance, and control of the disease are also summarized. The importance of reliable identification of human-infecting species with molecular tools (sequences of mitochondrial genes) as well as the necessity of epidemiological studies aimed at determining the sources of infections are pointed out. PMID:19136438

  4. Flexible Transparent Conductive Films with High Performance and Reliability Using Hybrid Structures of Continuous Metal Nanofiber Networks for Flexible Optoelectronics.

    PubMed

    Park, Juyoung; Hyun, Byung Gwan; An, Byeong Wan; Im, Hyeon-Gyun; Park, Young-Geun; Jang, Junho; Park, Jang-Ung; Bae, Byeong-Soo

    2017-06-21

    We report an Ag nanofiber-embedded glass-fabric reinforced hybrimer (AgNF-GFRHybrimer) composite film as a reliable and high-performance flexible transparent conducting film. The continuous AgNF network provides superior optoelectronic properties of the composite film by minimizing transmission loss and junction resistance. In addition, the excellent thermal/chemical stability and mechanical durability of the GFRHybrimer matrix provides enhanced mechanical durability and reliability of the final AgNF-GFRHybrimer composite film. To demonstrate the availability of our AgNF-GFRHybrimer composite as a transparent conducting film, we fabricated a flexible organic light-emitting diode (OLED) device on the AgNF-GFRHybrimer film; the OLED showed stable operation during a flexing.

  5. Analysis of strain gage reliability in F-100 jet engine testing at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Holanda, R.

    1983-01-01

    A reliability analysis was performed on 64 strain gage systems mounted on the 3 rotor stages of the fan of a YF-100 engine. The strain gages were used in a 65 hour fan flutter research program which included about 5 hours of blade flutter. The analysis was part of a reliability improvement program. Eighty-four percent of the strain gages survived the test and performed satisfactorily. A post test analysis determined most failure causes. Five failures were caused by open circuits, three failed gages showed elevated circuit resistance, and one gage circuit was grounded. One failure was undetermined.

  6. Technology developments toward 30-year-life of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1984-01-01

    As part of the United States National Photovoltaics Program, the Jet Propulsion Laboratory's Flat-Plate Solar Array Project (FSA) has maintained a comprehensive reliability and engineering sciences activity aimed at understanding the reliability attributes of terrestrial flat-plate photovoltaic arrays and at deriving the analysis and design tools necessary to achieve module designs with a 30-year useful life. The considerable progress to date stemming from the ongoing reliability research is discussed, and the major areas requiring continued research are highlighted. The result is an overview of the total array reliability problem and of the available means of achieving high reliability at minimum cost.

  7. The continuing problem of human African trypanosomiasis (sleeping sickness).

    PubMed

    Kennedy, Peter G E

    2008-08-01

    Human African trypanosomiasis, also known as sleeping sickness, is a neglected disease, and it continues to pose a major threat to 60 million people in 36 countries in sub-Saharan Africa. Transmitted by the bite of the tsetse fly, the disease is caused by protozoan parasites of the genus Trypanosoma and comes in two types: East African human African trypanosomiasis caused by Trypanosoma brucei rhodesiense and the West African form caused by Trypanosoma brucei gambiense. There is an early or hemolymphatic stage and a late or encephalitic stage, when the parasites cross the blood-brain barrier to invade the central nervous system. Two critical current issues are disease staging and drug therapy, especially for late-stage disease. Lumbar puncture to analyze cerebrospinal fluid will remain the only method of disease staging until reliable noninvasive methods are developed, but there is no widespread consensus as to what exactly defines biologically central nervous system disease or what specific cerebrospinal fluid findings should justify drug therapy for late-stage involvement. All four main drugs used for human African trypanosomiasis are toxic, and melarsoprol, the only drug that is effective for both types of central nervous system disease, is so toxic that it kills 5% of patients who receive it. Eflornithine, alone or combined with nifurtimox, is being used increasingly as first-line therapy for gambiense disease. There is a pressing need for an effective, safe oral drug for both stages of the disease, but this will require a significant increase in investment for new drug discovery from Western governments and the pharmaceutical industry.

  8. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
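
The MLS idea can be sketched in one dimension (an illustrative toy, not the authors' implementation): every evaluation of the surface solves a small weighted least-squares fit centered on the query point, and a smooth weight function such as a Gaussian is what gives the resulting surface its higher-order continuity.

```python
import math

def mls_linear(x_query, xs, ys, h=0.5):
    """1-D moving least squares: at each query point, fit a weighted linear
    polynomial a + b*x with Gaussian weights centered on the query."""
    w = [math.exp(-((x - x_query) / h) ** 2) for x in xs]
    # Weighted normal equations for the 2x2 system in (a, b).
    s0 = sum(w)
    s1 = sum(wi * x for wi, x in zip(w, xs))
    s2 = sum(wi * x * x for wi, x in zip(w, xs))
    t0 = sum(wi * y for wi, y in zip(w, ys))
    t1 = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    a = (s2 * t0 - s1 * t1) / det
    b = (s0 * t1 - s1 * t0) / det
    return a + b * x_query

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x * x for x in xs]        # sample a smooth response y = x^2
print(mls_linear(0.5, xs, ys))  # local linear fit near x = 0.5
```

Because the weights vary smoothly with the query point, the fitted surface is smooth even though each local fit is only linear; a linear underlying response is reproduced exactly.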

  9. Timely detection and monitoring of oil leakage by satellite optical data.

    NASA Astrophysics Data System (ADS)

    Grimaldi, C. S. L.; Coviello, I.; Lacava, T.; Pergola, N.; Tramutoli, V.

    2009-04-01

    Sea oil pollution can derive from different sources. Accidental releases of oil into the oceans caused by "human errors" (tanker collisions and/or shipwrecks) or natural hazards (hurricanes, landslides, earthquakes) have a remarkable ecological impact on maritime and coastal environments. Hurricane Katrina, for example, hitting oil and gas infrastructure off the US coast, caused the destruction of more than 100 platforms and the release into the sea of more than 10,000 gallons of crude oil. In order to reduce the environmental impact of such technological hazards, timely detection and continuously updated information are fundamental. Satellite remote sensing can give a significant contribution in this direction. Nowadays, SAR (Synthetic Aperture Radar) technology is recognized as the most efficient for oil spill detection and mapping, thanks to the high spatial resolution and all-time/all-weather capability of the present operational sensors. However, due to their current revisit cycles, SAR systems cannot be profitably used for rapid detection or for continuous, near real-time monitoring of these phenomena. Until the COSMO-SkyMed SAR constellation, which will improve SAR observational frequency, becomes fully operational, passive optical sensors on board meteorological satellites, thanks to their high temporal resolution, may represent a suitable alternative for early detection and continuous monitoring of oil spills, provided that adequate and reliable data analysis techniques exist. Recently, an innovative technique for oil spill detection and monitoring, based on the general Robust Satellite Techniques (RST) approach, has been proposed. 
It exploits the multi-temporal analysis of optical data acquired by both AVHRR (Advanced Very High Resolution Radiometer) and MODIS (Moderate Resolution Imaging Spectroradiometer) sensors in order to detect, automatically and timely, the presence of oil spill over the sea surface, trying to minimize the "false-detections" possibly caused by spurious effects (e.g. clouds). In this paper, preliminary results obtained applying the proposed methodology to different test-cases are shown and discussed.

  10. Design of RF MEMS switches without pull-in instability

    NASA Astrophysics Data System (ADS)

    Proctor, W. Cyrus; Richards, Gregory P.; Shen, Chongyi; Skorczewski, Tyler; Wang, Min; Zhang, Jingyan; Zhong, Peng; Massad, Jordan E.; Smith, Ralph

    2010-04-01

    Micro-electro-mechanical systems (MEMS) switches for radio-frequency (RF) signals have certain advantages over solid-state switches, such as lower insertion loss, higher isolation, and lower static power dissipation. Mechanical dynamics can be a determining factor in the reliability of RF MEMS. The RF MEMS ohmic switch discussed in this paper consists of a plate suspended over an actuation pad by four double-cantilever springs. Closing the switch with a simple step actuation voltage typically causes the plate to rebound from its electrical contacts. The rebound interrupts signal continuity and degrades the performance, reliability, and durability of the switch. The switching dynamics are complicated by a nonlinear, electrostatic pull-in instability that causes high accelerations. Slow actuation and tailored voltage control signals can mitigate switch bouncing and the effects of the pull-in instability; however, slow switching speed and overly complex input signals can significantly penalize overall system-level performance. A balanced and optimized alternative switching solution is sought. A step toward one solution is to consider a pull-in-free switch design. In this paper, we determine how simple RC-circuit drive signals and particular structural properties influence the mechanical dynamics of an RF MEMS switch designed without a pull-in instability. The approach is to develop a validated modeling capability and subsequently study switch behavior for variable drive signals and switch design parameters. In support of project development, specifiable design parameters and constraints are provided. Moreover, transient data on RF MEMS switches from laser Doppler velocimetry are provided for model validation tasks. Analysis showed that an RF MEMS switch could feasibly be designed with a single-pulse waveform and no pull-in instability and achieve results comparable to previous waveform designs. 
The switch design could reliably close in a timely manner, with small contact velocity, usually with little to no rebound even when considering manufacturing variability.

  11. Talking about the Automobile Braking System

    NASA Astrophysics Data System (ADS)

    Xu, Zhiqiang

    2017-12-01

    With continuous social progress and rising living standards, automobile manufacturing has developed rapidly and cars have spread to tens of thousands of households; the resulting increase in traffic volume has directly raised the incidence of traffic accidents. The brake system is the guarantee of a car's safety: its technical condition directly affects operational safety and transportation efficiency, so the brake system must be absolutely reliable. A car's braking system is required to provide sufficient braking force, to work reliably under all conditions, and to be light and flexible to operate. Under normal braking, performance should be good and the pedal response sensitive; under emergency braking, the four-wheel stopping distance must not be too long, the car must not pull to one side, and the brakes must not squeal.

  12. Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    NASA Technical Reports Server (NTRS)

    Davis, M. R.; Kamins, M.; Mooz, W. E.

    1978-01-01

    A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.

  13. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
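
The sparing logic described above can be made concrete with a simple Poisson model (a hedged sketch with hypothetical numbers, not the authors' analysis): if a component fails at rate lambda, the number of failures over mission time t is approximately Poisson(lambda*t), and the spare count needed is the smallest k whose Poisson CDF reaches the required confidence.

```python
import math

def prob_spares_sufficient(lam, t, k):
    """P(failures <= k) for a Poisson failure process with rate lam over
    time t: the chance that k spares cover one component type's failures."""
    mu = lam * t
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def spares_needed(lam, t, confidence=0.999):
    """Smallest spare count giving at least the target confidence."""
    k = 0
    while prob_spares_sufficient(lam, t, k) < confidence:
        k += 1
    return k

# Hypothetical component: one failure per 10,000 h, 3-year (26,280 h) mission.
print(spares_needed(1e-4, 26280))
```

This is why the spares mass roughly doubles the system mass only when the original reliability is not too low: the required k grows with lambda*t, so high intrinsic failure rates drive the spare count, and hence mass, up sharply.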

  14. Interrater reliability of the injury reporting of the injury surveillance system used in international athletics championships.

    PubMed

    Edouard, Pascal; Junge, Astrid; Kiss-Polauf, Marianna; Ramirez, Christophe; Sousa, Monica; Timpka, Toomas; Branco, Pedro

    2018-03-01

    The quality of epidemiological injury data depends on the reliability of reporting to an injury surveillance system. Ascertaining whether all physicians/physiotherapists report the same information for the same injury case is of major interest in determining data validity. The aim of this study was therefore to analyse data collection reliability through the inter-rater reliability. Cross-sectional survey. During the 2016 European Athletics Advanced Athletics Medicine Course in Amsterdam, all national medical teams were asked to complete seven virtual case reports on a standardised injury report form using the same definitions and classifications of injuries as the international athletics championships injury surveillance protocol. The completeness of data and the Fleiss' kappa coefficients for the inter-rater reliability were calculated for: sex, age, event, circumstance, location, type, assumed cause and estimated time-loss. Forty-one team physicians and physiotherapists of national medical teams participated in the study (response rate 89.1%). Data completeness was 96.9%. The Fleiss' kappa coefficients were: almost perfect for sex (k=1), injury location (k=0.991), event (k=0.953), circumstance (k=0.942), and age (k=0.870), moderate for type (k=0.507), fair for assumed cause (k=0.394), and poor for estimated time-loss (k=0.155). The injury surveillance system used during international athletics championships provided reliable data for "sex", "location", "event", "circumstance", and "age". More caution should be taken for "assumed cause" and "type", and even more for "estimated time-loss". This injury surveillance system displays satisfactory data quality (reliable data and high data completeness), and thus can be recommended as a tool to collect epidemiological information on injuries during international athletics championships. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
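
Fleiss' kappa, used in the record above, extends two-rater kappa to any number of raters by working from per-item category counts. A minimal sketch with invented counts (not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa. `ratings[i][c]` = number of raters assigning item i
    to category c; every row must sum to the same rater count n."""
    N = len(ratings)                # number of rated items
    n = sum(ratings[0])             # raters per item
    k = len(ratings[0])             # number of categories
    # Per-item pairwise agreement P_i and overall category proportions p_c.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    p_c = [sum(row[c] for row in ratings) / (N * n) for c in range(k)]
    P_bar = sum(P_i) / N            # mean observed agreement
    P_e = sum(p * p for p in p_c)   # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical: 4 virtual injury cases, 5 raters, 3 "assumed cause" categories.
counts = [[5, 0, 0], [3, 2, 0], [2, 2, 1], [0, 4, 1]]
print(round(fleiss_kappa(counts), 3))
```

The usual interpretation bands quoted in the abstract (poor, fair, moderate, almost perfect) are conventional cut-points on this statistic, not properties of the formula itself.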

  15. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient square (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.

  16. Weighted integration of short-term memory and sensory signals in the oculomotor system.

    PubMed

    Deravet, Nicolas; Blohm, Gunnar; de Xivry, Jean-Jacques Orban; Lefèvre, Philippe

    2018-05-01

    Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which support modeling work. Furthermore, we suggest that saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.

  17. Apollo experience report: Reliability and quality assurance

    NASA Technical Reports Server (NTRS)

    Sperber, K. P.

    1973-01-01

    The reliability of the Apollo spacecraft resulted from the application of proven reliability and quality techniques and from sound management, engineering, and manufacturing practices. Continual assessment of these techniques and practices was made during the program, and, when deficiencies were detected, adjustments were made and the deficiencies were effectively corrected. The most significant practices, deficiencies, adjustments, and experiences during the Apollo Program are described in this report. These experiences can be helpful in establishing an effective base on which to structure an efficient reliability and quality assurance effort for future space-flight programs.

  18. Calibration plots for risk prediction models in the presence of competing risks.

    PubMed

    Gerds, Thomas A; Andersen, Per K; Kattan, Michael W

    2014-08-15

    A predicted risk of 17% can be called reliable if it can be expected that the event will occur to about 17 of 100 patients who all received a predicted risk of 17%. Statistical models can predict the absolute risk of an event such as cardiovascular death in the presence of competing risks such as death due to other causes. For personalized medicine and patient counseling, it is necessary to check that the model is calibrated in the sense that it provides reliable predictions for all subjects. There are three often encountered practical problems when the aim is to display or test if a risk prediction model is well calibrated. The first is lack of independent validation data, the second is right censoring, and the third is that when the risk scale is continuous, the estimation problem is as difficult as density estimation. To deal with these problems, we propose to estimate calibration curves for competing risks models based on jackknife pseudo-values that are combined with a nearest neighborhood smoother and a cross-validation approach to deal with all three problems. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Chip-scale thermal management of high-brightness LED packages

    NASA Astrophysics Data System (ADS)

    Arik, Mehmet; Weaver, Stanton

    2004-10-01

    The efficiency and reliability of solid-state lighting devices depend strongly on successful thermal management. Light emitting diodes (LEDs) are a strong candidate for next-generation general illumination applications. LEDs are making great strides in terms of lumen performance and reliability; however, the barrier to widespread use in general illumination still remains the cost, or $/Lumen. LED packaging designers are pushing LED performance to its limits. This is resulting in increased drive currents, and thus the need for lower-thermal-resistance packaging designs. As the power density continues to rise, the integrity of the package's electrical and thermal interconnect becomes extremely important. Experimental results with high-brightness LED packages show that chip attachment defects can cause significant thermal gradients across the LED chips, leading to premature failures. A numerical study was also carried out with parametric models to understand the variation of the chip active-layer temperature profile due to bump defects. Finite element techniques were utilized to evaluate the effects of localized hot spots at the chip active layer. The importance of "zero defects" in one of the more popular interconnect schemes, the "epi-down" soldered flip-chip configuration, is investigated and demonstrated.

  20. Experiences with Two Reliability Data Collection Efforts (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, S.; Lantz, E.

    2013-08-01

    This presentation, given by NREL at the Wind Reliability Experts Meeting in Albuquerque, New Mexico, outlines the causes of wind plant operational expenditures and gearbox failures and describes NREL's efforts to create a gearbox failure database.

  1. 76 FR 66057 - North American Electric Reliability Corporation; Order Approving Regional Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    ... system conditions when the system experiences dynamic events such as low frequency oscillations, or... R8 requires that dynamic disturbance recorders function continuously. To capture system disturbance... recording capability necessary to monitor the response of the Bulk-Power System to system disturbances...

  2. Correcting Fallacies in Validity, Reliability, and Classification

    ERIC Educational Resources Information Center

    Sijtsma, Klaas

    2009-01-01

    This article reviews three topics from test theory that continue to raise discussion and controversy and capture test theorists' and constructors' interest. The first topic concerns the discussion of the methodology of investigating and establishing construct validity; the second topic concerns reliability and its misuse, alternative definitions…

  3. Pipeline monitoring with unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kochetkova, L. I.

    2018-05-01

    Pipeline leakage during transportation of combustible substances can lead to explosions and fires, causing loss of life and destruction of production and residential facilities. Continuous pipeline monitoring allows leaks to be identified in due time and measures for their elimination to be taken quickly. The paper describes a solution for identifying pipeline leakage using unmanned aerial vehicles. Spectral analysis of the input RGB signal is recommended for identifying pipeline damage. Multi-zone digital images allow potential spills of oil hydrocarbons, as well as possible soil pollution, to be detected. The method of multi-temporal digital images within the visible region makes it possible to detect changes in soil morphology for subsequent analysis. The proposed solution is cost-efficient and reliable, reducing time and labor requirements in comparison with other methods of pipeline monitoring.

  4. Community-acquired pneumonia management and outcomes in the era of health information technology.

    PubMed

    Mecham, Ian D; Vines, Caroline; Dean, Nathan C

    2017-11-01

    Pneumonia continues to be a leading cause of hospitalization and mortality. Implementation of health information technology (HIT) can lead to cost savings and improved care. In this review, we examine the literature on the use of HIT in the management of community-acquired pneumonia. We also discuss barriers to adoption of technology in managing pneumonia, the reliability and quality of electronic health data in pneumonia research, how technology has assisted pneumonia diagnosis and outcomes research. The goal of using HIT is to develop and deploy generalizable, real-time, computerized clinical decision support integrated into usual pneumonia care. A friendly user interface that does not disrupt efficiency and demonstrates improved clinical outcomes should result in widespread adoption. © 2017 Asian Pacific Society of Respirology.

  5. Electrically induced spontaneous emission in open electronic system

    NASA Astrophysics Data System (ADS)

    Wang, Rulin; Zhang, Yu; Yam, Chiyung; Computation Algorithms Division (CSRC) Team; Theoretical; Computational Chemistry (HKU) Collaboration

    A quantum mechanical approach is formulated for simulating the electroluminescence process in an open electronic system. Based on nonequilibrium Green's function quantum transport equations combined with the photon-electron interaction, the method describes electrically induced spontaneous emission caused by electron-hole recombination. The accuracy and reliability of the simulation depend critically on a correct description of the electronic band structure and the electron occupancy of the system. In this work, instead of considering electron-hole recombination between discrete states as in previous work, we take continuous states into account to simulate spontaneous emission in an open electronic system, and find that the polarization of the emitted photon is closely related to its propagation direction. Numerical studies have been performed on a silicon nanowire-based P-N junction at different bias voltages.

  6. Experiments in fault tolerant software reliability

    NASA Technical Reports Server (NTRS)

    Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.

    1987-01-01

    The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.

  7. Mobile phone radiation health risk controversy: the reliability and sufficiency of science behind the safety standards.

    PubMed

    Leszczynski, Dariusz; Xu, Zhengping

    2010-01-27

    There is ongoing discussion about whether mobile phone radiation causes any health effects. The International Commission on Non-Ionizing Radiation Protection, the International Committee on Electromagnetic Safety and the World Health Organization assure that there is no proven health risk and that the present safety limits protect all mobile phone users. However, based on the available scientific evidence, the situation is not as clear. The majority of the evidence comes from in vitro laboratory studies and is of very limited use for determining health risk. Animal toxicology studies are inadequate because it is not possible to "overdose" microwave radiation, as is done with chemical agents, owing to the simultaneous induction of heating side effects. There is a lack of human volunteer studies that would demonstrate, in an unbiased way, whether the human body responds to mobile phone radiation at all. Finally, the epidemiological evidence is insufficient due to, among other factors, selection and misclassification bias and the low sensitivity of this approach in detecting health risk within the population. This indicates that the presently available scientific evidence is insufficient to prove the reliability of the current safety standards. Therefore, we recommend exercising precaution when dealing with mobile phones and, whenever possible and feasible, limiting body exposure to this radiation. Continued research on mobile phone radiation effects is needed in order to improve the basis for, and the reliability of, the safety standards.

  8. Mobile phone radiation health risk controversy: the reliability and sufficiency of science behind the safety standards

    PubMed Central

    2010-01-01

    There is ongoing discussion about whether mobile phone radiation causes any health effects. The International Commission on Non-Ionizing Radiation Protection, the International Committee on Electromagnetic Safety and the World Health Organization assure that there is no proven health risk and that the present safety limits protect all mobile phone users. However, based on the available scientific evidence, the situation is not as clear. The majority of the evidence comes from in vitro laboratory studies and is of very limited use for determining health risk. Animal toxicology studies are inadequate because it is not possible to "overdose" microwave radiation, as is done with chemical agents, owing to the simultaneous induction of heating side effects. There is a lack of human volunteer studies that would demonstrate, in an unbiased way, whether the human body responds to mobile phone radiation at all. Finally, the epidemiological evidence is insufficient due to, among other factors, selection and misclassification bias and the low sensitivity of this approach in detecting health risk within the population. This indicates that the presently available scientific evidence is insufficient to prove the reliability of the current safety standards. Therefore, we recommend exercising precaution when dealing with mobile phones and, whenever possible and feasible, limiting body exposure to this radiation. Continued research on mobile phone radiation effects is needed in order to improve the basis for, and the reliability of, the safety standards. PMID:20205835

  9. An overall strategy based on regression models to estimate relative survival and model the effects of prognostic factors in cancer survival studies.

    PubMed

    Remontet, L; Bossard, N; Belot, A; Estève, J

    2007-05-10

    Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring knowledge of the cause of death. We propose an overall strategy based on regression models to estimate relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled up to 10 years of follow-up using parametric continuous functions. Six models including cubic regression splines were considered, and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of the mortality hazard and allowed us to deal with sparse data while taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could also be obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.

  10. Upcoming Methods and Specifications of Continuous Intraocular Pressure Monitoring Systems for Glaucoma

    PubMed Central

    Molaei, Amir; Karamzadeh, Vahid; Safi, Sare; Esfandiari, Hamed; Dargahi, Javad; Khosravi, Mohammad Azam

    2018-01-01

    Glaucoma is the leading cause of irreversible blindness and vision loss in the world. Although intraocular pressure (IOP) is no longer considered the only risk factor for glaucoma, it is still the most important one. In most cases, high IOP is secondary to trabecular meshwork dysfunction. High IOP leads to compaction of the lamina cribrosa and subsequent damage to retinal ganglion cell axons. Damage to the optic nerve head is evident on funduscopy as posterior bowing of the lamina cribrosa and increased cupping. Currently, the only documented method to slow or halt the progression of this disease is to decrease the IOP; hence, accurate IOP measurement is crucial not only for diagnosis but also for management. Due to the dynamic nature and fluctuation of the IOP, a single clinical measurement is not a reliable indicator of diurnal IOP; 24-hour monitoring methods are required. Technological advances in microelectromechanical systems and microfluidics provide a promising solution for the effective measurement of IOP. This paper provides a broad overview of the upcoming technologies to be used for continuous IOP monitoring. PMID:29403593

  11. Behaviour State Analysis in Rett Syndrome: Continuous Data Reliability Measurement

    ERIC Educational Resources Information Center

    Woodyatt, Gail; Marinac, Julie; Darnell, Ross; Sigafoos, Jeff; Halle, James

    2004-01-01

    Awareness of optimal behaviour states of children with profound intellectual disability has been reported in the literature as a potentially useful tool for planning intervention within this population. Some arguments have been raised, however, which question the reliability and validity of previously published work on behaviour state analysis.…

  12. 75 FR 32209 - North San Pablo Bay Restoration and Reuse Project

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-07

    ... Napa counties are facing long-term water supply shortfalls. Surface and groundwater supplies within... water levels and water quality. Recycled water can augment local water supplies on a regional basis... reliability. Additionally, reliable water supply is needed in order to continue the restoration of tidal...

  13. Evaluation of continuous air monitor placement in a plutonium facility.

    PubMed

    Whicker, J J; Rodgers, J C; Fairchild, C I; Scripsick, R C; Lopez, R C

    1997-05-01

    Department of Energy appraisers found continuous air monitors at Department of Energy plutonium facilities alarmed less than 30% of the time when integrated room plutonium air concentrations exceeded 500 DAC-hours. Without other interventions, this alarm percentage suggests the possibility that workers could be exposed to high airborne concentrations without continuous air monitor alarms. Past research has shown that placement of continuous air monitors is a critical component in rapid and reliable detection of airborne releases. At Los Alamos National Laboratory and many other Department of Energy plutonium facilities, continuous air monitors have been primarily placed at ventilation exhaust points. The purpose of this study was to evaluate and compare the effectiveness of exhaust register placement of workplace continuous air monitors with other sampling locations. Polydisperse oil aerosols were released from multiple locations in two plutonium laboratories at Los Alamos National Laboratory. An array of laser particle counters positioned in the rooms measured time-resolved aerosol dispersion. Results showed alternative placement of air samplers generally resulted in aerosol detection that was faster, often more sensitive, and equally reliable compared with samplers at exhaust registers.

  14. Reliability analysis method of a solar array by using fault tree analysis and fuzzy reasoning Petri net

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang

    2011-12-01

    To address the impact of solar array anomalies, it is important to analyze solar array reliability. This paper establishes fault tree analysis (FTA) and fuzzy reasoning Petri net (FRPN) models of a solar array mechanical system and analyzes their reliability to find the mechanisms of solar array faults. The indices of final truth degree (FTD) and cosine matching function (CMF) are employed to evaluate the importance and influence of different faults. An improved reliability analysis method is thus developed based on the sorting of FTD and CMF. An example is analyzed using the proposed method. The results show that the harsh thermal environment and impacts caused by particles in space are the most important causes of solar array faults. Furthermore, other fault modes and the corresponding improvement methods are discussed. The results reported in this paper could be useful to spacecraft designers, particularly in redesigning the solar array and scheduling its reliability growth plan.

  15. Why Does Exposure to Arsenic from Drinking Groundwater in Asian Megadeltas Continue to be High?

    NASA Astrophysics Data System (ADS)

    van Geen, A.; Ahmed, K. M.; Ahmed, E. B.; Choudhury, I.; Mozumder, M. R. H.; Bostick, B. C.; Mailloux, B. J.; Knappett, P. S.; Schlosser, P.

    2014-12-01

    Concentrations of arsenic in groundwater pumped from a significant fraction of the millions of shallow tubewells installed, mostly privately, across S/SE Asia exceed the WHO guideline value of 10 ug/L by a factor of 10 to 100. The resulting exposure has been linked to cancers and cardio-vascular disease in adults and inhibited intellectual function in children. In Bangladesh, the most affected country, the impact of early mitigation efforts relying on water treatment has been limited by the cost and logistics of maintenance. A simpler approach based on switching human consumption to low-arsenic wells has proved to be more resilient although it remains far from sufficiently adopted. A decade ago, there was concern that low-arsenic wells might become contaminated upon use. Observations and modeling have since shown that groundwater arsenic concentrations are likely to rise only in certain hydrogeologically vulnerable areas and then only gradually. Our recently completed blanket-testing campaign of 50,000 wells in 300 villages of Bangladesh has shown that, instead, a leading cause of current exposure is that households have continued to install wells and typically have nowhere to turn for a reliable arsenic test. The same campaign has shown that another reason for continued exposure is that deeper wells that are low in arsenic and whose installation has been subsidized by the Bangladesh government are not located to maximize public access. The geographic clustering of these deep wells suggests that, all too often, their location is decided on the basis of political allegiance rather than need. Such obstacles to lowering arsenic exposure might be overcome with more widespread testing and the public posting of maps of test results also showing where deep wells have been installed. We will show that obtaining and sharing such information has been greatly facilitated by a reliable field-kit for arsenic and the increasing use of smartphones in Bangladesh.

  16. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    PubMed Central

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-01-01

    Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables, based on its maximum likelihood estimate, we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities for the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary. PMID:16686943
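    As a minimal illustration of the index studied in this record, the within-subject coefficient of variation under a one-way random-effects model can be estimated from repeated-measures data. The data and the simple pooled-variance estimator below are illustrative assumptions, not the paper's maximum likelihood procedure:

```python
import numpy as np

def within_subject_cv(data):
    """Estimate the within-subject coefficient of variation (WSCV).

    data: 2-D array, rows = subjects, columns = repeated measurements.
    Under the one-way random-effects model Y_ij = mu + b_i + e_ij,
    sigma_w^2 is estimated by the pooled within-subject variance and
    WSCV = sigma_w / mu.
    """
    data = np.asarray(data, dtype=float)
    grand_mean = data.mean()
    # Pooled within-subject variance: mean of per-subject sample variances.
    sigma_w2 = data.var(axis=1, ddof=1).mean()
    return np.sqrt(sigma_w2) / grand_mean

# Hypothetical example: 4 subjects, 3 repeated measurements each.
measurements = np.array([
    [10.1,  9.8, 10.3],
    [12.0, 11.7, 12.2],
    [ 9.5,  9.9,  9.7],
    [11.2, 11.0, 11.4],
])
print(round(within_subject_cv(measurements), 4))  # -> 0.0212
```

    A WSCV near 2% says that repeat measurements on the same subject scatter by about 2% of the mean, which is the reliability notion the paper formalizes.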

  17. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

    Physics-of-failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing those models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  18. Modeling and experimental investigation of thermal-mechanical-electric coupling dynamics in a standing wave ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Yao, Zhiyuan; He, Yigang; Dai, Shichao

    2017-09-01

    Ultrasonic motor operation relies on high-frequency vibration of a piezoelectric vibrator and interface friction between the stator and rotor/slider, which can cause the motor's temperature to rise under continuous operation and in turn affect motor parameters and performance. In this paper, an integral model is developed to study the thermal-mechanical-electric coupling dynamics in a typical standing wave ultrasonic motor. Stick-slip motion at the contact interface and the temperature dependence of the material parameters of the stator are taken into account in this model. The elastic, piezoelectric and dielectric material coefficients of the piezoelectric ceramic, as functions of temperature, are determined experimentally using a resonance method. The critical parameters in the model are identified from measured results. The resulting model can be used to evaluate the variation in output characteristics of the motor caused by the thermal-mechanical-electric coupling effects. Furthermore, the dynamic temperature rise of the motor can be accurately predicted under different input parameters using the developed model, which will contribute to improving the reliable life of a motor in long-term running.

  19. A sexually transmitted disease: History of AIDS through philately.

    PubMed

    Vatanoğlu, Emine Elif; Ataman, Ahmet Doğan

    2011-01-01

    AIDS has become the new plague; a disease that is not only physically and psychologically debilitating, but culturally and socially devastating as well. Like the plague, AIDS has caused fear, prejudice and even panic in society. Although there have been remarkable improvements in the diagnosis and treatment of the disease, AIDS continues its grim passage around the globe. After a slight downturn in the early 1990s, it returned with a vengeance. By the end of the 20th century, AIDS was reliably estimated to have caused over 20 million deaths throughout the world. At the same time, 40 million people were estimated to be HIV positive. This paper provides an overview of the history of AIDS, including its discovery and its progress in the world through philately. Philately is the study of stamps, postal history and other related items. Philately involves more than just stamp collecting; it encompasses the study of the design and educational impact of philatelic material. We present AIDS stamps produced worldwide to illustrate the history of AIDS.

  20. Current Status and Future Prospect of K-NET and KiK-net

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Nakamura, H.; Fujiwara, H.

    2014-12-01

    During the 18 years since the deployment of K-NET following the Kobe earthquake, our attention has mainly focused on rapid data collection and unfailing, reliable observation. In this presentation, we review the three generations of instruments employed by K-NET and KiK-net from these two points of view. At the beginning of the 2000s, we developed the second-generation instruments (K-NET02, K-NET02A, KiK-net06) to replace the first-generation instruments (K-NET95, SMAC-MDK) employed when the networks were constructed in the 1990s. These instruments have an automatic dial-out function. It typically takes 2-5 s to establish communication and a few seconds to send the pre-trigger data. After that, data are available typically within a 1.5 s delay. Not only waveform data but also strong motion indexes such as real-time intensity, PGA, PGV, PGD, and response spectra are sent continuously once a second. After the 2011 Tohoku earthquake, we developed the third-generation instruments (K-NET11, KiK-net11) and have replaced almost half of all stations countrywide. The main improvement in this instrument is more unfailing and reliable observation. Because we have often experienced very large ground motions (e.g. 45 records exceeding gravity), the maximum measurable range was expanded from 2000 gal to 4000 gal for the second-generation instrument, and to 8000 gal for the third. For the third-generation instrument, in case of power failure, observation (including transmission of data) works for seven days thanks to the backup battery, while for the second-generation instruments it works only for one day. By adding an oblique component to the three-component accelerometers, we can automatically distinguish shaking data from noise such as electric pulses, which may cause a false alarm in EEW.
    Implementation to guarantee the continuity of observation under severe conditions such as the Tohoku earthquake is very important, as is highly efficient observation. Owing to the drastic progress of information technologies, continuous observation has become technically and economically feasible, and some stations are experimentally equipped with a continuous communication line. Continuous observation offers very important information to help mitigate ongoing earthquake disasters.

  1. Composite Reliability of a Workplace-Based Assessment Toolbox for Postgraduate Medical Education

    ERIC Educational Resources Information Center

    Moonen-van Loon, J. M. W.; Overeem, K.; Donkers, H. H. L. M.; van der Vleuten, C. P. M.; Driessen, E. W.

    2013-01-01

    In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and…

  2. Blinded evaluation of interrater reliability of an operative competency assessment tool for direct laryngoscopy and rigid bronchoscopy.

    PubMed

    Ishman, Stacey L; Benke, James R; Johnson, Kaalan Erik; Zur, Karen B; Jacobs, Ian N; Thorne, Marc C; Brown, David J; Lin, Sandra Y; Bhatti, Nasir; Deutsch, Ellen S

    2012-10-01

    OBJECTIVES To confirm interrater reliability using blinded evaluation of a skills-assessment instrument to assess the surgical performance of resident and fellow trainees performing pediatric direct laryngoscopy and rigid bronchoscopy in simulated models. DESIGN Prospective, paired, blinded observational validation study. SUBJECTS Paired observers from multiple institutions simultaneously evaluated residents and fellows who were performing surgery in an animal laboratory or using high-fidelity manikins. The evaluators had no previous affiliation with the residents and fellows and did not know their year of training. INTERVENTIONS One- and 2-page versions of an objective structured assessment of technical skills (OSATS) instrument composed of global and task-specific surgical items were used to evaluate surgical performance. RESULTS Fifty-two evaluations were completed by 17 attending evaluators. The instrument agreement for the 2-page assessment was 71.4% when measured as a binary variable (ie, competent vs not competent) (κ = 0.38; P = .08). Evaluation as a continuous variable revealed 42.9% agreement (κ = 0.18; P = .14). The intraclass correlation was 0.53, considered substantial/good interrater reliability (69% reliable). For the 1-page instrument, agreement was 77.4% when measured as a binary variable (κ = 0.53, P = .0015). Agreement when evaluated as a continuous measure was 71.0% (κ = 0.54, P < .001). The intraclass correlation was 0.73, considered high interrater reliability (85% reliable). CONCLUSIONS The OSATS assessment instrument is an effective tool for evaluating surgical performance among trainees, with acceptable interrater reliability in a simulator setting. Reliability was good for both the 1- and 2-page OSATS checklists, and both serve as excellent tools for providing immediate formative feedback on operative competency.

  3. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
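    The arithmetic in this abstract (three redundant units at a failure probability of one in ten giving one in a thousand) and the common-cause caveat can be sketched with a simple beta-factor model. The beta-factor form and its parameter values are illustrative assumptions, not the paper's model:

```python
def redundant_failure_prob(p_unit, n, beta=0.0):
    """Mission failure probability for n redundant units under a
    simple beta-factor common-cause model: a fraction beta of each
    unit's failure probability is a shared event that disables every
    unit at once; the remaining (1 - beta) fraction fails units
    independently.
    """
    independent = ((1 - beta) * p_unit) ** n
    common_cause = beta * p_unit
    return common_cause + independent

# Ideal case from the text: three units at 0.1 each -> 1 in 1000.
print(round(redundant_failure_prob(0.1, 3), 6))            # -> 0.001
# With 5% common-cause coupling, adding units stops helping:
print(round(redundant_failure_prob(0.1, 3, beta=0.05), 6))
print(round(redundant_failure_prob(0.1, 5, beta=0.05), 6))
```

    With beta = 0.05 the failure probability is floored near beta * p_unit ≈ 0.005 no matter how many identical units are added, which illustrates why the text argues that diverse rather than identical redundancy is needed.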

  4. Municipal resilience: A paradigm shift in emergency and continuity management.

    PubMed

    Solecki, Greg; Luchia, Mike

    More than a decade of emergency and continuity management vision was instrumental in providing the unprecedented level of response to, and recovery from, the great flood of 2013. Earlier assessments, planning and validation drove the development of corporate continuity, emergency and contingency plans, along with tactical, strategic and recovery operations centres, all of which led to a reliable emergency management model that will continue to provide the backbone for municipal resilience.

  5. A study on the real-time reliability of on-board equipment of train control system

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Li, Shiwei

    2018-05-01

    Real-time reliability evaluation is conducive to establishing a condition-based maintenance system that guarantees continuous train operation. According to the inherent characteristics of the on-board equipment, this paper defines the meaning of reliability evaluation for on-board equipment and provides an evaluation index of real-time reliability. From the perspectives of methodology and practical application, the real-time reliability of the on-board equipment is discussed in detail, and a method is proposed for evaluating the real-time reliability of on-board equipment at the component level based on a Hidden Markov Model (HMM). In this method, performance degradation data are used directly to perceive the hidden state-transition process of the on-board equipment accurately, which achieves a better description of the equipment's real-time reliability.
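    A minimal sketch of the kind of HMM filtering described: the filtered probability of a hidden "healthy" state, updated as each degradation observation arrives, serves as a real-time reliability index. The two-state model and all parameter values below are invented for illustration, since the paper's actual model is not given here:

```python
import numpy as np

# Hypothetical two-state HMM (states: 0 = healthy, 1 = degraded).
A = np.array([[0.97, 0.03],    # state transition probabilities
              [0.00, 1.00]])   # degradation assumed irreversible
B = np.array([[0.8, 0.2],      # P(observation | state): obs 0 = nominal,
              [0.3, 0.7]])     #                         obs 1 = anomalous
pi = np.array([1.0, 0.0])      # equipment starts healthy

def filtered_state_probs(observations):
    """Forward (filtering) recursion: returns P(state_t | obs_1..t)
    for each t; the 'healthy' component is the reliability index."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    history = [alpha]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]   # predict, then weight by likelihood
        alpha /= alpha.sum()              # normalize to a distribution
        history.append(alpha)
    return np.array(history)

# Mostly nominal readings, with anomalies creeping in near the end.
obs = [0, 0, 0, 1, 0, 1, 1]
reliability_index = filtered_state_probs(obs)[:, 0]
print(np.round(reliability_index, 3))
```

    The index recovers after an isolated anomalous reading but falls sharply once anomalies persist, which is the behavior a condition-based maintenance trigger would watch for.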

  6. MEMS reliability: coming of age

    NASA Astrophysics Data System (ADS)

    Douglass, Michael R.

    2008-02-01

    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  7. The R and M 2000 Process and Reliability and Maintainability Management: Attitudes of Senior Level Managers in Aeronautical Systems Division

    DTIC Science & Technology

    1988-09-01

    AFIT/GLM/LSM/88S-59. The R&M 2000 Process and Reliability and Maintainability...respondents provided verbal responses to this question. Although one-half of these responses spoke favorably of R&M 2000, there were... Keywords: Attitudes, Reliability, Maintainability, R&M, R&M 2000, Aeronautical Systems Division, ASD.

  8. Markov and semi-Markov processes as a failure rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabski, Franciszek

    2016-06-08

    In this paper the reliability function is defined by a stochastic failure rate process with non-negative, right-continuous trajectories. Equations for the conditional reliability functions of an object are derived under the assumption that the failure rate is a semi-Markov process with an at most countable state space. An appropriate theorem is presented. The linear systems of equations for the corresponding Laplace transforms make it possible to find the reliability functions for the alternating, Poisson and Furry-Yule failure rate processes.
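    For the alternating case mentioned, the reliability function R(t) = E[exp(-∫₀ᵗ Λ(s) ds)] can be estimated by Monte Carlo simulation of the failure-rate process, as a cross-check on the analytic Laplace-transform route. The sketch and its parameter values are arbitrary assumptions for illustration, not taken from the paper:

```python
import math
import random

def alternating_reliability(t, lam=(0.5, 2.0), mu=(1.0, 1.0),
                            n_paths=20000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral_0^t L(s) ds)],
    where the failure-rate process L alternates between lam[0] and
    lam[1], dwelling in state i for an Exp(mu[i])-distributed time.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        clock, state, integral = 0.0, 0, 0.0
        while clock < t:
            sojourn = rng.expovariate(mu[state])
            dwell = min(sojourn, t - clock)   # truncate at the horizon t
            integral += lam[state] * dwell    # accumulate the hazard integral
            clock += dwell
            state = 1 - state                 # alternate between the two rates
        total += math.exp(-integral)
    return total / n_paths

print(round(alternating_reliability(1.0), 3))
```

    Since Λ(s) stays between 0.5 and 2.0 here, every sample path gives exp(-2t) ≤ exp(-∫Λ) ≤ exp(-0.5t), so the estimate is bracketed by the constant-rate extremes, a useful sanity check on the simulation.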

  9. History of Reliability and Quality Assurance at Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Childers, Frank M.

    2004-01-01

    This Kennedy Historical Document (KHD) provides a unique historical perspective of the organizational and functional responsibilities for the manned and un-manned programs at Kennedy Space Center, Florida. As systems become more complex and hazardous, the attention to detailed planning and execution continues to be a challenge. The need for a robust reliability and quality assurance program will always be a necessity to ensure mission success. As new space missions are defined and technology allows for continued access to space, these programs cannot be compromised. The organizational structure that has provided the reliability and quality assurance functions for both the manned and unmanned programs has seen many changes since the first group came to Florida in the 1950's. The roles of government and contractor personnel have changed with each program and organizational alignment has changed based on that responsibility. The organizational alignment of the personnel performing these functions must ensure independent assessment of the processes.

  10. Development of software to improve AC power quality on large spacecraft

    NASA Technical Reports Server (NTRS)

    Kraft, L. Alan

    1991-01-01

    To ensure the reliability of a 20 kHz alternating current (AC) power system on spacecraft, it is essential to analyze its behavior under many adverse operating conditions. Some of these conditions include overloads, short circuits, switching surges, and harmonic distortions. Harmonic distortion can become a serious problem. It can cause malfunctions in equipment that the power system is supplying, and, during distortions such as voltage resonance, it can cause equipment and insulation failures due to extreme peak voltages. To address the harmonic distortion issue, work was begun under the 1990 NASA-ASEE Summer Faculty Fellowship Program. Software originally developed by EPRI, called HARMFLO, a power flow program capable of analyzing harmonic conditions on three-phase, balanced, 60 Hz AC power systems, was modified to analyze single-phase, 20 kHz AC power systems. Since almost all of the equipment used on spacecraft power systems is electrically different from equipment used on terrestrial power systems, it was also necessary to develop mathematical models for the equipment to be used on the spacecraft. The modeling was also started under the same fellowship work period. Details of the modifications and models completed during the 1990 NASA-ASEE Summer Faculty Fellowship Program can be found in a project report. As a continuation of the work to develop a complete package necessary for the full analysis of spacecraft AC power system behavior, development work has continued through NASA Grant NAG3-1254. This report details the work covered by the above-mentioned grant.

  11. Development and Reliability Testing of a Fast-Food Restaurant Observation Form.

    PubMed

    Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J

    2015-01-01

    To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. The study used observational data collection. Restaurants were in the Chicago Metropolitan Statistical Area. A total of 131 chain fast-food restaurant outlets were included. Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information, .75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment.
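Cohen's κ, the agreement statistic used in studies like this one, corrects the observed agreement between two raters for the agreement expected by chance from each rater's marginal category frequencies. A minimal pure-Python sketch; the ratings below are made up for illustration, not study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same categorical items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the chance agreement implied by each rater's marginals."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] / n * freq_b[c] / n for c in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Two raters coding 8 hypothetical menu items as available (1) / not (0):
# observed agreement 7/8 = .875, chance agreement .5, so kappa = .75.
kappa = cohens_kappa([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0, 1, 0])
```

Note why κ can be well below the raw agreement: two raters who both mark nearly everything "available" agree often by chance alone, and κ discounts exactly that.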

  12. Development and reliability testing of a food store observation form.

    PubMed

    Rimkus, Leah; Powell, Lisa M; Zenk, Shannon N; Han, Euna; Ohri-Vachaspati, Punam; Pugach, Oksana; Barker, Dianne C; Resnick, Elissa A; Quinn, Christopher M; Myllyluoma, Jaana; Chaloupka, Frank J

    2013-01-01

    To develop a reliable food store observational data collection instrument to be used for measuring product availability, pricing, and promotion. Observational data collection. A total of 120 food stores (26 supermarkets, 34 grocery stores, 54 gas/convenience stores, and 6 mass merchandise stores) in the Chicago metropolitan statistical area. Inter-rater reliability for product availability, pricing, and promotion measures on a food store observational data collection instrument. Cohen's kappa coefficient and proportion of overall agreement for dichotomous variables and intra-class correlation coefficient for continuous variables. Inter-rater reliability, as measured by average kappa coefficient, was 0.84 for food and beverage product availability measures, 0.80 for interior store characteristics, and 0.70 for exterior store characteristics. For continuous measures, average intra-class correlation coefficient was 0.82 for product pricing measures; 0.90 for counts of fresh, frozen, and canned fruit and vegetable options; and 0.85 for counts of advertisements on the store exterior and property. The vast majority of measures demonstrated substantial or almost perfect agreement. Although some items may require revision, results suggest that the instrument may be used to reliably measure the food store environment.

  13. Competing risk models in reliability systems, an exponential distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, I.

    2018-03-01

    The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lifetimes of many items and has a simple statistical form. Its defining characteristic is a constant hazard rate. The exponential distribution is the simplest member of the Weibull family (a Weibull distribution with shape parameter equal to one). In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the associated analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point, interval, hazard function, and reliability estimates. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
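For independent exponential causes with rates λⱼ, the quantities named in the abstract have closed forms: the net probability of failure by time t from cause j alone is 1 − e^(−λⱼt), and the crude probability (cause j strikes first, by time t, with all causes present) is (λⱼ/Λ)(1 − e^(−Λt)), where Λ = Σλⱼ. A small sketch with hypothetical rates:

```python
import math

def net_probability(lam_j, t):
    """Net probability: failure by time t if only cause j were present."""
    return 1.0 - math.exp(-lam_j * t)

def crude_probability(lam_j, lams, t):
    """Crude probability: cause j fails first, by time t, in the presence
    of all independent exponential causes with rates `lams`."""
    total = sum(lams)
    return (lam_j / total) * (1.0 - math.exp(-total * t))

lams = [0.1, 0.3]  # hypothetical hazard rates for two competing causes
t = 5.0
p_any = 1.0 - math.exp(-sum(lams) * t)  # failure from any cause by t
p_crude = [crude_probability(lam, lams, t) for lam in lams]
# The crude probabilities partition p_any, in the ratio of the rates.
```

This frequentist bookkeeping is only the deterministic core; the paper's contribution is placing a non-informative prior on the λⱼ and deriving posterior estimates.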

  14. LANL continuity of operations plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senutovitch, Diane M

    2010-12-22

    The Los Alamos National Laboratory (LANL) is a premier national security research institution, delivering scientific and engineering solutions for the nation's most crucial and complex problems. Our primary responsibility is to ensure the safety, security, and reliability of the nation's nuclear stockpile. LANL emphasizes worker safety, effective operational safeguards and security, and environmental stewardship, while outstanding science remains the foundation of work at the Laboratory. In addition to supporting the Laboratory's core national security mission, our work advances bioscience, chemistry, computer science, earth and environmental sciences, materials science, and physics disciplines. To accomplish LANL's mission, we must ensure that the Laboratory's EFs continue to be performed during a continuity event, including localized acts of nature, accidents, technological or attack-related emergencies, and pandemic or epidemic events. The LANL Continuity of Operations (COOP) Plan documents the overall LANL COOP Program and provides the operational framework to implement continuity policies, requirements, and responsibilities at LANL, as required by DOE O 150.1, Continuity Programs, May 2008. LANL must maintain its ability to perform the nation's PMEFs, which are: (1) maintain the safety and security of nuclear materials in the DOE Complex at fixed sites and in transit; (2) respond to a nuclear incident, both domestically and internationally, caused by terrorist activity, natural disaster, or accident, including mobilizing the resources to support these efforts; and (3) support the nation's energy infrastructure. This plan supports Continuity of Operations for Los Alamos National Laboratory (LANL). It issues LANL policy as directed by DOE O 150.1, Continuity Programs, and provides direction for the orderly continuation of LANL EFs for 30 days of closure, or 60 days for a pandemic/epidemic event. Initiation of COOP operations may be required to support an all-hazards event, including a national security emergency, major fire, catastrophic natural disaster, man-made disaster, terrorism event, or technological disaster that renders LANL buildings, infrastructure, or Technical Areas unsafe, temporarily unusable, or inaccessible.

  15. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance-oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error, the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves, one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production-quality, simulation-based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  16. Non-invasive continuous blood pressure monitoring of tachycardic episodes during interventional electrophysiology.

    PubMed

    Maggi, Roberto; Viscardi, Valentina; Furukawa, Toshiyuki; Brignole, Michele

    2010-11-01

    We sought to evaluate the feasibility of continuous non-invasive blood pressure monitoring during interventional electrophysiology procedures. We evaluated continuous non-invasive finger blood pressure (BP) monitoring by means of the Nexfin device in 22 patients (mean age 70 ± 24 years) undergoing interventional electrophysiology procedures, in critical situations of hypotension caused by tachyarrhythmias or by intermittent incremental ventricular temporary pacing up to the maximum tolerated systolic BP fall (mean 61 ± 14 mmHg per patient at a rate of 195 ± 37 bpm). In all patients, Nexfin was able to detect the changes in BP immediately, at the onset of tachyarrhythmia, and recorded reliable waveforms. The quality of the signal was arbitrarily classified as excellent in 11 cases, good in 10 cases, and sufficient in 1 case. In basal conditions, calibrations of the signal occurred every 49.2 ± 24.3 s and accounted for 4% of total monitoring time; during tachyarrhythmias their frequency increased to one every 12.7 s and accounted for 19% of total recording duration. A linear correlation for a range of BP values from 41 to 190 mmHg was found between non-invasive and intra-arterial BP among a total of 1055 beats from three patients who underwent simultaneous recordings with both methods (correlation coefficient 0.81, P < 0.0001). In conclusion, continuous non-invasive BP monitoring is feasible in the clinical practice of an interventional electrophysiology laboratory without the need for an intra-arterial BP line.
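The linear correlation reported between the two BP methods is the Pearson coefficient. A minimal pure-Python sketch; the paired readings below are invented for illustration, not the study's beat data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative paired systolic readings (mmHg), one per beat:
noninvasive = [62, 80, 95, 110, 128, 150, 171]
intraarterial = [58, 84, 92, 115, 125, 155, 168]
r = pearson_r(noninvasive, intraarterial)
```

A caveat the study's design implies: correlation alone does not establish agreement between two measurement methods (a constant offset would leave r unchanged), which is why the calibration and repeatability figures are reported alongside it.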

  17. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
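The redundancy arithmetic behind this trade-off can be sketched numerically: with k independent, technically diverse copies, the loss probability falls to pᵏ, while common-cause failures put a floor under it. The beta-factor form below is a standard common-cause approximation, used here as an assumption rather than the paper's method, and all numbers are illustrative:

```python
def p_loss_redundant(p_fail, k):
    """Probability that all k independent, identical redundant systems fail."""
    return p_fail ** k

def p_loss_common_cause(p_fail, k, beta):
    """Simple beta-factor sketch: a fraction `beta` of each system's failure
    probability is a shared common cause that defeats all copies at once;
    the remaining (1 - beta) fraction fails independently per copy."""
    independent = ((1 - beta) * p_fail) ** k
    return beta * p_fail + independent

p3 = p_loss_redundant(0.01, 3)             # 1e-6 under true independence
p3_cc = p_loss_common_cause(0.01, 3, 0.1)  # common cause dominates (~1e-3)
```

Even a modest common-cause fraction erases most of the benefit of the third copy, which is why the abstract notes that spares "may be defeated by common cause failures" and favors technically diverse systems despite their cost.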

  18. A Study of Reasons for Participation in Continuing Professional Education in the U.S. Nuclear Power Industry

    ERIC Educational Resources Information Center

    McCamey, Randy B.

    2003-01-01

    The need for workers in the U.S. nuclear power industry to continually update their knowledge, skills, and abilities is critical to the safe and reliable operation of the country's nuclear power facilities. To improve their skills, knowledge, and abilities, many professionals in the nuclear power industry participate in continuing professional…

  19. Development of KSC program for investigating and generating field failure rates. Reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results are also included.

  20. Early Validity and Reliability Data for Two Instruments Assessing the Predispositions People Have toward Technology Use: Continued Integration of Quantitative and Qualitative Methods.

    ERIC Educational Resources Information Center

    Scherer, Marcia J.; McKee, Barbara G.

    Validity and reliability data are presented for two instruments for assessing the predispositions that people have toward the use of assistive and educational technologies. The two instruments, the Assistive Technology Device Predisposition Assessment (ATDPA) and the Educational Technology Predisposition Assessment (ETPA), are self-report…

  1. Continuous analysis of nitrogen dioxide in gas streams of plants

    NASA Technical Reports Server (NTRS)

    Durkin, W. T.; Kispert, R. C.

    1969-01-01

    An analyzer and sampling system continuously monitors nitrogen dioxide concentrations in the feed and tail gas streams of a nitric acid recovery facility. The system, using a direct colorimetric approach, makes use of readily available equipment and is flexible and reliable in operation.

  2. Continual Response Measurement: Design and Validation.

    ERIC Educational Resources Information Center

    Baggaley, Jon

    1987-01-01

    Discusses reliability and validity of continual response measurement (CRM), a computer-based measurement technique, and its use in social science research. Highlights include the importance of criterion-referencing the data, guidelines for designing studies using CRM, examples typifying their deductive and inductive functions, and a discussion of…

  3. Sensing system development for HOV/HOT (high occupancy vehicle) lane monitoring.

    DOT National Transportation Integrated Search

    2011-02-01

    With continued interest in the efficient use of roadways the ability to monitor the use of HOV/HOT lanes is essential for management, planning and operation. A system to reliably monitor these lanes on a continuous basis and provide usage statistics ...

  4. Sensing system development for HOV/HOT (high occupancy vehicle) lane monitoring.

    DOT National Transportation Integrated Search

    2011-02-01

    With continued interest in the efficient use of roadways the ability to monitor the use of HOV/HOT lanes is essential for management, planning and operation. A system to reliably monitor these lanes on a continuous basis and provi...

  5. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... completion of the measures specified in the approved plan evaluating the performance and adequacy of the SCS.... The reliability study shall include a comprehensive analysis of the system's operation during one or... (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Supplementary Control System...

  6. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... completion of the measures specified in the approved plan evaluating the performance and adequacy of the SCS.... The reliability study shall include a comprehensive analysis of the system's operation during one or... (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Supplementary Control System...

  7. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... completion of the measures specified in the approved plan evaluating the performance and adequacy of the SCS.... The reliability study shall include a comprehensive analysis of the system's operation during one or... (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Supplementary Control System...

  8. 40 CFR 57.405 - Formulation, approval, and implementation of requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... completion of the measures specified in the approved plan evaluating the performance and adequacy of the SCS.... The reliability study shall include a comprehensive analysis of the system's operation during one or... (CONTINUED) AIR PROGRAMS (CONTINUED) PRIMARY NONFERROUS SMELTER ORDERS Supplementary Control System...

  9. Ceramic bearings with bilayer coating in cementless total hip arthroplasty. A safe solution. A retrospective study of one hundred and twenty six cases with more than ten years' follow-up.

    PubMed

    Ferreira, André; Aslanian, Thierry; Dalin, Thibaud; Picaud, Jean

    2017-05-01

    Cementless total hip arthroplasty (THA) using ceramic-ceramic bearings has provided good clinical results. To ensure longevity, good-quality fixation of the implants is mandatory. Different surface treatments have been used, with inconsistent results. We hypothesized that a "bilayer coating" applied to both THA components using validated technology would provide long-lasting and reliable bone fixation. We studied the survival and bone integration of a continuous, single-surgeon, retrospective series of 126 THA cases (116 patients) with an average follow-up of 12.2 years (minimum 10 years). The THA consisted of cementless implants with a bilayer coating of titanium and hydroxyapatite and used a ceramic-ceramic bearing. With surgical revision for any cause (except infection) as the end point, THA survival was 95.1% at 13 years. Stem (98.8%) and cup (98.6%) survival was similar at 13 years. Bone integration was confirmed in 100% of implants (Engh-Massin score of 17.42 and ARA score of 5.94). There were no instances of loosening. Revisions were performed because of instability (1.6%), prosthetic impingement, or material-related issues. A bilayer titanium and hydroxyapatite coating provides strong, fast, reliable osseointegration, without deterioration at the interface or release of damaging particles. The good clinical outcomes expected of ceramic bearings were achieved, as was equally reliable stem and cup fixation.

  10. Reliable intraocular pressure measurement using automated radio-wave telemetry.

    PubMed

    Paschalis, Eleftherios I; Cade, Fabiano; Melki, Samir; Pasquale, Louis R; Dohlman, Claes H; Ciolino, Joseph B

    2014-01-01

    To present an autonomous intraocular pressure (IOP) measurement technique using a wireless implantable transducer (WIT) and a motion sensor. The WIT optical aid was implanted within the ciliary sulcus of a normotensive rabbit eye after extracapsular clear lens extraction. An autonomous wireless data system (AWDS), comprising a WIT and an external antenna aided by a motion sensor, provided continuous IOP readings. The sensitivity of the technique was determined by the ability to detect IOP changes resulting from the administration of latanoprost 0.005% or dorzolamide 2%, while the reliability was determined by the agreement between baseline and vehicle (saline) IOP. On average, 12 diurnal and 205 nocturnal IOP measurements were performed with latanoprost, and 26 diurnal and 205 nocturnal measurements with dorzolamide. No difference was found between mean baseline IOP (13.08±2.2 mmHg) and mean vehicle IOP (13.27±2.1 mmHg) (P=0.45), suggesting good measurement reliability. Both antiglaucoma medications caused significant IOP reduction compared to baseline; latanoprost reduced mean IOP by 10% (1.3±3.54 mmHg; P<0.001), and dorzolamide by 5% (0.62±2.22 mmHg; P<0.001). Use of latanoprost resulted in an overall twofold higher IOP reduction compared to dorzolamide (P<0.001). Repeatability was ±1.8 mmHg, assessed by the variability of consecutive IOP measurements performed within a short period of time (≤1 minute), during which the IOP is not expected to change. IOP measurements in conscious rabbits obtained without the need for human interaction using the AWDS are feasible and provide reproducible results.

  11. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols

    USDA-ARS?s Scientific Manuscript database

    Mechanography during the vertical jump may enhance screening and determining mechanistic causes for functional deficits that reduce physical performance. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the tes...

  12. An Architectural Concept for Intrusion Tolerance in Air Traffic Networks

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey M.; Miner, Paul S.

    2003-01-01

    The goal of an intrusion tolerant network is to continue to provide predictable and reliable communication in the presence of a limited number of compromised network components. The behavior of a compromised network component ranges from a node that no longer responds to a node that is under the control of a malicious entity that is actively trying to cause other nodes to fail. Most current data communication networks do not include support for tolerating unconstrained misbehavior of components in the network. However, the fault tolerance community has developed protocols that provide both predictable and reliable communication in the presence of the worst possible behavior of a limited number of nodes in the system. One may view a malicious entity in a communication network as a node that has failed and is behaving in an arbitrary manner. NASA/Langley Research Center has developed one such fault-tolerant computing platform called SPIDER (Scalable Processor-Independent Design for Electromagnetic Resilience). The protocols and interconnection mechanisms of SPIDER may be adapted to large-scale, distributed communication networks such as would be required for future Air Traffic Management systems. The predictability and reliability guarantees provided by the SPIDER protocols have been formally verified. This analysis can be readily adapted to similar network structures.

  13. Obtaining reliable phase-gradient delays from otoacoustic emission data.

    PubMed

    Shera, Christopher A; Bergevin, Christopher

    2012-08-01

    Reflection-source otoacoustic emission phase-gradient delays are widely used to obtain noninvasive estimates of cochlear function and properties, such as the sharpness of mechanical tuning and its variation along the length of the cochlear partition. Although different data-processing strategies are known to yield different delay estimates and trends, their relative reliability has not been established. This paper uses in silico experiments to evaluate six methods for extracting delay trends from reflection-source otoacoustic emissions (OAEs). The six methods include both previously published procedures (e.g., phase smoothing, energy-weighting, data exclusion based on signal-to-noise ratio) and novel strategies (e.g., peak-picking, all-pass factorization). Although some of the methods perform well (e.g., peak-picking), others introduce substantial bias (e.g., phase smoothing) and are not recommended. In addition, since standing waves caused by multiple internal reflection can complicate the interpretation and compromise the application of OAE delays, this paper develops and evaluates two promising signal-processing strategies, the first based on time-frequency filtering using the continuous wavelet transform and the second on cepstral analysis, for separating the direct emission from its subsequent reflections. Altogether, the results help to resolve previous disagreements about the frequency dependence of human OAE delays and the sharpness of cochlear tuning while providing useful analysis methods for future studies.

  14. Reliability of vibration energy harvesters of metal-based PZT thin films

    NASA Astrophysics Data System (ADS)

    Tsujiura, Y.; Suwa, E.; Kurokawa, F.; Hida, H.; Kanno, I.

    2014-11-01

    This paper describes the reliability of piezoelectric vibration energy harvesters (PVEHs) of Pb(Zr,Ti)O3 (PZT) thin films on metal foil cantilevers. The PZT thin films were directly deposited onto the Pt-coated stainless-steel (SS430) cantilevers by rf-magnetron sputtering, and we observed their aging behavior of power generation characteristics under the resonance vibration condition for three days. During the aging measurement, there was neither fatigue failure nor degradation of dielectric properties in our PVEHs (length: 13 mm, width: 5.0 mm, thickness: 104 μm) even under a large excitation acceleration of 25 m/s2. However, we observed clear degradation of the generated electric voltage depending on excitation acceleration. The decay rate of the output voltage was 5% from the start of the measurement at 25 m/s2. The transverse piezoelectric coefficient (e31,f) also degraded with almost the same decay rate as that of the output voltage; this indicates that the degradation of output voltage was mainly caused by that of piezoelectric properties. From the decay curves, the output powers are estimated to degrade 7% at 15 m/s2 and 36% at 25 m/s2 if we continue to excite the PVEHs for 30 years.

  15. The use of contraception for patients after bariatric surgery.

    PubMed

    Ostrowska, Lucyna; Lech, Medard; Stefańska, Ewa; Jastrzębska-Mierzyńska, Marta; Smarkusz, Joanna

    2016-01-01

    Obesity in women of reproductive age is a serious concern regarding reproductive health. In many cases of infertility in obese women, reduction of body weight may lead to spontaneous pregnancy without the need for more specific methods of treatment. Bariatric surgery is safe and is the most effective method of body weight reduction in obese and very obese patients. In practice there are two bariatric techniques: gastric banding, which leads to weight loss through intake restriction, and gastric bypass, which leads to weight loss through food malabsorption. Gastric bypass surgery (the more frequently performed procedure) in most cases leads to changes in eating habits and may result in vomiting, diarrhea, and rapid body mass reduction. There are reliable data describing a continuous increase in the number of women who are trying to conceive, or are already pregnant, following bariatric surgery. Most medical specialists advise women to avoid pregnancy within 12-18 months after bariatric surgery. This allows time to recover sufficiently from the decreased absorption of nutrients caused by the surgery. During this period there is a need for reliable contraception. As there is a risk of malabsorption of hormones taken orally, combined and progestogen-only pills are contraindicated and are displaced by non-oral hormonal contraception or non-hormonal methods, including intrauterine devices and condoms.

  16. Time-frequency analysis of phonocardiogram signals using wavelet transform: a comparative study.

    PubMed

    Ergen, Burhan; Tatar, Yetkin; Gulcur, Halil Ozcan

    2012-01-01

    Analysis of phonocardiogram (PCG) signals provides a non-invasive means to determine abnormalities caused by cardiovascular system pathology. In general, time-frequency representation (TFR) methods are used to study the PCG signal because it is a non-stationary bio-signal. The continuous wavelet transform (CWT) is especially suitable for the analysis of non-stationary signals and for obtaining the TFR, due to its high resolution in both time and frequency, and has recently become a favourite tool. It decomposes a signal in terms of elementary contributions called wavelets, which are shifted and dilated copies of a fixed mother wavelet function, and yields a joint TFR. Although the basic characteristics of the wavelets are similar, each type of wavelet produces a different TFR. In this study, eight of the best-known real-valued wavelets are examined on typical PCG signals indicating heart abnormalities in order to determine the best wavelet for obtaining a reliable TFR. For this purpose, the wavelet energy and frequency spectrum estimates based on the CWT, together with the spectra of the chosen wavelets, were compared with the energy distribution and the autoregressive frequency spectra in order to determine the most suitable wavelet. The results show that the Morlet wavelet is the most reliable wavelet for the time-frequency analysis of PCG signals.

  17. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  18. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  19. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  20. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  1. 49 CFR 192.713 - Transmission lines: Permanent field repair of imperfections and damages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (Continued) PIPELINE AND HAZARDOUS MATERIALS SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) PIPELINE SAFETY TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS...; or (2) Repaired by a method that reliable engineering tests and analyses show can permanently restore...

  2. An exploration of the factor structure and development of potentially useful subscales of etiological beliefs about schizophrenia in a general population sample.

    PubMed

    Goulding, Sandra M; Broussard, Beth; Demir, Berivan; Compton, Michael T

    2009-11-01

    Given that accessing care, treatment engagement, and course and outcomes among people with schizophrenia may be influenced by beliefs about causes in the larger community, causal beliefs about schizophrenia have been studied in numerous communities around the world. In particular, the 30-item list of etiological attributions developed by Angermeyer and colleagues has been used to describe causal beliefs in patients, family members, and lay community members within such communities. The current study, the first examination of the latent or factorial structure of these 30 causal beliefs, seeks to provide informative subscales that may enhance reliability and validity of groupings of causes for future analyses involving community members. Data were gathered from six separate surveys involving three distinct groups of individuals from the same community within the southeastern United States: lay community members, relatives of individuals with schizophrenia-spectrum disorders, and police officers at the start of a 1-week mental health training program. Exploratory factor analysis in the overall sample (n=577) revealed four factors that were used to define four subscales, termed: personal/family/social stressors (14 items), inconsistent with modern conceptions of risk (8 items), external/environmental insults to the brain (6 items), and consistent with modern biological conceptions (2 items). Cronbach's internal consistency reliability coefficients for these subscales were 0.91, 0.83, 0.71, and 0.65, respectively. These findings suggest that subscales could be derived to provide continuous measures for assessing causal beliefs in order to study how this concept relates to attitudes toward schizophrenia, the people affected by the disorder, and treatments that are recommended by mental health professionals. Replication within similar and dissimilar groups is warranted.
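The internal-consistency coefficients reported above (0.91, 0.83, 0.71, 0.65) are Cronbach's alpha values. A minimal sketch of the statistic, computed here on made-up ratings rather than the study's survey data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)         # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of subscale totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from 6 respondents on a 4-item subscale
ratings = [[4, 5, 4, 4],
           [2, 2, 3, 2],
           [5, 5, 5, 4],
           [3, 3, 2, 3],
           [4, 4, 4, 5],
           [1, 2, 1, 2]]
print(round(cronbach_alpha(ratings), 2))          # 0.96
```

Alpha rises when items co-vary strongly relative to their individual variances, which is why the 2-item subscale above plausibly shows the lowest value.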

  3. Limiting excessive postoperative blood transfusion after cardiac procedures. A review.

    PubMed Central

    Ferraris, V A; Ferraris, S P

    1995-01-01

    Analysis of blood product use after cardiac operations reveals that a few patients (< or = 20%) consume the majority of blood products (> 80%). The risk factors that predispose a minority of patients to excessive blood use include patient-related factors, transfusion practices, drug-related causes, and procedure-related factors. Multivariate studies suggest that patient age and red blood cell volume are independent patient-related variables that predict excessive blood product transfusion after cardiac procedures. Other factors include preoperative aspirin ingestion, type of operation, over- or underutilization of heparin during cardiopulmonary bypass, failure to correct hypothermia after cardiopulmonary bypass, and physician overtransfusion. A survey of the currently available blood conservation techniques reveals 5 that stand out as reliable methods: 1) high-dose aprotinin therapy, 2) preoperative erythropoietin therapy when time permits adequate dosage before operation, 3) hemodilution by harvest of whole blood immediately before cardiopulmonary bypass, 4) autologous predonation of blood, and 5) salvage of oxygenator blood after cardiopulmonary bypass. Other methods, such as the use of epsilon-aminocaproic acid or desmopressin, cell saving devices, reinfusion of shed mediastinal blood, and hemofiltration have been reported to be less reliable and may even be harmful in some high-risk patients. 
Consideration of the available data allows formulation of a 4-pronged plan for limiting excessive blood transfusion after surgery: 1) recognize the causes of excessive transfusion, including the importance of red blood cell volume, type of procedure being performed, preoperative aspirin ingestion, etc.; 2) establish a quality management program, including a survey of transfusion practices that emphasizes physician education and availability of real-time laboratory testing to guide transfusion therapy; 3) adopt a multimodal approach using institution-proven techniques; and 4) continually reassess blood product use and analyze the cost-benefits of blood conservation interventions. PMID:7580359

  4. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model.

    PubMed

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-02-08

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences.
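The threshold-based stage of the multiphase model (free fall, then impact, then rest) can be sketched on an accelerometer-magnitude stream as below. The thresholds and window lengths are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def detect_fall(acc_mag, fs, free_fall_g=0.5, impact_g=2.5, rest_window_s=1.0):
    """Threshold-based sketch of the multiphase fall model: a free-fall dip,
    then an impact spike, then a rest phase settling near 1 g.
    acc_mag: acceleration magnitude in g; fs: sampling rate in Hz."""
    acc_mag = np.asarray(acc_mag, dtype=float)
    free_fall = np.flatnonzero(acc_mag < free_fall_g)        # phase 1: free fall
    if free_fall.size == 0:
        return False
    impact = np.flatnonzero(acc_mag[free_fall[0]:] > impact_g)  # phase 2: impact
    if impact.size == 0:
        return False
    start = free_fall[0] + impact[0] + int(0.5 * fs)         # skip impact transient
    rest = acc_mag[start:start + int(rest_window_s * fs)]    # phase 3: rest
    return bool(rest.size > 0 and abs(rest.mean() - 1.0) < 0.3
                and rest.std() < 0.1)

# Synthetic trace: standing (1 g), free-fall dip, impact spike, lying still
fall = np.concatenate([np.ones(50), np.full(10, 0.2),
                       np.full(3, 3.0), np.ones(100)])
print(detect_fall(fall, fs=50))   # True
```

In the paper's hierarchy, events flagged by a stage like this would then pass to the knowledge-based classifier to resolve ambiguity against daily activities.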

  5. Novel Hierarchical Fall Detection Algorithm Using a Multiphase Fall Model

    PubMed Central

    Hsieh, Chia-Yeh; Liu, Kai-Chun; Huang, Chih-Ning; Chu, Woei-Chyn; Chan, Chia-Tai

    2017-01-01

    Falls are the primary cause of accidents for the elderly in the living environment. Reducing hazards in the living environment and performing exercises for training balance and muscles are the common strategies for fall prevention. However, falls cannot be avoided completely; fall detection provides an alarm that can decrease injuries or death caused by the lack of rescue. The automatic fall detection system has opportunities to provide real-time emergency alarms for improving the safety and quality of home healthcare services. Two common technical challenges are also tackled in order to provide a reliable fall detection algorithm, including variability and ambiguity. We propose a novel hierarchical fall detection algorithm involving threshold-based and knowledge-based approaches to detect a fall event. The threshold-based approach efficiently supports the detection and identification of fall events from continuous sensor data. A multiphase fall model is utilized, including free fall, impact, and rest phases for the knowledge-based approach, which identifies fall events and has the potential to deal with the aforementioned technical challenges of a fall detection system. Seven kinds of falls and seven types of daily activities arranged in an experiment are used to explore the performance of the proposed fall detection algorithm. The overall performances of the sensitivity, specificity, precision, and accuracy using a knowledge-based algorithm are 99.79%, 98.74%, 99.05% and 99.33%, respectively. The results show that the proposed novel hierarchical fall detection algorithm can cope with the variability and ambiguity of the technical challenges and fulfill the reliability, adaptability, and flexibility requirements of an automatic fall detection system with respect to the individual differences. PMID:28208694

  6. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

    Advances in systems biology have produced a large number of sophisticated mathematical models for describing the dynamic properties of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data available for mathematical modelling, and the number of unknown parameters in a model may be larger than the number of observations. This imbalance between experimental data and unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of the system dynamics as well as the first and second order derivatives of those functions. The expanded dataset is the basis for inferring unknown model parameters using various continuous optimization criteria, including the error of simulation only, the error of both simulation and the first derivative, or the error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results for the ERK kinase activation module show that the continuous absolute-error criteria using both the function and higher order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies, for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness properties of these three models to examine the reliability of the estimates. Simulation results show that models with parameters estimated using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
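The spline-expansion step can be sketched as follows, assuming SciPy is available. The toy system dx/dt = -k·x and the least-squares criterion stand in for the paper's regulatory-network models:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sparse "measurements" of a dynamic variable x(t); here synthetic: x = exp(-t),
# which satisfies dx/dt = -k*x with true k = 1
t_obs = np.linspace(0.0, 2.0, 6)
x_obs = np.exp(-t_obs)

spline = CubicSpline(t_obs, x_obs)
t_dense = np.linspace(0.0, 2.0, 101)

x = spline(t_dense)        # continuous function of the system dynamics
dx = spline(t_dense, 1)    # first derivative of the spline
d2x = spline(t_dense, 2)   # second derivative (used by the higher-order criteria)

# Continuous criterion: choose k so that dx + k*x ≈ 0 on the dense grid
k = -np.sum(dx * x) / np.sum(x * x)   # closed-form least-squares estimate
print(round(k, 2))                    # ≈ 1.0 for this synthetic system
```

The dense grid of function values and derivatives is the "expanded dataset" on which the continuous fitness functions are evaluated; for nonlinear models the closed-form step would be replaced by a numerical optimizer.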

  7. Handbook of Reliability Prediction Procedures for Mechanical Equipment

    DTIC Science & Technology

    1992-05-01

    ...misalignment, a bent shaft, a rotating part rubbing on a stationary part, a rotor out of balance causing vibration, excessive thrust caused by mechanical failure of other parts, excessive temperature caused by... armature windings, which are placed around the rotor, are connected in series so that all current that passes through the field windings also passes through...

  8. Effect of post-spawning broodfish diet with high lipid content and n-3 fatty acids on reproductive performance of channel catfish

    USDA-ARS?s Scientific Manuscript database

    Channel x blue hybrid catfish are exclusively produced by hormone-induced spawning protocols and this process has proved to be a reliable method to mass produce hybrid catfish in hatcheries. Strip spawning of channel catfish needs a continuous and reliable supply of mature (gravid) fish during the...

  9. Inter-rater reliability of select physical examination procedures in patients with neck pain.

    PubMed

    Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A

    2014-07-01

    This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen Kappa and weighted Kappa for ordinal data while continuous level data were calculated using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 Kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9° while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in those patients with NP performed 24-48 h after the initial examination. There was considerably less reliability when compared to previous studies, thus clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
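Cohen's kappa, the chance-corrected agreement statistic used here for the categorical examination findings, can be sketched in NumPy. The ratings below are hypothetical, not the study's data:

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical ratings of the same subjects:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)                          # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)    # agreement expected by chance
              for c in cats)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical binary findings (e.g. test positive/negative) from two examiners
rater1 = [0, 0, 1, 1]
rater2 = [0, 1, 1, 1]
print(cohen_kappa(rater1, rater2))   # 0.5
```

For the continuous measures (range of motion), the paper instead reports ICC(2,1), which corrects agreement for both rater and subject variance components.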

  10. The Reliability of Psychiatric Diagnosis Revisited

    PubMed Central

    Rankin, Eric; France, Cheryl; El-Missiry, Ahmed; John, Collin

    2006-01-01

    Background: The authors reviewed the topic of reliability of psychiatric diagnosis from the turn of the 20th century to present. The objectives of this paper are to explore the reasons of unreliability of psychiatric diagnosis and propose ways to improve the reliability of psychiatric diagnosis. Method: The authors reviewed the literature on the concept of reliability of psychiatric diagnosis with emphasis on the impact of interviewing skills, use of diagnostic criteria, and structured interviews on the reliability of psychiatric diagnosis. Results: Causes of diagnostic unreliability are attributed to the patient, the clinician and psychiatric nomenclature. The reliability of psychiatric diagnosis can be enhanced by using diagnostic criteria, defining psychiatric symptoms and structuring the interviews. Conclusions: The authors propose the acronym ‘DR.SED,' which stands for diagnostic criteria, reference definitions, structuring the interview, clinical experience, and data. The authors recommend that clinicians use the DR.SED paradigm to improve the reliability of psychiatric diagnoses. PMID:21103149

  11. Reliable Decentralized Control of Fuzzy Discrete-Event Systems and a Test Algorithm.

    PubMed

    Liu, Fuchun; Dziong, Zbigniew

    2013-02-01

    A framework for the decentralized control of fuzzy discrete-event systems (FDESs) has recently been presented to guarantee the achievement of a given specification under the joint control of all local fuzzy supervisors. As a continuation, this paper addresses the reliable decentralized control of FDESs in the face of possible failures of some local fuzzy supervisors. Roughly speaking, for an FDES equipped with n local fuzzy supervisors, a decentralized supervisor is called k-reliable (1 ≤ k ≤ n) provided that the control performance will not be degraded even when n - k local fuzzy supervisors fail. A necessary and sufficient condition for the existence of k-reliable decentralized supervisors of FDESs is proposed by introducing the notions of M̃uc-controllability and k-reliable coobservability of a fuzzy language. In particular, a polynomial-time algorithm to test k-reliable coobservability is developed by a constructive methodology, which shows that the existence of k-reliable decentralized supervisors of FDESs can be checked in polynomial time.

  12. A Computational Model of Event Segmentation from Perceptual Prediction

    ERIC Educational Resources Information Center

    Reynolds, Jeremy R.; Zacks, Jeffrey M.; Braver, Todd S.

    2007-01-01

    People tend to perceive ongoing continuous activity as series of discrete events. This partitioning of continuous activity may occur, in part, because events correspond to dynamic patterns that have recurred across different contexts. Recurring patterns may lead to reliable sequential dependencies in observers' experiences, which then can be used…

  13. Interrater Reliability to Assure Valid Content in Peer Review of CME-Accredited Presentations

    ERIC Educational Resources Information Center

    Quigg, Mark; Lado, Fred A.

    2009-01-01

    Introduction: The Accreditation Council for Continuing Medical Education (ACCME) provides guidelines for continuing medical education (CME) materials to mitigate problems in the independence or validity of content in certified activities; however, the process of peer review of materials appears largely unstudied and the reproducibility of…

  14. Direct maldi-tof mass spectrometry assay of blood culture broths for rapid identification of Candida species causing bloodstream infections: an observational study in two large microbiology laboratories.

    PubMed

    Spanu, Teresa; Posteraro, Brunella; Fiori, Barbara; D'Inzeo, Tiziana; Campoli, Serena; Ruggeri, Alberto; Tumbarello, Mario; Canu, Giulia; Trecarichi, Enrico Maria; Parisi, Gabriella; Tronci, Mirella; Sanguinetti, Maurizio; Fadda, Giovanni

    2012-01-01

    We evaluated the reliability of the Bruker Daltonik's MALDI Biotyper system in species-level identification of yeasts directly from blood culture bottles. Identification results were concordant with those of the conventional culture-based method for 95.9% of Candida albicans (187/195) and 86.5% of non-albicans Candida species (128/148). Results were available in 30 min (median), suggesting that this approach is a reliable, time-saving tool for routine identification of Candida species causing bloodstream infection.

  15. Thermo-piezo-electro-mechanical simulation of AlGaN (aluminum gallium nitride) / GaN (gallium nitride) High Electron Mobility Transistors

    NASA Astrophysics Data System (ADS)

    Stevens, Lorin E.

    Due to the current public demand for faster, more powerful, and more reliable electronic devices, research is prolific these days in the area of high electron mobility transistor (HEMT) devices. This is because of their usefulness in RF (radio frequency) and microwave power amplifier applications including microwave vacuum tubes, cellular and personal communications services, and widespread broadband access. Although electrical transistor research has been ongoing since its inception in 1947, the transistor itself continues to evolve and improve, in large part because of the many driven researchers and scientists throughout the world who are pushing the limits of what modern electronic devices can do. The purpose of the research outlined in this paper was to better understand the mechanical stresses and strains that are present in a hybrid AlGaN (Aluminum Gallium Nitride) / GaN (Gallium Nitride) HEMT while under electrically active conditions. One of the main issues currently being researched in these devices is their reliability, or their consistent ability to function properly, when subjected to high-power conditions. The researchers of this mechanical study have performed a static (i.e. frequency-independent) reliability analysis using powerful multiphysics computer modeling/simulation to get a better idea of what can cause failure in these devices. Because HEMT transistors are so small (micro/nano-sized), obtaining experimental measurements of stresses and strains during the active operation of these devices is extremely challenging. Physical mechanisms that cause stress/strain in these structures include thermo-structural phenomena due to mismatch in both coefficient of thermal expansion (CTE) and mechanical stiffness between different materials, as well as stress/strain caused by "piezoelectric" effects (i.e. mechanical deformation caused by an electric field and, conversely, voltage induced by mechanical stress) in the AlGaN and GaN device portions (both piezoelectric materials). This piezoelectric effect can be triggered by voltage applied to the device's gate contact and the existence of an HEMT-unique "two-dimensional electron gas" (2DEG) at the GaN-AlGaN interface. COMSOL Multiphysics software has been utilized to create a finite element (i.e. piece-by-piece) simulation to visualize both temperature and stress/strain distributions that can occur in the device, by coupling together (i.e. solving simultaneously) the thermal, electrical, structural, and piezoelectric effects inherent in the device. The 2DEG has been modeled not with the typically used self-consistent quantum-physics equations, but as a combined localized heat source* (thermal) and surface charge density* (electrical) boundary condition. Critical values of stress/strain and their respective locations in the device have been identified. Failure locations have been estimated based on the critical values of stress and strain, and compared with reports in the literature. Knowledge of the overall stress/strain distribution has assisted in determining the likely device failure mechanisms and possible mitigation approaches. The contribution and interaction of individual stress mechanisms, including piezoelectric effects and thermal expansion caused by device self-heating (i.e. fast-moving electrons causing heat), have been quantified. * Values taken from the results of experimental studies in the literature.

  16. Statistical Tests of Reliability of NDE

    NASA Technical Reports Server (NTRS)

    Baaklini, George Y.; Klima, Stanley J.; Roth, Don J.; Kiser, James D.

    1987-01-01

    Capabilities of advanced material-testing techniques analyzed. Collection of four reports illustrates statistical method for characterizing flaw-detecting capabilities of sophisticated nondestructive evaluation (NDE). Method used to determine reliability of several state-of-the-art NDE techniques for detecting failure-causing flaws in advanced ceramic materials considered for use in automobiles, airplanes, and space vehicles.
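Flaw-detection reliability in NDE studies like these is typically summarized as a probability of detection (POD) with a confidence bound. A minimal sketch for the all-detections case; the "29 of 29" demonstration below is a conventional textbook example, not a figure from these reports:

```python
def pod_lower_bound(n_detected, n_trials, confidence=0.95):
    """One-sided lower confidence bound on the probability of detection for the
    all-detections case (n_detected == n_trials). If the true POD were at the
    bound p, seeing n straight detections would have probability
    p**n = 1 - confidence, so any lower POD is rejected at that confidence."""
    if n_detected != n_trials:
        raise ValueError("this sketch covers only the all-detections case")
    return (1.0 - confidence) ** (1.0 / n_trials)

# Conventional demonstration: 29 detections in 29 trials gives
# 90% POD at 95% confidence
print(round(pod_lower_bound(29, 29), 3))   # 0.902
```

When some flaws are missed, the bound instead comes from the binomial distribution (e.g. a Clopper-Pearson interval) rather than this closed form.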

  17. Judged seriousness of environmental losses: reliability and cause of loss

    Treesearch

    Thomas C. Brown; Dawn Nannini; Robert B. Gorter; Paul A. Bell; George L. Peterson

    2002-01-01

    Public judgments of the seriousness of environmental losses were found to be internally consistent for most respondents, and largely unaffected by attempts to manipulate responses by altering the mix of losses being judged. Both findings enhance confidence in the feasibility of developing reliable rankings of the seriousness of environmental losses to aid resource...

  18. Development of a reliable and highly sensitive, digital PCR-based assay for early detection of HLB

    USDA-ARS?s Scientific Manuscript database

    Huanglongbing (HLB) is caused by a phloem-limited bacterium, Ca. Liberibacter asiaticus (Las) in the United States. The bacterium often is present at a low concentration and unevenly distributed in the early stage of infection, making reliable and early diagnosis a serious challenge. Conventional d...

  19. A reliable and highly sensitive, digital PCR-based assay for early detection of citrus Huanglongbing

    USDA-ARS?s Scientific Manuscript database

    Huanglongbing (HLB) is caused by a phloem-limited bacterium, Ca. Liberibacter asiaticus (Las) in the United States. The bacterium is often present at a low concentration and unevenly distributed in the early stage of infection, making reliable and early diagnosis a challenge. We have developed a pro...

  20. Assessment of concrete damage and strength degradation caused by reinforcement corrosion

    NASA Astrophysics Data System (ADS)

    Nepal, Jaya; Chen, Hua-Peng

    2015-07-01

    Structural performance deterioration of reinforced concrete structures has been extensively investigated, but few studies have examined the effect of reinforcement corrosion on time-dependent reliability while accounting for the corrosion-induced changes in the mechanical characteristics of the bond interface. This paper deals with how reinforcement corrosion creates different types of defects in a concrete structure and how these defects drive the deterioration of structural capacity in corrosion-affected reinforced concrete structures during their service life. Cracking in cover concrete due to reinforcement corrosion is investigated using a rebar-concrete model and realistic concrete properties. The flexural strength deterioration is predicted analytically on the basis of bond strength evolution due to reinforcement corrosion, and the prediction is examined against the available experimental data. A time-dependent reliability analysis is undertaken to calculate the lifetime structural reliability of corrosion-damaged concrete structures by stochastic deterioration modelling of reinforced concrete. The results of the numerical example show that the proposed approach is capable of evaluating the damage caused by reinforcement corrosion and of predicting the structural reliability of concrete structures during their lifecycle.

  1. Development of KSC program for investigating and generating field failure rates. Volume 2: Recommended format for reliability handbook for ground support equipment

    NASA Technical Reports Server (NTRS)

    Bloomquist, C. E.; Kallmeyer, R. H.

    1972-01-01

    Field failure rates and confidence factors are presented for 88 identifiable components of the ground support equipment at the John F. Kennedy Space Center. For most of these, supplementary information regarding failure mode and cause is tabulated. Complete reliability assessments are included for three systems, eight subsystems, and nine generic piece-part classifications. Procedures for updating or augmenting the reliability results presented in this handbook are also included.
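Field failure rates with confidence factors of this kind are commonly derived from a chi-squared model of exponential (constant-rate) failure times. A sketch assuming SciPy; this illustrates the standard statistical recipe, not necessarily the handbook's exact procedure:

```python
from scipy.stats import chi2

def failure_rate_interval(failures, hours, conf=0.90):
    """Point estimate and two-sided confidence interval for a constant failure
    rate (failures per operating hour), assuming exponentially distributed
    failure times. Bounds use chi-squared quantiles: lower on 2r degrees of
    freedom, upper on 2r + 2."""
    alpha = 1.0 - conf
    lam = failures / hours                                        # point estimate
    lo = (chi2.ppf(alpha / 2, 2 * failures) / (2 * hours)
          if failures > 0 else 0.0)
    hi = chi2.ppf(1 - alpha / 2, 2 * failures + 2) / (2 * hours)
    return lam, lo, hi

# Hypothetical record: 5 failures observed over 10,000 component-hours
lam, lo, hi = failure_rate_interval(5, 10000.0)
print(lam)   # 0.0005
```

The reciprocal of the rate gives the MTBF, so the same interval, inverted, bounds the mean time between failures.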

  2. Model testing for reliability and validity of the Outcome Expectations for Exercise Scale.

    PubMed

    Resnick, B; Zimmerman, S; Orwig, D; Furstenberg, A L; Magaziner, J

    2001-01-01

    Development of a reliable and valid measure of outcome expectations for exercise appropriate for older adults will help establish the relationship between outcome expectations and exercise. Once established, this measure can be used to facilitate the development of interventions to strengthen outcome expectations and improve adherence to regular exercise in older adults. Building on the initial psychometrics of the Outcome Expectations for Exercise (OEE) scale, the purpose of the current study was to use structural equation modeling to provide additional support for the reliability and validity of this measure. The OEE scale is a 9-item measure specifically focusing on the perceived consequences of exercise for older adults. The OEE scale was given to 191 residents of a continuing care retirement community. The mean age of the participants was 85 ± 6.1 years, and the majority were female (76%), White (99%), and unmarried (76%). Using structural equation modeling, reliability was based on R2 values, and validity was based on a confirmatory factor analysis and path coefficients. There was continued evidence for reliability of the OEE based on R2 values ranging from .42 to .77, and for validity with path coefficients ranging from .69 to .87 and evidence of model fit (χ2 = 69, df = 27, p < .05, NFI = .98, RMSEA = .07). The evidence of reliability and validity of this measure has important implications for clinical work and research. The OEE scale can be used to identify older adults who have low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and thereby improve exercise behavior.

  3. Perfusion dynamics assessment with Power Doppler ultrasound in skeletal muscle during maximal and submaximal cycling exercise.

    PubMed

    Heres, H M; Schoots, T; Tchang, B C Y; Rutten, M C M; Kemps, H M C; van de Vosse, F N; Lopata, R G P

    2018-06-01

    Assessment of limitations in the perfusion dynamics of skeletal muscle may provide insight into the pathophysiology of exercise intolerance in, e.g., heart failure patients. Power Doppler ultrasound (PDUS) has been recognized as a sensitive tool for the detection of muscle blood flow. In this volunteer study (N = 30), a method is demonstrated for perfusion measurements in the vastus lateralis muscle, with PDUS, during standardized cycling exercise protocols, and the test-retest reliability has been investigated. Fixation of the ultrasound probe on the upper leg allowed for continuous PDUS measurements. Cycling exercise protocols included a submaximal and an incremental exercise to maximal power. The relative perfused area (RPA) was determined as a measure of perfusion. Absolute and relative reliability of RPA amplitude and kinetic parameters during exercise (onset, slope, maximum value) and recovery (overshoot, decay time constants) were investigated. An RPA increase during exercise followed by a signal recovery was measured in all volunteers. Amplitudes and kinetic parameters during exercise and recovery showed poor to good relative reliability (ICC ranging from 0.2 to 0.8) and poor to moderate absolute reliability (coefficient of variation (CV) range 18-60%). A method has been demonstrated which allows for continuous (Power Doppler) ultrasonography and assessment of perfusion dynamics in skeletal muscle during exercise. The reliability of the RPA amplitudes and kinetics ranges from poor to good, while the reliability of the RPA increase in submaximal cycling (ICC = 0.8, CV = 18%) is promising for non-invasive clinical assessment of the muscle perfusion response to daily exercise.

  4. A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects

    PubMed Central

    Sun, Bo; Li, Yu; Ye, Tianyuan

    2015-01-01

    Environmental effects are not considered sufficiently in product design. Reliability problems caused by environmental effects are very prominent. This paper proposes a method to apply ontology approach in product design. During product reliability design and analysis, environmental effects knowledge reusing is achieved. First, the relationship of environmental effects and product reliability is analyzed. Then environmental effects ontology to describe environmental effects domain knowledge is designed. Related concepts of environmental effects are formally defined by using the ontology approach. This model can be applied to arrange environmental effects knowledge in different environments. Finally, rubber seals used in the subhumid acid rain environment are taken as an example to illustrate ontological model application on reliability design and analysis. PMID:25821857

  5. A novel ontology approach to support design for reliability considering environmental effects.

    PubMed

    Sun, Bo; Li, Yu; Ye, Tianyuan; Ren, Yi

    2015-01-01

    Environmental effects are not considered sufficiently in product design, and the reliability problems they cause are prominent. This paper proposes a method for applying an ontology approach in product design, so that environmental effects knowledge can be reused during product reliability design and analysis. First, the relationship between environmental effects and product reliability is analyzed. Then an environmental effects ontology is designed to describe the domain knowledge of environmental effects, and the related concepts are formally defined using the ontology approach. The model can be applied to organize environmental effects knowledge for different environments. Finally, rubber seals used in a subhumid acid-rain environment are taken as an example to illustrate the application of the ontological model to reliability design and analysis.

  6. Changes of bone mineral density after cementless total hip arthroplasty with two different stems

    PubMed Central

    Ito, Kouji; Yamamoto, Kengo

    2007-01-01

    Cementless total hip arthroplasty has achieved reliable long-term results since porous coatings were developed, but postoperative changes around the stem remain poorly documented. In this study, changes of the bone mineral density (BMD) were compared between two types of cementless stem. In group B (28 patients with 31 hips), a straight tapered stem with porous plasma spray coating on the proximal 1/4 was used, while group S (24 patients with 26 hips) was given a fluted, tri-slot stem with porous hydroxyapatite coating on the proximal 1/3. In group B, there was an early decrease of BMD, which recovered after 12 months, indicating that stress shielding was minimal. In group S, however, BMD continued to decrease without recovery. The stem shape and radiological findings suggested that the cause of stress shielding in group S was distal fixation. PMID:17225187

  7. Interactive chemistry management system (ICMS); Field demonstration results at United Illuminating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noto, F.A.; Farrell, D.M.; Lombard, E.V.

    1988-01-01

    The authors report on a field demonstration of the interactive chemistry management system (ICMS) performed in the late summer of 1987 at the New Haven Harbor Station of United Illuminating Co. This demonstration was the first installation of the ICMS at an actual plant site. The ICMS is a computer-based system designed to monitor, diagnose, and provide optional automatic control of water and steam chemistry throughout the steam generator cycle. It is one of the diagnostic modules that comprise CE-TOPS (combustion engineering total on-line performance system), which continuously monitors operating conditions and suggests priority actions to increase operating efficiency, extend the performance life of boiler components, and reduce maintenance costs. By reducing the number of forced outages through early identification of potentially detrimental conditions, diagnosis of possible causes, and execution of corrective actions, improvements in unit availability and reliability will result.

  8. Third-generation pure alumina and alumina matrix composites in total hip arthroplasty

    PubMed Central

    Hannouche, Didier; Zingg, Matthieu; Miozzari, Hermes; Nizard, Remy; Lübbeke, Anne

    2018-01-01

    Wear, corrosion and periprosthetic osteolysis are important causes of failure in joint arthroplasty, especially in young patients. Ceramic bearings, developed 40 years ago, are an increasingly popular choice in hip arthroplasty. New manufacturing procedures have increased the strength and reliability of ceramic materials and reduced the risk of complications. In recent decades, ceramics made of pure alumina have continuously improved, resulting in a surgical-grade material that fulfills clinical requirements. Despite this track record of safety and long-term results, third-generation pure alumina ceramics are being replaced in clinical practice by alumina matrix composites, which are composed of alumina and zirconia. In this review, the characteristics of both materials are discussed, and the long-term results with third-generation alumina-on-alumina bearings and the associated complications are compared with those of other available ceramics. Cite this article: EFORT Open Rev 2018;3:7-14. DOI: 10.1302/2058-5241.3.170034 PMID:29657840

  9. Screening of the spine in adolescents: inter- and intra-rater reliability and measurement error of commonly used clinical tests.

    PubMed

    Aartun, Ellen; Degerfalk, Anna; Kentsdotter, Linn; Hestbaek, Lise

    2014-02-10

    Evidence on the reliability of clinical tests used for the spinal screening of children and adolescents is currently lacking. The aim of this study was to determine the inter- and intra-rater reliability and measurement error of clinical tests commonly used when screening young spines. Two experienced chiropractors independently assessed 111 adolescents aged 12-14 years who were recruited from a primary school in Denmark. A standardised examination protocol was used to test inter-rater reliability, including tests for scoliosis, hypermobility, general mobility, inter-segmental mobility and end range pain in the spine. Seventy-five of the 111 subjects were re-examined after one to four hours to test intra-rater reliability. Percentage agreement and Cohen's kappa were calculated for binary variables, and the intraclass correlation coefficient (ICC) and Bland-Altman plots with Limits of Agreement (LoA) were calculated for continuous measures. Inter-rater percentage agreement for binary data ranged from 59.5% to 100%. Kappa values ranged from 0.06 to 1.00. Kappa ≥ 0.40 was seen for elbow, thumb, fifth finger and trunk/hip flexion hypermobility, pain response in inter-segmental mobility and end range pain in lumbar flexion and extension. For continuous data, ICCs ranged from 0.40 to 0.95. Only forward flexion as measured by finger-to-floor distance reached an acceptable ICC (≥ 0.75). Overall, results for intra-rater reliability were better than for inter-rater reliability, but for both components the LoA were quite wide compared with the range of assessments. Some clinical tests showed good, and some tests poor, reliability when applied in a spinal screening of adolescents. The results could probably be improved by additional training and further test standardisation. This is the first step in evaluating the value of these tests for the spinal screening of adolescents. Future research should determine the association between these tests and current and/or future neck and back pain.
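The agreement statistics used for the binary tests above are straightforward to compute. The sketch below shows percentage agreement and Cohen's kappa for one binary clinical test scored by two raters; the ratings are hypothetical, not study data.

```python
# Inter-rater agreement for a binary test: raw percentage agreement and
# Cohen's kappa, which corrects for chance agreement.

def percent_agreement(r1, r2):
    return 100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n                     # observed
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)    # chance
    return (po - pe) / (1 - pe)

# Hypothetical ratings (1 = finding present, 0 = absent) for 10 subjects.
rater1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(percent_agreement(rater1, rater2), round(cohens_kappa(rater1, rater2), 2))
```

Note how 80% raw agreement shrinks to a kappa of about 0.62 once chance agreement is removed, which is why the study reports both measures.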

  10. Reconciling Streamflow Uncertainty Estimation and River Bed Morphology Dynamics. Insights from a Probabilistic Assessment of Streamflow Uncertainties Using a Reliability Diagram

    NASA Astrophysics Data System (ADS)

    Morlot, T.; Mathevet, T.; Perret, C.; Favre Pugin, A. C.

    2014-12-01

    Streamflow uncertainty estimation has recently received considerable attention in the literature. A dynamic rating curve assessment method has been introduced (Morlot et al., 2014). This dynamic method computes a rating curve for each gauging and a continuous streamflow time-series while also estimating streamflow uncertainties. The streamflow uncertainty accounts for many sources of uncertainty (water level, rating curve interpolation and extrapolation, gauging aging, etc.) and yields an estimated streamflow distribution for each day. To characterise streamflow uncertainty, a probabilistic framework has been applied to a large sample of hydrometric stations (>250) of the Division Technique Générale (DTG) of Électricité de France (EDF) hydrometric network in France. A reliability diagram (Wilks, 1995) has been constructed for some stations, based on the streamflow distribution estimated for a given day and compared to a real streamflow observation obtained via a gauging. To build the reliability diagram, we computed the probability of each observed streamflow (gauging) under the estimated streamflow distribution. The reliability diagram then allows one to check whether the distribution of non-exceedance probabilities of the gaugings follows a uniform law (i.e., the quantiles should be equiprobable). From the shape of the reliability diagram, the probabilistic calibration is characterised (underdispersion, overdispersion, bias) (Thyer et al., 2009). In this paper, we present case studies where reliability diagrams have different statistical properties for different periods. Compared with our knowledge of the river bed morphology dynamics of these hydrometric stations, we show how the reliability diagram provides invaluable information on river bed movements, such as continuous digging or backfilling of the hydraulic control due to erosion or sedimentation processes.
Hence, careful analysis of reliability diagrams makes it possible to reconcile statistics with long-term river bed morphology processes. This knowledge improves our real-time management of hydrometric stations through a better characterisation of erosion/sedimentation processes and of the stability of each station's hydraulic control.
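The uniformity check described in this record is the probability integral transform (PIT): each gauging is converted to its probability of non-exceedance under the estimated streamflow distribution, and the reliability diagram compares the empirical frequency in each probability bin against the ideal uniform frequency. The sketch below illustrates this with simulated, perfectly calibrated Gaussian predictive distributions; all numbers are hypothetical.

```python
import math
import random

def pit(obs, mean, sd):
    """Probability of non-exceedance of the observed flow under a Gaussian CDF."""
    return 0.5 * (1 + math.erf((obs - mean) / (sd * math.sqrt(2))))

random.seed(1)

# Simulated gaugings drawn from the same distributions the model predicts,
# i.e. a perfectly calibrated case: PIT values should be roughly uniform.
pits = []
for _ in range(2000):
    mu, sigma = random.uniform(5, 50), random.uniform(0.5, 5)
    obs = random.gauss(mu, sigma)
    pits.append(pit(obs, mu, sigma))

# Reliability diagram data: empirical frequency per decile vs the ideal 10%.
bins = [0] * 10
for p in pits:
    bins[min(int(p * 10), 9)] += 1
for i, b in enumerate(bins):
    print(f"{i / 10:.1f}-{(i + 1) / 10:.1f}: {100 * b / len(pits):.1f}%")
```

Underdispersion would pile PIT values into the extreme bins, overdispersion into the central ones, and bias would skew the histogram to one side, which is how the shapes discussed by Thyer et al. (2009) are read.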

  11. Perceiving numbers does not cause automatic shifts of spatial attention.

    PubMed

    Fattorini, Enrico; Pinto, Mario; Rotondaro, Francesca; Doricchi, Fabrizio

    2015-12-01

    It is frequently assumed that the brain codes number magnitudes according to an inherent left-to-right spatial organization. In support of this hypothesis it has been reported that in humans, perceiving small numbers induces automatic shifts of attention toward the left side of space whereas perceiving large numbers automatically shifts attention to the right side of space (i.e., Attentional SNARC: Att-SNARC; Fischer, Castel, Dodd, & Pratt, 2003). Nonetheless, the Att-SNARC has often failed to replicate, and its reliability has never been tested. To ascertain whether the mere perception of numbers causes shifts of spatial attention or whether the number-space interaction takes place at a different stage of cognitive processing, we re-assessed the consistency and reliability of the Att-SNARC and investigated its role in the production of SNARC effects in Parity Judgement (PJ) and Magnitude Comparison (MC) tasks. In a first study of 60 participants, we found no Att-SNARC, despite finding strong PJ- and MC-SNARC effects. No correlation was present between the Att-SNARC and the SNARC. Split-half tests showed no reliability for the Att-SNARC and high reliability for the PJ- and MC-SNARC. In a second study, we re-assessed the Att-SNARC and tested its direct influence on an MC-SNARC task with laterally presented targets. No Att-SNARC and no influence of the Att-SNARC on the MC-SNARC were found. Here too, the SNARC was reliable whereas the Att-SNARC task was not. Finally, in a third study we observed a significant Att-SNARC when participants were asked to recall the position occupied on a ruler by the numbers presented in each trial; however, the Att-SNARC task was again not reliable. These results show that perceiving numbers does not cause automatic shifts of spatial attention and that, whenever present, these shifts do not modulate the SNARC. 
The same results suggest that numbers have no inherent mental left-to-right organization and that, whenever present, this organization can have both response-related and strategically driven memory-related origins. Nonetheless, response-related factors generate more reliable and stable spatial representations of numbers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Measuring maternal satisfaction with maternity care: A systematic integrative review: What is the most appropriate, reliable and valid tool that can be used to measure maternal satisfaction with continuity of maternity care?

    PubMed

    Perriman, Noelyn; Davis, Deborah

    2016-06-01

    The objective of this systematic integrative review is to identify, summarise and communicate the findings of research relating to tools that measure maternal satisfaction with continuity of maternity care models, and thereby to determine the most appropriate, reliable and valid tool for measuring maternal satisfaction with continuity of maternity care. A systematic integrative review of published and unpublished literature was undertaken using selected databases. Research papers were included if they measured maternal satisfaction in a continuity model of maternity care, were published in English after 1999, and included (or made available) the instrument used to measure satisfaction. Six hundred and thirty-two unique papers were identified, and after applying the selection criteria, four papers were included in the review. Three of these originated in Australia and one in Canada. The primary focus of all papers was not the development of a tool to measure maternal satisfaction but the comparison of outcomes in different models of care. The instruments developed varied in the degree to which they were tested for validity and reliability. Women's satisfaction with maternity services is an important measure of quality. Most satisfaction surveys in maternity appear to reflect fragmented models of care, though continuity of care models are increasing in line with the evidence demonstrating their effectiveness. It is important that robust tools are developed for this context and that there is some consistency in the way satisfaction is measured and reported for the purposes of benchmarking and quality improvement. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  13. Investigation of the thermal hazardous effect of protective clothing caused by stored energy discharge.

    PubMed

    He, Jiazhen; Lu, Yehu; Chen, Yan; Li, Jun

    2017-09-15

    In addition to direct thermal energy from a heating source, a large amount of thermal energy stored in clothing will continue to discharge to the skin after exposure. Investigating the thermal hazard posed by this stored energy discharge is crucial for the reliability of thermal protective clothing. In this study, several indices were proposed and applied to evaluate the impact of thermal energy discharge on human skin. The heat discharge from different layers of fabric systems was investigated, and the influences of air gaps and applied compression were examined. Heat fluxes at the boundaries of fabric layers and the distribution of heat discharge were determined. Additionally, the correlation between heat storage during exposure and heat discharge after exposure was identified. The results demonstrated that heat discharge to the skin could be correlated with heat storage within the fabric; however, it depended strongly on the air gap under the clothing, the applied compression, and the insulation provided by the fabric layers. Results from this study could contribute to a thorough understanding of the thermal hazards of clothing and enhance the technical basis for developing new fabric combinations that minimize energy discharge after exposure. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. THE BALTIC SEA REGION - IN NEED OF A MORE TOTAL DEFENSE

    DTIC Science & Technology

    2017-04-05

    quick, reliable, transparent information, and education in source reliability. Furthermore, a strengthened Economic Defense is needed, where especially...Military forces once again were used to change borders in Europe.1 Economic sanctions and other responses by the West followed, against a revanchist...historical, geopolitical, military, and economic . There is a substantial risk that Russia, if not deterred, will continue its strategy of cross-domain

  15. A Case Study on Improving Intensive Care Unit (ICU) Services Reliability: By Using Process Failure Mode and Effects Analysis (PFMEA)

    PubMed Central

    Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad

    2016-01-01

    Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research identifies and analyzes ICU process failure modes from the standpoint of a systematic approach to errors. Methods: In this descriptive study, data were gathered qualitatively through observations, document reviews, and Focus Group Discussions (FGDs) with the process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on the failures' Risk Priority Numbers (RPNs) according to the Failure Modes and Effects Analysis (FMEA) method; in addition, some causes of failures were analyzed with the qualitative Eindhoven Classification Model (ECM). Results: Through the FMEA methodology, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failure modes from 99 ICU activities in hospital B were identified and evaluated. At the 90% reliability level (RPN ≥ 100), 18 failures in hospital A and 42 in hospital B were identified as non-acceptable risks, and their causes were then analyzed with the ECM. Conclusions: Applying the modified PFMEA to improve process reliability in two selected ICUs in two different kinds of hospitals shows that this method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and makes them eager to identify the causes, recommend corrective actions, and participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can readily identify failure causes from a health care perspective. PMID:27157162
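The RPN screening step in FMEA is a simple product of three ordinal scores. The sketch below applies the abstract's RPN ≥ 100 cut-off to a few hypothetical ICU failure modes (the modes and scores are illustrative examples, not data from the study).

```python
# FMEA risk screening: each failure mode is scored for severity (S),
# occurrence (O), and detection difficulty (D) on 1-10 scales,
# RPN = S * O * D, and modes at or above the cut-off are flagged.

failure_modes = [
    # (description, severity, occurrence, detection) -- hypothetical
    ("medication dose transcription error", 8, 4, 5),
    ("ventilator alarm not heard",          9, 3, 2),
    ("central line labelling mix-up",       7, 2, 6),
    ("hand-over note incomplete",           5, 6, 4),
]

THRESHOLD = 100  # the abstract's cut-off for non-acceptable risk

for desc, s, o, d in failure_modes:
    rpn = s * o * d
    flag = "NON-ACCEPTABLE" if rpn >= THRESHOLD else "acceptable"
    print(f"{desc:38s} RPN={rpn:3d}  {flag}")
```

Note that a high-severity mode with good detectability (the alarm example) can score below a moderate-severity mode that occurs often and is hard to detect; this is precisely why FMEA prioritizes by the product rather than by severity alone.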

  16. The Injury/Illness Performance Project (IIPP): A Novel Epidemiological Approach for Recording the Consequences of Sports Injuries and Illnesses

    PubMed Central

    Fuller, Colin; Jaques, Rod; Hunter, Glenn

    2013-01-01

    Background. Describing the frequency, severity, and causes of sports injuries and illnesses reliably is important for quantifying the risk to athletes and for directing prevention initiatives. Methods. Time-loss and/or medical-attention definitions have long been used in sports injury/illness epidemiology research, but their limitations mean that some events are incorrectly classified or omitted completely when athletes continue to train and compete at high levels but experience restrictions in their performance. Introducing a graded performance-restriction definition may provide a solution to this issue. Results. Results from the Great Britain injury/illness performance project (IIPP) are presented using a performance-restriction adaptation of the accepted surveillance consensus methodologies. The IIPP involved 322 Olympic athletes (male: 172; female: 150) from 10 Great Britain Olympic sports between September 2009 and August 2012. Of all injuries (n = 565), 216 were classified as causing time-loss, 346 as causing performance-restriction, and 3 were unclassified. For athlete illnesses (n = 378), the majority (P < 0.01) resulted in time-loss (270) rather than performance-restriction (101) (7 unclassified). Conclusions. Successful implementation of prevention strategies relies on the correct characterisation of injury/illness risk factors. Including a performance-restriction classification could provide a deeper understanding of injuries/illnesses and better-informed prevention initiatives. PMID:26464883

  17. DEVELOPMENT AND EVALUATION OF A CONTINUOUS COARSE (PM10-PM2.5) PARTICLE MONITOR

    EPA Science Inventory

    In this paper, we describe the development and laboratory and field evaluation of a continuous coarse (2.5-10 um) particle mass (PM) monitor that can provide reliable measurements of the coarse mass (CM) concentrations in time intervals as short as 5-10 min. The operating princ...

  18. Continuing Medical Education and Professional Revalidation in Europe: Five Case Examples

    ERIC Educational Resources Information Center

    Maisonneuve, Herve; Matillon, Yves; Negri, Alfonso; Pallares, Luis; Vigneri, Ricardo; Young, Howard L.

    2009-01-01

    Introduction: Since reliable information is scarce to describe continuing medical education (CME) and revalidation in Europe, we carried out a survey in 5 selected countries (France, Germany, Italy, Spain, and the United Kingdom). Methods: A tested questionnaire was sent to 2 experts per country (except in Germany), during August-September 2004.…

  19. Determination of continuous variable entanglement by purity measurements.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-02-27

    We classify the entanglement of two-mode Gaussian states according to their degree of total and partial mixedness. We derive exact bounds that determine maximally and minimally entangled states for fixed global and marginal purities. This characterization allows for an experimentally reliable estimate of continuous variable entanglement based on measurements of purity.

  20. Fiscal Year 2011 U.S. Government Financial Statements: The Federal Government Faces Continuing Financial Management and Long-Term Fiscal Challenges

    DTIC Science & Technology

    2012-03-01

    GAO annually audits the consolidated financial statements of the U.S. government. The Congress and the President need reliable, useful, and timely...based consolidated financial statements . Unless these weaknesses are adequately addressed, they will, among other things, continue to (1) hamper the

  1. Continuously-stirred anaerobic digester to convert organic wastes into biogas: system setup and basic operation.

    PubMed

    Usack, Joseph G; Spirito, Catherine M; Angenent, Largus T

    2012-07-13

    Anaerobic digestion (AD) is a bioprocess that is commonly used to convert complex organic wastes into a useful biogas with methane as the energy carrier. Increasingly, AD is being used in industrial, agricultural, and municipal waste(water) treatment applications. The use of AD technology allows plant operators to reduce waste disposal costs and offset energy utility expenses. In addition to treating organic wastes, energy crops are being converted into the energy carrier methane. As the application of AD technology broadens for the treatment of new substrates and co-substrate mixtures, so does the demand for a reliable testing methodology at the pilot- and laboratory-scale. Anaerobic digestion systems have a variety of configurations, including the continuously stirred tank reactor (CSTR), plug flow (PF), and anaerobic sequencing batch reactor (ASBR) configurations. The CSTR is frequently used in research due to its simplicity in design and operation, but also for its advantages in experimentation. Compared to other configurations, the CSTR provides greater uniformity of system parameters, such as temperature, mixing, chemical concentration, and substrate concentration. Ultimately, when designing a full-scale reactor, the optimum reactor configuration will depend on the character of a given substrate among many other nontechnical considerations. However, all configurations share fundamental design features and operating parameters that render the CSTR appropriate for most preliminary assessments. If researchers and engineers use an influent stream with relatively high concentrations of solids, then lab-scale bioreactor configurations cannot be fed continuously due to plugging problems of lab-scale pumps with solids or settling of solids in tubing. For that scenario with continuous mixing requirements, lab-scale bioreactors are fed periodically and we refer to such configurations as continuously stirred anaerobic digesters (CSADs). 
This article presents a general methodology for constructing, inoculating, operating, and monitoring a CSAD system for the purpose of testing the suitability of a given organic substrate for long-term anaerobic digestion. The construction section of this article will cover building the lab-scale reactor system. The inoculation section will explain how to create an anaerobic environment suitable for seeding with an active methanogenic inoculum. The operating section will cover operation, maintenance, and troubleshooting. The monitoring section will introduce testing protocols using standard analyses. The use of these measures is necessary for reliable experimental assessments of substrate suitability for AD. This protocol should provide greater protection against a common mistake made in AD studies, which is to conclude that reactor failure was caused by the substrate in use, when really it was improper user operation.

  2. Non-invasive continuous blood pressure monitoring of tachycardic episodes during interventional electrophysiology

    PubMed Central

    Maggi, Roberto; Viscardi, Valentina; Furukawa, Toshiyuki; Brignole, Michele

    2010-01-01

    Aims We sought to evaluate the feasibility of continuous non-invasive blood pressure monitoring during interventional electrophysiology procedures. Methods and results We evaluated continuous non-invasive finger blood pressure (BP) monitoring by means of the Nexfin device in 22 patients (mean age 70 ± 24 years) undergoing interventional electrophysiology procedures, in critical situations of hypotension caused by tachyarrhythmias or by intermittent incremental ventricular temporary pacing, up to the maximum tolerated systolic BP fall (mean 61 ± 14 mmHg per patient at a rate of 195 ± 37 bpm). In all patients, Nexfin immediately detected the changes in BP at the onset of tachyarrhythmia and recorded reliable waveforms. The quality of the signal was arbitrarily classified as excellent in 11 cases, good in 10 cases, and sufficient in 1 case. In basal conditions, calibrations of the signal occurred every 49.2 ± 24.3 s and accounted for 4% of total monitoring time; during tachyarrhythmias their frequency increased to one every 12.7 s and they accounted for 19% of total recording duration. A linear correlation over a range of BP values from 41 to 190 mmHg was found between non-invasive and intra-arterial BP among a total of 1055 beats from three patients who underwent simultaneous recordings with both methods (coefficient of correlation 0.81, P < 0.0001). Conclusion Continuous non-invasive BP monitoring is feasible in the clinical practice of an interventional electrophysiology laboratory without the need for an intra-arterial BP line. PMID:20837572

  3. An Introduction to a Reliability Shorthand.

    DTIC Science & Technology

    1981-03-01

    [Report documentation page; text heavily garbled by OCR in the source.] Key words: reliability; system survival; survival function; systems reliability shorthand; redundant systems. Abstract: The determination of a system's ... Simple examples show the convenience of this

  4. An overview of the mathematical and statistical analysis component of RICIS

    NASA Technical Reports Server (NTRS)

    Hallum, Cecil R.

    1987-01-01

    Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
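Problem areas (2) and (4) above, reliability growth and the continue/stop-testing decision, are commonly treated with power-law (Crow-AMSAA) models fitted to cumulative failure times. The sketch below is one such illustration under that assumption, not the RICIS method itself; the failure times are hypothetical.

```python
import math

# Crow-AMSAA (power-law NHPP) reliability-growth fit: expected failures
# N(t) = lam * t**beta. A growth parameter beta < 1 indicates the failure
# intensity is declining, i.e. reliability is improving with test time.

def crow_amsaa(failure_times, total_time):
    """MLE fit of the power-law model; returns (beta, lam, instantaneous MTBF)."""
    n = len(failure_times)
    beta = n / sum(math.log(total_time / t) for t in failure_times)
    lam = n / total_time ** beta
    # Instantaneous failure intensity and MTBF at the end of the test.
    intensity = lam * beta * total_time ** (beta - 1)
    return beta, lam, 1.0 / intensity

# Hypothetical failure times (hours) observed over a 1000-hour test.
times = [25, 60, 140, 300, 520, 850]
beta, lam, mtbf = crow_amsaa(times, 1000.0)
print(f"beta={beta:.2f}  instantaneous MTBF={mtbf:.0f} h")
```

A stop-testing rule can then compare the instantaneous MTBF against a requirement: if the fitted MTBF (with appropriate confidence bounds) exceeds the target, testing can stop; otherwise the widening failure intervals predict how much more test time is needed.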

  5. Influence of Schizotypy on Responding and Contingency Awareness on Free-Operant Schedules of Reinforcement

    ERIC Educational Resources Information Center

    Randell, Jordan; Searle, Rob; Reed, Phil

    2012-01-01

    Schedules of reinforcement typically produce reliable patterns of behaviour, and one factor that can cause deviations from these normally reliable patterns is schizotypy. Low scorers on the unusual experiences subscale of the Oxford-Liverpool Inventory of Feelings and Experiences performed as expected on a yoked random-ratio (RR), random-interval…

  6. Second-Order Conditioning of Human Causal Learning

    ERIC Educational Resources Information Center

    Jara, Elvia; Vila, Javier; Maldonado, Antonio

    2006-01-01

    This article provides the first demonstration of a reliable second-order conditioning (SOC) effect in human causal learning tasks. It demonstrates the human ability to infer relationships between a cause and an effect that were never paired together during training. Experiments 1a and 1b showed a clear and reliable SOC effect, while Experiments 2a…

  7. [Quality of information analysis on basic causes of neonatal deaths recorded in the Mortality Information System: a study in Maceió, Alagoas State, Brazil, 2001-2002].

    PubMed

    Pedrosa, Linda Délia C O; Sarinho, Silvia W; Ordonha, Manoelina R

    2007-10-01

    Analysis of the quality of information on the basic causes of neonatal deaths in Brazil is crucially important, since it allows one to estimate how many deaths are avoidable and provides support for policies to decrease neonatal mortality. The current study aimed to evaluate the reliability and validity of the Mortality Information System (MIS) in discriminating between basic causes of neonatal deaths and defining the percentages of reducible causes. The basic causes of early neonatal deaths in hospitals in Maceió, Alagoas State, were analyzed, and the causes recorded in medical records were compared with the MIS data in order to measure reliability and validity. The modified SEADE Foundation and Wigglesworth classifications were compared to analyze the capacity for reducing neonatal mortality. Maternal causes predominated in the medical records, as compared with respiratory disorders on the death certificates and in the MIS. The percentage of avoidable deaths may be much higher than observed from the MIS, owing to imprecision in completing death certificates. Based on the MIS, the greatest problems lie in the early diagnosis and treatment of neonatal causes. However, the results show that the most pressing problems relate to failures in prenatal care and lack of disease control.

  8. Experimental investigation of fluvial dike breaching due to flow overtopping

    NASA Astrophysics Data System (ADS)

    El Kadi Abderrezzak, K.; Rifai, I.; Erpicum, S.; Archambeau, P.; Violeau, D.; Pirotton, M.; Dewals, B.

    2017-12-01

    The failure of fluvial dikes (levees) often leads to devastating floods that cause loss of life and damage to public infrastructure. Overtopping flows have been recognized as one of the most frequent causes of dike erosion and breaching. Fluvial dike breaching differs from frontal dike (embankment) breaching because of its specific geometry and boundary conditions. Current knowledge of the physical processes underpinning fluvial dike failure due to overtopping remains limited. In addition, the lack of continuous monitoring of 3D breach formation has limited both the analysis of the key mechanisms governing breach development and the validation of conceptual or physically based models. Laboratory tests on breach growth in homogeneous, non-cohesive sandy fluvial dikes due to flow overtopping have been performed. Two experimental setups were constructed, permitting the investigation of various hydraulic and geometric parameters. Each setup includes a main channel separated from a floodplain by a dike. A rectangular initial notch is cut in the crest to initiate dike breaching, and the breach development is monitored continuously using a specifically developed laser profilometry technique. The observations show that the breach develops in two stages: first, the breach deepens and widens while the breach centerline gradually shifts toward the downstream side of the main channel, underlining the influence of the flow momentum component parallel to the dike crest; second, the dike geometry upstream of the breach stops evolving and breach widening continues only toward the downstream side of the main channel. The breach evolution was found to be strongly affected by the flow conditions (i.e., the inflow discharge in the main channel and the downstream boundary condition) and by floodplain confinement.
The findings of this work shed light on key mechanisms of fluvial dike breaching, which differ substantially from those of dam breaching. These specific features need to be incorporated in flood risk analyses involving fluvial dike breach and failure. In addition, a well-documented, reliable data set, with a continuous high resolution monitoring of the 3D breach evolution under various flow conditions, has been gathered, which can be used for validating numerical models.

  9. Construction simulation analysis of 120m continuous rigid frame bridge based on Midas Civil

    NASA Astrophysics Data System (ADS)

    Shi, Jing-xian; Ran, Zhi-hong

    2018-03-01

    In this paper, a three-dimensional finite element model of a continuous rigid frame bridge with a main span of 120 m is established using Midas Civil simulation and analysis software. The deflection and stress of the main beam in each construction stage of the continuous beam bridge are simulated and analyzed, providing a reliable technical basis for the safe construction of the bridge.

  10. Quantifying the test-retest reliability of cerebral blood flow measurements in a clinical model of on-going post-surgical pain: A study using pseudo-continuous arterial spin labelling.

    PubMed

    Hodkinson, Duncan J; Krause, Kristina; Khawaja, Nadine; Renton, Tara F; Huggins, John P; Vennart, William; Thacker, Michael A; Mehta, Mitul A; Zelaya, Fernando O; Williams, Steven C R; Howard, Matthew A

    2013-01-01

    Arterial spin labelling (ASL) is increasingly being applied to study the cerebral response to pain in both experimental human models and patients with persistent pain. Despite its advantages, scanning time and reliability remain important issues in the clinical applicability of ASL. Here we present the test-retest analysis of concurrent pseudo-continuous ASL (pCASL) and visual analogue scale (VAS), in a clinical model of on-going pain following third molar extraction (TME). Using intraclass correlation coefficient (ICC) performance measures, we were able to quantify the reliability of the post-surgical pain state and ΔCBF (change in CBF), both at the group and individual case level. Within-subject, the inter- and intra-session reliability of the post-surgical pain state was ranked good-to-excellent (ICC > 0.6) across both pCASL and VAS modalities. The parameter ΔCBF (change in CBF between pre- and post-surgical states) performed reliably (ICC > 0.4), provided that a single baseline condition (or the mean of more than one baseline) was used for subtraction. Between-subjects, the pCASL measurements in the post-surgical pain state and ΔCBF were both characterised as reliable (ICC > 0.4). However, the subjective VAS pain ratings demonstrated a significant contribution of pain state variability, which suggests diminished utility for interindividual comparisons. These analyses indicate that the pCASL imaging technique has considerable potential for the comparison of within- and between-subjects differences associated with pain-induced state changes and baseline differences in regional CBF. They also suggest that differences in baseline perfusion and functional lateralisation characteristics may play an important role in the overall reliability of the estimated changes in CBF. Repeated measures designs have the important advantage that they provide good reliability for comparing condition effects because all sources of variability between subjects are excluded from the experimental error. 
The ability to elicit reliable neural correlates of on-going pain using quantitative perfusion imaging may help support the conclusions derived from subjective self-report.
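
    The ICC figures quoted above come from a two-way random-effects decomposition. As an illustrative sketch only (hypothetical data, not the study's), the absolute-agreement ICC(2,1) can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1).
    data: (n_subjects, k_sessions) array of repeated measurements."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()  # sessions
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

    With identical session scores the coefficient is 1; a systematic session offset lowers it, reflecting the absolute-agreement definition.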

  11. Validation of a Detailed Scoring Checklist for Use During Advanced Cardiac Life Support Certification

    PubMed Central

    McEvoy, Matthew D.; Smalley, Jeremy C.; Nietert, Paul J.; Field, Larry C.; Furse, Cory M.; Blenko, John W.; Cobb, Benjamin G.; Walters, Jenna L.; Pendarvis, Allen; Dalal, Nishita S.; Schaefer, John J.

    2012-01-01

    Introduction Defining valid, reliable, defensible, and generalizable standards for the evaluation of learner performance is a key issue in assessing both baseline competence and mastery in medical education. However, prior to setting these standards of performance, the reliability of the scores yielded by a grading tool must be assessed. Accordingly, the purpose of this study was to assess the reliability of scores generated from a set of grading checklists used by non-expert raters during simulations of American Heart Association (AHA) MegaCodes. Methods The reliability of scores generated from a detailed set of checklists, when used by four non-expert raters, was tested by grading team leader performance in eight MegaCode scenarios. Videos of the scenarios were reviewed and rated by trained faculty facilitators and by a group of non-expert raters. The videos were reviewed “continuously” and “with pauses.” Two content experts served as the reference standard for grading, and four non-expert raters were used to test the reliability of the checklists. Results Our results demonstrate that non-expert raters are able to produce reliable grades when using the checklists under consideration, demonstrating excellent intra-rater reliability and agreement with a reference standard. The results also demonstrate that non-expert raters can be trained in the proper use of the checklist in a short amount of time, with no discernible learning curve thereafter. Finally, our results show that a single trained rater can achieve reliable scores of team leader performance during AHA MegaCodes when using our checklist in continuous mode, as measures of agreement in total scoring were very strong (Lin’s Concordance Correlation Coefficient = 0.96; Intraclass Correlation Coefficient = 0.97). 
Discussion We have shown that our checklists can yield reliable scores, are appropriate for use by non-expert raters, and are able to be employed during continuous assessment of team leader performance during the review of a simulated MegaCode. This checklist may be more appropriate for use by Advanced Cardiac Life Support (ACLS) instructors during MegaCode assessments than current tools provided by the AHA. PMID:22863996
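
    Lin's concordance correlation coefficient, quoted above as the agreement measure between rater and reference-standard totals, has a closed form. A minimal sketch with made-up scores (not the study's data):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two score vectors.
    Penalizes both scatter about the 45-degree line and mean/scale shifts."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

    Identical score vectors give a CCC of 1; a constant offset between raters lowers it even when the Pearson correlation stays at 1.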

  12. Self-regulation strategy, feedback timing and hemodynamic properties modulate learning in a simulated fMRI neurofeedback environment.

    PubMed

    Oblak, Ethan F; Lewis-Peacock, Jarrod A; Sulzer, James S

    2017-07-01

    Direct manipulation of brain activity can be used to investigate causal brain-behavior relationships. Current noninvasive neural stimulation techniques are too coarse to manipulate behaviors that correlate with fine-grained spatial patterns recorded by fMRI. However, these activity patterns can be manipulated by having people learn to self-regulate their own recorded neural activity. This technique, known as fMRI neurofeedback, faces challenges as many participants are unable to self-regulate. The causes of this non-responder effect are not well understood due to the cost and complexity of such investigation in the MRI scanner. Here, we investigated the temporal dynamics of the hemodynamic response measured by fMRI as a potential cause of the non-responder effect. Learning to self-regulate the hemodynamic response involves a difficult temporal credit-assignment problem because this signal is both delayed and blurred over time. Two factors critical to this problem are the prescribed self-regulation strategy (cognitive or automatic) and feedback timing (continuous or intermittent). Here, we sought to evaluate how these factors interact with the temporal dynamics of fMRI without using the MRI scanner. We first examined the role of cognitive strategies by having participants learn to regulate a simulated neurofeedback signal using a unidimensional strategy: pressing one of two buttons to rotate a visual grating that stimulates a model of visual cortex. Under these conditions, continuous feedback led to faster regulation compared to intermittent feedback. Yet, since many neurofeedback studies prescribe implicit self-regulation strategies, we created a computational model of automatic reward-based learning to examine whether this result held true for automatic processing. When feedback was delayed and blurred based on the hemodynamics of fMRI, this model learned more reliably from intermittent feedback compared to continuous feedback. 
These results suggest that different self-regulation mechanisms prefer different feedback timings, and that these factors can be effectively explored and optimized via simulation prior to deployment in the MRI scanner.
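
    The temporal credit-assignment problem described above arises because the feedback signal is the control behavior convolved with a delayed, blurred hemodynamic response. A minimal sketch (a simplified single-gamma HRF and made-up timings, not the authors' simulation) of that delay and blur:

```python
import numpy as np

def gamma_hrf(t, shape=6.0, scale=1.0):
    """Simplified single-gamma hemodynamic response function,
    normalized to unit area (peak near (shape-1)*scale seconds)."""
    t = np.asarray(t, float)
    h = t ** (shape - 1) * np.exp(-t / scale)
    return h / h.sum()

t = np.arange(0, 30, 1.0)                  # 1 s resolution, 30 s kernel
hrf = gamma_hrf(t)
control = np.zeros(60)
control[5:10] = 1.0                        # a 5 s burst of "self-regulation"
feedback = np.convolve(control, hrf)[:60]  # what the participant would see
peak_lag = int(np.argmax(feedback)) - 5    # seconds from burst onset to peak
```

    The feedback peak arrives several seconds after the burst ends, so a learner receiving continuous feedback must associate a reward with an action taken well in the past.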

  13. Self-regulation strategy, feedback timing and hemodynamic properties modulate learning in a simulated fMRI neurofeedback environment

    PubMed Central

    Sulzer, James S.

    2017-01-01

    Direct manipulation of brain activity can be used to investigate causal brain-behavior relationships. Current noninvasive neural stimulation techniques are too coarse to manipulate behaviors that correlate with fine-grained spatial patterns recorded by fMRI. However, these activity patterns can be manipulated by having people learn to self-regulate their own recorded neural activity. This technique, known as fMRI neurofeedback, faces challenges as many participants are unable to self-regulate. The causes of this non-responder effect are not well understood due to the cost and complexity of such investigation in the MRI scanner. Here, we investigated the temporal dynamics of the hemodynamic response measured by fMRI as a potential cause of the non-responder effect. Learning to self-regulate the hemodynamic response involves a difficult temporal credit-assignment problem because this signal is both delayed and blurred over time. Two factors critical to this problem are the prescribed self-regulation strategy (cognitive or automatic) and feedback timing (continuous or intermittent). Here, we sought to evaluate how these factors interact with the temporal dynamics of fMRI without using the MRI scanner. We first examined the role of cognitive strategies by having participants learn to regulate a simulated neurofeedback signal using a unidimensional strategy: pressing one of two buttons to rotate a visual grating that stimulates a model of visual cortex. Under these conditions, continuous feedback led to faster regulation compared to intermittent feedback. Yet, since many neurofeedback studies prescribe implicit self-regulation strategies, we created a computational model of automatic reward-based learning to examine whether this result held true for automatic processing. When feedback was delayed and blurred based on the hemodynamics of fMRI, this model learned more reliably from intermittent feedback compared to continuous feedback. 
These results suggest that different self-regulation mechanisms prefer different feedback timings, and that these factors can be effectively explored and optimized via simulation prior to deployment in the MRI scanner. PMID:28753639

  14. Reliable intraocular pressure measurement using automated radio-wave telemetry

    PubMed Central

    Paschalis, Eleftherios I; Cade, Fabiano; Melki, Samir; Pasquale, Louis R; Dohlman, Claes H; Ciolino, Joseph B

    2014-01-01

    Purpose To present an autonomous intraocular pressure (IOP) measurement technique using a wireless implantable transducer (WIT) and a motion sensor. Methods The WIT optical aid was implanted within the ciliary sulcus of a normotensive rabbit eye after extracapsular clear lens extraction. An autonomous wireless data system (AWDS) comprising a WIT and an external antenna aided by a motion sensor provided continuous IOP readings. The sensitivity of the technique was determined by the ability to detect IOP changes resulting from the administration of latanoprost 0.005% or dorzolamide 2%, while the reliability was determined by the agreement between baseline and vehicle (saline) IOP. Results On average, 12 diurnal and 205 nocturnal IOP measurements were performed with latanoprost, and 26 diurnal and 205 nocturnal measurements with dorzolamide. No difference was found between mean baseline IOP (13.08±2.2 mmHg) and mean vehicle IOP (13.27±2.1 mmHg) (P=0.45), suggesting good measurement reliability. Both antiglaucoma medications caused significant IOP reduction compared to baseline; latanoprost reduced mean IOP by 10% (1.3±3.54 mmHg; P<0.001), and dorzolamide by 5% (0.62±2.22 mmHg; P<0.001). Use of latanoprost resulted in an overall twofold higher IOP reduction compared to dorzolamide (P<0.001). Repeatability was ±1.8 mmHg, assessed by the variability of consecutive IOP measurements performed in a short period of time (≤1 minute), during which the IOP is not expected to change. Conclusion IOP measurements in conscious rabbits obtained without the need for human interaction using the AWDS are feasible and provide reproducible results. PMID:24531415

  15. The Clinical Research Tool: a high-performance microdialysis-based system for reliably measuring interstitial fluid glucose concentration.

    PubMed

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-05-01

    A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. In vitro characterization with buffered glucose solutions (c_glucose = 0–26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. 2009 Diabetes Technology Society.
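
    The MARE figures above follow the standard definition: the mean, over all paired readings, of the absolute error relative to the reference value. A short sketch with illustrative numbers (not the study's data):

```python
def mare(measured, reference):
    """Mean absolute relative error, in percent, of measured vs. reference
    values (reference values must be nonzero)."""
    errs = [abs(m - r) / r for m, r in zip(measured, reference)]
    return 100.0 * sum(errs) / len(errs)

# Hypothetical sensor readings vs. reference blood glucose (mmol/l):
print(round(mare([5.2, 7.9, 10.4], [5.0, 8.0, 10.0]), 1))  # → 3.1
```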

  16. The Clinical Research Tool: A High-Performance Microdialysis-Based System for Reliably Measuring Interstitial Fluid Glucose Concentration

    PubMed Central

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-01-01

    Background A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. Method In vitro characterization with buffered glucose solutions (c_glucose = 0–26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Results Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. Conclusion The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. PMID:20144284

  17. Test-retest reliability of behavioral measures of impulsive choice, impulsive action, and inattention.

    PubMed

    Weafer, Jessica; Baggott, Matthew J; de Wit, Harriet

    2013-12-01

    Behavioral measures of impulsivity are widely used in substance abuse research, yet relatively little attention has been devoted to establishing their psychometric properties, especially their reliability over repeated administration. The current study examined the test-retest reliability of a battery of standardized behavioral impulsivity tasks, including measures of impulsive choice (i.e., delay discounting, probability discounting, and the Balloon Analogue Risk Task), impulsive action (i.e., the stop signal task, the go/no-go task, and commission errors on the continuous performance task), and inattention (i.e., attention lapses on a simple reaction time task and omission errors on the continuous performance task). Healthy adults (n = 128) performed the battery on two separate occasions. Reliability estimates for the individual tasks ranged from moderate to high, with Pearson correlations within the specific impulsivity domains as follows: impulsive choice (r range: .76-.89, ps < .001); impulsive action (r range: .65-.73, ps < .001); and inattention (r range: .38-.42, ps < .001). Additionally, the influence of day-to-day fluctuations in mood, as measured by the Profile of Mood States, was assessed in relation to variability in performance on each of the behavioral tasks. Change in performance on the delay discounting task was significantly associated with change in positive mood and arousal. No other behavioral measures were significantly associated with mood. In sum, the current analysis demonstrates that behavioral measures of impulsivity are reliable measures and thus can be confidently used to assess various facets of impulsivity as intermediate phenotypes for drug abuse.

  18. Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.

    PubMed

    Jalayer, Mohammad; Zhou, Huaguo

    2016-08-01

    The severity of roadway departure crashes mainly depends on the roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include: cross section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) to quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining the reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4,500 300-ft roadway segments. Based on the obtained results, we confirm that reliability indices can serve as indicators to gauge safety levels, such that the greater the reliability index value, the lower the ROR crash rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
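
    A reliability index of this kind can be sketched by Monte Carlo sampling: treat clear zone width and sideslope as continuous random variables and count how often a segment violates a limit state. All distributions and thresholds below are assumptions for illustration, not the paper's values or method:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 100_000
clear_zone = rng.normal(20.0, 5.0, n)   # clear zone width, ft (assumed)
sideslope = rng.normal(4.0, 1.0, n)     # sideslope ratio H:V (assumed)

# Assumed limit state: a segment is "unsafe" if the clear zone is under
# 15 ft or the sideslope is steeper than 3:1 (illustrative thresholds).
g = np.minimum(clear_zone - 15.0, sideslope - 3.0)
p_fail = float(np.mean(g < 0.0))

# Reliability index beta from the failure probability via the standard
# normal inverse CDF: larger beta means a safer segment population.
beta = -NormalDist().inv_cdf(p_fail)
```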

  19. Test-retest reliability of behavioral measures of impulsive choice, impulsive action, and inattention

    PubMed Central

    Weafer, Jessica; Baggott, Matthew J.; de Wit, Harriet

    2014-01-01

    Behavioral measures of impulsivity are widely used in substance abuse research, yet relatively little attention has been devoted to establishing their psychometric properties, especially their reliability over repeated administration. The current study examined the test-retest reliability of a battery of standardized behavioral impulsivity tasks, including measures of impulsive choice (delay discounting, probability discounting, and the Balloon Analogue Risk Task), impulsive action (the stop signal task, the go/no-go task, and commission errors on the continuous performance task), and inattention (attention lapses on a simple reaction time task and omission errors on the continuous performance task). Healthy adults (n=128) performed the battery on two separate occasions. Reliability estimates for the individual tasks ranged from moderate to high, with Pearson correlations within the specific impulsivity domains as follows: impulsive choice (r = .76 - .89, ps < .001); impulsive action (r = .65 - .73, ps < .001); and inattention (r = .38-.42, ps < .001). Additionally, the influence of day-to-day fluctuations in mood as measured by the Profile of Mood States was assessed in relation to variability in performance on each of the behavioral tasks. Change in performance on the delay discounting task was significantly associated with change in positive mood and arousal. No other behavioral measures were significantly associated with mood. In sum, the current analysis demonstrates that behavioral measures of impulsivity are reliable measures and thus can be confidently used to assess various facets of impulsivity as intermediate phenotypes for drug abuse. PMID:24099351

  20. Rooftop applications

    NASA Technical Reports Server (NTRS)

    Kern, E.

    1982-01-01

    Research on residential photovoltaic power systems based upon the experience of MIT-LL in implementing the DOE Residential Demonstration Project, especially the Northeast Residential Experiment Station (NE RES), is discussed. There is an immediate need for improved power-conditioner operational and reliability capabilities. Continuing evaluation of photovoltaic power systems is required to verify long-term performance, reliability, and utility interface effects. In the long term, the price of photovoltaic power systems must decrease, especially that of modules.

  1. A Reliability Generalization Study on the Survey of Perceived Organizational Support: The Effects of Mean Age and Number of Items on Score Reliability

    ERIC Educational Resources Information Center

    Hellman, Chan M.; Fuqua, Dale R.; Worley, Jody

    2006-01-01

    The Survey of Perceived Organizational Support (SPOS) is a unidimensional measure of the general belief held by an employee that the organization is committed to him or her, values his or her continued membership, and is generally concerned about the employee's well-being. In the interest of efficiency, researchers are often compelled to use a…

  2. 3-Dimensional Root Cause Diagnosis via Co-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ziming; Lan, Zhiling; Yu, Li

    2012-01-01

    With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, a RAS log contains only limited diagnosis information. Moreover, manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.

  3. Space solar array reliability: A study and recommendations

    NASA Astrophysics Data System (ADS)

    Brandhorst, Henry W., Jr.; Rodiek, Julie A.

    2008-12-01

    Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded, with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaims Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.

  4. MaRS Project

    NASA Technical Reports Server (NTRS)

    Aruljothi, Arunvenkatesh

    2016-01-01

    The Space Exploration Division of the Safety and Mission Assurances Directorate is responsible for reducing the risk to Human Space Flight Programs by providing system safety, reliability, and risk analysis. The Risk & Reliability Analysis branch plays a part in this by utilizing Probabilistic Risk Assessment (PRA) and Reliability and Maintainability (R&M) tools to identify possible types of failure and effective solutions. A continuous effort of this branch is MaRS, or Mass and Reliability System, a tool that was the focus of this internship. Future long-duration space missions will have to find a balance between the mass and reliability of their spare parts. They will be unable to take spares of everything and will have to determine what is most likely to require maintenance and spares. Currently there is no database that combines mass and reliability data of low-level space-grade components. MaRS aims to be the first database to do this. The data in MaRS will be based on the hardware flown on the International Space Station (ISS). The components on the ISS have a long history and are well documented, making them the perfect source. Currently, MaRS is a functioning Excel workbook database; the backend is complete and only requires optimization. MaRS has been populated with all the assemblies and their components that are used on the ISS; the failures of these components are updated regularly. This project was a continuation of the efforts of previous intern groups. Once complete, R&M engineers working on future space flight missions will be able to quickly access failure and mass data on assemblies and components, allowing them to make important decisions and tradeoffs.

  5. The reliability of multistory buildings with the effect of non-uniform settlements of foundation

    NASA Astrophysics Data System (ADS)

    Al'Malul, Rafik; Gadzhuntsev, Michail

    2018-03-01

    The issue considered is the evaluation of the reliability of structures under the influence of varying support settlements, which change during the lifetime of a structure due to the consolidation process in the ground. Recently, specialists have placed special emphasis on the need to develop methods for estimating the reliability and durability of structures. The problem the article considers is the determination of the reliability of multistory buildings with non-uniform, time-varying settlements caused by the consolidation process in soils. Failure of structures may occur before the settlement reaches its stabilized value, because the conditions of normal use are violated.

  6. Gearbox Reliability Collaborative Phase 3 Gearbox 3 Test

    DOE Data Explorer

    Keller, Jonathan (ORCID:0000000177243885)

    2016-12-28

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This test data describes the tests of GRC gearbox 3 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpret the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and Solidworks software are required to view the data files.

  7. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test

    DOE Data Explorer

    Keller, Jonathan; Robb, Wallen

    2016-05-12

    The National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) was established by the U.S. Department of Energy in 2006; its key goal is to understand the root causes of premature gearbox failures and improve their reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, disseminating data and results to the industry and facilitating improvement of gearbox reliability. This test data describes the tests of GRC gearbox 2 in the National Wind Technology Center dynamometer and documents any modifications to the original test plan. It serves as a guide to interpret the publicly released data sets, with brief analyses to illustrate the data. TDMS viewer and Solidworks software are required to view the data files.

  8. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
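
    The univariate person-by-task G study described above can be sketched from the ANOVA mean squares of a fully crossed design. A minimal illustration (hypothetical scores, and a simplified design that ignores the stratification the paper investigates):

```python
import numpy as np

def g_study(scores):
    """Univariate G study for a fully crossed person-x-task design.
    scores: (n_persons, n_tasks) array. Returns the person (universe-score)
    variance, the person-by-task residual variance, and the
    generalizability coefficient for a test of n_tasks tasks."""
    n_p, n_t = scores.shape
    grand = scores.mean()
    ms_p = n_t * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    resid = (scores - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True) + grand)
    ms_pt = (resid ** 2).sum() / ((n_p - 1) * (n_t - 1))
    var_p = (ms_p - ms_pt) / n_t          # person variance component
    var_pt = ms_pt                        # person-by-task (+ error) component
    e_rho2 = var_p / (var_p + var_pt / n_t)
    return var_p, var_pt, e_rho2
```

    A large person-by-task component (task specificity) drives the generalizability coefficient down, which is exactly the effect the stratified analysis is meant to partition correctly.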

  9. Availability of information in Public Health on the Internet: An analysis of national health authorities in the Spanish-speaking Latin American and Caribbean countries.

    PubMed

    Novillo-Ortiz, David; Hernández-Pérez, Tony; Saigí-Rubió, Francesc

    2017-04-01

    Access to reliable and quality health information and appropriate medical advice can contribute to a dramatic reduction in the mortality figures of countries. The governments of the Americas are faced with the opportunity to continue working on this challenge, and their institutional presence on their websites should play a key role in this task. In a setting where access to information is essential to both health professionals and citizens, it is relevant to analyze the role of national health authorities. Given that search engines play such a key role in access to health information, it is important to know specifically - in connection with national health authorities - whether the health information offered is easily available to the population, and whether this information is well ranked in search engines. Quantitative methods were used to gather data on the institutional presence of national health authorities on the web. Exploratory and descriptive research served to analyze and interpret the data and information obtained quantitatively from different perspectives, including an analysis by country and by leading causes of death. A total of 18 web pages were analyzed. Information on leading causes of death was searched on the websites of national health authorities in the week of August 10-14, 2015. The probability of finding information from national health authorities on the 10 leading causes of death in a country, among the top 10 results on Google, is 6.66%. Additionally, ten out of the 18 countries under study (55%) do not have information ranked among the top results on Google when searching for the selected terms. Furthermore, a total of 33 websites represent the sources of information with the highest visibility on Google for all the search strategies on the ten leading causes of death in each country. 
Two websites, the National Library of Medicine and Wikipedia, appear among the visible results in all eighteen countries of the sample. Taking into consideration that providing reliable and quality information on these topics to the population should be one of the priorities of national health authorities, these results suggest that national health authorities need to take measures to better position their content. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  10. Controlling Quality in CME/CPD by Measuring and Illuminating Bias

    ERIC Educational Resources Information Center

    Dixon, David; Takhar, Jatinder; Macnab, Jennifer; Eadie, Jason; Lockyer, Jocelyn; Stenerson, Heather; Francois, Jose; Bell, Mary; Monette, Celine; Campbell, Craig; Marlow, Bernie

    2011-01-01

    Introduction: There has been a surge of interest in the area of bias in industry-supported continuing medical education/continuing professional development (CME/CPD) activities. In 2007, we published our first study on measuring bias in CME, demonstrating that our assessment tool was valid and reliable. In light of the increasing interest in this…

  11. Assessing the Culture and Climate for Quality Improvement in the Work Environment. AIR 1994 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Cameron, Kim; And Others

    This study attempted to develop a reliable and valid instrument for assessing work environment and continuous quality improvement efforts in the non-academic sectors of colleges and universities particularly those institutions who have adopted Total Quality Management programs. A model of a work environment for continuous quality improvement was…

  12. Emotional-volitional components of operator reliability. [sensorimotor function testing under stress

    NASA Technical Reports Server (NTRS)

    Mileryan, Y. A.

    1975-01-01

    Sensorimotor function testing in a tracking task under stressful working conditions established a psychological characterization of the successful aviation pilot: motivation significantly increased the reliability and effectiveness of their work. Their activities were aimed at suppressing weariness and the feeling of fear caused by the stress factors; they showed patience, endurance, persistence, and a capacity for lengthy volitional efforts.

  13. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized, and the cost of the fault tolerant configurations, can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  14. A rater training protocol to assess team performance.

    PubMed

    Eppich, Walter; Nannicelli, Anna P; Seivert, Nicholas P; Sohn, Min-Woong; Rozenfeld, Ranna; Woods, Donna M; Holl, Jane L

    2015-01-01

    Simulation-based methodologies are increasingly used to assess teamwork and communication skills and provide team training. Formative feedback regarding team performance is an essential component. While effective use of simulation for assessment or training requires accurate rating of team performance, examples of rater-training programs in health care are scarce. We describe our rater training program and report interrater reliability during phases of training and independent rating. We selected an assessment tool shown to yield valid and reliable results and developed a rater training protocol with an accompanying rater training handbook. The rater training program was modeled after previously described high-stakes assessments in the setting of 3 facilitated training sessions. Adjacent agreement was used to measure interrater reliability between raters. Nine raters with a background in health care and/or patient safety evaluated team performance of 42 in-situ simulations using post-hoc video review. Adjacent agreement increased from the second training session (83.6%) to the third training session (85.6%) when evaluating the same video segments. Adjacent agreement for the rating of overall team performance was 78.3%, which was added for the third training session. Adjacent agreement was 97% 4 weeks posttraining and 90.6% at the end of independent rating of all simulation videos. Rater training is an important element in team performance assessment, and providing examples of rater training programs is essential. Articulating key rating anchors promotes adequate interrater reliability. In addition, using adjacent agreement as a measure allows differentiation between high- and low-performing teams on video review. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.
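    Adjacent agreement, the interrater statistic reported above, counts rating pairs that fall within one scale point of each other. A minimal sketch, using hypothetical ratings rather than data from the study:

    ```python
    def adjacent_agreement(ratings_a, ratings_b, tolerance=1):
        """Fraction of rating pairs differing by at most `tolerance` scale points."""
        if len(ratings_a) != len(ratings_b):
            raise ValueError("rating lists must be the same length")
        hits = sum(1 for a, b in zip(ratings_a, ratings_b) if abs(a - b) <= tolerance)
        return hits / len(ratings_a)

    # Two hypothetical raters scoring eight video segments on a 1-5 scale.
    rater1 = [3, 4, 2, 5, 1, 4, 3, 2]
    rater2 = [3, 5, 2, 3, 2, 4, 4, 2]
    print(f"{adjacent_agreement(rater1, rater2):.1%}")  # 87.5%
    ```

    Unlike exact agreement, this measure tolerates one-point disagreements, which is why it is common for behaviorally anchored team-performance scales.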

  15. Development and validation of a trustworthy multisource feedback instrument to support nurse appraisals.

    PubMed

    Crossley, James G M

    2015-01-01

    Nurse appraisal is well established in the Western world because of its obvious educational advantages. Appraisal works best with many sources of information on performance. Multisource feedback (MSF) is widely used in business and in other clinical disciplines to provide such information. It has also been incorporated into nursing appraisals, but, so far, none of the instruments in use for nurses has been validated. We set out to develop an instrument aligned with the UK Knowledge and Skills Framework (KSF) and to evaluate its reliability and feasibility across a wide hospital-based nursing population. The KSF framework provided a content template. Focus groups developed an instrument based on consensus. The instrument was administered to all the nursing staff in 2 large NHS hospitals forming a single trust in London, England. We used generalizability analysis to estimate reliability, response rates and unstructured interviews to evaluate feasibility, and factor structure and correlation studies to evaluate validity. On a voluntary basis the response rate was moderate (60%). A failure to engage with information technology and employment-related concerns were commonly cited as reasons for not responding. In this population, 11 responses provided a profile with sufficient reliability to inform appraisal (G = 0.7). Performance on the instrument was closely and significantly correlated with performance on a KSF questionnaire. This is the first contemporary psychometric evaluation of an MSF instrument for nurses. MSF appears to be as valid and reliable as an assessment method to inform appraisal in nurses as it is in other health professional groups. © 2015 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.

  16. Optical Coherence Tomography in Kidney Transplantation

    NASA Astrophysics Data System (ADS)

    Andrews, Peter M.; Wierwille, Jeremiah; Chen, Yu

    End-stage renal disease (ESRD) is associated with both high mortality rates and an enormous economic burden [1]. The preferred treatment option for ESRD that can extend patients' lives and improve their quality of life is kidney transplantation. However, organ shortages continue to pose a major problem in kidney transplantation. Most kidneys for transplantation come from heart-beating cadavers. Although non-heart-beating cadavers represent a potentially large pool of donor kidneys, these kidneys are not often used due to the unknown extent of damage to the renal tubules (i.e., acute tubular necrosis or "ATN") induced by ischemia (i.e., lack of blood flow). Also, ischemic insult suffered by kidneys awaiting transplantation frequently causes ATN that leads to varying degrees of delayed graft function (DGF) after transplantation. Finally, ATN represents a significant risk for eventual graft and patient survival [2, 3] and can be difficult to discern from rejection. In present clinical practice, there is no reliable real-time test to determine the viability of donor kidneys and whether or not donor kidneys might exhibit ATN. Therefore, there is a critical need for an objective and reliable real-time test to predict ATN to use these organs safely and utilize the donor pool optimally. In this review, we provided preliminary data indicating that OCT can be used to predict the post-transplant function of kidneys used in transplantation.

  17. Excellent Resistive Switching Performance of Cu-Se-Based Atomic Switch Using Lanthanide Metal Nanolayer at the Cu-Se/Al2O3 Interface.

    PubMed

    Woo, Hyunsuk; Vishwanath, Sujaya Kumar; Jeon, Sanghun

    2018-03-07

    The next-generation electronic society is dependent on the performance of nonvolatile memory devices, which has been continuously improving. In the last few years, many memory devices have been introduced. However, atomic switches are considered to be a simple and reliable basis for next-generation nonvolatile devices. In general, atomic-switch-based resistive switching is controlled by electrochemical metallization. However, excess ion injection from the entire area of the active electrode into the switching layer causes device nonuniformity and degradation of reliability. Here, we propose the fabrication of a high-performance atomic switch based on CuxSe1-x by inserting lanthanide (Ln) metal buffer layers such as neodymium (Nd), samarium (Sm), dysprosium (Dy), or lutetium (Lu) between the active metal layer and the electrolyte. Current-atomic force microscopy results confirm that Cu ions penetrate through the Ln buffer layer and form thin conductive filaments inside the switching layer. Compared with the Pt/CuxSe1-x/Al2O3/Pt device, the optimized Pt/CuxSe1-x/Ln/Al2O3/Pt devices show improvement in the on/off resistance ratio (10^2-10^7), retention (10 years at 85 °C), endurance (~10,000 cycles), and uniformity of the resistance state distribution.

  18. Comparison of methodologies estimating emissions of aircraft pollutants, environmental impact assessment around airports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniawan, Jermanto S., E-mail: Jermanto.kurniawan@inrets.fr; Khardi, S., E-mail: Salah.khardi@inrets.f

    2011-04-15

    Air transportation growth has increased continuously over the years. The rise in air transport activity has been accompanied by an increase in the amount of energy used to provide air transportation services. It is also assumed to increase environmental impacts, in particular pollutant emissions. Traditionally, the environmental impacts of atmospheric emissions from aircraft have been addressed in two separate ways: aircraft pollutant emissions occurring during the landing and take-off (LTO) phase (local pollutant emissions), which is the focus of this study, and the non-LTO phase (global/regional pollutant emissions). Aircraft pollutant emissions are an important source of pollution and directly or indirectly harm human health, ecosystems, and cultural heritage. There are many methods to assess pollutant emissions used by various countries. However, using different and separate methodologies will cause variation in results and some lack of information, and the use of certain methods will require justification and reliability that must be demonstrated and proven. In relation to this issue, this paper presents identification, comparison, and reviews of some of the methodologies of aircraft pollutant assessment from the past, present, and future expectations of some studies and projects focusing on emission factors, fuel consumption, and uncertainty. This paper also provides reliable information on the impacts of aircraft pollutant emissions in short-term and long-term predictions.

  19. Modern Methods of Rail Welding

    NASA Astrophysics Data System (ADS)

    Kozyrev, Nikolay A.; Kozyreva, Olga A.; Usoltsev, Aleksander A.; Kryukov, Roman E.; Shevchenko, Roman A.

    2017-10-01

    Existing methods of rail welding that make it possible to obtain continuous welded rail track are reviewed in this article. Analysis of the existing welding methods allows the issue of continuous rail track to be considered in detail. Metallurgical and welding technologies of rail welding, and also process technologies reducing the aftereffects of temperature exposure, are important factors determining the quality and reliability of the continuous rail track. Analysis of the existing methods of rail welding makes it possible to identify a line of research for solving this problem.

  20. Controversies in pediatric anesthesia: sevoflurane and fluid management.

    PubMed

    Gueli, Sarah L; Lerman, Jerrold

    2013-06-01

    To explore the interrelationships among the pharmacokinetics of sevoflurane, epileptiform electroencephalographic (EEG) activity and awareness in children. To also describe the revised perioperative fluid management strategy espoused by Holliday and Segar and noninvasive measures that may predict who will respond positively to fluid loading. The depth of anesthesia during the early washin period with sevoflurane 8% is one-third less than during halothane. Eight percent sevoflurane rarely causes clinical seizures; more commonly, it causes epileptiform EEG activity that only weakly portends seizure activity. When preceded by nitrous oxide, midazolam or normocapnia, the risk of inducing epileptiform activity during spontaneous respiration is exceedingly small. Decreasing the inspired concentration of sevoflurane upon loss of the eyelash reflex to prevent epileptiform activity has not been shown to reduce the risk of clinical seizures, but more importantly, it may increase the risk of awareness if the child is stimulated. Isotonic intravenous solutions should be infused in volumes of 20-40 ml/kg over 2-4 h in children undergoing elective surgery. Postoperatively, these infusions may be continued at rates of 2/1/0.5 ml/kg/h; serum sodium concentration should be measured periodically. Noninvasive measures currently do not reliably identify those children who will respond positively to fluid boluses. Sevoflurane is a well tolerated induction agent that rarely causes seizures in children, but may cause awareness if the inspired concentration is prematurely reduced. Perioperative isotonic fluids should be infused at 20-40 ml/kg over 2-4 h during elective surgery. Noninvasive metrics do not predict a child's responsiveness to fluid loading.

  1. TD-LTE Wireless Private Network QoS Transmission Protection

    NASA Astrophysics Data System (ADS)

    Zhang, Jianming; Cheng, Chao; Wu, Zanhong

    With the commencement of smart grid construction, the power business's demands for reliability and security continue to grow, and the reliable transmission of the power TD-LTE wireless private network is receiving more and more attention. A TD-LTE power private network can provide different QoS services according to the user's business type, to protect the reliable transmission of that business. This article describes in detail the AF module of PCC in the EPC network, specifically introducing the setup of the AF module and the QoS mechanisms in the EPS bearer, fully considering the business characteristics of the special power network, establishing a suitable architecture for mapping QoS parameters, and ensuring the implementation of each QoS business. Through radio bearer management, we can achieve the reliable transmission of each business on the physical channel.

  2. FLUIDIC: Metal Air Recharged

    ScienceCinema

    Friesen, Cody

    2018-02-14

    Fluidic, with the help of ARPA-E funding, has developed and deployed the world's first proven high cycle life metal air battery. Metal air technology, often used in smaller scale devices like hearing aids, has the lowest cost per electron of any rechargeable battery storage in existence. Deploying these batteries for grid reliability is competitive with pumped hydro installations while having the advantages of a small footprint. Fluidic's battery technology allows utilities and other end users to store intermittent energy generated from solar and wind, as well as maintain reliable electrical delivery during power outages. The batteries are manufactured in the US and currently deployed to customers in emerging markets for cell tower reliability. As they continue to add customers, they've gained experience and real world data that will soon be leveraged for US grid reliability.

  3. Managing Reliability in the 21st Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dellin, T.A.

    1998-11-23

    The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance, and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization's business objectives.

  4. Study on safety level of RC beam bridges under earthquake

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Lin, Junqi; Liu, Jinlong; Li, Jia

    2017-08-01

    Based on reliability theory, this study considers uncertainties in material strengths and in modeling, which have important effects on structural resistance. After analyzing the failure mechanism of an RC bridge, structural functions and the reliability were given; then the safety level of the piers of a reinforced concrete continuous girder bridge with stochastic structural parameters against earthquake was analyzed. Using the response surface method to calculate the failure probabilities of bridge piers under a high-level earthquake, their seismic reliability for different damage states within the design reference period was calculated applying two-stage design, which describes the seismic safety level of the built bridges to some extent.
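    The quantity at the core of such an analysis is a failure probability P(R - S < 0) for structural resistance R and seismic demand S. A crude Monte Carlo sketch, with invented normal distributions standing in for the study's response-surface model:

    ```python
    import random

    def failure_probability(n_samples=100_000, seed=42):
        """Monte Carlo estimate of P(R - S < 0) for one limit state.
        The capacity and demand distributions are purely illustrative."""
        rng = random.Random(seed)
        failures = 0
        for _ in range(n_samples):
            resistance = rng.gauss(12.0, 1.5)  # hypothetical pier capacity
            demand = rng.gauss(8.0, 2.0)       # hypothetical earthquake demand
            if resistance - demand < 0:
                failures += 1
        return failures / n_samples

    pf = failure_probability()
    # Analytically, R - S ~ N(4, 2.5), so P(failure) = Phi(-1.6), roughly 0.055.
    ```

    A response surface method replaces the expensive structural model with a fitted polynomial before sampling; the sampling step itself looks like the loop above.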

  5. Modelling utility-scale wind power plants. Part 1: Economics

    NASA Astrophysics Data System (ADS)

    Milligan, Michael R.

    1999-10-01

    As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.

  6. Prospective study of one million deaths in India: rationale, design, and validation results.

    PubMed

    Jha, Prabhat; Gajalakshmi, Vendhan; Gupta, Prakash C; Kumar, Rajesh; Mony, Prem; Dhingra, Neeraj; Peto, Richard

    2006-02-01

    Over 75% of the annual estimated 9.5 million deaths in India occur in the home, and the large majority of these do not have a certified cause. India and other developing countries urgently need reliable quantification of the causes of death. They also need better epidemiological evidence about the relevance of physical (such as blood pressure and obesity), behavioral (such as smoking, alcohol, HIV-1 risk taking, and immunization history), and biological (such as blood lipids and gene polymorphisms) measurements to the development of disease in individuals or disease rates in populations. We report here on the rationale, design, and implementation of the world's largest prospective study of the causes and correlates of mortality. We will monitor nearly 14 million people in 2.4 million nationally representative Indian households (6.3 million people in 1.1 million households in the 1998-2003 sample frame and 7.6 million people in 1.3 million households in the 2004-2014 sample frame) for vital status and, if dead, the causes of death through a well-validated verbal autopsy (VA) instrument. About 300,000 deaths from 1998-2003 and some 700,000 deaths from 2004-2014 are expected; of these about 850,000 will be coded by two physicians to provide causes of death by gender, age, socioeconomic status, and geographical region. Pilot studies will evaluate the addition of physical and biological measurements, specifically dried blood spots. Preliminary results from over 35,000 deaths suggest that VA can ascertain the leading causes of death, reduce the misclassification of causes, and derive the probable underlying cause of death when it has not been reported. VA yields broad classification of the underlying causes in about 90% of deaths before age 70. In old age, however, the proportion of classifiable deaths is lower. By tracking underlying demographic denominators, the study permits quantification of absolute mortality rates. 
Household case-control, proportional mortality, and nested case-control methods permit quantification of risk factors. This study will reliably document not only the underlying cause of child and adult deaths but also key risk factors (behavioral, physical, environmental, and eventually, genetic). It offers a globally replicable model for reliably estimating cause-specific mortality using VA and strengthens India's flagship mortality monitoring system. Despite the misclassification that is still expected, the new cause-of-death data will be substantially better than that available previously.

  7. 42 CFR 401.705 - Eligibility criteria for qualified entities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...

  8. 42 CFR 401.705 - Eligibility criteria for qualified entities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...

  9. 42 CFR 401.705 - Eligibility criteria for qualified entities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (iv) Designing, and continuously improving the format of performance reports on providers and... subpart address the methodological concerns regarding sample size and reliability that have been expressed...

  10. Developing a Continuous Improvement System

    DTIC Science & Technology

    2016-09-16

    disagree that continuous improvement is critical to an organization's success, since conducting business using a status quo philosophy will not work...for implementing one of these processes include: better operational efficiency, increased customer satisfaction, improved employee morale ...when a problem in reliability or maintenance may become the greatest opportunity. As described in the January-February 2011 issue of Defense AT&L

  11. A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications

    PubMed Central

    Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser

    2017-01-01

    In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, is integrated to estimate positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are used to correct the IMU-derived navigation solution through an extended Kalman filter (EKF). Moreover, the map-aiding method and map-matching method are combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded EKF/AVPF filter structure. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS) and provide a stable, continuous, and reliable indoor location service. PMID:28574471

  12. A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications.

    PubMed

    Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser

    2017-06-02

    In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework is proposed. Two-dimensional indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) values, is integrated to estimate positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are used to correct the IMU-derived navigation solution through an extended Kalman filter (EKF). Moreover, the map-aiding method and map-matching method are combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded EKF/AVPF filter structure. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS) and provide a stable, continuous, and reliable indoor location service.
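    The EKF correction idea in such INS/Wi-Fi fusion can be reduced to one dimension for intuition: dead-reckon position with an IMU-derived velocity, then pull the estimate toward each Wi-Fi fix in proportion to the Kalman gain. The noise values and measurements below are hypothetical, and the full system described above additionally uses map constraints and an AVPF:

    ```python
    def kf_step(x, P, u, z, q=0.05, r=4.0, dt=1.0):
        """One predict/correct cycle of a 1-D Kalman filter.
        x, P : position estimate and its variance
        u    : IMU-derived velocity (drifts without correction)
        z    : noisy Wi-Fi position fix
        q, r : assumed process and measurement noise variances"""
        x_pred = x + u * dt            # predict by dead reckoning
        P_pred = P + q
        K = P_pred / (P_pred + r)      # Kalman gain
        x_new = x_pred + K * (z - x_pred)
        P_new = (1 - K) * P_pred       # uncertainty shrinks after each fix
        return x_new, P_new

    x, P = 0.0, 1.0
    for z in [1.1, 2.3, 2.9]:          # simulated Wi-Fi fixes along a corridor
        x, P = kf_step(x, P, u=1.0, z=z)
    ```

    Because r is large relative to P, each Wi-Fi fix nudges rather than overwrites the IMU prediction, which is what suppresses RSSI noise while still bounding IMU drift.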

  13. A Method to Increase Drivers' Trust in Collision Warning Systems Based on Reliability Information of Sensor

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shigeyoshi; Wada, Takahiro; Akita, Tokihiko; Doi, Shun'ichi

    A driver's workload tends to increase under complicated traffic environments such as a lane change. In such cases, rear collision warning is effective for reducing cognitive workload. On the other hand, false alarms or missed alarms caused by sensor errors decrease the driver's trust in the warning system, which can result in low efficiency of the system. Suppose that reliability information for the sensor is provided in real time. In this paper, we propose a new warning method that uses this sensor reliability information to increase the driver's trust in the system even when sensor reliability is low. The effectiveness of the warning method is shown by driving simulator experiments.

  14. Validity and Reliability of the 30-s Continuous Jump for Anaerobic Power and Capacity Assessment in Combat Sport

    PubMed Central

    Čular, Drazen; Ivančev, Vladimir; Zagatto, Alessandro M.; Milić, Mirjana; Beslija, Tea; Sellami, Maha; Padulo, Johnny

    2018-01-01

    Cycling tests such as the Wingate anaerobic test (WAnT) are used to measure anaerobic power (AP), but not anaerobic capacity (AC, i.e., the metabolic energy demand). However, in sports that do not involve cycling movements (e.g., karate), the continuous jump test for 30 s (vertical jumps for 30 s) has been extensively used to measure anaerobic performance in young athletes. Limited information is available concerning its validity and reliability, especially in children. As such, the current study aimed to test the validity and reliability of a continuous jump test (CJ30s), using the WAnT as a reference. Thirteen female karate kids (age: 11.07 ± 1.32 years; mass: 41.76 ± 15.32 kg; height: 152 ± 11.52 cm; training experience: 4.38 ± 2.14 years) were tested in three separate sessions. The first and second sessions were used to assess the reliability of the CJ30s using the intra-class correlation coefficient (ICC), whereas in the third session the WAnT was administered. Following the CJ30s and WAnT, we assessed AP (1/CJ30s, as jump height [JH], fatigue index [FI], and blood lactate [BL]; 2/WAnT, as mechanical power [P], FI, and BL) and AC as the excess post-exercise oxygen consumption (EPOC). Large/highly significant correlations were found between CJ30s and WAnT EPOCs (r = 0.730, P = 0.003) and BLs (r = 0.713, P = 0.009). Moderate/significant correlations were found between CJ30s and WAnT FIs (r = 0.640, P = 0.014), the mean JH of the first four CJ30s jumps and WAnT peak P (r = 0.572, P = 0.032), and CJ30s mean JH and WAnT mean P (r = 0.589, P = 0.021). The CJ30s showed excellent and moderate reliability (ICC) for AP (maximal JH 0.884, mean JH 0.742, FI 0.657, BL 0.653) and AC (EPOC 0.788), respectively. The correlations observed between the CJ30s and WAnT, especially in terms of AC, provide evidence that the former may adequately assess anaerobic performance in young combat athletes. The CJ30s is a reliable test and allows an easy assessment of AP and AC in karate children. PMID:29867580
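
    The two statistics this record leans on, Pearson r for concurrent validity and a two-way ICC for test-retest reliability, can be sketched on invented data (the jump heights and power values below are made up for illustration; the study's actual ICC form is not specified here, so ICC(3,1) is an assumption):

```python
import numpy as np

# Simulated test-retest + criterion data for 13 athletes (invented values).
rng = np.random.default_rng(1)
true_score = rng.normal(30.0, 5.0, size=13)            # "true" jump height
session1 = true_score + rng.normal(0.0, 1.5, size=13)  # CJ30s, day 1
session2 = true_score + rng.normal(0.0, 1.5, size=13)  # CJ30s, day 2
want_power = 8.0 * true_score + rng.normal(0.0, 40.0, size=13)  # WAnT power

def icc_consistency(x, y):
    """ICC(3,1): two-way mixed model, consistency, single measurement."""
    data = np.column_stack([x, y])
    n, k = data.shape
    subj_means = data.mean(axis=1)
    sess_means = data.mean(axis=0)
    grand = data.mean()
    ms_subj = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_err = (np.sum((data - subj_means[:, None]
                      - sess_means[None, :] + grand) ** 2)
              / ((n - 1) * (k - 1)))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

icc_jh = icc_consistency(session1, session2)          # test-retest reliability
r_validity = np.corrcoef(session1, want_power)[0, 1]  # concurrent validity
```

    High between-subject variance relative to session-to-session noise drives the ICC toward 1, which is the pattern the study reports for maximal jump height.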

  15. The Silent Revolution Continues.

    ERIC Educational Resources Information Center

    Perlin, John

    2001-01-01

    Discusses the reliability and versatility of using photovoltaics whereby solar cells convert sunlight directly into electricity. The growing concern of global warming promises to transform photovoltaics into a major energy producer. (Author/SAH)

  16. 40 CFR 64.3 - Monitoring design criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... that are adequate to ensure the continuing validity of the data. The owner or operator shall consider... and control device operational variability, the reliability and latitude built into the control...

  17. The T 380A intrauterine device: a retrospective 5-year evaluation.

    PubMed

    de Araujo, Fabio Fernando; Barbieri, Márcia; Guazzelli, Cristina Aparecida Falho; Lindsey, Prescilla Chow

    2008-12-01

    The undue resistance to intrauterine device (IUD) use seen in several settings does not seem to occur in the Family Planning Unit of UNIFESP-EPM (São Paulo Federal University, Brazil). In fact, the Copper T 380A IUD has attained outstanding importance in this clinic, which motivated us to present our differing experience. The prevalence of this method in this clinic is as high as 40%. This contrasts with its low use in the rest of the country, where tubal ligation is by far the most used contraceptive method (40%) and IUD use is negligible (1.1%). This is a retrospective study of the records of 118 users of the Copper T 380A IUD inserted at the clinic who were followed for 5 years. The cumulative pregnancy rate was 0.8%. The main cause of discontinuation in the study was loss to follow-up (21.3%). Other reasons for withdrawal of the device were personal choice (13.6%), dislocation (11.7%), and desire for pregnancy (3.4%). There were no withdrawals due to pelvic inflammatory disease. Bleeding (0.8%) was not an important cause of withdrawal, and there were no withdrawals due to pain. The continuation rate at 5 years was 46.7%. The structured service and an adequate educational program may at least partially explain the good performance of IUD use in this clinic. There was a striking prevalence of the components of the metabolic syndrome, which could represent contraindications for hormonal contraception and, in consequence, could have increased the choice of, and continuation with, the IUD. These data show a good long-term performance of the IUD relative to other studies, and the IUD should be considered a reliable alternative to the high prevalence of female sterilization in this country.

  18. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems, and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures among the assigned resources, affecting network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm, called the hybrid GA-TS algorithm (HGTA), that integrates the genetic algorithm and tabu search to determine the optimal assignment, and combines minimal paths, the recursive sum of disjoint products, and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments demonstrate that HGTA has better computational quality than several popular soft-computing algorithms.
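
    The reliability notion used here, the probability that a stochastic-flow network can carry a given demand, can be shown on a deliberately tiny example: two parallel arcs whose capacities are random and, for simplicity, independent (the paper's correlated-failure model and GA-TS search are beyond this sketch; all probabilities are invented):

```python
import itertools

# Per-arc capacity distribution: capacity -> probability (invented values).
probs = {0: 0.1, 1: 0.3, 2: 0.6}
demand = 2

# Enumerate every joint capacity state of the two parallel arcs and sum
# the probability of the states whose max flow meets the demand.
reliability = 0.0
for c1, c2 in itertools.product(probs, repeat=2):
    if c1 + c2 >= demand:            # parallel arcs: capacities add
        reliability += probs[c1] * probs[c2]

# Only (0,0), (0,1) and (1,0) fail: 0.01 + 0.03 + 0.03 = 0.07,
# so reliability = 0.93.
```

    Real SFNs make this enumeration intractable, which is why the paper evaluates reliability through minimal paths and sums of disjoint products instead of brute force.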

  19. Achieving Reliable Communication in Dynamic Emergency Responses

    PubMed Central

    Chipara, Octav; Plymoth, Anders N.; Liu, Fang; Huang, Ricky; Evans, Brian; Johansson, Per; Rao, Ramesh; Griswold, William G.

    2011-01-01

    Emergency responses require the coordination of first responders to assess the condition of victims, stabilize their condition, and transport them to hospitals based on the severity of their injuries. WIISARD is a system designed to facilitate the collection of medical information and its reliable dissemination during emergency responses. A key challenge in WIISARD is to deliver data with high reliability as first responders move and operate in a dynamic radio environment fraught with frequent network disconnections. The initial WIISARD system employed a client-server architecture and an ad-hoc routing protocol was used to exchange data. The system had low reliability when deployed during emergency drills. In this paper, we identify the underlying causes of unreliability and propose a novel peer-to-peer architecture that in combination with a gossip-based communication protocol achieves high reliability. Empirical studies show that compared to the initial WIISARD system, the redesigned system improves reliability by as much as 37% while reducing the number of transmitted packets by 23%. PMID:22195075
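
    The gossip-based dissemination idea behind the redesigned system can be illustrated with a toy push-gossip round loop: each round, every informed node pushes the record to a few random peers, so data spreads despite lossy links. The node count, fanout, and loss rate below are invented for illustration; this is not WIISARD code.

```python
import random

random.seed(42)
n, fanout, loss = 30, 3, 0.3    # responders, peers per round, link-loss prob
informed = {0}                   # responder 0 holds a new medical record

rounds = 0
while len(informed) < n and rounds < 50:
    newly = set()
    for node in informed:
        # Push to `fanout` random peers; each transmission may be lost.
        for peer in random.sample(range(n), fanout):
            if random.random() > loss:
                newly.add(peer)
    informed |= newly
    rounds += 1
```

    Because every informed node keeps retransmitting, the protocol tolerates individual losses and disconnections far better than a single client-server path, which matches the reliability gain the paper reports.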

  20. Study on Distribution Reliability with Parallel and On-site Distributed Generation Considering Protection Miscoordination and Tie Line

    NASA Astrophysics Data System (ADS)

    Chaitusaney, Surachai; Yokoyama, Akihiko

    In distribution systems, Distributed Generation (DG) is expected to improve system reliability by serving as backup generation. However, the DG contribution to fault current may cause the loss of existing protection coordination, e.g., recloser-fuse coordination and breaker-breaker coordination. This problem can drastically degrade system reliability, and it becomes more serious and complicated when there are several DG sources in the system. Hence, this conflict in reliability unavoidably requires a detailed investigation before DG is installed or expanded. A model of the composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied to another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared on the distribution reliability test system RBTS Bus 2.

  1. A rapid inoculation technique for assessing pathogenicity of Fusarium oxysporum f. sp. niveum and F. o. melonis on Cucurbits

    USGS Publications Warehouse

    Freeman, S.; Rodriguez, R.J.

    1993-01-01

    A continuous-dip inoculation technique for rapid assessment of pathogenicity of Fusarium oxysporum f. sp. niveum and F. o. melonis was developed. The method, adapted from a similar procedure for determining pathogenicity of Colletotrichum magna (causal agent of anthracnose of cucurbits), involves constant exposure of seedlings and cuttings (seedlings with root systems excised) of watermelon and muskmelon to conidial suspensions contained in small scintillation vials. Disease development in intact seedlings corresponded well to disease responses observed with the standard root-dip inoculation/pot assay. The continuous-dip inoculation technique resulted in rapid disease development, with 50% of watermelon cuttings dying after 4–6 days of exposure to F. o. niveum. A mortality of 30% also was observed in watermelon cuttings exposed to conidia of F. o. melonis, as opposed to only a 0–2.5% mortality in seedlings with intact roots. Disease response was similar with muskmelon seedlings and cuttings continuously dip-inoculated with F. o. melonis isolates. However, no disease symptoms were observed in muskmelon seedlings or cuttings inoculated with F. o. niveum. Four nonpathogenic isolates of F. oxysporum did not cause disease symptoms in either watermelon or muskmelon cuttings and seedlings when assayed by this technique. The proposed method enables a rapid screening of pathogenicity and requires less time, labor, and greenhouse space than the standard root-dip inoculation/pot assay. The reliability of the continuous-dip inoculation technique is limited, however, to exposure of intact seedlings at a concentration of 1 × 10⁶ conidia per milliliter; the method is not accurate at this range for excised seedlings.

  2. Moving stimuli are less effectively masked using traditional continuous flash suppression (CFS) compared to a moving Mondrian mask (MMM): a test case for feature-selective suppression and retinotopic adaptation.

    PubMed

    Moors, Pieter; Wagemans, Johan; de-Wit, Lee

    2014-01-01

    Continuous flash suppression (CFS) is a powerful interocular suppression technique, which is often described as an effective means to reliably suppress stimuli from visual awareness. Suppression through CFS has been assumed to depend upon a reduction in (retinotopically specific) neural adaptation caused by the continual updating of the contents of the visual input to one eye. In this study, we started from the observation that suppressing a moving stimulus through CFS appeared to be more effective when using a mask that was actually more prone to retinotopically specific neural adaptation, but in which the properties of the mask were more similar to those of the to-be-suppressed stimulus. In two experiments, we find that using a moving Mondrian mask (i.e., one that includes motion) is more effective in suppressing a moving stimulus than a regular CFS mask. The observed pattern of results cannot be explained by a simple simulation that computes the degree of retinotopically specific neural adaptation over time, suggesting that this kind of neural adaptation does not play a large role in predicting the differences between conditions in this context. We also find some evidence consistent with the idea that the most effective CFS mask is the one that matches the properties (speed) of the suppressed stimulus. These results question the general importance of retinotopically specific neural adaptation in CFS, and potentially help to explain an implicit trend in the literature to adapt one's CFS mask to match one's to-be-suppressed stimuli. Finally, the results should help to guide the methodological development of future research where continuous suppression of moving stimuli is desired.

  3. Proceedings of the First International Summit on Intestinal Anastomotic Leak, Chicago, Illinois, October 4–5, 2012

    PubMed Central

    Shogan, Benjamin D.; An, Gary C.; Schardey, Hans M.; Matthews, Jeffrey B.; Umanskiy, Konstantin; Fleshman, James W.; Hoeppner, Jens; Fry, Donald E.; Garcia-Granereo, Eduardo; Jeekel, Hans; van Goor, Harry; Dellinger, E. Patchen; Konda, Vani; Gilbert, Jack A.; Auner, Gregory W.

    2014-01-01

    Abstract Objective: The first international summit on anastomotic leak was held in Chicago in October 2012 to assess current knowledge in the field and develop novel lines of inquiry. The following report is a summary of the proceedings with commentaries and future prospects for clinical trials and laboratory investigations. Background: Anastomotic leakage remains a devastating problem for the patient and a continuing challenge to the surgeon operating on high-risk areas of the gastrointestinal tract such as the esophagus and rectum. Despite the traditional wisdom that anastomotic leak is due to technique, evidence to support this is weak to non-existent. Outcome data continue to demonstrate that expert high-volume surgeons working in high-volume centers continue to experience anastomotic leaks and that surgeons cannot reliably predict which patients will leak. Methods: A one-and-one-half-day summit was held and a small working group assembled to review current practices, opinions, scientific evidence, and potential paths forward to understand and decrease the incidence of anastomotic leak. Results: A survey of the opinions of the group demonstrated that the majority of participants believe that anastomotic leak is a complicated biologic problem whose pathogenesis remains ill-defined. The group opined that anastomotic leak is underreported clinically, that it is not due to technique except in cases of gross inattention, and that results from animal models are mostly irrelevant to the human condition. Conclusions: A fresh and unbiased examination of the causes of anastomotic leak and strategies for its prevention needs to be undertaken by a continuous working group of surgeons, basic scientists, and clinical trialists to realize a real and significant reduction in its incidence and morbidity. Such a path forward is discussed. PMID:25215465

  4. The development of a quality appraisal tool for studies of diagnostic reliability (QAREL).

    PubMed

    Lucas, Nicholas P; Macaskill, Petra; Irwig, Les; Bogduk, Nikolai

    2010-08-01

    In systematic reviews of the reliability of diagnostic tests, no quality assessment tool has been used consistently. The aim of this study was to develop a specific quality appraisal tool for studies of diagnostic reliability. Key principles for the quality of studies of diagnostic reliability were identified with reference to epidemiologic principles, existing quality appraisal checklists, and the Standards for Reporting of Diagnostic Accuracy (STARD) and Quality Assessment of Diagnostic Accuracy Studies (QUADAS) resources. Specific items that encompassed each of the principles were developed. Experts in diagnostic research provided feedback on the items that were to form the appraisal tool. This process was iterative and continued until consensus among experts was reached. The Quality Appraisal of Reliability Studies (QAREL) checklist includes 11 items that explore seven principles. Items cover the spectrum of subjects, spectrum of examiners, examiner blinding, order effects of examination, suitability of the time interval among repeated measurements, appropriate test application and interpretation, and appropriate statistical analysis. QAREL has been developed as a specific quality appraisal tool for studies of diagnostic reliability. The reliability of this tool in different contexts needs to be evaluated. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  5. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  6. Unreliable Yet Still Replicable: A Comment on LeBel and Paunonen (2011)

    PubMed Central

    De Schryver, Maarten; Hughes, Sean; Rosseel, Yves; De Houwer, Jan

    2016-01-01

    LeBel and Paunonen (2011) highlight that, despite their importance and popularity in both theoretical and applied research, many implicit measures continue to be plagued by a persistent and troublesome issue: low reliability. In their paper, they offer a conceptual analysis of the relationship between reliability, power, and replicability, and then provide a series of recommendations for researchers interested in using implicit measures in an experimental setting. At the core of their account is the idea that reliability can be equated with statistical power, such that “lower levels of reliability are associated with decreasing probabilities of detecting a statistically significant effect, given one exists in the population” (p. 573). They also take the additional step of equating reliability and replicability. In our commentary, we draw attention to the fact that there is no direct, fixed, or one-to-one relation between reliability and power or replicability. More specifically, we argue that when adopting an experimental (rather than a correlational) approach, researchers strive to minimize inter-individual variation, which has a direct impact on sample-based reliability estimates. We evaluate the strengths and weaknesses of LeBel and Paunonen's recommendations and refine them where appropriate. PMID:26793150
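
    The correlational side of this debate rests on the classic Spearman attenuation formula: with measure reliabilities rel_x and rel_y, the observed correlation shrinks to r_true · sqrt(rel_x · rel_y). A numerical check on simulated data (all values here, r_true = 0.5 and reliability 0.4, are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
true_x = rng.normal(size=n)
true_y = 0.5 * true_x + rng.normal(0.0, np.sqrt(0.75), size=n)  # r_true = 0.5

rel = 0.4                                  # a "low reliability" measure
noise_sd = np.sqrt((1.0 - rel) / rel)      # gives var(obs) = var(true) / rel
obs_x = true_x + rng.normal(0.0, noise_sd, size=n)
obs_y = true_y + rng.normal(0.0, noise_sd, size=n)

r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
r_expected = 0.5 * np.sqrt(rel * rel)      # attenuated correlation, 0.2
```

    Note that attenuation applies to between-person correlations; the commentary's point is precisely that this logic does not transfer one-to-one to experimental (within-condition) designs, where minimizing inter-individual variation is the goal.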

  7. Methods for assessing reliability and validity for a measurement tool: a case study and critique using the WHO haemoglobin colour scale.

    PubMed

    White, Sarah A; van den Broek, Nynke R

    2004-05-30

    Before introducing a new measurement tool it is necessary to evaluate its performance. Several statistical methods have been developed, or used, to evaluate the reliability and validity of a new assessment method in such circumstances. In this paper we review some commonly used methods. Data from a study that was conducted to evaluate the usefulness of a specific measurement tool (the WHO Colour Scale) are then used to illustrate the application of these methods. The WHO Colour Scale was developed under the auspices of the WHO to provide a simple, portable, and reliable method of detecting anaemia. The Colour Scale is a discrete interval scale, whereas the actual haemoglobin values it is used to estimate are on a continuous interval scale and can be measured accurately using electrical laboratory equipment. The methods we consider are: linear regression; correlation coefficients; paired t-tests; plotting differences against mean values and deriving limits of agreement; kappa and weighted kappa statistics; sensitivity and specificity; the intraclass correlation coefficient; and the repeatability coefficient. We note that although the definition and properties of each of these methods are well established, inappropriate methods continue to be used in the medical literature for assessing reliability and validity, as evidenced in the context of the evaluation of the WHO Colour Scale. Copyright 2004 John Wiley & Sons, Ltd.
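
    One of the agreement methods reviewed here, Bland-Altman limits of agreement, is easy to sketch numerically: compute the paired differences, their mean (bias), and bias ± 1.96 SD. The haemoglobin data below are invented; a rounded value stands in for the discrete-scale estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
hb_lab = rng.uniform(6.0, 16.0, size=100)   # continuous reference Hb [g/dL]
# Discrete scale-based estimate: reference plus error, rounded to whole g/dL.
hb_scale = np.round(hb_lab + rng.normal(0.0, 1.0, size=100))

diff = hb_scale - hb_lab                    # scale minus reference
bias = diff.mean()                          # systematic difference
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

    The limits answer the clinically relevant question, how far apart the two methods can be expected to lie for an individual patient, which a correlation coefficient alone cannot.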

  8. From positive psychology to psychopathology: the continuum of attention-deficit hyperactivity disorder.

    PubMed

    Greven, Corina U; Buitelaar, Jan K; Salum, Giovanni A

    2018-03-01

    Integration of positive psychology into clinical research and treatment has been slow. This integration can be facilitated by the conceptualisation of mental disorders as the high, symptomatic extreme of continuous normal variation. This implies that there is also a low, positive extreme, which is, however, uncharted territory. This study aims to examine how well current measures capture the low extreme of mental disorder continua, using attention-deficit hyperactivity disorder (ADHD) as an example. The ability of three validated scales to capture ADHD as a continuous trait was examined using Item Response Theory in a sample of 9,882 adolescents from the UK population-representative Twins Early Development Study. These scales were: the Strengths and Weaknesses of ADHD Symptoms and Normal Behaviour scale (SWAN), the Strengths and Difficulties Questionnaire (SDQ, hyperactivity subscale), and the Conners' Parent Rating Scale (Conners). Only the SWAN reliably differentiated between participants lying at any level of the continuous ADHD latent trait, including the extreme low, positive end (z-scores from -3 to +3). The SDQ showed low reliability across the ADHD latent trait. In contrast, the Conners performed best at differentiating individuals from the mean to the high symptomatic range (z-scores from 0 to +3). The SWAN was the only measure to provide indicators of 'positive mental health', endorsed in the presence of particularly good attentive abilities. Scales such as the SWAN that reliably capture ADHD as a continuous trait, including the positive end, are important for not missing meaningful variation in population-based studies. Indicators of positive mental health may be helpful in clinical practice, as positive attributes have been shown both to directly influence and to buffer the negative effects of psychiatric symptoms. © 2017 Association for Child and Adolescent Mental Health.
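
    The Item Response Theory logic behind this comparison can be sketched with a two-parameter logistic (2PL) item: an item's Fisher information peaks near its difficulty b, so a symptom-anchored item is nearly uninformative at the positive extreme (theta = -3), while an item spanning the low end remains informative there. The parameters below are invented, not the study's fitted values.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))   # P(endorse | theta)
    return a * a * p * (1.0 - p)                   # peaks where p = 0.5

# SDQ/Conners-like item anchored at the symptomatic high end (b = 2.0)
symptom_item = item_information(-3.0, a=1.5, b=2.0)
# SWAN-like item located toward the positive low end (b = -2.5)
full_range_item = item_information(-3.0, a=1.5, b=-2.5)
```

    A scale's total information is the sum over its items, so a scale with items only at high b values cannot differentiate respondents at the low, positive extreme, which is the pattern the study reports for the SDQ and Conners versus the SWAN.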

  9. The 6-min mastication test: a unique test to assess endurance of continuous chewing, normal values, reliability, reproducibility and usability in patients with mitochondrial disease.

    PubMed

    van den Engel-Hoek, L; Knuijt, S; van Gerven, M H J C; Lagarde, M L J; Groothuis, J T; de Groot, I J M; Janssen, M C H

    2017-03-01

    In patients with mitochondrial disease, fatigue and muscle problems are the most common complaints, and patients also experience these complaints during mastication. To measure the endurance of continuous mastication in patients with mitochondrial diseases, the 6-min mastication test (6MMT) was developed. This study included the collection of normative data for the 6MMT in a healthy population (children and adults). During 6 min of continuous mastication on a chew tube, the chewing cycles per minute, the total number of chewing cycles, and the difference between minute 1 (M1) and minute 6 (M6) were collected in 271 healthy participants (5-80 years old). These results were compared with those of nine paediatric and 25 adult patients with a mitochondrial disease. Visual analogue scale (VAS) scores were collected directly after the test and after 5 min, and a qualitative rating was made of the masticatory movements. The reproducibility of the 6MMT in the healthy population, with an interval of approximately 2 weeks, was good, and the inter-rater reliability for the observations was excellent. The patient group demonstrated a lower total number of chewing cycles or greater differences between M1 and M6. The 6MMT is a reliable and objective test for assessing the endurance of continuous chewing. It demonstrates the ability of healthy children and adults to chew for 6 min at a highly stable frequency of masticatory movements. The test may give an explanation for the masticatory problems of patient groups who complain of pain and fatigue during mastication. © 2017 John Wiley & Sons Ltd.

  10. Southeastern Power Administration 2007 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2007-12-28

    Dear Secretary Chu: I am proud to submit Southeastern Power Administration’s (Southeastern’s) fiscal year (FY) 2007 Annual Report for your review. The information included in this report reflects Southeastern’s programs, accomplishments, and financial activities for the 12-month period beginning October 1, 2006 and ending September 30, 2007. Southeastern marketed more than 5 billion kilowatt-hours of energy to 492 wholesale Federal power customers in an 11-state marketing area in FY 2007. Revenues from the sale of this power totaled approximately $219 million. Drought conditions continued to plague the southeast region of the United States during 2007, placing strains on our natural and financial resources. Southeastern purchased more than $40 million in replacement power to meet customer contract requirements to ensure the continued reliability of our nation’s power grid. With the financial assistance and support of our Federal power customers, continued funding for capitalized equipment replacements at various Corps of Engineers’ (Corps) hydroelectric projects provided much needed repairs and maintenance for aging facilities. Southeastern’s cyber and physical security program continued to be reviewed and updated to meet Department of Energy (DOE), Homeland Security, and North American Electric Reliability Corporation standards and requirements. Plans for the upcoming year include communication and cooperation with DOE, Federal power customers, and the Corps to maximize the benefits of our nation’s water resources. Competition for the use of water and the prolonged drought conditions will present another challenging year for our agency. The employees at Southeastern will be proactive in meeting these challenges and providing reliable hydroelectric power to the people in the southeast. Sincerely, Kenneth E. Legg, Administrator

  11. Reliable gain-scheduled control of discrete-time systems and its application to CSTR model

    NASA Astrophysics Data System (ADS)

    Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.

    2016-10-01

    This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator faults. The nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with a measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller, by implementing a probability-dependent Lyapunov function and a linear matrix inequality (LMI) approach, such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller are formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied to a continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.

  12. Spatial temperature gradients guide axonal outgrowth

    PubMed Central

    Black, Bryan; Vishwakarma, Vivek; Dhakal, Kamal; Bhattarai, Samik; Pradhan, Prabhakar; Jain, Ankur; Kim, Young-tae; Mohanty, Samarendra

    2016-01-01

    The formation of neural networks during development and regeneration after injury depends on the accuracy of axonal pathfinding, which is primarily believed to be influenced by chemical cues. Recently, there is growing evidence that physical cues can play a crucial role in axonal guidance, but a detailed mechanism for such guidance cues is lacking. By placing a weakly-focused near-infrared continuous wave (CW) laser microbeam in the path of an advancing axon, we discovered that the beam acts as a repulsive guidance cue. Here, we report that this highly effective at-a-distance guidance is the result of a temperature field produced by the absorption of the near-infrared laser light. Since light absorption by the extracellular medium increases when the laser wavelength is red-shifted, the threshold laser power for reliable guidance was significantly lower in the near-infrared than in the visible spectrum. The spatial temperature gradient caused by the near-infrared laser beam at a distance was found to activate temperature-sensitive membrane receptors, resulting in an influx of calcium. The repulsive guidance effect was significantly reduced when extracellular calcium was depleted or in the presence of a TRPV1 antagonist. Further, direct heating using a micro-heater confirmed that the axonal guidance is caused by a shallow temperature gradient, eliminating the role of any non-photothermal effects. PMID:27460512

  14. Beneficial effect of the bioflavonoid quercetin on cholecystokinin-induced mitochondrial dysfunction in isolated rat pancreatic acinar cells.

    PubMed

    Weber, Heike; Jonas, Ludwig; Wakileh, Michael; Krüger, Burkhard

    2014-03-01

    The pathogenesis of acute pancreatitis (AP) is still poorly understood. Thus, a reliable pharmacological therapy is currently lacking. In recent years, an impairment of the energy metabolism of pancreatic acinar cells, caused by Ca(2+)-mediated depolarization of the inner mitochondrial membrane and a decreased ATP supply, has been implicated as an important pathological event. In this study, we investigated whether quercetin exerts protection against mitochondrial dysfunction. Following treatment with or without quercetin, rat pancreatic acinar cells were stimulated with supramaximal cholecystokinin-8 (CCK). CCK caused a decrease in the mitochondrial membrane potential (MMP) and ATP concentration, whereas the mitochondrial dehydrogenase activity was significantly increased. Quercetin treatment before CCK application exerted no protection on MMP but increased ATP to a normal level, leading to a continuous decrease in the dehydrogenase activity. The protective effect of quercetin on mitochondrial function was accompanied by a reduction in CCK-induced changes to the cell membrane. Concerning the molecular mechanism underlying the protective effect of quercetin, an increased AMP/ATP ratio suggests that the AMP-activated protein kinase system may be activated. In addition, quercetin strongly inhibited CCK-induced trypsin activity. The results indicate that the use of quercetin may be a therapeutic strategy for reducing the severity of AP.

  15. Scoping study on trends in the economic value of electricity reliability to the U.S. economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph; Koomey, Jonathan; Lehman, Bryan

    During the past three years, working with more than 150 organizations representing public and private stakeholders, EPRI has developed the Electricity Technology Roadmap. The Roadmap identifies several major strategic challenges that must be successfully addressed to ensure a sustainable future in which electricity continues to play an important role in economic growth. Articulation of these anticipated trends and challenges requires a detailed understanding of the role and importance of reliable electricity in different sectors of the economy. This report is intended to contribute to that understanding by analyzing key aspects of trends in the economic value of electricity reliability in the U.S. economy. We first present a review of recent literature on electricity reliability costs. Next, we describe three distinct end-use approaches for tracking trends in reliability needs: (1) an analysis of the electricity-use requirements of office equipment in different commercial sectors; (2) an examination of the use of aggregate statistical indicators of industrial electricity use and economic activity to identify high reliability-requirement customer market segments; and (3) a case study of cleanrooms, which is a cross-cutting market segment known to have high reliability requirements. Finally, we present insurance industry perspectives on electricity reliability as an example of a financial tool for addressing customers' reliability needs.

  16. Standby battery requirements for telecommunications power

    NASA Astrophysics Data System (ADS)

    May, G. J.

    The requirements for standby power for telecommunications are changing as the network moves from conventional systems to Internet Protocol (IP) telephony. These new systems require higher power levels closer to the user, but the level of availability and reliability cannot be compromised if the network is to provide service in the event of a failure of the public utility. Many parts of these new networks are ac rather than dc powered, with UPS systems for back-up power. These generally have lower levels of reliability than dc systems, and the network needs to be designed with appropriate levels of redundancy so that overall reliability is not reduced. Mobile networks have different power requirements. Where there is a high density of nodes, continuity of service can be reasonably assured with short autonomy times. Furthermore, there is generally no requirement that these networks be the provider of last resort, and specifications for continuity of power are therefore directed towards revenue protection and overall reliability targets. As a result of these changes, battery requirements for reserve power are evolving. Shorter autonomy times are specified for parts of the network, although a large part will continue to need support for hours rather than minutes. Operational temperatures are increasing, and battery solutions that provide longer life in extreme conditions are becoming important. Different battery technologies will be discussed in the context of these requirements. Conventional large flooded lead/acid cells, with both pasted and tubular plates, are used in larger central-office applications, but the majority of requirements are met with valve-regulated lead/acid (VRLA) batteries. The different types of VRLA battery will be described and their suitability for various applications outlined. New developments in battery construction and battery materials have improved both performance and reliability in recent years. Alternative technologies are also being proposed for telecommunications power, including alternative battery chemistries such as lithium, flywheel energy storage, and fuel cells. These will be evaluated, and the medium-term position of lead/acid batteries in this important market will be assessed.

  17. ETARA PC version 3.3 user's guide: Reliability, availability, maintainability simulation model

    NASA Technical Reports Server (NTRS)

    Hoffman, David J.; Viterna, Larry A.

    1991-01-01

    A user's manual is presented for an interactive, menu-driven, personal-computer-based Monte Carlo reliability, availability, and maintainability simulation program called Event Time Availability Reliability (ETARA). Given a reliability block diagram representation of a system, ETARA simulates the behavior of the system over a specified period of time, using Monte Carlo methods to generate block failure and repair intervals as a function of exponential and/or Weibull distributions. Availability parameters such as equivalent availability, state availability (the percentage of time at a particular output state capability), continuous state duration, and number of state occurrences can be calculated. Initial spares allotment and spares replenishment on a resupply cycle can be simulated. The number of block failures is tabulated both individually and by block type, as well as total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated over a cumulative period of time or at specific points in time.
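    The simulation loop described above can be illustrated in miniature. The sketch below is not ETARA itself; the failure rate, repair rate, and horizon are made-up values. It draws exponential failure and repair intervals for a single block and estimates availability, which can be checked against the analytic steady-state value MTBF/(MTBF + MTTR):

```python
import random

def simulate_availability(lam, mu, horizon, seed=0):
    """Monte Carlo estimate of availability for one block with
    exponential time-to-failure (rate lam) and time-to-repair (rate mu)."""
    rng = random.Random(seed)
    t = up_time = 0.0
    while t < horizon:
        ttf = rng.expovariate(lam)       # draw the next up interval
        up_time += min(ttf, horizon - t) # count only time inside the horizon
        t += ttf
        if t >= horizon:
            break
        t += rng.expovariate(mu)         # draw the repair (down) interval
    return up_time / horizon

lam, mu = 1 / 500.0, 1 / 20.0            # illustrative: MTBF 500 h, MTTR 20 h
est = simulate_availability(lam, mu, horizon=1e6)
analytic = mu / (lam + mu)               # steady-state availability
print(est, analytic)
```

The same loop generalizes to a block diagram by simulating each block and combining up/down states through the series/parallel structure, which is essentially what a reliability block diagram tool automates.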

  18. Project report: Alaska Iways architecture

    DOT National Transportation Integrated Search

    2005-01-01

    The Alaska Department of Transportation and Public Facilities (ADOT&PF) is continually looking at ways to improve the efficiency, safety, and reliability of Alaska's transportation system. This effort includes the application of advanced communicat...

  19. Reliability considerations in long-life outer planet spacecraft system design

    NASA Technical Reports Server (NTRS)

    Casani, E. K.

    1975-01-01

    A Mariner Jupiter/Saturn mission has been planned for 1977. System reliability questions are discussed, taking into account the actual and design lifetime, causes of mission termination, in-flight failures and their consequences for the mission, and the use of redundancy to avoid failures. The design process employed optimizes the use of proven subsystem and system designs and then makes the necessary improvements to increase the lifetime as required.

  20. Robotic Assembly of Truss Structures for Space Systems and Future Research Plans

    NASA Technical Reports Server (NTRS)

    Doggett, William

    2002-01-01

    Many initiatives under study by both the space science and earth science communities require large space systems, i.e. with apertures greater than 15 m or dimensions greater than 20 m. This paper reviews the effort in NASA Langley Research Center's Automated Structural Assembly Laboratory which laid the foundations for robotic construction of these systems. In the Automated Structural Assembly Laboratory reliable autonomous assembly and disassembly of an 8 meter planar structure composed of 102 truss elements covered by 12 panels was demonstrated. The paper reviews the hardware and software design philosophy which led to reliable operation during weeks of near continuous testing. Special attention is given to highlight the features enhancing assembly reliability.

  1. Guidelines for Reliable DC/DC Converters for Space Use

    NASA Technical Reports Server (NTRS)

    Plante, Jeannette; Shue, Jack

    2008-01-01

    Persistent failures of DC/DC converters during ground testing and in flight led the NESC to investigate their causes and mitigation options. Research indicated misapplication and device quality to be the root causes. The study took 20 months, and the team included multiple NASA Centers: JPL, JSC, MSFC, and GSFC.

  2. Use of mycelium and detached leaves in bioassays for assessing resistance to boxwood blight

    USDA-ARS?s Scientific Manuscript database

    Boxwood blight caused by Calonectria pseudonaviculata is a newly emergent disease of boxwood (Buxus L.) in the United States that causes leaf drop, stem lesions, and plant death. A rapid and reliable laboratory assay that enables screening hundreds of boxwood genotypes for resistance to boxwood blig...

  3. Performability modeling with continuous accomplishment sets

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1979-01-01

    A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.

  4. 21 CFR 558.455 - Oxytetracycline and neomycin.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... (air-sac- infection) caused by E. coli susceptible to oxytetracycline. Feed continuously for 5 d; do... treatment of bacterial enteritis caused by E. coli and Salmonella choleraesuis and treatment of bacterial... (bacterial enteritis) caused by E. coli susceptible to neomycin. Feed continuously for 7 to 14 d; withdraw 5...

  5. 21 CFR 558.455 - Oxytetracycline and neomycin.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (air-sac- infection) caused by E. coli susceptible to oxytetracycline. Feed continuously for 5 d; do... treatment of bacterial enteritis caused by E. coli and Salmonella choleraesuis and treatment of bacterial... (bacterial enteritis) caused by E. coli susceptible to neomycin. Feed continuously for 7 to 14 d; withdraw 5...

  6. Systematic review of statistics on causes of deaths in hospitals: strengthening the evidence for policy-makers.

    PubMed

    Rampatige, Rasika; Mikkelsen, Lene; Hernandez, Bernardo; Riley, Ian; Lopez, Alan D

    2014-11-01

    To systematically review the reliability of hospital data on cause of death and encourage periodic reviews of these data using a standard method. We searched Google Scholar, Pubmed and Biblioteca Virtual de la Salud for articles in English, Spanish and Portuguese that reported validation studies of data on cause of death. We analysed the results of 199 studies that had used medical record reviews to validate the cause of death reported on death certificates or by the vital registration system. The screened studies had been published between 1983 and 2013 and their results had been reported in English (n = 124), Portuguese (n = 25) or Spanish (n = 50). Only 29 of the studies met our inclusion criteria. Of these, 13 had examined cause of death patterns at the population level - with a view to correcting cause-specific mortality fractions - while the other 16 had been undertaken to identify discrepancies in the diagnosis for specific diseases before and after medical record review. Most of the selected studies reported substantial misdiagnosis of causes of death in hospitals. There was wide variation in study methodologies. Many studies did not describe the methods used in sufficient detail to be able to assess the reproducibility or comparability of their results. The assumption that causes of death are being accurately reported in hospitals is unfounded. To improve the reliability and usefulness of reported causes of death, national governments should conduct periodic medical record reviews to validate the quality of their hospital cause of death data, using a standard method.

  7. Systematic review of statistics on causes of deaths in hospitals: strengthening the evidence for policy-makers

    PubMed Central

    Rampatige, Rasika; Mikkelsen, Lene; Hernandez, Bernardo; Riley, Ian

    2014-01-01

    Abstract Objective To systematically review the reliability of hospital data on cause of death and encourage periodic reviews of these data using a standard method. Methods We searched Google Scholar, Pubmed and Biblioteca Virtual de la Salud for articles in English, Spanish and Portuguese that reported validation studies of data on cause of death. We analysed the results of 199 studies that had used medical record reviews to validate the cause of death reported on death certificates or by the vital registration system. Findings The screened studies had been published between 1983 and 2013 and their results had been reported in English (n = 124), Portuguese (n = 25) or Spanish (n = 50). Only 29 of the studies met our inclusion criteria. Of these, 13 had examined cause of death patterns at the population level – with a view to correcting cause-specific mortality fractions – while the other 16 had been undertaken to identify discrepancies in the diagnosis for specific diseases before and after medical record review. Most of the selected studies reported substantial misdiagnosis of causes of death in hospitals. There was wide variation in study methodologies. Many studies did not describe the methods used in sufficient detail to be able to assess the reproducibility or comparability of their results. Conclusion The assumption that causes of death are being accurately reported in hospitals is unfounded. To improve the reliability and usefulness of reported causes of death, national governments should conduct periodic medical record reviews to validate the quality of their hospital cause of death data, using a standard method. PMID:25378742

  8. Understanding Early Sexual Development (For Parents)

    MedlinePlus

    ... Your preschooler will continue to learn important sexual attitudes from you — from how you react to people ... begin to have a bigger influence on sexual attitudes. If you aren't a reliable resource, your ...

  9. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications.

    PubMed

    Wei, Yiqiao; Chen, Jingjun; Hwang, Seung-Hoon

    2018-03-02

    For vehicle-to-vehicle (V2V) communication, issues such as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer because of the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has been neglected with regard to physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme that adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications.
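    The core rule, adjusting data rate by neighbor count, can be sketched as a threshold table. This is an illustrative reconstruction, not the scheme from the paper: the thresholds and the IEEE 802.11p-style rates (Mbit/s, 10 MHz channels) are assumptions chosen for the example.

```python
def select_data_rate(num_neighbors):
    """Pick a V2V data rate from the number of connectable vehicles nearby.

    A denser topology gets a more robust (lower) rate. The thresholds and
    the 802.11p-style rates below are illustrative, not from the paper.
    """
    if num_neighbors <= 5:
        return 27.0    # sparse traffic: 64-QAM 3/4
    if num_neighbors <= 15:
        return 12.0    # moderate traffic: 16-QAM 1/2
    return 6.0         # dense traffic: QPSK 1/2

print(select_data_rate(3), select_data_rate(10), select_data_rate(30))
```

In practice such a rule would be re-evaluated whenever the beaconing layer reports a change in the set of reachable neighbors, so the physical topology itself triggers the adaptation.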

  10. The therapeutic factor inventory-8: Using item response theory to create a brief scale for continuous process monitoring for group psychotherapy.

    PubMed

    Tasca, Giorgio A; Cabrera, Christine; Kristjansson, Elizabeth; MacNair-Semands, Rebecca; Joyce, Anthony S; Ogrodniczuk, John S

    2016-01-01

    We tested a very brief version of the 23-item Therapeutic Factors Inventory-Short Form (TFI-S), and describe the use of Item Response Theory (IRT) for the purpose of developing short and reliable scales for group psychotherapy. Group therapy patients (N = 578) completed the TFI-S on one occasion, and their data were used for the IRT analysis. Of those, 304 completed the TFI-S and other measures on more than one occasion to assess sensitivity to change, concurrent, and predictive validity of the brief version. Results suggest that the new TFI-8 is a brief, reliable, and valid measure of a higher-order group therapeutic factor. The TFI-8 may be used for continuous process measurement and feedback to improve the functioning of therapy groups.

  11. Adjacent Vehicle Number-Triggered Adaptive Transmission for V2V Communications

    PubMed Central

    Wei, Yiqiao; Chen, Jingjun

    2018-01-01

    For vehicle-to-vehicle (V2V) communication, issues such as continuity and reliability still have to be solved. Specifically, it is necessary to consider a more scalable physical layer because of the high-speed mobility of vehicles and the complex channel environment. Adaptive transmission has been adopted in channel-dependent scheduling; however, it has been neglected with regard to physical topology changes in the vehicle network. In this paper, we propose a physical topology-triggered adaptive transmission scheme that adjusts the data rate between vehicles according to the number of connectable vehicles nearby. We also investigate the performance of the proposed method using computer simulations and compare it with conventional methods. The numerical results show that the proposed method can provide more continuous and reliable data transmission for V2V communications. PMID:29498646

  12. A Universal Noninvasive Continuous Blood Pressure Measurement System for Remote Healthcare Monitoring.

    PubMed

    Mukherjee, Ramtanu; Ghosh, Sanchita; Gupta, Bharat; Chakravarty, Tapas

    2018-01-22

    The effectiveness of any remote healthcare monitoring system depends on how accurate, patient-friendly, versatile, and cost-effective a measurement it delivers. There has long been a demand for a long-term, noninvasive, remote blood pressure (BP) measurement system that could be used worldwide in the remote healthcare industry. Thus, noninvasive continuous BP measurement and remote monitoring have become an emerging area in the remote healthcare industry. Photoplethysmography (PPG)-based BP measurement is a continuous, unobtrusive, patient-friendly, and cost-effective solution. However, BP measurements through PPG sensors are not very reliable or accurate, owing to major limitations such as pressure disturbance, motion artifacts, and variations in human skin tone. A novel reflective PPG sensor has been developed to eliminate the abovementioned pressure disturbance and motion artifacts during BP measurement. Considering the variations of human skin tone across demography, a novel algorithm has been developed to make the BP measurement accurate and reliable. The training dataset comprised data from 186 subjects, and the trial dataset from another 102 subjects. The overall accuracy achieved using the proposed method is nearly 98%, demonstrating its efficacy. The developed BP monitoring system is accurate, reliable, cost-effective, handy, and user-friendly. It is also expected that this system will be useful for monitoring the BP of infants, elderly people, and patients with wounds or burn injuries, or in the intensive-care-unit environment.

  13. Hypergeometric continuation of divergent perturbation series: II. Comparison with Shanks transformation and Padé approximation

    NASA Astrophysics Data System (ADS)

    Sanders, Sören; Holthaus, Martin

    2017-11-01

    We explore in detail how analytic continuation of divergent perturbation series by generalized hypergeometric functions is achieved in practice. Using the example of strong-coupling perturbation series provided by the two-dimensional Bose-Hubbard model, we compare hypergeometric continuation to Shanks and Padé techniques, and demonstrate that the former yields a powerful, efficient and reliable alternative for computing the phase diagram of the Mott insulator-to-superfluid transition. In contrast to Shanks transformations and Padé approximations, hypergeometric continuation also allows us to determine the exponents which characterize the divergence of correlation functions at the transition points. Therefore, hypergeometric continuation constitutes a promising tool for the study of quantum phase transitions.
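    The Shanks transformation that the authors benchmark against is compact enough to demonstrate. A minimal example on the alternating series for ln 2 (a slowly converging rather than divergent series, used here only to show the acceleration; it is not the Bose-Hubbard series from the paper):

```python
import math

def shanks(seq):
    """One Shanks transformation pass over a sequence of partial sums:
    S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)."""
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# Partial sums of ln 2 = 1 - 1/2 + 1/3 - 1/4 + ...
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

once = shanks(partial)     # first pass
twice = shanks(once)       # iterated transformation

print(abs(partial[-1] - math.log(2)))  # raw partial-sum error
print(abs(twice[-1] - math.log(2)))    # error after two Shanks passes
```

Padé approximation plays a similar role by matching the series to a rational function; the paper's point is that hypergeometric continuation goes further, also recovering critical exponents that neither of these sequence transformations provides.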

  14. A comparison of reliability and conventional estimation of safe fatigue life and safe inspection intervals

    NASA Technical Reports Server (NTRS)

    Hooke, F. H.

    1972-01-01

    Both the conventional and reliability analyses for determining safe fatigue life are predicated on a population having a specified (usually log-normal) distribution of life to collapse under a fatigue test load. Under a random service load spectrum, random occurrences of loads larger than the fatigue test load may strike, and cause the collapse of, structures that are already weakened, though not yet down to the fatigue-test-load strength. These collapses are included in the reliability analysis but excluded from the conventional analysis. The theory of risk determination by each method is given, and several reasonably typical examples have been worked out, in which it transpires that if one excludes collapse through exceedance of the uncracked strength, the reliability and conventional analyses give virtually identical probabilities of failure or survival.
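    The starting point both methods share is the log-normal life distribution. A hedged sketch of the basic risk computation (the median life, log-scatter, and scatter factor below are illustrative values, not numbers from this report): the probability of collapse before the declared safe life is the standard normal CDF evaluated at the standardized log-life.

```python
import math

def failure_prob_before(life, median_life, sigma_log):
    """P(collapse life < life) for a log-normal life distribution:
    Phi((ln life - ln median_life) / sigma_log), via math.erf."""
    z = (math.log(life) - math.log(median_life)) / sigma_log
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative numbers: median test life 40,000 h, log-standard-deviation
# 0.4, and a safe life declared by dividing by a scatter factor of 4.
median_life, sigma_log = 40_000.0, 0.4
safe_life = median_life / 4
p = failure_prob_before(safe_life, median_life, sigma_log)
print(p)   # small probability of collapse before the safe life
```

The conventional analysis stops at a computation like this; the reliability analysis additionally integrates over the chance that a service load larger than the fatigue test load meets a partially weakened structure, which is why the two can differ in principle.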

  15. Gas-driven pump for ground-water samples

    USGS Publications Warehouse

    Signor, Donald C.

    1978-01-01

    Observation wells installed for artificial-recharge research, and other wells used in various ground-water programs, are frequently cased with small-diameter steel pipe. To obtain samples from these small-diameter wells in order to monitor water quality and to calibrate solute-transport models, a small-diameter pump with unique operating characteristics is required, one that causes minimal alteration of samples during field sampling. A small-diameter gas-driven pump was designed and built to obtain water samples from wells of two-inch diameter or larger. The pump is a double-piston type with the following characteristics: (1) the water sample is isolated from the operating gas, (2) no source of electricity is necessary, (3) operation is continuous, (4) use of compressed gas is efficient, and (5) operation is reliable over extended periods of time. Principles of operation, actual operation techniques, gas-use analyses, and operating experience are described. Complete working drawings and a component list are included. Recent modifications and pump construction for high-pressure applications are also described. (Woodard-USGS)

  16. Streamflow characterization using functional data analysis of the Potomac River

    NASA Astrophysics Data System (ADS)

    Zelmanow, A.; Maslova, I.; Ticlavilca, A. M.; McKee, M.

    2013-12-01

    Flooding and droughts are extreme hydrological events that affect the United States economically and socially. The severity and unpredictability of flooding have caused billions of dollars in damage and the loss of lives in the eastern United States. In this context, there is an urgent need to build a firm scientific basis for adaptation by developing and applying new modeling techniques for accurate streamflow characterization and reliable hydrological forecasting. The goal of this analysis is to use numerical streamflow characteristics to classify, model, and estimate the likelihood of extreme events in the eastern United States, principally on the Potomac River. Functional data analysis techniques are used to study yearly streamflow patterns, with the extreme streamflow events characterized via functional principal component analysis. These methods are merged with more classical techniques such as cluster analysis, classification analysis, and time series modeling. The developed functional data analysis approach is used to model continuous streamflow hydrographs. The forecasting potential of this technique is explored by incorporating climate factors to produce a yearly streamflow outlook.

  17. Syntactic sequencing in Hebbian cell assemblies.

    PubMed

    Wennekers, Thomas; Palm, Günther

    2009-12-01

    Hebbian cell assemblies provide a theoretical framework for modeling cognitive processes that grounds them in the underlying physiological neural circuits. Recently we have presented an extension of cell assemblies by operational components which allows modeling of aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks, which cause a directed flow of activity through the neural state space. We identify parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system, in conjunction with a winner-takes-all mechanism, can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation, the model can enter an autonomous mode in which it generates temporal grammatical patterns continuously.

  18. Development of an oximeter for neurology

    NASA Astrophysics Data System (ADS)

    Aleinik, A.; Serikbekova, Z.; Zhukova, N.; Zhukova, I.; Nikitina, M.

    2016-06-01

    Cerebral desaturation can occur during surgical manipulation while other parameters vary insignificantly, and prolonged intervals of cerebral anoxia can cause serious damage to the nervous system. The commonly used method for measuring cerebral blood flow relies on invasive catheters. Other techniques include single photon emission computed tomography (SPECT), positron emission tomography (PET), and magnetic resonance imaging (MRI). Tomographic methods frequently require isotope administration, which may result in anaphylactic reactions to contrast media and associated nerve diseases. Moreover, the high cost and the need for continuous monitoring make these techniques difficult to apply in clinical practice. Cerebral oximetry is a method for measuring oxygen saturation using infrared spectrometry; in addition, reflectance pulse oximetry can detect sudden changes in sympathetic tone. For this purpose, a reflectance pulse oximeter for use in neurology was developed. A reflectance oximeter has a definite advantage in that it can be used to measure oxygen saturation in any part of the body. Preliminary results indicate that the device has good resolution and high reliability, and its modern circuit design improves on the characteristics of existing devices.

  19. Neonatal streptococcal infections.

    PubMed Central

    Parker, M. T.

    1977-01-01

    Most serious neonatal streptococcal infections are caused by group-B streptococci. The pattern of serious group-B neonatal disease in Britain resembles that described in other countries; both "early-onset" and "late-onset" forms are seen, but reliable incidence rates have not yet been determined. Serological-type III strains predominate in neonatal meningitis in Britain, but not so markedly as in some parts of the U.S.A. A deficiency of group-II strains in meningitis is, however, apparent in both countries. Present information about the carriage of group-B streptococci suggests that antibiotic prophylaxis administered to mothers or infants is unlikely to reduce greatly the frequency of "early-onset" disease. The continuous presence of a suitable chemical disinfectant in the vagina during labour might be more effective. Too little is known about the epidemiology of "late-onset" neonatal disease for rational preventive measures to be designed. More information is required about the postnatal acquisition of group-B streptococci by neonates and its sources, and about passive transfer of type-specific antibody from the mother to her child. PMID:339212

  20. Exemplary Design Envelope Specification for Standard Modular Hydropower Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witt, Adam M.; Smith, Brennan T.; Tsakiris, Achilleas

    Hydropower is an established, affordable renewable energy generation technology supplying nearly 18% of the electricity consumed globally. A hydropower facility interacts continuously with the surrounding water resource environment, causing alterations of varying magnitude in the natural flow of water, energy, fish, sediment, and recreation upstream and downstream. A universal challenge in facility design is balancing the extraction of useful energy and power system services from a stream with the need to maintain ecosystem processes and natural environmental function. On one hand, hydroelectric power is a carbon-free, renewable, and flexible asset to the power system. On the other, the disruption of longitudinal connectivity and the artificial barrier to aquatic movement created by hydraulic structures can produce negative impacts that stress fresh water environments. The growing need for carbon-free, reliable, efficient distributed energy sources suggests there is significant potential for hydropower projects that can deploy with low installed costs, enhanced ecosystem service offerings, and minimal disruptions of the stream environment.

  1. Strategies to improve the mechanical strength and water resistance of agar films for food packaging applications.

    PubMed

    Sousa, Ana M M; Gonçalves, Maria P

    2015-11-05

    Agar films possess several properties adequate for food packaging applications. However, their high production cost and the quality variations caused by physiological and environmental factors affecting wild seaweeds make them less attractive to industry. In this work, native (NA) and alkali-modified (AA) agars obtained from sustainably grown seaweeds (integrated multi-trophic aquaculture) were mixed with locust bean gum (LBG) to make 'knife-coated' films with a fixed final concentration (1 wt%) and variable agar/LBG ratios. Agar films were easier to process upon LBG addition (a viscosity increase and reduced gelling character of the film-forming solutions, observed by dynamic oscillatory and steady shear measurements). The mechanical properties and water resistance were optimal for films with 50 and/or 75% LBG content, and best in the case of NA (cheaper to extract). These findings can help reduce the production cost of agar packaging films. Moreover, the controlled cultivation of seaweeds can provide continuous and reliable feedstock for transformation industries.

  2. Pneumatic gap sensor and method

    DOEpatents

    Bagdal, Karl T.; King, Edward L.; Follstaedt, Donald W.

    1992-01-01

    An apparatus and method for monitoring and maintaining a predetermined width in the gap between a casting nozzle and a casting wheel, wherein the gap is monitored by means of at least one pneumatic gap sensor. The pneumatic gap sensor is mounted on the casting nozzle in proximity to the casting surface and is connected by means of a tube to a regulator and a transducer. The regulator provides a flow of gas through a restrictor to the pneumatic gap sensor, and the transducer translates the changes in gas pressure caused by the proximity of the casting wheel to the pneumatic gap sensor outlet into a signal intelligible to a control device. The relative positions of the casting nozzle and casting wheel can thereby be selectively adjusted to continually maintain a predetermined distance between their adjacent surfaces. The apparatus and method enable accurate monitoring of the actual casting gap in a simple and reliable manner resistant to the extreme temperatures and otherwise hostile casting environment.

  3. Pneumatic gap sensor and method

    DOEpatents

    Bagdal, K.T.; King, E.L.; Follstaedt, D.W.

    1992-03-03

    An apparatus and method for monitoring and maintaining a predetermined width in the gap between a casting nozzle and a casting wheel, wherein the gap is monitored by means of at least one pneumatic gap sensor. The pneumatic gap sensor is mounted on the casting nozzle in proximity to the casting surface and is connected by means of a tube to a regulator and a transducer. The regulator provides a flow of gas through a restrictor to the pneumatic gap sensor, and the transducer translates the changes in gas pressure caused by the proximity of the casting wheel to the pneumatic gap sensor outlet into a signal intelligible to a control device. The relative positions of the casting nozzle and casting wheel can thereby be selectively adjusted to continually maintain a predetermined distance between their adjacent surfaces. The apparatus and method enable accurate monitoring of the actual casting gap in a simple and reliable manner resistant to the extreme temperatures and otherwise hostile casting environment. 6 figs.
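    The back-pressure principle described in these patent records — outlet gas pressure rising as the casting wheel narrows the gap — lends itself to a calibration-curve lookup. The following is a minimal sketch of that idea, not the patented implementation; the calibration table, pressures, and gap values are all hypothetical.

    ```python
    import bisect

    # Hypothetical calibration: back pressure (kPa) measured at known gaps (mm).
    # Pressure rises as the casting wheel closes off the sensor outlet.
    CAL_PRESSURE_KPA = [20.0, 35.0, 55.0, 80.0, 110.0]   # ascending
    CAL_GAP_MM       = [1.00, 0.75, 0.50, 0.30, 0.15]    # descending

    def gap_from_pressure(p_kpa):
        """Infer the casting gap by linear interpolation on the calibration table."""
        if p_kpa <= CAL_PRESSURE_KPA[0]:
            return CAL_GAP_MM[0]
        if p_kpa >= CAL_PRESSURE_KPA[-1]:
            return CAL_GAP_MM[-1]
        i = bisect.bisect_right(CAL_PRESSURE_KPA, p_kpa)
        p0, p1 = CAL_PRESSURE_KPA[i - 1], CAL_PRESSURE_KPA[i]
        g0, g1 = CAL_GAP_MM[i - 1], CAL_GAP_MM[i]
        return g0 + (g1 - g0) * (p_kpa - p0) / (p1 - p0)

    print(gap_from_pressure(45.0))  # midway between the 0.75 mm and 0.50 mm calibration points
    ```

    In the arrangement the patent describes, the control device would compare the inferred gap with the setpoint and adjust the nozzle-wheel spacing accordingly.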

  4. Real-time observational evidence of changing Asian dust morphology with the mixing of heavy anthropogenic pollution

    NASA Astrophysics Data System (ADS)

    Pan, X.; Uno, I.; Wang, Z.; Nishizawa, T.; Sugimoto, N.; Yamamoto, S.; Kobayashi, H.; Sun, Y.; Fu, P.; Tang, X.; Wang, Z.

    2017-12-01

    Natural mineral dust, heavy anthropogenic pollution, and their complex interactions cause significant environmental problems in East Asia. Owing to the limitations of observation techniques, the real-time morphological changes of Asian dust particles caused by the coating of anthropogenic pollutants remain statistically unclear. Here, we used a newly developed single-particle polarization detector to quantitatively investigate, for the first time, the evolution of the polarization properties of light backscattered from dust particles as they mixed with anthropogenic pollutants in North China. The decrease in the observed depolarization ratio is mainly attributed to the decrease in the aspect ratio of the dust particles as a result of continuous coating processes. Hygroscopic growth of calcium nitrate (Ca(NO3)2) on the surface of the dust particles played a vital role, particularly when the particles were stagnant in the polluted region under high relative humidity. Reliable statistics highlight the importance of internally mixed, `quasi-spherical' Asian dust particles, which act markedly as cloud condensation nuclei and influence regional climate.

  5. Real-time observational evidence of changing Asian dust morphology with the mixing of heavy anthropogenic pollution.

    PubMed

    Pan, Xiaole; Uno, Itsushi; Wang, Zhe; Nishizawa, Tomoaki; Sugimoto, Nobuo; Yamamoto, Shigekazu; Kobayashi, Hiroshi; Sun, Yele; Fu, Pingqing; Tang, Xiao; Wang, Zifa

    2017-03-23

    Natural mineral dust, heavy anthropogenic pollution, and their complex interactions cause significant environmental problems in East Asia. Owing to the limitations of observation techniques, the real-time morphological changes of Asian dust particles caused by the coating of anthropogenic pollutants remain statistically unclear. Here, we used a newly developed single-particle polarization detector to quantitatively investigate, for the first time, the evolution of the polarization properties of light backscattered from dust particles as they mixed with anthropogenic pollutants in North China. The decrease in the observed depolarization ratio is mainly attributed to the decrease in the aspect ratio of the dust particles as a result of continuous coating processes. Hygroscopic growth of calcium nitrate (Ca(NO3)2) on the surface of the dust particles played a vital role, particularly when the particles were stagnant in the polluted region under high relative humidity. Reliable statistics highlight the importance of internally mixed, 'quasi-spherical' Asian dust particles, which act markedly as cloud condensation nuclei and influence regional climate.

  6. Third-generation pure alumina and alumina matrix composites in total hip arthroplasty: What is the evidence?

    PubMed

    Hannouche, Didier; Zingg, Matthieu; Miozzari, Hermes; Nizard, Remy; Lübbeke, Anne

    2018-01-01

    Wear, corrosion, and periprosthetic osteolysis are important causes of failure in joint arthroplasty, especially in young patients. Ceramic bearings, developed 40 years ago, are an increasingly popular choice in hip arthroplasty. New manufacturing procedures have increased the strength and reliability of ceramic materials and reduced the risk of complications. In recent decades, ceramics made of pure alumina have continuously improved, resulting in a surgical-grade material that fulfills clinical requirements. Despite the track record of safety and long-term results, third-generation pure alumina ceramics are being replaced in clinical practice by alumina matrix composites, which are composed of alumina and zirconia. In this review, the characteristics of both materials are discussed, and the long-term results with third-generation alumina-on-alumina bearings and the associated complications are compared with those of other available ceramics. Cite this article: EFORT Open Rev 2018;3:7-14. DOI: 10.1302/2058-5241.3.170034.

  7. Reliability of muscle strength assessment in chronic post-stroke hemiparesis: a systematic review and meta-analysis.

    PubMed

    Rabelo, Michelle; Nunes, Guilherme S; da Costa Amante, Natália Menezes; de Noronha, Marcos; Fachin-Martins, Emerson

    2016-02-01

    Muscle weakness is the main cause of motor impairment among stroke survivors and is associated with reduced peak muscle torque. The aim was to systematically investigate and organize the evidence on the reliability of muscle strength evaluation measures in post-stroke survivors with chronic hemiparesis. Two assessors independently searched four electronic databases in January 2014 (Medline, Scielo, CINAHL, Embase). Inclusion criteria comprised studies on the reliability of muscle strength assessment in adult post-stroke patients with chronic hemiparesis. We extracted reliability data from the included studies, measured by the intraclass correlation coefficient (ICC) and/or similar indices. The meta-analyses were conducted only with isokinetic data. Of 450 articles, eight were included in this review. After quality analysis, two studies were considered of high quality. Five different joints were analyzed within the included studies (knee, hip, ankle, shoulder, and elbow). Their reliability results varied from low to very high (ICCs from 0.48 to 0.99). Meta-analysis results for knee extension varied from high to very high reliability (pooled ICCs from 0.89 to 0.97), those for knee flexion varied from high to very high reliability (pooled ICCs from 0.84 to 0.91), and those for ankle plantar flexion showed high reliability (pooled ICC = 0.85). Objective muscle strength assessment can be reliably used in the lower and upper extremities of post-stroke patients with chronic hemiparesis.
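    Pooled ICCs of the kind reported here come from two-way ANOVA-based estimators. As a hedged illustration, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, in the Shrout-Fleiss formulation) on invented test-retest torque data; the function name and numbers are for the example only and are not from the review.

    ```python
    import numpy as np

    def icc_2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single measures.

        ratings: (n subjects x k sessions) array. Illustrative only.
        """
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)     # per-subject means
        col_means = ratings.mean(axis=0)     # per-session means

        ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
        ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
        ss_total = ((ratings - grand) ** 2).sum()
        ss_err = ss_total - ss_rows - ss_cols

        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = ss_err / ((n - 1) * (k - 1))

        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
        )

    # Hypothetical knee-extension torque (Nm), 5 patients x 2 sessions
    torque = [[110, 112], [85, 88], [140, 138], [95, 99], [120, 118]]
    print(round(icc_2_1(torque), 3))
    ```

    With consistent repeat measurements like these, the estimate falls in the "very high" band the review describes.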

  8. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending on the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analysis is more beneficial than the classical approach in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods outperform maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
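    A competing-risk system of the kind analyzed here can be simulated directly: each independent cause draws a Weibull lifetime, and the system fails at the earliest draw, which also labels the cause of failure. The sketch below is illustrative only (the cause names, shapes, and scales are hypothetical) and uses inverse-CDF sampling rather than the paper's Bayesian estimators.

    ```python
    import math
    import random

    random.seed(42)

    def weibull_sample(shape, scale):
        # Inverse-CDF sampling: T = scale * (-ln U)^(1/shape), U ~ Uniform(0, 1)
        u = random.random()
        return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

    # Hypothetical independent failure causes: (Weibull shape, scale in hours)
    causes = {"wear": (2.0, 1000.0), "electrical": (1.0, 1500.0)}

    n = 20000
    times, labels = [], []
    for _ in range(n):
        draws = {c: weibull_sample(k, lam) for c, (k, lam) in causes.items()}
        cause = min(draws, key=draws.get)   # the earliest failure wins
        labels.append(cause)
        times.append(draws[cause])

    mttf = sum(times) / n
    share_wear = labels.count("wear") / n
    print(f"system MTTF ~ {mttf:.0f} h, wear share ~ {share_wear:.2f}")
    ```

    Fitting the per-cause parameters back from such labeled failure data (by maximum likelihood or with a prior) is exactly the estimation problem the paper studies.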

  9. Reliability of the Roussel Uclaf Causality Assessment Method for Assessing Causality in Drug-Induced Liver Injury*

    PubMed Central

    Rochon, James; Protiva, Petr; Seeff, Leonard B.; Fontana, Robert J.; Liangpunsakul, Suthat; Watkins, Paul B.; Davern, Timothy; McHutchison, John G.

    2013-01-01

    The Roussel Uclaf Causality Assessment Method (RUCAM) was developed to quantify the strength of association between a liver injury and the medication implicated as causing the injury. However, its reliability in a research setting has never been fully explored. The aim of this study was to determine the test-retest and interrater reliabilities of RUCAM in retrospectively identified cases of drug-induced liver injury. The Drug-Induced Liver Injury Network is enrolling well-defined cases of hepatotoxicity caused by isoniazid, phenytoin, clavulanate/amoxicillin, or valproate occurring since 1994. Each case was adjudicated by three reviewers working independently; after an interval of at least 5 months, cases were readjudicated by the same reviewers. A total of 40 drug-induced liver injury cases were enrolled, including individuals treated with isoniazid (nine), phenytoin (five), clavulanate/amoxicillin (15), and valproate (11). Mean ± standard deviation age at protocol-defined onset was 44.8 ± 19.5 years; patients were 68% female and 78% Caucasian. Cases were classified as hepatocellular (44%), mixed (28%), or cholestatic (28%). Test-retest differences ranged from −7 to +8, with complete agreement in only 26% of cases. On average, the maximum absolute difference among the three reviewers was 3.1 on the first adjudication and 2.7 on the second, although much of this variability could be attributed to differences between the enrolling investigator and the external reviewers. The test-retest reliability by the same assessors was 0.54 (upper 95% confidence limit = 0.77); the interrater reliability was 0.45 (upper 95% confidence limit = 0.58). Categorizing the RUCAM on a five-category scale improved these reliabilities, but only marginally. In conclusion, the mediocre reliability of the RUCAM is problematic for future studies of drug-induced liver injury. Alternative methods, including modifying the RUCAM, developing drug-specific instruments, or causality assessment based on expert opinion, may be more appropriate. PMID:18798340

  10. Reliability Correction for Functional Connectivity: Theory and Implementation

    PubMed Central

    Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng

    2016-01-01

    Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
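    The attenuation correction described here follows Spearman's classical disattenuation formula, r_true = r_obs / sqrt(rel_A × rel_B), applied per region pair using the pre-estimated reliability maps. A minimal numeric sketch (the correlation and reliabilities below are hypothetical, not values from the study):

    ```python
    import math

    def disattenuate(r_obs, rel_a, rel_b):
        """Spearman's correction for attenuation.

        r_obs: observed correlation between two regions' signals.
        rel_a, rel_b: test-retest reliabilities of those signals (0-1].
        The corrected value is clipped to the valid correlation range.
        """
        r_true = r_obs / math.sqrt(rel_a * rel_b)
        return max(-1.0, min(1.0, r_true))

    # Hypothetical: observed seed correlation 0.30, with reliabilities
    # 0.45 (e.g. a low-reliability medial temporal region) and 0.80
    print(disattenuate(0.30, 0.45, 0.80))
    ```

    Low reliability in one member of the pair inflates the correction most, which is why an underestimated medial temporal contribution is recovered by this procedure.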

  11. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    PubMed

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
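    The distribution reliability indices the article examines are conventionally the IEEE 1366 metrics SAIFI, SAIDI, and CAIDI, computed from customer interruption counts and durations. A minimal sketch with a hypothetical outage log:

    ```python
    # Hypothetical outage log: (customers_interrupted, duration_minutes)
    outages = [(1200, 90), (300, 45), (5000, 240), (75, 30)]
    customers_served = 50000

    # SAIFI: average interruptions per customer served
    saifi = sum(c for c, _ in outages) / customers_served
    # SAIDI: average interruption minutes per customer served
    saidi = sum(c * d for c, d in outages) / customers_served
    # CAIDI: average restoration time per interruption
    caidi = saidi / saifi

    print(f"SAIFI={saifi:.3f} int/cust, SAIDI={saidi:.1f} min/cust, CAIDI={caidi:.1f} min/int")
    ```

    IEEE 1366 also defines a major-event-day exclusion that removes extreme days from these averages — precisely the differential treatment of large hazard-induced outages that the authors question.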

  12. Analysis on Sealing Reliability of Bolted Joint Ball Head Component of Satellite Propulsion System

    NASA Astrophysics Data System (ADS)

    Guo, Tao; Fan, Yougao; Gao, Feng; Gu, Shixin; Wang, Wei

    2018-01-01

    The propulsion system is one of the important subsystems of a satellite, and its performance directly affects the service life, attitude control, and reliability of the satellite. This paper analyzes the sealing principle of the bolted-joint ball head component of a satellite propulsion system and discusses three aspects: the compatibility of anhydrous hydrazine with the component, the influence of the ground environment on its sealing performance, and environmentally induced material failure. The analysis shows that the sealing reliability of the bolted-joint ball head component is good and that the influence of the above three aspects on its sealing can be ignored.

  13. Patient safety in anesthesia: learning from the culture of high-reliability organizations.

    PubMed

    Wright, Suzanne M

    2015-03-01

    There has been an increased awareness of and interest in patient safety and improved outcomes, as well as a growing body of evidence substantiating medical error as a leading cause of death and injury in the United States. According to The Joint Commission, US hospitals demonstrate improvements in health care quality and patient safety. Although this progress is encouraging, much room for improvement remains. High-reliability organizations, industries that deliver reliable performances in the face of complex working environments, can serve as models of safety for our health care system until plausible explanations for patient harm are better understood. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A systematic review of reliable and valid tools for the measurement of patient participation in healthcare.

    PubMed

    Phillips, Nicole Margaret; Street, Maryann; Haesler, Emily

    2016-02-01

    Patient participation in healthcare is recognised internationally as essential for consumer-centric, high-quality healthcare delivery. Its measurement as part of continuous quality improvement requires development of agreed standards and measurable indicators. This systematic review sought to identify strategies to measure patient participation in healthcare and to report their reliability and validity. In the context of this review, patient participation was constructed as shared decision-making, acknowledging the patient as having critical knowledge regarding their own health and care needs and promoting self-care/autonomy. Following a comprehensive search, studies reporting reliability or validity of an instrument used in a healthcare setting to measure patient participation, published in English between January 2004 and March 2014 were eligible for inclusion. From an initial search, which identified 1582 studies, 156 studies were retrieved and screened against inclusion criteria. Thirty-three studies reporting 24 patient participation measurement tools met inclusion criteria, and were critically appraised. The majority of studies were descriptive psychometric studies using prospective, cross-sectional designs. Almost all the tools completed by patients, family caregivers, observers or more than one stakeholder focused on aspects of patient-professional communication. Few tools designed for completion by patients or family caregivers provided valid and reliable measures of patient participation. There was low correlation between many of the tools and other measures of patient satisfaction. Few reliable and valid tools for measurement of patient participation in healthcare have been recently developed. Of those reported in this review, the dyadic Observing Patient Involvement in Decision Making (dyadic-OPTION) tool presents the most promise for measuring core components of patient participation. 
There remains a need for further study into valid, reliable and feasible strategies for measuring patient participation as part of continuous quality improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  15. Incident learning in pursuit of high reliability: implementing a comprehensive, low-threshold reporting program in a large, multisite radiation oncology department.

    PubMed

    Gabriel, Peter E; Volz, Edna; Bergendahl, Howard W; Burke, Sean V; Solberg, Timothy D; Maity, Amit; Hahn, Stephen M

    2015-04-01

    Incident learning programs have been recognized as cornerstones of safety and quality assurance in so-called high reliability organizations in industries such as aviation and nuclear power. High reliability organizations are distinguished by their drive to continuously identify and proactively address a broad spectrum of latent safety issues. Many radiation oncology institutions have reported on their experience in tracking and analyzing adverse events and near misses but few have incorporated the principles of high reliability into their programs. Most programs have focused on the reporting and retrospective analysis of a relatively small number of significant adverse events and near misses. To advance a large, multisite radiation oncology department toward high reliability, a comprehensive, cost-effective, electronic condition reporting program was launched to enable the identification of a broad spectrum of latent system failures, which would then be addressed through a continuous quality improvement process. A comprehensive program, including policies, work flows, and information system, was designed and implemented, with use of a low reporting threshold to focus on precursors to adverse events. In a 46-month period from March 2011 through December 2014, a total of 8,504 conditions (average, 185 per month, 1 per patient treated, 3.9 per 100 fractions [individual treatments]) were reported. Some 77.9% of clinical staff members reported at least 1 condition. Ninety-eight percent of conditions were classified in the lowest two of four severity levels, providing the opportunity to address conditions before they contribute to adverse events. Results after approximately four years show excellent employee engagement, a sustained rate of reporting, and a focus on low-level issues leading to proactive quality improvement interventions.

  16. Reviewing Reliability and Validity of Information for University Educational Evaluation

    NASA Astrophysics Data System (ADS)

    Otsuka, Yusaku

    To better utilize evaluations in higher education, it is necessary to share the methods of reviewing reliability and validity of examination scores and grades, and to accumulate and share data for confirming results. Before the GPA system is first introduced into a university or college, the reliability of examination scores and grades, especially for essay examinations, must be assured. Validity is a complicated concept, so should be assured in various ways, including using professional audits, theoretical models, and statistical data analysis. Because individual students and teachers are continually improving, using evaluations to appraise their progress is not always compatible with using evaluations in appraising the implementation of accountability in various departments or the university overall. To better utilize evaluations and improve higher education, evaluations should be integrated into the current system by sharing the vision of an academic learning community and promoting interaction between students and teachers based on sufficiently reliable and validated evaluation tools.

  17. Confirmatory Factor Analysis of the System for Evaluation of Teaching Qualities (SETQ) in Graduate Medical Training.

    PubMed

    Boerebach, Benjamin C M; Lombarts, Kiki M J M H; Arah, Onyebuchi A

    2016-03-01

    The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians' teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians' teaching performance. © The Author(s) 2014.
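    The internal consistency estimates reported for such subscales are conventionally Cronbach's alpha. As an illustration (the items and trainee ratings below are hypothetical, not SETQ data):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for internal consistency.

        items: (n respondents x k items) array of ratings on one subscale.
        """
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of subscale totals
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical trainee ratings of one physician on a 5-item subscale (1-5)
    ratings = [[4, 5, 4, 4, 5],
               [3, 3, 4, 3, 3],
               [5, 5, 5, 4, 5],
               [2, 3, 2, 2, 3],
               [4, 4, 5, 4, 4]]
    print(round(cronbach_alpha(ratings), 2))
    ```

    Values in this range are what "high internal consistency" refers to; alpha above roughly 0.7-0.8 is usually read as satisfactory subscale reliability.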

  18. THE RELIABILITY OF HAND-WRITTEN AND COMPUTERISED RECORDS OF BIRTH DATA COLLECTED AT BARAGWANATH HOSPITAL IN SOWETO

    PubMed Central

    Ellison, GTH; Richter, LM; de Wet, T; Harris, HE; Griesel, RD; McIntyre, JA

    2007-01-01

    This study examined the reliability of hand-written and computerised records of birth data collected during the Birth to Ten study at Baragwanath Hospital in Soweto. The reliability of record-keeping in hand-written obstetric and neonatal files was assessed by comparing duplicate records of six different variables abstracted from six different sections in these files. The reliability of computerised record-keeping was assessed by comparing the original hand-written record of each variable with records contained in the hospital’s computerised database. These data sets displayed similar levels of reliability, which suggests that similar errors occurred when data were transcribed from one section of the files to the next, and from these files to the computerised database. In both sets of records reliability was highest for the categorical variable infant sex, and for those continuous variables (such as maternal age and gravidity) recorded with unambiguous units. Reliability was lower for continuous variables that could be recorded with different levels of precision (such as birth weight), those that were occasionally measured more than once, and those that could be measured using more than one measurement technique (such as gestational age). Reducing the number of times records are transcribed, categorising continuous variables, and standardising the techniques used for measuring and recording variables would improve the reliability of both hand-written and computerised data sets. PMID:9287552

  19. 46 CFR 62.30-1 - Failsafe.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... subsystem, system, or vessel to determine the least critical consequence. (b) All automatic control, remote control, safety control, and alarm systems must be failsafe. ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety...

  20. The paradox of verbal autopsy in cause of death assignment: symptom question unreliability but predictive accuracy.

    PubMed

    Serina, Peter; Riley, Ian; Hernandez, Bernardo; Flaxman, Abraham D; Praveen, Devarsetty; Tallo, Veronica; Joshi, Rohina; Sanvictores, Diozele; Stewart, Andrea; Mooney, Meghan D; Murray, Christopher J L; Lopez, Alan D

    2016-01-01

    We believe that it is important that governments understand the reliability of the mortality data which they have at their disposable to guide policy debates. In many instances, verbal autopsy (VA) will be the only source of mortality data for populations, yet little is known about how the accuracy of VA diagnoses is affected by the reliability of the symptom responses. We previously described the effect of the duration of time between death and VA administration on VA validity. In this paper, using the same dataset, we assess the relationship between the reliability and completeness of symptom responses and the reliability and accuracy of cause of death (COD) prediction. The study was based on VAs in the Population Health Metrics Research Consortium (PHMRC) VA Validation Dataset from study sites in Bohol and Manila, Philippines and Andhra Pradesh, India. The initial interview was repeated within 3-52 months of death. Question responses were assessed for reliability and completeness between the two survey rounds. COD was predicted by Tariff Method. A sample of 4226 VAs was collected for 2113 decedents, including 1394 adults, 349 children, and 370 neonates. Mean question reliability was unexpectedly low ( kappa  = 0.447): 42.5 % of responses positive at the first interview were negative at the second, and 47.9 % of responses positive at the second had been negative at the first. Question reliability was greater for the short form of the PHMRC instrument ( kappa  = 0.497) and when analyzed at the level of the individual decedent ( kappa  = 0.610). Reliability at the level of the individual decedent was associated with COD predictive reliability and predictive accuracy. Families give coherent accounts of events leading to death but the details vary from interview to interview for the same case. Accounts are accurate but inconsistent; different subsets of symptoms are identified on each occasion. 
However, there are sufficient accurate and consistent subsets of symptoms to enable the Tariff Method to assign a COD. Questions which contributed most to COD prediction were also the most reliable and consistent across repeat interviews; these have been included in the short form VA questionnaire. Accuracy and reliability of diagnosis for an individual death depend on the quality of interview. This has considerable implications for the progressive roll out of VAs into civil registration and vital statistics (CRVS) systems.
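    The kappa statistic reported above measures chance-corrected agreement between the two interview rounds. As an illustration only (the response pairs below are made up, not PHMRC data), Cohen's kappa for paired yes/no symptom responses can be computed like this:

```python
# Sketch: Cohen's kappa for paired yes/no symptom responses from two
# interview rounds, as used to quantify test-retest reliability.
# The response pairs below are illustrative, not data from the study.

def cohens_kappa(pairs):
    """pairs: list of (first_interview, second_interview) booleans."""
    n = len(pairs)
    p_observed = sum(a == b for a, b in pairs) / n
    # Chance agreement from each round's marginal "yes" rates
    p1_yes = sum(a for a, _ in pairs) / n
    p2_yes = sum(b for _, b in pairs) / n
    p_chance = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_observed - p_chance) / (1 - p_chance)

responses = [(True, True), (True, False), (False, False), (False, False),
             (True, True), (False, True), (True, True), (False, False)]
print(round(cohens_kappa(responses), 3))  # -> 0.5
```

    By the usual convention, kappa values of 0.4-0.6 indicate only moderate agreement, which is why the mean reliability of 0.447 above is described as unexpectedly low.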

  1. Effects of forcing uncertainties in the improvement skills of assimilating satellite soil moisture retrievals into flood forecasting models

    USDA-ARS?s Scientific Manuscript database

    Floods have negative impacts on society, causing damage to infrastructure and industry and, in the worst cases, loss of human life. Thus, early and accurate warning is crucial to significantly reduce the impacts on public safety and the economy. Reliable flood warning can be generated using ...

  2. Eucalypt powdery mildew caused by Podosphaera pannosa in Brazil

    Treesearch

    Natalia R. Fonseca; Lucio M. S. Guimaraes; Raul P. Pires; Ned B. Klopfenstein; Acelino C. Alfenas

    2017-01-01

    Eucalypt powdery mildew is an important disease in greenhouses and clonal hedges of Eucalyptus spp. in Brazil, which can cause leaf and shoot distortion, shoot discoloration, and growth reduction that results in production losses. Because reliable information regarding the causal agent of the disease is lacking, this study used ITS and 28S rDNA sequencing and...

  3. Precise time and time interval applications to electric power systems

    NASA Technical Reports Server (NTRS)

    Wilson, Robert E.

    1992-01-01

    There are many applications of precise time and time interval (frequency) in operating modern electric power systems. Many generators and customer loads are operated in parallel. The reliable transfer of electrical power to the consumer partly depends on measuring power system frequency consistently in many locations. The internal oscillators in the widely dispersed frequency measuring units must be syntonized. Elaborate protection and control systems guard the high voltage equipment from short and open circuits. For the highest reliability of electric service, engineers need to study all control system operations. Precise timekeeping networks aid in the analysis of power system operations by synchronizing the clocks on recording instruments. Utility engineers want to reproduce events that caused loss of service to customers. Precise timekeeping networks can synchronize protective relay test-sets. For dependable electrical service, all generators and large motors must remain close to speed synchronism. The stable response of a power system to perturbations is critical to continuity of electrical service. Research shows that measurement of the power system state vector can aid in the monitoring and control of system stability. If power system operators know that a lightning storm is approaching a critical transmission line or transformer, they can modify operating strategies. Knowledge of the location of a short circuit fault can speed the re-energizing of a transmission line. One fault location technique requires clocks synchronized to one microsecond. Current research seeks to find out if one microsecond timekeeping can aid and improve power system control and operation.

  4. Interrater Reliability of the Postoperative Epidural Fibrosis Classification: A Histopathologic Study in the Rat Model.

    PubMed

    Sae-Jung, Surachai; Jirarattanaphochai, Kitti; Sumananont, Chat; Wittayapairoj, Kriangkrai; Sukhonthamarn, Kamolsak

    2015-08-01

    Agreement study. To validate the interrater reliability of the histopathological classification of the post-laminectomy epidural fibrosis in an animal model. Epidural fibrosis is a common cause of failed back surgery syndrome. Many animal experiments have been developed to investigate the prevention of epidural fibrosis. One of the common outcome measurements is the epidural fibrous adherence grading, but the classification has not yet been validated. Five identical sets of histopathological digital files of L5-L6 laminectomized adult Sprague-Dawley rats, representing various degrees of postoperative epidural fibrous adherence were randomized and evaluated by five independent assessors masked to the study processes. Epidural fibrosis was rated as grade 0 (no fibrosis), grade 1 (thin fibrous band), grade 2 (continuous fibrous adherence for less than two-thirds of the laminectomy area), or grade 3 (large fibrotic tissue for more than two-thirds of the laminectomy area). A statistical analysis was performed. Four hundred slides were independently evaluated by each assessor. The percent agreement and intraclass correlation coefficient (ICC) between each pair of assessors varied from 73.5% to 81.3% and from 0.81 to 0.86, respectively. The overall ICC was 0.83 (95% confidence interval, 0.81-0.86). The postoperative epidural fibrosis classification showed almost perfect agreement among the assessors. This classification can be used in research involving the histopathology of postoperative epidural fibrosis; for example, for the development of preventions of postoperative epidural fibrosis or treatment in an animal model.

  5. Sliding into happiness: A new tool for measuring affective responses to words

    PubMed Central

    Warriner, Amy Beth; Shore, David I.; Schmidt, Louis A.; Imbault, Constance L.; Kuperman, Victor

    2016-01-01

    Reliable measurement of affective responses is critical for research into human emotion. Affective evaluation of words is most commonly gauged on multiple dimensions, including valence (positivity) and arousal, using a rating scale. Despite its popularity, this scale is open to criticism: it generates ordinal data that are often misinterpreted as interval, it does not provide the fine resolution that recent theoretical accounts of emotion deem essential, and its extremes may not be properly calibrated. In five experiments, we introduce a new slider tool for affective evaluation of words on a continuous, well-calibrated, high-resolution scale. In Experiment 1, participants were shown a word and asked to move a manikin representing themselves closer to or farther away from the word. The manikin's distance from the word strongly correlated with the word's valence. In Experiment 2, individual differences in shyness and sociability elicited reliable differences in distance from the words. Experiment 3 validated the results of Experiments 1 and 2 using a demographically more diverse population of responders. Experiment 4 (along with Experiment 2) suggested that task demand is not a likely cause of scale recalibration. Finally, in Experiment 5, men and women placed a manikin closer to or farther from words that showed sex differences in valence, highlighting the sensitivity of this measure to group differences. These findings shed new light on interactions among affect, language, and individual differences, and demonstrate the utility of a new tool for measuring word affect. PMID:28252996

  6. Long-term reliability study and failure analysis of quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Xie, Feng; Nguyen, Hong-Ky; Leblanc, Herve; Hughes, Larry; Wang, Jie; Miller, Dean J.; Lascola, Kevin

    2017-02-01

    Here we present lifetime test results of 4 groups of quantum cascade lasers (QCL) under various aging conditions including an accelerated life test. The total accumulated life time exceeds 1.5 million device·hours, which is the largest QCL reliability study ever reported. The longest single device aging time was 46.5 thousand hours (without failure) in the room temperature test. Four failures were found in a group of 19 devices subjected to the accelerated life test with a heat-sink temperature of 60 °C and a continuous-wave current of 1 A. Visual inspection of the laser facets of failed devices revealed an astonishing phenomenon, which has never been reported before, which manifested as a dark belt of an unknown substance appearing on facets. Although initially assumed to be contamination from the environment, failure analysis revealed that the dark substance is a thermally induced oxide of InP in the buried heterostructure semiinsulating layer. When the oxidized material starts to cover the core and blocks the light emission, it begins to cause the failure of QCLs in the accelerated test. An activation energy of 1.2 eV is derived from the dependence of the failure rate on laser core temperature. With the activation energy, the mean time to failure of the quantum cascade lasers operating at a current density of 5 kA/cm2 and heat-sink temperature of 25°C is expected to be 809 thousand hours.
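    The extrapolation from accelerated-test failures to use-condition MTTF described above follows the standard Arrhenius model. A minimal sketch, assuming illustrative core temperatures and a hypothetical 1000-hour stress MTTF (the abstract reports only the 1.2 eV activation energy, not these operating points):

```python
# Sketch: Arrhenius acceleration factor for extrapolating MTTF from an
# accelerated life test, using the 1.2 eV activation energy reported in
# the abstract. The two temperatures and the stress MTTF are illustrative
# assumptions, not the core temperatures measured in the study.
import math

k_B = 8.617e-5            # Boltzmann constant, eV/K
Ea = 1.2                  # activation energy, eV
T_stress = 273.15 + 95.0  # assumed core temperature under stress, K
T_use = 273.15 + 60.0     # assumed core temperature in use, K

af = math.exp((Ea / k_B) * (1.0 / T_use - 1.0 / T_stress))
mttf_use = 1000.0 * af    # use-condition MTTF, given an assumed 1000 h stress MTTF
print(round(af, 1), round(mttf_use))
```

    With a high activation energy such as 1.2 eV, even a modest temperature reduction yields a large acceleration factor, which is how a multi-hundred-thousand-hour MTTF can be projected from a much shorter test.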

  7. Thermal dosimetry for bladder hyperthermia treatment. An overview.

    PubMed

    Schooneveldt, Gerben; Bakker, Akke; Balidemaj, Edmond; Chopra, Rajiv; Crezee, Johannes; Geijsen, Elisabeth D; Hartmann, Josefin; Hulshof, Maarten C C M; Kok, H Petra; Paulides, Margarethus M; Sousa-Escandon, Alejandro; Stauffer, Paul R; Maccarini, Paolo F

    2016-06-01

    The urinary bladder is a fluid-filled organ. This makes, on the one hand, the internal surface of the bladder wall relatively easy to heat and ensures in most cases a relatively homogeneous temperature distribution; on the other hand the variable volume, organ motion, and moving fluid cause artefacts for most non-invasive thermometry methods, and require additional efforts in planning accurate thermal treatment of bladder cancer. We give an overview of the thermometry methods currently used and investigated for hyperthermia treatments of bladder cancer, and discuss their advantages and disadvantages within the context of the specific disease (muscle-invasive or non-muscle-invasive bladder cancer) and the heating technique used. The role of treatment simulation to determine the thermal dose delivered is also discussed. Generally speaking, invasive measurement methods are more accurate than non-invasive methods, but provide more limited spatial information; therefore, a combination of both is desirable, preferably supplemented by simulations. Current efforts at research and clinical centres continue to improve non-invasive thermometry methods and the reliability of treatment planning and control software. Due to the challenges in measuring temperature across the non-stationary bladder wall and surrounding tissues, more research is needed to increase our knowledge about the penetration depth and typical heating pattern of the various hyperthermia devices, in order to further improve treatments. The ability to better determine the delivered thermal dose will enable clinicians to investigate the optimal treatment parameters, and consequentially, to give better controlled, thus even more reliable and effective, thermal treatments.

  8. Evaluation of the anti-neoplastic effect of sorafenib on liver cancer through bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Liang, Qian; Ye, Jinzuo; Du, Yang; Chi, Chongwei; Tian, Jie

    2017-03-01

    Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related deaths worldwide. In this study, we evaluated the efficacy of sorafenib on hepatocellular carcinoma through bioluminescence tomography (BLT) based on a Micro-CT/BLT multi-modal system. Initially, the human hepatocellular carcinoma cell line HepG2-Red-FLuc, transfected with the luciferase gene, was cultured. The orthotopic liver tumor mouse model was then established in 4-5-week-old athymic male Balb/c nude mice by inoculating the HepG2-Red-FLuc cell suspension into the liver lobe under isoflurane anesthesia. 15-20 days after tumor cell implantation, the mice were divided into two groups: the sorafenib treatment group and the control group. The mice in the treatment group were treated with sorafenib at a dosage of 62 mg/kg/day by oral gavage for 14 consecutive days, and the mice in the control group were treated with an equal volume of sterile water. Tumor growth and treatment efficacy were dynamically monitored through BLT. The results showed that the growth of liver cancer can be dynamically monitored from a very early stage and that sorafenib treatment efficacy can be reliably and objectively assessed using BLT imaging. Our experimental results demonstrated that sorafenib can inhibit tumor growth effectively. BLT enabled the non-invasive and reliable assessment of anti-neoplastic drug efficacy on liver cancer.

  9. Reliability enhancement of APR + diverse protection system regarding common cause failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, Y. G.; Kim, Y. M.; Yim, H. S.

    2012-07-01

    The Advanced Power Reactor Plus (APR+) nuclear power plant design has been developed on the basis of the APR1400 (Advanced Power Reactor 1400 MWe) to further enhance safety and economics. For the mitigation of Anticipated Transients Without Scram (ATWS) as well as Common Cause Failures (CCF) within the Plant Protection System (PPS) and the Engineered Safety Features - Component Control System (ESF-CCS), several design improvement features have been implemented in the Diverse Protection System (DPS) of the APR+ plant. As compared to the APR1400 DPS design, the APR+ DPS has been designed to provide the Safety Injection Actuation Signal (SIAS) considering a large-break LOCA accident concurrent with the CCF. Additionally, several design improvement features, such as a channel structure with redundant processing modules and changes to the system communication and automatic system test methods, are introduced to enhance the functional reliability of the DPS. Therefore, it is expected that the APR+ DPS can provide enhanced safety and reliability regarding possible CCF in the safety-grade I&C systems as well as in the DPS itself. (authors)

  10. Automation reliability in unmanned aerial vehicle control: a reliance-compliance model of automation dependence in high workload.

    PubMed

    Dixon, Stephen R; Wickens, Christopher D

    2006-01-01

    Two experiments were conducted in which participants navigated a simulated unmanned aerial vehicle (UAV) through a series of mission legs while searching for targets and monitoring system parameters. The goal of the study was to highlight the qualitatively different effects of automation false alarms and misses as they relate to operator compliance and reliance, respectively. Background data suggest that automation false alarms cause reduced compliance, whereas misses cause reduced reliance. In two studies, 32 and 24 participants, including some licensed pilots, performed in-lab UAV simulations that presented the visual world and collected dependent measures. Results indicated that with the low-reliability aids, false alarms correlated with poorer performance in the system failure task, whereas misses correlated with poorer performance in the concurrent tasks. Compliance and reliance do appear to be affected by false alarms and misses, respectively, and are relatively independent of each other. Practical implications are that automated aids must be fairly reliable to provide global benefits and that false alarms and misses have qualitatively different effects on performance.

  11. Seeking high reliability in primary care: Leadership, tools, and organization.

    PubMed

    Weaver, Robert R

    2015-01-01

    Leaders in health care increasingly recognize that improving health care quality and safety requires developing an organizational culture that fosters high reliability and continuous process improvement. For various reasons, a reliability-seeking culture is lacking in most health care settings. Developing a reliability-seeking culture requires leaders' sustained commitment to reliability principles using key mechanisms to embed those principles widely in the organization. The aim of this study was to examine how key mechanisms used by a primary care practice (PCP) might foster a reliability-seeking, system-oriented organizational culture. A case study approach was used to investigate the PCP's reliability culture. The study examined four cultural artifacts used to embed reliability-seeking principles across the organization: leadership statements, decision support tools, and two organizational processes. To decipher their effects on reliability, the study relied on observations of work patterns and the tools' use, interactions during morning huddles and process improvement meetings, interviews with clinical and office staff, and a "collective mindfulness" questionnaire. The five reliability principles framed the data analysis. Leadership statements articulated principles that oriented the PCP toward a reliability-seeking culture of care. Reliability principles became embedded in the everyday discourse and actions through the use of "problem knowledge coupler" decision support tools and daily "huddles." Practitioners and staff were encouraged to report unexpected events or close calls that arose and which often initiated a formal "process change" used to adjust routines and prevent adverse events from recurring. Activities that foster reliable patient care became part of the taken-for-granted routine at the PCP. 
The analysis illustrates the role leadership, tools, and organizational processes play in developing and embedding a reliability-seeking culture across an organization. Progress toward a reliability-seeking, system-oriented approach to care remains ongoing, and movement in that direction requires deliberate and sustained effort by committed leaders in health care.

  12. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms - Part II.

    PubMed

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources.

  13. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms – Part II

    PubMed Central

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources. PMID:28584367

  14. Reliability and validity of current physical examination techniques of the foot and ankle.

    PubMed

    Wrobel, James S; Armstrong, David G

    2008-01-01

    This literature review was undertaken to evaluate the reliability and validity of the orthopedic, neurologic, and vascular examination of the foot and ankle. We searched PubMed, the US National Library of Medicine's database of biomedical citations and abstracts, for relevant publications from 1966 to 2006. We also searched the bibliographies of the retrieved articles. We identified 35 articles to review. For discussion purposes, we used reliability interpretation guidelines proposed by others. For the kappa statistic, which calculates reliability for dichotomous (e.g., yes or no) measures, reliability was defined as moderate (0.4-0.6), substantial (0.6-0.8), or outstanding (> 0.8). For the intraclass correlation coefficient, which calculates reliability for continuous (e.g., degrees of motion) measures, reliability was defined as good (> 0.75), moderate (0.5-0.75), or poor (< 0.5). Intraclass correlations varied widely, ranging from 0.08 to 0.98 depending on the examination performed. Concurrent and predictive validity ranged from poor to good. Although hundreds of articles describe various methods of lower-extremity assessment, few rigorously assess their measurement properties. This information can be used both by the discerning clinician in the art of clinical examination and by the scientist studying the measurement properties of reproducibility and validity.
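    The interpretation guidelines quoted above amount to a simple threshold mapping. A sketch of that mapping follows; handling of the exact boundary values is an assumption, since the review gives only the ranges:

```python
# Sketch of the interpretation guidelines quoted in the review: kappa for
# dichotomous measures, ICC for continuous measures. Thresholds follow the
# text; behavior at the exact boundaries is an assumption.

def interpret_kappa(k):
    if k > 0.8:
        return "outstanding"
    if k > 0.6:
        return "substantial"
    if k >= 0.4:
        return "moderate"
    return "below moderate"

def interpret_icc(icc):
    if icc > 0.75:
        return "good"
    if icc >= 0.5:
        return "moderate"
    return "poor"

print(interpret_kappa(0.7), interpret_icc(0.83))  # -> substantial good
```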

  15. Safety, reliability, maintainability and quality provisions for the Space Shuttle program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This publication establishes common safety, reliability, maintainability and quality provisions for the Space Shuttle Program. NASA Centers shall use this publication both as the basis for negotiating safety, reliability, maintainability and quality requirements with Shuttle Program contractors and as the guideline for conduct of program safety, reliability, maintainability and quality activities at the Centers. Centers shall assure that applicable provisions of the publication are imposed in lower tier contracts. Centers shall give due regard to other Space Shuttle Program planning in order to provide an integrated total Space Shuttle Program activity. In the implementation of safety, reliability, maintainability and quality activities, consideration shall be given to hardware complexity, supplier experience, state of hardware development, unit cost, and hardware use. The approach and methods for contractor implementation shall be described in the contractors' safety, reliability, maintainability and quality plans. This publication incorporates provisions of the NASA documents NHB 1700.1, 'NASA Safety Manual, Vol. 1'; NHB 5300.4(1A), 'Reliability Program Provisions for Aeronautical and Space System Contractors'; and NHB 5300.4(1B), 'Quality Program Provisions for Aeronautical and Space System Contractors'. It has been tailored from the above documents based on experience in other programs. It is intended that this publication be reviewed and revised, as appropriate, to reflect new experience and to assure continuing viability.

  16. Probabilistic Design of a Plate-Like Wing to Meet Flutter and Strength Requirements

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, T.; Mason, Brian H.; Smith, Steven A.; Naser, Ahmad S.

    2002-01-01

    An approach is presented for carrying out reliability-based design of a metallic, plate-like wing to meet strength and flutter requirements that are given in terms of risk/reliability. The design problem is to determine the thickness distribution such that wing weight is a minimum and the probability of failure is less than a specified value. Failure is assumed to occur if either the flutter speed is less than a specified allowable or the stress caused by a pressure loading is greater than a specified allowable. Four uncertain quantities are considered: wing thickness, calculated flutter speed, allowable stress, and magnitude of a uniform pressure load. The reliability-based design optimization approach described herein starts with a design obtained using conventional deterministic design optimization with margins on the allowables. Reliability is calculated using Monte Carlo simulation with response surfaces that provide values of stresses and flutter speed. During the reliability-based design optimization, the response surfaces and move limits are coordinated to ensure accuracy of the response surfaces. Studies carried out in the paper show the relationship between reliability and weight and indicate that, for the design problem considered, increases in reliability can be obtained with modest increases in weight.
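    The reliability calculation described above, Monte Carlo simulation over competing flutter and strength failure modes, can be sketched as follows. All distributions and limits here are illustrative assumptions, not values from the paper, and the paper evaluates responses via response surfaces rather than directly:

```python
# Sketch: Monte Carlo estimate of failure probability for a wing design
# with two failure modes: flutter speed below an allowable, or stress
# above an uncertain allowable. All numbers are illustrative assumptions.
import random

random.seed(1)
N = 100_000
failures = 0
for _ in range(N):
    flutter_speed = random.gauss(250.0, 15.0)     # m/s, computed flutter speed
    allowable_flutter = 200.0                     # m/s, required minimum
    stress = random.gauss(300.0, 25.0)            # MPa, stress under pressure load
    allowable_stress = random.gauss(400.0, 20.0)  # MPa, uncertain allowable
    if flutter_speed < allowable_flutter or stress > allowable_stress:
        failures += 1

print(failures / N)  # estimated probability of failure
```

    In a reliability-based design loop, an optimizer would adjust the thickness distribution until this estimated probability drops below the specified value at minimum weight.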

  17. Reliability of Visual and Somatosensory Feedback in Skilled Movement: The Role of the Cerebellum.

    PubMed

    Mizelle, J C; Oparah, Alexis; Wheaton, Lewis A

    2016-01-01

    The integration of vision and somatosensation is required to allow for accurate motor behavior. While both sensory systems contribute to an understanding of the state of the body through continuous updating and estimation, how the brain processes unreliable sensory information remains to be fully understood in the context of complex action. Using functional brain imaging, we sought to understand the role of the cerebellum in weighting visual and somatosensory feedback by selectively reducing the reliability of each sense individually during a tool use task. We broadly hypothesized upregulated activation of the sensorimotor and cerebellar areas during movement with reduced visual reliability, and upregulated activation of occipital brain areas during movement with reduced somatosensory reliability. As specifically compared to reduced somatosensory reliability, we expected greater activations of ipsilateral sensorimotor cerebellum for intact visual and somatosensory reliability. Further, we expected that ipsilateral posterior cognitive cerebellum would be affected with reduced visual reliability. We observed that reduced visual reliability results in a trend towards the relative consolidation of sensorimotor activation and an expansion of cerebellar activation. In contrast, reduced somatosensory reliability was characterized by the absence of cerebellar activations and a trend towards the increase of right frontal, left parietofrontal activation, and temporo-occipital areas. Our findings highlight the role of the cerebellum for specific aspects of skillful motor performance. This has relevance to understanding basic aspects of brain functions underlying sensorimotor integration, and provides a greater understanding of cerebellar function in tool use motor control.

  18. Applicability of the ReproQ client experiences questionnaire for quality improvement in maternity care

    PubMed Central

    Scheerhagen, Marisja; Tholhuijsen, Dominique J.C.; Birnie, Erwin; Franx, Arie; Bonsel, Gouke J.

    2016-01-01

    Background. The ReproQuestionnaire (ReproQ) measures the client’s experience with maternity care, following the WHO responsiveness model. In 2015, the ReproQ was appointed as the national client experience questionnaire and will be added to the national list of indicators in maternity care. For use of the ReproQ in quality improvement, the questionnaire should be able to identify best and worst practices. To achieve this, the ReproQ should be reliable and able to identify relevant differences. Methods and Findings. We sent questionnaires to 17,867 women six weeks after labor (response 32%). Additionally, we invited 915 women for the retest (response 29%). Next we determined the test–retest reliability and the Minimally Important Difference (MID), and performed six known-group comparisons, using two scoring methods: the percentage of women with at least one negative experience, and the mean score. Reliability was ‘good’ for both the percentage negative experience and the mean score (absolute agreement = 79%; intraclass correlation coefficient = 0.78). The MID was 11% for the percentage negative and 0.15 for the mean score. Application of the MIDs revealed relevant differences in women’s experience with regard to professional continuity, setting continuity, and travel time. Conclusions. The measurement characteristics of the ReproQ support its use in the quality improvement cycle. Test–retest reliability was good, and the observed minimally important difference allows for discrimination of good and poor performers, also at the level of specific features of performance. PMID:27478690

  19. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, upstates may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
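    The expected steady-state reward rate described above is obtained by solving the chain's stationary equations and weighting each state's probability by its reward rate. A minimal sketch for the two-state availability case mentioned in the abstract, with illustrative failure and repair rates:

```python
# Sketch: expected steady-state reward rate of a Markov reward model.
# Two-state availability model (up/down) with illustrative failure rate
# lam and repair rate mu; the up state carries reward rate 1 and the down
# state reward rate 0, so the expected reward rate is the availability.
import numpy as np

lam, mu = 0.001, 0.1          # failure and repair rates (per hour), assumed
Q = np.array([[-lam,  lam],   # generator matrix: state 0 = up, state 1 = down
              [  mu,  -mu]])
reward = np.array([1.0, 0.0]) # reward rate attached to each state

# Solve pi Q = 0 subject to sum(pi) = 1 by appending a normalization row.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(round(float(pi @ reward), 6))  # equals mu / (lam + mu)
```

    The same linear-solve-plus-weighting pattern extends to larger chains; only the generator matrix and the reward vector change.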

  20. Interexaminer reliability in physical examination of patients with low back pain.

    PubMed

    Strender, L E; Sjöblom, A; Sundell, K; Ludwig, R; Taube, A

    1997-04-01

    Seventy-one patients with low back pain were examined by two physiotherapists (50 patients) and two physicians (21 patients). The two physiotherapists had worked together for many years, but the two physicians had not. The interexaminer reliability of the clinical tests included in the physical examination was evaluated. The aim was to evaluate the interexaminer reliability of clinical tests used in the physical examination of patients with low back pain under ideal circumstances, which was the case for the physiotherapists. Numerous clinical tests are used in the evaluation of patients with low back pain. To reach the correct diagnosis, only tests with an acceptable validity and reliability should be used. Previous studies have mainly shown low reliability. It is important that clinical tests not be rejected because of low reliability caused by differences between examiners in their performance of the examination and in their definition of normal results. Two examiners, either two physiotherapists or two physicians, independently examined patients with low back pain. In approximately half of the clinical tests studied, an acceptable reliability was demonstrated. On the basis of the physiotherapists' series, the reliability was acceptable for a number of clinical tests used in the evaluation of patients with low back pain. The results suggest that clinical tests should be standardized to a much higher degree than they are today.

  1. Tackling reliability and construct validity: the systematic development of a qualitative protocol for skill and incident analysis.

    PubMed

    Savage, Trevor Nicholas; McIntosh, Andrew Stuart

    2017-03-01

    It is important to understand factors contributing to and directly causing sports injuries to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis and biomechanical theoretical models and epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable (52-90% agreement and ICC between 0.6 and 0.9) with other studies that have reported inter-rater reliability of qualitative analysis of human movement.

  2. [Santa Claus is perceived as reliable and friendly: results of the Danish Christmas 2013 survey].

    PubMed

    Amin, Faisal Mohammad; West, Anders Sode; Jørgensen, Carina Sleiborg; Simonsen, Sofie Amalie; Lindberg, Ulrich; Tranum-Jensen, Jørgen; Hougaard, Anders

    2013-12-02

    Several studies have indicated that the population in general perceives doctors as reliable. In the present study, perceptions of reliability and kindness attributed to another socially significant archetype, Santa Claus, were examined in comparison with the doctor. In all, 52 randomly chosen participants were shown a film in which a narrator dressed either as Santa Claus or as a doctor tells an identical story. Structured interviews were then used to assess the subjects' perceptions of reliability and kindness in relation to the narrator's appearance. We found a strong trend toward Santa Claus being perceived as friendlier than the doctor (p = 0.053). However, there was no significant difference in the perception of reliability between Santa Claus and the doctor (p = 0.524). The positive associations attributed to Santa Claus probably explain why he is perceived as friendlier than the doctor, who may be associated with more serious and unpleasant memories of illness and suffering. Surprisingly, and despite his being an imaginary person, Santa Claus was assessed as being as reliable as the doctor.

  3. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on correct water chemistry. In order to ensure it, it is necessary to monitor the heat carrier quality, which, in its turn, is provided by the chemical monitoring system. Thus, the monitoring system's reliability plays an important part in providing the reliability of the main equipment. The monitoring system's reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI, and so on [1,2]. Power plant workers dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system so that they are able to remove the fault quickly. A computer consultant system for personnel maintaining the sensors and other chemical monitoring equipment can help them notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of a part of the chemical monitoring system.

  4. Reliability and validity of the Outcome Expectations for Exercise Scale-2.

    PubMed

    Resnick, Barbara

    2005-10-01

    Development of a reliable and valid measure of outcome expectations for exercise for older adults will help establish the relationship between outcome expectations and exercise and facilitate the development of interventions to increase physical activity in older adults. The purpose of this study was to test the reliability and validity of the Outcome Expectations for Exercise-2 Scale (OEE-2), a 13-item measure with two subscales: positive OEE (POEE) and negative OEE (NOEE). The OEE-2 scale was given to 161 residents in a continuing-care retirement community. There was some evidence of validity based on confirmatory factor analysis, Rasch-analysis INFIT and OUTFIT statistics, and convergent validity and test criterion relationships. There was some evidence for reliability of the OEE-2 based on alpha coefficients, person- and item-separation reliability indexes, and R(2) values. Based on analyses, suggested revisions are provided for future use of the OEE-2. Although ongoing reliability and validity testing are needed, the OEE-2 scale can be used to identify older adults with low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and improve exercise behavior.
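    The alpha coefficients cited above can be illustrated with the standard Cronbach's alpha computation; the respondent-by-item matrix below is invented for the sketch and is not OEE-2 data.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var_sum / total_var)

    # Invented example: 5 respondents answering 3 Likert-style items (1-5).
    scores = np.array([
        [4, 5, 4],
        [3, 3, 2],
        [5, 5, 5],
        [2, 2, 3],
        [4, 4, 4],
    ], dtype=float)
    alpha = cronbach_alpha(scores)
    print(round(alpha, 3))  # -> 0.939
    ```

    Perfectly redundant items would push alpha to 1.0; uncorrelated items push it toward 0, which is why alpha is read as internal-consistency reliability.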

  5. Assessment of Caregiver Inventory for Rett Syndrome

    PubMed Central

    Lane, Jane B.; Salter, Amber R.; Jones, Nancy E.; Cutter, Gary; Horrigan, Joseph; Skinner, Steve A.; Kaufmann, Walter E.; Glaze, Daniel G.; Neul, Jeffrey L.; Percy, Alan K.

    2017-01-01

    Rett syndrome (RTT) requires total caregiver attention and leads to potential difficulties throughout life. The Caregiver Burden Inventory, designed for Alzheimer disease, was modified into a RTT Caregiver Inventory Assessment (RTT CIA). Reliability and face, construct, and concurrent validity were assessed in caregivers of individuals with RTT. Chi-square or Fisher’s exact tests for categorical variables and t-tests or Wilcoxon two-sample tests for continuous variables were utilized. The survey was completed by 198 caregivers; 70 caregivers completed the follow-up assessment. Exploratory factor analysis revealed good agreement for Physical Burden, Emotional Burden, and Social Burden. Internal reliability was high (Cronbach’s alpha: 0.898). The RTT CIA represents a reliable and valid measure, providing a needed metric of caregiver burden in this disorder. PMID:28132121

  6. Sensor Network Architectures for Monitoring Underwater Pipelines

    PubMed Central

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (Radio Frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring. PMID:22346669

  7. A comparison of four measures of moral reasoning.

    PubMed

    Wilmoth, G H; McFarland, S G

    1977-08-01

    Kohlberg's Moral Judgment Scale, Gilligan et al.'s Sexual Moral Judgment Scale, Maitland and Goldman's Objective Moral Judgment Scale, and Hogan's Maturity of Moral Judgment Scale were examined for reliability and inter-scale relationships. All measures except the Objective Moral Judgment Scale had good reliabilities. The obtained relations between the Moral Judgment Scale and the Sexual Moral Judgment Scale replicated previous research. The Objective Moral Judgment Scale was not found to validly assess the Kohlberg stages. The Maturity of Moral Judgment Scale scores were strongly related to the subjects' classification on the Kohlberg stages, and the scale appears to offer a reliable, quickly scored, and valid index of mature thought, although the scale's continuous scores do not permit clear stage classification.

  8. Sensor network architectures for monitoring underwater pipelines.

    PubMed

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (radio frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring.

  9. Spread prediction model of continuous steel tube based on BP neural network

    NASA Astrophysics Data System (ADS)

    Zhai, Jian-wei; Yu, Hui; Zou, Hai-bei; Wang, San-zhong; Liu, Li-gang

    2017-07-01

    Based on the roll pass geometry and the technological parameters of a three-roller continuous mandrel rolling mill in a factory, a finite element model is established to simulate the continuous rolling process of seamless steel tube, and the reliability of the finite element model is verified by comparing the simulation results with actual results for rolling force, wall thickness, and outer diameter of the tube. The effect of roller reduction, roller rotation speed, and blooming temperature on the spread rule is studied. Based on BP (Back Propagation) neural network technology, a spread prediction model of the continuously rolled tube is established by training on the wall thickness coefficient and spread coefficient of the tube, and rapid, accurate prediction of the rolled tube size is realized.
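    A minimal back-propagation network of the kind named above can be sketched as follows; the three inputs, single output, and synthetic training data are stand-ins for illustration, not the mill parameters or coefficients from the paper.

    ```python
    import numpy as np

    # Toy BP network: 3 inputs (stand-ins for reduction, roller speed, temperature)
    # -> 8 tanh hidden units -> 1 linear output (stand-in for spread coefficient).
    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(64, 3))
    y = 0.5 * X[:, :1] + 0.3 * X[:, 1:2] - 0.2 * X[:, 2:3]   # synthetic target

    W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
    lr = 0.1

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)              # forward pass, hidden layer
        pred = h @ W2 + b2                    # linear output layer
        err = pred - y                        # seed of the MSE gradient
        dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)    # back-propagate through tanh
        dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    mse = float((err ** 2).mean())
    print(round(mse, 5))
    ```

    The actual model would be trained on finite-element or mill measurements of the wall thickness and spread coefficients rather than this synthetic linear function.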

  10. Time-Tagged Risk/Reliability Assessment Program for Development and Operation of Space System

    NASA Astrophysics Data System (ADS)

    Kubota, Yuki; Takegahara, Haruki; Aoyagi, Junichiro

    We have investigated a new method of risk/reliability assessment for the development and operation of space systems. It is difficult to evaluate the risk of spacecraft because of long-duration operation, maintenance-free requirements, and the difficulty of testing under ground conditions. Conventional methods include FMECA, FTA, ETA, and others. These are not sufficient to assess chronological anomalies, and they make it hard to share information during R&D. A new method of risk and reliability assessment, T-TRAP (Time-tagged Risk/Reliability Assessment Program), is proposed as a management tool for the development and operation of space systems. T-TRAP, which consists of time-resolved Fault Tree and Criticality Analyses, enables the responsible personnel, upon occurrence of an anomaly in the system, to quickly identify the failure cause and decide on corrective actions. This paper describes the T-TRAP method and its utility.

  11. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    PubMed

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-10-01

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.

  12. Carcinogen File: The Ames Test.

    ERIC Educational Resources Information Center

    Kendall, Jim; Kriebel, David

    1979-01-01

    This test measures the capability of a chemical substance to cause mutations in special strains of the bacterium Salmonella. It is quick, taking only forty-eight hours, inexpensive, and reliable. (BB)

  13. Genital Warts

    MedlinePlus

    ... transmitted disease (STD) caused by the human papillomavirus (HPV). The warts usually appear as a small bump ... completely eliminate, the risk of catching or spreading HPV. The most reliable way to avoid infection is ...

  14. Measuring and Evaluating Trends for Reliability, Integrity, and Continued Success (METRICS) Act

    THOMAS, 111th Congress

    Rep. Holt, Rush [D-NJ-12

    2010-04-14

    House - 04/30/2010 Referred to the Subcommittee on Early Childhood, Elementary, and Secondary Education. Tracker: This bill has the status Introduced.

  15. Creation of a ceramics handbook

    NASA Technical Reports Server (NTRS)

    Craft, W. J.; Filatovs, G. J.

    1974-01-01

    A study was conducted to develop a ceramics handbook defining properties and parameters necessary for thermostructural design. Continuing efforts toward this goal, and in particular toward the evolution of a reliable predictor of fracture from current literature, are described.

  16. Agricultural impacts: Europe's diminishing bread basket

    NASA Astrophysics Data System (ADS)

    Meinke, Holger

    2014-07-01

    Global demand for wheat is projected to increase significantly with continuing population growth. Currently, Europe reliably produces about 29% of global wheat supply. However, this might be under threat from climate change if adaptive measures are not taken now.

  17. Investigation of prototype volcano-surveillance network

    NASA Technical Reports Server (NTRS)

    Eaton, J. P. (Principal Investigator); Ward, P. L.

    1973-01-01

    The author has identified the following significant results. The equipment installed in the volcano surveillance network continues to work quite reliably and earthquakes are being recorded at all sites. A summary of platform receptions per day has been prepared.

  18. Weigh-in-Motion systems evaluation : final report.

    DOT National Transportation Integrated Search

    1976-04-01

    This relatively short-term project was initiated in order to perfect installation, operation, and maintenance practices necessary for continued accurate and reliable operation of a computerized Weigh-in-Motion system which will be used to gather truc...

  19. 14 CFR 121.201 - Nontransport category airplanes: En route limitations: One engine inoperative.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... concerned: (1) The reliability of wind and weather forecasting. (2) The location and kinds of navigation... operating at the maximum continuous power available; (5) The airplane is operating in standard atmosphere...

  20. Using goal attainment scaling to evaluate a needs-led exercise programme for people with severe and profound intellectual disabilities.

    PubMed

    Jones, Martyn C; Walley, Robert M; Leech, Amanda; Paterson, Marion; Common, Stephanie; Metcalf, Charlotte

    2006-12-01

    The aim of this study was to evaluate whether involvement in a 16 week exercise programme improved goal attainment in areas of behaviour, access to community-based experiences, health and physical competence. Participants were women with severe intellectual disability and associated challenging behaviour (setting A, N = 14) and male/female service users with profound physical and intellectual disabilities (setting B, N = 8). The exercise programme included active and passive exercise, walking, swimming, hydrotherapy, team games and rebound therapy. Significant gains in aggregated goal attainment were demonstrated by week 16. The reliability and validity of our goal attainment procedures were demonstrated with inter-rater reliabilities exceeding 80 percent. Changes in goal attainment were concurrent with global clinical impression scores in a series of single case studies. Continuing care settings should dedicate care staff to provide routinized, continuing exercise programmes.

  1. Frequency-dependent reliability of spike propagation is function of axonal voltage-gated sodium channels in cerebellar Purkinje cells.

    PubMed

    Yang, Zhilai; Wang, Jin-Hui

    2013-12-01

    Spike propagation along nerve axons, like synaptic transmission, is essential to ensure neuronal communication. The secure propagation of sequential spikes toward axonal terminals is challenged in neurons with a high firing rate, such as cerebellar Purkinje cells. A shortfall of spike propagation makes some digital spikes disappear at axonal terminals, so elucidating the mechanisms underlying spike propagation reliability is crucial to finding strategies for preventing the loss of neuronal codes. As spike propagation failure is influenced by membrane potentials, this process is likely caused by alteration of the functional status of voltage-gated sodium channels (VGSCs). We examined this hypothesis in Purkinje cells by using pair recordings at their somata and axonal blebs in cerebellar slices. The reliability of spike propagation deteriorated as spike frequency was elevated. The frequency-dependent reliability of spike propagation was attenuated by inactivating VGSCs and improved by removing their inactivation. Thus, the functional status of axonal VGSCs influences the reliability of spike propagation.

  2. Scaling Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin

    2016-01-01

    For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
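    The spares-to-target-reliability step can be sketched with a standard Poisson sparing model; this is not the paper's actual ESM computation, and the failure rate, mission length, and target below are invented.

    ```python
    import math

    def spares_needed(failure_rate_per_hr: float, mission_hours: float,
                      target_reliability: float) -> int:
        """Smallest n with P(Poisson(lambda * t) <= n) >= target.

        Assumes a constant failure rate and instant replacement of a failed
        component from the spares pool.
        """
        lam = failure_rate_per_hr * mission_hours
        term = math.exp(-lam)        # P(0 failures)
        cumulative, n = term, 0
        while cumulative < target_reliability:
            n += 1
            term *= lam / n          # Poisson recurrence P(n) = P(n-1) * lam / n
            cumulative += term
        return n

    # Invented example: 10,000 h MTBF component on a 26,280 h (~3 yr) mission,
    # with a 0.99 target probability of never running out of spares.
    print(spares_needed(1e-4, 26280, 0.99))  # -> 7
    ```

    The ESM trade then weighs the launch mass of those spares against redundancy or a more closed (but heavier) technology option.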

  3. Measurement and Reliability of Response Inhibition

    PubMed Central

    Congdon, Eliza; Mumford, Jeanette A.; Cohen, Jessica R.; Galvan, Adriana; Canli, Turhan; Poldrack, Russell A.

    2012-01-01

    Response inhibition plays a critical role in adaptive functioning and can be assessed with the Stop-signal task, which requires participants to suppress prepotent motor responses. Evidence suggests that this ability to inhibit a prepotent motor response (reflected as Stop-signal reaction time (SSRT)) is a quantitative and heritable measure of interindividual variation in brain function. Although attention has been given to the optimal method of SSRT estimation, and initial evidence exists in support of its reliability, there is still variability in how Stop-signal task data are treated across samples. In order to examine this issue, we pooled data across three separate studies and examined the influence of multiple SSRT calculation methods and outlier calling on reliability (using Intra-class correlation). Our results suggest that an approach which uses the average of all available sessions, all trials of each session, and excludes outliers based on predetermined lenient criteria yields reliable SSRT estimates, while not excluding too many participants. Our findings further support the reliability of SSRT, which is commonly used as an index of inhibitory control, and provide support for its continued use as a neurocognitive phenotype. PMID:22363308

  4. Capacitor Technologies, Applications and Reliability

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Various aspects of capacitor technologies and applications are discussed. Major emphasis is placed on: the causes of failures; accelerated testing; screening tests; destructive physical analysis; applications techniques; and improvements in capacitor capabilities.

  5. Recent Enhancements to the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Kilgore, W. A.; Balakrishna, S.; Bobbitt, C. W.; Underwood, P.

    2003-01-01

    The National Transonic Facility continues to make enhancements to provide quality data in a safe, efficient, and cost-effective manner for aerodynamic ground testing. Recent enhancements discussed in this paper include the restoration of reliability and improved performance of the heat exchanger systems, resulting in the expansion of the NTF air operations envelope. Additionally, results are presented from a continued effort to reduce model dynamics through the use of a new, stiffer balance and sting.

  6. A Comparison of Reliability Measures for Continuous and Discontinuous Recording Methods: Inflated Agreement Scores with Partial Interval Recording and Momentary Time Sampling for Duration Events

    ERIC Educational Resources Information Center

    Rapp, John T.; Carroll, Regina A.; Stangeland, Lindsay; Swanson, Greg; Higgins, William J.

    2011-01-01

    The authors evaluated the extent to which interobserver agreement (IOA) scores, using the block-by-block method for events scored with continuous duration recording (CDR), were higher when the data from the same sessions were converted to discontinuous methods. Sessions with IOA scores of 89% or less with CDR were rescored using 10-s partial…

  7. Two Distinct Episodes Of Whooping Cough Caused By Consecutive Bordetella Pertussis And Bordetella Parapertussis Infections In A Fully Immunized Healthy Boy.

    PubMed

    Heininger, Ulrich; Schlassa, Detlef

    2016-11-01

    We describe a 5-year-old, fully immunized boy with polymerase chain reaction-proven consecutive Bordetella pertussis and Bordetella parapertussis infections causing typical whooping cough at the age of 2 and 5 years, respectively. Neither pertussis immunization nor disease provides reliable immunity against further episodes of whooping cough.

  8. Monitoring and root cause analysis of clinical biochemistry turn around time at an academic hospital.

    PubMed

    Chauhan, Kiran P; Trivedi, Amit P; Patel, Dharmik; Gami, Bhakti; Haridas, N

    2014-10-01

    Quality can be defined as the ability of a product or service to satisfy the needs and expectations of the customer. Laboratories focus mainly on technical and analytical quality for the reliability and accuracy of test results. Patients and clinicians, however, are interested in rapid, reliable, and efficient service from the laboratory. Turnaround time (TAT), the timeliness with which laboratory personnel deliver test results, is one of the most noticeable signs of laboratory service and is often used as a key performance indicator of laboratory performance. This study aims to provide guidance for laboratory TAT monitoring and root cause analysis. Over a 2-year period, a total of 75,499 outpatient department specimens were monitored, of which 4,142 exceeded TAT. With consistent efforts to monitor, analyze root causes, and take corrective measures, we were able to decrease the proportion of specimens exceeding TAT from 7-8 % to 3.7 %. Though monitoring TAT is a difficult task, real-time documentation and authentic data retrieval with the help of the laboratory information system, along with identification of the causes of delays and their remedies, improve laboratory TAT and thus patient satisfaction.

  9. The role of test-retest reliability in measuring individual and group differences in executive functioning.

    PubMed

    Paap, Kenneth R; Sawi, Oliver

    2016-12-01

    Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
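    The low reliability of difference scores noted above follows from classical test theory: the reliability of a difference D = A - B falls as the correlation between the two condition means rises. A small numerical sketch with invented reliabilities (not the study's data):

    ```python
    def difference_reliability(r_aa: float, r_bb: float, r_ab: float,
                               s_a: float = 1.0, s_b: float = 1.0) -> float:
        """Classical-test-theory reliability of the difference score A - B.

        r_aa, r_bb: reliabilities of A and B; r_ab: their correlation;
        s_a, s_b: their standard deviations.
        """
        num = r_aa * s_a ** 2 + r_bb * s_b ** 2 - 2.0 * r_ab * s_a * s_b
        den = s_a ** 2 + s_b ** 2 - 2.0 * r_ab * s_a * s_b
        return num / den

    # Two condition means, each with reliability .80, correlated .60:
    print(round(difference_reliability(0.80, 0.80, 0.60), 2))  # -> 0.5
    ```

    With uncorrelated conditions the difference keeps its reliability of 0.80; interference and switch-cost scores, where the two condition means correlate strongly, are exactly where reliability collapses.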

  10. Machine Maintenance Scheduling with Reliability Engineering Method and Maintenance Value Stream Mapping

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Nasution, A. H.

    2018-02-01

    Corrective maintenance, i.e., replacing or repairing a machine component after the machine breaks down, is commonly practiced in manufacturing companies. It requires the production process to be stopped: production time decreases while the maintenance team replaces or repairs the damaged machine component. This paper proposes a preventive maintenance schedule for a critical component of a critical machine at a crude palm oil and kernel company in order to increase maintenance efficiency. Reliability Engineering and Maintenance Value Stream Mapping are used as a method and a tool to analyze the reliability of the component and to reduce waste in the process by segregating value-added and non-value-added activities.

  11. Cyber Security: Assessing Our Vulnerabilities and Developing an Effective Defense

    NASA Astrophysics Data System (ADS)

    Spafford, Eugene H.

    The number and sophistication of cyberattacks continues to increase, but no national policy is in place to confront them. Critical systems need to be built on secure foundations, rather than the cheapest general-purpose platform. A program that combines education in cyber security, increasing resources for law enforcement, development of reliable systems for critical applications, and expanding research support in multiple areas of security and reliability is essential to combat risks that are far beyond the nuisances of spam email and viruses, and involve widespread espionage, theft, and attacks on essential services.

  12. Performance and Reliability of Bonded Interfaces for High-temperature Packaging: Annual Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoto, Douglas J.

    2017-10-19

    As maximum device temperatures approach 200 °C in continuous operation, sintered silver materials promise to maintain bonds at these high temperatures without excessive degradation rates. A detailed characterization of the thermal performance and reliability of sintered silver materials and processes has been initiated for the next year. Future steps in crack modeling include efforts to simulate crack propagation directly using the extended finite element method (X-FEM), a numerical technique that uses the partition of unity method for modeling discontinuities such as cracks in a system.

  13. Biofouling and the continuous monitoring of underwater light from a seagrass perspective

    USGS Publications Warehouse

    Onuf, C.P.

    2006-01-01

    For more than a decade, inexpensive electronic instruments have made continuous underwater light monitoring an integral part of many seagrass studies, although biofouling, if not controlled, compromises the utility of the record. A year-long assessment of the time course of sensor fouling in the Laguna Madre of Texas established that light transmitted through the fouling layer after 2 wk of exposure exceeded 90% except for a 6-8 wk period in May and June. On that basis, a 2-wk interval was chosen for routine servicing. Subsequent monitoring proved this choice to be grossly in error. The period of sub-90% transmittance after 2 wk extended to 4-6 mo annually over the next 3 yr. Fouling was strongly correlated with temperature, ambient light, and year. Since an algal bloom of 7-yr duration finally waned during this study, increased ambient light seemed most likely to explain increased fouling later in the study. The explanatory value of light was less than that of temperature or year in multiple regression, requiring some explanation of the date effect other than change in ambient light. Allelopathic and suspension-feeding depressant effects of the brown tide are offered as the most likely cause of the unusually low fouling in the first year. Biofouling was so unpredictable and rapid in this study that at least weekly maintenance would be required to assure the reliability of the light monitoring record. © 2006 Estuarine Research Federation.

  14. Spatial correlation of shear-wave velocity in the San Francisco Bay Area sediments

    USGS Publications Warehouse

    Thompson, E.M.; Baise, L.G.; Kayen, R.E.

    2007-01-01

    Ground motions recorded within sedimentary basins are variable over short distances. One important cause of the variability is that local soil properties are variable at all scales. Regional hazard maps developed for predicting site effects are generally derived from maps of surficial geology; however, recent studies have shown that mapped geologic units do not correlate well with the average shear-wave velocity of the upper 30 m, Vs(30). We model the horizontal variability of near-surface soil shear-wave velocity in the San Francisco Bay Area to estimate values in unsampled locations in order to account for site effects in a continuous manner. Previous geostatistical studies of soil properties have shown horizontal correlations at the scale of meters to tens of meters while the vertical correlations are on the order of centimeters. In this paper we analyze shear-wave velocity data over regional distances and find that surface shear-wave velocity is correlated at horizontal distances up to 4 km based on data from seismic cone penetration tests and the spectral analysis of surface waves. We propose a method to map site effects by using geostatistical methods based on the shear-wave velocity correlation structure within a sedimentary basin. If used in conjunction with densely spaced shear-wave velocity profiles in regions of high seismic risk, geostatistical methods can produce reliable continuous maps of site effects. © 2006 Elsevier Ltd. All rights reserved.
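    The proposed geostatistical mapping can be sketched with an exponential correlation model and simple-kriging weights; the 4 km effective range echoes the correlation distance reported above, but every site location and distance below is invented.

    ```python
    import numpy as np

    def exp_correlation(h_km, range_km=4.0):
        """Exponential model: correlation ~ exp(-3h/a), about 0.05 at the range a."""
        return np.exp(-3.0 * np.asarray(h_km, dtype=float) / range_km)

    # Invented geometry: three measured sites at 0.5, 1.0, and 3.0 km from the
    # unsampled target, with these pairwise distances between the sites.
    d_to_target = np.array([0.5, 1.0, 3.0])
    d_between = np.array([
        [0.0, 0.8, 2.6],
        [0.8, 0.0, 2.2],
        [2.6, 2.2, 0.0],
    ])

    C = exp_correlation(d_between)      # site-to-site correlation matrix
    c0 = exp_correlation(d_to_target)   # site-to-target correlations
    weights = np.linalg.solve(C, c0)    # simple-kriging weights
    print(np.round(weights, 3))
    ```

    The nearest site dominates the estimate while the 3 km site contributes almost nothing, consistent with correlation dying off toward the 4 km range.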

  15. Intelligent transportation systems for planned special events : a cross-cutting study

    DOT National Transportation Integrated Search

    2008-11-01

    This cross-cutting study examines how six agencies in five states used and continue to use ITS to reduce congestion generated by planned special events, thereby reducing crashes, increasing travel time reliability, and reducing driver frustration.

  16. Measuring and Evaluating Trends for Reliability, Integrity, and Continued Success (METRICS) Act

    THOMAS, 111th Congress

    Sen. Brown, Sherrod [D-OH]

    2010-04-14

    Senate - 04/14/2010 Read twice and referred to the Committee on Health, Education, Labor, and Pensions.

  17. Finite element modeling and analysis of reinforced-concrete bridge.

    DOT National Transportation Integrated Search

    2000-09-01

    Despite its long history, the finite element method continues to be the predominant strategy employed by engineers to conduct structural analysis. A reliable method is needed for analyzing structures made of reinforced concrete, a complex but common ...

  18. Developing a tool for assessing competency in root cause analysis.

    PubMed

    Gupta, Priyanka; Varkey, Prathibha

    2009-01-01

    Root cause analysis (RCA) is a tool for identifying the key cause(s) contributing to a sentinel event or near miss. Although training in RCA is gaining popularity in medical education, there is no published literature on valid or reliable methods for assessing competency in this skill. A tool for assessing competency in RCA was pilot tested as part of an eight-station Objective Structured Clinical Examination that was conducted at the completion of a three-week quality improvement (QI) curriculum for the Mayo Clinic Preventive Medicine and Endocrinology fellowship programs. As part of the curriculum, fellows completed a QI project to enhance physician communication of the diagnosis and treatment plan at the end of a patient visit. They had a didactic session on RCA, followed by process mapping of the information flow at the project clinic, after which fellows conducted an actual RCA using the Ishikawa fishbone diagram. For the RCA competency assessment, fellows performed an RCA regarding a scenario describing an adverse medication event and provided possible solutions to prevent such errors in the future. All faculty strongly agreed or agreed that they were able to accurately assess competency in RCA using the tool. Interrater reliability for the global competency rating and checklist scoring were 0.96 and 0.85, respectively. Internal consistency (Cronbach's alpha) was 0.76. Six of the eight fellows found the difficulty level of the test to be optimal. Assessment methods must accompany education programs to ensure that graduates are competent in QI methodologies and are able to apply them effectively in the workplace. The RCA assessment tool was found to be a valid, reliable, feasible, and acceptable method for assessing competency in RCA. Further research is needed to examine its predictive validity and generalizability.
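
    The internal-consistency figure quoted above (Cronbach's alpha) can be reproduced from an examinee-by-item score matrix; this minimal sketch uses invented checklist scores, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_examinees, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical 5-point checklist ratings for 5 examinees on 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
alpha = cronbach_alpha(scores)
```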

  19. Temporary Losses of Highway Capacity and Impacts on Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, S.M.

    2002-07-31

    Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, and suboptimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Prior to this study, no nationwide estimates of temporary losses of highway capacity had been made by type of capacity-reducing event. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. This study is an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events. The objective of this study was to develop and implement methods for producing national-level estimates of the loss of capacity on the nation's highway facilities due to temporary phenomena as well as estimates of the impacts of such losses. The estimates produced by this study roughly indicate the magnitude of problems that are likely to be addressed by the Congress during the next re-authorization of the Surface Transportation Programs. The scope of the study includes all urban and rural freeways and principal arterials in the nation's highway system for 1999. Specifically, this study attempts to quantify the extent of temporary capacity losses due to crashes, breakdowns, work zones, weather, and sub-optimal signal timing. These events can cause impacts such as capacity reduction, delays, trip rescheduling, rerouting, reduced mobility, and reduced reliability. This study focuses on the reduction of capacity and resulting delays caused by the temporary events mentioned above. Impacts other than capacity losses and delay, such as re-routing, rescheduling, reduced mobility, and reduced reliability, are not covered in this phase of research.

  20. Effects of long-term continuous cropping on soil nematode community and soil condition associated with replant problem in strawberry habitat.

    PubMed

    Li, Xingyue; Lewis, Edwin E; Liu, Qizhi; Li, Heqin; Bai, Chunqi; Wang, Yuzhu

    2016-08-10

    Continuous cropping changes soil physicochemical parameters, enzymes and microorganism communities, causing "replant problem" in strawberry cultivation. We hypothesized that soil nematode community would reflect the changes in soil conditions caused by long-term continuous cropping, in ways that are consistent and predictable. To test this hypothesis, we studied the soil nematode communities and several soil parameters, including the concentration of soil phenolic acids, organic matter and nitrogen levels, in strawberry greenhouses under continuous cropping for five different durations. Soil pH significantly decreased, and four phenolic acids, i.e., p-hydroxybenzoic acid, ferulic acid, cinnamic acid and p-coumaric acid, accumulated with time under continuous cropping. The four phenolic acids were highly toxic to Acrobeloides spp., the eudominant genus in non-continuous cropping, causing it to reduce to a resident genus after seven years of continuous cropping. Decreased nematode diversity indicated loss of ecosystem stability and sustainability because of continuous-cropping practice. Moreover, the dominant decomposition pathway was altered from bacterial to fungal under continuous cropping. Our results suggest that along with the continuous-cropping time in strawberry habitat, the soil food web is disturbed, and the available plant nutrition as well as the general health of the soil deteriorates; these changes can be indicated by soil nematode community.

  1. Effects of long-term continuous cropping on soil nematode community and soil condition associated with replant problem in strawberry habitat

    NASA Astrophysics Data System (ADS)

    Li, Xingyue; Lewis, Edwin E.; Liu, Qizhi; Li, Heqin; Bai, Chunqi; Wang, Yuzhu

    2016-08-01

    Continuous cropping changes soil physicochemical parameters, enzymes and microorganism communities, causing “replant problem” in strawberry cultivation. We hypothesized that soil nematode community would reflect the changes in soil conditions caused by long-term continuous cropping, in ways that are consistent and predictable. To test this hypothesis, we studied the soil nematode communities and several soil parameters, including the concentration of soil phenolic acids, organic matter and nitrogen levels, in strawberry greenhouses under continuous cropping for five different durations. Soil pH significantly decreased, and four phenolic acids, i.e., p-hydroxybenzoic acid, ferulic acid, cinnamic acid and p-coumaric acid, accumulated with time under continuous cropping. The four phenolic acids were highly toxic to Acrobeloides spp., the eudominant genus in non-continuous cropping, causing it to reduce to a resident genus after seven years of continuous cropping. Decreased nematode diversity indicated loss of ecosystem stability and sustainability because of continuous-cropping practice. Moreover, the dominant decomposition pathway was altered from bacterial to fungal under continuous cropping. Our results suggest that along with the continuous-cropping time in strawberry habitat, the soil food web is disturbed, and the available plant nutrition as well as the general health of the soil deteriorates; these changes can be indicated by soil nematode community.

  2. Reliability assessment of multiple quantum well avalanche photodiodes

    NASA Technical Reports Server (NTRS)

    Yun, Ilgu; Menkara, Hicham M.; Wang, Yang; Oguzman, Isamil H.; Kolnik, Jan; Brennan, Kevin F.; May, Gray S.; Wagner, Brent K.; Summers, Christopher J.

    1995-01-01

    The reliability of doped-barrier AlGaAs/GaAs multi-quantum well avalanche photodiodes fabricated by molecular beam epitaxy is investigated via accelerated life tests. Dark current and breakdown voltage were the parameters monitored. The activation energy of the degradation mechanism and median device lifetime were determined. Device failure probability as a function of time was computed using the lognormal model. Analysis using the electron beam induced current method revealed the degradation to be caused by ionic impurities or contamination in the passivation layer.
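
    The lognormal failure model and activation-energy extrapolation mentioned above can be sketched as follows; all numerical values (median life, shape parameter, activation energy, temperatures) are illustrative assumptions, not results from the paper.

```python
import math

def lognormal_failure_prob(t, t50, sigma):
    """P(device has failed by time t) under the lognormal model:
    ln(lifetime) ~ Normal(ln(t50), sigma**2)."""
    z = (math.log(t) - math.log(t50)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def arrhenius_af(ea_ev, t_use_k, t_stress_k, k_b=8.617e-5):
    """Arrhenius acceleration factor for scaling stress-test lifetimes
    back to use conditions (temperatures in kelvin, Ea in eV)."""
    return math.exp(ea_ev / k_b * (1.0 / t_use_k - 1.0 / t_stress_k))

# Illustrative numbers, not taken from the paper.
p_median = lognormal_failure_prob(1e5, 1e5, 0.8)   # exactly 0.5 at the median life
p_early = lognormal_failure_prob(1e4, 1e5, 0.8)    # small early-failure fraction
af = arrhenius_af(0.7, 300.0, 350.0)               # stress life * af = use life
```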

  3. [Mathematic analysis of risk factors influence on occupational respiratory diseases development].

    PubMed

    Budkar', L N; Bugaeva, I V; Obukhova, T Iu; Tereshina, L G; Karpova, E A; Shmonina, O G

    2010-01-01

    Analysis covered 1348 case histories of workers exposed to industrial dust in the Urals region, using mathematical methods from survival theory and correlation analysis. The authors studied the influence of several factors (dust concentration, connective tissue dysplasia, smoking habits) on the time required for dust-induced diseases to appear. Occupational diseases were found to develop reliably faster with higher ambient dust concentrations and with connective tissue dysplasia syndrome. Smoking habits do not alter the duration of pneumoconiosis development, but reliably accelerate the development of occupational dust bronchitis.

  4. Study samples are too small to produce sufficiently precise reliability coefficients.

    PubMed

    Charter, Richard A

    2003-04-01

    In a survey of journal articles, test manuals, and test critique books, the author found that a mean sample size (N) of 260 participants had been used for reliability studies on 742 tests. The distribution was skewed because the median sample size for the total sample was only 90. The median sample sizes for the internal consistency, retest, and interjudge reliabilities were 182, 64, and 36, respectively. The author presented sample size statistics for the various internal consistency methods and types of tests. In general, the author found that the sample sizes that were used in the internal consistency studies were too small to produce sufficiently precise reliability coefficients, which in turn could cause imprecise estimates of examinee true-score confidence intervals. The results also suggest that larger sample sizes have been used in the last decade compared with those that were used in earlier decades.
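
    The precision argument can be made concrete with the Fisher z confidence interval for a correlation-type (retest) reliability coefficient; r = 0.80 is an assumed value, while N = 64 is the median retest sample size reported above.

```python
import math

def retest_reliability_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a correlation-based reliability coefficient
    via the Fisher z transform (standard error 1/sqrt(n - 3))."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Same observed coefficient, small vs. large sample.
small = retest_reliability_ci(0.80, 64)    # wide interval
large = retest_reliability_ci(0.80, 640)   # much narrower interval
```

    With N = 64 the interval spans roughly 0.69 to 0.87, which is the kind of imprecision the abstract warns propagates into examinee true-score confidence intervals.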

  5. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; reliable and timely data dissemination is thus the key building block of VANETs. A data mulling technique combined with three strategies, network coding, erasure coding, and repetition coding, is proposed for this reliable and timely data dissemination service. In particular, vehicles traveling in the opposite direction on a highway are exploited as data mules, mobile nodes that physically deliver data to destinations, to overcome the intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network coding based strategy outperforms the erasure coding and repetition based strategies.

  6. Factors which Limit the Value of Additional Redundancy in Human Rated Launch Vehicle Systems

    NASA Technical Reports Server (NTRS)

    Anderson, Joel M.; Stott, James E.; Ring, Robert W.; Hatfield, Spencer; Kaltz, Gregory M.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has embarked on an ambitious program to return humans to the moon and beyond. As NASA moves forward in the development and design of new launch vehicles for future space exploration, it must fully consider the implications that rule-based requirements of redundancy or fault tolerance have on system reliability/risk. These considerations include common cause failure, increased system complexity, combined serial and parallel configurations, and the impact of design features implemented to control premature activation. These factors and others must be considered in trade studies to support design decisions that balance safety, reliability, performance and system complexity to achieve a relatively simple, operable system that provides the safest and most reliable system within the specified performance requirements. This paper describes conditions under which additional functional redundancy can impede improved system reliability. Examples from current NASA programs including the Ares I Upper Stage will be shown.

  7. Visual assessment of hemiplegic gait following stroke: pilot study.

    PubMed

    Hughes, K A; Bell, F

    1994-10-01

    A form that will guide clinicians through a reliable and valid visual assessment of hemiplegic gait was designed. Six hemiplegic patients were filmed walking along an instrumented walkway. These films were shown to three physiotherapists who used the form to rate the patients' gait. Each physiotherapist rated the six patients at both stages of recovery, repeating this a further two times. This resulted in 108 completed forms. Within-rater reliability is statistically significant for some raters and some individual form sections. Between-rater reliability is significant for some sections. Detailed analysis has shown that parts of the form have caused reduced reliability. These are mainly sections that ask for severity judgments or are duplicated. Some indication of normal gait should be included on the form. To test validity fully the form should be tested on a group of patients who all have significant changes in each objective gait measurement.

  8. Understanding the Reliability of Solder Joints Used in Advanced Structural and Electronics Applications: Part 2 - Reliability Performance.

    DOE PAGES

    Vianco, Paul T.

    2017-03-01

    Whether structural or electronic, all solder joints must provide the necessary level of reliability for the application. The Part 1 report examined the effects of filler metal properties and the soldering process on joint reliability. Filler metal solderability and mechanical properties, as well as the extents of base material dissolution and interface reaction that occur during the soldering process, were shown to affect reliability performance. The continuation of this discussion is presented in this Part 2 report, which highlights those factors that directly affect solder joint reliability. There is the growth of an intermetallic compound (IMC) reaction layer at the solder/base material interface by means of solid-state diffusion processes. In terms of mechanical response by the solder joint, fatigue remains as the foremost concern for long-term performance. Thermal mechanical fatigue (TMF), a form of low-cycle fatigue (LCF), occurs when temperature cycling is combined with mismatched values of the coefficient of thermal expansion (CTE) between materials comprising the solder joint “system.” Vibration environments give rise to high-cycle fatigue (HCF) degradation. Although accelerated aging studies provide valuable empirical data, too many variants of filler metals, base materials, joint geometries, and service environments are forcing design engineers to embrace computational modeling to predict the long-term reliability of solder joints.

  9. The development and test of a long-life, high reliability solar array drive actuator

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, D. L.

    1973-01-01

    To meet the life and reliability requirements of five to ten year space missions, a new solar array drive mechanism for 3-axis stabilized vehicles has been developed and is undergoing life testing. The drive employs a redundant lubrication system to increase its reliability. An overrunning clutch mechanism is used to permit block redundant application of two or more drives to a common array drive shaft. Two prototype actuator and clutch assemblies, in continuous vacuum life test under load at 10^-8 torr for more than sixteen months, have each accumulated more than 34,000 output revolutions without anomaly, the equivalent of more than seven years of operation in a 1000 km orbit or nearly ninety-five years at synchronous altitude.

  10. Space Station Freedom power - A reliability, availability, and maintainability assessment of the proposed Space Station Freedom electric power system

    NASA Technical Reports Server (NTRS)

    Turnquist, S. R.; Twombly, M.; Hoffman, D.

    1989-01-01

    A preliminary reliability, availability, and maintainability (RAM) analysis of the proposed Space Station Freedom electric power system (EPS) was performed using the unit reliability, availability, and maintainability (UNIRAM) analysis methodology. Orbital replacement units (ORUs) having the most significant impact on EPS availability measures were identified. Also, the sensitivity of the EPS to variations in ORU RAM data was evaluated for each ORU. Estimates were made of average EPS power output levels and availability of power to the core area of the space station. The results of assessments of the availability of EPS power and power to load distribution points in the space stations are given. Some highlights of continuing studies being performed to understand EPS availability considerations are presented.

  11. Potential Seasonal Predictability for Winter Storms over Europe

    NASA Astrophysics Data System (ADS)

    Wild, Simon; Befort, Daniel J.; Leckebusch, Gregor C.

    2017-04-01

    Reliable seasonal forecasts of strong extra-tropical cyclones and windstorms would have great social and economic benefits, as these events are the most costly natural hazards over Europe. In a previous study we have shown good agreement of spatial climatological distributions of extra-tropical cyclones and wind storms in state-of-the-art multi-member seasonal prediction systems with reanalysis. We also found significant seasonal prediction skill of extra-tropical cyclones and windstorms affecting numerous European countries. We continue this research by investigating the mechanisms and precursor conditions (primarily over the North Atlantic) on a seasonal time scale leading to enhanced extra-tropical cyclone activity and winter storm frequency over Europe. Our results regarding mechanisms show that an increased surface temperature gradient at the western edge of the North Atlantic can be related to enhanced winter storm frequency further downstream, causing for example a greater number of storms over the British Isles, as observed in winter 2013-14. The so-called "Horseshoe Index", an SST tripole anomaly pattern over the North Atlantic in the summer months, can also cause a higher number of winter storms over Europe in the subsequent winter. We will show results of AMIP-type sensitivity experiments using an AGCM (ECHAM5), supporting this hypothesis. Finally we will analyse whether existing seasonal forecast systems are able to capture these identified mechanisms and precursor conditions affecting the models' seasonal prediction skill.

  12. Muscle Control and Non‐specific Chronic Low Back Pain

    PubMed Central

    Deckers, Kristiaan; Eldabe, Sam; Kiesel, Kyle; Gilligan, Chris; Vieceli, John; Crosby, Peter

    2017-01-01

    Objectives Chronic low back pain (CLBP) is the most prevalent of the painful musculoskeletal conditions. CLBP is a heterogeneous condition with many causes and diagnoses, but there are few established therapies with strong evidence of effectiveness (or cost effectiveness). CLBP for which it is not possible to identify any specific cause is often referred to as non‐specific chronic LBP (NSCLBP). One type of NSCLBP is continuing and recurrent primarily nociceptive CLBP due to vertebral joint overload subsequent to functional instability of the lumbar spine. This condition may occur due to disruption of the motor control system to the key stabilizing muscles in the lumbar spine, particularly the lumbar multifidus muscle (MF). Methods This review presents the evidence for MF involvement in CLBP, mechanisms of action of disruption of control of the MF, and options for restoring control of the MF as a treatment for NSCLBP. Results Imaging assessment of motor control dysfunction of the MF in individual patients is fraught with difficulty. MRI or ultrasound imaging techniques, while reliable, have limited diagnostic or predictive utility. For some patients, restoration of motor control to the MF with specific exercises can be effective, but population results are not persuasive since most patients are unable to voluntarily contract the MF and may be inhibited from doing so due to arthrogenic muscle inhibition. Conclusions Targeting MF control with restorative neurostimulation promises a new treatment option. PMID:29230905

  13. Frequency doubled high-power disk lasers in pulsed and continuous-wave operation

    NASA Astrophysics Data System (ADS)

    Weiler, Sascha; Hangst, Alexander; Stolzenburg, Christian; Zawischa, Ivo; Sutter, Dirk; Killi, Alexander; Kalfhues, Steffen; Kriegshaeuser, Uwe; Holzer, Marco; Havrilla, David

    2012-03-01

    The disk laser with multi-kW output power in infrared cw operation is widely used in today's manufacturing, primarily in the automotive industry. The disk technology combines high power (average and/or peak power), excellent beam quality, high efficiency and high reliability with low investment and operating costs. Additionally, the disk laser is ideally suited for frequency conversion due to its polarized output with negligible depolarization losses. Laser light in the green spectral range (~515 nm) can be created with a nonlinear crystal. Pulsed disk lasers with green output of well above 50 W (extracavity doubling) in the ps regime and several hundreds of Watts in the ns regime with intracavity doubling are already commercially available, whereas intracavity doubled disk lasers in continuous wave operation with greater than 250 W output are in the test phase. In both operating modes (pulsed and cw) the frequency doubled disk laser offers advantages in existing and new applications. Copper welding, for example, is said to show much higher process reliability with green laser light due to its higher absorption in comparison to the infrared. This improvement has the potential to be very beneficial for the automotive industry's move to electric vehicles, which requires reliable high-volume welding of copper as a major task for electric motors, batteries, etc.

  14. A multiplex primer design algorithm for target amplification of continuous genomic regions.

    PubMed

    Ozturk, Ahmet Rasit; Can, Tolga

    2017-06-19

    Targeted Next Generation Sequencing (NGS) assays are cost-efficient and reliable alternatives to Sanger sequencing. For sequencing of a very large set of genes, the target enrichment approach is suitable. However, for smaller genomic regions, the target amplification method is more efficient than both the target enrichment method and Sanger sequencing. The major difficulty of the target amplification method is the preparation of amplicons, regarding required time, equipment, and labor. Multiplex PCR (MPCR) is a good solution for the mentioned problems. We propose a novel method to design MPCR primers for a continuous genomic region, following the best practices of clinically reliable PCR design processes. On an experimental setup with 48 different combinations of factors, we have shown that multiple parameters might affect finding the first feasible solution. Increasing the length of the initial primer candidate selection sequence gives better results, whereas waiting for a longer time to find the first feasible solution does not have a significant impact. We generated MPCR primer designs for the HBB whole gene, MEFV coding regions, and human exons between 2000 and 2100 bp in length. Our benchmarking experiments show that the proposed MPCR approach is able to produce reliable NGS assay primers for a given sequence in a reasonable amount of time.
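
    The authors' algorithm is not reproduced here, but the kind of per-primer screening such a design pipeline builds on can be sketched with the classic Wallace-rule melting-temperature estimate and a GC-content filter; the window length and thresholds below are illustrative assumptions.

```python
def wallace_tm(primer):
    """Rough melting temperature via the Wallace rule: 2(A+T) + 4(G+C) degrees C."""
    at = sum(primer.count(b) for b in "AT")
    gc = sum(primer.count(b) for b in "GC")
    return 2 * at + 4 * gc

def candidate_primers(seq, length=20, tm_range=(55, 65), gc_range=(0.4, 0.6)):
    """Slide a window over `seq`, keeping candidates that pass Tm and GC filters.
    Thresholds are hypothetical; a real design would also check dimers, hairpins,
    and cross-reactivity between multiplexed primer pairs."""
    out = []
    for i in range(len(seq) - length + 1):
        w = seq[i:i + length]
        gc = sum(w.count(b) for b in "GC") / length
        if gc_range[0] <= gc <= gc_range[1] and tm_range[0] <= wallace_tm(w) <= tm_range[1]:
            out.append((i, w))
    return out
```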

  15. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  16. The reliability of continuous brain responses during naturalistic listening to music.

    PubMed

    Burunat, Iballa; Toiviainen, Petri; Alluri, Vinoo; Bogert, Brigitte; Ristaniemi, Tapani; Sams, Mikko; Brattico, Elvira

    2016-01-01

    Low-level (timbral) and high-level (tonal and rhythmical) musical features during continuous listening to music, studied by functional magnetic resonance imaging (fMRI), have been shown to elicit large-scale responses in cognitive, motor, and limbic brain networks. Using a similar methodological approach and a similar group of participants, we aimed to study the replicability of previous findings. Participants' fMRI responses during continuous listening of a tango Nuevo piece were correlated voxelwise against the time series of a set of perceptually validated musical features computationally extracted from the music. The replicability of previous results and the present study was assessed by two approaches: (a) correlating the respective activation maps, and (b) computing the overlap of active voxels between datasets at variable levels of ranked significance. Activity elicited by timbral features was better replicable than activity elicited by tonal and rhythmical ones. These results indicate more reliable processing mechanisms for low-level musical features as compared to more high-level features. The processing of such high-level features is probably more sensitive to the state and traits of the listeners, as well as of their background in music. Copyright © 2015 Elsevier Inc. All rights reserved.
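
    The overlap-of-active-voxels comparison described in (b) can be sketched as a top-k rank overlap between two activation maps; the synthetic data and the 5% fraction below are assumptions for illustration, not the study's thresholds.

```python
import numpy as np

def topk_overlap(map_a, map_b, frac=0.05):
    """Fraction of shared voxels among the top `frac` most significant
    voxels of two flattened activation maps."""
    k = max(1, int(frac * map_a.size))
    top_a = set(np.argsort(map_a.ravel())[-k:])
    top_b = set(np.argsort(map_b.ravel())[-k:])
    return len(top_a & top_b) / k

# Synthetic "dataset 1" and a slightly perturbed "dataset 2" standing in
# for two well-replicating activation maps.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
noisy = base + 0.1 * rng.normal(size=1000)
overlap = topk_overlap(base, noisy)
```

    Repeating this at several values of `frac` gives the "variable levels of ranked significance" style of comparison the abstract describes.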

  17. Generalization of information-based concepts in forecast verification

    NASA Astrophysics Data System (ADS)

    Tödter, J.; Ahrens, B.

    2012-04-01

    This work deals with information-theoretical methods in probabilistic forecast verification. Recent findings concerning the Ignorance Score are briefly reviewed; then the generalization to continuous forecasts is shown. For ensemble forecasts, the presented measures can be calculated exactly. The Brier Score (BS) and its generalizations to the multi-categorical Ranked Probability Score (RPS) and to the Continuous Ranked Probability Score (CRPS) are the prominent verification measures for probabilistic forecasts. Particularly, their decompositions into measures quantifying the reliability, resolution and uncertainty of the forecasts are attractive. Information theory sets up the natural framework for forecast verification. Recently, it has been shown that the BS is a second-order approximation of the information-based Ignorance Score (IGN), which also contains easily interpretable components and can also be generalized to a ranked version (RIGN). Here, the IGN, its generalizations and decompositions are systematically discussed in analogy to the variants of the BS. Additionally, a Continuous Ranked IGN (CRIGN) is introduced in analogy to the CRPS. The applicability and usefulness of the conceptually appealing CRIGN is illustrated, together with an algorithm to evaluate its components (reliability, resolution, and uncertainty) for ensemble-generated forecasts. This is also directly applicable to the more traditional CRPS.
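
    For binary events, the Brier Score and the Ignorance Score discussed above reduce to short formulas; this sketch computes both for a toy forecast series (IGN in bits, i.e., base-2 logarithms), with the forecast probabilities invented for illustration.

```python
import math

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts for binary events."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def ignorance_score(probs, outcomes, eps=1e-12):
    """Mean negative log2 probability assigned to the observed outcome."""
    total = 0.0
    for p, o in zip(probs, outcomes):
        q = p if o == 1 else 1.0 - p
        total += -math.log2(max(q, eps))   # eps guards against log(0)
    return total / len(probs)

# Toy forecasts: event occurred (1) or not (0).
probs = [0.9, 0.7, 0.2, 0.1]
outcomes = [1, 1, 0, 0]
bs = brier_score(probs, outcomes)
ign = ignorance_score(probs, outcomes)
```

    A confident correct forecaster drives both scores toward zero; the IGN punishes confident misses much more severely than the BS, which is the quadratic approximation the abstract refers to.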

  18. The development and validation of a test of science critical thinking for fifth graders.

    PubMed

    Mapeala, Ruslan; Siew, Nyet Moi

    2015-01-01

    The paper described the development and validation of the Test of Science Critical Thinking (TSCT) to measure the three critical thinking skill constructs: comparing and contrasting, sequencing, and identifying cause and effect. The initial TSCT consisted of 55 multiple choice test items, each of which required participants to select a correct response and a correct choice of critical thinking used for their response. Data were obtained from a purposive sampling of 30 fifth graders in a pilot study carried out in a primary school in Sabah, Malaysia. Students underwent the sessions of teaching and learning activities for 9 weeks using the Thinking Maps-aided Problem-Based Learning Module before they answered the TSCT test. Analyses were conducted to check on difficulty index (p) and discrimination index (d), internal consistency reliability, content validity, and face validity. Analysis of the test-retest reliability data was conducted separately for a group of fifth graders with similar ability. Findings of the pilot study showed that, out of the initial 55 administered items, only 30 items with a relatively good difficulty index (p), ranging from 0.40 to 0.60, and a good discrimination index (d), ranging from 0.20 to 1.00, were selected. The Kuder-Richardson reliability value was found to be appropriate and relatively high, with 0.70, 0.73 and 0.92 for identifying cause and effect, sequencing, and comparing and contrasting, respectively. The content validity index obtained from three expert judgments equalled or exceeded 0.95. In addition, test-retest reliability showed good, statistically significant correlations ([Formula: see text]). From the above results, the selected 30-item TSCT was found to have sufficient reliability and validity and would therefore represent a useful tool for measuring critical thinking ability among fifth graders in primary science.
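
    The item statistics and reliability coefficient used above can be sketched for dichotomous (0/1-scored) items. The response matrix below is invented, and the upper-minus-lower-group discrimination index is one common variant among several; neither is taken from the study.

```python
def item_analysis(responses):
    """responses: list of per-student lists of 0/1 item scores.
    Returns per-item difficulty p (proportion correct) and a simple
    upper-minus-lower-group discrimination index d."""
    n = len(responses)
    order = sorted(range(n), key=lambda i: sum(responses[i]))  # by total score
    k = max(1, n // 3)                       # lower/upper ~third of examinees
    lower, upper = order[:k], order[-k:]
    n_items = len(responses[0])
    p = [sum(r[j] for r in responses) / n for j in range(n_items)]
    d = [(sum(responses[i][j] for i in upper)
          - sum(responses[i][j] for i in lower)) / k for j in range(n_items)]
    return p, d

def kr20(responses):
    """Kuder-Richardson 20 internal-consistency reliability for 0/1 items."""
    n = len(responses)
    n_items = len(responses[0])
    p = [sum(r[j] for r in responses) / n for j in range(n_items)]
    pq = sum(pi * (1 - pi) for pi in p)
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / (n - 1)  # sample variance
    return n_items / (n_items - 1) * (1 - pq / var)

# Hypothetical 6 students x 4 items.
responses = [[1, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0],
             [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0]]
p, d = item_analysis(responses)
reliability = kr20(responses)
```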

  19. 46 CFR 62.30-1 - Failsafe.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 2 2012-10-01 2012-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...

  20. 46 CFR 62.30-1 - Failsafe.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...

  1. 46 CFR 62.30-1 - Failsafe.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false Failsafe. 62.30-1 Section 62.30-1 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING VITAL SYSTEM AUTOMATION Reliability and Safety... control, safety control, and alarm systems must be failsafe. ...

  2. 46 CFR 61.40-1 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design Verification and Periodic Testing of Vital System Automation § 61.40-1 General. (a) All automatically or... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features...

  3. INTRODUCING CHANGES TO QUALITY SYSTEMS IN LARGE, ESTABLISHED ORGANIZATIONS

    EPA Science Inventory

    To achieve the agency's mission of having defensible and reliable scientific data with which to make informed decisions, the EPA Quality Assurance (QA) community must continue its successful efforts in increasing support for QA activities through personal communication and carefu...

  4. Risk-based asset management methodology for highway infrastructure systems.

    DOT National Transportation Integrated Search

    2004-01-01

    Maintaining the infrastructure of roads, highways, and bridges is paramount to ensuring that these assets will remain safe and reliable in the future. If maintenance costs remain the same or continue to escalate, and additional funding is not made av...

  5. Long-life slab replacement concrete : [summary].

    DOT National Transportation Integrated Search

    2015-04-01

    Concrete slab replacement projects in Florida have demonstrated a high incidence of replacement slab cracking. Causes of cracking have not been reliably determined. University of South Florida researchers sought to identify the factors or param...

  6. 14 CFR 91.1415 - CAMP: Mechanical reliability reports.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... manager who maintains program aircraft under a CAMP must report the occurrence or detection of each...-dumping system that affects fuel flow or causes hazardous leakage during flight; (12) An unwanted landing...

  7. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Specific topics briefly addressed include: the consistent comparison problem in N-version systems; analytic models of comparison testing; fault tolerance through data diversity; and the relationship between failures caused by automatically seeded faults.

  8. Completeness and reliability of mortality data in Viet Nam: Implications for the national routine health management information system.

    PubMed

    Hong, Tran Thi; Phuong Hoa, Nguyen; Walker, Sue M; Hill, Peter S; Rao, Chalapati

    2018-01-01

    Mortality statistics form a crucial component of national Health Management Information Systems (HMIS). However, there are limitations in the availability and quality of mortality data at the national level in Viet Nam. This study assessed the completeness of recorded deaths and the reliability of recorded causes of death (COD) in the A6 death registers of the national routine HMIS in Viet Nam. 1477 identified deaths in 2014 were reviewed in two provinces. A capture-recapture method was applied to assess the completeness of the A6 death registers. 1365 household verbal autopsy (VA) interviews were successfully conducted, and these were reviewed by physicians who assigned multiple and underlying causes of death (UCOD). These UCODs from VA were then compared with the CODs recorded in the A6 death registers, using kappa scores to assess the reliability of the A6 death register diagnoses. The overall completeness of the A6 death registers in the two provinces was 89.3% (95% CI: 87.8-90.8). No COD recorded in the A6 death registers demonstrated good reliability. Reliability was very low for cardiovascular deaths (kappa = 0.47 for stroke and kappa = 0.42 for ischaemic heart disease) and diabetes (kappa = 0.33). The reporting of deaths due to road traffic accidents, HIV, and some cancers is at a moderate level of reliability, with kappa scores ranging from 0.57 to 0.69 (p < 0.01). VA methods identify more specific CODs than the A6 death registers and also allow identification of multiple CODs. The study results suggest that data completeness in the HMIS A6 death registers in the study sample of communes was relatively high (nearly 90%), but triangulation with death records from other sources would improve the completeness of this system. Further, there is an urgent need to enhance the reliability of the CODs recorded in the A6 death registers, for which VA methods could be effective. Focused consultation among stakeholders is needed to develop a suitable mechanism and process for integrating VA methods into the national routine HMIS A6 death registers in Viet Nam.
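
    The capture-recapture completeness estimate described above can be sketched with the standard two-source (Chandra Sekar-Deming) estimator. The function names and the example counts below are hypothetical illustrations, not the study's data.

```python
def capture_recapture_total(n_a, n_b, n_both):
    """Two-source capture-recapture estimate of the true total count:
    N = n_a * n_b / n_both, where n_a and n_b are the deaths captured by
    each source and n_both the deaths matched in both. Assumes the two
    sources capture deaths independently."""
    return n_a * n_b / n_both

def completeness(n_source, n_total_est):
    """Completeness of one source, as a percentage of the estimated total."""
    return 100.0 * n_source / n_total_est
```

    With hypothetical counts of 1300 register deaths, 1200 survey deaths, and 1100 matched deaths, the estimated total is about 1418, and register completeness works out to n_both/n_b (about 91.7%), which is the characteristic algebra of this estimator.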

  9. Use of primary diagnosis during hospitalization in the Unified Health System (Sistema Único de Saúde) to qualify information regarding the underlying cause of natural deaths among the elderly.

    PubMed

    Cascão, Angela Maria; Jorge, Maria Helena Prado de Mello; Costa, Antonio José Leal; Kale, Pauline Lorena

    2016-01-01

    Ill-defined causes of death are common among the elderly owing to the high frequency of comorbidities and, consequently, to the difficulty of defining the underlying cause of death. The objective was to analyze the validity and reliability of the hospitalization "primary diagnosis" for recovering information on the underlying cause of death for natural deaths among the elderly originally assigned an "ill-defined cause" on the Death Certificate. The hospitalizations occurred in the state of Rio de Janeiro in 2006. The databases obtained from the Information Systems on Mortality and Hospitalization were probabilistically linked. For hospitalizations of elderly patients that ended in death from a natural cause, the following were calculated: concordance percentages, the Kappa coefficient, sensitivity, specificity, and the positive predictive value of the primary diagnosis. Deaths related to "ill-defined causes" were assigned a new cause, defined on the basis of the primary diagnosis. The reliability of the primary diagnosis was good according to the total percentage of consistency (50.2%) and fair according to the Kappa coefficient (k = 0.4; p < 0.0001). Diseases of the circulatory system and neoplasms occurred most frequently among the deaths and hospitalizations and presented higher consistency of positive predictive values per chapter and grouping of the International Classification of Diseases. The information on the underlying cause was recovered for 22.6% of the deaths with ill-defined causes (n = 14). The methodology developed and applied here for recovering information on the natural cause of death among the elderly had the advantages of effectiveness and reduced cost compared with the death investigation recommended in situations of non-linkage and low positive predictive values. Monitoring the mortality profile by cause of death is necessary to periodically update the predictive values.
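
    The validity measures reported above can be illustrated under the assumption of a simple 2x2 comparison of the primary diagnosis against a reference cause of death. The counts in the usage note are hypothetical, chosen only to exercise the standard formulas.

```python
def diagnostic_agreement(tp, fp, fn, tn):
    """Sensitivity, specificity, positive predictive value, and Cohen's kappa
    from a 2x2 table (true/false positives and negatives) comparing a
    candidate diagnosis against a reference cause."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                     # reference-positives detected
    spec = tn / (tn + fp)                     # reference-negatives rejected
    ppv = tp / (tp + fp)                      # positive predictive value
    po = (tp + tn) / n                        # observed agreement
    # chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, ppv, kappa
```

    For example, counts of 40/10/20/30 give kappa = 0.4, the same "fair" level the study reports for the primary diagnosis overall.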

  10. The 25 mA continuous-wave surface-plasma source of H⁻ ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belchenko, Yu., E-mail: belchenko@inp.nsk.su; Gorbovsky, A.; Sanin, A.

    An ion source with a Penning electrode geometry producing a continuous-wave beam of H⁻ ions with current up to 25 mA was developed. Several improvements were introduced to increase source intensity, reliability, and lifetime. A collar around the emission aperture improves electron filtering. The aperture diameters of the ion-optical system electrodes were enlarged to generate a higher-intensity beam. Electrode temperatures were also optimized.

  11. Biomechanical Behavior of Bioprosthetic Heart Valve Heterograft Tissues: Characterization, Simulation, and Performance

    PubMed Central

    Soares, Joao S.; Feaver, Kristen R.; Zhang, Will; Kamensky, David; Aggarwal, Ankush; Sacks, Michael S.

    2017-01-01

    The use of replacement heart valves continues to grow due to the increased prevalence of valvular heart disease resulting from an ageing population. Since bioprosthetic heart valves (BHVs) continue to be the preferred replacement valve, there continues to be a strong need to develop better and more reliable BHVs through an improved general understanding of BHV failure mechanisms. The major technological hurdle for the lifespan of the BHV implant continues to be the durability of the constituent leaflet biomaterials, which, if improved, can lead to substantial clinical impact. In order to develop improved solutions for BHV biomaterials, it is critical to have a better understanding of the inherent biomechanical behaviors of the leaflet biomaterials, including chemical treatment technologies, the impact of repetitive mechanical loading, and the inherent failure modes. This review seeks to provide a comprehensive overview of these issues, with a focus on developing insight into the mechanisms of BHV function and failure. Additionally, this review provides a detailed summary of the computational biomechanical simulations that have been used to inform and develop a higher level of understanding of BHV tissues and their failure modes. Collectively, this information should serve as a tool not only to infer reliable and dependable prosthesis function, but also to instigate and facilitate the design of future bioprosthetic valves and clinically impact cardiology. PMID:27507280

  12. Tightly-Coupled Integration of Multi-GNSS Single-Frequency RTK and MEMS-IMU for Enhanced Positioning Performance

    PubMed Central

    Li, Tuan; Zhang, Hongping; Niu, Xiaoji; Gao, Zhouzheng

    2017-01-01

    Dual-frequency Global Positioning System (GPS) Real-time Kinematics (RTK) has been proven in the past few years to be a reliable and efficient technique to obtain high accuracy positioning. However, there are still challenges for GPS single-frequency RTK, such as low reliability and ambiguity resolution (AR) success rate, especially in kinematic environments. Recently, multi-Global Navigation Satellite System (multi-GNSS) has been applied to enhance the RTK performance in terms of availability and reliability of AR. In order to further enhance the multi-GNSS single-frequency RTK performance in terms of reliability, continuity and accuracy, a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) is adopted in this contribution. We tightly integrate the single-frequency GPS/BeiDou/GLONASS and MEMS-IMU through the extended Kalman filter (EKF), which directly fuses the ambiguity-fixed double-differenced (DD) carrier phase observables and IMU data. A field vehicular test was carried out to evaluate the impacts of the multi-GNSS and IMU on the AR and positioning performance in different system configurations. Test results indicate that the empirical success rate of single-epoch AR for the tightly-coupled single-frequency multi-GNSS RTK/INS integration is over 99% even at an elevation cut-off angle of 40°, and the corresponding position time series is much more stable in comparison with the GPS solution. Besides, GNSS outage simulations show that continuous positioning with certain accuracy is possible due to the INS bridging capability when GNSS positioning is not available. PMID:29077070

  13. Reliability of laser Doppler, near-infrared spectroscopy and Doppler ultrasound for peripheral blood flow measurements during and after exercise in the heat.

    PubMed

    Choo, Hui C; Nosaka, Kazunori; Peiffer, Jeremiah J; Ihsan, Mohammed; Yeo, Chow C; Abbiss, Chris R

    2017-09-01

    This study examined the test-retest reliability of near-infrared spectroscopy (NIRS), laser Doppler flowmetry (LDF) and Doppler ultrasound for assessing exercise-induced haemodynamics. Nine men completed two identical trials consisting of 25 min of submaximal cycling at the first ventilatory threshold followed by repeated 30-s bouts of high-intensity (90% of peak power) cycling in 32.8 ± 0.4°C and 32 ± 5% relative humidity (RH). NIRS (tissue oxygenation index [TOI] and total haemoglobin [tHb]) and LDF (perfusion units [PU]) signals were monitored continuously during exercise, and leg blood flow was assessed by Doppler ultrasound at baseline and after exercise. Cutaneous vascular conductance (CVC; PU/mean arterial pressure [MAP]) was expressed as the percentage change from baseline (%CVC_BL). Coefficients of variation (CVs), as indicators of absolute reliability, were 18.7-28.4%, 20.2-33.1%, 42.5-59.8%, 7.8-12.4% and 22.2-30.3% for PU, CVC, %CVC_BL, TOI and tHb, respectively. CVs for these variables improved as exercise continued beyond 10 min. CVs for baseline and post-exercise leg blood flow were 17.8% and 10.5%, respectively. CVs for PU were not correlated with those for tHb (r² = 0.062) or TOI (r² = 0.002) (P > 0.05). Most variables demonstrated CVs lower than the expected changes (35%) induced by training or heat stress; however, a minimum of 10 min of exercise is recommended for more reliable measurements.

  14. Tightly-Coupled Integration of Multi-GNSS Single-Frequency RTK and MEMS-IMU for Enhanced Positioning Performance.

    PubMed

    Li, Tuan; Zhang, Hongping; Niu, Xiaoji; Gao, Zhouzheng

    2017-10-27

    Dual-frequency Global Positioning System (GPS) Real-time Kinematics (RTK) has been proven in the past few years to be a reliable and efficient technique to obtain high accuracy positioning. However, there are still challenges for GPS single-frequency RTK, such as low reliability and ambiguity resolution (AR) success rate, especially in kinematic environments. Recently, multi-Global Navigation Satellite System (multi-GNSS) has been applied to enhance the RTK performance in terms of availability and reliability of AR. In order to further enhance the multi-GNSS single-frequency RTK performance in terms of reliability, continuity and accuracy, a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) is adopted in this contribution. We tightly integrate the single-frequency GPS/BeiDou/GLONASS and MEMS-IMU through the extended Kalman filter (EKF), which directly fuses the ambiguity-fixed double-differenced (DD) carrier phase observables and IMU data. A field vehicular test was carried out to evaluate the impacts of the multi-GNSS and IMU on the AR and positioning performance in different system configurations. Test results indicate that the empirical success rate of single-epoch AR for the tightly-coupled single-frequency multi-GNSS RTK/INS integration is over 99% even at an elevation cut-off angle of 40°, and the corresponding position time series is much more stable in comparison with the GPS solution. Besides, GNSS outage simulations show that continuous positioning with certain accuracy is possible due to the INS bridging capability when GNSS positioning is not available.
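
    The extended Kalman filter fusion described above is far more elaborate than can be shown here, but the core measurement-update step it relies on can be illustrated with a minimal scalar Kalman update. This toy stand-in is not the paper's tightly-coupled DD-carrier-phase/IMU filter; it only shows how a predicted state is corrected by an observation weighted by their relative uncertainties.

```python
def kf_update(x, p, z, r):
    """One scalar Kalman measurement update: fuse predicted state x with
    variance p against observation z with noise variance r. Returns the
    corrected state and its reduced variance."""
    k = p / (p + r)                 # Kalman gain: trust in the measurement
    x_new = x + k * (z - x)        # correct the prediction by the innovation
    p_new = (1 - k) * p            # uncertainty shrinks after the update
    return x_new, p_new
```

    With equal prior and measurement variances the update lands halfway between prediction and observation, which is the intuition behind the INS "bridging" of GNSS outages: between fixes, the inertial prediction carries the state, and each new GNSS observation pulls it back.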

  15. Fracture Toughness and Reliability in High-Temperature Structural Ceramics and Composites: Prospects and Challenges for the 21st Century

    NASA Technical Reports Server (NTRS)

    Dutta, Sunil

    1999-01-01

    The importance of high fracture toughness and reliability in Si3N4- and SiC-based structural ceramics and ceramic matrix composites is reviewed. The potential of these ceramics and ceramic matrix composites for high-temperature defense and aerospace applications, such as gas turbine engines, radomes, and other energy conversion hardware, has been well recognized. Numerous investigations have pursued improved fracture toughness and reliability by incorporating various reinforcements, such as particulates, whiskers, and continuous fibers, into Si3N4 and SiC matrices. All toughening mechanisms, e.g. crack deflection, crack branching, and crack bridging, essentially redistribute stresses at the crack tip and increase the energy needed to propagate a crack through the composite material, thereby improving fracture toughness and reliability. Because of its flaw insensitivity, continuous fiber reinforced ceramic composite (CFCC) was found to have the highest potential for higher operating temperatures and longer service conditions. However, the ceramic fibers must display sufficient high-temperature strength and creep resistance at service temperatures above 1000 °C. The greatest challenge to date is the development of high-quality ceramic fibers with associated coatings able to maintain their high strength in oxidizing environments at high temperature. In the area of processing, the critical issues are preparation of optimum matrix precursors, precursor infiltration into the fiber array, and matrix densification at a temperature where grain crystallization and fiber degradation do not occur. A broad effort is required for improved processing and properties, along with a better understanding of all candidate composite systems.

  16. Interrater reliability to assure valid content in peer review of CME-accredited presentations.

    PubMed

    Quigg, Mark; Lado, Fred A

    2009-01-01

    The Accreditation Council for Continuing Medical Education (ACCME) provides guidelines for continuing medical education (CME) materials to mitigate problems in the independence or validity of content in certified activities; however, the process of peer review of materials appears largely unstudied, and the reproducibility of peer-review audits for ACCME accreditation and designation of American Medical Association Category 1 Credit™ is unknown. Categories of presentation defects were constructed from discussions of the CME committee of the American Epilepsy Society: (1) insufficient citation, (2) poor formatting, (3) nonacknowledgment of non-FDA-approved use, (4) misapplied data, (5) 1-sided data, (6) self- or institutional promotion, (7) conflict of interest/commercial bias, (8) other, or (9) no defect. A PowerPoint lecture (n = 29 slides) suitable for presentation to general neurologists was purposefully created with the above defects. A multirater, multilevel kappa statistic was determined from the number and category of defects. Of 14 reviewers, 12 returned completed surveys (86%), identifying a mean ± standard deviation of 1.6 ± 1.1 defects/slide. The interrater kappa equaled 0.115 (poor reliability) for the number of defects/slide. No individual category achieved kappa > 0.38. Interrater reliability in the rating of durable materials used in subspecialty CME was poor. Guidelines for appropriate CME content are too subjective to be applied reliably by raters knowledgeable in their specialty field but relatively untrained in the specifics of CME requirements. The process of peer review of CME materials would be aided by educating physicians on validation of materials appropriate for CME.
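
    A multirater kappa of the kind reported above is commonly computed as Fleiss' kappa; the sketch below assumes that convention (the paper's exact multilevel statistic may differ). The input is a table of how many raters placed each item (e.g. each slide) into each defect category.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for n raters classifying N items into k categories.
    `ratings[i][j]` is the number of raters assigning item i to category j;
    every item must be rated by the same number of raters."""
    N = len(ratings)
    n = sum(ratings[0])                       # raters per item
    k = len(ratings[0])
    # overall proportion of assignments falling in each category
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # per-item agreement: fraction of rater pairs that agree
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                      # mean observed agreement
    P_e = sum(p * p for p in p_j)             # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

    Perfect agreement yields kappa = 1, while values near 0.1, as in the study, indicate agreement barely above chance.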

  17. 42 CFR 422.1062 - Dismissal for cause.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Dismissal for cause. 422.1062 Section 422.1062 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE ADVANTAGE PROGRAM Appeal procedures for Civil Money...

  18. 42 CFR 422.1062 - Dismissal for cause.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Dismissal for cause. 422.1062 Section 422.1062 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE ADVANTAGE PROGRAM Appeal procedures for Civil Money...

  19. 42 CFR 422.1062 - Dismissal for cause.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Dismissal for cause. 422.1062 Section 422.1062 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) MEDICARE ADVANTAGE PROGRAM Appeal procedures for Civil Money...

  20. 40 CFR 264.93 - Hazardous constituents.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the ground-water quality; (vii) The potential for health risks caused by human exposure to waste... quality; (viii) The potential for health risks caused by human exposure to waste constituents; (ix) The... 264.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SOLID WASTES (CONTINUED...

  1. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares; the Mars mission must take all the needed spares at launch. The Mars mission must also achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. In addition, the Mars mission has a much higher launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed-unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
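
    The sparing logic described above can be sketched with a standard Poisson sparing model: if component failures arrive at a constant rate, mission success for one component position is the probability that the number of failures over the mission does not exceed the number of spares carried. This is a generic textbook model, not NASA's specific analysis.

```python
from math import exp, factorial

def p_success(lam, t, spares):
    """Probability that a component position survives a mission of length t
    when failures arrive as a Poisson process with rate lam and each
    failure consumes one spare: P(failures <= spares)."""
    mu = lam * t                               # expected failures over mission
    return sum(exp(-mu) * mu ** k / factorial(k) for k in range(spares + 1))
```

    With an expected one failure over the mission (lam * t = 1), carrying no spares gives only about 37% success, while three spares raises it above 98%, which is why sparing can buy ultra reliability at the cost of roughly doubling the mass.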

  2. Factors Influencing the Reliability of the Glasgow Coma Scale: A Systematic Review.

    PubMed

    Reith, Florence Cm; Synnot, Anneliese; van den Brande, Ruben; Gruen, Russell L; Maas, Andrew Ir

    2017-06-01

    The Glasgow Coma Scale (GCS) characterizes patients with diminished consciousness. In a recent systematic review, we found overall adequate reliability across different clinical settings, but reliability estimates varied considerably between studies, and methodological quality of studies was overall poor. Identifying and understanding factors that can affect its reliability is important, in order to promote high standards for clinical use of the GCS. The aim of this systematic review was to identify factors that influence reliability and to provide an evidence base for promoting consistent and reliable application of the GCS. A comprehensive literature search was undertaken in MEDLINE, EMBASE, and CINAHL from 1974 to July 2016. Studies assessing the reliability of the GCS in adults or describing any factor that influences reliability were included. Two reviewers independently screened citations, selected full texts, and undertook data extraction and critical appraisal. Methodological quality of studies was evaluated with the consensus-based standards for the selection of health measurement instruments checklist. Data were synthesized narratively and presented in tables. Forty-one studies were included for analysis. Factors identified that may influence reliability are education and training, the level of consciousness, and type of stimuli used. Conflicting results were found for experience of the observer, the pathology causing the reduced consciousness, and intubation/sedation. No clear influence was found for the professional background of observers. Reliability of the GCS is influenced by multiple factors and as such is context dependent. This review points to the potential for improvement from training and education and standardization of assessment methods, for which recommendations are presented. Copyright © 2017 by the Congress of Neurological Surgeons.

  3. Using Facility Condition Assessments to Identify Actions Related to Infrastructure

    NASA Technical Reports Server (NTRS)

    Rubert, Kennedy F.

    2010-01-01

    To support cost-effective, quality research, it is essential that laboratory and testing facilities be maintained in a continuous and reliable state of availability. NASA Langley Research Center (LaRC) and its maintenance contractor, Jacobs Technology, Inc. Research Operations, Maintenance, and Engineering (ROME) group, are in the process of implementing a combined Facility Condition Assessment (FCA) and Reliability Centered Maintenance (RCM) program to improve asset management and the overall reliability of testing equipment in facilities such as wind tunnels. Specific areas are being identified for improvement, the deferred maintenance cost is being estimated, and priority is being assigned to facilities where conditions have been allowed to deteriorate. This assessment helps determine where to commit available funds on the Center. RCM methodologies are being reviewed and enhanced to ensure that appropriate preventive, predictive, and facilities/equipment acceptance techniques are incorporated to prolong lifecycle availability and assure reliability at minimum cost. The results from the program have been favorable, better enabling LaRC to manage assets prudently.

  4. Product reliability and thin-film photovoltaics

    NASA Astrophysics Data System (ADS)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWh of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid-supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need to move away from relying on traditional qualification and safety tests as a measure of reliability and to focus instead on designing for reliability and integrating it into the product development process. A drive toward quantitative predictive accelerated testing is emphasized, and an industrial collaboration model addressing reliability challenges is proposed.

  5. Learning from Trending, Precursor Analysis, and System Failures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Youngblood, R. W.; Duffey, R. B.

    2015-11-01

    Models of reliability growth relate current system unreliability to currently accumulated experience. But “experience” comes in different forms. Looking back after a major accident, one is sometimes able to identify previous events or measurable performance trends that were, in some sense, signaling the potential for that major accident: potential that could have been recognized and acted upon, but was not recognized until the accident occurred. This could be a previously unrecognized cause of accidents, or underestimation of the likelihood that a recognized potential cause would actually operate. Despite improvements in the state of practice of modeling of risk and reliability, operational experience still has a great deal to teach us, and work has been going on in several industries to try to do a better job of learning from experience before major accidents occur. It is not enough to say that we should review operating experience; there is too much “experience” for such general advice to be considered practical. The paper discusses the following: 1. The challenge of deciding what to focus on in analysis of operating experience. 2. Comparing what different models of learning and reliability growth imply about trending and precursor analysis.

  6. Educating generalists: factors of resident continuity clinic associated with perceived impact on choosing a generalist career.

    PubMed

    Laponis, Ryan; O'Sullivan, Patricia S; Hollander, Harry; Cornett, Patricia; Julian, Katherine

    2011-12-01

    Fewer residents are choosing general internal medicine (GIM) careers, and their choice may be influenced by the continuity clinic experience during residency. We sought to explore the relationship between resident satisfaction with the continuity clinic experience and expressed interest in pursuing a GIM career. We surveyed internal medicine residents using the Veterans Health Administration Office of Academic Affiliations Learners' Perceptions Survey, a 76-item instrument with established reliability and validity that measures satisfaction with faculty interactions; learning, working, clinical, and physical environments; and personal experience. We identified 15 reliable subscales within the survey and asked participants whether their experience would prompt them to consider future employment opportunities in GIM. We examined the association between satisfaction measures and future GIM interest with 1-way analyses of variance followed by Student-Newman-Keuls post hoc tests. Of 217 residents, 90 (41%) completed the survey. Residents felt continuity clinic influenced career choice, with 22% more likely to choose a GIM career and 43% less likely. Those more likely to choose a GIM career had higher satisfaction with the learning (P = .001) and clinical (P = .002) environments and personal experience (P < .001). They also had higher satisfaction with learning processes (P = .002), patient diversity (P < .001), coordination of care (P = .009), workflow (P = .001), professional/personal satisfaction (P < .001), and work/life balance (P < .001). The continuity clinic experience may influence residents' GIM career choice. Residents who indicate they are more likely to pursue GIM based on that clinical experience have higher levels of satisfaction. Further prospective data are needed to assess whether changes in continuity clinic toward these particular factors can enhance career choice.

  7. The reliability-quality relationship for quality systems and quality risk management.

    PubMed

    Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M

    2012-01-01

    Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA), are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality.
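
    The opening definition (the probability of performing a required function for a stated time) can be made concrete under the simplest textbook assumption, a constant failure rate; this is an illustrative sketch, not a model from the commentary:

```python
import math

# Assumed constant-hazard (exponential) model: R(t) = exp(-lambda * t) is the
# probability a unit still performs its required function at time t when
# failures occur at constant rate lambda.
def reliability(failure_rate: float, t: float) -> float:
    return math.exp(-failure_rate * t)

def mtbf(failure_rate: float) -> float:
    # Mean time between failures for a constant hazard rate.
    return 1.0 / failure_rate

lam = 1e-4                        # failures per hour (hypothetical)
print(reliability(lam, 1000.0))   # ≈ 0.9048: survival probability over 1000 h
print(mtbf(lam))                  # 10000 h
```

    Under this model, the time-dependence that links reliability to quality risk management is explicit: quality assurance questions become statements about the failure rate over the product life cycle.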

  8. Reliability of physical examination for diagnosis of myofascial trigger points: a systematic review of the literature.

    PubMed

    Lucas, Nicholas; Macaskill, Petra; Irwig, Les; Moran, Robert; Bogduk, Nikolai

    2009-01-01

    Trigger points are promoted as an important cause of musculoskeletal pain. There is no accepted reference standard for the diagnosis of trigger points, and data on the reliability of physical examination for trigger points are conflicting. To systematically review the literature on the reliability of physical examination for the diagnosis of trigger points. MEDLINE, EMBASE, and other sources were searched for articles reporting the reliability of physical examination for trigger points. Included studies were evaluated for their quality and applicability, and reliability estimates were extracted and reported. Nine studies were eligible for inclusion. None satisfied all quality and applicability criteria. No study specifically reported reliability for the identification of the location of active trigger points in the muscles of symptomatic participants. Reliability estimates varied widely for each diagnostic sign, for each muscle, and across each study. Reliability estimates were generally higher for subjective signs such as tenderness (kappa range, 0.22 to 1.00) and pain reproduction (kappa range, 0.57 to 1.00), and lower for objective signs such as the taut band (kappa range, -0.08 to 0.75) and local twitch response (kappa range, -0.05 to 0.57). No study to date has reported the reliability of trigger point diagnosis according to the currently proposed criteria. On the basis of the limited number of studies available, and significant problems with their design, reporting, statistical integrity, and clinical applicability, physical examination cannot currently be recommended as a reliable test for the diagnosis of trigger points. The reliability of trigger point diagnosis needs to be further investigated with studies of high quality that use current diagnostic criteria in clinically relevant patients.
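
    The kappa values quoted above measure inter-examiner agreement corrected for chance. A minimal sketch of Cohen's kappa for two raters (the labels below are hypothetical, not data from the review):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels: (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b)
    n = len(a)
    cats = sorted(set(a) | set(b))
    # Observed agreement and chance-expected agreement.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical "trigger point present?" calls by two examiners on 8 sites.
rater1 = [1, 1, 0, 1, 0, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.5
```

    Kappa near 0 means agreement no better than chance (hence negative values in the taut-band range above are possible), while 1.0 means perfect agreement.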

  9. Reliable aluminum contact formation by electrostatic bonding

    NASA Astrophysics Data System (ADS)

    Kárpáti, T.; Pap, A. E.; Radnóczi, Gy; Beke, B.; Bársony, I.; Fürjes, P.

    2015-07-01

    The paper presents a detailed study of a reliable method developed for aluminum fusion wafer bonding assisted by the electrostatic force evolving during the anodic bonding process. The IC-compatible procedure described allows the parallel formation of electrical and mechanical contacts, facilitating a reliable packaging of electromechanical systems with backside electrical contacts. This fusion bonding method supports the fabrication of complex microelectromechanical systems (MEMS) and micro-opto-electromechanical systems (MOEMS) structures with enhanced temperature stability, which is crucial in mechanical sensor applications such as pressure or force sensors. Due to the applied electrical potential of -1000 V, the Al metal layers are compressed by electrostatic force, and at the bonding temperature of 450 °C intermetallic diffusion causes aluminum ions to migrate between metal layers.

  10. Unique reliability characteristics of fully depleted silicon-on-insulator tunneling FET

    NASA Astrophysics Data System (ADS)

    Kang, Soo Cheol; Lim, Donghwan; Lim, Sung Kwan; Noh, Jinwoo; Kim, Seung-Mo; Lee, Sang Kyung; Choi, Changhwan; Lee, Byoung Hun

    2018-04-01

    This study investigated the unique reliability characteristics of tunneling field effect transistors (TFETs) by comparing the effects of positive bias temperature instability (PBTI) and hot carrier injection (HCI) stresses. In the case of HCI stress, interface trap generation near the p/n+ region was the primary degradation mechanism. However, strong recovery after a high-pressure hydrogen anneal and weak degradation at low temperature indicate that the degradation mechanism of the TFET under HCI stress differs from the permanent defect generation induced by high-energy carrier stress observed in MOSFETs. Further study is necessary to identify the exact location and defect species causing TFET degradation; however, a significant difference is evident between the dominant reliability mechanisms of the TFET and the MOSFET.

  11. Helicobacter pylori therapy: a paradigm shift

    PubMed Central

    Graham, David Y; Dore, Maria Pina

    2016-01-01

    Helicobacter pylori (H. pylori) is a leading cause of gastroduodenal disease, including gastric cancer. H. pylori eradication therapies and their efficacy are summarized. A number of current treatment regimens will reliably yield >90% or 95% cure rates with susceptible strains. None has proven to be superior. We show how to predict the efficacy of a regimen in any population, provided one knows the prevalence of antibiotic resistance. As with other infectious diseases, therapy should always be susceptibility-based. Susceptibility testing should be demanded. We provide recommendations for empiric therapies when they are the only option and describe how to distinguish studies providing misinformation from those providing reliable and interpretable data. When H. pylori infection is treated as an infectious disease, high cure rates are relatively simple to achieve reliably. PMID:27077447

  12. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness-related traits and infer the adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population-specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on the tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.

  13. 46 CFR 169.621 - Communications.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Communications. 169.621 Section 169.621 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.621 Communications. A reliable means of voice communications must be...

  14. 46 CFR 61.40-1 - General.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...

  15. 46 CFR 61.40-1 - General.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...

  16. 46 CFR 61.40-1 - General.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...

  17. 46 CFR 61.40-1 - General.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING PERIODIC TESTS AND INSPECTIONS Design... tests and inspections to evaluate the operation and reliability of controls, alarms, safety features... designated by the owner of the vessel shall conduct all tests and the Design Verification and Periodic Safety...

  18. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS... automated system in establishing initial manning levels; however, until the system is proven reliable, a manning level adequate to operate in a continuously attended mode will be specified on a vessel's COI. It...

  19. 15 CFR 801.1 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF ECONOMIC ANALYSIS, DEPARTMENT OF COMMERCE SURVEY OF INTERNATIONAL TRADE IN SERVICES BETWEEN U.S. AND FOREIGN PERSONS... overall purpose of the Act with respect to services trade is to provide comprehensive and reliable...

  20. 15 CFR 801.1 - Purpose.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... and Foreign Trade Regulations Relating to Commerce and Foreign Trade (Continued) BUREAU OF ECONOMIC ANALYSIS, DEPARTMENT OF COMMERCE SURVEY OF INTERNATIONAL TRADE IN SERVICES BETWEEN U.S. AND FOREIGN PERSONS... overall purpose of the Act with respect to services trade is to provide comprehensive and reliable...

  1. "vocd": A Theoretical and Empirical Evaluation

    ERIC Educational Resources Information Center

    McCarthy, Philip M.; Jarvis, Scott

    2007-01-01

    A reliable index of lexical diversity (LD) has remained stubbornly elusive for over 60 years. Meanwhile, researchers in fields as varied as "stylistics," "neuropathology," "language acquisition," and even "forensics" continue to use flawed LD indices--often ignorant that their results are questionable and in…

  2. 46 CFR 169.621 - Communications.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Communications. 169.621 Section 169.621 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.621 Communications. A reliable means of voice communications must be...

  3. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS... automated system in establishing initial manning levels; however, until the system is proven reliable, a manning level adequate to operate in a continuously attended mode will be specified on a vessel's COI. It...

  4. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS... automated system in establishing initial manning levels; however, until the system is proven reliable, a manning level adequate to operate in a continuously attended mode will be specified on a vessel's COI. It...

  5. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS... automated system in establishing initial manning levels; however, until the system is proven reliable, a manning level adequate to operate in a continuously attended mode will be specified on a vessel's COI. It...

  6. 46 CFR 15.715 - Automated vessels.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY MERCHANT MARINE OFFICERS AND SEAMEN MANNING REQUIREMENTS... automated system in establishing initial manning levels; however, until the system is proven reliable, a manning level adequate to operate in a continuously attended mode will be specified on a vessel's COI. It...

  7. 46 CFR 169.621 - Communications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Communications. 169.621 Section 169.621 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.621 Communications. A reliable means of voice communications must be...

  8. 46 CFR 169.621 - Communications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Communications. 169.621 Section 169.621 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.621 Communications. A reliable means of voice communications must be...

  9. Quantitative Nondestructive Evaluation

    DTIC Science & Technology

    1979-10-01

    reliability has been discussed by a number of researchers, including Pachman et al. [25,28], Hastings [29], Ehret [30], Kaplan and Reiman [31], and ... REFERENCES (Continued): 31. Kaplan, M.P. and Reiman, J.A., "Use of Fracture Mechanics in Estimating Structural Life and Inspection Intervals"

  10. Case Studies in Continuous Process Improvement

    NASA Technical Reports Server (NTRS)

    Mehta, A.

    1997-01-01

    This study focuses on improving the SMT assembly process in a low-volume, high-reliability environment with emphasis on fine pitch and BGA packages. Before a process improvement is carried out, it is important to evaluate where the process stands in terms of process capability.

  11. Program for improved electrical harness documentation and fabrication

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Computer program provides automated print-out of harness interconnection table and automated cross-check of reciprocal pin/connector assignments, and improves accuracy and reliability of final documented data. Programs and corresponding library tapes are successfully and continuously employed on Nimbus spacecraft programs.

  12. Microvascular temporalis fascia transfer for penile girth enhancement.

    PubMed

    Küçükçelebi, A; Ertaş, N M; Aydin, A; Eroğlu, A; Ozmen, E; Velidedeoğlu, H

    2001-07-01

    The authors report a 44-year-old man with inadequate penile girth that caused psychological problems. Using microvascular temporalis fascia transfer, they achieved satisfactory penile girth enhancement based on reliable vascularity in a single stage.

  13. A Statistical Perspective on Highly Accelerated Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Edward V.

    Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. It is assumed that the dominant failure mechanism can be accelerated by changing the level of a stress factor that is assumed to be related to the dominant failure mode. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single highly accelerated stress relative to normal use. If no (or, sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge associated with the relationship between the stress level and the probability of failure.
The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack-of-knowledge and uncertainty concerning the assumed relationship between the stress level and performance. In addition, this document contains recommendations for conducting more informative accelerated tests.
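
    The "zero failures at high stress" claim criticized above rests on the classical binomial success-run calculation, which can be sketched as follows; note that it omits exactly what the document questions, the acceleration relationship between stress level and use-condition life (all numbers hypothetical):

```python
def demonstrated_reliability(n_units: int, confidence: float) -> float:
    """Lower confidence bound on per-unit reliability after n_units tested
    with zero failures: solve R**n = 1 - C for R (binomial success-run formula)."""
    return (1.0 - confidence) ** (1.0 / n_units)

# Ten units surviving the stress "demonstrate" only ~79% reliability at 90%
# confidence -- and that is before any acceleration-factor uncertainty.
print(round(demonstrated_reliability(10, 0.90), 3))  # → 0.794
```

    Even taken at face value, the bound applies at the stress condition; translating it to normal use requires the acceleration model whose uncertainty the document argues undermines quantitative HALT inferences.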

  14. Long-term blood glucose monitoring with implanted telemetry device in conscious and stress-free cynomolgus monkeys.

    PubMed

    Wang, B; Sun, G; Qiao, W; Liu, Y; Qiao, J; Ye, W; Wang, H; Wang, X; Lindquist, R; Wang, Y; Xiao, Y-F

    2017-09-01

    Continuous blood glucose monitoring, especially long-term and remote, in diabetic patients or in research is very challenging. The nonhuman primate (NHP) is an excellent model for metabolic research, because NHPs can naturally develop Type 2 diabetes mellitus (T2DM) similarly to humans. This study investigated blood glucose changes in conscious, freely moving cynomolgus monkeys (Macaca fascicularis) during circadian cycles, meals, stress and drug exposure. Blood glucose, body temperature and physical activity were continuously and simultaneously recorded by an implanted HD-XG telemetry device for up to 10 weeks. Blood glucose circadian changes in normoglycemic monkeys differed significantly from those in diabetic animals. The postprandial glucose increase was more obvious after afternoon feeding. Moving a monkey from its housing cage to a monkey chair increased blood glucose by 30% in both normoglycemic and diabetic monkeys. Such increases in blood glucose declined to the pre-procedure level within 30 min in normoglycemic animals but took >2 h in diabetic monkeys. The oral gavage procedure alone caused hyperglycemia in both normoglycemic and diabetic monkeys. Intravenous injection of the stress hormones angiotensin II (2 μg/kg) or norepinephrine (0.4 μg/kg) also increased blood glucose levels by 30%. The glucose levels measured by the telemetry system correlated significantly well with glucometer readings during glucose tolerance tests (ivGTT or oGTT), insulin tolerance test (ITT), graded glucose infusion (GGI) and clamp. Our data demonstrate that the real-time telemetry method is reliable for monitoring blood glucose remotely and continuously in conscious, stress-free, freely moving NHPs, with advantages highly valuable to diabetes research and drug discovery.

  15. Reliability analysis of the solar array based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Jianing, Wu; Shaoze, Yan

    2011-07-01

    The solar array is an important device used in spacecraft; it influences the quality of in-orbit operation and even the success of the launch. This paper analyzes the reliability of the mechanical system and identifies the most critical subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structure importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant to preventing faults. Furthermore, recommendations for improving reliability associated with damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.
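
    Once the top event's Boolean expression is in hand, its probability follows from the gate structure. A minimal sketch with hypothetical events and probabilities (not the DFH-3 data), assuming independent basic events:

```python
# OR gate: the event occurs unless every input survives.
def p_or(*ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# AND gate: all inputs must fail together.
def p_and(*ps):
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical basic-event probabilities for one hinge: seal fault,
# insufficient locking-spring torque, excessive friction.
p_seal, p_spring, p_friction = 0.01, 0.005, 0.002
p_hinge = p_or(p_seal, p_spring, p_friction)   # one hinge fails
p_top = p_or(p_hinge, p_hinge)                 # top event: either of two hinges fails
print(round(1.0 - p_top, 4))                   # array reliability → 0.9664
```

    Structure importance then ranks basic events by how much the top-event probability moves when each event is toggled, which is how the hinge faults above would be prioritized.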

  16. The Typical General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Turnbull, Andrew

    1999-01-01

    The reliability of General Aviation aircraft is unknown. In order to "assist the development of future GA reliability and safety requirements", a reliability study needs to be performed. Before any studies on General Aviation aircraft reliability begin, a definition of a typical aircraft that encompasses most general aviation characteristics needs to be established. In this report, the typical general aviation aircraft is not only defined for the purpose of the follow-on reliability study, but is also separated, or "sifted", into several different categories where individual analysis can be performed on the reasonably independent systems. In this study, the typical General Aviation aircraft is a four-place, single engine piston, all-aluminum fixed-wing certified aircraft with a fixed tricycle landing gear and a cable-operated flight control system. The system breakdown of a GA aircraft "sifts" the aircraft systems and components into five categories: Powerplant, Airframe, Aircraft Control Systems, Cockpit Instrumentation Systems, and Electrical Systems. This breakdown was performed along the lines of system failure: any component that caused a system to fail was considered a part of that system.

  17. A new approach to power quality and electricity reliability monitoring-case study illustrations of the capabilities of the I-GridTM system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divan, Deepak; Brumsickle, William; Eto, Joseph

    2003-04-01

    This report describes a new approach for collecting information on power quality and reliability and making it available in the public domain. Making this information readily available in a form that is meaningful to electricity consumers is necessary for enabling more informed private and public decisions regarding electricity reliability. The system dramatically reduces the cost (and expertise) needed for customers to obtain information on the most significant power quality events, called voltage sags and interruptions. The system also offers widespread access to information on power quality collected from multiple sites and the potential for capturing information on the impacts of power quality problems, together enabling a wide variety of analysis and benchmarking to improve system reliability. Six case studies demonstrate selected functionality and capabilities of the system, including: linking measured power quality events to process interruption and downtime; demonstrating the ability to correlate events recorded by multiple monitors to narrow and confirm the causes of power quality events; and benchmarking power quality and reliability on a firm and regional basis.

  18. Effect of bow-type initial imperfection on reliability of minimum-weight, stiffened structural panels

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Krishnamurthy, Thiagaraja; Sykes, Nancy P.; Elishakoff, Isaac

    1993-01-01

    Computations were performed to determine the effect of an overall bow-type imperfection on the reliability of structural panels under combined compression and shear loadings. A panel's reliability is the probability that it will perform the intended function - in this case, carry a given load without buckling or exceeding in-plane strain allowables. For a panel loaded in compression, a small initial bow can cause large bending stresses that reduce both the buckling load and the load at which strain allowables are exceeded; hence, the bow reduces the reliability of the panel. In this report, analytical studies on two stiffened panels quantified that effect. The bow is in the shape of a half-sine wave along the length of the panel. The size e of the bow at panel midlength is taken to be the single random variable. Several probability density distributions for e are examined to determine the sensitivity of the reliability to details of the bow statistics. In addition, the effects of quality control are explored with truncated distributions.
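
    With the bow amplitude e at midlength as the single random variable, the panel's reliability is the probability that the limit state is not exceeded. A minimal Monte Carlo sketch of that idea, with a toy limit state standing in for the report's buckling and strain analyses (distribution and threshold are hypothetical):

```python
import random

def panel_reliability(n_samples: int, e_sigma: float, e_crit: float,
                      seed: int = 0) -> float:
    """Estimate P(no failure) when the bow amplitude e is random.

    Toy model: e is drawn half-normal with scale e_sigma, and the panel
    'fails' whenever e exceeds the critical amplitude e_crit at which
    bow-induced bending would exhaust the buckling/strain margin.
    """
    rng = random.Random(seed)
    survive = 0
    for _ in range(n_samples):
        e = abs(rng.gauss(0.0, e_sigma))  # sampled bow size
        if e < e_crit:                    # toy limit-state check
            survive += 1
    return survive / n_samples

print(panel_reliability(100_000, e_sigma=0.05, e_crit=0.1))
```

    Swapping in the truncated distributions mentioned in the report amounts to changing the sampling line, which is how the sensitivity of reliability to the bow statistics and to quality-control truncation would be explored.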

  19. Effects of long-term continuous cropping on soil nematode community and soil condition associated with replant problem in strawberry habitat

    PubMed Central

    Li, Xingyue; Lewis, Edwin E.; Liu, Qizhi; Li, Heqin; Bai, Chunqi; Wang, Yuzhu

    2016-01-01

    Continuous cropping changes soil physicochemical parameters, enzymes and microorganism communities, causing the “replant problem” in strawberry cultivation. We hypothesized that the soil nematode community would reflect the changes in soil conditions caused by long-term continuous cropping, in ways that are consistent and predictable. To test this hypothesis, we studied the soil nematode communities and several soil parameters, including the concentration of soil phenolic acids, organic matter and nitrogen levels, in strawberry greenhouses under continuous cropping for five different durations. Soil pH significantly decreased, and four phenolic acids, i.e., p-hydroxybenzoic acid, ferulic acid, cinnamic acid and p-coumaric acid, accumulated with time under continuous cropping. The four phenolic acids were highly toxic to Acrobeloides spp., the eudominant genus under non-continuous cropping, causing it to be reduced to a resident genus after seven years of continuous cropping. Decreased nematode diversity indicated loss of ecosystem stability and sustainability because of the continuous-cropping practice. Moreover, the dominant decomposition pathway was altered from bacterial to fungal under continuous cropping. Our results suggest that over the course of continuous cropping in strawberry habitat, the soil food web is disturbed, and the available plant nutrition as well as the general health of the soil deteriorates; these changes can be indicated by the soil nematode community. PMID:27506379

  20. Sliding into happiness: A new tool for measuring affective responses to words.

    PubMed

    Warriner, Amy Beth; Shore, David I; Schmidt, Louis A; Imbault, Constance L; Kuperman, Victor

    2017-03-01

    Reliable measurement of affective responses is critical for research into human emotion. Affective evaluation of words is most commonly gauged on multiple dimensions, including valence (positivity) and arousal, using a rating scale. Despite its popularity, this scale is open to criticism: it generates ordinal data that are often misinterpreted as interval, it does not provide the fine resolution that recent theoretical accounts of emotion deem essential, and its extremes may not be properly calibrated. In 5 experiments, the authors introduce a new slider tool for affective evaluation of words on a continuous, well-calibrated and high-resolution scale. In Experiment 1, participants were shown a word and asked to move a manikin representing themselves closer to or farther away from the word. The manikin's distance from the word strongly correlated with the word's valence. In Experiment 2, individual differences in shyness and sociability elicited reliable differences in distance from the words. Experiment 3 validated the results of Experiments 1 and 2 using a demographically more diverse population of responders. Finally, Experiment 4 (along with Experiment 2) suggested that task demand is not a potential cause for scale recalibration. In Experiment 5, men and women placed a manikin closer to or farther from words that showed sex differences in valence, highlighting the sensitivity of this measure to group differences. These findings shed new light on interactions among affect, language, and individual differences, and demonstrate the utility of a new tool for measuring word affect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Specialized data analysis for the Space Shuttle Main Engine and diagnostic evaluation of advanced propulsion system components

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Marshall Space Flight Center is responsible for the development and management of advanced launch vehicle propulsion systems, including the Space Shuttle Main Engine (SSME), which is presently operational, and the Space Transportation Main Engine (STME) under development. The SSMEs provide high performance within stringent constraints on size, weight, and reliability. Based on operational experience, continuous design improvement is in progress to enhance system durability and reliability. Specialized data analysis and interpretation is required in support of SSME and advanced propulsion system diagnostic evaluations. Comprehensive evaluation of the dynamic measurements obtained from test and flight operations is necessary to provide timely assessment of the vibrational characteristics indicating the operational status of turbomachinery and other critical engine components. Efficient performance of this effort is critical due to the significant impact of dynamic evaluation results on ground test and launch schedules, and requires direct familiarity with SSME and derivative systems, test data acquisition, and diagnostic software. Detailed analysis and evaluation of dynamic measurements obtained during SSME and advanced system ground test and flight operations was performed, including analytical/statistical assessment of component dynamic behavior, and the development and implementation of analytical/statistical models to efficiently define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational condition. In addition, the SSME and J-2 data will be applied to develop vibroacoustic environments for advanced propulsion system components, as required. This study will provide timely assessment of engine component operational status, identify probable causes of malfunction, and indicate feasible engineering solutions. This contract will be performed through accomplishment of negotiated task orders.

  2. Parasacral Perforator Flaps for Reconstruction of Sacral Pressure Sores.

    PubMed

    Lin, Chin-Ta; Chen, Shih-Yi; Chen, Shyi-Gen; Tzeng, Yuan-Sheng; Chang, Shun-Cheng

    2015-07-01

    Despite advances in reconstruction techniques, pressure sores continue to present a challenge to the plastic surgeon. The parasacral perforator flap is a reliable flap that preserves the entire contralateral side as a future donor site. On the ipsilateral side, the gluteal muscle itself is preserved and all flaps based on the inferior gluteal artery are still possible. We present our experience of using parasacral perforator flaps in reconstructing sacral defects. Between August 2004 and January 2013, 19 patients with sacral defects were included in this study. All the patients had undergone surgical reconstruction of sacral defects with a parasacral perforator flap. The patients' sex, age, cause of sacral defect, flap size, flap type, numbers of perforators used, rotation angle, postoperative complications, and hospital stay were recorded. There were 19 parasacral perforator flaps in this series. All flaps survived uneventfully except for 1 parasacral perforator flap, which failed because of methicillin-resistant Staphylococcus aureus infection. The overall flap survival rate was 95% (18/19). The mean follow-up period was 17.3 months (range, 2-24 months). The average length of hospital stay was 20.7 days (range, 9-48 days). No flap surgery-related mortality was found. Also, there was no recurrence of sacral pressure sores or infected pilonidal cysts during the follow-up period. Perforator-based flaps have become popular in modern reconstructive surgery because of low donor-site morbidity and good preservation of muscle. Parasacral perforator flaps are durable and reliable in reconstructing sacral defects. We recommend the parasacral perforator flap as a good choice for reconstructing sacral defects.

  3. The applications of statistical quantification techniques in nanomechanics and nanoelectronics.

    PubMed

    Mai, Wenjie; Deng, Xinwei

    2010-10-08

Although nanoscience and nanotechnology have been developing for approximately two decades and have achieved numerous breakthroughs, the experimental results from nanomaterials, with their higher noise level and poorer repeatability than those from bulk materials, remain a practical issue and challenge many techniques for the quantification of nanomaterials. This work proposes a physical-statistical modeling approach and a global-fitting statistical method that use all the available discrete data or quasi-continuous curves to quantify a few targeted physical parameters, which can provide more accurate, efficient and reliable parameter estimates, and give reasonable physical explanations. In the resonance method for measuring the elastic modulus of ZnO nanowires (Zhou et al 2006 Solid State Commun. 139 222-6), our statistical technique gives E = 128.33 GPa instead of the original E = 108 GPa, and unveils a negative bias adjustment f0; the cause is suggested to be a systematic bias in measuring the length of the nanowires. In the electronic measurement of the resistivity of a Mo nanowire (Zach et al 2000 Science 290 2120-3), the proposed new method automatically identified the importance of accounting for the Ohmic contact resistance in the model of Ohmic behavior in nanoelectronics experiments. The 95% confidence interval of resistivity in the proposed one-step procedure is determined to be (3.57 ± 0.0274) × 10^-5 Ω cm, which should be a more reliable and precise estimate. The statistical quantification technique should find wide applications in obtaining better estimates in the presence of the various systematic errors and biased effects that become more significant at the nanoscale.
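The contact-resistance point in this abstract lends itself to a small numerical illustration. The sketch below uses purely synthetic numbers (the slope, intercept, wire lengths, and noise level are invented, not the paper's data) to show how fitting resistance against wire length separates the resistivity term (slope) from the Ohmic contact term (intercept), whereas a one-parameter fit that ignores the contact resistance is biased:

```python
import numpy as np

# Hypothetical model: measured resistance of a nanowire of length L is
#   R_total = (rho / A) * L + R_contact
# so a linear fit over several lengths recovers rho/A (slope) and the
# contact resistance (intercept). All numbers are invented for illustration.

rho_over_A = 2.0e8   # ohm per meter (assumed slope, rho/A)
R_contact = 150.0    # ohm (assumed contact resistance)

L = np.linspace(1e-6, 10e-6, 20)                 # wire lengths (m)
rng = np.random.default_rng(0)
R_meas = rho_over_A * L + R_contact + rng.normal(0, 0.5, L.size)

# Two-parameter least-squares fit: slope estimates rho/A, intercept R_c
slope, intercept = np.polyfit(L, R_meas, 1)

# One-parameter fit through the origin (ignores R_c) is biased upward
slope_biased = np.sum(R_meas * L) / np.sum(L * L)

print(f"slope ≈ {slope:.3g}, intercept ≈ {intercept:.3g}")
print(f"one-parameter slope (bias from ignoring R_c) ≈ {slope_biased:.3g}")
```

The bias of the one-parameter fit grows with the ratio of contact resistance to total resistance, which is why the effect matters at the nanoscale.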

  4. The DSM-III personality disorders section: a commentary.

    PubMed

    Frances, A

    1980-09-01

    The author reviews the DSM-III section on personality disorders, discusses several of its more controversial diagnoses, and suggests some possible alternatives. He attributes the continued low reliability of personality diagnoses, compared with the other major sections of DSM-III, to two inherent obstacles: the lack of clear boundaries demarcating the personality disorders from normality and from one another, and the confounding influence of state and role factors. Nonetheless, the DSM-III multiaxial system highlights the importance of personality diagnosis and, together with the provision of clearly specified diagnostic criteria, achieves a considerably improved reliability compared with previous nomenclatures.

  5. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1986-01-01

The use and implementation of Ada in distributed environments in which reliability is the primary concern were investigated. A distributed system, programmed entirely in Ada, was studied to assess the use of individual tasks without concern for the processor used. Continued development and testing of the fault-tolerant Ada testbed; development of suggested changes to Ada to cope with the failures of interest; design of approaches to fault-tolerant software in real-time systems, and the integration of these ideas into Ada; and the preparation of various papers and presentations were discussed.

  6. How to select a continuous emission monitoring system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radigan, M.J.

    1994-02-01

Selecting a continuous emission monitoring system (CEMS) involves more than picking an analyzer. A successful CEMS interfaces sampling and data-management systems to produce the accurate, reliable reports required by regulatory agencies. Following objective guidelines removes some of the misery from CEMS shopping. However, prospective CEMS buyers should do their homework and develop well-thought-out, detailed specifications for the process's sampling criteria. Fine-tuning the analyzer/data-management system can reduce maintenance costs and keep the facility operating within its permit restrictions.

  7. Quantification and scaling of multipartite entanglement in continuous variable systems.

    PubMed

    Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio

    2004-11-26

    We present a theoretical method to determine the multipartite entanglement between different partitions of multimode, fully or partially symmetric Gaussian states of continuous variable systems. For such states, we determine the exact expression of the logarithmic negativity and show that it coincides with that of equivalent two-mode Gaussian states. Exploiting this reduction, we demonstrate the scaling of the multipartite entanglement with the number of modes and its reliable experimental estimate by direct measurements of the global and local purities.
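As a numerical companion to this reduction, the logarithmic negativity of a Gaussian state can be computed from its covariance matrix via the symplectic spectrum of the partial transpose. The sketch below does this for a two-mode squeezed vacuum state, for which the closed form E_N = 2r provides a check; the conventions (vacuum covariance equal to the identity, natural logarithm) and the code itself are illustrative assumptions, not material from the paper:

```python
import numpy as np

def log_negativity(sigma):
    """Logarithmic negativity of a two-mode Gaussian state with covariance
    matrix sigma (quadrature ordering x1, p1, x2, p2; vacuum = identity)."""
    # Partial transposition acts as time reversal of mode 2: flip sign of p2
    P = np.diag([1.0, 1.0, 1.0, -1.0])
    sigma_pt = P @ sigma @ P
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.block([[J, np.zeros((2, 2))], [np.zeros((2, 2)), J]])
    # Symplectic eigenvalues are the moduli of the eigenvalues of i*Omega*sigma
    nu = np.abs(np.linalg.eigvals(1j * Omega @ sigma_pt))
    return max(0.0, -np.log(nu.min()))

# Two-mode squeezed vacuum with squeezing parameter r
r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
sigma = np.array([[c, 0, s, 0],
                  [0, c, 0, -s],
                  [s, 0, c, 0],
                  [0, -s, 0, c]])
print(log_negativity(sigma))   # closed form for this state gives 2r
```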

  8. Micromechanical analysis of thermo-inelastic multiphase short-fiber composites

    NASA Technical Reports Server (NTRS)

    Aboudi, Jacob

    1994-01-01

    A micromechanical formulation is presented for the prediction of the overall thermo-inelastic behavior of multiphase composites which consist of short fibers. The analysis is an extension of the generalized method of cells that was previously derived for inelastic composites with continuous fibers, and the reliability of which was critically examined in several situations. The resulting three dimensional formulation is extremely general, wherein the analysis of thermo-inelastic composites with continuous fibers as well as particulate and porous inelastic materials are merely special cases.

  9. Software For Fault-Tree Diagnosis Of A System

    NASA Technical Reports Server (NTRS)

    Iverson, Dave; Patterson-Hine, Ann; Liao, Jack

    1993-01-01

Fault Tree Diagnosis System (FTDS) is an automated-diagnostic-system computer program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan's and Viswanadham's methodology for knowledge acquisition and reasoning in analyzing failures of systems. The knowledge base of if/then rules was replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of the causes of failures and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.
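The object-oriented fault-tree representation the record describes can be sketched in miniature. FTDS itself was written in C, C++, and Common LISP; the Python classes and the example pump tree below are illustrative assumptions, showing only how AND/OR gates yield a top-event probability and how basic events can be ranked as candidate causes:

```python
# Minimal object-oriented fault-tree sketch (independent basic events assumed).
# The component names and probabilities are invented for illustration.

class BasicEvent:
    def __init__(self, name, prob):
        self.name, self.prob = name, prob
    def probability(self):
        return self.prob
    def basic_events(self):
        return [self]

class Gate:
    def __init__(self, name, kind, children):
        self.name, self.kind, self.children = name, kind, children
    def probability(self):
        ps = [c.probability() for c in self.children]
        if self.kind == "AND":            # all children must fail
            out = 1.0
            for p in ps:
                out *= p
            return out
        out = 1.0                          # OR: at least one child fails
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    def basic_events(self):
        return [e for c in self.children for e in c.basic_events()]

# Example: the pump fails if power is lost OR both redundant seals fail
power = BasicEvent("power loss", 0.01)
seal_a = BasicEvent("seal A wear", 0.05)
seal_b = BasicEvent("seal B wear", 0.05)
top = Gate("pump failure", "OR", [power, Gate("seals", "AND", [seal_a, seal_b])])

print(f"top-event probability: {top.probability():.6f}")
# Rank basic events as candidate causes by failure probability
likely = max(top.basic_events(), key=lambda e: e.prob)
print("most likely single cause:", likely.name)
```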

  10. Selection and Reporting of Statistical Methods to Assess Reliability of a Diagnostic Test: Conformity to Recommended Methods in a Peer-Reviewed Journal

    PubMed Central

    Park, Ji Eun; Han, Kyunghwa; Sung, Yu Sub; Chung, Mi Sun; Koo, Hyun Jung; Yoon, Hee Mang; Choi, Young Jun; Lee, Seung Soo; Kim, Kyung Won; Shin, Youngbin; An, Suah; Cho, Hyo-Min

    2017-01-01

Objective To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Materials and Methods Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Results Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Conclusion Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary. PMID:29089821
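Two of the statistics this review audits, Cohen's kappa for nominal ratings and weighted kappa for ordinal ratings, are easy to misapply. A minimal sketch of the standard formulas follows; the two rating lists are invented purely to exercise the code:

```python
import numpy as np

# Cohen's kappa (nominal categories) and quadratic-weighted kappa (ordinal
# categories, larger disagreements penalized more). Standard formulas;
# the ratings below are made-up illustration data.

def cohens_kappa(r1, r2, categories):
    n, k = len(r1), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    conf = np.zeros((k, k))
    for a, b in zip(r1, r2):
        conf[idx[a], idx[b]] += 1
    po = np.trace(conf) / n                   # observed agreement
    pe = (conf.sum(0) @ conf.sum(1)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

def weighted_kappa(r1, r2, k):
    conf = np.zeros((k, k))
    for a, b in zip(r1, r2):
        conf[a, b] += 1
    conf /= conf.sum()
    w = np.array([[(i - j) ** 2 for j in range(k)] for i in range(k)])
    expected = np.outer(conf.sum(1), conf.sum(0))
    return 1 - (w * conf).sum() / (w * expected).sum()

r1 = [0, 1, 2, 2, 0, 1, 2, 1]
r2 = [0, 1, 2, 1, 0, 1, 2, 2]
print(cohens_kappa(r1, r2, [0, 1, 2]))
print(weighted_kappa(r1, r2, 3))
```

Note that both are chance-corrected reliability parameters; neither substitutes for an agreement parameter, which is one of the misuses the review counts.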

  11. Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)

    NASA Astrophysics Data System (ADS)

    Kauffmann, Paul J.

    1994-12-01

    The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.

  12. Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)

    NASA Technical Reports Server (NTRS)

    Kauffmann, Paul J.

    1994-01-01

    The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.
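The "Reliability Tools" topic listed above (block diagrams, fault trees, event trees, FMEA) reduces, in its simplest classroom form, to combining independent component reliabilities in series and parallel. A minimal sketch, with invented component values:

```python
# Reliability block diagram basics, assuming independent components.
# Component reliabilities below are invented for illustration.

def series(*rs):
    # Series blocks: the system works only if every block works
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    # Parallel (redundant) blocks: the system works if any block works
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Example: a pump in series with a redundant pair of valves
r_system = series(0.95, parallel(0.90, 0.90))
print(f"system reliability ≈ {r_system:.4f}")
```

Redundancy raises the valve pair's reliability above either valve alone (0.99 vs 0.90), so the series pump term dominates the system figure.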

  13. Reliability of McConnell's classification of patellar orientation in symptomatic and asymptomatic subjects.

    PubMed

    Watson, C J; Propps, M; Galt, W; Redding, A; Dobbs, D

    1999-07-01

Test-retest reliability study with blinded testers. To determine the intratester reliability of the McConnell classification system and to determine whether the intertester reliability of this system would be improved by one-on-one training of the testers, increasing the variability and numbers of subjects, blinding the testers to the absence or presence of patellofemoral pain syndrome, and adhering to the McConnell classification system as it is taught in the "McConnell Patellofemoral Treatment Plan" continuing education course. The McConnell classification system is currently used by physical therapy clinicians to quantify static patellar orientation. The measurements generated from this system purportedly guide the therapist in the application of patellofemoral tape and in assessment of the efficacy of treatment interventions on changing patellar orientation. Fifty-six subjects (age range, 21-65 years) provided a total of 101 knees for assessment. Seventy-six knees did not produce symptoms. A researcher who did not participate in the measuring process determined that 17 subjects had patellofemoral pain syndrome in 25 knees. Two testers concurrently measured static patellar orientation (anterior/posterior and medial/lateral tilt, medial/lateral glide, and patellar rotation) on subjects, using the McConnell classification system. Repeat measures were performed 3-7 days later. A kappa (κ) statistic was used to assess the degree of agreement within each tester and between testers. The κ coefficients for intratester reliability varied from -0.06 to 0.35. Intertester reliability ranged from -0.03 to 0.19. The McConnell classification system, in its current form, does not appear to be very reliable. Intratester reliability ranged from poor to fair, and intertester reliability was poor to slight. This system should not be used as a measurement tool or as a basis for treatment decisions.

  14. Pathogenicity of an H5N1 avian influenza virus isolated in Vietnam in 2012 and reliability of conjunctival samples for diagnosis of infection.

    PubMed

    Bui, Vuong N; Dao, Tung D; Nguyen, Tham T H; Nguyen, Lien T; Bui, Anh N; Trinh, Dai Q; Pham, Nga T; Inui, Kenjiro; Runstadler, Jonathan; Ogawa, Haruko; Nguyen, Khong V; Imai, Kunitoshi

    2014-01-22

The continued spread of highly pathogenic avian influenza virus (HPAIV) subtype H5N1 among poultry in Vietnam poses a potential threat to animals and public health. To evaluate the pathogenicity of a 2012 H5N1 HPAIV isolate and to assess the utility of conjunctival swabs for viral detection and isolation in surveillance, an experimental infection with HPAIV subtype H5N1 was carried out in domestic ducks. Ducks were infected with 10^7.2 TCID50 of A/duck/Vietnam/QB1207/2012 (H5N1), which was isolated from a moribund domestic duck. In the infected ducks, clinical signs of disease, including neurological disorder, were observed. Ducks started to die at 3 days post-infection (dpi), and the study mortality reached 67%. Viruses were recovered from oropharyngeal and conjunctival swabs until 7 dpi and from cloacal swabs until 4 dpi. In the ducks that died or were sacrificed on 3, 5, or 6 dpi, viruses were recovered from lung, brain, heart, pancreas and intestine, among which the highest virus titers were in the lung, brain or heart. Results of virus titration were confirmed by real-time RT-PCR. Genetic and phylogenetic analysis of the HA gene revealed that the isolate belongs to clade 2.3.2.1, similar to the H5N1 viruses isolated in Vietnam in 2012. The present study demonstrated that this recent HPAI H5N1 virus of clade 2.3.2.1 could replicate efficiently in the systemic organs, including the brain, and cause severe disease with neurological symptoms in domestic ducks. Therefore, this HPAI H5N1 virus seems to retain its neurotropic character and has further developed properties of shedding virus from the oropharynx and conjunctiva in addition to the cloaca, potentially posing a higher risk of virus spread through cross-contact and/or environmental transmission. Continued surveillance and diagnostic programs using conjunctival swabs in the field would further verify the apparent reliability of conjunctival samples for the detection of AIV.
Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Pathogenicity of an H5N1 avian influenza virus isolated in Vietnam in 2012 and reliability of conjunctival samples for diagnosis of infection

    PubMed Central

    Bui, Vuong N.; Dao, Tung D.; Nguyen, Tham T. H.; Nguyen, Lien T.; Bui, Anh N.; Trinh, Dai Q.; Pham, Nga T.; Inui, Kenjiro; Runstadler, Jonathan; Ogawa, Haruko; Nguyen, Khong V.; Imai, Kunitoshi

    2013-01-01

The continued spread of highly pathogenic avian influenza virus (HPAIV) subtype H5N1 among poultry in Vietnam poses a potential threat to animals and public health. To evaluate the pathogenicity of a 2012 H5N1 HPAIV isolate and to assess the utility of conjunctival swabs for viral detection and isolation in surveillance, an experimental infection with HPAIV subtype H5N1 was carried out in domestic ducks. Ducks were infected with 10^7.2 TCID50 of A/duck/Vietnam/QB1207/2012 (H5N1), which was isolated from a moribund domestic duck. In the infected ducks, clinical signs of disease, including neurological disorder, were observed. Ducks started to die at 3 days post-infection (dpi), and the study mortality reached 67%. Viruses were recovered from oropharyngeal and conjunctival swabs until 7 dpi and from cloacal swabs until 4 dpi. In the ducks that died or were sacrificed on 3, 5, or 6 dpi, viruses were recovered from lung, brain, heart, pancreas and intestine, among which the highest virus titers were in the lung, brain or heart. Results of virus titration were confirmed by real-time RT-PCR. Genetic and phylogenetic analysis of the HA gene revealed that the isolate belongs to clade 2.3.2.1, similar to the H5N1 viruses isolated in Vietnam in 2012. The present study demonstrated that this recent HPAI H5N1 virus of clade 2.3.2.1 could replicate efficiently in the systemic organs, including the brain, and cause severe disease with neurological symptoms in domestic ducks. Therefore, this HPAI H5N1 virus seems to retain its neurotropic character and has further developed properties of shedding virus from the oropharynx and conjunctiva in addition to the cloaca, potentially posing a higher risk of virus spread through cross-contact and/or environmental transmission. Continued surveillance and diagnostic programs using conjunctival swabs in the field would further verify the apparent reliability of conjunctival samples for the detection of AIV. PMID:24211664

  16. Artificial intelligence for turboprop engine maintenance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-01-01

Long-term maintenance operations, which take the unit out of action, may seem economical, but they result in reduced operating readiness. Offsetting that concern, careless, hurried maintenance reduces margins of safety and reliability. Any tool that improves maintenance without causing a sharp increase in cost is valuable. Artificial intelligence (AI) is one such tool. Expert systems and neural networks are two different areas of AI that show promise for turboprop engine maintenance.

  17. Advancements for continuous miners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiscor, S.

    2007-06-15

Design changes and new technology make the modern continuous miner more user-friendly. Two of the major manufacturers, Joy Mining Machinery and DBT, both based near Pittsburgh, PA, USA, have recently acquired other OEMs to offer a greater product line. Joy's biggest development in terms of improving cutting time is the FACEBOSS Control System, which has an operator-assistance element and Joy Surface Reporting Software (JSRP). Joy's WetHead continuous miners have excellent performance. DBT is researching ways to make the machines more reliable with new drive systems. It has also been experimenting with water sprays to improve dust suppression. 4 photos.

  18. The Drought Task Force and Research on Understanding, Predicting, and Monitoring Drought

    NASA Astrophysics Data System (ADS)

    Barrie, D.; Mariotti, A.; Archambault, H. M.; Hoerling, M. P.; Wood, E. F.; Koster, R. D.; Svoboda, M.

    2016-12-01

    Drought has caused serious social and economic impacts throughout the history of the United States. All Americans are susceptible to the direct and indirect threats drought poses to the Nation. Drought challenges agricultural productivity and reduces the quantity and quality of drinking water supplies upon which communities and industries depend. Drought jeopardizes the integrity of critical infrastructure, causes extensive economic and health impacts, harms ecosystems, and increases energy costs. Ensuring the availability of clean, sufficient, and reliable water resources is a top national and NOAA priority. The Climate Program Office's Modeling, Analysis, Predictions, and Projections (MAPP) program, in partnership with the NOAA-led National Integrated Drought Information System (NIDIS), is focused on improving our understanding of drought causes, evolution, amelioration, and impacts as well as improving our capability to monitor and predict drought. These capabilities and knowledge are critical to providing communities with actionable, reliable information to increase drought preparedness and resilience. This poster will present information on the MAPP-organized Drought Task Force, a consortium of investigators funded by the MAPP program in partnership with NIDIS to advance drought understanding, monitoring, and prediction. Information on Task Force activities, products, and MAPP drought initiatives will be described in the poster, including the Task Force's ongoing focus on the California drought, its predictability, and its causes.

  19. Reliability and validity analysis of the transfer assessment instrument.

    PubMed

    McClure, Laura A; Boninger, Michael L; Ozawa, Haishin; Koontz, Alicia

    2011-03-01

To describe the development and evaluate the reliability and validity of a newly created outcome measure, the Transfer Assessment Instrument (TAI), to assess the quality of transfers performed by full-time wheelchair users. Repeated measures. 2009 National Veterans Wheelchair Games in Spokane, WA. A convenience sample of full-time wheelchair users (N=40) who perform sitting pivot or standing pivot transfers. Not applicable. Intraclass correlation coefficients (ICCs) for reliability and Spearman correlation coefficients for concurrent validity between the TAI and a global assessment scale (0-100 visual analog scale [VAS]). No adverse events occurred during testing. Intrarater ICCs for 3 raters ranged between .35 and .89, and the interrater ICC was .642. Correlations between the TAI and a global assessment VAS ranged between .19 (P=.285) and .69 (P<.001). Item analyses of the tool found a wide range of results, from weak to good reliability. Evaluators found the TAI to be safe and able to be completed in a short time. The TAI is a safe, quick outcome measure that uses equipment typically found in a clinical setting and does not ask participants to perform new skills. Reliability and validity testing found the TAI to have acceptable interrater reliability and a wide range of intrarater reliability. Future work indicates the need for continued refinement, including removal or modification of items found to have low reliability, improved education for clinicians, and further reliability and validity analysis with a more diverse subject population. The TAI has the potential to fill a void in assessment of transfers. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
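Intraclass correlation coefficients such as those reported above come from a two-way ANOVA decomposition. Below is a hedged sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), one common choice; the abstract does not state which ICC model the TAI study used, and the rating matrix is invented:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X is an (n subjects) x (k raters) score matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    col_means = X.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    MSR = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects (rows)
    MSC = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters (columns)
    SSE = np.sum((X - row_means[:, None] - col_means[None, :] + grand) ** 2)
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Three raters scoring five transfers (made-up numbers, not TAI data)
scores = [[9, 8, 9],
          [6, 5, 6],
          [8, 8, 9],
          [4, 4, 5],
          [7, 6, 7]]
print(round(icc_2_1(scores), 3))
```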

  20. The impact of national accreditation reform on survey reliability: a 2-year investigation of survey coordinators' perspectives.

    PubMed

    Greenfield, David; Hogden, Anne; Hinchcliff, Reece; Mumford, Virginia; Pawsey, Marjorie; Debono, Deborah; Westbrook, Johanna I; Braithwaite, Jeffrey

    2016-10-01

Accrediting health care organizations against standards is a recognized safety and quality intervention. The credibility of an accreditation programme relies on survey reliability. We investigated accreditation survey coordinators' perceptions of reliability issues and their continued relevancy during a period of national accreditation reform. In 2013 and 2014, questionnaire surveys were developed using survey coordinators' feedback on their experiences and concerns regarding the accreditation process. Each year, a purpose-designed questionnaire survey was administered during the accrediting agency's survey coordinator training days. Participants reported that survey reliability was informed by five categories of issues: the management of the accreditation process, including standards and health care organizational issues; surveyor workforce management; the survey coordinator role; the survey team; and individual surveyors. A new accreditation system and programme did not alter the factors reported to shape survey reliability. However, across the reform period, there was a noted change within each category of the specific issues that were of concern. Furthermore, the consensus between coordinators that existed in 2013 appears to have diminished in 2014. Across all categories, in 2014 there was greater diversity of opinion than in 2013. The known challenges to the reliability of an accreditation programme retained their potency and relevancy during a period of reform. The diversity of opinion identified across the coordinator workforce could potentially place the credibility and reliability of the new scheme at risk. The study highlights that reliability of an accreditation scheme is an ongoing achievement, not a one-off attainment. © 2016 John Wiley & Sons, Ltd.

  1. Developing a tool to measure satisfaction among health professionals in sub-Saharan Africa

    PubMed Central

    2013-01-01

    Background In sub-Saharan Africa, lack of motivation and job dissatisfaction have been cited as causes of poor healthcare quality and outcomes. Measurement of health workers’ satisfaction adapted to sub-Saharan African working conditions and cultures is a challenge. The objective of this study was to develop a valid and reliable instrument to measure satisfaction among health professionals in the sub-Saharan African context. Methods A survey was conducted in Senegal and Mali in 2011 among 962 care providers (doctors, midwives, nurses and technicians) practicing in 46 hospitals (capital, regional and district). The participation rate was very high: 97% (937/962). After exploratory factor analysis (EFA), construct validity was assessed through confirmatory factor analysis (CFA). The discriminant validity of our subscales was evaluated by comparing the average variance extracted (AVE) for each of the constructs with the squared interconstruct correlation (SIC), and finally for criterion validity, each subscale was tested with two hypotheses. Two dimensions of reliability were assessed: internal consistency with Cronbach’s alpha subscales and stability over time using a test-retest process. Results Eight dimensions of satisfaction encompassing 24 items were identified and validated using a process that combined psychometric analyses and expert opinions: continuing education, salary and benefits, management style, tasks, work environment, workload, moral satisfaction and job stability. All eight dimensions demonstrated significant discriminant validity. The final model showed good performance, with a root mean square error of approximation (RMSEA) of 0.0508 (90% CI: 0.0448 to 0.0569) and a comparative fit index (CFI) of 0.9415. The concurrent criterion validity of the eight dimensions was good. Reliability was assessed based on internal consistency, which was good for all dimensions but one (moral satisfaction < 0.70). 
Test-retest showed satisfactory temporal stability (intraclass coefficient range: 0.60 to 0.91). Conclusions Job satisfaction is a complex construct; this study provides a multidimensional instrument whose content, construct and criterion validities were verified to ensure its suitability for the sub-Saharan African context. When using these subscales in further studies, the variability of the reliability of the subscales should be taken into account when calculating sample sizes. The instrument will be useful in evaluative studies, which will help guide interventions aimed at improving both the quality of care and its effectiveness. PMID:23826720
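The internal-consistency figures reported above are Cronbach's alpha per subscale. A minimal sketch of the computation, on an invented 5-respondent, 3-item matrix (not the study's data):

```python
import numpy as np

# Cronbach's alpha: internal consistency of a multi-item subscale.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
# The score matrix is made up purely to exercise the formula.

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)   # rows = respondents, cols = items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = [[4, 5, 4],
          [3, 3, 4],
          [5, 5, 5],
          [2, 3, 2],
          [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.70 are conventionally read as acceptable, which is the threshold the abstract applies to the moral-satisfaction dimension.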

  2. Note: An online testing method for lifetime projection of high power light-emitting diode under accelerated reliability test.

    PubMed

    Chen, Qi; Chen, Quan; Luo, Xiaobing

    2014-09-01

    In recent years, owing to the fast development of high power light-emitting diodes (LEDs), lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been applied commonly in the LED community. In this paper, an online testing method for LED lifetime projection under accelerated reliability testing was proposed and a prototype was built. The optical parametric data were collected. The systematic error and the measuring uncertainty were calculated to be within 0.2% and within 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered. Thus, the projection fitting accuracy can be improved (r² = 0.954) and the testing duration can be shortened.
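    The abstract gives a fit quality (r² = 0.954) but not the underlying model. A common choice for lumen-maintenance projection is an exponential decay fitted by log-linear least squares (TM-21-style); the sketch below is an assumption-laden illustration using invented readings and a hypothetical 70%-output failure criterion, not the paper's actual method or data:

```python
# Sketch of lifetime projection from continuously logged optical output,
# assuming an exponential decay model phi(t) = B * exp(-a * t).
import math

def fit_exponential(times, outputs):
    """Least-squares fit of phi(t) = B * exp(-a*t) via log-linear regression."""
    logs = [math.log(y) for y in outputs]
    n = len(times)
    tm = sum(times) / n
    lm = sum(logs) / n
    slope = sum((t - tm) * (l - lm) for t, l in zip(times, logs)) / \
            sum((t - tm) ** 2 for t in times)
    a = -slope
    B = math.exp(lm + a * tm)  # intercept back-transformed
    return B, a

def l70_lifetime(B, a):
    """Projected time at which relative output decays to 70%."""
    return math.log(B / 0.7) / a

# Invented accelerated-test readings: hours vs. normalized luminous flux
t = [0, 500, 1000, 1500, 2000, 2500]
y = [1.00, 0.97, 0.94, 0.91, 0.89, 0.86]
B, a = fit_exponential(t, y)
print(round(l70_lifetime(B, a)))  # projected hours to 70% output
```

    Continuous online acquisition matters here because the fitted slope, and hence the projected lifetime, stabilizes only once enough of the decay curve has been sampled.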

  3. Factor Structure, Reliability and Criterion Validity of the Autism-Spectrum Quotient (AQ): A Study in Dutch Population and Patient Groups

    PubMed Central

    Bartels, Meike; Cath, Danielle C.; Boomsma, Dorret I.

    2008-01-01

    The factor structure of the Dutch translation of the Autism-Spectrum Quotient (AQ; a continuous, quantitative measure of autistic traits) was evaluated with confirmatory factor analyses in a large general population and student sample. The criterion validity of the AQ was examined in three matched patient groups (autism spectrum conditions (ASC), social anxiety disorder, and obsessive–compulsive disorder). A two-factor model, consisting of a “Social interaction” factor and an “Attention to detail” factor, could be identified. The internal consistency and test–retest reliability of the AQ were satisfactory. High total AQ and factor scores were specific to ASC patients. Men scored higher than women, and science students higher than non-science students. The Dutch translation of the AQ is a reliable instrument for assessing autism spectrum conditions. PMID:18302013

  4. A critical review of period analyses and implications for mass exchange in W UMa eclipsing binaries: Paper 3

    NASA Astrophysics Data System (ADS)

    Nelson, R. H.; Terrell, D.; Milone, E. F.

    2016-02-01

    This is the third of a series of four papers, the goal of which is to identify the overcontact eclipsing binary star systems for which a solid case can be made for mass exchange. To reach this goal, it is necessary first to identify those systems for which there is a strong case for period change. We have identified 60 candidate systems; in the first two papers (Nelson et al. 2014, 2016) we discussed 40 individual cases; this paper continues with the last 20. For each system, we present a detailed discussion and evaluation concerning the observational and interpretive material presented in the literature. At least one eclipse timing (ET) diagram, commonly referred to as an "O-C diagram", that includes the latest available data, accompanies each discussion. In paper 4, we will discuss the mechanisms that can cause period change and which of the 60 systems can be reliably concluded to exhibit mass exchange; we will also provide a list of marginal and rejected cases - suitable for future work.
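    The eclipse timing ("O-C") diagrams these papers rely on can be sketched numerically: residuals between observed eclipse minima and a linear ephemeris grow quadratically when the period changes linearly with epoch. Everything below (reference epoch, period, epochs and drift rate) is invented for illustration and is not taken from the cited systems:

```python
# Minimal O-C construction: predicted minima from a linear ephemeris
# T(E) = T0 + P * E; a quadratic trend in O-C signals period change.
T0, P = 2450000.0, 0.35  # hypothetical reference epoch (HJD) and period (days)

def o_minus_c(observed_times, epochs):
    """Observed minus computed eclipse times for each cycle count E."""
    return [t - (T0 + P * e) for t, e in zip(observed_times, epochs)]

epochs = [0, 2000, 4000, 6000, 8000]
# Synthetic observed minima with a slow quadratic drift (period increase)
observed = [T0 + P * e + 1e-9 * e ** 2 for e in epochs]
residuals = o_minus_c(observed, epochs)
# For O-C = (dP/dE / 2) * E^2, the quadratic coefficient gives dP/dE:
q = residuals[-1] / epochs[-1] ** 2  # quadratic coefficient (exact here)
dP_dE = 2 * q                        # period change per orbital cycle (days)
print(dP_dE)
```

    In real data the quadratic coefficient would come from a least-squares fit to noisy residuals, and a secular dP/dE of the right sign is one line of evidence (not proof) for mass exchange between the components.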

  5. Power transmission studies for tethered SP-100

    NASA Technical Reports Server (NTRS)

    Bents, David J.

    1988-01-01

    The tether and/or transmission line connecting the SP-100 to the space station presents some unorthodox challenges in high voltage engineering, power transmission, and distribution. The line, which doubles as a structural element of this unusual spacecraft, will convey HVDC from SP-100 to the platform in low Earth orbit, an environment where the local plasma is sufficient to cause breakdown of exposed conductors at potentials of only a few hundred volts. Its anticipated several years of operation, and continuously accumulating exposure to meteoroids and debris, raise an increasing likelihood that mechanical damage, including perforation, will be sustained in service. The present concept employs an array of gas-insulated solid-wall aluminum coaxial tubes, a conceptual design which showed the basic feasibility of the SP-100-powered space station. Practical considerations of launch, deployment and assembly have led to investigation of reel-deployable, dielectric-insulated coaxial cables. To be competitive, the dielectric would have to operate reliably in a radiation environment under electrical stresses exceeding 50 kV/cm. The SP-100 transmission line high voltage interfaces are also considered.

  6. Power transmission studies for tethered SP-100

    NASA Technical Reports Server (NTRS)

    Bents, David J.

    1988-01-01

    The tether and/or transmission line connecting the SP-100 to Space Station presents some unorthodox challenges in high voltage engineering, power transmission, and distribution. The line, which doubles as a structural element of this unusual spacecraft, will convey HVDC from SP-100 to the platform in low Earth orbit, an environment where the local plasma is sufficient to cause breakdown of exposed conductors at potentials of only a few hundred volts. Its anticipated several years of operation, and continuously accumulating exposure to meteoroids and debris, raise an increasing likelihood that mechanical damage, including perforation, will be sustained in service. The present concept employs an array of gas-insulated solid-wall aluminum coaxial tubes, a conceptual design which showed the basic feasibility of the SP-100-powered Space Station. Practical considerations of launch, deployment and assembly have led to investigation of reel-deployable, dielectric-insulated coaxial cables. To be competitive, the dielectric would have to operate reliably in a radiation environment under electrical stresses exceeding 50 kV/cm. The SP-100 transmission line high voltage interfaces are also considered.

  7. A diagnosis system using object-oriented fault tree models

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    Spaceborne computing systems must provide reliable, continuous operation for extended periods. Due to weight, power, and volume constraints, these systems must manage resources very effectively. A fault diagnosis algorithm is described which enables fast and flexible diagnoses in the dynamic distributed computing environments planned for future space missions. The algorithm uses a knowledge base that is easily changed and updated to reflect current system status. Augmented fault trees represented in an object-oriented form provide deep system knowledge that is easy to access and revise as a system changes. Given such a fault tree, a set of failure events that have occurred, and a set of failure events that have not occurred, this diagnosis system uses forward and backward chaining to propagate causal and temporal information about other failure events in the system being diagnosed. Once the system has established temporal and causal constraints, it reasons backward from heuristically selected failure events to find a set of basic failure events which are a likely cause of the occurrence of the top failure event in the fault tree. The diagnosis system has been implemented in Common Lisp using Flavors.

  8. A tool for assessing the quality of nursing handovers: a validation study.

    PubMed

    Ferrara, Paolo; Terzoni, Stefano; Davì, Salvatore; Bisesti, Alberto; Destrebecq, Anne

    2017-08-10

    Handover, in particular between two shifts, is a crucial aspect of nursing for patient safety, aimed at ensuring continuity of care. During this process, several factors can affect quality of care and cause errors. This study aimed to assess the quality of handovers by validating the Handoff CEX-Italian scale. The scale was translated from English into Italian; the content validity index was calculated and internal consistency was assessed. The scale was used in several units of the San Paolo Teaching Hospital in Milan, Italy. A total of 48 reports were assessed (192 evaluations). The median score was 6, interquartile range (IQR) [5;7], and was not influenced by specific (p=0.21) or overall working experience (p=0.13). The domains showing the lowest median values (median=6, IQR [4;8]) were context, communication, and organisation. Night-to-morning handovers obtained the lowest scores. CVI-S was 0.96; Cronbach's alpha was 0.79. The Handoff CEX-Italian scale is valid and reliable, and it can be used to assess the quality of nursing handovers.

  9. A DNA Barcode Library for Korean Chironomidae (Insecta: Diptera) and Indexes for Defining Barcode Gap

    PubMed Central

    Kim, Sungmin; Song, Kyo-Hong; Ree, Han-Il; Kim, Won

    2012-01-01

    Non-biting midges (Diptera: Chironomidae) are a diverse population that commonly causes respiratory allergies in humans. Chironomid larvae can be used to indicate freshwater pollution, but accurate identification on the basis of morphological characteristics is difficult. In this study, we constructed a mitochondrial cytochrome c oxidase subunit I (COI)-based DNA barcode library for Korean chironomids. This library consists of 211 specimens from 49 species, including adults and unidentified larvae. The interspecies and intraspecies COI sequence variations were analyzed. Sophisticated indexes were developed in order to properly evaluate indistinct barcode gaps that are created by insufficient sampling on both the interspecies and intraspecies levels and by variable mutation rates across taxa. In a variety of insect datasets, these indexes were useful for re-evaluating large barcode datasets and for defining COI barcode gaps. The COI-based DNA barcode library will provide a rapid and reliable tool for the molecular identification of Korean chironomid species. Furthermore, this reverse-taxonomic approach will be improved by the continuous addition of other species’ sequences to the library. PMID:22138764

  10. Thirty years of fluoridation: a review.

    PubMed

    Richmond, V L

    1985-01-01

    Fluoride contributes to stability of both teeth and bones and to reduction of caries, especially if ingested before eruption of teeth. Reduction of caries continues at about 60% in persons drinking fluoridated water only as long as fluoride washes over teeth. One-half the population of the US does not have access to water with an optimal fluoride concentration of about 1 mg/L. Misinformation about fluoridation contributes to reluctance of communities to supplement the natural but inadequate fluoride of those water supplies. Fluoridation of water has no positive or negative effect on incidence or mortality rates due to cancer, heart disease, intracranial lesions, nephritis, cirrhosis, mongoloid births, or from all causes together. The collective decision to increase the natural fluoride content of water supplies is not an infringement of civil rights, nor does it establish a precedent in the binding sense of the law. Supplemental fluoride in water makes it available to all members of the community in a safe, practical, economical and reliable manner. Fluoridation saves money in dental costs and time lost from work. Fluoridation is an appropriate action of government in promoting the health and welfare of society.

  11. Challenges/issues of NIS used in particle accelerator facilities

    NASA Astrophysics Data System (ADS)

    Faircloth, Dan

    2013-09-01

    High current, high duty cycle negative ion sources are an essential component of many high power particle accelerators. This talk gives an overview of the state-of-the-art sources used around the world. Volume, surface and charge exchange negative ion production processes are detailed. Cesiated magnetron and Penning surface plasma sources are discussed along with surface converter sources. Multicusp volume sources with filament and LaB6 cathodes are described before moving on to RF inductively coupled volume sources with internal and external antennas. The major challenges facing accelerator facilities are detailed. Beam current, source lifetime and reliability are the most pressing. The pros and cons of each source technology are discussed along with their development programs. The uncertainties and unknowns common to these sources are discussed. The dynamics of cesium surface coverage and the causes of source variability are still unknown. Minimizing beam emittance is essential to maximizing the transport of high current beams; space charge effects are very important. The basic physics of negative ion production is still not well understood; theoretical and experimental programs continue to improve this, but there are still many mysteries to be solved.

  12. The effect of warp tension on the colour of jacquard fabric made with different weaves structures

    NASA Astrophysics Data System (ADS)

    Karnoub, A.; Kadi, N.; Holmudd, O.; Peterson, J.; Skrifvars, M.

    2017-10-01

    The aim of this paper is to demonstrate the effect of warp tension on fabric colour for several types of weave structure and to find a relationship between them. An image analysis technique was used to determine the proportion of yarn colour appearance; the advantages of this technique are its speed and reliability. The woven fabric samples consist of a polyester warp yarn with continuous filaments and a density of 33 ends/cm, a polypropylene weft yarn with a density of 24 picks/cm, and warp tensions ranging between 12 and 22 cN/tex. The experimental results demonstrate the effect of warp tension on fabric colour; this effect is related to several factors, and a larger proportion of warp appearance leads to a larger effect on fabric colour. The colour difference ΔEcmc is largest in the 16 to 20 cN/tex warp-tension range. Using statistical methods, a mathematical model was proposed to calculate the colour difference ΔEcmc caused by a change in warp tension.

  13. Extended applications of track irregularity probabilistic model and vehicle-slab track coupled model on dynamics of railway systems

    NASA Astrophysics Data System (ADS)

    Xu, Lei; Zhai, Wanming; Gao, Jianmin

    2017-11-01

    Track irregularities inevitably evolve stochastically due to the uncertainty and continuity of wheel-rail interactions. To depict thoroughly the dynamic behaviour of the vehicle-track coupled system caused by random track irregularities, it is necessary to develop a track irregularity probabilistic model that simulates rail surface irregularities with ergodic properties in amplitude, wavelength and probability, and to build a three-dimensional vehicle-track coupled model that properly considers the nonlinear wheel-rail contact mechanisms. In the present study, the vehicle-track coupled model is first programmed by combining the finite element method with a wheel-rail coupling model. Then, in light of the capability of power spectral density (PSD) to characterise the amplitudes and wavelengths of stationary random signals, a track irregularity probabilistic model is presented to reveal and simulate the full characteristics of the track irregularity PSD. Finally, extended applications in three areas, namely extreme analysis, reliability analysis and response relationships between dynamic indices, are presented for the evaluation and application of the proposed models.
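    The general idea of synthesizing a random irregularity profile from a PSD can be illustrated with the classical spectral-representation method: superpose cosines whose amplitudes follow the PSD and whose phases are random. The PSD form, frequency band and positions below are placeholders, not the probabilistic model proposed in the paper:

```python
# Spectral-representation sketch: profile(x) is a sum of cosines with
# amplitudes sqrt(2 * S(f_k) * df) and uniformly random phases.
import math, random

def synthesize_profile(psd, freqs, xs, seed=0):
    """Generate one sample profile at positions xs from one-sided PSD S(f)."""
    rng = random.Random(seed)
    df = freqs[1] - freqs[0]
    phases = [rng.uniform(0, 2 * math.pi) for _ in freqs]
    return [sum(math.sqrt(2 * psd(f) * df) * math.cos(2 * math.pi * f * x + p)
                for f, p in zip(freqs, phases))
            for x in xs]

psd = lambda f: 1e-6 / (1e-3 + f ** 2)       # placeholder PSD shape
freqs = [0.01 * (k + 1) for k in range(200)]  # spatial frequencies (1/m)
xs = [0.5 * i for i in range(100)]            # track positions (m)
z = synthesize_profile(psd, freqs, xs)
print(len(z))  # one synthetic irregularity value per position
```

    Drawing many such samples (different seeds) is what makes Monte Carlo extreme-value and reliability analyses of the coupled vehicle-track response possible.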

  14. Are Escherichia coli Pathotypes Still Relevant in the Era of Whole-Genome Sequencing?

    PubMed Central

    Robins-Browne, Roy M.; Holt, Kathryn E.; Ingle, Danielle J.; Hocking, Dianna M.; Yang, Ji; Tauschek, Marija

    2016-01-01

    The empirical and pragmatic nature of diagnostic microbiology has given rise to several different schemes to subtype E. coli, including biotyping, serotyping, and pathotyping. These schemes have proved invaluable in identifying and tracking outbreaks, and for prognostication in individual cases of infection, but they are imprecise and potentially misleading due to the malleability and continuous evolution of E. coli. Whole genome sequencing can be used to accurately determine E. coli subtypes that are based on allelic variation or differences in gene content, such as serotyping and pathotyping. Whole genome sequencing also provides information about single nucleotide polymorphisms in the core genome of E. coli, which form the basis of sequence typing, and is more reliable than other systems for tracking the evolution and spread of individual strains. A typing scheme for E. coli based on genome sequences that includes elements of both the core and accessory genomes should reduce typing anomalies and promote understanding of how different varieties of E. coli spread and cause disease. Such a scheme could also define pathotypes more precisely than current methods. PMID:27917373

  15. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties for existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem allows improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes and provide information for more detailed rupture history analysis.

  16. Potential of SENTINEL-1A for Nation-Wide Routine Updates of Active Landslide Maps

    NASA Astrophysics Data System (ADS)

    Lazecky, M.; Canaslan Comut, F.; Nikolaeva, E.; Bakon, M.; Papco, J.; Ruiz-Armenteros, A. M.; Qin, Y.; de Sousa, J. J. M.; Ondrejka, P.

    2016-06-01

    Slope deformation is a typical geohazard that causes extensive economic damage in mountainous regions. Unstable slopes are therefore usually monitored intensively, commonly by national geological or emergency services. The resulting landslide susceptibility maps, or landslide inventories, offer an overview of areas affected by previously activated landslides as well as slopes currently known to be unstable. Such slope instabilities can easily transform into landslides under various triggering factors, such as intensive rainfall or a melting snow cover. In these inventories, the majority of the existing landslide-affected slopes are marked as either stable or active, following continuous investigative work by geological experts. In this paper we demonstrate the applicability of Sentinel-1A satellite SAR interferometry (InSAR) to identifying slope movement activity and to using this information to update national landslide inventories. This can be done reliably for semi-arid regions or sparsely vegetated slopes. We perform several analyses based on multitemporal InSAR techniques applied to Sentinel-1A data over selected areas prone to landslides.

  17. Are Escherichia coli Pathotypes Still Relevant in the Era of Whole-Genome Sequencing?

    PubMed

    Robins-Browne, Roy M; Holt, Kathryn E; Ingle, Danielle J; Hocking, Dianna M; Yang, Ji; Tauschek, Marija

    2016-01-01

    The empirical and pragmatic nature of diagnostic microbiology has given rise to several different schemes to subtype E. coli, including biotyping, serotyping, and pathotyping. These schemes have proved invaluable in identifying and tracking outbreaks, and for prognostication in individual cases of infection, but they are imprecise and potentially misleading due to the malleability and continuous evolution of E. coli. Whole genome sequencing can be used to accurately determine E. coli subtypes that are based on allelic variation or differences in gene content, such as serotyping and pathotyping. Whole genome sequencing also provides information about single nucleotide polymorphisms in the core genome of E. coli, which form the basis of sequence typing, and is more reliable than other systems for tracking the evolution and spread of individual strains. A typing scheme for E. coli based on genome sequences that includes elements of both the core and accessory genomes should reduce typing anomalies and promote understanding of how different varieties of E. coli spread and cause disease. Such a scheme could also define pathotypes more precisely than current methods.

  18. Three-dimensional-printed gas dynamic virtual nozzles for x-ray laser sample delivery

    PubMed Central

    Nelson, Garrett; Kirian, Richard A.; Weierstall, Uwe; Zatsepin, Nadia A.; Faragó, Tomáš; Baumbach, Tilo; Wilde, Fabian; Niesler, Fabian B. P.; Zimmer, Benjamin; Ishigami, Izumi; Hikita, Masahide; Bajt, Saša; Yeh, Syun-Ru; Rousseau, Denis L.; Chapman, Henry N.; Spence, John C. H.; Heymann, Michael

    2016-01-01

    Reliable sample delivery is essential to biological imaging using X-ray Free Electron Lasers (XFELs). Continuous injection using the Gas Dynamic Virtual Nozzle (GDVN) has proven valuable, particularly for time-resolved studies. However, many important aspects of GDVN functionality have yet to be thoroughly understood and/or refined due to fabrication limitations. We report the application of two-photon polymerization as a form of high-resolution 3D printing to fabricate high-fidelity GDVNs with submicron resolution. This technique allows rapid prototyping of a wide range of nozzle types from standard CAD drawings and optimization of crucial dimensions for optimal performance. Three nozzles were tested with pure water to determine general nozzle performance and reproducibility; the result was nearly reproducible off-axis jetting. X-ray tomography and index matching were successfully used to evaluate the interior nozzle structures and identify the cause of the off-axis jetting. Subsequent refinements to the fabrication resulted in straight jetting. A performance test of printed nozzles at an XFEL provided high-quality femtosecond diffraction patterns. PMID:27410079

  19. League tables and school effectiveness: a mathematical model.

    PubMed Central

    Hoyle, Rebecca B; Robinson, James C

    2003-01-01

    'School performance tables', an alphabetical list of secondary schools along with aggregates of their pupils' performances in national tests, have been published in the UK since 1992. Inevitably, the media have responded by publishing ranked 'league tables'. Despite concern over the potentially divisive effect of such tables, the current government has continued to publish this information in the same form. The effect of this information on standards and on the social make-up of the community has been keenly debated. Since there is no control group available that would allow us to investigate this issue directly, we present here a simple mathematical model. Our results indicate that, while random fluctuations from year to year can cause large distortions in the league-table positions, some schools still establish themselves as 'desirable'. To our surprise, we found that 'value-added' tables were no more accurate than tables based on raw exam scores, while a different method of drawing up the tables, in which exam results are averaged over a period of time, appears to give a much more reliable measure of school performance. PMID:12590748
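    The paper's central point about ranking noise can be reproduced in spirit with a toy Monte Carlo simulation (every parameter below is invented, not drawn from the paper's model): schools with identical "true" quality still spread out in a single-year table, while averaging over several years damps the random fluctuations.

```python
# Toy league-table simulation: identical schools, random yearly noise.
import random

rng = random.Random(42)
n_schools, n_years = 20, 5
true_quality = [50.0] * n_schools  # every school genuinely identical
yearly = [[q + rng.gauss(0, 5) for q in true_quality] for _ in range(n_years)]

def ranking(scores):
    """Indices of schools ordered from highest to lowest score."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

single_year = ranking(yearly[0])  # a pure-noise league table
means = [sum(y[i] for y in yearly) / n_years for i in range(n_schools)]
averaged = ranking(means)         # table from multi-year averages

# Averaging over n_years shrinks the noise standard deviation by about
# sqrt(n_years), so the apparent spread between schools narrows.
spread_single = max(yearly[0]) - min(yearly[0])
spread_avg = max(means) - min(means)
print(spread_avg < spread_single)
```

    This mirrors the paper's finding that tables averaging exam results over a period of time give a more reliable measure than single-year snapshots.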

  20. A Fresh Start for Flood Estimation in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Woods, R. A.

    2017-12-01

    The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values, when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers are active in developing or applying the derived distribution approach, and as a result slower progress is being made. 
A change is needed: surely we have learned enough about hydrology in the last 40 years to make a practical hydrological advance on our methods for flood estimation! A shift to new methods for flood estimation will not be taken lightly by practitioners. However, the standard for change is clear: can we develop new methods which give significant improvements in reliability over those existing methods which are demonstrably unsatisfactory?
