Sample records for higher system reliability

  1. Scaled CMOS Reliability and Considerations for Spacecraft Systems: Bottom-Up and Top-Down Perspective

    NASA Technical Reports Server (NTRS)

    White, Mark

    2012-01-01

    New space missions will increasingly rely on more advanced technologies because of system requirements for higher performance, particularly in instruments and high-speed processing. Component-level reliability challenges with scaled CMOS in spacecraft systems are presented from a bottom-up perspective. Fundamental front-end and back-end processing reliability issues with more aggressively scaled parts are discussed. Effective thermal management from the system level to the component level (top-down) is a key element in the overall design of reliable systems. Thermal management in space systems must consider a wide range of issues, including thermal loading of many different components and frequent temperature cycling of some systems. Both perspectives (top-down and bottom-up) play a large role in robust, reliable spacecraft system design.

  2. Design of a modular digital computer system, DRL 4. [for meeting future requirements of spaceborne computers]

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.

  3. Design and Analysis of a Flexible, Reliable Deep Space Life Support System

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    This report describes a flexible, reliable, deep space life support system design approach that uses either storage or recycling or both together. The design goal is to provide the needed life support performance with the required ultra reliability for the minimum Equivalent System Mass (ESM). Recycling life support systems used with multiple redundancy can have sufficient reliability for deep space missions but they usually do not save mass compared to mixed storage and recycling systems. The best deep space life support system design uses water recycling with sufficient water storage to prevent loss of crew if recycling fails. Since the amount of water needed for crew survival is a small part of the total water requirement, the required amount of stored water is significantly less than the total to be consumed. Water recycling with water, oxygen, and carbon dioxide removal material storage can achieve the high reliability of full storage systems with only half the mass of full storage and with less mass than the highly redundant recycling systems needed to achieve acceptable reliability. Improved recycling systems with lower mass and higher reliability could perform better than systems using storage.

  4. Environmental Control and Life Support System Reliability for Long-Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason R.

    2014-01-01

    NASA has highlighted reliability as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, no consensus has been reached on what is meant by improving on reliability, or on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project hosted a series of events at Johnson Space Center with the intended goal of establishing a common language and understanding of NASA's reliability goals, and equipping the projects with acceptable means of assessing the respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools, and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop that included members of the Environmental Control and Life Support System and AES communities. The goal of this workshop was to develop a consensus on what reliability means to AES and identify methods for assessing low- to mid-technology readiness level technologies for reliability. This paper details the results of that workshop.

  5. Reviewing Reliability and Validity of Information for University Educational Evaluation

    NASA Astrophysics Data System (ADS)

    Otsuka, Yusaku

    To better utilize evaluations in higher education, it is necessary to share the methods of reviewing the reliability and validity of examination scores and grades, and to accumulate and share data for confirming results. Before the GPA system is first introduced into a university or college, the reliability of examination scores and grades, especially for essay examinations, must be assured. Validity is a complicated concept, so it should be assured in various ways, including professional audits, theoretical models, and statistical data analysis. Because individual students and teachers are continually improving, using evaluations to appraise their progress is not always compatible with using evaluations to appraise the implementation of accountability in various departments or the university overall. To better utilize evaluations and improve higher education, evaluations should be integrated into the current system by sharing the vision of an academic learning community and promoting interaction between students and teachers based on sufficiently reliable and validated evaluation tools.

  6. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
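
    The dependence of the spare count on the component failure rate can be sketched with a constant-rate (Poisson) failure model; the rates, mission length, and confidence target below are illustrative assumptions, not values from the paper.

      from math import exp, factorial

      def prob_enough_spares(failure_rate_per_hr, mission_hours, spares):
          """P(failures <= spares) under a constant-rate (Poisson) failure model."""
          mean_failures = failure_rate_per_hr * mission_hours
          return sum(mean_failures ** k * exp(-mean_failures) / factorial(k)
                     for k in range(spares + 1))

      def spares_for_target(failure_rate_per_hr, mission_hours, target_reliability):
          """Smallest spare count whose survival probability meets the target."""
          spares = 0
          while prob_enough_spares(failure_rate_per_hr, mission_hours, spares) < target_reliability:
              spares += 1
          return spares

      hours = 3 * 365 * 24   # roughly a three-year Mars mission (illustrative)
      for rate in (1e-4, 2e-4):   # doubling the rate shows how an underestimate leaves too few spares
          n = spares_for_target(rate, hours, 0.999)
          print(f"failure rate {rate:.0e}/hr -> {n} spares for 0.999 confidence")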

  7. High power diode lasers emitting from 639 nm to 690 nm

    NASA Astrophysics Data System (ADS)

    Bao, L.; Grimshaw, M.; DeVito, M.; Kanskar, M.; Dong, W.; Guan, X.; Zhang, S.; Patterson, J.; Dickerson, P.; Kennedy, K.; Li, S.; Haden, J.; Martinsen, R.

    2014-03-01

    There is increasing market demand for high-power, reliable red lasers for display and cinema applications. Due to the fundamental material system limit in this wavelength range, red diode lasers have lower efficiency and are more temperature sensitive than 790-980 nm diode lasers. In terms of reliability, red lasers are also more sensitive to catastrophic optical mirror damage (COMD) due to the higher photon energy. Thus, developing higher-power, reliable red lasers is very challenging. This paper presents nLIGHT's released red products from 639 nm to 690 nm, with established high performance and long-term reliability. These single-emitter diode lasers can work as stand-alone single-emitter units or efficiently integrate into our compact, passively-cooled Pearl™ fiber-coupled module architectures for higher output power and improved reliability. In order to further improve power and reliability, new chip optimizations have focused on improving epitaxial design/growth, chip configuration/processing, and optical facet passivation. Initial optimization has demonstrated promising results for 639 nm diode lasers to be reliably rated at 1.5 W and 690 nm diode lasers to be reliably rated at 4.0 W. Accelerated life-testing has started and further design optimizations are underway.

  8. Integrating Formal Methods and Testing 2002

    NASA Technical Reports Server (NTRS)

    Cukic, Bojan

    2002-01-01

    Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address the methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) Combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications. B) Quantify the impact of these methods on software reliability. C) Demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level. D) Quantify and justify the reliability estimate for systems developed using various methods.
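
    The observation that testing alone becomes impractical at ultra-high levels can be illustrated with the classic zero-failure demonstration bound; this minimal sketch is only the baseline that the paper's statistical framework improves on, and the reliability targets are illustrative.

      from math import ceil, log

      def tests_for_reliability(per_demand_reliability, confidence):
          """Failure-free test cases needed so that, if all pass, the stated per-demand
          reliability is demonstrated at the given confidence (zero-failure binomial
          bound: n = ln(1 - C) / ln(R))."""
          return ceil(log(1.0 - confidence) / log(per_demand_reliability))

      # Ultra-high targets quickly become impractical by testing alone, which is the
      # motivation for crediting formal verification and other V&V evidence.
      for r in (1 - 1e-3, 1 - 1e-4):
          print(f"R = {r}: {tests_for_reliability(r, 0.99):,} failure-free tests at 99% confidence")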

  9. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component is positively correlated with the failure influenced degree of that component, which provides a theoretical basis for reliability allocation of the machine center system.
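
    A rough sketch of a PageRank-style influence score over a cascading-failure digraph, in the spirit of the method summarized above; the adjacency matrix, damping factor, and reading of the score as a failure influenced degree are illustrative assumptions, not the paper's exact formulation.

      import numpy as np

      def pagerank(adjacency, damping=0.85, tol=1e-10):
          """Power-iteration PageRank over a directed graph given as a 0/1 adjacency
          matrix A, where A[i, j] = 1 means a failure of component i can propagate to j."""
          n = adjacency.shape[0]
          out_degree = adjacency.sum(axis=1)
          out_degree[out_degree == 0] = 1          # avoid division by zero for sink nodes
          transition = (adjacency / out_degree[:, None]).T   # column j collects rank flowing into j
          rank = np.full(n, 1.0 / n)
          while True:
              new_rank = (1 - damping) / n + damping * transition @ rank
              if np.abs(new_rank - rank).sum() < tol:
                  return new_rank
              rank = new_rank

      # Hypothetical 4-component machine center propagation graph (not the paper's data).
      A = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=float)
      print(pagerank(A))   # larger score is read here as a larger failure-influenced degree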

  10. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.

  11. Study of turboprop systems reliability and maintenance costs

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The overall reliability and maintenance costs (R&MCs) of past and current turboprop systems were examined. Maintenance cost drivers were found to be scheduled overhaul (40%), lack of modularity, particularly in the propeller and reduction gearbox, and lack of inherent durability (reliability) of some parts. Comparisons were made between the 501-D13/54H60 turboprop system and the widely used JT8D turbofan. It was found that the total maintenance cost per flight hour of the turboprop was 75% higher than that of the JT8D turbofan. Part of this difference was due to propeller and gearbox costs being higher than those of the fan and reverser, but most of the difference was in the engine core, where the older-technology turboprop core maintenance costs were nearly 70% higher than for the turbofan. The estimated maintenance costs of both the advanced turboprop and the advanced turbofan were less than those of the JT8D. The conclusion was that an advanced turboprop and an advanced turbofan, using similar cores, will have very competitive maintenance costs per flight hour.

  12. ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason

    2014-01-01

    Reliability has been highlighted by NASA as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability, nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals, and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low- to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.

  13. ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit

    NASA Technical Reports Server (NTRS)

    Sargusingh, Miriam J.; Nelson, Jason

    2014-01-01

    Reliability has been highlighted by NASA as critical to future human space exploration particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability; nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the Spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.

  14. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.

  15. Soft-Fault Detection Technologies Developed for Electrical Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2004-01-01

    The NASA Glenn Research Center, partner universities, and defense contractors are working to develop intelligent power management and distribution (PMAD) technologies for future spacecraft and launch vehicles. The goals are to provide higher performance (efficiency, transient response, and stability), higher fault tolerance, and higher reliability through the application of digital control and communication technologies. It is also expected that these technologies will eventually reduce the design, development, manufacturing, and integration costs for large, electrical power systems for space vehicles. The main focus of this research has been to incorporate digital control, communications, and intelligent algorithms into power electronic devices such as direct-current to direct-current (dc-dc) converters and protective switchgear. These technologies, in turn, will enable revolutionary changes in the way electrical power systems are designed, developed, configured, and integrated in aerospace vehicles and satellites. Initial successes in integrating modern, digital controllers have proven that transient response performance can be improved using advanced nonlinear control algorithms. One technology being developed includes the detection of "soft faults," those not typically covered by current systems in use today. Soft faults include arcing faults, corona discharge faults, and undetected leakage currents. Using digital control and advanced signal analysis algorithms, we have shown that it is possible to reliably detect arcing faults in high-voltage dc power distribution systems (see the preceding photograph). Another research effort has shown that low-level leakage faults and cable degradation can be detected by analyzing power system parameters over time. This additional fault detection capability will result in higher reliability for long-lived power systems such as reusable launch vehicles and space exploration missions.

  16. Six of one, half a dozen of the other: A measure of multidisciplinary inter/intra-rater reliability of the society for fetal urology and urinary tract dilation grading systems for hydronephrosis.

    PubMed

    Rickard, Mandy; Easterbrook, Bethany; Kim, Soojin; Farrokhyar, Forough; Stein, Nina; Arora, Steven; Belostotsky, Vladamir; DeMaria, Jorge; Lorenzo, Armando J; Braga, Luis H

    2017-02-01

    The urinary tract dilation (UTD) classification system was introduced to standardize terminology in the reporting of hydronephrosis (HN), and bridge a gap between pre- and postnatal classification such as the Society for Fetal Urology (SFU) grading system. Herein we compare the intra/inter-rater reliability of both grading systems. SFU (I-IV) and UTD (I-III) grades were independently assigned by 13 raters (9 pediatric urology staff, 2 nephrologists, 2 radiologists), twice, 3 weeks apart, to 50 sagittal postnatal ultrasonographic views of hydronephrotic kidneys. Data regarding ureteral measurements and bladder abnormalities were included to allow proper UTD categorization. Ten images were repeated to assess intra-rater reliability. Krippendorff's alpha coefficient was used to measure overall and by-grade intra/inter-rater reliability. Reliability between specialties and training levels was also analyzed. Overall inter-rater reliability was slightly higher for SFU (α = 0.842, 95% CI 0.812-0.879, in session 1; and α = 0.808, 95% CI 0.775-0.839, in session 2) than for UTD (α = 0.774, 95% CI 0.715-0.827, in session 1; and α = 0.679, 95% CI 0.605-0.750, in session 2). Reliability for intermediate grades (SFU II/III and UTD 2) of HN was poor regardless of the system. Reliabilities for SFU and UTD classifications among Urology, Nephrology, and Radiology, as well as between training levels, were not significantly different. Despite the introduction of HN grading systems to standardize the interpretation and reporting of renal ultrasound in infants with HN, none has been proven superior in allowing clinicians to distinguish between "moderate" grades. While this study demonstrated high reliability in distinguishing between "mild" (SFU I/II and UTD 1) and "severe" (SFU IV and UTD 3) grades of HN, the overall reliability between specialties was poor. This is in keeping with a previous report of modest inter-rater reliability of the SFU system. This drawback is likely explained by the subjective interpretation required to assign grades, which can be impacted by experience, image quality, and scanning technique. As shown in the figure, which demonstrates SFU II (a) and SFU III (b), as assigned by a radiologist, it is possible to argue that either of these images could be classified into both categories, as was observed during the grading sessions of this study. Although both systems have acceptable reliability, the SFU grading system showed higher overall intra/inter-rater reliability regardless of rater specialty than the UTD classification. Inter-rater reliability for SFU grades II/III and UTD 2 was low, highlighting the limitations of both classifications in regard to properly segregating moderate HN grades. Copyright © 2016 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.
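
    A minimal sketch of the agreement statistic used above, assuming the third-party Python package krippendorff is available; the rater-by-image grade matrix is invented for illustration and is not study data.

      import numpy as np
      import krippendorff   # third-party package: pip install krippendorff

      # rows = raters, columns = kidney images, values = invented SFU grades (1-4)
      ratings = np.array([
          [1, 2, 3, 4, 2, 3],
          [1, 2, 2, 4, 2, 3],
          [1, 3, 3, 4, 1, 3],
      ])

      alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="ordinal")
      print(f"Krippendorff's alpha (ordinal): {alpha:.3f}")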

  17. Scaling Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin

    2016-01-01

    For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.

  18. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, the Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems, Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical errors due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real-world system is used to illustrate the use of this technique in a more practical context.
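
    A minimal sketch of a flat (non-hierarchical) Markov reliability calculation using the matrix exponential from SciPy; the two-component parallel subsystem, failure rate, and mission time are illustrative assumptions, not the thesis example.

      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4   # illustrative failure rate per hour
      # Generator matrix for states {both up, one up, system failed}; rows sum to zero.
      Q = np.array([
          [-2 * lam, 2 * lam, 0.0],
          [0.0,      -lam,    lam],
          [0.0,      0.0,     0.0],   # failed state is absorbing
      ])

      t = 10000.0                        # mission time in hours
      p0 = np.array([1.0, 0.0, 0.0])     # start with both components up
      p_t = p0 @ expm(Q * t)             # state probabilities at time t
      print(f"subsystem reliability R(t) = {1.0 - p_t[2]:.6f}")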

  19. The Intellectual Work Management as an Essential Condition for Creativity Development of Higher Education Institution

    ERIC Educational Resources Information Center

    Yerzhanov, Yerlan T.; Adilova, Valentina Kh.; Shakaman, Yrysgul B.; Shaimardanov, Rafis Kh.

    2016-01-01

    The article discusses the methodological basis of creating effective system for intellectual work management at the higher education institution. The study of the intellectual work management is caused by the need of today's complicated socio-economic, organizational and educational systems to obtain reliably reproducible repercussions from future…

  20. Reliability Analysis and Modeling of ZigBee Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve the reliability problem. However, because the complexity of a mesh network is higher than that of the others, a division technique is applied to overcome the problem. A mesh network using the division technique is classified into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through the proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. Network complexity, greater resource usage, and complex object relationships all lead to lower network reliability.
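
    A minimal sketch of the series and parallel reliability block diagram (RBD) formulas that the analysis builds on; the block reliabilities and the toy decomposition are hypothetical, not values from the paper.

      from math import prod

      def series(reliabilities):
          """All blocks must work: R = product of R_i."""
          return prod(reliabilities)

      def parallel(reliabilities):
          """At least one block must work: R = 1 - product of (1 - R_i)."""
          return 1.0 - prod(1.0 - r for r in reliabilities)

      # Toy mesh-style decomposition: two redundant edges in parallel, in series
      # with a coordinator node (reliability values are hypothetical).
      edge_pair = parallel([0.95, 0.95])   # adding parallel edges raises reliability
      coordinator = 0.99
      print(f"path reliability = {series([coordinator, edge_pair]):.4f}")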

  1. Automation in visual inspection tasks: X-ray luggage screening supported by a system of direct, indirect or adaptable cueing with low and high system reliability.

    PubMed

    Chavaillaz, Alain; Schwaninger, Adrian; Michel, Stefan; Sauer, Juergen

    2018-05-25

    The present study evaluated three automation modes for improving performance in an X-ray luggage screening task. 140 participants were asked to detect the presence of prohibited items in X-ray images of cabin luggage. Twenty participants conducted this task without automatic support (control group), whereas the others worked with either indirect cues (system indicated the target presence without specifying its location), or direct cues (system pointed out the exact target location) or adaptable automation (participants could freely choose between no cue, direct and indirect cues). Furthermore, automatic support reliability was manipulated (low vs. high). The results showed a clear advantage for direct cues regarding detection performance and response time. No benefits were observed for adaptable automation. Finally, high automation reliability led to better performance and higher operator trust. The findings overall confirmed that automatic support systems for luggage screening should be designed such that they provide direct, highly reliable cues.

  2. Reliability and validity of CODA motion analysis system for measuring cervical range of motion in patients with cervical spondylosis and anterior cervical fusion.

    PubMed

    Gao, Zhongyang; Song, Hui; Ren, Fenggang; Li, Yuhuan; Wang, Dong; He, Xijing

    2017-12-01

    The aim of the present study was to evaluate the reliability of the Cartesian Optoelectronic Dynamic Anthropometer (CODA) motion system in measuring the cervical range of motion (ROM) and verify the construct validity of the CODA motion system. A total of 26 patients with cervical spondylosis and 22 patients with anterior cervical fusion were enrolled, and the CODA motion analysis system was used to measure the three-dimensional cervical ROM. Intra- and inter-rater reliability was assessed by intraclass correlation coefficients (ICCs), standard error of measurement (SEm), limits of agreement (LOA) and minimal detectable change (MDC). Independent samples t-tests were performed to examine the differences in cervical ROM between cervical spondylosis and anterior cervical fusion patients. The results revealed that in the cervical spondylosis group, the reliability was almost perfect (intra-rater reliability: ICC, 0.87-0.95; LOA, -12.86-13.70; SEm, 2.97-4.58; inter-rater reliability: ICC, 0.84-0.95; LOA, -13.09-13.48; SEm, 3.13-4.32). In the anterior cervical fusion group, the reliability was high (intra-rater reliability: ICC, 0.88-0.97; LOA, -10.65-11.08; SEm, 2.10-3.77; inter-rater reliability: ICC, 0.86-0.96; LOA, -10.91-13.66; SEm, 2.20-4.45). The cervical ROM in the cervical spondylosis group was significantly higher than that in the anterior cervical fusion group in all directions except for left rotation. In conclusion, the CODA motion analysis system is highly reliable in measuring cervical ROM, and its construct validity was verified, as the system was sufficiently sensitive to distinguish between the cervical spondylosis and anterior cervical fusion groups based on their ROM.
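
    A minimal sketch of how SEm and MDC follow from an ICC and a sample standard deviation, using the standard formulas SEm = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEm; the numbers are hypothetical, not the study's data.

      from math import sqrt

      def standard_error_of_measurement(sd, icc):
          """SEm = SD * sqrt(1 - ICC)."""
          return sd * sqrt(1.0 - icc)

      def minimal_detectable_change(sem, z=1.96):
          """MDC = z * sqrt(2) * SEm (both test and retest contribute error)."""
          return z * sqrt(2.0) * sem

      # Hypothetical values in degrees of cervical ROM, not the study's data.
      sem = standard_error_of_measurement(sd=10.0, icc=0.90)
      print(f"SEm = {sem:.2f} deg, MDC95 = {minimal_detectable_change(sem):.2f} deg")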

  3. LOX/LH2 propulsion system for launch vehicle upper stage, test results

    NASA Technical Reports Server (NTRS)

    Ikeda, T.; Imachi, U.; Yuzawa, Y.; Kondo, Y.; Miyoshi, K.; Higashino, K.

    1984-01-01

    The test results of small LOX/LH2 engines for two propulsion systems, a pump-fed system and a pressure-fed system, are reported. The pump-fed system has the advantages of higher performance and a higher mass fraction. The pressure-fed system has the advantages of higher reliability and relative simplicity. Adoption of these cryogenic propulsion systems for the upper stage of a launch vehicle increases the payload capability at low cost. The 1,000 kg thrust class engine was selected for this cryogenic stage. A thrust chamber assembly for the pressure-fed propulsion system was tested, and the results indicate that it has good performance, meeting the system requirements.

  4. Final evaluation plan : Utah Transit Authority Connection Protection system

    DOT National Transportation Integrated Search

    2003-08-27

    Utah Transit Authority (UTA) implemented a Connection Protection system (CP) to improve the reliability of transfers from the higher frequency light rail trains, TRAX, to the lower frequency bus services. The CP system examines the status of TRAX tra...

  5. Evaluation of Utah Transit Authority's Connection Protection system

    DOT National Transportation Integrated Search

    2004-05-12

    The Utah Transit Authority (UTA) implemented a Connection Protection (CP) system to improve the reliability of transfers from the higher frequency light rail TRAX trains to the lower frequency bus services. The CP system examines the status of TRAX t...

  6. Inter-rater reliability of twelve diagnostic systems of schizophrenia.

    PubMed

    Helmes, E; Landmark, J; Kazarian, S S

    1983-05-01

    The present and past symptomatology of 31 chronic schizophrenics was rated by four independent judges, two experienced clinical psychiatrists and two psychiatric residents, in a context more representative of actual clinical practice than most research studies. Ratings were made on 64 symptoms derived from 12 diagnostic systems, based on either live or videotaped interviews for present symptomatology and case records for past symptomatology. Inter-rater reliabilities were higher for present than for past symptoms, and in general did not approach those reported for highly trained raters. There were no differences between live and videotaped interviews. Diagnostic systems differed widely in rater agreement. The most consistent across both past and present symptomatology were the systems of Langfeldt, Schneider, and DSM-III, for which the level of reliability was consistent with other studies.

  7. Detailed test plans : Evaluation of Utah Transit Authority Connection Protection system

    DOT National Transportation Integrated Search

    2003-10-31

    The purpose of this evaluation is to assess the effectiveness of the Connection Protection (CP) system implemented by the Utah Transit Authority (UTA). The objective of the CP system is to improve the reliability of transfers from the higher frequenc...

  8. Operational present status and reliability analysis of the upgraded EAST cryogenic system

    NASA Astrophysics Data System (ADS)

    Zhou, Z. W.; Zhang, Q. Y.; Lu, X. F.; Hu, L. B.; Zhu, P.

    2017-12-01

    Since the first commissioning in 2005, the cryogenic system for EAST (Experimental Advanced Superconducting Tokamak) has been cooled down and warmed up for thirteen experimental campaigns. From 2012 to 2015, in order to improve refrigeration efficiency and reliability, the EAST cryogenic system was gradually upgraded with new helium screw compressors and new dynamic-gas-bearing helium turbine expanders with eddy current brakes to remedy the original poor mechanical and operational performance. The fully upgraded cryogenic system was then put into operation in the eleventh cool-down experiment and has been operated for the latest several experimental campaigns. The upgraded system has successfully coped with the various normal operational modes during cool-down and 4.5 K steady-state operation under pulsed heat load from the tokamak, as well as abnormal fault modes including turbine protection stops. In this paper, the upgraded EAST cryogenic system, including its functional analysis and new cryogenic control networks, is presented in detail. Its present operational status in the latest cool-down experiments is also presented and the system reliability is analyzed, showing a high reliability and low fault rate after the upgrade. Finally, future work needed to meet the higher reliability requirement for uninterrupted long-term experimental operation is proposed.

  9. Evaluating the Level of Degree Programmes in Higher Education: The Case of Nursing

    ERIC Educational Resources Information Center

    Rexwinkel, Trudy; Haenen, Jacques; Pilot, Albert

    2013-01-01

    The European Quality Assurance system demands that the degree programme level is represented in terms of quantitative outcomes to be valid and reliable. To meet this need the Educational Level Evaluator (ELE) was devised. This conceptually designed procedure with instrumentation aiming to evaluate the level of a degree validly and reliably still…

  10. Research of vibration controlling based on programmable logic controller for electrostatic precipitator

    NASA Astrophysics Data System (ADS)

    Zhang, Zisheng; Li, Yanhu; Li, Jiaojiao; Liu, Zhiqiang; Li, Qing

    2013-03-01

    In order to improve the reliability, stability, and automation of an electrostatic precipitator (ESP), the vibration motor circuits for the ESP and the vibration-control ladder diagram program are investigated using a high-performance Schneider PLC and the Twidosoft programming software. Operational results show that after adopting the PLC, the vibration motor can run automatically; compared with a traditional vibration control system based on a single-chip microcomputer, it has higher reliability, better stability, and a higher dust removal rate, with dust emission concentrations <= 50 mg/m3, providing a new method for vibration control of ESPs.

  11. Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective

    NASA Technical Reports Server (NTRS)

    Reinholtz, Kirk

    2008-01-01

    This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy; however, more capabilities are now required, primarily in the software, while providing the same or a higher degree of reliability. The complexity of designing such an autonomous system is great, particularly when attempting to address the needs for increased capabilities and high reliability without increased time or money. The efforts to develop programming models for the new hardware and the integration of the software architecture are highlighted.

  12. The reliability and validity of the Saliba Postural Classification System

    PubMed Central

    Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M.; Pappas, Evangelos

    2016-01-01

    Objectives: To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Methods: Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter- and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Results: Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524–0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702–0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594–0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). Discussion: The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated. PMID:27559288
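
    A minimal sketch of percentage agreement and Cohen's kappa for two raters, using scikit-learn; the categories and ratings are invented for illustration, not the study's classifications.

      from sklearn.metrics import cohen_kappa_score

      # Invented postural categories assigned by two raters to ten pictures.
      rater_1 = ["A", "B", "B", "C", "A", "B", "C", "A", "B", "B"]
      rater_2 = ["A", "B", "C", "C", "A", "B", "C", "A", "A", "B"]

      kappa = cohen_kappa_score(rater_1, rater_2)
      agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
      print(f"agreement = {agreement:.0%}, Cohen's kappa = {kappa:.2f}")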

  13. The reliability and validity of the Saliba Postural Classification System.

    PubMed

    Collins, Cristiana Kahl; Johnson, Vicky Saliba; Godwin, Ellen M; Pappas, Evangelos

    2016-07-01

    To determine the reliability and validity of the Saliba Postural Classification System (SPCS). Two physical therapists classified pictures of 100 volunteer participants standing in their habitual posture for inter- and intra-tester reliability. For validity, 54 participants stood on a force plate in a habitual and a corrected posture, while a vertical force was applied through the shoulders until the clinician felt a postural give. Data were extracted at the time the give was felt and at a time in the corrected posture that matched the peak vertical ground reaction force (VGRF) in the habitual posture. Inter-tester reliability demonstrated 75% agreement with a Kappa = 0.64 (95% CI = 0.524-0.756, SE = 0.059). Intra-tester reliability demonstrated 87% agreement with a Kappa = 0.8 (95% CI = 0.702-0.898, SE = 0.05) and 80% agreement with a Kappa = 0.706 (95% CI = 0.594-0.818, SE = 0.057). The examiner applied a significantly higher (p < 0.001) peak vertical force in the corrected posture prior to a postural give when compared to the habitual posture. Within the corrected posture, the %VGRF was higher when the test was ongoing vs. when a postural give was felt (p < 0.001). The %VGRF was not different between the two postures when comparing the peaks (p = 0.214). The SPCS has substantial agreement for inter- and intra-tester reliability and is largely a valid postural classification system as determined by the larger vertical forces in the corrected postures. Further studies on the correlation between the SPCS and diagnostic classifications are indicated.

  14. Reliability of Beam Loss Monitors System for the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Guaglio, G.; Dehning, B.; Santoni, C.

    2004-11-01

    The employment of superconducting magnets in high energy colliders opens challenging failure scenarios and brings new criticalities for the protection of the whole system. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses, while for losses over medium and longer time scales it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, temperature and radiation damage experimental data, as well as standard databases. All the data have been processed by reliability software (Isograph). The analysis ranges from the component data to the system configuration.

  15. The Italian version of the Mouth Handicap in Systemic Sclerosis scale (MHISS) is valid, reliable and useful in assessing oral health-related quality of life (OHRQoL) in systemic sclerosis (SSc) patients.

    PubMed

    Maddali Bongi, S; Del Rosso, A; Miniati, I; Galluccio, F; Landi, G; Tai, G; Matucci-Cerinic, M

    2012-09-01

    In systemic sclerosis (SSc), mouth and face involvement leads to problems in oral health-related quality of life (OHRQoL). The Mouth Handicap in Systemic Sclerosis scale (MHISS) is a 12-item questionnaire specifically quantifying mouth disability in SSc, organized in 3 subscales. Our aim was to validate the Italian version of the MHISS by assessing its test-retest reliability and internal and external consistency in Italian SSc patients. Forty SSc patients (7 dSSc, 33 lSSc; age and disease duration: 57.27 ± 11.41 and 9.4 ± 4.4 years; 22 with sicca syndrome) were evaluated with the MHISS. The MHISS was translated following a forward-backward translation procedure, with independent translations and counter-translation. Test-retest reliability was evaluated, comparing the results of two administrations, with the intraclass correlation coefficient (ICC). Internal consistency was assessed by Cronbach's α and external consistency by comparison with mouth opening. The MHISS has good test-retest reliability (ICC: 0.93) and internal consistency (Cronbach's α: 0.99). Good external consistency was confirmed by correlation with mouth opening (rho: -0.3869, p: 0.0137). The total MHISS score was 17.65 ± 5.20, with a subscale 1 (reduced mouth opening) score of 6.60 ± 2.85 and subscale 2 (sicca syndrome) and subscale 3 (aesthetic concerns) scores of 7.82 ± 2.59 and 3.22 ± 1.14. Total and subscale 2 scores are higher in dSSc than in lSSc. This result may be due to the higher prevalence of sicca syndrome in dSSc than in lSSc (p = 0.0109). Our results support the validity and reliability of the MHISS in Italian SSc patients, specifically measuring SSc OHRQoL.
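
    A minimal sketch of the internal-consistency statistic reported above (Cronbach's alpha), computed from a respondents-by-items score matrix; the demo responses are made up, not MHISS data.

      import numpy as np

      def cronbach_alpha(item_scores):
          """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
          scores = np.asarray(item_scores, dtype=float)
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_var / total_var)

      # Made-up responses to a 4-item subscale (rows = patients), for illustration only.
      demo = [[2, 3, 2, 3],
              [1, 1, 2, 1],
              [3, 3, 3, 2],
              [0, 1, 1, 1]]
      print(f"Cronbach's alpha = {cronbach_alpha(demo):.2f}")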

  16. Inter-rater reliability of a modified version of Delitto et al.’s classification-based system for low back pain: a pilot study

    PubMed Central

    Apeldoorn, Adri T.; van Helvoirt, Hans; Ostelo, Raymond W.; Meihuizen, Hanneke; Kamper, Steven J.; van Tulder, Maurits W.; de Vet, Henrica C. W.

    2016-01-01

    Study design: Observational inter-rater reliability study. Objectives: To examine: (1) the inter-rater reliability of a modified version of Delitto et al.’s classification-based algorithm for patients with low back pain; (2) the influence of different levels of familiarity with the system; and (3) the inter-rater reliability of algorithm decisions in patients who clearly fit into a subgroup (clear classifications) and those who do not (unclear classifications). Methods: Patients were examined twice on the same day by two of three participating physical therapists with different levels of familiarity with the system. Patients were classified into one of four classification groups. Raters were blind to the others' classification decision. In order to quantify the inter-rater reliability, percentages of agreement and Cohen's Kappa were calculated. Results: A total of 36 patients were included (clear classification n = 23; unclear classification n = 13). The overall rate of agreement was 53% and the Kappa value was 0.34 [95% confidence interval (CI): 0.11-0.57], which indicated only fair inter-rater reliability. Inter-rater reliability for patients with a clear classification (agreement 52%, Kappa value 0.29) was not higher than for patients with an unclear classification (agreement 54%, Kappa value 0.33). Familiarity with the system (i.e. trained with written instructions and previous research experience with the algorithm) did not improve the inter-rater reliability. Conclusion: Our pilot study challenges the inter-rater reliability of the classification procedure in clinical practice. Therefore, more knowledge is needed about factors that affect the inter-rater reliability, in order to improve the clinical applicability of the classification scheme. PMID:27559279

  17. A Litmus Test of Academic Quality

    ERIC Educational Resources Information Center

    Orkodashvili, Mariam

    2009-01-01

    The paper discusses the major issues connected with the accreditation procedures in the higher education system in the U.S. The questions raised are as follows: what are the reliable and credible indicators of quality instruction that could be measured in the process of accreditation of higher education institutions? How does greater transparency in…

  18. Performance improvement on a MIMO radio-over-fiber system by probabilistic shaping

    NASA Astrophysics Data System (ADS)

    Kong, Miao; Yu, Jianjun

    2018-01-01

    Probabilistic shaping (PS), a typical modulation format optimization technology, has become a promising technique and attracts more and more attention because of its higher transmission capacity and lower computational complexity. In this paper, we experimentally demonstrated reliable 8-Gbaud delivery of a polarization-multiplexed PS 16-QAM single-carrier signal in a MIMO radio-over-fiber system with a 20-km SMF-28 fiber link and a 2.5-m wireless link at 60 GHz. The BER performance of PS 16-QAM signals at different baud rates was also evaluated. Moreover, PS 16-QAM was experimentally compared with uniform 16-QAM; PS 16-QAM provides a better compromise between effectiveness and reliability and a higher capacity than uniform 16-QAM for the radio-over-fiber system.
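
    A minimal sketch of probabilistic shaping on a square 16-QAM constellation via Maxwell-Boltzmann weighting; the shaping factors are arbitrary illustrations, and the sketch is not the paper's transmitter implementation.

      import numpy as np

      levels = np.array([-3.0, -1.0, 1.0, 3.0])
      constellation = np.array([i + 1j * q for i in levels for q in levels])

      def shaped_distribution(nu):
          """Maxwell-Boltzmann weighting P(x) ~ exp(-nu * |x|^2) over the constellation."""
          weights = np.exp(-nu * np.abs(constellation) ** 2)
          return weights / weights.sum()

      for nu in (0.0, 0.05, 0.1):            # nu = 0 recovers uniform 16-QAM
          p = shaped_distribution(nu)
          entropy = -np.sum(p * np.log2(p))  # information rate in bits per symbol
          energy = np.sum(p * np.abs(constellation) ** 2)
          print(f"nu = {nu:.2f}: entropy = {entropy:.2f} bit/symbol, mean energy = {energy:.2f}")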

  19. Reliability of Fault Tolerant Control Systems. Part 2

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2000-01-01

    This paper reports Part II of a two-part effort that is intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and the high risks associated with decisions in redundancy management due to multiple sources of uncertainty and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault-tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of superior robustness, with an estimator of higher resolution, and with a control performance requirement of lesser stringency.
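
    A minimal sketch of how imperfect coverage limits the reliability gain from redundancy, using the standard single-backup formula R = R_p + (1 - R_p) * c * R_b; the component reliabilities and coverage values are hypothetical.

      def standby_reliability(r_primary, r_backup, coverage):
          """Primary with one backup, where a primary failure is detected and the
          switchover succeeds only with probability `coverage`:
          R = R_p + (1 - R_p) * c * R_b."""
          return r_primary + (1.0 - r_primary) * coverage * r_backup

      # Hypothetical numbers showing how limited coverage caps the redundancy benefit.
      for c in (0.90, 0.99):
          print(f"coverage = {c:.2f}: system reliability = {standby_reliability(0.95, 0.95, c):.4f}")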

  20. Energy Advantages for Green Schools

    ERIC Educational Resources Information Center

    Griffin, J. Tim

    2012-01-01

    Because of many advantages associated with central utility systems, school campuses, from large universities to elementary schools, have used district energy for decades. District energy facilities enable thermal and electric utilities to be generated with greater efficiency and higher system reliability, while requiring fewer maintenance and…

  1. Assessing the Students' Evaluations of Educational Quality (SEEQ) Questionnaire in Greek Higher Education

    ERIC Educational Resources Information Center

    Grammatikopoulos, Vasilis; Linardakis, M.; Gregoriadis, A.; Oikonomidis, V.

    2015-01-01

    The aim of the current study was to provide a valid and reliable instrument for the evaluation of the teaching effectiveness in the Greek higher education system. Other objectives of the study were (a) the examination of the dimensionality and the higher-order structure of the Greek version of Students' Evaluation of Educational Quality (SEEQ)…

  2. Reliability in individual monitoring service.

    PubMed

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, the reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. The upgrade of the reporting program to a web-based e-SSDL marks a major improvement in the reliability of Nuclear Malaysia's IMS on the whole. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring by the IMS, thus enhancing the status of the radiation protection framework of the country.

  3. Delirium diagnosis defined by cluster analysis of symptoms versus diagnosis by DSM and ICD criteria: diagnostic accuracy study.

    PubMed

    Sepulveda, Esteban; Franco, José G; Trzepacz, Paula T; Gaviria, Ana M; Meagher, David J; Palma, José; Viñuelas, Eva; Grau, Imma; Vilella, Elisabet; de Pablo, Joan

    2016-05-26

    Information on the validity and reliability of delirium criteria is necessary for clinicians, researchers, and further developments of DSM or ICD. We compare four DSM and ICD delirium diagnostic criteria versions, which were developed by consensus of experts, with a phenomenology-based natural diagnosis delineated using cluster analysis of delirium features in a sample with a high prevalence of dementia. We also measured inter-rater reliability of each system when applied by two evaluators from distinct disciplines. Cross-sectional analysis of 200 consecutive patients admitted to a skilled nursing facility, independently assessed within 24-48 h after admission with the Delirium Rating Scale-Revised-98 (DRS-R98) and for DSM-III-R, DSM-IV, DSM-5, and ICD-10 criteria for delirium. Cluster analysis (CA) delineated natural delirium and nondelirium reference groups using DRS-R98 items, and the diagnostic systems' performance was then evaluated against the CA-defined groups using logistic regression and crosstabs for discriminant analysis (sensitivity, specificity, percentage of subjects correctly classified by each diagnostic system and their individual criteria, and performance for each system when excluding each individual criterion are reported). The Kappa Index (K) was used to report inter-rater reliability for delirium diagnostic systems and their individual criteria. 117 (58.5 %) patients had preexisting dementia according to the Informant Questionnaire on Cognitive Decline in the Elderly. CA delineated 49 delirium subjects and 151 nondelirium. Against these CA groups, delirium diagnosis accuracy was highest using DSM-III-R (87.5 %), followed closely by DSM-IV (86.0 %), ICD-10 (85.5 %) and DSM-5 (84.5 %). ICD-10 had the highest specificity (96.0 %) but lowest sensitivity (53.1 %). DSM-III-R had the best sensitivity (81.6 %) and the best sensitivity-specificity balance. DSM-5 had the highest inter-rater reliability (K = 0.73), while DSM-III-R criteria were the least reliable. Using our CA-defined, phenomenologically based delirium designations as the reference standard, we found performance discordance among the four diagnostic systems when tested in subjects where comorbid dementia was prevalent. The most complex diagnostic systems have higher accuracy, and the newer DSM-5 has higher reliability. Our novel phenomenological approach to designing a delirium reference standard may be preferred to guide revisions of diagnostic systems in the future.
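
    For readers less familiar with the figures of merit quoted above, the sketch below computes sensitivity, specificity, overall accuracy, and Cohen's kappa from a 2 x 2 cross-tabulation; the counts are hypothetical and not taken from the study.

```python
def sens_spec_acc(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy of a diagnostic system against a reference standard."""
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / (tp + fn + tn + fp)

def cohens_kappa(table):
    """Cohen's kappa for two raters; table[i][j] counts cases rated class i by A and class j by B."""
    n = float(sum(sum(row) for row in table))
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    p_exp = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                for i in range(len(table)))
    return (p_obs - p_exp) / (1 - p_exp)

# hypothetical counts: 40 true positives, 9 false negatives, 145 true negatives, 6 false positives
print("sens / spec / acc:", sens_spec_acc(40, 9, 145, 6))
# hypothetical two-rater agreement table for delirium present / absent
print("kappa:", round(cohens_kappa([[42, 7], [5, 146]]), 2))
```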

  4. Proximal humeral fracture classification systems revisited.

    PubMed

    Majed, Addie; Macleod, Iain; Bull, Anthony M J; Zyto, Karol; Resch, Herbert; Hertel, Ralph; Reilly, Peter; Emery, Roger J H

    2011-10-01

    This study evaluated several classification systems and expert surgeons' anatomic understanding of these complex injuries based on a consecutive series of patients. We hypothesized that current proximal humeral fracture classification systems, regardless of imaging methods, are not sufficiently reliable to aid clinical management of these injuries. Complex fractures in 96 consecutive patients were investigated by generation of rapid sequence prototyping models from computed tomography Digital Imaging and Communications in Medicine (DICOM) imaging data. Four independent senior observers were asked to classify each model using 4 classification systems: Neer, AO, Codman-Hertel, and a prototype classification system by Resch. Interobserver and intraobserver κ coefficient values were calculated for the overall classification system and for selected classification items. The κ coefficient values for interobserver reliability were 0.33 for Neer, 0.11 for AO, 0.44 for Codman-Hertel, and 0.15 for Resch. Interobserver reliability κ coefficient values were 0.32 for the number of fragments and 0.30 for the anatomic segment involved using the Neer system, 0.30 for the AO type (A, B, C), and 0.53, 0.48, and 0.08 for the Resch impaction/distraction, varus/valgus and flexion/extension subgroups, respectively. Three-part fractures showed low reliability for the Neer and AO systems. Currently available evidence suggests that the fracture classifications in use have poor intra- and inter-observer reliability regardless of the imaging modality used, making treatment of these injuries difficult and weakening scientific research as well. This study was undertaken to evaluate the reliability of several systems using rapid sequence prototype models. Overall interobserver κ values represented slight to moderate agreement. The most reliable interobserver scores were found with the Codman-Hertel classification, followed by elements of Resch's trial system. The AO system had the lowest values. The higher interobserver reliability values for the Codman-Hertel system showed that it is the only comprehensive fracture description studied, whereas the novel classification by Resch showed clear definition with respect to varus/valgus and impaction/distraction angulation. Copyright © 2011 Journal of Shoulder and Elbow Surgery Board of Trustees. All rights reserved.

  5. The Effect of Incorrect Reliability Information on Expectations, Perceptions, and Use of Automation.

    PubMed

    Barg-Walkow, Laura H; Rogers, Wendy A

    2016-03-01

    We examined how providing artificially high or low statements about automation reliability affected expectations, perceptions, and use of automation over time. One common method of introducing automation is providing explicit statements about the automation's capabilities. Research is needed to understand how expectations from such introductions affect perceptions and use of automation. Explicit-statement introductions were manipulated to set higher-than (90%), same-as (75%), or lower-than (60%) levels of expectations in a dual-task scenario with 75% reliable automation. Two experiments were conducted to assess expectations, perceptions, compliance, reliance, and task performance over (a) 2 days and (b) 4 days. The baseline assessments showed initial expectations of automation reliability matched introduced levels of expectation. For the duration of each experiment, the lower-than groups' perceptions were lower than the actual automation reliability. However, the higher-than groups' perceptions were no different from actual automation reliability after Day 1 in either study. There were few differences between groups for automation use, which generally stayed the same or increased with experience using the system. Introductory statements describing artificially low automation reliability have a long-lasting impact on perceptions about automation performance. Statements including incorrect automation reliability do not appear to affect use of automation. Introductions should be designed according to desired outcomes for expectations, perceptions, and use of the automation. Low expectations have long-lasting effects. © 2015, Human Factors and Ergonomics Society.

  6. Perspectives of different type biological life support systems (BLSS) usage in space missions

    NASA Astrophysics Data System (ADS)

    Bartsev, S. I.; Gitelson, J. I.; Lisovsky, G. M.; Mezhevikin, V. V.; Okhonin, V. A.

    1996-10-01

    In this paper an attempt is made to combine three important criteria of LSS comparison: minimum mass, maximum safety and maximum quality of life. Well-known types of BLSS were considered: with higher plants; with higher plants and mushrooms; with microalgae; and with hydrogen-oxidizing bacteria. These BLSSs were compared in terms of "integrated" mass for the case of a vegetarian diet and a "normal" one (with animal proteins and fats). It was shown that the BLSS with higher plants and incineration of wastes becomes the best when the exploitation period is more than 1 yr. The dependence of the higher-plant LSS structure on operation time was found. Comparison of BLSSs in terms of integral reliability (this criterion includes the mass and quality-of-life criteria) for a lunar base scenario showed that BLSSs with higher plants are advantageous in reliability and comfort. This comparison was made for the achieved level of closure technology and for a prospective one.
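
    A toy break-even calculation in the spirit of the "integrated" mass comparison, with purely hypothetical masses: a storage-based system is light up front but pays a steady resupply penalty, while a bioregenerative system is heavy up front but cheap to sustain, so the ranking flips once the mission is long enough.

```python
def integrated_mass(initial_kg, resupply_kg_per_year, years):
    """Launch mass plus cumulative resupply over the exploitation period."""
    return initial_kg + resupply_kg_per_year * years

storage = dict(initial_kg=500.0, resupply_kg_per_year=4000.0)          # hypothetical figures
bioregenerative = dict(initial_kg=4200.0, resupply_kg_per_year=300.0)  # hypothetical figures

for years in (0.5, 1.0, 2.0, 3.0):
    m_s = integrated_mass(years=years, **storage)
    m_b = integrated_mass(years=years, **bioregenerative)
    print(f"{years:4.1f} yr: storage {m_s:7.0f} kg | BLSS {m_b:7.0f} kg | "
          f"lighter: {'BLSS' if m_b < m_s else 'storage'}")
```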

  7. Systematic Evaluation of Stochastic Methods in Power System Scheduling and Dispatch with Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yishen; Zhou, Zhi; Liu, Cong

    2016-08-01

    As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
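
    A deliberately tiny illustration of the deterministic-versus-stochastic unit commitment comparison discussed above (not the IEEE 118-bus study): one slow baseload unit must be committed before the wind outcome is known, a fast peaker and load shedding provide recourse, and the commitment is chosen either against the point forecast or by minimizing expected cost over scenarios. All numbers are hypothetical.

```python
# Scenario data (hypothetical): (wind MW, probability); point forecast and fixed demand.
scenarios = [(50.0, 0.25), (90.0, 0.50), (110.0, 0.25)]
forecast = 90.0
DEMAND, VOLL = 100.0, 3000.0   # demand (MW) and value of lost load ($/MWh)

def dispatch_cost(committed, wind):
    """Second-stage (real-time) dispatch cost for a given commitment decision and wind outcome."""
    cost, base = 0.0, 0.0
    if committed:                                   # slow baseload unit: 20-80 MW, $20/MWh, $500 commitment cost
        cost += 500.0
        base = min(80.0, max(20.0, DEMAND - wind))
    residual = max(DEMAND - wind - base, 0.0)
    peaker = min(60.0, residual)                    # fast peaker: up to 60 MW, $80/MWh, needs no commitment
    shed = residual - peaker                        # unserved energy
    return cost + 20.0 * base + 80.0 * peaker + VOLL * shed

def expected_cost(committed):
    return sum(p * dispatch_cost(committed, w) for w, p in scenarios)

det_commit = min((False, True), key=lambda u: dispatch_cost(u, forecast))   # deterministic UC: trust the forecast
sto_commit = min((False, True), key=expected_cost)                          # stochastic UC: minimize expected cost
print("deterministic commits baseload:", det_commit, "-> expected cost", expected_cost(det_commit))
print("stochastic    commits baseload:", sto_commit, "-> expected cost", expected_cost(sto_commit))
```

    With these numbers the forecast-only commitment skips the baseload unit and pays more in expectation, which is the qualitative effect the report measures at scale.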

  8. A systematic review of publications assessing reliability and validity of the Behavioral Risk Factor Surveillance System (BRFSS), 2004–2011

    PubMed Central

    2013-01-01

    Background In recent years response rates on telephone surveys have been declining. Rates for the behavioral risk factor surveillance system (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods In order to assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published from 2004–2011 dealing with tests of reliability and validity of BRFSS measures was compiled and presented by topics of health risk behavior. Assessments of the quality of each publication were undertaken using a categorical rubric. Higher rankings were achieved by authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests which compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data. Literature which undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to surveys which utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others. Conclusions Limitations of the examination of the BRFSS were due to question differences among surveys used as comparisons, as well as mode of data collection differences. As the BRFSS moves to incorporating cell phone data and changing weighting methods, a review of reliability and validity research indicated that past BRFSS landline only data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data which include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on estimates in the future. PMID:23522349

  9. Market Orientation in Universities: A Comparative Study of Two National Higher Education Systems

    ERIC Educational Resources Information Center

    Hemsley-Brown, Jane; Oplatka, Izhar

    2010-01-01

    Purpose: The paper's purpose is to test: whether there are significant differences between England and Israel, in terms of perceptions of market orientation (MO) in higher education (HE); which MO dimensions (student, competition, intra-functional) indicate more positive attitudes and whether the differences are significant; and the reliability of…

  10. Reliability and Geographic Trends of 50,000 Photovoltaic Systems in the USA: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jordan, D. C.; Kurtz, S. R.

    2014-09-01

    This paper presents performance and reliability data from nearly 50,000 photovoltaic (PV) systems totaling 1.7 gigawatts installed capacity in the USA from 2009 to 2012 and their geographic trends. About 90% of the normal systems and about 85% of all systems, including systems with known issues, performed to within 10% or better of expected performance. Although considerable uncertainty may exist due to the nature of the data, hotter climates appear to exhibit some degradation not seen in the more moderate climates. Special causes of underperformance and their impacts are delineated by reliability category. Hardware-related issues are dominated by inverter problems (totaling less than 0.5%) and underperforming modules (totaling less than 0.1%). Furthermore, many reliability categories show a significant decrease in occurrence from year 1 to subsequent years, emphasizing the need for higher-quality installations but also the need for improved standards development. The probability of PV system damage because of hail is below 0.05%. Singular weather events can have a significant impact, such as a single lightning strike to a transformer or the impact of a hurricane. However, grid outages are more likely to have a significant impact than PV system damage when extreme weather events occur.

  11. Reliability of Beam Loss Monitor Systems for the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Guaglio, G.; Dehning, B.; Santoni, C.

    2005-06-01

    The increase of beam energy and beam intensity, together with the use of superconducting magnets, opens new failure scenarios and brings new criticalities for the whole accelerator protection system. For the LHC beam loss protection system, the failure rate and the availability requirements have been evaluated using the Safety Integrity Level (SIL) approach. A downtime cost evaluation is used as input for the SIL approach. The most critical systems, which contribute to the final SIL value, are the dump system, the interlock system, the beam loss monitors system, and the energy monitor system. The Beam Loss Monitors System (BLMS) is critical for short and intense particle losses at 7 TeV and is assisted by the Fast Beam Current Decay Monitors at 450 GeV. For medium and longer loss durations it is assisted by other systems, such as the quench protection system and the cryogenic system. For BLMS, hardware and software have been evaluated in detail. The reliability input figures have been collected using historical data from the SPS, using temperature and radiation damage experimental data, as well as using standard databases. All the data have been processed by reliability software (Isograph). The analysis spans from the component data to the system configuration.
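
    A hedged sketch of the kind of bookkeeping the SIL approach implies: sum the dangerous failure rates of a non-redundant monitor chain and map the result onto the IEC 61508 per-hour bands for continuous/high-demand operation. The component names and rates below are hypothetical, not the BLMS figures.

```python
def system_failure_rate(component_rates):
    """Approximate failure rate of a series (non-redundant) chain: per-hour rates add."""
    return sum(component_rates)

def sil_band(pfh):
    """Map a probability of dangerous failure per hour (continuous mode) onto IEC 61508 SIL bands."""
    if pfh < 1e-9: return "better than SIL 4"
    if pfh < 1e-8: return "SIL 4"
    if pfh < 1e-7: return "SIL 3"
    if pfh < 1e-6: return "SIL 2"
    if pfh < 1e-5: return "SIL 1"
    return "below SIL 1"

# hypothetical per-hour dangerous failure rates for one monitor chain
rates = {"ionisation chamber": 2e-9, "front-end electronics": 5e-8,
         "optical link": 1e-8, "threshold comparator": 2e-8}
total = system_failure_rate(rates.values())
print(f"chain failure rate = {total:.2e} /h -> {sil_band(total)}")
```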

  12. Radioisotope Power System Pool Concept

    NASA Technical Reports Server (NTRS)

    Rusick, Jeffrey J.; Bolotin, Gary S.

    2015-01-01

    Advanced Radioisotope Power Systems (RPS) for NASA deep space science missions have historically used static thermoelectric-based designs because they are highly reliable, and their radioisotope heat sources can be passively cooled throughout the mission life cycle. Recently, a significant effort to develop a dynamic RPS, the Advanced Stirling Radioisotope Generator (ASRG), was conducted by NASA and the Department of Energy, because Stirling-based designs offer energy conversion efficiencies four times higher than heritage thermoelectric designs, and the efficiency would proportionately reduce the amount of radioisotope fuel needed for the same power output. However, the long-term reliability of a Stirling-based design is a concern compared to thermoelectric designs, because for certain Stirling system architectures the radioisotope heat sources must be actively cooled via the dynamic operation of Stirling converters throughout the mission life cycle. To address this reliability concern, a new dynamic Stirling cycle RPS architecture is proposed, called the RPS Pool Concept.

  13. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  14. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.; Milligan, Michael; Brinkman, Greg

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  15. Advancement of a 30 kW Solar Electric Propulsion System Capability for NASA Human and Robotic Exploration Missions

    NASA Technical Reports Server (NTRS)

    Smith, Bryan K.; Nazario, Margaret L.; Manzella, David H.

    2012-01-01

    Solar Electric Propulsion has evolved into a demonstrated operational capability performing station keeping for geosynchronous satellites, enabling challenging deep-space science missions, and assisting in the transfer of satellites from an elliptical Geostationary Transfer Orbit (GTO) to a Geostationary Earth Orbit (GEO). Advancing higher-power SEP systems will enable numerous future applications for human, robotic, and commercial missions. These missions are enabled by either the increased performance of the SEP system or by the cost reductions when compared to conventional chemical propulsion systems. Higher-power SEP systems that provide very high payload for robotic missions also trade favorably for the advancement of human exploration beyond low Earth orbit. Demonstrated reliable systems are required for human space flight, and due to their successful present-day widespread use and inherent high reliability, SEP systems have progressively become a viable entrant into these future human exploration architectures. NASA studies have identified a 30 kW-class SEP capability as the next appropriate evolutionary step, applicable to a wide range of both human and robotic missions. This paper describes the planning options, mission applications, and technology investments for representative 30 kW-class SEP mission concepts under consideration by NASA.

  16. Performance Analysis of Stirling Engine-Driven Vapor Compression Heat Pump System

    NASA Astrophysics Data System (ADS)

    Kagawa, Noboru

    Stirling engine-driven vapor compression systems have many unique advantages, including higher thermal efficiencies, preferable exhaust gas characteristics, multi-fuel usage, and low noise and vibration, which can play an important role in alleviating environmental and energy problems. This paper introduces a design method for such systems based on reliable mathematical methods for the Stirling and Rankine cycles using reliable thermophysical information for refrigerants. The model deals with a combination of a kinematic Stirling engine and a scroll compressor. Some experimental coefficients are used to formulate the model. The obtained results show the performance behavior in detail. The measured performance of the actual system coincides with the calculated results. Furthermore, the calculated results clarify the performance using alternative refrigerants for R-22.

  17. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  18. A classification system for characterization of physical and non-physical work factors.

    PubMed

    Genaidy, A; Karwowski, W; Succop, P; Kwon, Y G; Alhemoud, A; Goyal, D

    2000-01-01

    A comprehensive evaluation of work-related performance factors is a prerequisite to developing integrated and long-term solutions to workplace performance improvement. This paper describes a work-factor classification system that categorizes the entire domain of workplace factors impacting performance. A questionnaire-based instrument was developed to implement this classification system in industry. Fifty jobs were evaluated in 4 different service and manufacturing companies using the proposed questionnaire-based instrument. The reliability coefficients obtained from the analyzed jobs were considered good (0.589 to 0.862). In general, the physical work factors resulted in higher reliability coefficients (0.847 to 0.862) than non-physical work factors (0.589 to 0.768).

  19. The welfare effects of integrating renewable energy into electricity markets

    NASA Astrophysics Data System (ADS)

    Lamadrid, Alberto J.

    The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services for both individuals and social planners with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability and, in the case of a Stochastic, Security-Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes. The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.

  20. Developing an Advanced Life Support System for the Flexible Path into Deep Space

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Kliss, Mark H.

    2010-01-01

    Long duration human missions beyond low Earth orbit, such as a permanent lunar base, an asteroid rendezvous, or exploring Mars, will use recycling life support systems to preclude supplying large amounts of metabolic consumables. The International Space Station (ISS) life support design provides a historic guiding basis for future systems, but both its system architecture and the subsystem technologies should be reconsidered. Different technologies for the functional subsystems have been investigated and some past alternates appear better for flexible path destinations beyond low Earth orbit. There is a need to develop more capable technologies that provide lower mass, increased closure, and higher reliability. A major objective of redesigning the life support system for the flexible path is achieving the maintainability and ultra-reliability necessary for deep space operations.

  1. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable; the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.
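
    For reference, ICC values like those quoted above can be reproduced with a short two-way random-effects calculation (ICC(2,1), absolute agreement, single measurement, after Shrout and Fleiss); the joint-angle values below are hypothetical.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    data is an (n subjects x k raters/sessions) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()   # between-rater/session
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows, ms_cols = ss_rows / (n - 1), ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# hypothetical sagittal knee angles (degrees) digitized in two sessions
angles = [[12.1, 13.0], [8.4, 9.1], [15.2, 14.8], [10.0, 11.2], [6.8, 7.0]]
print(f"ICC(2,1) = {icc_2_1(angles):.2f}")
```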

  2. Brief Report: Development of the Adolescent Empathy and Systemizing Quotients

    ERIC Educational Resources Information Center

    Auyeung, Bonnie; Allison, Carrie; Wheelwright, Sally; Baron-Cohen, Simon

    2012-01-01

    Adolescent versions of the Empathy Quotient (EQ) and Systemizing Quotient (SQ) were developed and administered to n = 1,030 parents of typically developing adolescents, aged 12-16 years. Both measures showed good test-retest reliability and high internal consistency. Girls scored significantly higher on the EQ, and boys scored significantly higher…

  3. Precise time dissemination and applications development on the Bonneville Power Administration system

    NASA Technical Reports Server (NTRS)

    Martin, Ken E.; Esztergalyos, J.

    1992-01-01

    The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.

  4. Precise time dissemination and applications development on the Bonneville Power Administration system

    NASA Astrophysics Data System (ADS)

    Martin, Ken E.; Esztergalyos, J.

    1992-07-01

    The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.

  5. Low Level Leaks

    NASA Technical Reports Server (NTRS)

    1998-01-01

    NASA has transferred the improved portable leak detector technology to UE Systems, Inc. This instrument was developed to detect leaks in fluid systems of critical launch and ground support equipment. This system incorporates innovative electronic circuitry, improved transducers, collecting horns, and contact sensors that provide a much higher degree of reliability, sensitivity and versatility over previously used systems. Potential commercial uses are pipelines, underground utilities, air-conditioning systems, petrochemical systems, aerospace, power transmission lines and medical devices.

  6. Arm cranking versus wheelchair propulsion for testing aerobic fitness in children with spina bifida who are wheelchair dependent.

    PubMed

    Bloemen, Manon A T; de Groot, Janke F; Backx, Frank J G; Westerveld, Rosalyne A; Takken, Tim

    2015-05-01

    To determine the best test performance and feasibility using a Graded Arm Cranking Test vs a Graded Wheelchair Propulsion Test in young people with spina bifida who use a wheelchair, and to determine the reliability of the best test. Validity and reliability study. Young people with spina bifida who use a wheelchair. Physiological responses were measured during a Graded Arm Cranking Test and a Graded Wheelchair Propulsion Test using a heart rate monitor and calibrated mobile gas analysis system (Cortex Metamax). For validity, peak oxygen uptake (VO2peak) and peak heart rate (HRpeak) were compared using paired t-tests. For reliability, the intra-class correlation coefficients, standard error of measurement, and standard detectable change were calculated. VO2peak and HRpeak were higher during wheelchair propulsion compared with arm cranking (23.1 vs 19.5 ml/kg/min, p = 0.11; 165 vs 150 beats/min, p < 0.05). Reliability of wheelchair propulsion showed high intra-class correlation coefficients (ICCs) for both VO2peak (ICC = 0.93) and HRpeak (ICC = 0.90). This pilot study shows higher HRpeak and a tendency to higher VO2peak in young people with spina bifida who are using a wheelchair when tested during wheelchair propulsion compared with arm cranking. Wheelchair propulsion showed good reliability. We recommend performing a wheelchair propulsion test for aerobic fitness testing in this population.

  7. BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm

    PubMed Central

    Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan

    2016-01-01

    The Global Navigation Satellite System can provide all-day three-dimensional position and speed information. Currently, using a single navigation system alone cannot satisfy the requirements for reliability and integrity. In order to improve the reliability and stability of satellite navigation, a positioning method combining the BDS and GPS navigation systems is presented, and the measurement model and the state model are described. Furthermore, the modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed under BDS and GPS conditions, and analyses of single-system and multi-system positioning have been carried out, respectively. The experimental results are compared with the traditional estimation results, which shows that the proposed method can achieve highly precise positioning. Especially when the number of satellites is not adequate, the proposed method combines the BDS and GPS systems to achieve a higher positioning precision. PMID:27153068
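
    The core of any UKF-style filter, including the square-root variant, is the unscented transform: propagate deterministically chosen sigma points through the nonlinear measurement model and recombine them into a mean and covariance. The sketch below is a minimal, generic illustration with a 2-D position state and two hypothetical satellite positions; it is not the paper's BDS/GPS model or its square-root implementation.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
    """Standard UKF sigma points and weights (alpha, beta, kappa are tuning parameters)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    root = np.linalg.cholesky((n + lam) * cov)              # matrix square root of the scaled covariance
    pts = np.vstack([mean, mean + root.T, mean - root.T])   # 2n + 1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    return pts, wm, wc

def unscented_transform(pts, wm, wc, f):
    """Mean and covariance of f(x) approximated from the sigma points."""
    y = np.array([f(p) for p in pts])
    mean = wm @ y
    diff = y - mean
    return mean, (wc[:, None] * diff).T @ diff

# hypothetical 2-D receiver position estimate and two satellite positions (km)
x = np.array([100.0, 50.0])
P = np.diag([25.0, 25.0])
sats = np.array([[0.0, 20200.0], [15000.0, 18000.0]])
h = lambda p: np.linalg.norm(sats - p, axis=1)              # pseudorange-style measurement model

pts, wm, wc = sigma_points(x, P)
z_pred, S = unscented_transform(pts, wm, wc, h)
print("predicted ranges:", z_pred)
print("predicted measurement covariance (noise-free):\n", S)
```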

  8. Human Expert Labeling Process (HELP): Towards a Reliable Higher-Order User State Labeling Process and Tool to Assess Student Engagement

    ERIC Educational Resources Information Center

    Aslan, Sinem; Mete, Sinem Emine; Okur, Eda; Oktay, Ece; Alyuz, Nese; Genc, Utku Ergin; Stanhill, David; Esme, Asli Arslan

    2017-01-01

    In a series of longitudinal research studies, researchers at Intel Corporation in Turkey have been working towards an adaptive learning system automatically detecting student engagement as a higher-order user state in real-time. The labeled data necessary for supervised learning can be obtained through labeling conducted by human experts. Using…

  9. High-resolution audiometry: an automated method for hearing threshold acquisition with quality control.

    PubMed

    Bian, Lin

    2012-01-01

    In clinical practice, hearing thresholds are measured at only five to six frequencies at octave intervals. Thus, the audiometric configuration cannot closely reflect the actual status of the auditory structures. In addition, differential diagnosis requires quantitative comparison of behavioral thresholds with physiological measures, such as otoacoustic emissions (OAEs), that are usually measured in higher resolution. The purpose of this research was to develop a method to improve the frequency resolution of the audiogram. A repeated-measures design was used in the study to evaluate the reliability of the threshold measurements. A total of 16 participants with clinically normal hearing and mild hearing loss were recruited from a population of university students. No intervention was involved in the study. A custom-developed system and software were used for threshold acquisition with quality control (QC). With real-ear calibration and monitoring of test signals, the system provided an accurate and individualized measure of hearing thresholds that were determined by an analysis based on signal detection theory (SDT). The reliability of the threshold measure was assessed by correlation and differences between the repeated measures. The audiometric configurations were diverse and unique to each individual ear. The accuracy, within-subject reliability, and between-test repeatability are relatively high. With QC, high-resolution audiograms can be reliably and accurately measured. Hearing thresholds measured as ear canal sound pressures with higher frequency resolution can provide more customized hearing-aid fitting. The test system may be integrated with other physiological measures, such as OAEs, into a comprehensive evaluative tool. American Academy of Audiology.

  10. R&D of high reliable refrigeration system for superconducting generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hosoya, T.; Shindo, S.; Yaguchi, H.

    1996-12-31

    Super-GM carries out R&D on 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long-term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as the compressor unit, higher-temperature cold box and lower-temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development target of a liquefaction capacity of 100 L/h and impurity removal in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all different modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have carried out performance tests and system performance analysis of oil-free screw type and turbo type compressors, which greatly improve the reliability of conventional refrigeration systems. The operation performance and operational control method of the compressors have been clarified through the tests and analysis.

  11. [Design of low-intermediate frequency electrotherapy and pain assessment system].

    PubMed

    Liang, Chunyan; Tian, Xuelong; Yu, Xuehong; Luo, Hongyan

    2014-06-01

    To address the limitations of single-mode treatment and the design separation between treatment and assessment in electrotherapy equipment, a system combining low-intermediate frequency treatment and efficacy evaluation was developed. With a C8051F020 single-chip microcomputer as the core, and through circuit design and software programming, the system enables switching of therapeutic parameters at will, together with the collection, display and storage of pressure pain threshold data during assessment. Experimental results showed that the stimulus waveform, current intensity, frequency, and duty ratio of the system output were adjustable, accurate and reliable. The obtained pressure pain threshold had high accuracy (< 0.3 N) and good stability, guiding parameter choice for precise electrical stimulation. The system therefore provides reliable technical support for treatment and assessment of curative effect.

  12. Development and validation of the irritable bowel syndrome scale under the system of quality of life instruments for chronic diseases QLICD-IBS: combinations of classical test theory and generalizability theory.

    PubMed

    Lei, Pingguang; Lei, Guanghe; Tian, Jianjun; Zhou, Zengfen; Zhao, Miao; Wan, Chonghua

    2014-10-01

    This paper aims to develop the irritable bowel syndrome (IBS) scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-IBS) by the modular approach and to validate it by both classical test theory and generalizability theory. The QLICD-IBS was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, and quantitative statistical procedures. One hundred twelve inpatients with IBS provided data measuring QOL three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness, employing correlation analysis, factor analyses, multi-trait scaling analysis, t tests, and also G studies and D studies of generalizability theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. Test-retest reliability coefficients (Pearson r and intra-class correlation (ICC)) for the overall score and all domains were higher than 0.80; the internal consistency α for all domains at the two measurements was higher than 0.70 except for the social domain (0.55 and 0.67, respectively). The overall score and scores for all domains/facets had statistically significant changes after treatment, with moderate or higher effect sizes (standardized response mean, SRM) ranging from 0.72 to 1.02 at the domain level. G coefficients and the index of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-IBS has good validity, reliability, and responsiveness, with some notable strengths, and can be used as a quality of life instrument for patients with IBS.
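
    The internal consistency figures above are Cronbach's alpha values; the short sketch below shows the standard computation, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), on hypothetical item scores rather than the QLICD-IBS data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha; items is an (n respondents x k items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical 5-point item scores for one domain (6 respondents x 4 items)
scores = [[4, 3, 4, 5], [2, 2, 3, 2], [5, 4, 4, 5],
          [3, 3, 2, 3], [4, 4, 5, 4], [1, 2, 2, 1]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```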

  13. GAS DISCHARGE SWITCH EVALUATION FOR RHIC BEAM ABORT KICKER APPLICATION.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ZHANG,W.; SANDBERG,J.; SHELDRAKE,R.

    2002-06-30

    A gas discharge switch, the EEV HX3002, is being evaluated at Brookhaven National Laboratory as a possible candidate for the RHIC Beam Abort Kicker modulator main switch. At higher beam energy and higher beam intensity, switch stability becomes crucial. The hollow anode thyratron used in the existing system is not rated for long reverse current conduction. Thyratron hold-off voltage de-rating caused by reverse-voltage arcing has been the main limitation of system operation. To improve system reliability, a new type of gas discharge switch has been suggested by Marconi Applied Technology for its reverse conducting capability.

  14. [Signs and symptoms of autonomic dysfunction in dysphonic individuals].

    PubMed

    Park, Kelly; Behlau, Mara

    2011-01-01

    To verify the occurrence of signs and symptoms of autonomic nervous system dysfunction in individuals with behavioral dysphonia, and to compare them with the results obtained by individuals without vocal complaints. Participants were 128 adult individuals with ages between 14 and 74 years, divided into two groups: behavioral dysphonia (61 subjects) and without vocal complaints (67 subjects). The Protocol of Autonomic Dysfunction was administered, containing 46 questions: 22 related to the autonomic nervous system with no direct relationship with voice, 16 related to both the autonomic nervous system and voice, six non-relevant questions, and two reliability questions. There was a higher occurrence of reported neurovegetative signs in the group with behavioral dysphonia on questions related to voice, such as frequent throat clearing, frequent need to swallow, fatigability when speaking, and sore throat. On questions not directly related to voice, dysphonic individuals presented a greater occurrence of three out of 22 symptoms: gas, tinnitus and aerophagia. Both groups presented similar results on questions non-relevant to the autonomic nervous system. The reliability questions needed reformulation. Individuals with behavioral dysphonia present a higher occurrence of neurovegetative signs and symptoms, particularly those with a direct relationship with voice, indicating greater lability of the autonomic nervous system in these subjects.

  15. Reliability Considerations of ULP Scaled CMOS in Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    White, Mark; MacNeal, Kristen; Cooper, Mark

    2012-01-01

    NASA, the aerospace community, and other high reliability (hi-rel) users of advanced microelectronic products face many challenges as technology continues to scale into the deep sub-micron region. Decreasing the feature size of CMOS devices not only allows more components to be placed on a single chip, but it increases performance by allowing faster switching (or clock) speeds with reduced power compared to larger scaled devices. Higher performance, and lower operating and stand-by power characteristics of Ultra-Low Power (ULP) microelectronics are not only desirable, but also necessary to meet low power consumption design goals of critical spacecraft systems. The integration of these components in such systems, however, must be balanced with the overall risk tolerance of the project.

  16. Test-retest reliability of the assessment of postural stability in typically developing children and in hearing impaired children.

    PubMed

    De Kegel, A; Dhooge, I; Cambier, D; Baetens, T; Palmans, T; Van Waelvelde, H

    2011-04-01

    The purpose of this study was to establish test-retest reliability of centre of pressure (COP) measurements obtained by an AccuGait portable forceplate (ACG), mean COG sway velocity measured by a Basic Balance Master (BBM), and clinical balance tests in children with and without balance difficulties. 49 typically developing children and 23 hearing impaired children, with a higher risk for stability problems, between 6 and 12 years of age participated. Each child performed the modified Clinical Test of Sensory Interaction on Balance (mCTSIB), Unilateral Stance (US) and Tandem Stance on the ACG, the mCTSIB and US on the BBM, and clinical balance tests: one-leg standing, balance beam walking and one-leg hopping. All subjects completed 2 test sessions on 2 different days in the same week, assessed by the same examiner. Among COP measurements obtained by the ACG, mean sway velocity was the most reliable parameter, with all ICCs higher than 0.72. The standard deviation (SD) of sway velocity, sway area, SD of anterior-posterior and SD of medio-lateral COP data showed moderate to excellent reliability with ICCs between 0.55 and 0.96, but some caution is warranted in certain conditions. The BBM is less reliable, but clinical balance tests are as reliable as the ACG. Hearing impaired children exhibited better relative reliability (ICC) and comparable absolute reliability (SEM) for most balance parameters compared to typically developing children. Reliable information regarding postural stability of typically developing children and hearing impaired children may be obtained utilizing COP measurements generated by an AccuGait system and clinical balance tests. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Reimagining cost recovery in Pakistan's irrigation system through willingness-to-pay estimates for irrigation water from a discrete choice experiment

    NASA Astrophysics Data System (ADS)

    Bell, Andrew Reid; Shah, M. Azeem Ali; Ward, Patrick S.

    2014-08-01

    It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies.

  18. Reimagining cost recovery in Pakistan's irrigation system through willingness-to-pay estimates for irrigation water from a discrete choice experiment

    PubMed Central

    Bell, Andrew Reid; Shah, M Azeem Ali; Ward, Patrick S

    2014-01-01

    It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies. PMID:25552779

  19. A Critique of the Use of Self-Evaluation in a Compulsory Accreditation System

    ERIC Educational Resources Information Center

    Van Kemenade, Everard; Hardjono, Teun W.

    2010-01-01

    Self-evaluation is supposed to be a valid, reliable and easy-to-use instrument to commit professionals to external quality assurance. The writing of a self-evaluation report is the first step in most higher education accreditation systems all over the world. Research on accreditation in the Netherlands and Flanders shows that professionals…

  20. The Case for Modular Redundancy in Large-Scale High Performance Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engelmann, Christian; Ong, Hong Hoe; Scott, Stephen L

    2009-01-01

    Recent investigations into resilience of large-scale high-performance computing (HPC) systems showed a continuous trend of decreasing reliability and availability. Newly installed systems have a lower mean-time to failure (MTTF) and a higher mean-time to recover (MTTR) than their predecessors. Modular redundancy is being used in many mission critical systems today to provide for resilience, such as for aerospace and command & control systems. The primary argument against modular redundancy for resilience in HPC has always been that the capability of a HPC system, and respective return on investment, would be significantly reduced. We argue that modular redundancy can significantly increase compute node availability as it removes the impact of scale from single compute node MTTR. We further argue that single compute nodes can be much less reliable, and therefore less expensive, and still be highly available, if their MTTR/MTTF ratio is maintained.
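
    A back-of-the-envelope version of the availability argument made above, with hypothetical node figures: steady-state availability is MTTF/(MTTF+MTTR), and mirroring a node removes most of the penalty of a slow repair because the logical node is down only when both replicas are down.

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability of a single repairable node."""
    return mttf_hours / (mttf_hours + mttr_hours)

def redundant_pair_availability(a_node):
    """Pair of independent replicas; the logical node is up if at least one replica is up."""
    return 1.0 - (1.0 - a_node) ** 2

# hypothetical figures: a cheap node with modest MTTF and a slow repair
a = availability(mttf_hours=2000.0, mttr_hours=24.0)
print(f"single node availability  : {a:.5f}")
print(f"mirrored pair availability: {redundant_pair_availability(a):.7f}")
```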

  1. High-precision real-time 3D shape measurement based on a quad-camera system

    NASA Astrophysics Data System (ADS)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density of projected patterns, which, in turn, leads to severe phase ambiguities that must be solved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
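
    For concreteness, the wrapped phase in a conventional three-step PSP arrangement (phase shifts of -2π/3, 0, +2π/3) follows from the three fringe intensities as φ = atan2(√3(I1 - I3), 2I2 - I1 - I3); the sketch below applies this to a synthetic 1-D fringe signal. Stereo phase unwrapping of the remaining 2π ambiguities, the subject of the paper, is not shown.

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Wrapped phase in (-pi, pi] from three fringe images with shifts of -2*pi/3, 0, +2*pi/3."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# synthetic fringes over a toy 1-D "scene" with a true phase ramp, for illustration only
x = np.linspace(0, 4 * np.pi, 8)
a, b = 128.0, 100.0                              # background intensity and fringe modulation
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
i1, i2, i3 = (a + b * np.cos(x + d) for d in shifts)

phi = wrapped_phase_3step(i1, i2, i3)
print(np.round(phi, 3))                          # wrapped copy of x; 2*pi jumps remain to be unwrapped
```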

  2. A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components

    NASA Technical Reports Server (NTRS)

    Matthews, William T.; Neal, Donald M.

    1997-01-01

    This paper introduces a dual active load path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components for an active dual path system, enhanced by a stress level bias in one component. The design is applied to a baseline design: a safe life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and for weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient of variation levels. Results showed that the biased dual path system lifetimes, for both the first element failure and residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999, and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.
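
    A minimal Monte Carlo sketch of the kind of comparison described above: life at a specified reliability for a single element, a two-element standby system, and the first-element failure of an active dual path. The Weibull parameters and the 10% stress-level bias are hypothetical placeholders, not the AHS round-robin data or the paper's fatigue model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000                      # Monte Carlo sample size
    shape, scale = 4.0, 1.0e5          # hypothetical Weibull fatigue-life parameters (cycles)

    lives_a = rng.weibull(shape, N) * scale        # component A
    lives_b = rng.weibull(shape, N) * scale * 0.9  # component B, biased to a higher stress level

    def life_at_reliability(lives, R):
        """Life (cycles) survived with probability R, estimated from the samples."""
        return np.quantile(lives, 1.0 - R)

    R = 0.9999
    single   = life_at_reliability(lives_a, R)
    standby  = life_at_reliability(lives_a + lives_b, R)             # standby element used after the first fails
    dual_1st = life_at_reliability(np.minimum(lives_a, lives_b), R)  # first-element failure in an active dual path
    print(single, standby, dual_1st)
    ```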

  3. Probabilistic evaluation of seismic isolation effect with respect to siting of a fusion reactor facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeda, Masatoshi; Komura, Toshiyuki; Hirotani, Tsutomu

    1995-12-01

    Annual failure probabilities of buildings and equipment were roughly evaluated for two fusion-reactor-like buildings, with and without seismic base isolation, in order to examine the effectiveness of the base isolation system regarding siting issues. The probabilities were calculated considering nonlinearity and rupture of the isolators. While the probability of building failure was almost equal for both buildings on the same site, the function failures for equipment showed that the base-isolated building had higher reliability than the non-isolated building. Even if the base-isolated building alone is located in a higher seismic hazard area, it could compete favorably with the ordinary one in reliability of equipment.

  4. Silicon Nanophotonics for Many-Core On-Chip Networks

    NASA Astrophysics Data System (ADS)

    Mohamed, Moustafa

    The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency. This trend has become problematic for performance in many-core architectures. Moreover, power consumption is increasing with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits in the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switching flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower than that of their electrical counterparts for intermediate and long links. Finally, through wavelength division multiplexing, we can keep the high bandwidth trends without sacrificing throughput. This thesis focuses on realizing nanophotonics for communication in many-core architectures at different design levels, considering reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other hand. The design involves the choice of devices, the design of the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability not only degrades performance but can block communication. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow to address reliability in the system. In the proposed flow, reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance. In fact, our solution can scale to a thousand cores with low overhead.

  5. Lockheed L-1011 avionic flight control redundant systems

    NASA Technical Reports Server (NTRS)

    Throndsen, E. O.

    1976-01-01

    The Lockheed L-1011 automatic flight control systems - yaw stability augmentation and automatic landing - are described in terms of their redundancies. The reliability objectives for these systems are discussed and related to in-service experience. In general, the availability of the stability augmentation system is higher than the original design requirement, but is commensurate with early estimates. The in-service experience with automatic landing is not sufficient to provide verification of Category 3 automatic landing system estimated availability.

  6. Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis

    NASA Technical Reports Server (NTRS)

    Min, James B.

    2005-01-01

    Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.

  7. Reliability and Reproducibility of Advanced ECG Parameters in Month-to-Month and Year-to-Year Recordings in Healthy Subjects

    NASA Technical Reports Server (NTRS)

    Starc, Vito; Abughazaleh, Ahmed S.; Schlegel, Todd T.

    2014-01-01

    Advanced resting ECG parameters such as the spatial mean QRS-T angle and the QT variability index (QTVI) have important diagnostic and prognostic utility, but their reliability and reproducibility (R&R) are not well characterized. We hypothesized that the spatial QRS-T angle would have relatively higher R&R than parameters such as QTVI that are more responsive to transient changes in the autonomic nervous system. The R&R of several conventional and advanced ECG parameters were studied via intraclass correlation coefficients (ICCs) and coefficients of variation (CVs) in: (1) 15 supine healthy subjects from month-to-month; (2) 27 supine healthy subjects from year-to-year; and (3) 25 subjects after transition from the supine to the seated posture. As hypothesized, for the spatial mean QRS-T angle and many conventional ECG parameters, ICCs were higher, and CVs lower, than for QTVI, suggesting that the former parameters are more reliable and reproducible.
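
    The two repeatability statistics used above can be sketched as follows. The one-way ICC and within-subject CV are computed here on hypothetical month-to-month QRS-T angle data, not the study's recordings.

    ```python
    import numpy as np

    def icc_oneway(x: np.ndarray) -> float:
        """One-way random-effects ICC(1). x has shape (subjects, repeated measurements)."""
        n, k = x.shape
        grand = x.mean()
        ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    def mean_cv(x: np.ndarray) -> float:
        """Average within-subject coefficient of variation (as a fraction)."""
        return float((x.std(axis=1, ddof=1) / x.mean(axis=1)).mean())

    # Hypothetical example: 15 subjects, spatial QRS-T angle measured at two monthly visits.
    rng = np.random.default_rng(1)
    true_angle = rng.normal(35.0, 15.0, size=(15, 1))
    visits = true_angle + rng.normal(0.0, 3.0, size=(15, 2))
    print(icc_oneway(visits), mean_cv(visits))
    ```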

  8. 76 FR 4912 - Proposed Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... reliable evaluation design to produce accurate evidence of the effect of HPOG on individuals and health job training programs systems. The goals of the HPOG evaluation are to establish a performance management... grantee organizations (higher education Institutions, workforce investment boards, private training...

  9. [Adaptive reactions of students from mountain and plain regions of Latin America to conditions of middle Russia].

    PubMed

    Ermakova, N V

    2003-01-01

    This article contains the results of a comparative study of the functional state of the respiratory and cardiovascular systems of practically healthy male students aged 19-22, natives of mountain and plain regions of Latin America, during their adaptation to the conditions of middle Russia. We established that there are significant differences in the functional state of the cardio-respiratory system between students from mountain and plain regions of Latin America. Representatives of the mountain regions typically showed higher vital capacity, permeability of the large and medium bronchi, and stroke volume, together with lower heart rate, systolic arterial pressure, and myocardial tension index, but a higher coefficient of myocardial efficiency, than inhabitants of the plain. Considerable differences were also observed in the interrelations between indicators. Strong correlations between small-bronchus permeability and cardiovascular system indicators were noted for plain inhabitants. For mountain inhabitants, almost every indicator of bronchial permeability correlated significantly with vital capacity but did not correlate with hemodynamic indicators.

  10. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long term reliability. Different firmware strategies alongside their integration in the demonstrator are presented in the context of high reliability protection against hardware malfunction and radiation induced errors.

  11. Power Electronics Thermal Management Research: Annual Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno, Gilberto

    The objective for this project is to develop thermal management strategies to enable efficient and high-temperature wide-bandgap (WBG)-based power electronic systems (e.g., emerging inverter and DC-DC converter). Reliable WBG devices are capable of operating at elevated temperatures (≥ 175 °C). However, packaging WBG devices within an automotive inverter and operating them at higher junction temperatures will expose other system components (e.g., capacitors and electrical boards) to temperatures that may exceed their safe operating limits. This creates challenges for thermal management and reliability. In this project, system-level thermal analyses are conducted to determine the effect of elevated device temperatures on inverter components. Thermal modeling work is then conducted to evaluate various thermal management strategies that will enable the use of highly efficient WBG devices with automotive power electronic systems.

  12. The Physician Recommendation Coding System (PhyReCS): A Reliable and Valid Method to Quantify the Strength of Physician Recommendations During Clinical Encounters

    PubMed Central

    Scherr, Karen A.; Fagerlin, Angela; Williamson, Lillie D.; Davis, J. Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A.

    2016-01-01

    Background: Physicians' recommendations affect patients' treatment choices. However, most research relies on physicians' or patients' retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. Objective: To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Methods: Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used one-way ANOVAs to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients' perceived treatment recommendations matched our coded recommendations. Results: The PhyReCS was highly reliable (Krippendorff's alpha = .89, 95% CI [.86, .91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = .98, SE = .18, p < .001) or active surveillance (mean difference = 1.10, SE = .14, p < .001). Patients' perceived recommendations matched coded recommendations 81% of the time. Conclusion: The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. PMID:27343015

  13. The Physician Recommendation Coding System (PhyReCS): A Reliable and Valid Method to Quantify the Strength of Physician Recommendations During Clinical Encounters.

    PubMed

    Scherr, Karen A; Fagerlin, Angela; Williamson, Lillie D; Davis, J Kelly; Fridman, Ilona; Atyeo, Natalie; Ubel, Peter A

    2017-01-01

    Physicians' recommendations affect patients' treatment choices. However, most research relies on physicians' or patients' retrospective reports of recommendations, which offer a limited perspective and have limitations such as recall bias. To develop a reliable and valid method to measure the strength of physician recommendations using direct observation of clinical encounters. Clinical encounters (n = 257) were recorded as part of a larger study of prostate cancer decision making. We used an iterative process to create the 5-point Physician Recommendation Coding System (PhyReCS). To determine reliability, research assistants double-coded 50 transcripts. To establish content validity, we used 1-way analyses of variance to determine whether relative treatment recommendation scores differed as a function of which treatment patients received. To establish concurrent validity, we examined whether patients' perceived treatment recommendations matched our coded recommendations. The PhyReCS was highly reliable (Krippendorff's alpha = 0.89, 95% CI [0.86, 0.91]). The average relative treatment recommendation score for each treatment was higher for individuals who received that particular treatment. For example, the average relative surgery recommendation score was higher for individuals who received surgery versus radiation (mean difference = 0.98, SE = 0.18, P < 0.001) or active surveillance (mean difference = 1.10, SE = 0.14, P < 0.001). Patients' perceived recommendations matched coded recommendations 81% of the time. The PhyReCS is a reliable and valid way to capture the strength of physician recommendations. We believe that the PhyReCS would be helpful for other researchers who wish to study physician recommendations, an important part of patient decision making. © The Author(s) 2016.

  14. Rapid Modeling and Analysis Tools: Evolution, Status, Needs and Directions

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Stone, Thomas J.; Ransom, Jonathan B. (Technical Monitor)

    2002-01-01

    Advanced aerospace systems are becoming increasingly more complex, and customers are demanding lower cost, higher performance, and high reliability. Increased demands are placed on the design engineers to collaborate and integrate design needs and objectives early in the design process to minimize risks that may occur later in the design development stage. High performance systems require better understanding of system sensitivities much earlier in the design process to meet these goals. The knowledge, skills, intuition, and experience of an individual design engineer will need to be extended significantly for the next generation of aerospace system designs. Then a collaborative effort involving the designer, rapid and reliable analysis tools and virtual experts will result in advanced aerospace systems that are safe, reliable, and efficient. This paper discusses the evolution, status, needs and directions for rapid modeling and analysis tools for structural analysis. First, the evolution of computerized design and analysis tools is briefly described. Next, the status of representative design and analysis tools is described along with a brief statement on their functionality. Then technology advancements to achieve rapid modeling and analysis are identified. Finally, potential future directions including possible prototype configurations are proposed.

  15. [The evaluation of sensitivity and specificity of technique of detection of C-reactive protein under diagnostic of infectious complications in patients with acute lymphoblastic leucosis receiving chemotherapy].

    PubMed

    Vladimirova, S G; Tarasova, L N; Dokshina, I A; Cherepanova, V A

    2014-11-01

    The C-reactive protein is a generally recognized marker of inflammation and bacterial infection. However, the diagnostic effectiveness of this indicator remains an open issue in patients with oncologic hematological diseases. The level of C-reactive protein can increase with neoplastic processes; conversely, the inhibition of the immune response observed under cytostatic therapy can decrease synthesis of this protein. The study was organized to establish levels of C-reactive protein as markers of infection in adult patients with acute lymphoblastic leucosis receiving chemotherapy and to evaluate their diagnostic effectiveness. The sample included 34 patients with acute lymphoblastic leucosis; all patients had infectious complications at various stages of treatment. The levels of C-reactive protein in groups of patients with localized infections (mucositis, abscess, pneumonia, etc.) or fever of unknown genesis had no statistical differences but were reliably higher than in patients without infectious complications. The concentrations of C-reactive protein in patients with the syndrome of systemic inflammatory response and sepsis did not differ. At the same time, the level of C-reactive protein under systemic infection (syndrome of systemic inflammatory response, sepsis) was reliably higher than in cases of localized infection. The diagnostically reliable levels of C-reactive protein were established as follows: lower than 11 mg/l, infectious complications are lacking; higher than 11 mg/l, presence of an infectious process; higher than 82 mg/l, generalization of infection. These levels are characterized by high diagnostic sensitivity (92% and 97%, respectively) and specificity (97% and 97%) when patients receive therapy without L-asparaginase. At the stages of introduction of this preparation, which affects the protein-synthesizing function of the liver, the sensitivity of the proposed criteria decreases (69% and 55%, respectively). However, due to high specificity (100% and 96%), their diagnostic effectiveness remains high.

  16. Integrated image presentation of transmission and fluorescent X-ray CT using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Zeniya, T.; Takeda, T.; Yu, Q.; Hasegawa, Y.; Hyodo, K.; Yuasa, T.; Hiranaka, Y.; Itai, Y.; Akatsuka, T.

    2001-07-01

    We have developed a computed tomography (CT) system with synchrotron radiation (SR) to detect fluorescent X-rays and transmitted X-rays simultaneously. Both SR transmission X-ray CT (SR-TXCT) and SR fluorescent X-ray CT (SR-FXCT) can produce cross-sectional images with high spatial and contrast resolution compared to conventional CT. TXCT gives morphological information and FXCT gives functional information of organs. Therefore, a superposed display system for SR-FXCT and SR-TXCT images has been developed for clinical diagnosis with higher reliability. A preliminary experiment with a brain phantom was carried out and the superposition of both images was performed. The superposed SR-CT image gave us both functional and morphological information easily with high reliability, thus demonstrating the usefulness of this system.

  17. Estimating the Reliability of a Soyuz Spacecraft Mission

    NASA Technical Reports Server (NTRS)

    Lutomski, Michael G.; Farnham, Steven J., II; Grant, Warren C.

    2010-01-01

    Once the US Space Shuttle retires in 2010, the Russian Soyuz Launcher and Soyuz Spacecraft will comprise the only means for crew transportation to and from the International Space Station (ISS). The U.S. Government and NASA have contracted for crew transportation services to the ISS with Russia. The resulting implications for the US space program, including issues such as astronaut safety, must be carefully considered. Are the astronauts and cosmonauts safer on the Soyuz than the Space Shuttle system? Is the Soyuz launch system more robust than the Space Shuttle? Is it safer to continue to fly the 30-year-old Shuttle fleet for crew transportation and cargo resupply than the Soyuz? Should we extend the life of the Shuttle Program? How does the development of the Orion/Ares crew transportation system affect these decisions? The Soyuz launcher has been in operation for over 40 years. There have been only two loss-of-life incidents and two loss-of-mission incidents. Given that the most recent incident took place in 1983, how do we determine the current reliability of the system? Do failures of unmanned Soyuz rockets impact the reliability of the currently operational man-rated launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? NASA's next manned rocket and spacecraft development project is currently underway. Though the project's ultimate goal is to return to the Moon and then to Mars, the launch vehicle and spacecraft's first mission will be for crew transportation to and from the ISS. The reliability targets are currently several times higher than the Shuttle and possibly even the Soyuz. Can these targets be compared to the reliability of the Soyuz to determine whether they are realistic and achievable? To help answer these questions, this paper will explore how to estimate the reliability of the Soyuz Launcher/Spacecraft system, compare it to the Space Shuttle, and assess its potential impacts for the future of manned spaceflight. Specifically, it will look at estimating the Loss of Mission (LOM) probability using historical data, reliability growth, and Probabilistic Risk Assessment techniques.
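
    As an illustration of estimating a loss-of-mission probability from flight history, the sketch below applies a simple Bayesian binomial model with a Jeffreys prior. The mission and failure counts are hypothetical placeholders, not the Soyuz record, and the paper's reliability-growth and PRA treatments are not reproduced.

    ```python
    # Minimal sketch (illustrative only): LOM probability from a history of
    # n missions with k failures, using a Jeffreys Beta(0.5, 0.5) prior.
    from scipy import stats

    n_missions, n_failures = 100, 2          # hypothetical flight history
    posterior = stats.beta(0.5 + n_failures, 0.5 + n_missions - n_failures)

    print("posterior mean LOM probability:", posterior.mean())
    print("90% credible interval:", posterior.interval(0.90))
    ```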

  18. Operating Experience and Reliability Improvements on the 5 kW CW Klystron at Jefferson Lab

    NASA Astrophysics Data System (ADS)

    Nelson, R.; Holben, S.

    1997-05-01

    With substantial operating hours on the RF system, considerable information on reliability of the 5 kW CW klystrons has been obtained. High early failure rates led to examination of the operating conditions and failure modes. Internal ceramic contamination caused premature failure of gun potting material and ultimate tube demise through arcing or ceramic fracture. A planned course of repotting and reconditioning of approximately 300 klystrons, plus careful attention to operating conditions and periodic analysis of operational data, has substantially reduced the failure rate. It is anticipated that implementation of planned supplemental monitoring systems for the klystrons will allow most catastrophic failures to be avoided. By predicting end of life, tubes can be changed out before they fail, thus minimizing unplanned downtime. Initial tests have also been conducted on this same klystron operated at higher voltages with resultant higher output power. The outcome of these tests will provide information to be considered for future upgrades to the accelerator.

  19. Optical detection of metastatic cancer cells using a scanned laser pico-projection system

    NASA Astrophysics Data System (ADS)

    Huang, Chih-Ling; Chiu, Wen-Tai; Lo, Yu-Lung; Chuang, Chin-Ho; Chen, Yu-Bin; Chang, Shu-Jing; Ke, Tung-Ting; Cheng, Hung-Chi; Wu, Hua-Lin

    2015-03-01

    Metastasis is responsible for 90% of all cancer-related deaths in humans. As a result, reliable techniques for detecting metastatic cells are urgently required. Although various techniques have been proposed for metastasis detection, they are generally capable of detecting metastatic cells only once migration has already occurred. Accordingly, the present study proposes an optical method for physical characterization of metastatic cancer cells using a scanned laser pico-projection system (SLPP). The validity of the proposed method is demonstrated using five pairs of cancer cell lines and two pairs of non-cancer cell lines treated by IPTG induction in order to mimic normal cells with an overexpression of an oncogene. The results show that for all of the considered cell lines, the SLPP speckle contrast of the high-metastatic cells is significantly higher than that of the low-metastatic cells. As a result, the speckle contrast measurement provides a reliable means of distinguishing quantitatively between low- and high-metastatic cells of the same origin. Compared to existing metastasis detection methods, the proposed SLPP approach has many advantages, including a higher throughput, a lower cost, a larger sample size and a more reliable diagnostic performance. As a result, it provides a highly promising solution for physical characterization of metastatic cancer cells in vitro.

  20. Water System Architectures for Moon and Mars Bases

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.

    2015-01-01

    Water systems for human bases on the moon and Mars will recycle multiple sources of wastewater. Systems for both the moon and Mars will also store water to support and backup the recycling system. Most water system requirements, such as number of crew, quantity and quality of water supply, presence of gravity, and surface mission duration of 6 or 18 months, will be similar for the moon and Mars. If the water system fails, a crew on the moon can quickly receive spare parts and supplies or return to Earth, but a crew on Mars cannot. A recycling system on the moon can have a reasonable reliability goal, such as only one unrecoverable failure every five years, if there is enough stored water to allow time for attempted repairs and for the crew to return if repair fails. The water system that has been developed and successfully operated on the International Space Station (ISS) could be used on a moon base. To achieve the same high level of crew safety on Mars without an escape option, either the recycling system must have much higher reliability or enough water must be stored to allow the crew to survive the full duration of the Mars surface mission. A three loop water system architecture that separately recycles condensate, wash water, and urine and flush can improve reliability and reduce cost for a Mars base.

  1. Compact dewar and electronics for large-format infrared detectors

    NASA Astrophysics Data System (ADS)

    Manissadjian, A.; Magli, S.; Mallet, E.; Cassaigne, P.

    2011-06-01

    The trend in infrared camera systems is toward higher performance (thanks to higher resolution) and, in parallel, higher compactness for easier integration into systems. The latest developments at SOFRADIR / France on HgCdTe (Mercury Cadmium Telluride / MCT) cooled IR staring detectors show constant improvements in detector performance and compactness, obtained by reducing the pixel pitch and optimizing their encapsulation. Among the latest detectors introduced, the 15 μm pixel pitch JUPITER HD-TV format (1280×1024) has to meet challenging specifications regarding dewar compactness, low power consumption and reliability. Initially introduced four years ago in a large dewar with a split Stirling cooler compressor of more than 2 kg, it is now available in a new versatile compact dewar that is vacuum-maintenance-free over typical 18-year mission profiles, and that can be integrated with the different available Stirling coolers: the K548 microcooler for a lightweight solution (less than 0.7 kg), or the K549 or LSF9548 for a split-cooler and/or higher-reliability solution. The IDDCAs are also required to have a simplified electrical interface, enabling shorter system development times and standardized electronic board definitions with smaller volumes. Sofradir is therefore introducing MEGALINK, a new compact Command & Control Electronics unit compatible with most of the Sofradir IDDCAs. MEGALINK provides all necessary input biases and clocks to the FPAs, and digitizes and multiplexes the video outputs to provide a 14-bit output signal through a CameraLink interface, on a footprint smaller than a business card.

  2. Study on high reliability safety valve for railway vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Xuan; Chen, Ruikun; Zhang, Shixi; Xu, BuDu

    2017-09-01

    Nowadays, most of the functions of railway vehicles rely on compressed air, so the demand for compressed air is growing higher and higher. This safety valve is a protection device for pressure limitation and pressure relief in the air supply system of railway vehicles. This document introduces the structure, operating principle, and research and development process of the safety valve designed by our company.

  3. The optimization on flow scheme of helium liquefier with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, H. R.; Xiong, L. Y.; Peng, N.; Liu, L. Q.

    2017-01-01

    There are several ways to organize the flow scheme of a helium liquefier, such as arranging the expanders in parallel (reverse Brayton stage) or in series (modified Brayton stages). In this paper, the inlet mass flow and temperatures of the expanders in the Collins cycle are optimized using a genetic algorithm (GA). Results show that the maximum liquefaction rate can be obtained when the system works at the optimal parameters. However, the reliability of the system is poor due to the high wheel speed of the first turbine. The study shows that the scheme in which expanders are arranged in series with heat exchangers between them has higher operational reliability but lower plant efficiency under the same conditions. Considering both liquefaction rate and system stability, another flow scheme is put forward in the hope of resolving this dilemma. The three configurations are compared from different aspects: economic cost, heat exchanger size, system reliability and exergy efficiency. In addition, the effect of heat capacity ratio on heat transfer efficiency is discussed. A conclusion on choosing the liquefier configuration is given in the end, which is meaningful for the optimal design of helium liquefiers.
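
    A minimal sketch of the optimization pattern described above (selection, crossover, and mutation over expander inlet temperatures). The objective function is a smooth hypothetical surrogate standing in for the paper's thermodynamic model of the Collins cycle, and the bounds and GA settings are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical placeholder objective: a surrogate "liquefaction rate" peaking at
    # some pair of expander inlet temperatures (the real objective would be a full
    # thermodynamic model of the cycle).
    def liquefaction_rate(t_inlets: np.ndarray) -> float:
        target = np.array([60.0, 30.0])          # hypothetical optimum (K)
        return float(np.exp(-np.sum(((t_inlets - target) / 15.0) ** 2)))

    LOW, HIGH = np.array([20.0, 10.0]), np.array([120.0, 80.0])   # search bounds (K)
    POP, GENS, MUT = 40, 60, 0.1

    pop = rng.uniform(LOW, HIGH, size=(POP, 2))
    for _ in range(GENS):
        fitness = np.array([liquefaction_rate(ind) for ind in pop])
        # tournament selection between random pairs
        idx = rng.integers(0, POP, size=(POP, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover between consecutive parents
        mask = rng.random((POP, 2)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the bounds
        children += rng.normal(0.0, MUT * (HIGH - LOW), size=children.shape)
        pop = np.clip(children, LOW, HIGH)

    best = max(pop, key=liquefaction_rate)
    print("best expander inlet temperatures (K):", best)
    ```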

  4. Serial Back-Plane Technologies in Advanced Avionics Architectures

    NASA Technical Reports Server (NTRS)

    Varnavas, Kosta

    2005-01-01

    Current back-plane technologies such as VME, and current personal computer back planes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies. This means a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the back plane to acquire the bus. This significantly degrades the reliability of the system. Additionally, these parallel busses only have bandwidths in the hundreds of megahertz range, and EMI and noise effects get worse as the bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems, more applicable to today's connected computing environment and better able to meet future requirements for advanced space instruments and vehicles, serial back-plane technologies should be implemented in advanced avionics architectures. Serial back-plane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or of one minor problem on the backplane bringing the whole system down. Being serial instead of parallel reduces many of the signal integrity issues associated with parallel back planes and thus significantly improves reliability. The increased speeds associated with a serial backplane are an added bonus.

  5. Mathematical Modelling-Based Energy System Operation Strategy Considering Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryu, Jun-Hyung; Hodge, Bri-Mathias

    2016-06-25

    Renewable energy resources are widely recognized as an alternative to environmentally harmful fossil fuels. More renewable energy technologies will need to penetrate into fossil fuel dominated energy systems to mitigate the globally witnessed climate changes and environmental pollutions. It is necessary to prepare for the potential problems with increased proportions of renewable energy in the energy system, to prevent higher costs and decreased reliability. Motivated by this need, this paper addresses the operation of an energy system with an energy storage system in the context of developing a decision-supporting framework.

  6. A Comparison of a Brain-Based Adaptive System and a Manual Adaptable System for Invoking Automation

    NASA Technical Reports Server (NTRS)

    Bailey, Nathan R.; Scerbo, Mark W.; Freeman, Frederick G.; Mikulka, Peter J.; Scott, Lorissa A.

    2004-01-01

    Two experiments are presented that examine alternative methods for invoking automation. In each experiment, participants were asked to simultaneously perform a monitoring task and a resource management task as well as a tracking task that changed between automatic and manual modes. The monitoring task required participants to detect failures of an automated system to correct aberrant conditions under either high or low system reliability. Performance on each task was assessed as well as situation awareness and subjective workload. In the first experiment, half of the participants worked with a brain-based system that used their EEG signals to switch the tracking task between automatic and manual modes. The remaining participants were yoked to participants from the adaptive condition and received the same schedule of mode switches, but their EEG had no effect on the automation. Within each group, half of the participants were assigned to either the low or high reliability monitoring task. In addition, within each combination of automation invocation and system reliability, participants were separated into high and low complacency potential groups. The results revealed no significant effects of automation invocation on the performance measures; however, the high complacency individuals demonstrated better situation awareness when working with the adaptive automation system. The second experiment was the same as the first with one important exception: automation was invoked manually. Thus, half of the participants pressed a button to invoke automation for 10 s. The remaining participants were yoked to participants from the adaptable condition and received the same schedule of mode switches, but they had no control over the automation. The results showed that participants who could invoke automation performed more poorly on the resource management task and reported higher levels of subjective workload. Further, those who invoked automation more frequently performed more poorly on the tracking task and reported higher levels of subjective workload. A comparison between the adaptive condition in the first experiment and the adaptable condition in the second experiment revealed only one significant difference: subjective workload was higher in the adaptable condition. Overall, the results show that a brain-based, adaptive automation system may facilitate situation awareness for those individuals who are more complacent toward automation. By contrast, requiring operators to invoke automation manually may have some detrimental impact on performance and does appear to increase subjective workload relative to an adaptive system.

  7. An Experimental Study of Procedures to Enhance Ratings of Fidelity to an Evidence-Based Family Intervention.

    PubMed

    Smith, Justin D; Dishion, Thomas J; Brown, Kimbree; Ramos, Karina; Knoble, Naomi B; Shaw, Daniel S; Wilson, Melvin N

    2016-01-01

    The valid and reliable assessment of fidelity is critical at all stages of intervention research and is particularly germane to interpreting the results of efficacy and implementation trials. Ratings of protocol adherence typically are reliable, but ratings of therapist competence are plagued by low reliability. Because family context and case conceptualization guide the therapist's delivery of interventions, the reliability of fidelity ratings might be improved if the coder is privy to client context in the form of an ecological assessment. We conducted a randomized experiment to test this hypothesis. A subsample of 46 families with 5-year-old children from a multisite randomized trial who participated in the feedback session of the Family Check-Up (FCU) intervention were selected. We randomly assigned FCU feedback sessions to be rated for fidelity to the protocol using the COACH rating system either after the coder reviewed the results of a recent ecological assessment or had not. Inter-rater reliability estimates of fidelity ratings were meaningfully higher for the assessment information condition compared to the no-information condition. Importantly, the reliability of the COACH mean score was found to be statistically significantly higher in the information condition. These findings suggest that the reliability of observational ratings of fidelity, particularly when the competence or quality of delivery is considered, could be improved by providing assessment data to the coders. Our findings might be most applicable to assessment-driven interventions, where assessment data explicitly guides therapist's selection of intervention strategies tailored to the family's context and needs, but they could also apply to other intervention programs and observational coding of context-dependent therapy processes, such as the working alliance.

  8. An Experimental Study of Procedures to Enhance Ratings of Fidelity to an Evidence-Based Family Intervention

    PubMed Central

    Smith, Justin D.; Dishion, Thomas J.; Brown, Kimbree; Ramos, Karina; Knoble, Naomi B.; Shaw, Daniel S.; Wilson, Melvin N.

    2015-01-01

    The valid and reliable assessment of fidelity is critical at all stages of intervention research and is particularly germane to interpreting the results of efficacy and implementation trials. Ratings of protocol adherence typically are reliable, but ratings of therapist competence are plagued by low reliability. Because family context and case conceptualization guide the therapist's delivery of interventions, the reliability of fidelity ratings might be improved if the coder is privy to client context in the form of an ecological assessment. We conducted a randomized experiment to test this hypothesis. A subsample of 46 families with 5-year-old children from a multisite randomized trial who participated in the feedback session of the Family Check-Up (FCU) intervention were selected. We randomly assigned FCU feedback sessions to be rated for fidelity to the protocol using the COACH rating system either after the coder reviewed the results of a recent ecological assessment or had not. Inter-rater reliability estimates of fidelity ratings were meaningfully higher for the assessment information condition compared to the no-information condition. Importantly, the reliability of the COACH mean score was found to be statistically significantly higher in the information condition. These findings suggest that the reliability of observational ratings of fidelity, particularly when the competence or quality of delivery is considered, could be improved by providing assessment data to the coders. Our findings might be most applicable to assessment-driven interventions, where assessment data explicitly guides therapist's selection of intervention strategies tailored to the family's context and needs, but they could also apply to other intervention programs and observational coding of context-dependent therapy processes, such as the working alliance. PMID:26271300

  9. Reliability analysis of component-level redundant topologies for solid-state fault current limiter

    NASA Astrophysics Data System (ADS)

    Farhadi, Masoud; Abapour, Mehdi; Mohammadi-Ivatloo, Behnam

    2018-04-01

    Experience shows that semiconductor switches in power electronics systems are the most vulnerable components. One of the most common ways to address this reliability challenge is component-level redundant design. There are four possible configurations for redundant design at the component level. This article presents a comparative reliability analysis of the different component-level redundant designs for a solid-state fault current limiter. The aim of the proposed analysis is to determine the more reliable component-level redundant configuration. The mean time to failure (MTTF) is used as the reliability parameter. Considering both fault types (open circuit and short circuit), the MTTFs of the different configurations are calculated. It is demonstrated that the more reliable configuration depends on the junction temperature of the semiconductor switches in the steady state. That junction temperature is a function of (i) ambient temperature, (ii) power loss of the semiconductor switch and (iii) thermal resistance of the heat sink. The sensitivity of the results to each parameter is also investigated. The results show that in different conditions, various configurations have higher reliability. Experimental results are presented to clarify the theory and feasibility of the proposed approaches. Finally, the levelised costs of the different configurations are analysed for a fair comparison.
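
    The open-circuit/short-circuit trade-off described above can be sketched with a small Monte Carlo comparison of a single switch, two switches in series (which tolerate one short-circuit failure), and two in parallel (which tolerate one open-circuit failure). The failure rate and the open-failure share below are hypothetical placeholders, and the junction-temperature dependence discussed in the article is not modeled.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 1_000_000
    lam = 1.0e-5        # hypothetical switch failure rate (per hour)
    p_open = 0.4        # hypothetical share of failures that are open-circuit

    t = rng.exponential(1.0 / lam, size=(N, 2))          # failure times of the two switches
    is_open = rng.random((N, 2)) < p_open                # failure mode of each switch
    order = np.argsort(t, axis=1)
    t_sorted = np.take_along_axis(t, order, axis=1)
    first_open = np.take_along_axis(is_open, order, axis=1)[:, 0]

    # Series pair: an open-circuit failure breaks the path immediately; a first
    # short-circuit failure is tolerated until the second switch fails.
    life_series = np.where(first_open, t_sorted[:, 0], t_sorted[:, 1])
    # Parallel pair: a short-circuit failure shorts the limiter immediately; a first
    # open-circuit failure is tolerated until the second switch fails.
    life_parallel = np.where(~first_open, t_sorted[:, 0], t_sorted[:, 1])

    print("MTTF single  :", t[:, 0].mean())
    print("MTTF series  :", life_series.mean())
    print("MTTF parallel:", life_parallel.mean())
    ```

    With these placeholder numbers the more reliable pair depends on the failure-mode split, which mirrors the article's conclusion that no single configuration dominates in all conditions.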

  10. Propulsion System Advances that Enable a Reusable Liquid Fly Back Booster (LFBB)

    NASA Technical Reports Server (NTRS)

    Keith, Edward L.; Rothschild, William J.

    1998-01-01

    This paper provides an overview of the booster propulsion system for the Liquid Fly Back Booster (LFBB). This includes system requirements, design approach, concept of operations, reliability, safety and cost assumptions. The paper summarizes the findings of the Boeing propulsion team that has been studying the LFBB feasibility as a booster replacement for the Space Shuttle. This paper will discuss recent advances including a new generation of kerosene and oxygen rich pre-burner staged combustion cycle main rocket engines. The engine reliability and safety are expected to be much higher than current standards by adding extra operating margins into the design and normally operating the engines at 75% of engine rated power. This allows for engine out capability. The new generation of main engines operates at significantly higher chamber pressure than the prior generation of gas generator cycle engines. The oxygen rich pre-burner engine cycle, unlike the fuel rich gas generator cycle, results in internally self-cleaning firings which facilitates reusability. Maintenance is further enhanced with integrated health monitoring to improve safety and turn-around efficiency. The maintainability of the LFBB LOX / kerosene engines is being improved by designing the vehicle/engine interfaces for easy access to key engine components.

  11. Propulsion system advances that enable a reusable Liquid Fly Back Booster (LFBB)

    NASA Technical Reports Server (NTRS)

    Keith, E. L.; Rothschild, W. J.

    1998-01-01

    This paper provides an overview of the booster propulsion system for the Liquid Fly Back Booster (LFBB). This includes system requirements, design approach, concept of operations, reliability, safety and cost assumptions. The paper summarizes the findings of the Boeing propulsion team that has been studying the LFBB feasibility as a booster replacement for the Space Shuttle. This paper will discuss recent advances including a new generation of kerosene and oxygen rich pre-burner staged combustion cycle main rocket engines. The engine reliability and safety are expected to be much higher than current standards by adding extra operating margins into the design and normally operating the engines at 75% of engine rated power. This allows for engine out capability. The new generation of main engines operates at significantly higher chamber pressure than the prior generation of gas generator cycle engines. The oxygen rich pre-burner engine cycle, unlike the fuel rich gas generator cycle, results in internally self-cleaning firings which facilitates reusability. Maintenance is further enhanced with integrated health monitoring to improve safety and turn-around efficiency. The maintainability of the LFBB LOX/kerosene engines is being improved by designing the vehicle/engine interfaces for easy access to key engine components.

  12. Smart Sensor Demonstration Payload

    NASA Technical Reports Server (NTRS)

    Schmalzel, John; Bracey, Andrew; Rawls, Stephen; Morris, Jon; Turowski, Mark; Franzl, Richard; Figueroa, Fernando

    2010-01-01

    Sensors are a critical element of any monitoring, control, and evaluation process, such as those needed to support ground-based rocket engine testing. Sensor applications involve tens to thousands of sensors; their reliable performance is critical to achieving overall system goals. Many figures of merit are used to describe and evaluate sensor characteristics; for example, sensitivity and linearity. In addition, sensor selection must satisfy many trade-offs among system engineering (SE) requirements to best integrate sensors into complex systems [1]. These SE trades include the familiar constraints of power, signal conditioning, cabling, reliability, and mass, and now include considerations such as spectrum allocation and interference for wireless sensors. Our group at NASA's John C. Stennis Space Center (SSC) works in the broad area of integrated systems health management (ISHM). Core ISHM technologies include smart and intelligent sensors, anomaly detection, root cause analysis, prognosis, and interfaces to operators and other system elements [2]. Sensor technologies are the base fabric that feeds data and health information to higher layers. Cost-effective operation of the complement of test stands benefits from technologies and methodologies that contribute to reductions in labor costs, improvements in efficiency, reductions in turn-around times, improved reliability, and other measures. ISHM is an active area of development at SSC because it offers the potential to achieve many of those operational goals [3-5].

  13. Autonomic Computing for Spacecraft Ground Systems

    NASA Technical Reports Server (NTRS)

    Li, Zhenping; Savkli, Cetin; Jones, Lori

    2007-01-01

    Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center), and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and provides a framework for developing solutions with higher autonomic maturity.

  14. DC Microgrids Scoping Study. Estimate of Technical and Economic Benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backhaus, Scott N.; Swift, Gregory William; Chatzivasileiadis, Spyridon

    Microgrid demonstrations and deployments are expanding in US power systems and around the world. Although goals are specific to each site, these microgrids have demonstrated the ability to provide higher reliability, higher power quality, and improved energy utilization compared with utility power systems. The vast majority of these microgrids are based on AC power transfer because this has been the traditionally dominant power delivery scheme. Independently, manufacturers, power system designers and researchers are demonstrating and deploying DC power distribution systems for applications where the end-use loads are natively DC, e.g., computers, solid-state lighting, and building networks. These early DC applications may provide higher efficiency, added flexibility, and reduced capital costs over their AC counterparts. Further, when onsite renewable generation, electric vehicles and storage systems are present, DC-based microgrids may offer additional benefits. Early successes from these efforts raise a question: can a combination of microgrid concepts and DC distribution systems provide added benefits beyond what has been achieved individually?

  15. Site-specific landslide assessment in Alpine area using a reliable integrated monitoring system

    NASA Astrophysics Data System (ADS)

    Romeo, Saverio; Di Matteo, Lucio; Kieffer, Daniel Scott

    2016-04-01

    Rockfalls are one of the major causes of landslide fatalities around the world. The present work discusses the reliability of integrated monitoring of displacements in a rockfall within the Alpine region (Salzburg Land - Austria), taking into account also the effect of ongoing climate change. Because the frequency and magnitude of such events, which threaten human lives and infrastructure, are unpredictable, it is frequently necessary to implement an efficient monitoring system. For this reason, during the last decades, integrated monitoring systems for unstable slopes were widely developed and used (e.g., extensometers, cameras, remote sensing, etc.). In this framework, remote sensing techniques, such as the GBInSAR technique (Ground-Based Interferometric Synthetic Aperture Radar), have emerged as efficient and powerful tools for deformation monitoring. GBInSAR measurements can be useful to achieve an early warning system using surface deformation parameters such as ground displacement or inverse velocity (for semi-empirical forecasting methods). In order to check the reliability of GBInSAR and to monitor the evolution of the landslide, it is very important to integrate different techniques. Indeed, a multi-instrumental approach is essential to investigate movements both at the surface and at depth, and the use of different monitoring techniques allows a cross analysis of the data to minimize errors, check the data quality and improve the monitoring system. During 2013, an intense and complete monitoring campaign was conducted on the Ingelsberg landslide. By analyzing both historical temperature series (HISTALP) recorded during the last century and those from local weather stations, it is clear that temperature values (autumn-winter, winter and spring) have increased in the Bad Hofgastein area as well as in the Alpine region. As a consequence, in the last decades the rockfall events have shifted from spring to summer due to warmer winters. It is interesting to point out that temperature values recorded in the valley and on the slope show a good relationship, indicating that the climatic monitoring is reliable. In addition, the landslide displacement monitoring is reliable as well: the comparison between displacements at depth by extensometers and at the surface by GBInSAR, referred to March-December 2013, shows a high reliability, as confirmed by the inter-rater reliability analysis (Pearson correlation coefficient higher than 0.9). In conclusion, the reliability of the monitoring system confirms that the data can be useful to improve knowledge of rockfall kinematics and to develop an accurate early warning system useful for civil protection issues.

  16. The quest for a general theory of aging and longevity.

    PubMed

    Gavrilov, Leonid A; Gavrilova, Natalia S

    2003-07-16

    Extensive studies of phenomena related to aging have produced many diverse findings, which require a general theoretical framework to be organized into a comprehensive body of knowledge. As demonstrated by the success of evolutionary theories of aging, quite general theoretical considerations can be very useful when applied to research on aging. In this theoretical study, we attempt to gain insight into aging by applying a general theory of systems failure known as reliability theory. Considerations of this theory lead to the following conclusions: (i) Redundancy is a concept of crucial importance for understanding aging, particularly the systemic nature of aging. Systems that are redundant in numbers of irreplaceable elements deteriorate (that is, age) over time, even if they are built of elements that do not themselves age. (ii) An apparent aging rate or expression of aging is higher for systems that have higher levels of redundancy. (iii) Redundancy exhaustion over the life course explains a number of observations about mortality, including mortality convergence at later life (when death rates are becoming relatively similar at advanced ages for different populations of the same species) as well as late-life mortality deceleration, leveling off, and mortality plateaus. (iv) Living organisms apparently contain a high load of initial damage from the early stages of development, and therefore their life span and aging patterns may be sensitive to early-life conditions that determine this initial damage load. Thus, the reliability theory provides a parsimonious explanation for many important aging-related phenomena and suggests a number of interesting testable predictions. We therefore suggest adding the reliability theory to the arsenal of methodological approaches applied to research on aging.
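
    The redundancy argument in points (i)-(iii) can be illustrated with a short simulation: a "system" built from a block of non-aging (exponentially failing) redundant elements shows a hazard rate that rises with age and then levels off toward the single-element rate. The element count and failure rate below are arbitrary illustrative values, not parameters from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, m, lam = 200_000, 5, 0.05   # organisms, redundant elements per block, element failure rate

    # Each "organism" is a single block of m redundant, non-aging (exponential)
    # elements; it dies when the last element fails.
    deaths = rng.exponential(1.0 / lam, size=(N, m)).max(axis=1)

    # Empirical hazard rate in 5-unit age bins: it rises with age (apparent aging)
    # and then levels off toward the single-element rate lam (mortality plateau).
    edges = np.arange(0, 150, 5.0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = (deaths >= lo).sum()
        died = ((deaths >= lo) & (deaths < hi)).sum()
        if at_risk > 1000:
            print(f"age {lo:5.0f}-{hi:3.0f}: hazard ~ {died / at_risk / (hi - lo):.4f}")
    ```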

  17. Inter-rater reliability of postnatal ultrasound interpretation in infants with congenital hydronephrosis.

    PubMed

    Vemulakonda, V M; Wilcox, D T; Torok, M R; Hou, A; Campbell, J B; Kempe, A

    2015-09-01

    The most common measurements of hydronephrosis are the anterior-posterior (AP) diameter and the Society for Fetal Urology (SFU) grading systems. To date, the inter-rater reliability (IRR) of these measures has not been compared in the postnatal period. The objectives of this study were to compare the IRR of the AP diameter and the SFU grading system in infants and to determine whether ultrasound findings other than pelvicalyceal dilation are associated with higher SFU grades. Initial postnatal ultrasounds of infants seen from February 1, 2011, to January 31, 2012, with a primary diagnosis of congenital hydronephrosis were included for review. Ultrasound images were de-identified and reviewed by four pediatric urologists. IRR was calculated using the intraclass correlation (ICC) measure. A paired t test was used to compare ICCs. Associations between SFU grade and other ultrasound findings were tested using Chi-square or Fisher's exact tests. A total of 112 kidneys in 56 patients were reviewed. IRR of the SFU grading system was high (right kidney ICC = 0.83, left kidney ICC = 0.85); however, IRR of AP diameter measurement was higher (right kidney ICC = 0.97, left kidney ICC = 0.98; p < 0.001). Renal asymmetry (p < 0.001), echogenicity (p < 0.001), and parenchymal thinning (p < 0.001) were significantly associated with SFU grade 4 hydronephrosis on bivariable and multivariable analysis. The SFU grading system is associated with excellent IRR, although the AP diameter appears to have higher IRR. Physicians may consider ultrasound findings that are not explicitly included in the SFU system when assigning hydronephrosis grade, which may lead to variability in use of this classification system.

  18. Using meta-quality to assess the utility of volunteered geographic information for science.

    PubMed

    Langley, Shaun A; Messina, Joseph P; Moore, Nathan

    2017-11-06

    Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication opens up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a public health concern as the primary vector of African trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed, but the quality of VGI will improve as more data are collected and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.
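
    The abstract does not give the reputation formula, so the sketch below only illustrates the general idea of accepting or rejecting a volunteered report based on a contributor's historical agreement with verified observations; the scoring rule, field names, and threshold are assumptions, not the study's method.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Volunteer:
        """Tracks a contributor's history of verified vs. rejected reports."""
        confirmed: int = 0   # reports later confirmed by ground truth
        rejected: int = 0    # reports later found to be wrong

        @property
        def reputation(self) -> float:
            # Laplace-smoothed fraction of confirmed reports (assumed rule).
            return (self.confirmed + 1) / (self.confirmed + self.rejected + 2)

    def accept_report(v: Volunteer, threshold: float) -> bool:
        """Include the report in the presence/absence model only if the
        contributor's reputation meets the (possibly dynamic) threshold."""
        return v.reputation >= threshold

    # Usage: a contributor with 8 confirmed and 1 rejected report, screened
    # against a threshold that could be tightened as more data accumulate.
    v = Volunteer(confirmed=8, rejected=1)
    print(v.reputation, accept_report(v, threshold=0.7))
    ```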

  19. The reliability of the Glasgow Coma Scale: a systematic review.

    PubMed

    Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R

    2016-01-01

    The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.
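
    For readers unfamiliar with the kappa statistic cited above, the sketch below computes Cohen's kappa for two observers assigning GCS sum scores; the paired ratings are invented for illustration, and for ordinal scores a weighted kappa (weights="linear") is often preferred.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical GCS sum scores assigned by two observers to the same 10 patients.
    observer_a = [15, 13, 8, 6, 14, 3, 10, 15, 7, 12]
    observer_b = [15, 12, 8, 6, 14, 3, 11, 15, 7, 12]

    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    kappa = cohen_kappa_score(observer_a, observer_b)
    print(f"kappa = {kappa:.2f}")   # values >= 0.6 were the benchmark in the review
    ```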

  20. Statistical properties of Chinese phonemic networks

    NASA Astrophysics Data System (ADS)

    Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan

    2011-04-01

    The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey a normal distribution and the weighted degrees obey a power-law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
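
    A minimal sketch of the kind of network statistics mentioned above (connectivity degree, average distance, weighted degree) on a toy phoneme co-occurrence graph. The phoneme labels and edge weights are invented, and networkx is assumed as the tooling rather than anything used in the study.

    ```python
    import networkx as nx

    # Toy phoneme co-occurrence network: nodes are phonemes, edge weights are
    # co-occurrence counts in a hypothetical corpus.
    G = nx.Graph()
    edges = [("a", "n", 120), ("a", "m", 80), ("i", "n", 95),
             ("u", "k", 40), ("a", "k", 60), ("i", "k", 30), ("u", "n", 25)]
    G.add_weighted_edges_from(edges)

    # "High connectivity degrees and short average distances" can be checked like this:
    print("average shortest path length:", nx.average_shortest_path_length(G))
    print("unweighted degrees:", dict(G.degree()))
    print("weighted degrees:", dict(G.degree(weight="weight")))
    ```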

  1. High Spatial Resolution Commercial Satellite Imaging Product Characterization

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Pagnutti, Mary; Blonski, Slawomir; Ross, Kenton W.; Stanley, Thomas

    2005-01-01

    NASA Stennis Space Center's Remote Sensing group has been characterizing privately owned high spatial resolution multispectral imaging systems, such as IKONOS, QuickBird, and OrbView-3. Natural and man-made targets were used for spatial resolution, radiometric, and geopositional characterizations. Higher spatial resolution also introduces significant adjacency effects that must be accounted for in accurate, reliable radiometry.

  2. Evaluating the Reliability of Indices from IEP. AIR 1983 Annual Forum Paper.

    ERIC Educational Resources Information Center

    McLaughlin, Gerald W.; And Others

    The Information Exchange Procedures (IEP), which were developed through a project sponsored by the National Center for Higher Education Management Systems, are briefly described, and the application of the IEP in Virginia is examined. The IEP were designed to enhance the institution's ability to identify alternatives in the allocation of resources…

  3. Helping Students Succeed. Annual Report, 2010

    ERIC Educational Resources Information Center

    New Mexico Higher Education Department, 2010

    2010-01-01

    This annual report contains postsecondary data that has been collected and analyzed using the New Mexico Higher Education Department's Data Editing and Reporting (DEAR) database, unless otherwise noted. The purpose of the DEAR system is to increase the reliability in the data and to make more efficient efforts by institutions and the New Mexico…

  4. Validity of a novel computerized screening test system for mild cognitive impairment.

    PubMed

    Park, Jin-Hyuck; Jung, Minye; Kim, Jongbae; Park, Hae Yean; Kim, Jung-Ran; Park, Ji-Hyuk

    2018-06-20

    The mobile screening test system for screening mild cognitive impairment (mSTS-MCI) was developed for clinical use. However, the clinical usefulness of the mSTS-MCI in distinguishing elderly people with MCI from those who are cognitively healthy has yet to be validated. Moreover, the comparability between this system and traditional screening tests for MCI has not been evaluated. The purpose of this study was to examine the validity and reliability of the mSTS-MCI and confirm the cut-off scores to detect MCI. The data were collected from 107 healthy elderly people and 74 elderly people with MCI. Concurrent validity was examined using the Korean version of Montreal Cognitive Assessment (MoCA-K) as a gold standard test, and test-retest reliability was investigated using 30 of the study participants at four-week intervals. The sensitivity, specificity, positive predictive value, and negative predictive value (NPV) were confirmed through Receiver Operating Characteristic (ROC) analysis, and the cut-off scores for elderly people with MCI were identified. Concurrent validity showed statistically significant correlations between the mSTS-MCI and MoCA-K, and test-retest reliability indicated high correlation. In terms of screening predictability, the mSTS-MCI had a higher NPV than the MoCA-K. The mSTS-MCI was identified as a system with a high degree of validity and reliability. In addition, the mSTS-MCI showed high screening predictability, indicating it can be used in the clinical field as a screening test system for mild cognitive impairment.
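
    The abstract reports cut-off scores derived from ROC analysis. The sketch below shows one common way to obtain such a cut-off (Youden's J) together with sensitivity, specificity, and NPV, using scikit-learn and fabricated scores; it is an illustration of the general technique, not the study's actual analysis.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Fabricated data: 1 = MCI, 0 = cognitively healthy. Higher score = better
    # cognition, so scores are negated so that larger values indicate MCI.
    y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
    scores = np.array([27, 25, 28, 24, 26, 20, 18, 22, 19, 21])

    fpr, tpr, thresholds = roc_curve(y_true, -scores)
    auc = roc_auc_score(y_true, -scores)

    best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
    cutoff = -thresholds[best]             # undo the sign flip
    sens, spec = tpr[best], 1 - fpr[best]

    pred_mci = scores <= cutoff
    tn = np.sum((~pred_mci) & (y_true == 0))
    fn = np.sum((~pred_mci) & (y_true == 1))
    npv = tn / (tn + fn)

    print(f"AUC={auc:.2f}  cut-off={cutoff}  sensitivity={sens:.2f}  "
          f"specificity={spec:.2f}  NPV={npv:.2f}")
    ```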

  5. A Power Conditioning Stage Based on Analog-Circuit MPPT Control and a Superbuck Converter for Thermoelectric Generators in Spacecraft Power Systems

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Wu, Hongfei; Cai, Yan; Xing, Yan

    2014-06-01

    A thermoelectric generator (TEG) is a very important kind of power supply for spacecraft, especially for deep-space missions, due to its long lifetime and high reliability. To develop a practical TEG power supply for spacecraft, a power conditioning stage is indispensable, being employed to convert the varying output voltage of the TEG modules to a definite voltage for feeding batteries or loads. To enhance the system reliability, a power conditioning stage based on analog-circuit maximum-power-point tracking (MPPT) control and a superbuck converter is proposed in this paper. The input of this power conditioning stage is connected to the output of the TEG modules, and the output of this stage is connected to the battery and loads. The superbuck converter is employed as the main circuit, featuring low input current ripples and high conversion efficiency. Since for spacecraft power systems reliable operation is the key target for control circuits, a reset-set flip-flop-based analog circuit is used as the basic control circuit to implement MPPT, being much simpler than digital control circuits and offering higher reliability. Experiments have verified the feasibility and effectiveness of the proposed power conditioning stage. The results show the advantages of the proposed stage, such as maximum utilization of TEG power, small input ripples, and good stability.

  6. Commercial Off-The-Shelf (COTS) Parts Risk and Reliability User and Application Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2017-01-01

    All COTS parts are not created equal. Because they are not created equal, the notion that one can force the commercial industry to follow a set of military specifications and standards, along with the certifications, audits and qualification commitments that go with them, is unrealistic for the sale of a few parts. The part technologies that are Defense Logistics Agency (DLA) certified or Military Specification (MS) qualified, are several generations behind the state-of-the-art high-performance parts that are required for the compact, higher performing systems for the next generation of spacecraft and instruments. The majority of the part suppliers are focused on the portion of the market that is producing high-tech commercial products and systems. To that end, in order to compete in the high performance and leading edge advanced technological systems, an alternative approach to risk assessment and reliability prediction must be considered.

  7. Many-objective optimization and visual analytics reveal key trade-offs for London's water supply

    NASA Astrophysics Data System (ADS)

    Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.

    2015-12-01

    In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows many-objective evolutionary optimization coupled with state-of-the-art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto approximate portfolios cluster into distinct groups of water supply options; for example, implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost reliability constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.
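
    A minimal sketch of the Pareto screening at the heart of the many-objective approach: keep only portfolios that no other portfolio beats on every objective. The portfolio names and objective vectors below are fabricated (capital cost to minimize, unreliability to minimize) and do not reproduce the study's evaluation.

    ```python
    def dominates(a, b):
        """True if portfolio a is at least as good as b on all objectives and
        strictly better on at least one (all objectives to be minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(portfolios):
        """Return the non-dominated (Pareto approximate) portfolios."""
        return {name: obj for name, obj in portfolios.items()
                if not any(dominates(other, obj)
                           for o, other in portfolios.items() if o != name)}

    # Objectives: (capital cost, 1 - supply reliability), both minimized.
    portfolios = {
        "reservoir":      (250.0, 0.02),
        "reuse+metering": (180.0, 0.04),
        "pipe_refurb":    (300.0, 0.01),
        "leakage_only":   (120.0, 0.08),
        "dominated_mix":  (260.0, 0.05),
    }
    print(pareto_front(portfolios))   # "dominated_mix" is screened out
    ```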

  8. Definition and Proposed Realization of the International Height Reference System (IHRS)

    NASA Astrophysics Data System (ADS)

    Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael

    2017-05-01

    Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, taking care of providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with the practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.

  9. The German Version of the Manchester Triage System and Its Quality Criteria – First Assessment of Validity and Reliability

    PubMed Central

    Gräff, Ingo; Goldschmidt, Bernd; Glien, Procula; Bogdanow, Manuela; Fimmers, Rolf; Hoeft, Andreas; Kim, Se-Chan; Grigutsch, Daniel

    2014-01-01

    Background: The German version of the Manchester Triage System (MTS) has found widespread use in EDs across German-speaking Europe. Studies of the quality criteria validity and reliability of the MTS currently exist only for the English-language version. Most importantly, the content of the German version differs from the English version with respect to presentation diagrams and change indicators, which have a significant impact on the category assigned. This investigation offers a preliminary assessment in terms of validity and inter-rater reliability of the German MTS. Methods: Construct validity of the assigned MTS level was assessed based on comparisons to hospitalization (general / intensive care), mortality, ED and hospital length of stay, level of prehospital care and number of invasive diagnostics. A sample of 45,469 patients was used. Inter-rater agreement between an expert and triage nurses (reliability) was calculated separately for a subset group of 167 emergency patients. Results: For general hospital admission the area under the curve (AUC) of the receiver operating characteristic was 0.749; for admission to ICU it was 0.871. An examination of MTS level and number of deceased patients showed that the higher the priority derived from the MTS, the higher the number of deaths (p < 0.0001, χ² test). There was a substantial difference in 30-day survival among the 5 MTS categories (p < 0.0001, log-rank test). The AUC for predicting 30-day mortality was 0.613. The orange and red categories had the highest numbers of cardiac catheterizations and endoscopies. Patients in the red and orange categories were mostly accompanied by an emergency physician, whereas those in the blue and green categories were walk-in patients. Inter-rater agreement between the expert and triage nurses was almost perfect (κ = 0.954). Conclusion: The German version of the MTS is a reliable and valid instrument for a first assessment of emergency patients in the emergency department. PMID:24586477

  10. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
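
    A minimal sketch of the N+1 parity idea described above: the parity block is the bytewise XOR of the data blocks, so the contents of any single failed disk can be rebuilt from the survivors. Block contents here are toy byte strings, not a real array layout.

    ```python
    from functools import reduce

    def xor_blocks(blocks):
        """Bytewise XOR of equally sized blocks."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # N data disks plus one parity disk (N+1 parity).
    data = [b"disk0blk", b"disk1blk", b"disk2blk"]
    parity = xor_blocks(data)

    # Simulate losing disk 1 and rebuilding it from the survivors plus parity.
    survivors = [data[0], data[2], parity]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == data[1]
    print("rebuilt:", rebuilt)
    ```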

  11. Implementation method of multi-terminal DC control system

    NASA Astrophysics Data System (ADS)

    Yi, Liu; Hao-Ran, Huang; Jun-Wen, Zhou; Hong-Guang, Guo; Yu-Yong, Zhou

    2018-04-01

    A multi-terminal DC (MTDC) system comprises many converter stations, each of which requires operators to monitor and control its equipment. This implies heavy operation and maintenance effort, low efficiency and limited reliability; most importantly, the control mode of a multi-terminal DC system is complex, so a problem at a single station can compromise control of the whole system. Based on a study of the characteristics of voltage-source-converter multi-terminal DC (VSC-MTDC) systems, this paper presents a robust implementation of a multi-terminal DC Supervisory Control and Data Acquisition (SCADA) system that is networked, integrated and intelligent. A master control system is added at each station to communicate with the other stations and to pass current and DC voltage values to each station's pole control system. Based on practical application and operational feedback from the China South Power Grid research center VSC-MTDC project, the system achieves higher efficiency and reduces converter station maintenance costs, improving the overall level of intelligence and the comprehensive effect. Because of the master control system, a hierarchical coordination control strategy for the multi-terminal system is formed, making the control and protection system more efficient and reliable.

  12. Reliability of a retail food store survey and development of an accompanying retail scoring system to communicate survey findings and identify vendors for healthful food and marketing initiatives.

    PubMed

    Ghirardelli, Alyssa; Quinn, Valerie; Sugerman, Sharon

    2011-01-01

    To develop a retail grocery instrument with weighted scoring to be used as an indicator of the food environment. Twenty-six retail food stores in low-income areas in California. Observational. Inter-rater reliability for grocery store survey instrument. Description of store scoring methodology weighted to emphasize availability of healthful food. Type A intra-class correlation coefficients (ICC) with absolute agreement definition or a κ test for measures using ranges as categories. Measures of availability and price of fruits and vegetables performed well in reliability testing (κ = 0.681-0.800). Items for vegetable quality were better than for fruit (ICC 0.708 vs 0.528). Kappa scores indicated low to moderate agreement (0.372-0.674) on external store marketing measures and higher scores for internal store marketing. "Next to" the checkout counter was more reliable than "within 6 feet." Health departments using the store scoring system reported it as the most useful communication of neighborhood findings. There was good reliability of the measures among the research pairs. The local store scores can show the need to bring in resources and to provide access to fruits and vegetables and other healthful food.

  13. Forward Skirt Structural Testing on the Space Launch System (SLS) Program

    NASA Technical Reports Server (NTRS)

    Lohrer, J. D.; Wright, R. D.

    2016-01-01

    Structural testing was performed to evaluate heritage forward skirts from the Space Shuttle program for use on the NASA Space Launch System (SLS) program. Testing was needed because SLS ascent loads are 35% higher than Space Shuttle loads. Objectives of testing were to determine margins of safety, demonstrate reliability, and validate analytical models. Testing combined with analysis was able to show heritage forward skirts were acceptable to use on the SLS program.

  14. A reliable transmission protocol for ZigBee-based wireless patient monitoring.

    PubMed

    Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung

    2012-01-01

    Patient monitoring systems are gaining their importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, the existing systems usually use broadcast or multicast schemes to increase the reliability of signals transmission; however, both the schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as a destination to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of a reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next generation technology of wireless wide area network, worldwide interoperability for microwave access, to achieve real-time patient monitoring.

  15. Cultural competency assessment tool for hospitals: evaluating hospitals' adherence to the culturally and linguistically appropriate services standards.

    PubMed

    Weech-Maldonado, Robert; Dreachslin, Janice L; Brown, Julie; Pradhan, Rohit; Rubin, Kelly L; Schiller, Cameron; Hays, Ron D

    2012-01-01

    The U.S. national standards for culturally and linguistically appropriate services (CLAS) in health care provide guidelines on policies and practices aimed at developing culturally competent systems of care. The Cultural Competency Assessment Tool for Hospitals (CCATH) was developed as an organizational tool to assess adherence to the CLAS standards. First, we describe the development of the CCATH and estimate the reliability and validity of the CCATH measures. Second, we discuss the managerial implications of the CCATH as an organizational tool to assess cultural competency. We pilot tested an initial draft of the CCATH, revised it based on a focus group and cognitive interviews, and then administered it in a field test with a sample of California hospitals. The reliability and validity of the CCATH were evaluated using factor analysis, analysis of variance, and Cronbach's alphas. Exploratory and confirmatory factor analyses identified 12 CCATH composites: leadership and strategic planning, data collection on inpatient population, data collection on service area, performance management systems and quality improvement, human resources practices, diversity training, community representation, availability of interpreter services, interpreter services policies, quality of interpreter services, translation of written materials, and clinical cultural competency practices. All the CCATH scales had internal consistency reliability of .65 or above, and the reliability was .70 or above for 9 of the 12 scales. Analysis of variance results showed that not-for-profit hospitals have higher CCATH scores than for-profit hospitals in five CCATH scales and higher CCATH scores than government hospitals in two CCATH scales. The CCATH showed adequate psychometric properties. Managers and policy makers can use the CCATH as a tool to evaluate hospital performance in cultural competency and identify and target improvements in hospital policies and practices that undergird the provision of CLAS.

  16. Reliability model generator

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
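
    A minimal sketch of the aggregation idea: low-level component reliabilities combined according to an architecture description into a system-level figure. The series/parallel rules and the tiny example architecture below are illustrative assumptions, not the patented generator's actual modeling language.

    ```python
    def series(*r):
        """All components must work for the path to work."""
        out = 1.0
        for x in r:
            out *= x
        return out

    def parallel(*r):
        """At least one redundant component must work."""
        fail = 1.0
        for x in r:
            fail *= (1.0 - x)
        return 1.0 - fail

    # Low-level reliability "models" (here reduced to mission reliabilities).
    components = {"cpu": 0.999, "bus": 0.9995, "sensor": 0.99}

    # Architecture description: dual-redundant CPUs in series with the bus
    # and a triplicated sensor suite.
    system_r = series(
        parallel(components["cpu"], components["cpu"]),
        components["bus"],
        parallel(components["sensor"], components["sensor"], components["sensor"]),
    )
    print(f"system reliability ~ {system_r:.6f}")
    ```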

  17. Assessing efficiency and economic viability of rainwater harvesting systems for meeting non-potable water demands in four climatic zones of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Jing, X.

    2017-12-01

    Rainwater harvesting is now increasingly used to manage urban flooding and alleviate the water scarcity crisis. In this study, a computational tool based on the water balance equation is developed to assess the stormwater capture and water saving efficiency and the economic viability of rainwater harvesting systems (RHS) in eight cities across four climatic zones of China. It requires daily rainfall, contributing area, runoff losses, first flush volume, storage capacity, daily water demand and economic parameters as inputs. Three non-potable water demand scenarios (i.e., toilet flushing, lawn irrigation, and a combination of the two) are considered. The water demand for lawn irrigation is estimated using Cropwat 8.0 and Climwat 2.0. Results indicate that higher water saving efficiency and water supply time reliability can be achieved for RHS with larger storage capacities, for lower water demand scenarios and located in more humid regions, while higher stormwater capture efficiency is associated with larger storage capacity, higher water demand scenarios and less rainfall. For instance, a 40 m3 RHS in Shanghai (humid climate) for lawn irrigation can capture 17% of stormwater, while its water saving efficiency and time reliability can reach 96% and 98%, respectively. The water saving efficiency and time reliability of a 20 m3 RHS in Xining (semi-arid climate) for toilet flushing are 19% and 16%, respectively, but it can capture 63% of stormwater. With the current values of economic parameters, economic viability of RHS can be achieved in humid and semi-humid regions for reasonably designed RHS; however, it is not financially viable to install RHS in arid regions as the benefit-cost ratio is much smaller than 1.0.
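
    A minimal sketch of the daily water-balance bookkeeping described above (inflow after losses and first flush, a finite store, a daily demand), from which water saving efficiency, time reliability and stormwater capture efficiency follow. The yield-after-spillage rule, parameter values and rainfall series are assumptions for illustration only.

    ```python
    def simulate_rhs(rain_mm, area_m2=500, runoff_coeff=0.85, first_flush_mm=1.0,
                     storage_m3=40.0, demand_m3=1.2):
        """Daily water balance of a rainwater harvesting system (yield after spillage)."""
        store = 0.0
        supplied = spilled = runoff_total = days_met = 0.0
        for r in rain_mm:
            inflow = max(r - first_flush_mm, 0.0) * runoff_coeff * area_m2 / 1000.0  # m3
            runoff_total += r * runoff_coeff * area_m2 / 1000.0
            store += inflow
            if store > storage_m3:                  # tank overflows
                spilled += store - storage_m3
                store = storage_m3
            yield_today = min(demand_m3, store)     # meet demand from storage
            store -= yield_today
            supplied += yield_today
            days_met += yield_today >= demand_m3
        n = len(rain_mm)
        return {
            "water_saving_efficiency": supplied / (demand_m3 * n),
            "time_reliability": days_met / n,
            "stormwater_capture": 1.0 - spilled / runoff_total if runoff_total else 0.0,
        }

    # Usage with a fabricated 10-day rainfall record (mm/day).
    print(simulate_rhs([0, 12, 0, 0, 30, 5, 0, 0, 18, 0]))
    ```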

  18. Scheduling and Pricing for Expected Ramp Capability in Real-Time Power Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ela, Erik; O'Malley, Mark

    2016-05-01

    Higher variable renewable generation penetrations are occurring throughout the world on different power systems. These resources increase the variability and uncertainty on the system which must be accommodated by an increase in the flexibility of the system resources in order to maintain reliability. Many scheduling strategies have been discussed and introduced to ensure that this flexibility is available at multiple timescales. To meet variability, that is, the expected changes in system conditions, two recent strategies have been introduced: time-coupled multi-period market clearing models and the incorporation of ramp capability constraints. To appropriately evaluate these methods, it is important to assess both efficiency and reliability. But it is also important to assess the incentive structure to ensure that resources asked to perform in different ways have the proper incentives to follow these directions, which is a step often ignored in simulation studies. We find that there are advantages and disadvantages to both approaches. We also find that look-ahead horizon length in multi-period market models can impact incentives. This paper proposes scheduling and pricing methods that ensure expected ramps are met reliably, efficiently, and with associated prices based on true marginal costs that incentivize resources to do as directed by the market. Case studies show improvements of the new method.

  19. Impact of Distributed Energy Resources on the Reliability of a Critical Telecommunications Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.; Atcitty, C.; Zuffranieri, J.

    2006-03-01

    Telecommunications has been identified by the Department of Homeland Security as a critical infrastructure to the United States. Failures in the power systems supporting major telecommunications service nodes are a main contributor to major telecommunications outages, as documented by analyses of Federal Communications Commission (FCC) outage reports by the National Reliability Steering Committee (under auspices of the Alliance for Telecommunications Industry Solutions). There are two major issues that are having increasing impact on the sensitivity of the power distribution to telecommunication facilities: deregulation of the power industry, and changing weather patterns. A logical approach to improve the robustness of telecommunication facilities would be to increase the depth and breadth of technologies available to restore power in the face of power outages. Distributed energy resources such as fuel cells and gas turbines could provide one more onsite electric power source to provide backup power, if batteries and diesel generators fail. But does the diversity in power sources actually increase the reliability of offered power to the office equipment, or does the complexity of installing and managing the extended power system induce more potential faults and higher failure rates? This report analyzes a system involving a telecommunications facility consisting of two switch-bays and a satellite reception system.
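
    The trade-off posed at the end of the abstract can be framed as a simple calculation: adding independent backup sources raises the probability that at least one source is available, but each added source also brings its own switching and management failure modes. The availability figures below are illustrative assumptions, not values from the report.

    ```python
    def at_least_one_available(probs):
        """Probability that at least one independent backup source works."""
        p_all_fail = 1.0
        for p in probs:
            p_all_fail *= (1.0 - p)
        return 1.0 - p_all_fail

    battery, diesel, fuel_cell = 0.98, 0.95, 0.90
    transfer_switch = 0.995        # assumed reliability of the added switching layer

    baseline = at_least_one_available([battery, diesel])
    extended = at_least_one_available([battery, diesel, fuel_cell]) * transfer_switch
    print(f"battery+diesel: {baseline:.5f}   +fuel cell with extra switchgear: {extended:.5f}")
    ```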

  20. Wafer level reliability for high-performance VLSI design

    NASA Technical Reports Server (NTRS)

    Root, Bryan J.; Seefeldt, James D.

    1987-01-01

    As very large scale integration architecture requires higher package density, reliability of these devices has approached a critical level. Previous processing techniques allowed a large window for varying reliability. However, as scaling and higher current densities push reliability to its limit, tighter control and instant feedback becomes critical. Several test structures developed to monitor reliability at the wafer level are described. For example, a test structure was developed to monitor metal integrity in seconds as opposed to weeks or months for conventional testing. Another structure monitors mobile ion contamination at critical steps in the process. Thus the reliability jeopardy can be assessed during fabrication preventing defective devices from ever being placed in the field. Most importantly, the reliability can be assessed on each wafer as opposed to an occasional sample.

  1. Power-Efficient, High-Current-Density, Long-Life Thermionic Cathode Developed for Microwave Amplifier Applications

    NASA Technical Reports Server (NTRS)

    Wintucky, Edwin G.

    2002-01-01

    A power-efficient, miniature, easily manufactured, reservoir-type barium-dispenser thermionic cathode has been developed that offers the significant advantages of simultaneous high electron-emission current density (>2 A/sq cm) and very long life (>100,000 hr of continuous operation) when compared with the commonly used impregnated-type barium-dispenser cathodes. Important applications of this cathode are a wide variety of microwave and millimeter-wave vacuum electronic devices, where high output power and reliability (long life) are essential. We also expect it to enable the practical development of higher-perveance electron guns for lower-voltage and more reliable device operation. The low cathode heater power and reduced size and mass are expected to be particularly beneficial in traveling-wave-tube amplifiers (TWTAs) for space communications, where future NASA mission requirements include smaller onboard spacecraft systems, higher data transmission rates (high frequency and output power) and greater electrical efficiency.

  2. World Ships - Architectures & Feasibility Revisited

    NASA Astrophysics Data System (ADS)

    Hein, A. M.; Pak, M.; Putz, D.; Buhler, C.; Reiss, P.

    A world ship is a concept for manned interstellar flight. It is a huge, self-contained and self-sustained interstellar vehicle. It travels at a fraction of a per cent of the speed of light and needs several centuries to reach its target star system. The well-known world ship concept by Alan Bond and Anthony Martin was intended to show its feasibility in principle. However, several important issues have not been addressed so far: the relationship between crew size and robustness of knowledge transfer, reliability, and alternative mission architectures. This paper addresses these gaps. Furthermore, it gives an update on target star system choice, and develops possible mission architectures. The derived conclusions are: a large population size leads to robust knowledge transfer and cultural adaptation. These processes can be improved by new technologies. World ship reliability depends on the availability of an automatic repair system, as in the case of the Daedalus probe. Star systems with habitable planets are probably farther away than systems with enough resources to construct space colonies. Therefore, missions to habitable planets have longer trip times and have a higher risk of mission failure. On the other hand, the risk of constructing colonies is higher than that of establishing an initial settlement on a habitable planet. Mission architectures with precursor probes have the potential to significantly reduce trip and colonization risk without being significantly more costly than architectures without. In summary, world ships remain an interesting concept, although they require a space colony-based civilization within our own solar system before becoming feasible.

  3. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  4. Affordable Launch Services using the Sport Orbit Transfer System

    NASA Astrophysics Data System (ADS)

    Goldstein, D. J.

    2002-01-01

    Despite many advances in small satellite technology, a low-cost, reliable method is needed to place spacecraft in their desired orbits. AeroAstro has developed the Small Payload ORbit Transfer (SPORT™) system to provide a flexible low-cost orbit transfer capability, enabling small payloads to use low-cost secondary launch opportunities and still reach their desired final orbits. This capability allows small payloads to effectively use a wider variety of launch opportunities, including numerous under-utilized GTO slots. Its use, in conjunction with growing opportunities for secondary launches, enables increased access to space using proven technologies and highly reliable launch vehicles such as the Ariane family and the Starsem launcher. SPORT uses a suite of innovative technologies that are packaged in a simple, reliable, modular system. The command, control and data handling of SPORT is provided by the AeroAstro Bitsy™ core electronics module. The Bitsy module also provides power regulation for the batteries and optional solar arrays. The primary orbital maneuvering capability is provided by a nitrous oxide monopropellant propulsion system. This system exploits the unique features of nitrous oxide, which include self-pressurization, good performance, and safe handling, to provide a lightweight, low-cost and reliable propulsion capability. When transferring from a higher energy orbit to a lower energy orbit (i.e. GTO to LEO), SPORT uses aerobraking technology. After using the propulsion system to lower the orbit perigee, the aerobrake gradually slows SPORT via atmospheric drag. After the orbit apogee is reduced to the target level, an apogee burn raises the perigee and ends the aerobraking. At the conclusion of the orbit transfer maneuver, either the aerobrake or SPORT can be shed, as desired by the payload. SPORT uses a simple design for high reliability and a modular architecture for maximum mission flexibility. This paper will discuss the launch system and its application to small satellite launch without increasing risk. It will also discuss relevant issues such as aerobraking operations and radiation issues, as well as existing partnerships and patents for the system.

  5. The Draw-A-Person Test: an indicator of children's cognitive and socioemotional adaptation?

    PubMed

    ter Laak, J; de Goede, M; Aleva, A; van Rijswijk, P

    2005-03-01

    The authors examined aspects of reliability and validity of the Goodenough-Harris Draw-A-Person Test (DAP; D. B. Harris, 1963). The participants were 115 seven- to nine-year-old students attending regular or special education schools. Three judges, with a modest degree of training similar to that found among practicing clinicians, rated the students' human figure drawings on developmental and personality variables. The authors found that counting details and determining developmental level in the DAP test could be carried out reliably by judges with limited experience. However, the reliability of judgments of children's social and emotional development and personality was insufficient. Older students and students attending regular schools received significantly higher scores than did younger students or students attending special education schools. The authors found that the success of the DAP test as an indicator of cognitive level, socioemotional development, and personality is limited when global judgments are used. The authors concluded that more specific, reliable, valid, and useful scoring systems are needed for the DAP test.

  6. Reliability Evaluation and Improvement Approach of Chemical Production Man - Machine - Environment System

    NASA Astrophysics Data System (ADS)

    Miao, Yongchun; Kang, Rongxue; Chen, Xuefeng

    2017-12-01

    In recent years, with the gradual extension of reliability research, the study of production system reliability has become a hot topic in various industries. A man-machine-environment system is a complex system composed of human factors, machinery equipment and the environment. The reliability of each individual factor must be analyzed before moving on to research on three-factor reliability. Meanwhile, the dynamic relationships among man, machine and environment should be considered in order to establish an effective fuzzy evaluation mechanism that can truly and effectively analyze the reliability of such systems. In this paper, based on systems engineering, fuzzy theory, reliability theory, human error, environmental impact and machinery equipment failure theory, the reliabilities of the human factor, the machinery equipment and the environment of a chemical production system were studied by the method of fuzzy evaluation. Finally, the reliability of the man-machine-environment system was calculated as a weighted result, which indicated that the reliability value of this chemical production system was 86.29. According to the given evaluation domain, the reliability of the integrated man-machine-environment system is in good status, and effective measures for further improvement were proposed according to the fuzzy calculation results.
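
    The weighted result quoted above can be illustrated with a stripped-down weighted synthetic evaluation: factor-level reliability scores combined by weights into a single system figure. The weights and scores below are invented and do not reproduce the paper's fuzzy membership machinery or its 86.29 result.

    ```python
    # Hypothetical factor scores (0-100) and weights for a man-machine-environment system.
    scores  = {"human": 82.0, "machinery": 90.0, "environment": 85.0}
    weights = {"human": 0.4,  "machinery": 0.35, "environment": 0.25}

    # Weighted synthesis of the three factor-level evaluations.
    system_score = sum(scores[k] * weights[k] for k in scores)
    print(f"weighted system reliability score = {system_score:.2f}")
    ```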

  7. Test of e-Learning Related Attitudes (TeLRA) Scale: Development, Reliability and Validity Study

    ERIC Educational Resources Information Center

    Kisanga, D. H.; Ireson, G.

    2016-01-01

    The Tanzanian education system is in transition from face-to-face classroom learning to e-learning. E-learning is a new learning approach in Tanzanian Higher Learning Institutions [HLIs] and with teachers being the key stakeholders of all formal education, investigating their attitude towards e-learning is essential. So far, however, there has…

  8. Modeling and simulation of reliability of unmanned intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Dixit, Arati M.; Mustapha, Adam; Singh, Kuldip; Aggarwal, K. K.; Gerhart, Grant R.

    2008-04-01

    Unmanned ground vehicles have a large number of scientific, military and commercial applications. A convoy of such vehicles can operate with collaboration and coordination. For the movement of such a convoy, it is important to predict the reliability of the system. A number of approaches are available in the literature that describe techniques for determining system reliability. Graph theoretic approaches are popular in determining terminal reliability and system reliability. In this paper we propose to exploit Fuzzy and Neuro-Fuzzy approaches for predicting the node and branch reliability of the system, while Boolean algebra approaches are used to determine terminal reliability and system reliability. Hence a combination of intelligent approaches like Fuzzy, Neuro-Fuzzy and Boolean approaches is used to predict the overall system reliability of a convoy of vehicles. The node reliabilities may correspond to the collaboration of vehicles while branch reliabilities will determine the terminal reliabilities between different nodes. An algorithm is proposed for determining the system reliabilities of a convoy of vehicles. The simulation of the overall system is proposed. Such a simulation should help the commander take appropriate action depending on the predicted reliability in different terrain and environmental conditions. It is hoped that the results of this paper will lead to further techniques for maintaining a reliable convoy of vehicles on the battlefield.

  9. Applicability and Limitations of Reliability Allocation Methods

    NASA Technical Reports Server (NTRS)

    Cruz, Jose A.

    2016-01-01

    Reliability allocation process may be described as the process of assigning reliability requirements to individual components within a system to attain the specified system reliability. For large systems, the allocation process is often performed at different stages of system design. The allocation process often begins at the conceptual stage. As the system design develops, more information about components and the operating environment becomes available, different allocation methods can be considered. Reliability allocation methods are usually divided into two categories: weighting factors and optimal reliability allocation. When properly applied, these methods can produce reasonable approximations. Reliability allocation techniques have limitations and implied assumptions that need to be understood by system engineers. Applying reliability allocation techniques without understanding their limitations and assumptions can produce unrealistic results. This report addresses weighting factors, optimal reliability allocation techniques, and identifies the applicability and limitations of each reliability allocation technique.
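
    A minimal sketch of one weighting-factor allocation scheme of the kind surveyed in the report: ARINC-style apportionment, where each component of an assumed series system is assigned a share of the allowed system failure rate in proportion to its predicted failure rate. The component data and mission parameters are invented for illustration.

    ```python
    import math

    def arinc_allocate(predicted_failure_rates, system_reliability_goal, mission_time):
        """Apportion the allowed system failure rate among series components
        in proportion to their predicted failure rates (ARINC-style weighting)."""
        lam_system = -math.log(system_reliability_goal) / mission_time   # allowed system rate
        total = sum(predicted_failure_rates.values())
        weights = {c: lam / total for c, lam in predicted_failure_rates.items()}
        return {c: w * lam_system for c, w in weights.items()}           # allocated rates

    # Hypothetical subsystem failure-rate predictions (failures per hour).
    predicted = {"pump": 2e-5, "valve": 5e-6, "controller": 1e-5}
    allocated = arinc_allocate(predicted, system_reliability_goal=0.99, mission_time=1000.0)
    for comp, lam in allocated.items():
        print(f"{comp}: allocated failure rate = {lam:.2e} /h, "
              f"allocated R(1000 h) = {math.exp(-lam * 1000.0):.4f}")
    ```

    The allocated failure rates sum to the allowed system rate, so the product of the allocated component reliabilities over the mission time recovers the system reliability goal for a series configuration.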

  10. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.

  11. Biomechanical considerations for abdominal loading by seat belt pretensioners.

    PubMed

    Rouhana, Stephen W; El-Jawahri, Raed E; Laituri, Tony R

    2010-11-01

    While seat belts are the most effective safety technology in vehicles today, there are continual efforts in the industry to improve their ability to reduce the risk of injury. In this paper, seat belt pretensioners and current trends towards more powerful systems were reviewed and analyzed. These more powerful systems may be, among other things, systems that develop higher belt forces, systems that remove slack from belt webbing at higher retraction speeds, or both. The analysis started with validation of the Ford Human Body Finite Element Model for use in evaluation of abdominal belt loading by pretensioners. The model was then used to show that those studies, done with lap-only belts, can be used to establish injury metrics for tests done with lap-shoulder belts. Then, previously-performed PMHS studies were used to develop AIS 2+ and AIS 3+ injury risk curves for abdominal interaction with seat belts via logistic regression and reliability analysis with interval censoring. Finally, some considerations were developed for a possible laboratory test to evaluate higher-powered pretensioners.

  12. 885-nm laser diode array pumped ceramic Nd:YAG master oscillator power amplifier system

    NASA Astrophysics Data System (ADS)

    Yu, Anthony W.; Li, Steven X.; Stephen, Mark A.; Seas, Antonios; Troupaki, Elisavet; Vasilyev, Aleksey; Conley, Heather; Filemyr, Tim; Kirchner, Cynthia; Rosanova, Alberto

    2010-04-01

    The objective of this effort is to develop more reliable, higher efficiency diode pumped Nd:YAG laser systems for space applications by leveraging technology investments from the DoD and other commercial industries. Our goal is to design, build, test and demonstrate the effectiveness of combining 885 nm laser pump diodes and the use of ceramic Nd:YAG for future flight missions. The significant reduction in thermal loading on the gain medium by the use of 885 nm pump lasers will improve system efficiency.

  13. Reliability of a visual scoring system with fluorescent tracers to assess dermal pesticide exposure.

    PubMed

    Aragon, Aurora; Blanco, Luis; Lopez, Lylliam; Liden, Carola; Nise, Gun; Wesseling, Catharina

    2004-10-01

    We modified Fenske's semi-quantitative 'visual scoring system' of fluorescent tracer deposited on the skin of pesticide applicators and evaluated its reproducibility in the Nicaraguan setting. The body surface of 33 farmers, divided into 31 segments, was videotaped in the field after spraying with a pesticide solution containing a fluorescent tracer. A portable UV lamp was used for illumination in a foldaway dark room. The videos of five farmers were randomly selected. The scoring was based on a matrix with extension of fluorescent patterns (scale 0-5) on the ordinate and intensity (scale 0-5) on the abscissa, with the product of these two ranks as the final score for each body segment (0-25). After 4 h of training, five medical students rated 155 video images and evaluated their quality. Cronbach alpha coefficients and two-way random effects intraclass correlation coefficients (ICC) with absolute agreement were computed to assess inter-rater reliability. Consistency was high (Cronbach alpha = 0.96), but the scores differed substantially between raters. The overall ICC was satisfactory [0.75; 95% confidence interval (CI) = 0.62-0.83], but it was lower for intensity (0.54; 95% CI = 0.40-0.66) and higher for extension (0.80; 95% CI = 0.71-0.86). ICCs were lowest for images with low scores and evaluated as low quality, and highest for images with high scores and high quality. Inter-rater reliability coefficients indicate repeatability of the scoring system. However, field conditions for recording fluorescence should be improved to achieve higher quality images, and training should emphasize a better mechanism for the reading of body areas with low contamination.

  14. High-Reliability Pump Module for Non-Planar Ring Oscillator Laser

    NASA Technical Reports Server (NTRS)

    Liu, Duncan T.; Qiu, Yueming; Wilson, Daniel W.; Dubovitsky, Serge; Forouhar, Siamak

    2007-01-01

    We propose and have demonstrated a prototype high-reliability pump module for pumping a Non-Planar Ring Oscillator (NPRO) laser suitable for space missions. The pump module consists of multiple fiber-coupled single-mode laser diodes and a fiber array micro-lens array based fiber combiner. The reported Single-Mode laser diode combiner laser pump module (LPM) provides a higher normalized brightness at the combined beam than multimode laser diode based LPMs. A higher brightness from the pump source is essential for efficient NPRO laser pumping and leads to higher reliability because higher efficiency requires a lower operating power for the laser diodes, which in turn increases the reliability and lifetime of the laser diodes. Single-mode laser diodes with Fiber Bragg Grating (FBG) stabilized wavelength permit the pump module to be operated without a thermal electric cooler (TEC) and this further improves the overall reliability of the pump module. The single-mode laser diode LPM is scalable in terms of the number of pump diodes and is capable of combining hundreds of fiber-coupled laser diodes. In the proof-of-concept demonstration, an e-beam written diffractive micro lens array, a custom fiber array, commercial 808nm single mode laser diodes, and a custom NPRO laser head are used. The reliability of the proposed LPM is discussed.

  15. A particle swarm model for estimating reliability and scheduling system maintenance

    NASA Astrophysics Data System (ADS)

    Puzis, Rami; Shirtz, Dov; Elovici, Yuval

    2016-05-01

    Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.

  16. Upgrade of the Minos+ Experiment Data Acquisition for the High Energy NuMI Beam Run

    DOE PAGES

    Badgett, William; Hahn, Steve R.; Torretta, Donatella; ...

    2016-03-14

    The Minos+ experiment is an extension of the Minos experiment at a higher energy and more intense neutrino beam, with the data collection having begun in the fall of 2013. The neutrino beam is provided by the Neutrinos from the Main Injector (NuMI) beam-line at Fermi National Accelerator Laboratory (Fermilab). The detector apparatus consists of two main detectors, one underground at Fermilab and the other in Soudan, Minnesota with the purpose of studying neutrino oscillations at a base line of 735 km. The original data acquisition system has been running for several years collecting data from NuMI, but with the extended run from 2013, parts of the system needed to be replaced due to obsolescence, reliability problems, and data throughput limitations. Specifically, we have replaced the front-end readout controllers, event builder, and data acquisition computing and trigger processing farms with modern, modular and reliable devices with few single points of failure. The new system is based on gigabit Ethernet TCP/IP communication to implement the event building and concatenation of data from many front-end VME readout crates. The simplicity and partitionability of the new system greatly eases the debugging and diagnosing process. As a result, the new system improves throughput by about a factor of three compared to the old system, up to 800 megabits per second, and has proven robust and reliable in the current run.

  17. High rate information systems - Architectural trends in support of the interdisciplinary investigator

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Preheim, Larry E.

    1990-01-01

    Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.

  18. System reliability approaches for advanced propulsion system structures

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Mahadevan, S.

    1991-01-01

    This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.

  19. ICAROUS - Integrated Configurable Algorithms for Reliable Operations Of Unmanned Systems

    NASA Technical Reports Server (NTRS)

    Consiglio, María; Muñoz, César; Hagen, George; Narkawicz, Anthony; Balachandran, Swee

    2016-01-01

    NASA's Unmanned Aerial System (UAS) Traffic Management (UTM) project aims at enabling near-term, safe operations of small UAS vehicles in uncontrolled airspace, i.e., Class G airspace. A far-term goal of UTM research and development is to accommodate the expected rise in small UAS traffic density throughout the National Airspace System (NAS) at low altitudes for beyond visual line-of-sight operations. This paper describes a new capability referred to as ICAROUS (Integrated Configurable Algorithms for Reliable Operations of Unmanned Systems), which is being developed under the UTM project. ICAROUS is a software architecture comprised of highly assured algorithms for building safety-centric, autonomous, unmanned aircraft applications. Central to the development of the ICAROUS algorithms is the use of well-established formal methods to guarantee higher levels of safety assurance by monitoring and bounding the behavior of autonomous systems. The core autonomy-enabling capabilities in ICAROUS include constraint conformance monitoring and contingency control functions. ICAROUS also provides a highly configurable user interface that enables the modular integration of mission-specific software components.

  20. 75 FR 72664 - System Personnel Training Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-26

    ...Under section 215 of the Federal Power Act, the Commission approves two Personnel Performance, Training and Qualifications (PER) Reliability Standards, PER-004-2 (Reliability Coordination--Staffing) and PER-005-1 (System Personnel Training), submitted to the Commission for approval by the North American Electric Reliability Corporation, the Electric Reliability Organization certified by the Commission. The approved Reliability Standards require reliability coordinators, balancing authorities, and transmission operators to establish a training program for their system operators, verify each of their system operators' capability to perform tasks, and provide emergency operations training to every system operator. The Commission also approves NERC's proposal to retire two existing PER Reliability Standards that are replaced by the standards approved in this Final Rule.

  1. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  2. Validity and Reliability of Surface Electromyography Measurements from a Wearable Athlete Performance System

    PubMed Central

    Lynn, Scott K.; Watkins, Casey M.; Wong, Megan A.; Balfany, Katherine; Feeney, Daniel F.

    2018-01-01

    The Athos ® wearable system integrates surface electromyography (sEMG ) electrodes into the construction of compression athletic apparel. The Athos system reduces the complexity and increases the portability of collecting EMG data and provides processed data to the end user. The objective of the study was to determine the reliability and validity of Athos as compared with a research grade sEMG system. Twelve healthy subjects performed 7 trials on separate days (1 baseline trial and 6 repeated trials). In each trial subjects wore the wearable sEMG system and had a research grade sEMG system’s electrodes placed just distal on the same muscle, as close as possible to the wearable system’s electrodes. The muscles tested were the vastus lateralis (VL), vastus medialis (VM), and biceps femoris (BF). All testing was done on an isokinetic dynamometer. Baseline testing involved performing isometric 1 repetition maximum tests for the knee extensors and flexors and three repetitions of concentric-concentric knee flexion and extension at MVC for each testing speed: 60, 180, and 300 deg/sec. Repeated trials 2-7 each comprised 9 sets where each set included three repetitions of concentric-concentric knee flexion-extension. Each repeated trial (2-7) comprised one set at each speed and percent MVC (50%, 75%, 100%) combination. The wearable system and research grade sEMG data were processed using the same methods and aligned in time. The amplitude metrics calculated from the sEMG for each repetition were the peak amplitude, sum of the linear envelope, and 95th percentile. Validity results comprise two main findings. First, there is not a significant effect of system (Athos or research grade system) on the repetition amplitude metrics (95%, peak, or sum). Second, the relationship between torque and sEMG is not significantly different between Athos and the research grade system. For reliability testing, the variation across trials and averaged across speeds was 0.8%, 7.3%, and 0.2% higher for Athos from BF, VL and VM, respectively. Also, using the standard deviation of the MVC normalized repetition amplitude, the research grade system showed 10.7% variability while Athos showed 12%. The wearable technology (Athos) provides sEMG measures that are consistent with controlled, research grade technologies and data collection procedures. Key points Surface EMG embedded into athletic garments (Athos) had similar validity and reliability when compared with a research grade system There was no difference in the torque-EMG relationship between the two systems No statistically significant difference in reliability across 6 trials between the two systems The validity and reliability of Athos demonstrates the potential for sEMG to be applied in dynamic rehabilitation and sports settings PMID:29769821

  3. Reliability and Validity Assessment of a Linear Position Transducer

    PubMed Central

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly AV, AP) make this device a useful tool for monitoring resistance training. Key points This study determined the validity and reliability of peak velocity, average velocity, peak power and average power measurements made using a linear position transducer The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and power. PMID:25729300

  4. Fast and reliable obstacle detection and segmentation for cross-country navigation

    NASA Technical Reports Server (NTRS)

    Talukder, A.; Manduchi, R.; Rankin, A.; Matthies, L.

    2002-01-01

    Obstacle detection is one of the main components of the control system of autonomous vehicles. In the case of indoor/urban navigation, obstacles are typically defined as surface points that are higher than the ground plane. This characterization, however, cannot be used in cross-country and unstructured environments, where the notion of ground plane is often not meaningful.

  5. Hybrid RAID With Dual Control Architecture for SSD Reliability

    NASA Astrophysics Data System (ADS)

    Chatterjee, Santanu

    2010-10-01

    Solid State Devices (SSDs), which are increasingly being adopted in today's data storage systems, have higher capacity and performance but lower reliability, which leads to more frequent rebuilds and to a higher risk. Although SSDs are very energy efficient compared to hard disk drives, the bit error rate (BER) of an SSD requires expensive erase operations between successive writes. Parity-based RAID (for example RAID 4, 5, 6) provides data integrity using parity information and tolerates the loss of any one drive (RAID 4, 5) or any two drives (RAID 6), but the parity blocks are updated more often than the data blocks due to random access patterns, so the SSD devices holding more parity receive more writes and consequently age faster. To address this problem, in this paper we propose a model-based hybrid disk array architecture in which we plan to use the RAID 4 (striping with parity) technique with SSDs as data drives, while fast hard disk drives of the same capacity are used as dedicated parity drives. The proposed architecture opens the door to using commodity SSDs past their erasure limit and can also reduce the need for expensive hardware Error Correction Code (ECC) in the devices.
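
    A minimal sketch of the RAID 4 idea the abstract builds on, not the authors' system: byte-wise XOR parity is computed over the data drives and written to the dedicated parity drive, and any single lost data block can be rebuilt from the surviving blocks plus parity. Block contents here are hypothetical.

        from functools import reduce

        def xor_blocks(blocks):
            """Byte-wise XOR of equally sized blocks."""
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

        # Hypothetical stripe: three SSD data blocks, one dedicated parity block (HDD).
        data_blocks = [b"SSD-0...", b"SSD-1...", b"SSD-2..."]
        parity = xor_blocks(data_blocks)          # written to the dedicated parity drive

        # Simulate losing data drive 1 and rebuilding it from the survivors plus parity.
        survivors = [data_blocks[0], data_blocks[2], parity]
        rebuilt = xor_blocks(survivors)
        assert rebuilt == data_blocks[1]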

  6. A Design Heritage-Based Forecasting Methodology for Risk Informed Management of Advanced Systems

    NASA Technical Reports Server (NTRS)

    Maggio, Gaspare; Fragola, Joseph R.

    1999-01-01

    The development of next generation systems often carries with it the promise of improved performance, greater reliability, and reduced operational costs. These expectations arise from the use of novel designs, new materials, and advanced integration and production technologies intended to replace the functionality of the previous generation. However, the novelty of these nascent technologies is accompanied by a lack of operational experience and, in many cases, no actual testing. Therefore, some of the enthusiasm surrounding most new technologies may be due to inflated aspirations from lack of knowledge rather than actual future expectations. This paper proposes a design heritage approach for improved reliability forecasting of advanced system components. The basis of the design heritage approach is to relate advanced system components to similar designs currently in operation. The demonstrated performance of these components could then be used to forecast the expected performance and reliability of comparable advanced technology components. In this approach, the greater the divergence of the advanced component designs from the current systems, the higher the uncertainty that accompanies the associated failure estimates. Designers of advanced systems are faced with many difficult decisions. One of the most common and most difficult of these decisions is the choice between design alternatives. In the past, decision-makers have found these decisions to be extremely difficult to make because they often involve the trade-off between a known performing fielded design and a promising paper design. When it comes to expected reliability performance, the paper design always looks better because it is on paper and it addresses all the known failure modes of the fielded design. On the other hand, there is a long and sometimes very difficult road between the promise of a paper design and its fulfillment, and sometimes the reliability promise is not fulfilled at all. Decision makers in advanced technology areas have always known to discount the performance claims of a design to a degree in proportion to its stage of development, and at times have preferred the more mature design over the one of lesser maturity even with the latter promising substantially better performance once fielded. As with the broader measures of performance, this has also been true for projected reliability performance. Paper estimates of potential advances in design reliability are to a degree uncertain in proportion to the maturity of the features being proposed to secure those advances. This is especially true when performance-enhancing features in other areas are also planned to be part of the development program.

  7. Calculating system reliability with SRFYDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
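
    SRFYDO itself is not reproduced here; the following is a generic sketch of the series-system, Bayesian idea the abstract describes: a Beta posterior for each component from pass/fail test data, combined by Monte Carlo into a system-level reliability estimate with uncertainty. Component names, priors, and test counts are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical component test data: (successes, trials), with Beta(1, 1) priors.
        tests = {"valve": (48, 50), "controller": (95, 100), "battery": (29, 30)}

        # Posterior draws of each component's reliability.
        draws = {
            name: rng.beta(1 + s, 1 + (n - s), size=20_000)
            for name, (s, n) in tests.items()
        }

        # Series system: every component must work, so reliabilities multiply.
        system = np.prod(np.column_stack(list(draws.values())), axis=1)

        print("posterior mean system reliability:", system.mean().round(3))
        print("90% credible interval:", np.quantile(system, [0.05, 0.95]).round(3))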

  8. Minimal support technology and in situ resource utilization for risk management of planetary spaceflight missions

    NASA Astrophysics Data System (ADS)

    Murphy, K. L.; Rygalov, V. Ye.; Johnson, S. B.

    2009-04-01

    All artificial systems and components in space degrade at higher rates than on Earth, depending in part on environmental conditions, design approach, assembly technologies, and the materials used. This degradation involves not only the hardware and software systems but the humans that interact with those systems. All technological functions and systems can be expressed through the functional dependence [Function] ~ ([ERU] × [RUIS] × [ISR]) / [DR], where [ERU] is the efficiency (rate) of environmental resource utilization, [RUIS] the resource utilization infrastructure, [ISR] the in situ resources, and [DR] the degradation rate. The limited resources of spaceflight and open space for autonomous missions require a high reliability (maximum possible, approaching 100%) for system functioning and operation, and must minimize the rate of any system degradation. To date, only a continuous human presence with a system in the spaceflight environment can absolutely mitigate those degradations. This mitigation is based on environmental amelioration for both the technology systems, as repair of data and spare parts, and the humans, as exercise and psychological support. Such maintenance now requires huge infrastructures, including research and development complexes and management agencies, which currently cannot move beyond the Earth. When considering what is required to move manned spaceflight from near Earth stations to remote locations such as Mars, what are the minimal technologies and infrastructures necessary for autonomous restoration of a degrading system in space? In all of the known system factors of a mission to Mars that reduce the mass load, increase the reliability, and reduce the mission’s overall risk, the current common denominator is the use of undeveloped or untested technologies. None of the technologies required to significantly reduce the risk for critical systems are currently available at acceptable readiness levels. Long term interplanetary missions require that space programs produce a craft with all systems integrated so that they are of the highest reliability. Right now, with current technologies, we cannot guarantee this reliability for a crew of six for 1000 days to Mars and back. Investigation of the technologies to answer this need and a focus of resources and research on their advancement would significantly improve chances for a safe and successful mission.

  9. Starship Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    The design and mass cost of a starship and its life support system are investigated. The mission plan for a multi generational interstellar voyage to colonize a new planet is used to describe the starship design, including the crew habitat, accommodations, and life support. Only current technology is assumed. Highly reliable life support systems can be provided with reasonably small additional mass, suggesting that they can support long duration missions. Bioregenerative life support, growing crop plants that provide food, water, and oxygen, has been thought to need less mass than providing stored food for long duration missions. The large initial mass of hydroponics systems is paid for over time by saving the mass of stored food. However, the yearly logistics mass required to support a bioregenerative system exceeds the mass of food solids it produces, so that supplying stored dehydrated food always requires less mass than bioregenerative food production. A mixed system that grows about half the food and supplies the other half dehydrated has advantages that allow it to breakeven with stored dehydrated food in about 66 years. However, moderate increases in the hydroponics system mass to achieve high reliability, such as adding spares that double the system mass and replacing the initial system every 100 years, increase the mass cost of bioregenerative life support. In this case, the high reliability half food growing, half food supplying system does not breakeven for 389 years. An even higher reliability half and half system, with three times original system mass and replacing the system every 50 years, never breaks even. Growing food for starship life support requires more mass than providing dehydrated food, even for multigeneration voyages of hundreds of years. The benefits of growing some food may justify the added mass cost. Much more efficient recycling food production is wanted but may not be possible. A single multigenerational interstellar voyage to colonize a new planet would have cost similar to that of the Apollo program. Cost is reduced if a small crew travels slowly and lands with minimal equipment. We can go to the stars!
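
    The breakeven argument in the abstract is simple arithmetic: stored food mass grows linearly with mission duration, while a food-growing system pays a large up-front mass plus a yearly logistics mass. A sketch with purely illustrative numbers (the actual masses from the study are not reproduced here):

        def breakeven_years(stored_rate, initial_mass, logistics_rate):
            """Years until a food-growing system's cumulative mass matches stored food.

            stored_rate    -- kg/year of stored dehydrated food it replaces
            initial_mass   -- up-front hardware mass of the growing system (kg)
            logistics_rate -- kg/year of resupply the growing system still needs
            Returns None if the growing system never breaks even.
            """
            saving_per_year = stored_rate - logistics_rate
            if saving_per_year <= 0:
                return None
            return initial_mass / saving_per_year

        # Illustrative values only, for a half-grown / half-stored diet.
        print(breakeven_years(stored_rate=1000.0, initial_mass=30000.0, logistics_rate=550.0))
        # -> about 66.7 years with these inputs; doubling initial_mass for spares
        #    pushes the breakeven point much further out, as the abstract describes.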

  10. A soft decoding algorithm and hardware implementation for the visual prosthesis based on high order soft demodulation.

    PubMed

    Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei

    2016-09-26

    High order modulation and demodulation technology can reconcile the competing frequency requirements of wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high order modulation technology for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit built on differential amplitude and phase shift keying (DAPSK) soft demodulation. Firstly, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm is put forward to reduce the hardware complexity for visual prosthesis use. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the Chase algorithm is combined with hard decoding to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability as the product of bit reliabilities is derived, which reduces the number of test vectors required by the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a model of the biological channel's attenuation properties is added to the ECC circuit, and the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. Simulation results show that the improved phase soft demodulation algorithm saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA results show that when data demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and of the RS ECC circuit at different coil separations, and the results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil separation. The RS ECC circuit therefore provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
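
    A minimal sketch of the symbol-reliability idea described above, not the paper's circuit: per-bit reliabilities from the soft demodulator are multiplied within each m-bit symbol, and the least reliable symbols are the positions the Chase decoder perturbs in its test vectors. The bit reliabilities, symbol size, and codeword length below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)

        m = 4                      # bits per RS symbol (hypothetical)
        n_symbols = 15             # RS codeword length in symbols (hypothetical)

        # Hypothetical per-bit reliabilities in (0, 1] from soft demodulation.
        bit_rel = rng.uniform(0.5, 1.0, size=n_symbols * m)

        # Symbol reliability = product of the reliabilities of its bits.
        sym_rel = bit_rel.reshape(n_symbols, m).prod(axis=1)

        # Chase decoding: build test vectors only around the least reliable symbols.
        n_test_positions = 3
        least_reliable = np.argsort(sym_rel)[:n_test_positions]
        print("least reliable symbol positions:", least_reliable)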

  11. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
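
    A short sketch of the redundancy arithmetic behind the trade described above: independent redundant strings multiply out failure probabilities, but a common-cause fraction caps the achievable gain. The simple beta-factor treatment and the numbers are assumptions of this sketch, not the paper's model.

        def redundant_reliability(r_single, n, beta=0.0):
            """Reliability of n parallel strings with a crude beta-factor common-cause model.

            r_single -- reliability of one string over the mission
            n        -- number of technically identical redundant strings
            beta     -- fraction of a string's failure probability that is common
                        cause, i.e. defeats all strings at once
            """
            q = 1.0 - r_single
            q_independent = (1.0 - beta) * q
            q_common = beta * q
            return (1.0 - q_independent ** n) * (1.0 - q_common)

        r = 0.95  # illustrative single-string reliability
        print("1 string:                    ", round(redundant_reliability(r, 1), 6))
        print("3 strings, no common cause:  ", round(redundant_reliability(r, 3), 6))
        print("3 strings, beta = 0.1:       ", round(redundant_reliability(r, 3, 0.1), 6))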

  12. Validity, Reliability, and Ability to Identify Fall Status of the Berg Balance Scale, BESTest, Mini-BESTest, and Brief-BESTest in Patients With COPD.

    PubMed

    Jácome, Cristina; Cruz, Joana; Oliveira, Ana; Marques, Alda

    2016-11-01

    The Berg Balance Scale (BBS), Balance Evaluation Systems Test (BESTest), Mini-BESTest, and Brief-BESTest are useful in the assessment of balance. Their psychometric properties, however, have not been tested in patients with chronic obstructive pulmonary disease (COPD). This study aimed to compare the validity, reliability, and ability to identify fall status of the BBS, BESTest, Mini-BESTest, and the Brief-BESTest in patients with COPD. A cross-sectional study was conducted. Forty-six patients (24 men, 22 women; mean age=75.9 years, SD=7.1) were included. Participants were asked to report their falls during the previous 12 months and to fill in the Activity-specific Balance Confidence (ABC) Scale. The BBS and the BESTest were administered. Mini-BESTest and Brief-BESTest scores were computed based on the participants' BESTest performance. Validity was assessed by correlating balance tests with each other and with the ABC Scale. Interrater reliability (2 raters), intrarater reliability (48-72 hours), and minimal detectable changes (MDCs) were established. Receiver operating characteristics assessed the ability of each balance test to differentiate between participants with and without a history of falls. Balance test scores were significantly correlated with each other (Spearman correlation rho=.73-.90) and with the ABC Scale (rho=.53-.75). Balance tests presented high interrater reliability (intraclass correlation coefficient [ICC]=.85-.97) and intrarater reliability (ICC=.52-.88) and acceptable MDCs (MDC=3.3-6.3 points). Although all balance tests were able to identify fall status (area under the curve=0.74-0.84), the BBS (sensitivity=73%, specificity=77%) and the Brief-BESTest (sensitivity=81%, specificity=73%) had the higher ability to identify fall status. Findings are generalizable mainly to older patients with moderate COPD. The 4 balance tests are valid, reliable, and valuable in identifying fall status in patients with COPD. The Brief-BESTest presented slightly higher interrater reliability and ability to differentiate participants' fall status. © 2016 American Physical Therapy Association.
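
    The minimal detectable change values quoted above follow from the standard relationship between the ICC, the sample standard deviation, and measurement error; a sketch with hypothetical inputs rather than the study's data:

        import math

        def mdc95(sd, icc):
            """Minimal detectable change at 95% confidence from test-retest reliability."""
            sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
            return 1.96 * math.sqrt(2.0) * sem     # change needed to exceed retest noise

        # Hypothetical balance-test scores: SD of 6 points, intrarater ICC of 0.88.
        print(round(mdc95(sd=6.0, icc=0.88), 1))   # -> about 5.8 points for these inputs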

  13. Bioresorbable polymer coated drug eluting stent: a model study.

    PubMed

    Rossi, Filippo; Casalini, Tommaso; Raffa, Edoardo; Masi, Maurizio; Perale, Giuseppe

    2012-07-02

    In drug eluting stent technologies, an increased demand for better control, higher reliability, and enhanced performance of drug delivery systems has emerged in recent years, offering the opportunity to introduce model-based approaches aimed at overcoming the considerable limits of trial-and-error methods. In this context a mathematical model was studied, based on detailed conservation equations and taking into account the main physical-chemical mechanisms involved in polymeric coating degradation, drug release, and restenosis inhibition. It highlights the interdependence between the factors affecting each of these phenomena and, in particular, the influence of stent design parameters on drug antirestenotic efficacy. The proposed model is therefore intended to simulate diffusional release under both in vitro and in vivo conditions: results were verified against various literature data, confirming the reliability of the parameter estimation procedure. The hierarchical structure of the model also allows the set of equations describing restenosis evolution to be easily modified, enhancing model reliability and taking advantage of the deep understanding of the physiological mechanisms governing the different stages of smooth muscle cell growth and proliferation. In addition, thanks to its simplicity and its very low system requirements and central processing unit (CPU) time, the model provides immediate views of system behavior.
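
    A minimal, generic sketch of the diffusional-release part of such a model, not the authors' full set of conservation equations: explicit finite differences for 1D Fickian diffusion out of a coating of thickness L, with an impermeable strut wall on one side and a perfect-sink tissue interface on the other. All parameter values are hypothetical.

        import numpy as np

        # Hypothetical coating parameters.
        L = 10e-6        # coating thickness (m)
        D = 1e-15        # drug diffusivity in the polymer (m^2/s)
        nx = 50
        dx = L / nx
        dt = 0.4 * dx * dx / D          # within the explicit stability limit dt <= dx^2 / (2D)

        c = np.ones(nx)                 # initial dimensionless drug concentration
        t = 0.0

        for step in range(20_000):
            c_new = c.copy()
            # Interior nodes: c_t = D * c_xx, explicit scheme.
            c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c_new[0] = c_new[1]         # zero-flux boundary at the strut surface
            c_new[-1] = 0.0             # perfect sink at the coating/tissue interface
            c = c_new
            t += dt

        released = 1.0 - c.mean()       # cumulative release fraction
        print(f"release fraction after {t / 86400:.1f} days: {released:.2f}")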

  14. Quantifying Engagement: Measuring Player Involvement in Human-Avatar Interactions

    PubMed Central

    Norris, Anne E.; Weger, Harry; Bullinger, Cory; Bowers, Alyssa

    2014-01-01

    This research investigated the merits of using an established system for rating behavioral cues of involvement in human dyadic interactions (i.e., face-to-face conversation) to measure involvement in human-avatar interactions. Gameplay audio-video and self-report data from a Feasibility Trial and Free Choice study of an effective peer resistance skill building simulation game (DRAMA-RAMA™) were used to evaluate reliability and validity of the rating system when applied to human-avatar interactions. The Free Choice study used a revised game prototype that was altered to be more engaging. Both studies involved girls enrolled in a public middle school in Central Florida that served a predominately Hispanic (greater than 80%), low-income student population. Audio-video data were coded by two raters, trained in the rating system. Self-report data were generated using measures of perceived realism, predictability and flow administered immediately after game play. Hypotheses for reliability and validity were supported: Reliability values mirrored those found in the human dyadic interaction literature. Validity was supported by factor analysis, significantly higher levels of involvement in Free Choice as compared to Feasibility Trial players, and correlations between involvement dimension sub scores and self-report measures. Results have implications for the science of both skill-training intervention research and game design. PMID:24748718

  15. Implications of DSM-5 for the diagnosis of pediatric eating disorders.

    PubMed

    Limburg, Karina; Shu, Chloe Y; Watson, Hunna J; Hoiles, Kimberley J; Egan, Sarah J

    2018-05-01

    The aim of the study was to compare the DSM-IV, DSM-5, and ICD-10 eating disorders (ED) nomenclatures to assess their value in the classification of pediatric eating disorders. We investigated the prevalence of the disorders in accordance with each system's diagnostic criteria, diagnostic concordance between the systems, and interrater reliability. Participants were 1062 children and adolescents assessed at intake to a specialist Eating Disorders Program (91.6% female, mean age 14.5 years, SD = 1.75). Measures were collected from routine intake assessments. DSM-5 categorization led to a lower prevalence of unspecified EDs when compared with DSM-IV. There was almost complete overlap for specified EDs. Kappa values indicated almost excellent agreement between the two coders on all three diagnostic systems, although there was higher interrater reliability for DSM-5 and ICD-10 when compared with DSM-IV. DSM-5 nomenclature is useful in classifying eating disorders in pediatric clinical samples. © 2018 Wiley Periodicals, Inc.

  16. Evaluation of power system security and development of transmission pricing method

    NASA Astrophysics Data System (ADS)

    Kim, Hyungchul

    The electric power utility industry is presently undergoing a change towards the deregulated environment. This has resulted in unbundling of generation, transmission and distribution services. The introduction of competition into unbundled electricity services may lead system operation closer to its security boundaries resulting in smaller operating safety margins. The competitive environment is expected to lead to lower price rates for customers and higher efficiency for power suppliers in the long run. Under this deregulated environment, security assessment and pricing of transmission services have become important issues in power systems. This dissertation provides new methods for power system security assessment and transmission pricing. In power system security assessment, the following issues are discussed (1) The description of probabilistic methods for power system security assessment; (2) The computation time of simulation methods; (3) on-line security assessment for operation. A probabilistic method using Monte-Carlo simulation is proposed for power system security assessment. This method takes into account dynamic and static effects corresponding to contingencies. Two different Kohonen networks, Self-Organizing Maps and Learning Vector Quantization, are employed to speed up the probabilistic method. The combination of Kohonen networks and Monte-Carlo simulation can reduce computation time in comparison with straight Monte-Carlo simulation. A technique for security assessment employing Bayes classifier is also proposed. This method can be useful for system operators to make security decisions during on-line power system operation. This dissertation also suggests an approach for allocating transmission transaction costs based on reliability benefits in transmission services. The proposed method shows the transmission transaction cost of reliability benefits when transmission line capacities are considered. The ratio between allocation by transmission line capacity-use and allocation by reliability benefits is computed using the probability of system failure.
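
    The on-line Bayes-classifier idea mentioned above can be illustrated with a tiny sketch: label operating states from off-line Monte Carlo contingency runs as secure or insecure, train a naive Bayes classifier on state features, and screen new operating points on-line. The feature choice and toy security rule are assumptions of this sketch, and scikit-learn's GaussianNB stands in for whatever classifier the dissertation implements.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)

        # Hypothetical operating states: [total load (MW), spinning reserve (MW),
        # heaviest line loading (%)]; label 1 = secure, 0 = insecure.
        n = 2000
        load = rng.uniform(800, 1600, n)
        reserve = rng.uniform(50, 400, n)
        line_pct = rng.uniform(40, 110, n)
        X = np.column_stack([load, reserve, line_pct])
        y = ((reserve > 0.1 * load) & (line_pct < 95)).astype(int)   # toy security rule

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = GaussianNB().fit(X_tr, y_tr)
        print("screening accuracy on held-out states:", round(clf.score(X_te, y_te), 3))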

  17. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 1: HARP introduction and user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested, within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

  18. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    PubMed

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus. This results in confusion and inconsistency with radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability using three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. Conclusions Reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these classification systems show reliability and reproducibility.

  19. Space Shuttle Program Primary Avionics Software System (PASS) Success Legacy - Quality and Reliability Date

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Peltier, Daryl

    2010-01-01

    This slide presentation reviews the avionics software system on board the space shuttle, with particular emphasis on its quality and reliability. The Primary Avionics Software System (PASS) provides automatic and fly-by-wire control of critical shuttle systems and executes in redundant computers. Charts show the number of space shuttle flights versus time, PASS's development history, and other data that point to the reliability of the system's development. The reliability of the system is also compared to predicted reliability.

  20. Application of the Reference Method Isotope Dilution Gas Chromatography Mass Spectrometry (ID/GC/MS) to Establish Metrological Traceability for Calibration and Control of Blood Glucose Test Systems

    PubMed Central

    Andreis, Elisabeth; Küllmer, Kai

    2014-01-01

    Self-monitoring of blood glucose (BG) by means of handheld BG systems is a cornerstone in diabetes therapy. The aim of this article is to describe a procedure with proven traceability for calibration and evaluation of BG systems to guarantee reliable BG measurements. Isotope dilution gas chromatography mass spectrometry (ID/GC/MS) is a method that fulfills all requirements to be used in a higher-order reference measurement procedure. However, this method is not applicable for routine measurements because of the time-consuming sample preparation. A hexokinase method with perchloric acid (PCA) sample pretreatment is used in a measurement procedure for such purposes. This method is directly linked to the ID/GC/MS method by calibration with a glucose solution that has an ID/GC/MS-determined target value. BG systems are calibrated with whole blood samples. The glucose levels in such samples are analyzed by this ID/GC/MS-linked hexokinase method to establish traceability to higher-order reference material. For method comparison, the glucose concentrations in 577 whole blood samples were measured using the PCA-hexokinase method and the ID/GC/MS method; this resulted in a mean deviation of 0.1%. The mean deviation between BG levels measured in >500 valid whole blood samples with BG systems and the ID/GC/MS was 1.1%. BG systems allow a reliable glucose measurement if a true reference measurement procedure, with a noninterrupted traceability chain using ID/GC/MS linked hexokinase method for calibration of BG systems, is implemented. Systems should be calibrated by means of a traceable and defined measurement procedure to avoid bias. PMID:24876614

  1. Inter-rater reliability for movement pattern analysis (MPA): measuring patterning of behaviors versus discrete behavior counts as indicators of decision-making style

    PubMed Central

    Connors, Brenda L.; Rende, Richard; Colton, Timothy J.

    2014-01-01

    The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic – the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts – and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of using an empirical approach to inform the consideration of measuring patterning versus discrete behavioral counts of behaviors when determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns. PMID:24999336

  2. Inter-rater reliability for movement pattern analysis (MPA): measuring patterning of behaviors versus discrete behavior counts as indicators of decision-making style.

    PubMed

    Connors, Brenda L; Rende, Richard; Colton, Timothy J

    2014-01-01

    The unique yield of collecting observational data on human movement has received increasing attention in a number of domains, including the study of decision-making style. As such, interest has grown in the nuances of core methodological issues, including the best ways of assessing inter-rater reliability. In this paper we focus on one key topic - the distinction between establishing reliability for the patterning of behaviors as opposed to the computation of raw counts - and suggest that reliability for each be compared empirically rather than determined a priori. We illustrate by assessing inter-rater reliability for key outcome measures derived from movement pattern analysis (MPA), an observational methodology that records body movements as indicators of decision-making style with demonstrated predictive validity. While reliability ranged from moderate to good for raw counts of behaviors reflecting each of two Overall Factors generated within MPA (Assertion and Perspective), inter-rater reliability for patterning (proportional indicators of each factor) was significantly higher and excellent (ICC = 0.89). Furthermore, patterning, as compared to raw counts, provided better prediction of observable decision-making process assessed in the laboratory. These analyses support the utility of using an empirical approach to inform the consideration of measuring patterning versus discrete behavioral counts of behaviors when determining inter-rater reliability of observable behavior. They also speak to the substantial reliability that may be achieved via application of theoretically grounded observational systems such as MPA that reveal thinking and action motivations via visible movement patterns.

  3. Geometric classification of scalp hair for valid drug testing, 6 more reliable than 8 hair curl groups.

    PubMed

    Mkentane, K; Van Wyk, J C; Sishi, N; Gumedze, F; Ngoepe, M; Davids, L M; Khumalo, N P

    2017-01-01

    Curly hair is reported to contain higher lipid content than straight hair, which may influence incorporation of lipid soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair (based on 3 measurements: the curve diameter, curl index and number of waves). After ethical approval and informed consent, proximal virgin (6cm) hair sampled from the vertex of scalp in 48 healthy volunteers were evaluated. Three raters each scored hairs from 48 volunteers at two occasions each for the 8 and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra and inter rater agreement. Each rater classified 480 hairs on each occasion. No rater classified any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-groups (k = 0.418) but improved for the 6-groups (k = 0.671). The intra-rater agreement also improved (k = 0.444 to 0.648 versus 0.599 to 0.836) for 6-groups; that for the one evaluator for all volunteers was good (k = 0.754). Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in Medicine.

  4. Reliability and Cost Impacts for Attritable Systems

    DTIC Science & Technology

    2017-03-23

    ... and cost risk metrics to convey the value of reliability and reparability trades. Investigation of the benefit of trading system reparability ... illustrates the benefit that reliability engineering can have on total cost. 2.3.1 Contexts of System Reliability: Hogge (2012) identifies two distinct ... reliability and reparability trades. Investigation of the benefit of trading system reparability shows a marked increase in cost risk. Yet, trades in ...

  5. Electrochemical disinfection of repeatedly recycled blackwater in a free-standing, additive-free toilet.

    PubMed

    Hawkins, Brian T; Sellgren, Katelyn L; Klem, Ethan J D; Piascik, Jeffrey R; Stoner, Brian R

    2017-11-01

    Decentralized, energy-efficient waste water treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water- and energy-scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field.

  6. Smart distribution systems

    DOE PAGES

    Jiang, Yazhou; Liu, Chen -Ching; Xu, Yin

    2016-04-19

    The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Furthermore, test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.

  7. Evaluation of reliability modeling tools for advanced fault tolerant systems

    NASA Technical Reports Server (NTRS)

    Baker, Robert; Scheper, Charlotte

    1986-01-01

    The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (the difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.

  8. Bulk electric system reliability evaluation incorporating wind power and demand side management

    NASA Astrophysics Data System (ADS)

    Huang, Dange

    Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable tool for large system reliability assessment technique due to the development of computing power and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment and the effects on system, load point and well-being indices and reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations and the interactive effects of wind power and load forecast uncertainty on system reliability are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
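
    A stripped-down sketch of the sequential Monte Carlo idea used throughout the thesis: simulate each generating unit's up/down history hour by hour with a two-state model, compare available capacity to the hourly load, and accumulate adequacy indices such as loss-of-load hours. The unit data and load shape are hypothetical, wind and network constraints are omitted, and this is nowhere near a bulk-system model.

        import numpy as np

        rng = np.random.default_rng(4)

        HOURS = 8760
        # Hypothetical units: (capacity MW, hourly failure prob, hourly repair prob).
        units = [(200, 1 / 1750, 1 / 50)] * 4 + [(100, 1 / 1200, 1 / 40)] * 3
        load = 550 + 150 * np.sin(np.arange(HOURS) * 2 * np.pi / 24)   # toy daily load shape

        YEARS = 50
        lol_hours = 0
        for _ in range(YEARS):                        # sampled years
            up = [True] * len(units)                  # all units start available
            for h in range(HOURS):
                for i, (_, p_fail, p_repair) in enumerate(units):
                    p = p_fail if up[i] else p_repair
                    if rng.random() < p:
                        up[i] = not up[i]             # unit changes state this hour
                capacity = sum(c for (c, _, _), u in zip(units, up) if u)
                if capacity < load[h]:
                    lol_hours += 1

        print("LOLE estimate:", lol_hours / YEARS, "hours/year")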

  9. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities. They can avoid voltage sags and other distribution system disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches or storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT). The DWT provides a means of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components arising from the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
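
    The sketch below illustrates only the general detection idea, not the author's algorithm: a synthetic current waveform (an assumed 50 Hz signal with an abrupt decaying offset standing in for inrush) is decomposed with a single-level DWT using PyWavelets, and the detail coefficients are thresholded with the usual universal threshold. The sampling rate, the 'db4' wavelet, and all waveform parameters are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

fs = 10_000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.2, 1.0 / fs)               # 200 ms window
rng = np.random.default_rng(0)
current = np.sin(2 * np.pi * 50 * t) + 0.01 * rng.normal(size=t.size)

# Crude stand-in for inrush: a decaying offset switched in abruptly at t = 0.1 s
onset = t >= 0.1
current[onset] += 3.0 * np.exp(-25.0 * (t[onset] - 0.1))

# Single-level DWT: the detail coefficients respond strongly to waveform discontinuities
cA, cD = pywt.dwt(current, 'db4')

# Wavelet thresholding: universal threshold with a robust (median-based) noise estimate
sigma = np.median(np.abs(cD)) / 0.6745
threshold = sigma * np.sqrt(2.0 * np.log(cD.size))
spike_idx = np.where(np.abs(cD) > threshold)[0]

# A level-1 detail index i corresponds roughly to sample 2*i of the original waveform
print("discontinuity detected near t =", np.round(spike_idx * 2.0 / fs, 4), "s")
```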

  10. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. The software reliability models developed over the last decade need to be brought into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability models to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models, which could then be incorporated in a tool such as SMERFS'3. This tool, with better models, would add significant value in assessing GSFC projects.

  11. Proposing Melasma Severity Index: A New, More Practical, Office-based Scoring System for Assessing the Severity of Melasma

    PubMed Central

    Majid, Imran; Haq, Inaamul; Imran, Saher; Keen, Abid; Aziz, Khalid; Arif, Tasleem

    2016-01-01

    Background: Melasma Area and Severity Index (MASI), the scoring system in melasma, needs to be refined. Aims and Objectives: To propose a more practical scoring system, named the Melasma Severity Index (MSI), for assessing disease severity and treatment response in melasma. Materials and Methods: Four dermatologists were trained to calculate MASI and the proposed MSI scores. For MSI, the formula used was 0.4 (a × p²)l + 0.4 (a × p²)r + 0.2 (a × p²)n, where “a” stands for area, “p” for pigmentation, “l” for left face, “r” for right face, and “n” for nose. On a single day, 30 enrolled patients were randomly examined by each trained dermatologist and their MASI and MSI scores were calculated. Next, each rater re-examined every 6th patient for repeat MASI and MSI scoring to assess intra- and inter-rater reliability of MASI and MSI scores. Validity was assessed by comparing the individual scores of each rater with objective data from mexameter and ImageJ software. Results: Inter-rater reliability, as assessed by the intraclass correlation coefficient, was significantly higher for MSI (0.955) than for MASI (0.816). Correlation of scores with objective data by Spearman's correlation revealed higher rho values for MSI than for MASI for all raters. Limitations: The sample population belonged to a single ethnic group. Conclusions: MSI is a simpler and more practical scoring system for melasma. PMID:26955093
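
    A direct transcription of the quoted formula follows as a minimal sketch; the abstract does not restate the regional area and pigmentation score ranges, so the example values are purely illustrative and the function simply encodes the stated weighting.

```python
def msi(area_left, pig_left, area_right, pig_right, area_nose, pig_nose):
    """Melasma Severity Index per the formula quoted in the abstract:
    MSI = 0.4*(a*p^2)_left + 0.4*(a*p^2)_right + 0.2*(a*p^2)_nose,
    where a is the area score and p the pigmentation score for each region
    (score ranges follow the authors' definitions, not restated here)."""
    return (0.4 * area_left * pig_left ** 2
            + 0.4 * area_right * pig_right ** 2
            + 0.2 * area_nose * pig_nose ** 2)

# Illustrative scores only, not taken from the study
print(msi(area_left=3, pig_left=2, area_right=2, pig_right=2, area_nose=1, pig_nose=1))
```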

  12. Chip-package nano-structured copper and nickel interconnections with metallic and polymeric bonding interfaces

    NASA Astrophysics Data System (ADS)

    Aggarwal, Ankur

    With the semiconductor industry racing toward a historic transition, nano chips with less than 45 nm features demand I/Os in excess of 20,000 that support computing speeds in terabits per second, with multi-core processors aggregately providing the highest bandwidth at the lowest power. On the other hand, emerging mixed-signal systems are driving the need for 3D packaging with embedded active components and ultra-short interconnections. Decreasing I/O pitch together with low cost, high electrical performance, and high reliability are the key technological challenges identified by the 2005 International Technology Roadmap for Semiconductors (ITRS). Being able to provide a severalfold increase in chip-to-package vertical interconnect density is essential for garnering the true benefits of nanotechnology that will utilize nano-scale devices. Electrical interconnections are multi-functional materials that must also be able to withstand complex, sustained, and cyclic thermo-mechanical loads. In addition, the materials must be environmentally friendly, corrosion resistant, thermally stable over a long time, and resistant to electromigration. A major challenge is also to develop economic processes that can be integrated into the back end of the wafer foundry, i.e., with wafer-level packaging. Device-to-system board interconnections are typically accomplished today with either wire bonding or solders. Both of these are incremental and run into either electrical or mechanical barriers as they are extended to higher interconnection densities. Downscaling traditional solder bump interconnects will not satisfy the thermo-mechanical reliability requirements at very fine pitches of the order of 30 microns and less. Alternate interconnection approaches such as compliant interconnects typically require lengthy connections and are therefore limited in terms of electrical properties, although they are expected to meet the mechanical requirements. A novel chip-package interconnection technology is developed to address IC packaging requirements beyond the ITRS projections and to introduce innovative design and fabrication concepts that will further advance the performance of the chip, the package, and the system board. The nano-structured interconnect technology simultaneously packages all the ICs intact in wafer form, providing a quantum jump in the number of interconnections with the lowest electrical parasitics. The intrinsic properties of nano materials also enable several orders of magnitude higher interconnect densities with the best mechanical properties for the highest reliability, and yet provide higher current and heat transfer densities. Nano-structured interconnects provide the ability to assemble the packaged parts on the system board without the use of underfill materials and to enable advanced analog/digital testing, reliability testing, and burn-in at wafer level. This thesis investigates the electrical and mechanical performance of nano-structured interconnections through modeling and test vehicle fabrication. The analytical models evaluate the performance improvements over solder and compliant interconnections. Test vehicles with nano-interconnections were fabricated using low-cost electro-deposition techniques and assembled with various bonding interfaces. Interconnections were fabricated at 200 micron pitch to compare with existing solder joints and at 50 micron pitch to demonstrate fabrication processes at fine pitches. Experimental and modeling results show that the proposed nano-interconnections could enhance reliability and potentially meet all the system performance requirements for emerging micro/nano-systems.

  13. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... data necessary to analyze and monitor Interconnection Reliability Operating Limits (IROL) within its... Interconnection Reliability Operating Limits, Order No. 748, 134 FERC ] 61,213 (2011). \\2\\ The term ``Wide-Area...

  14. Alternative magnetic flux leakage modalities for pipeline inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katragadda, G.; Lord, W.; Sun, Y.S.

    1996-05-01

    Increasing quality consciousness is placing higher demands on the accuracy and reliability of inspection systems used in defect detection and characterization. Nondestructive testing techniques often rely on using multi-transducer approaches to obtain greater defect sensitivity. This paper investigates the possibility of taking advantage of alternative modalities associated with the standard magnetic flux leakage tool to obtain additional defect information, while still using a single excitation source.

  15. Advanced OTV engine concepts

    NASA Technical Reports Server (NTRS)

    Zachary, A. T.

    1984-01-01

    The results and status of engine technology efforts to date and related company-funded activities are presented. Advanced concepts in combustors and injectors, high-speed turbomachinery, controls, and high-area-ratio nozzles that package within a short length result in engines with specific impulse values 35 to 46 seconds higher than those now realized by operational systems. Improvements in the life, reliability, and maintainability of OTV engines are also important.

  16. Corrosion Monitors for Embedded Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Alex L.; Pfeifer, Kent B.; Casias, Adrian L.

    2017-05-01

    We have developed and characterized novel in-situ corrosion sensors to monitor and quantify the corrosive potential and history of localized environments. Embedded corrosion sensors can provide information to aid health assessments of internal electrical components including connectors, microelectronics, wires, and other susceptible parts. When combined with other data (e.g. temperature and humidity), theory, and computational simulation, the reliability of monitored systems can be predicted with higher fidelity.

  17. Reliability testing of two classification systems for osteoarthritis and post-traumatic arthritis of the elbow.

    PubMed

    Amini, Michael H; Sykes, Joshua B; Olson, Stephen T; Smith, Richard A; Mauck, Benjamin M; Azar, Frederick M; Throckmorton, Thomas W

    2015-03-01

    The severity of elbow arthritis is one of many factors that surgeons must evaluate when considering treatment options for a given patient. Elbow surgeons have historically used the Broberg and Morrey (BM) and Hastings and Rettig (HR) classification systems to radiographically stage the severity of post-traumatic arthritis (PTA) and primary osteoarthritis (OA). We proposed to compare the intraobserver and interobserver reliability between systems for patients with either PTA or OA. The radiographs of 45 patients were evaluated at least 2 weeks apart by 6 evaluators of different levels of training. Intraobserver and interobserver reliability were calculated by Spearman correlation coefficients with 95% confidence intervals. Agreement was considered almost perfect for coefficients >0.80 and substantial for coefficients of 0.61 to 0.80. In patients with both PTA and OA, intraobserver reliability and interobserver reliability were substantial, with no difference between classification systems. There were no significant differences in intraobserver or interobserver reliability between attending physicians and trainees for either classification system (all P > .10). The presence of fracture implants did not affect reliability in the BM system but did substantially worsen reliability in the HR system (intraobserver P = .04 and interobserver P = .001). The BM and HR classifications both showed substantial intraobserver and interobserver reliability for PTA and OA. Training level differences did not affect reliability for either system. Both trainees and fellowship-trained surgeons may easily and reliably apply each classification system to the evaluation of primary elbow OA and PTA, although the HR system was less reliable in the presence of fracture implants. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  18. Reliable actuators for twin rotor MIMO system

    NASA Astrophysics Data System (ADS)

    Rao, Vidya S.; V. I, George; Kamath, Surekha; Shreesha, C.

    2017-11-01

    The Twin Rotor MIMO System (TRMS) is a benchmark system for testing flight control algorithms. One of the perturbations on the TRMS likely to affect the control system is actuator failure. Therefore, there is a need for a reliable control system, which includes an H-infinity controller along with redundant actuators. Reliable control refers to the design of a control system that tolerates failures of a certain set of actuators or sensors while retaining desired control system properties. The output of the reliable controller has to be transferred to the redundant actuator effectively so that the TRMS remains reliable even under an actual actuator failure.

  19. Comparative Study of ENIG and ENEPIG as Surface Finishes for a Sn-Ag-Cu Solder Joint

    NASA Astrophysics Data System (ADS)

    Yoon, Jeong-Won; Noh, Bo-In; Jung, Seung-Boo

    2011-09-01

    Interfacial reactions and joint reliability of Sn-3.0Ag-0.5Cu solder with two different surface finishes, electroless nickel-immersion gold (ENIG) and electroless nickel-electroless palladium-immersion gold (ENEPIG), were evaluated during a reflow process. We first compared the interfacial reactions of the two solder joints and also successfully revealed a connection between the interfacial reaction behavior and mechanical reliability. The Sn-Ag-Cu/ENIG joint exhibited a higher intermetallic compound (IMC) growth rate and a higher consumption rate of the Ni(P) layer than the Sn-Ag-Cu/ENEPIG joint. The presence of the Pd layer in the ENEPIG suppressed the growth of the interfacial IMC layer and the consumption of the Ni(P) layer, resulting in the superior interfacial stability of the solder joint. The shear test results show that the ENIG joint fractured along the interface, exhibiting indications of brittle failure possibly due to the brittle IMC layer. In contrast, the failure of the ENEPIG joint only went through the bulk solder, supporting the idea that the interface is mechanically reliable. The results from this study confirm that the Sn-Ag-Cu/ENEPIG solder joint is mechanically robust and, thus, the combination is a viable option for a Pb-free package system.

  20. Checklist and Scoring System for the Assessment of Soft Tissue Preservation in CT Examinations of Human Mummies.

    PubMed

    Panzer, Stephanie; Mc Coy, Mark R; Hitzl, Wolfgang; Piombino-Mascali, Dario; Jankauskas, Rimantas; Zink, Albert R; Augat, Peter

    2015-01-01

    The purpose of this study was to develop a checklist for standardized assessment of soft tissue preservation in human mummies based on whole-body computed tomography examinations, and to add a scoring system to facilitate quantitative comparison of mummies. Computed tomography examinations of 23 mummies from the Capuchin Catacombs of Palermo, Sicily (17 adults, 6 children; 17 anthropogenically and 6 naturally mummified) and 7 mummies from the crypt of the Dominican Church of the Holy Spirit of Vilnius, Lithuania (5 adults, 2 children; all naturally mummified) were used to develop the checklist following previously published guidelines. The scoring system was developed by assigning equal scores for checkpoints with equivalent quality. The checklist was evaluated by intra- and inter-observer reliability. The finalized checklist was applied to compare the groups of anthropogenically and naturally mummified bodies. The finalized checklist contains 97 checkpoints and was divided into two main categories, "A. Soft Tissues of Head and Musculoskeletal System" and "B. Organs and Organ Systems", each including various subcategories. The complete checklist had an intra-observer reliability of 98% and an inter-observer reliability of 93%. Statistical comparison revealed significantly higher values in anthropogenically compared to naturally mummified bodies for the total score and for three subcategories. In conclusion, the developed checklist allows for a standardized assessment and documentation of soft tissue preservation in whole-body computed tomography examinations of human mummies. The scoring system facilitates a quantitative comparison of the soft tissue preservation status between single mummies or mummy collections.

  1. Team performance in networked supervisory control of unmanned air vehicles: effects of automation, working memory, and communication content.

    PubMed

    McKendrick, Ryan; Shaw, Tyler; de Visser, Ewart; Saqer, Haneen; Kidwell, Brian; Parasuraman, Raja

    2014-05-01

    Assess team performance within a networked supervisory control setting while manipulating automated decision aids and monitoring team communication and working memory ability. Networked systems such as multi-unmanned air vehicle (UAV) supervision have complex properties that make prediction of human-system performance difficult. Automated decision aids can provide valuable information to operators, individual abilities can limit or facilitate team performance, and team communication patterns can alter how effectively individuals work together. We hypothesized that reliable automation, higher working memory capacity, and increased communication rates of task-relevant information would offset performance decrements attributed to high task load. Two-person teams performed a simulated air defense task with two levels of task load and three levels of automated aid reliability. Teams communicated and received decision aid messages via chat window text messages. Task Load x Automation effects were significant across all performance measures. Reliable automation limited the decline in team performance with increasing task load. Average team spatial working memory was a stronger predictor than other measures of team working memory. Frequency of team rapport and enemy location communications was positively related to team performance, and word count was negatively related to team performance. Reliable decision aiding mitigated team performance decline during increased task load during multi-UAV supervisory control. Team spatial working memory, communication of spatial information, and team rapport predicted team success. An automated decision aid can improve team performance under high task load. Assessment of spatial working memory and the communication of task-relevant information can help in operator and team selection in supervisory control systems.

  2. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    NASA Astrophysics Data System (ADS)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, both price and reliability must be considered as decision criteria, since price determines the direct (acquisition) cost while bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper presents a bearing evaluation method based on total cost of ownership analysis that considers price and maintenance cost as decision criteria. Furthermore, since failure data are lacking at the bearing evaluation phase, a reliability prediction method is used to estimate bearing reliability from its dynamic load rating. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas a cheaper bearing with lower reliability is preferable for short-term planning. This context dependence can give rise to conflict between stakeholders, so the planning horizon needs to be agreed by all stakeholders before a procurement decision is made.
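
    The abstract does not give the prediction formula, so the sketch below uses the standard basic-rating-life relation as a stand-in: L10 = (C/P)^p million revolutions, with a Weibull survival curve anchored at R(L10) = 0.90. The load values, the Weibull slope, and the two candidate bearings are assumptions for illustration only.

```python
import math

def l10_life_mrev(C_dynamic_kN, P_equivalent_kN, exponent=3.0):
    """Basic rating life in millions of revolutions (exponent 3 for ball bearings,
    10/3 for roller bearings, per the usual rating-life relation)."""
    return (C_dynamic_kN / P_equivalent_kN) ** exponent

def reliability_at(L_mrev, L10_mrev, weibull_slope=1.5):
    """Weibull survival curve anchored so that R(L10) = 0.90.
    The slope value here is an illustrative assumption, not from the paper."""
    return math.exp(math.log(0.9) * (L_mrev / L10_mrev) ** weibull_slope)

# Two hypothetical candidates: the pricier one has a higher dynamic load rating C (kN)
candidates = {"cheap": 30.0, "premium": 45.0}
P = 6.0  # equivalent dynamic load, kN (assumed)
for name, C in candidates.items():
    L10 = l10_life_mrev(C, P)
    print(f"{name}: L10 = {L10:,.0f} Mrev, R at 150 Mrev = {reliability_at(150, L10):.3f}")
```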

  3. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology that incorporates reliability analyses performed at the parts and components level, such as reliability prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks and perform design tradeoffs, and therefore to ensure effective productivity and/or mission success. System reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standard-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published in United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development, hard failure data are not yet available, and manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient, low-cost manner under a tight schedule.

  4. Generalized quantum kinetic expansion: Higher-order corrections to multichromophoric Förster theory

    NASA Astrophysics Data System (ADS)

    Wu, Jianlan; Gong, Zhihao; Tang, Zhoufei

    2015-08-01

    For a general two-cluster energy transfer network, a generalized quantum kinetic expansion (GQKE) method is developed, which predicts an exact time-convolution equation for the cluster population evolution under the initial condition of the local cluster equilibrium state. The cluster-to-cluster rate kernel is expanded over the inter-cluster couplings. The lowest, second-order GQKE rate recovers the multichromophoric Förster theory (MCFT) rate. The higher-order corrections to the MCFT rate are systematically included using the continued-fraction resummation form, resulting in the resummed GQKE method. The reliability of the GQKE methodology is verified in two model systems, revealing the relevance of higher-order corrections.

  5. Cultural competency assessment tool for hospitals: Evaluating hospitals’ adherence to the culturally and linguistically appropriate services standards

    PubMed Central

    Weech-Maldonado, Robert; Dreachslin, Janice L.; Brown, Julie; Pradhan, Rohit; Rubin, Kelly L.; Schiller, Cameron; Hays, Ron D.

    2016-01-01

    Background The U.S. national standards for culturally and linguistically appropriate services (CLAS) in health care provide guidelines on policies and practices aimed at developing culturally competent systems of care. The Cultural Competency Assessment Tool for Hospitals (CCATH) was developed as an organizational tool to assess adherence to the CLAS standards. Purposes First, we describe the development of the CCATH and estimate the reliability and validity of the CCATH measures. Second, we discuss the managerial implications of the CCATH as an organizational tool to assess cultural competency. Methodology/Approach We pilot tested an initial draft of the CCATH, revised it based on a focus group and cognitive interviews, and then administered it in a field test with a sample of California hospitals. The reliability and validity of the CCATH were evaluated using factor analysis, analysis of variance, and Cronbach’s alphas. Findings Exploratory and confirmatory factor analyses identified 12 CCATH composites: leadership and strategic planning, data collection on inpatient population, data collection on service area, performance management systems and quality improvement, human resources practices, diversity training, community representation, availability of interpreter services, interpreter services policies, quality of interpreter services, translation of written materials, and clinical cultural competency practices. All the CCATH scales had internal consistency reliability of .65 or above, and the reliability was .70 or above for 9 of the 12 scales. Analysis of variance results showed that not-for-profit hospitals have higher CCATH scores than for-profit hospitals in five CCATH scales and higher CCATH scores than government hospitals in two CCATH scales. Practice Implications The CCATH showed adequate psychometric properties. Managers and policy makers can use the CCATH as a tool to evaluate hospital performance in cultural competency and identify and target improvements in hospital policies and practices that undergird the provision of CLAS. PMID:21934511

  6. Theory of reliable systems. [systems analysis and design

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1973-01-01

    The analysis and design of reliable systems are discussed. The attributes of system reliability studied are fault tolerance, diagnosability, and reconfigurability. Objectives of the study include: to determine properties of system structure that are conducive to a particular attribute; to determine methods for obtaining reliable realizations of a given system; and to determine how properties of system behavior relate to the complexity of fault tolerant realizations. A list of 34 references is included.

  7. 76 FR 64082 - Mandatory Reliability Standards for the Bulk-Power System; Notice of Staff Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-17

    ... Reliability Standards for the Bulk-Power System; Notice of Staff Meeting Take notice that the Federal Energy... reliability implications to the interconnected transmission system associated with a single point of failure... R1.3.10 of Commission-approved transmission planning Reliability Standard TPL-002- 0 (System...

  8. Design modification for the modular helium reactor for higher temperature operation and reliability studies for nuclear hydrogen production processes

    NASA Astrophysics Data System (ADS)

    Reza, S. M. Mohsin

    Design options have been evaluated for the Modular Helium Reactor (MHR) for higher temperature operation. An alternative configuration for the MHR coolant inlet flow path is developed to reduce the peak vessel temperature (PVT). The coolant inlet path is shifted from the annular path between reactor core barrel and vessel wall through the permanent side reflector (PSR). The number and dimensions of coolant holes are varied to optimize the pressure drop, the inlet velocity, and the percentage of graphite removed from the PSR to create this inlet path. With the removal of ˜10% of the graphite from PSR the PVT is reduced from 541°C to 421°C. A new design for the graphite block core has been evaluated and optimized to reduce the inlet coolant temperature with the aim of further reduction of PVT. The dimensions and number of fuel rods and coolant holes, and the triangular pitch have been changed and optimized. Different packing fractions for the new core design have been used to conserve the number of fuel particles. Thermal properties for the fuel elements are calculated and incorporated into these analyses. The inlet temperature, mass flow and bypass flow are optimized to limit the peak fuel temperature (PFT) within an acceptable range. Using both of these modifications together, the PVT is reduced to ˜350°C while keeping the outlet temperature at 950°C and maintaining the PFT within acceptable limits. The vessel and fuel temperatures during low pressure conduction cooldown and high pressure conduction cooldown transients are found to be well below the design limits. The reliability and availability studies for coupled nuclear hydrogen production processes based on the sulfur iodine thermochemical process and high temperature electrolysis process have been accomplished. The fault tree models for both these processes are developed. Using information obtained on system configuration, component failure probability, component repair time and system operating modes and conditions, the system reliability and availability are assessed. Required redundancies are made to improve system reliability and to optimize the plant design for economic performance. The failure rates and outage factors of both processes are found to be well below the maximum acceptable range.

  9. Assessing the validity and reliability of three indicators self-reported on the pregnancy risk assessment monitoring system survey.

    PubMed

    Ahluwalia, Indu B; Helms, Kristen; Morrow, Brian

    2013-01-01

    We investigated the reliability and validity of three self-reported indicators from the Pregnancy Risk Assessment Monitoring System (PRAMS) survey. We used 2008 PRAMS (n=15,646) data from 12 states that had implemented the 2003 revised U.S. Certificate of Live Birth. We estimated reliability by kappa coefficient and validity by sensitivity and specificity using the birth certificate data as the reference for the following: prenatal participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); Medicaid payment for delivery; and breastfeeding initiation. These indicators were examined across several demographic subgroups. The reliability was high for all three measures: 0.81 for WIC participation, 0.67 for Medicaid payment of delivery, and 0.72 for breastfeeding initiation. The validity of PRAMS indicators was also high: WIC participation (sensitivity = 90.8%, specificity = 90.6%), Medicaid payment for delivery (sensitivity = 82.4%, specificity = 85.6%), and breastfeeding initiation (sensitivity = 94.3%, specificity = 76.0%). The prevalence estimates were higher on PRAMS than the birth certificate for each of the indicators except Medicaid-paid delivery among non-Hispanic black women. Kappa values within most subgroups remained in the moderate range (0.40-0.80). Sensitivity and specificity values were lower for Hispanic women who responded to the PRAMS survey in Spanish and for breastfeeding initiation among women who delivered very low birthweight and very preterm infants. The validity and reliability of the PRAMS data for measures assessed were high. Our findings support the use of PRAMS data for epidemiological surveillance, research, and planning.
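
    As a small illustration of the statistics named above (not a reanalysis of PRAMS), the sketch below computes the kappa coefficient, sensitivity, and specificity from a 2x2 agreement table in which the birth certificate is taken as the reference; the counts are made up for the example.

```python
def two_by_two_stats(a, b, c, d):
    """a: both sources 'yes'; b: survey yes / certificate no;
    c: survey no / certificate yes; d: both 'no'.
    Returns (kappa, sensitivity, specificity) with the birth certificate as reference."""
    n = a + b + c + d
    observed = (a + d) / n
    expected = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    kappa = (observed - expected) / (1 - expected)
    sensitivity = a / (a + c)   # survey 'yes' among certificate 'yes'
    specificity = d / (b + d)   # survey 'no' among certificate 'no'
    return kappa, sensitivity, specificity

# Illustrative counts only (not PRAMS data)
print(two_by_two_stats(a=4200, b=350, c=400, d=9000))
```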

  10. Do aggressive signals evolve towards higher reliability or lower costs of assessment?

    PubMed

    Ręk, P

    2014-12-01

    It has been suggested that the evolution of signals must be a wasteful process for the signaller, aimed at the maximization of signal honesty. However, the reliability of communication depends not only on the costs paid by signallers but also on the costs paid by receivers during assessment, and less attention has been given to the interaction between these two types of costs during the evolution of signalling systems. A signaller and receiver may accept some level of signal dishonesty by choosing signals that are cheaper in terms of assessment but that are stabilized with less reliable mechanisms. I studied the potential trade-off between signal reliability and the costs of signal assessment in the corncrake (Crex crex). I found that the birds prefer signals that are less costly regarding assessment rather than more reliable. Despite the fact that the fundamental frequency of calls was a strong predictor of male size, it was ignored by receivers unless they could directly compare signal variants. My data revealed a response advantage of costly signals when comparison between calls differing with fundamental frequencies is fast and straightforward, whereas cheap signalling is preferred in natural conditions. These data might improve our understanding of the influence of receivers on signal design because they support the hypothesis that fully honest signalling systems may be prone to dishonesty based on the effects of receiver costs and be replaced by signals that are cheaper in production and reception but more susceptible to cheating. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  11. A study on reliability of power customer in distribution network

    NASA Astrophysics Data System (ADS)

    Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin

    2017-05-01

    The existing power supply reliability index system is oriented to the power system without considering the actual electricity availability on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore presents a systematic study of power customer reliability. By comparison with power supply reliability, the reliability of the power customer is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indices and two contrast indices, is designed to describe power customer reliability in terms of continuity and availability. In order to comprehensively and quantitatively evaluate power customer reliability in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has shown that the proposed reliability index system and evaluation method for power customers are reasonable and effective.
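
    The paper's method is described as an improved entropy method combined with a punishment weighting principle; the sketch below shows only the textbook entropy weight step on a made-up index matrix, as one plausible building block. The index values, the benefit-type assumption, and the min-max normalization are illustrative choices, not taken from the paper.

```python
import numpy as np

def entropy_weights(X):
    """Basic entropy weight method (the paper uses an improved variant; this is the textbook form).
    X: m customers/alternatives (rows) by n benefit-type indices (columns).
    Indices whose values are more dispersed across alternatives receive larger weights."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    span = X.max(axis=0) - X.min(axis=0)
    norm = (X - X.min(axis=0)) / (span + 1e-12)          # min-max normalization (benefit direction assumed)
    P = (norm + 1e-12) / (norm + 1e-12).sum(axis=0)      # share of each alternative within each index
    entropy = -(P * np.log(P)).sum(axis=0) / np.log(m)
    divergence = 1.0 - entropy
    return divergence / divergence.sum()

# Illustrative index values for four customers across three continuity/availability indices
X = [[0.98, 12.0, 1.2],
     [0.95, 30.0, 2.5],
     [0.99,  8.0, 0.8],
     [0.97, 18.0, 1.6]]
print(entropy_weights(X))
```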

  12. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and subsystem reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and we present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
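
    To make the engine-out driver concrete, here is a toy parametric model in the spirit of the drivers listed above; it is not the authors' model, and every number and the functional form are assumptions for illustration. The stage succeeds if all engines run, or if exactly one engine fails benignly, the failure is non-catastrophic, and the engine-out switching works.

```python
def stage_reliability(n_engines, R_engine, engine_out=True,
                      catastrophic_fraction=0.2, switch_reliability=0.995):
    """Toy engine-out model (illustrative only):
    success = all engines work, OR exactly one engine fails, the failure is
    contained (non-catastrophic), switching succeeds, and the rest finish the burn."""
    all_good = R_engine ** n_engines
    if not engine_out or n_engines == 1:
        # With a single engine there is nothing to switch to
        return all_good
    one_out = (n_engines * (1 - R_engine) * (1 - catastrophic_fraction)
               * switch_reliability * R_engine ** (n_engines - 1))
    return all_good + one_out

for n in (2, 3, 5, 9):
    no_eo = stage_reliability(n, R_engine=0.995, engine_out=False)
    with_eo = stage_reliability(n, R_engine=0.995)
    print(f"{n} engines: {no_eo:.5f} without engine-out, {with_eo:.5f} with engine-out")
```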

  13. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    PubMed

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
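
    For readers unfamiliar with the distribution-side indices the article critiques, the sketch below computes SAIFI, SAIDI, and CAIDI in the usual IEEE Std 1366 style from a made-up outage log; the event counts and durations are illustrative, and real reporting also involves rules for large-event treatment of the kind the article examines, which are not modeled here.

```python
def distribution_indices(events, customers_served):
    """SAIFI, SAIDI, CAIDI in the common IEEE Std 1366 style.
    events: list of (customers_interrupted, outage_duration_minutes)."""
    saifi = sum(ci for ci, _ in events) / customers_served
    saidi = sum(ci * dur for ci, dur in events) / customers_served   # customer-minutes per customer
    caidi = saidi / saifi if saifi else 0.0                          # average minutes per interruption
    return saifi, saidi, caidi

# Illustrative one-year outage log
events = [(1200, 95), (300, 40), (4500, 180), (60, 25)]
print(distribution_indices(events, customers_served=50_000))
```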

  14. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  15. Evaluation Applied to Reliability Analysis of Reconfigurable, Highly Reliable, Fault-Tolerant, Computing Systems for Avionics

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose output precision is quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.

  16. Reliability assessment and improvement for a fast corrector power supply in TPS

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Bin; Liu, Chen-Yao; Wang, Bao-Sheng; Wong, Yong Seng

    2018-07-01

    A Fast Orbit Feedback System (FOFB) can be installed in a synchrotron light source to eliminate undesired disturbances and to improve the stability of the beam orbit. The design and implementation of an accurate and reliable Fast Corrector Power Supply (FCPS) is essential to realize the effectiveness and availability of the FOFB. A reliability assessment of the FCPSs in the FOFB of the Taiwan Photon Source (TPS), taking the MOSFETs' temperatures into account, is presented in this paper. The FCPS is composed of a full-bridge topology and a low-pass filter. A Hybrid Pulse Width Modulation (HPWM) scheme, in which two MOSFETs of the full-bridge circuit are operated at high frequency and the other two at the output frequency, is adopted to control the implemented FCPS. Due to the characteristics of HPWM, the conduction and switching losses of the MOSFETs in the FCPS are not the same. Two of the MOSFETs in the full-bridge circuit therefore run at higher temperatures, and the circuit reliability of the FCPS is reduced. A Modified PWM Scheme (MPWMS), designed to balance the MOSFETs' temperatures and thereby improve circuit reliability, is proposed in this paper. Experiments measure the MOSFET temperatures of the FCPS controlled by HPWM and by the proposed MPWMS, and the reliability indices under the different PWM controls are then assessed. From the experimental results, it can be observed that the reliability of the FCPS using the proposed MPWMS is improved because the MOSFET temperatures are more closely matched. Since the reliability of the FCPS is enhanced, the availability of the FOFB can also be improved.
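
    The link between device temperature and failure rate can be illustrated with a generic Arrhenius acceleration factor; this is not the reliability model used in the paper, and the activation energy and temperatures below are assumed, typical values only.

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_af(temp_low_c, temp_high_c, activation_energy_ev=0.7):
    """Acceleration factor of the failure rate at the higher device temperature
    relative to the lower one (activation energy is an assumed, typical value)."""
    t1 = temp_low_c + 273.15
    t2 = temp_high_c + 273.15
    return math.exp(activation_energy_ev / K_BOLTZMANN_EV * (1.0 / t1 - 1.0 / t2))

# Illustrative hot-spot comparison: unbalanced losses at 85 C vs. balanced losses at 65 C
print("85 C vs 65 C hot spot:", round(arrhenius_af(65, 85), 2), "x failure rate")
```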

  17. Reliability design and verification for launch-vehicle propulsion systems - Report of an AIAA Workshop, Washington, DC, May 16, 17, 1989

    NASA Astrophysics Data System (ADS)

    Launch vehicle propulsion system reliability considerations during the design and verification processes are discussed. The tools available for predicting and minimizing anomalies or failure modes are described and objectives for validating advanced launch system propulsion reliability are listed. Methods for ensuring vehicle/propulsion system interface reliability are examined and improvements in the propulsion system development process are suggested to improve reliability in launch operations. Also, possible approaches to streamline the specification and procurement process are given. It is suggested that government and industry should define reliability program requirements and manage production and operations activities in a manner that provides control over reliability drivers. Also, it is recommended that sufficient funds should be invested in design, development, test, and evaluation processes to ensure that reliability is not inappropriately subordinated to other management considerations.

  18. NASA TEERM Hexavalent Chrome Alternatives Projects

    NASA Technical Reports Server (NTRS)

    Kessel, Kurt R.; Rothgeb, Matthew

    2011-01-01

    The overall objective of the Hex Chrome Free Coatings for Electronics project is to evaluate and test pretreatment coating systems not containing hexavalent chrome in avionics and electronics housing applications. This objective will be accomplished by testing strong performing coating systems from prior NASA and DoD testing or new coating systems as determined by the stakeholders. The technical stakeholders have agreed that this protocol will focus specifically on Class 3 coatings. Original Equipment Manufacturers (OEMs), depots, and support contractors have to be prepared to deal with an electronics supply chain that increasingly provides parts with lead-free finishes, some labeled no differently and intermingled with their SnPb counterparts. Allowance of lead-free components presents one of the greatest risks to the reliability of military and aerospace electronics. The introduction of components with lead-free terminations, termination finishes, or circuit boards presents a host of concerns to customers, suppliers, and maintainers of aerospace and military electronic systems, such as: (1) electrical shorting due to tin whiskers; (2) incompatibility of lead-free processes and parameters (including higher melting points of lead-free alloys) with other materials in the system; and (3) unknown material properties and incompatibilities that could reduce solder joint reliability.

  19. Minimum Control Requirements for Advanced Life Support Systems

    NASA Technical Reports Server (NTRS)

    Boulange, Richard; Jones, Harry; Jones, Harry

    2002-01-01

    Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can and are adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these systems are as reliable as the hardware they control, there is no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous safe reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".

  20. Study on evaluation of construction reliability for engineering project based on fuzzy language operator

    NASA Astrophysics Data System (ADS)

    Shi, Yu-Fang; Ma, Yi-Yi; Song, Ping-Ping

    2018-03-01

    System reliability theory has been a research hotspot in management science and systems engineering in recent years, and construction reliability is useful for the quantitative evaluation of project management level. Based on reliability theory and the target system of engineering project management, construction reliability is defined. Using fuzzy mathematics theory and language operators, the value space of construction reliability is divided into seven fuzzy subsets; correspondingly, seven membership functions and fuzzy evaluation intervals are obtained through the language operators, which provides the method and parameters for evaluating construction reliability. The method is shown to be scientific and reasonable for construction conditions and is a useful attempt at theory and method research on engineering project system reliability.
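
    The paper's membership functions are not reproduced in the abstract, so the sketch below shows one conventional way to cover a [0, 1] reliability range with seven overlapping triangular fuzzy subsets and read off membership grades; the linguistic labels, breakpoints, and the triangular shape are assumptions for illustration, not the paper's definitions.

```python
def triangular(x, a, b, c):
    """Triangular membership with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Seven evenly spaced linguistic grades over the [0, 1] reliability range (illustrative)
GRADES = ["very low", "low", "fairly low", "medium", "fairly high", "high", "very high"]
CENTERS = [i / 6 for i in range(7)]

def memberships(reliability):
    width = 1 / 6
    return {g: round(triangular(reliability, c - width, c, c + width), 3)
            for g, c in zip(GRADES, CENTERS)}

# A construction reliability value of 0.72 is graded partly "fairly high", partly "high"
print(memberships(0.72))
```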

  1. Reliability models applicable to space telescope solar array assembly system

    NASA Technical Reports Server (NTRS)

    Patil, S. A.

    1986-01-01

    A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of both. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical reliability models that are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
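
    A minimal sketch of a constant-reliability k-failures-out-of-n calculation follows, under one common reading of the abstract's description: the subsystem fails once k of its n identical components have failed, so k = 1 gives a series model and k = n gives a parallel model. The component reliability and the 20-assembly count below are illustrative, not the STSA estimates, and the time-dependent failure-rate aspect of the models is not shown.

```python
from math import comb

def subsystem_reliability(n, k, p):
    """Reliability of a subsystem of n identical components that fails once k of
    them have failed (so it tolerates up to k-1 failures), with component reliability p.
    k = 1 gives a series model (p**n); k = n gives a parallel model (1 - (1-p)**n)."""
    return sum(comb(n, j) * (1 - p) ** j * p ** (n - j) for j in range(k))

# Illustrative numbers only: 20 assemblies, each assumed 0.995 reliable over the mission
p_spa = 0.995
print("series (k=1):     ", round(subsystem_reliability(20, 1, p_spa), 4))
print("tolerate one (k=2):", round(subsystem_reliability(20, 2, p_spa), 4))
print("parallel (k=20):   ", round(subsystem_reliability(20, 20, p_spa), 6))
```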

  2. 78 FR 73112 - Monitoring System Conditions-Transmission Operations Reliability Standards; Interconnection...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ..., RM13-14-000 and RM13-15-000] Monitoring System Conditions--Transmission Operations Reliability...) 502-6817, [email protected] . Robert T. Stroh (Legal Information), Office of the General... Reliability Standards ``address the important reliability goal of ensuring that the transmission system is...

  3. How reliable are clinical systems in the UK NHS? A study of seven NHS organisations

    PubMed Central

    Franklin, Bryony Dean; Moorthy, Krishna; Cooke, Matthew W; Vincent, Charles

    2012-01-01

    Background It is well known that many healthcare systems have poor reliability; however, the size and pervasiveness of this problem and its impact has not been systematically established in the UK. The authors studied four clinical systems: clinical information in surgical outpatient clinics, prescribing for hospital inpatients, equipment in theatres, and insertion of peripheral intravenous lines. The aim was to describe the nature, extent and variation in reliability of these four systems in a sample of UK hospitals, and to explore the reasons for poor reliability. Methods Seven UK hospital organisations were involved; each system was studied in three of these. The authors took delivery of the systems' intended outputs to be a proxy for the reliability of the system as a whole. For example, for clinical information, 100% reliability was defined as all patients having an agreed list of clinical information available when needed during their appointment. Systems factors were explored using semi-structured interviews with key informants. Common themes across the systems were identified. Results Overall reliability was found to be between 81% and 87% for the systems studied, with significant variation between organisations for some systems: clinical information in outpatient clinics ranged from 73% to 96%; prescribing for hospital inpatients 82–88%; equipment availability in theatres 63–88%; and availability of equipment for insertion of peripheral intravenous lines 80–88%. One in five reliability failures were associated with perceived threats to patient safety. Common factors causing poor reliability included lack of feedback, lack of standardisation, and issues such as access to information out of working hours. Conclusions Reported reliability was low for the four systems studied, with some common factors behind each. However, this hides significant variation between organisations for some processes, suggesting that some organisations have managed to create more reliable systems. Standardisation of processes would be expected to have significant benefit. PMID:22495099

  4. Higher-order kinetic expansion of quantum dissipative dynamics: mapping quantum networks to kinetic networks.

    PubMed

    Wu, Jianlan; Cao, Jianshu

    2013-07-28

    We apply a new formalism to derive the higher-order quantum kinetic expansion (QKE) for studying dissipative dynamics in a general quantum network coupled with an arbitrary thermal bath. The dynamics of the system population is described by a time-convoluted kinetic equation, where the time-nonlocal rate kernel is systematically expanded in orders of the off-diagonal elements of the system Hamiltonian. At second order, the rate kernel recovers the expression of the noninteracting-blip approximation method. The higher-order corrections in the rate kernel account for the effects of multi-site quantum coherence and bath relaxation. In a quantum harmonic bath, the rate kernels of different orders are derived analytically. As demonstrated by four examples, the higher-order QKE can reliably predict quantum dissipative dynamics, comparing well with the hierarchic equation approach. More importantly, the higher-order rate kernels can distinguish and quantify distinct nontrivial quantum coherent effects, such as long-range energy transfer from quantum tunneling and quantum interference arising from the phase accumulation of interactions.

  5. The accuracy of Internet search engines to predict diagnoses from symptoms can be assessed with a validated scoring system.

    PubMed

    Shenker, Bennett S

    2014-02-01

    To validate a scoring system that evaluates the ability of Internet search engines to correctly predict diagnoses when symptoms are used as search terms. We developed a five point scoring system to evaluate the diagnostic accuracy of Internet search engines. We identified twenty diagnoses common to a primary care setting to validate the scoring system. One investigator entered the symptoms for each diagnosis into three Internet search engines (Google, Bing, and Ask) and saved the first five webpages from each search. Other investigators reviewed the webpages and assigned a diagnostic accuracy score. They rescored a random sample of webpages two weeks later. To validate the five point scoring system, we calculated convergent validity and test-retest reliability using Kendall's W and Spearman's rho, respectively. We used the Kruskal-Wallis test to look for differences in accuracy scores for the three Internet search engines. A total of 600 webpages were reviewed. Kendall's W for the raters was 0.71 (p<0.0001). Spearman's rho for test-retest reliability was 0.72 (p<0.0001). There was no difference in scores based on Internet search engine. We found a significant difference in scores based on the webpage's order on the Internet search engine webpage (p=0.007). Pairwise comparisons revealed higher scores in the first webpages vs. the fourth (corr p=0.009) and fifth (corr p=0.017). However, this significance was lost when creating composite scores. The five point scoring system to assess diagnostic accuracy of Internet search engines is a valid and reliable instrument. The scoring system may be used in future Internet research. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  6. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  7. Fuzzy probabilistic design of water distribution networks

    NASA Astrophysics Data System (ADS)

    Fu, Guangtao; Kapelan, Zoran

    2011-05-01

    The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real numbers but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The degree of satisfaction is represented by a necessity measure or a belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability, and it is effectively combined with the nondominated sorting genetic algorithm II (NSGAII) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and the Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.

  8. Applying reliability analysis to design electric power systems for More-electric aircraft

    NASA Astrophysics Data System (ADS)

    Zhang, Baozhu

    The More-Electric Aircraft (MEA) is a type of aircraft that replaces conventional hydraulic and pneumatic systems with electrically powered components. These changes have significantly challenged the aircraft electric power system design. This thesis investigates how reliability analysis can be applied to automatically generate system topologies for the MEA electric power system. We first use a traditional method of reliability block diagrams to analyze the reliability level on different system topologies. We next propose a new methodology in which system topologies, constrained by a set reliability level, are automatically generated. The path-set method is used for analysis. Finally, we interface these sets of system topologies with control synthesis tools to automatically create correct-by-construction control logic for the electric power system.
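
    As a rough illustration of the reliability-block-diagram step (not the thesis's own tooling), the Python sketch below evaluates a toy topology in which two redundant paths, each a generator and a contactor in series, feed a single load bus; the component reliabilities are invented:

      # Sketch: reliability block diagram evaluation for a toy MEA power topology.
      # Two redundant paths (generator + contactor in series) feed one load bus.
      # Component reliabilities are illustrative values, not data from the thesis.

      def series(*rs):
          """Series blocks: all components must work."""
          out = 1.0
          for r in rs:
              out *= r
          return out

      def parallel(*rs):
          """Parallel blocks: at least one component must work."""
          out = 1.0
          for r in rs:
              out *= (1.0 - r)
          return 1.0 - out

      r_gen, r_contactor, r_bus = 0.999, 0.9995, 0.99999

      path = series(r_gen, r_contactor)      # one generator feed
      supply = parallel(path, path)          # two redundant feeds
      system = series(supply, r_bus)         # redundant supply plus the load bus

      print(f"single path reliability : {path:.6f}")
      print(f"redundant supply        : {supply:.6f}")
      print(f"system reliability      : {system:.6f}")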

  9. Development of the Connecticut Airway Risk Evaluation (CARE) system to improve handoff communication in pediatric patients with tracheotomy.

    PubMed

    Lawrason Hughes, Amy; Murray, Nicole; Valdez, Tulio A; Kelly, Raeanne; Kavanagh, Katherine

    2014-01-01

    National attention has focused on the importance of handoffs in medicine. Our practice during airway patient handoffs is to communicate a patient-specific emergency plan for airway reestablishment; patients who are not intubatable by standard means are at higher risk for failure. There is currently no standard classification system describing airway risk in tracheotomized patients. To introduce and assess the interrater reliability of a simple airway risk classification system, the Connecticut Airway Risk Evaluation (CARE) system. We created a novel classification system, the CARE system, based on ease of intubation and the need for ventilation: group 1, easily intubatable; group 2, intubatable with special equipment and/or maneuvers; group 3, not intubatable. A "v" was appended to any group number to indicate the need for mechanical ventilation. We performed a retrospective medical chart review of patients aged 0 to 18 years who were undergoing tracheotomy at our tertiary care pediatric hospital between January 2000 and April 2011. Each patient's medical history, including airway disease and means of intubation, was reviewed by 4 raters. Patient airways were separately rated as CARE groups 1, 2, or 3, each group with or without a v appended, as appropriate, based on the available information. After the patients were assigned to an airway group by each of the 4 raters, the interrater reliability was calculated to determine the ease of use of the rating system. We identified complete data for 155 of 169 patients (92%), resulting in a total of 620 ratings. Based on the patient's ease of intubation, raters categorized tracheotomized patients into group 1 (70%, 432 of 620); group 2 (25%, 157 of 620); or group 3 (5%, 29 of 620), each with a v appended if appropriate. The interrater reliability was κ = 0.95. We propose an airway risk classification system for tracheotomized patients, CARE, that has high interrater reliability and is easy to use and interpret. As medical providers and national organizations place more focus on improvements in interprovider communication, the creation of an airway handoff tool is integral to improving patient safety and airway management strategies following tracheotomy complications.
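
    The abstract does not say which multi-rater statistic was used; Fleiss' kappa is one common choice for four raters assigning categorical groups, and the Python sketch below implements it on an invented ratings table (patients x raters):

      # Sketch: Fleiss' kappa for agreement among several raters assigning
      # categorical airway groups (e.g., CARE groups 1, 2, 3).
      # The ratings below are invented for illustration.
      import numpy as np

      def fleiss_kappa(ratings, categories):
          """ratings: (n_subjects, n_raters) array of category labels."""
          n_subjects, n_raters = ratings.shape
          # Count how many raters assigned each category to each subject.
          counts = np.zeros((n_subjects, len(categories)))
          for j, c in enumerate(categories):
              counts[:, j] = (ratings == c).sum(axis=1)
          # Per-subject agreement and overall observed agreement.
          p_i = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
          p_bar = p_i.mean()
          # Expected agreement from the marginal category proportions.
          p_j = counts.sum(axis=0) / (n_subjects * n_raters)
          p_e = (p_j ** 2).sum()
          return (p_bar - p_e) / (1.0 - p_e)

      ratings = np.array([[1, 1, 1, 1],
                          [2, 2, 2, 1],
                          [3, 3, 3, 3],
                          [1, 1, 2, 1],
                          [2, 2, 2, 2]])
      print("Fleiss' kappa:", round(fleiss_kappa(ratings, categories=[1, 2, 3]), 3))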

  10. Design of fuel cell powered data centers for sufficient reliability and availability

    NASA Astrophysics Data System (ADS)

    Ritchie, Alexa J.; Brouwer, Jacob

    2018-04-01

    It is challenging to design a sufficiently reliable fuel cell electrical system for use in data centers, which require 99.9999% uptime. Such a system could lower emissions and increase data center efficiency, but the reliability and availability of such a system must be analyzed and understood. Currently, extensive backup equipment is used to ensure electricity availability. The proposed design alternative uses multiple fuel cell systems each supporting a small number of servers to eliminate backup power equipment provided the fuel cell design has sufficient reliability and availability. Potential system designs are explored for the entire data center and for individual fuel cells. Reliability block diagram analysis of the fuel cell systems was accomplished to understand the reliability of the systems without repair or redundant technologies. From this analysis, it was apparent that redundant components would be necessary. A program was written in MATLAB to show that the desired system reliability could be achieved by a combination of parallel components, regardless of the number of additional components needed. Having shown that the desired reliability was achievable through some combination of components, a dynamic programming analysis was undertaken to assess the ideal allocation of parallel components.
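
    The MATLAB analysis itself is not reproduced here, but the core redundancy argument can be sketched in a few lines of Python: for independent components of reliability r in parallel, system reliability is 1 - (1 - r)^n, so the smallest n meeting a target such as 99.9999% follows directly. The per-module reliability below is an assumed value:

      # Sketch: how many parallel fuel cell modules are needed to reach a
      # target reliability, assuming independent modules of equal reliability.
      # The per-module reliability value is illustrative, not from the paper.
      import math

      def parallel_reliability(r, n):
          return 1.0 - (1.0 - r) ** n

      def modules_needed(r, target):
          # Smallest n with 1 - (1 - r)^n >= target.
          return math.ceil(math.log(1.0 - target) / math.log(1.0 - r))

      r_module = 0.98      # assumed single-module reliability over the mission time
      target = 0.999999    # "six nines" data center requirement

      n = modules_needed(r_module, target)
      print(f"modules required: {n}")
      print(f"achieved reliability: {parallel_reliability(r_module, n):.8f}")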

  11. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing a credible reliability assessment a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, computational power, and ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, that result from replicated redundant hardware, and the modeling of factors that can reduce reliability without a concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.
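
    As a minimal illustration of the Markov approach described here (far smaller than the article's models, which include detailed fault-handling behavior), the Python sketch below computes the reliability of a duplex system with imperfect coverage by exponentiating the generator matrix; the rates and coverage are invented:

      # Sketch: continuous-time Markov reliability model of a duplex system.
      # States: 0 = both units up, 1 = one unit up (covered failure),
      #         2 = system failed (second failure or uncovered first failure).
      # Rates and coverage are illustrative values.
      import numpy as np
      from scipy.linalg import expm

      lam = 1e-4   # per-hour failure rate of one unit
      c   = 0.99   # coverage: probability a unit failure is handled successfully

      # Generator matrix Q (rows sum to zero); state 2 is absorbing.
      Q = np.array([
          [-2 * lam,  2 * lam * c,  2 * lam * (1 - c)],
          [0.0,      -lam,          lam              ],
          [0.0,       0.0,          0.0              ],
      ])

      t = 1000.0                        # mission time in hours
      P = expm(Q * t)                   # transition probabilities over [0, t]
      reliability = P[0, 0] + P[0, 1]   # probability of not reaching state 2
      print(f"duplex reliability at {t:.0f} h: {reliability:.6f}")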

  12. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.

  13. Analyzing the effect of transmissivity uncertainty on the reliability of a model of the northwestern Sahara aquifer system

    NASA Astrophysics Data System (ADS)

    Zammouri, Mounira; Ribeiro, Luis

    2017-05-01

    A groundwater flow model of the transboundary Saharan aquifer system was developed in 2003 and is used for management and decision-making by Algeria, Tunisia and Libya. In decision-making processes, reliability plays a decisive role. This paper examines the reliability of the Saharan aquifer model and aims to detect the shortcomings of a model considered to be properly calibrated. After presenting the calibration results of the 2003 modelling effort, the uncertainty in the model arising from the scarcity of groundwater level and transmissivity data is analyzed using a kriging technique and a stochastic approach. The structural analysis of steady-state piezometry and of the logarithms of transmissivity was carried out for the Continental Intercalaire (CI) and the Complexe Terminal (CT) aquifers. The available data (piezometry and transmissivity) were compared with the calculated values using a geostatistical approach. Using a stochastic approach, 2500 realizations of a log-normal random transmissivity field of the CI aquifer were performed to assess the errors in the model output due to the uncertainty in transmissivity. Two types of poor calibration are shown. In some regions, calibration should be improved using the available data; in other areas, refining the model requires gathering new data to enhance knowledge of the aquifer system. The stochastic simulation results showed that the calculated drawdowns in 2050 could be higher than the values predicted by the calibrated model.

  14. Reliability of Fault Tolerant Control Systems. Part 1

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva

    2001-01-01

    This paper reports Part I of a two-part effort intended to delineate the relationship between reliability and fault-tolerant control in a quantitative manner. Reliability analysis of fault-tolerant control systems is performed using Markov models. Reliability properties peculiar to fault-tolerant control systems are emphasized; as a consequence, coverage of failures through redundancy management can be severely limited. It is shown that in the early life of a system composed of highly reliable subsystems, the reliability of the overall system is affine with respect to coverage, and inadequate coverage induces dominant single-point failures. The utility of some existing software tools for assessing the reliability of fault-tolerant control systems is also discussed. Coverage modeling is attempted in Part II in a way that captures its dependence on the control performance and on the diagnostic resolution.
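
    The affine-in-coverage observation can be made concrete with the textbook duplex formula R_sys = R^2 + 2cR(1 - R): when the subsystem reliability R is close to 1, system unreliability is dominated by the uncovered term. The Python sketch below, using invented numbers rather than the paper's models, shows how strongly coverage drives early-life reliability:

      # Sketch: system reliability of a two-unit redundant system as a function
      # of coverage c, R_sys = R^2 + 2*c*R*(1 - R).  Values are illustrative.

      def duplex_reliability(r_unit, coverage):
          return r_unit ** 2 + 2.0 * coverage * r_unit * (1.0 - r_unit)

      r_unit = 0.999  # highly reliable subsystem over the mission time
      for c in (1.0, 0.999, 0.99, 0.9):
          r_sys = duplex_reliability(r_unit, c)
          print(f"coverage {c:>5}: R_sys = {r_sys:.7f}, unreliability = {1 - r_sys:.2e}")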

  15. Dynamic decision-making for reliability and maintenance analysis of manufacturing systems based on failure effects

    NASA Astrophysics Data System (ADS)

    Zhang, Ding; Zhang, Yingjie

    2017-09-01

    A framework for reliability and maintenance analysis of job shop manufacturing systems is proposed in this paper. An efficient preventive maintenance (PM) policy based on failure effects analysis (FEA) is proposed. Subsequently, reliability evaluation and component importance measures based on FEA are performed under the PM policy. A job shop manufacturing system is used to validate the reliability evaluation and the dynamic maintenance policy. The results obtained are compared with existing methods and the effectiveness is validated. Issues that are often only vaguely understood, such as network modelling, vulnerability identification, evaluation criteria for repairable systems, and the PM policy used during manufacturing system reliability analysis, are elaborated. This framework can support reliability optimisation and rational allocation of maintenance resources for job shop manufacturing systems.

  16. Reliability analysis and initial requirements for FC systems and stacks

    NASA Astrophysics Data System (ADS)

    Åström, K.; Fontell, E.; Virtanen, S.

    In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
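
    The staged 5 × 5 analysis can be roughed out with a Weibull stack reliability, treating a set of five parallel stacks as failed only when all five stacks have failed, which simplifies the abstract's three-state stack model. The Weibull parameters in the Python sketch below are assumed values, not manufacturer data:

      # Sketch: reliability of a 5 x 5 SOFC stack configuration
      # (series of 5 sets of 5 parallel stacks) with Weibull stack lifetimes.
      # Weibull parameters are illustrative.
      import math

      def weibull_reliability(t, eta, beta):
          """R(t) = exp(-(t/eta)^beta) for a single stack."""
          return math.exp(-((t / eta) ** beta))

      def system_reliability(t, eta, beta, parallel=5, series=5):
          r_stack = weibull_reliability(t, eta, beta)
          r_set = 1.0 - (1.0 - r_stack) ** parallel   # a set survives if any stack survives
          return r_set ** series                      # all sets in series must survive

      eta, beta = 40000.0, 2.0   # assumed characteristic life (h) and shape factor
      for t in (5000.0, 10000.0, 20000.0):
          print(f"t = {t:>7.0f} h: stack R = {weibull_reliability(t, eta, beta):.4f}, "
                f"system R = {system_reliability(t, eta, beta):.4f}")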

  17. A reliability analysis tool for SpaceWire network

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. Based on the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields the system reliability matrix, and the reliability of the network system is deduced by integrating the reliability indexes in this matrix. Using this method, we developed a VC-based reliability analysis tool for SpaceWire networks, which also implements the computation schemes for the reliability matrix and for multi-path-task reliability. With this tool, we analyze several cases on typical architectures. The analytic results indicate that a redundant architecture has better reliability performance than a basic one; in practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will guide both task division and topology selection during the SpaceWire network system design phase.

  18. Availability Estimation for Facilities in Extreme Geographical Locations

    NASA Technical Reports Server (NTRS)

    Fischer, Gerd M.; Omotoso, Oluseun; Chen, Guangming; Evans, John W.

    2012-01-01

    A value-added analysis of the Reliability, Availability and Maintainability of McMurdo Ground Station was developed, which will be a useful tool for system managers in sparing, maintenance planning, and determining the vital performance metrics needed for readiness assessment of the upgrades to the McMurdo System. The output of this study can also be used as inputs and recommendations for the application of Reliability Centered Maintenance (RCM) to the system. ReliaSoft's BlockSim, a commercial reliability analysis software package, has been used to model the availability of the system upgrade to the National Aeronautics and Space Administration (NASA) Near Earth Network (NEN) Ground Station at McMurdo Station in Antarctica. The logistics challenges due to the closure of access to McMurdo Station during the Antarctic winter were modeled using a weighted composite of four Weibull distributions, one of the possible choices of statistical distributions in the software program and commonly used to account for failure rates of components supplied by different manufacturers. The inaccessibility of the antenna site on a hill outside McMurdo Station throughout the year due to severe weather was modeled with a Weibull distribution for repair crew availability. The Weibull distribution is based on an analysis of the available weather data for the antenna site for 2007 in combination with the rules for travel restrictions due to severe weather imposed by the administrating agency, the National Science Foundation (NSF). The simulations resulted in an upper bound for the system availability and allowed for the identification of components that would improve availability through a higher on-site spare count than initially planned.

  19. Perceived Transcultural Self-Efficacy of Nurses in General Hospitals in Guangzhou, China

    PubMed Central

    Li, Juan; He, Zhuang; Luo, Yong; Zhang, Rong

    2016-01-01

    Background Conflicts arising from cultural diversity among patients and hospital staff in China have become intense. Hospitals have an urgent need to improve transcultural self-efficacy of nurses for providing effective transcultural nursing. Objective The purpose of the research was to (a) evaluate the current status of perceived transcultural self-efficacy of nurses in general hospitals in Guangzhou, China; (b) explore associations between demographic characteristics of nurses and their perceived transcultural self-efficacy; and (c) assess the reliability and validity of scores on the Chinese version of the Transcultural Self-Efficacy Tool (TSET). Methods A cross-sectional survey of registered nurses from three general hospitals was conducted. Quota and convenience sampling were used. Participants provided demographic information and answered questions on the TSET. Results A total of 1,156 registered nurses took part. Most nurses had a moderate level of self-efficacy on the Cognitive (87.9%), Practical (87%), and Affective (89.2%) TSET subscales. Nurses who were older; who had more years of work experience, higher professional titles, higher incomes, and a minority background; and who were officially employed (not temporary positions) had higher perceived transcultural self-efficacy. Reliability estimated using Cronbach’s alpha was .99 for the total TSET score; reliability for the three subscales ranged from .97 to .98. Confirmatory factor analysis of TSET scores showed good fit with a three-factor model. Conclusion The results of this study can provide insights and guidelines for hospital nursing management to facilitate design of in-service education systems to improve transcultural self-efficacy of nurses. PMID:27454552
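
    Cronbach's alpha, reported above for the TSET, follows a simple formula; the Python sketch below applies it to a small invented item-response matrix:

      # Sketch: Cronbach's alpha for a set of Likert-type items
      # (respondents x items).  The responses below are invented.
      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) matrix of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1)       # variance of each item
          total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
          return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

      responses = np.array([[4, 5, 4, 4],
                            [2, 2, 3, 2],
                            [5, 5, 5, 4],
                            [3, 3, 2, 3],
                            [4, 4, 4, 5]])
      print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))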

  20. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME), Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  1. Validity of the Family Asthma Management System Scale with an urban African-American sample.

    PubMed

    Celano, Marianne; Klinnert, Mary D; Holsey, Chanda Nicole; McQuaid, Elizabeth L

    2011-06-01

    To examine the reliability and validity of the Family Asthma Management System Scale for low-income African-American children with poor asthma control and caregivers under stress. The FAMSS assesses eight aspects of asthma management from a family systems perspective. Forty-three children, ages 8-13, and caregivers were interviewed with the FAMSS; caregivers completed measures of primary care quality, family functioning, parenting stress, and psychological distress. Children rated their relatedness with the caregiver, and demonstrated inhaler technique. Medical records were reviewed for dates of outpatient visits for asthma. The FAMSS demonstrated good internal consistency. Higher scores were associated with adequate inhaler technique, recent outpatient care, less parenting stress and better family functioning. Higher scores on the Collaborative Relationship with Provider subscale were associated with greater perceived primary care quality. The FAMSS demonstrated relevant associations with asthma management criteria and family functioning for a low-income, African-American sample.

  2. The reliability of in-training assessment when performance improvement is taken into account.

    PubMed

    van Lohuizen, Mirjam T; Kuks, Jan B M; van Hell, Elisabeth A; Raat, A N; Stewart, Roy E; Cohen-Schotanus, Janke

    2010-12-01

    During in-training assessment students are frequently assessed over a longer period of time and therefore it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When taking performance improvement into account, reliability coefficients were higher. The number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, when studying reliability of in-training assessment, performance improvement should be considered.

  3. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings.

    PubMed

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H; Luthardt, Ralph Gunnar; Kuhn, Katharina

    2018-02-01

    The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. To modify the CAD surface model using different parameters (data density; enlargement in different directions), varying fit was created. Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman (bias) analyses]: mean of the differences (bias) between two measurements [the closer to zero the mean (bias) is, the higher the agreement between the two measurements] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was -15.7 to 3.5 μm for LMMs and -3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (-15.7 μm/-7.6 μm). With the intra-rater reliability and agreement (repeatability) for LMMs, the ICC was 0.970-0.998 and bias was -1.3 to 2.3 μm. LMMs showed lower interrater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still clinically acceptable. LMMs showed very high intra-rater reliability and agreement for all measurement points, indicating high repeatability.
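
    The Bland-Altman bias quoted above is simply the mean of the paired differences, usually reported together with limits of agreement at ±1.96 standard deviations; the Python sketch below computes both for invented paired gap measurements:

      # Sketch: Bland-Altman bias and limits of agreement for paired measurements
      # of the same gaps by two observers (values in micrometres, invented).
      import numpy as np

      rater_a = np.array([52.0, 61.5, 48.2, 70.1, 55.3, 64.8])
      rater_b = np.array([50.4, 63.0, 47.5, 68.9, 57.0, 66.1])

      diff = rater_a - rater_b
      bias = diff.mean()                       # mean of the differences
      sd = diff.std(ddof=1)
      loa = (bias - 1.96 * sd, bias + 1.96 * sd)

      print(f"bias: {bias:.2f} um")
      print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f} um")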

  4. Reliability of light microscopy and a computer-assisted replica measurement technique for evaluating the fit of dental copings

    PubMed Central

    Rudolph, Heike; Ostertag, Silke; Ostertag, Michael; Walter, Michael H.; Luthardt, Ralph Gunnar; Kuhn, Katharina

    2018-01-01

    Abstract The aim of this in vitro study was to assess the reliability of two measurement systems for evaluating the marginal and internal fit of dental copings. Material and Methods Sixteen CAD/CAM titanium copings were produced for a prepared maxillary canine. To modify the CAD surface model using different parameters (data density; enlargement in different directions), varying fit was created. Five light-body silicone replicas representing the gap between the canine and the coping were made for each coping and for each measurement method: (1) light microscopy measurements (LMMs); and (2) computer-assisted measurements (CASMs) using an optical digitizing system. Two investigators independently measured the marginal and internal fit using both methods. The inter-rater reliability [intraclass correlation coefficient (ICC)] and agreement [Bland-Altman (bias) analyses]: mean of the differences (bias) between two measurements [the closer to zero the mean (bias) is, the higher the agreement between the two measurements] were calculated for several measurement points (marginal-distal, marginal-buccal, axial-buccal, incisal). For the LMM technique, one investigator repeated the measurements to determine repeatability (intra-rater reliability and agreement). Results For inter-rater reliability, the ICC was 0.848-0.998 for LMMs and 0.945-0.999 for CASMs, depending on the measurement point. Bland-Altman bias was −15.7 to 3.5 μm for LMMs and −3.0 to 1.9 μm for CASMs. For LMMs, the marginal-distal and marginal-buccal measurement points showed the lowest ICC (0.848/0.978) and the highest bias (-15.7 μm/-7.6 μm). With the intra-rater reliability and agreement (repeatability) for LMMs, the ICC was 0.970-0.998 and bias was −1.3 to 2.3 μm. Conclusion LMMs showed lower interrater reliability and agreement at the marginal measurement points than CASMs, which indicates a more subjective influence with LMMs at these measurement points. The values, however, were still clinically acceptable. LMMs showed very high intra-rater reliability and agreement for all measurement points, indicating high repeatability. PMID:29412364

  5. Identification of Classified Information in Unclassified DoD Systems During the Audit of Internal Controls and Data Reliability in the Deployable Disbursing System

    DTIC Science & Technology

    2009-02-17

    Report No. D-2009-054: Identification of Classified Information in Unclassified DoD Systems During the Audit of Internal Controls and Data Reliability in the Deployable Disbursing System.

  6. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry.

    PubMed

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Broda, Anja; Scholz, Markus

    2016-05-26

    Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.
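
    For two measurement methods, the concordance correlation coefficient reduces to Lin's form, 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2); the OCCC used in the study generalizes this to more than two observers or methods. The Python sketch below computes the two-method version on invented paired measurements:

      # Sketch: Lin's concordance correlation coefficient (CCC) between a
      # body-scan measurement and a classical manual measurement of the same
      # quantity (values in cm, invented).
      import numpy as np

      def concordance_ccc(x, y):
          x, y = np.asarray(x, float), np.asarray(y, float)
          sxy = np.cov(x, y, ddof=1)[0, 1]
          return 2.0 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

      body_scan = [93.1, 101.4, 88.0, 110.2, 95.7, 99.3]
      classical = [92.5, 100.8, 88.9, 109.0, 96.4, 98.7]
      print("CCC:", round(concordance_ccc(body_scan, classical), 3))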

  7. Emulation applied to reliability analysis of reconfigurable, highly reliable, fault-tolerant computing systems

    NASA Technical Reports Server (NTRS)

    Migneault, G. E.

    1979-01-01

    Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.

  8. Space Transportation System Availability Relationships to Life Cycle Cost

    NASA Technical Reports Server (NTRS)

    Rhodes, Russel E.; Donahue, Benjamin B.; Chen, Timothy T.

    2009-01-01

    Future space transportation architectures and designs must be affordable. Consequently, their Life Cycle Cost (LCC) must be controlled. For the LCC to be controlled, it is necessary to identify all the requirements and elements of the architecture at the beginning of the concept phase. Controlling LCC requires the establishment of the major operational cost drivers. Two of these major cost drivers are reliability and maintainability, in other words, the system's availability (responsiveness). Potential reasons that may drive the inherent availability requirement are the need to control the number of unique parts and the spare parts required to support the transportation system's operation. For more typical space transportation systems used to place satellites in space, the productivity of the system will drive the launch cost. This system productivity is the resultant output of the system availability. Availability is equal to the mean uptime divided by the sum of the mean uptime plus the mean downtime. Since many operational factors cannot be projected early in the definition phase, the focus will be on inherent availability which is equal to the mean time between a failure (MTBF) divided by the MTBF plus the mean time to repair (MTTR) the system. The MTBF is a function of reliability or the expected frequency of failures. When the system experiences failures the result is added operational flow time, parts consumption, and increased labor with an impact to responsiveness resulting in increased LCC. The other function of availability is the MTTR, or maintainability. In other words, how accessible is the failed hardware that requires replacement and what operational functions are required before and after change-out to make the system operable. This paper will describe how the MTTR can be equated to additional labor, additional operational flow time, and additional structural access capability, all of which drive up the LCC. A methodology will be presented that provides the decision makers with the understanding necessary to place constraints on the design definition. This methodology for the major drivers will determine the inherent availability, safety, reliability, maintainability, and the life cycle cost of the fielded system. This methodology will focus on the achievement of an affordable, responsive space transportation system. It is the intent of this paper to not only provide the visibility of the relationships of these major attribute drivers (variables) to each other and the resultant system inherent availability, but also to provide the capability to bound the variables, thus providing the insight required to control the system's engineering solution. An example of this visibility is the need to provide integration of similar discipline functions to allow control of the total parts count of the space transportation system. Also, selecting a reliability requirement will place a constraint on parts count to achieve a given inherent availability requirement, or require accepting a larger parts count with the resulting higher individual part reliability requirements. This paper will provide an understanding of the relationship of mean repair time (mean downtime) to maintainability (accessibility for repair), and both mean time between failure (reliability of hardware) and the system inherent availability.
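
    The inherent-availability relationship spelled out above is easy to put into numbers; the Python sketch below, with illustrative MTBF and MTTR values, shows how either raising MTBF or lowering MTTR pushes availability up:

      # Sketch: inherent availability A_i = MTBF / (MTBF + MTTR).
      # The MTBF/MTTR values are illustrative, not from the paper.

      def inherent_availability(mtbf_hours, mttr_hours):
          return mtbf_hours / (mtbf_hours + mttr_hours)

      cases = [
          ("baseline",             1000.0, 50.0),
          ("better access (MTTR)", 1000.0, 10.0),
          ("better parts (MTBF)",  5000.0, 50.0),
      ]
      for name, mtbf, mttr in cases:
          print(f"{name:<22} A_i = {inherent_availability(mtbf, mttr):.4f}")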

  9. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 4: HARP Output (HARPO) graphics display user's guide

    NASA Technical Reports Server (NTRS)

    Sproles, Darrell W.; Bavuso, Salvatore J.

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.

  10. System and method for leveraging human physiological traits to control microprocessor frequency

    DOEpatents

    Shye, Alex; Pan, Yan; Scholbrock, Benjamin; Miller, J. Scott; Memik, Gokhan; Dinda, Peter A; Dick, Robert P

    2014-03-25

    A system and method for leveraging physiological traits to control microprocessor frequency are disclosed. In some embodiments, the system and method may optimize, for example, a particular processor-based architecture based on, for example, end user satisfaction. In some embodiments, the system and method may determine, for example, whether their users are satisfied to provide higher efficiency, improved reliability, reduced power consumption, increased security, and a better user experience. The system and method may use, for example, biometric input devices to provide information about a user's physiological traits to a computer system. Biometric input devices may include, for example, one or more of the following: an eye tracker, a galvanic skin response sensor, and/or a force sensor.

  11. 18 CFR 39.3 - Electric Reliability Organization certification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... operators of the Bulk-Power System, and other interested parties for improvement of the Electric Reliability... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Electric Reliability..., Reliability Standards that provide for an adequate level of reliability of the Bulk-Power System, and (2) Has...

  12. Reliability-Related Issues in the Context of Student Evaluations of Teaching in Higher Education

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2015-01-01

    Student evaluations of teaching (SET) have been the principal instrument to elicit students' opinions in higher education institutions. Many decisions, including high-stake ones, are made based on SET scores reported by students. In this respect, reliability of SET scores is of considerable importance. This paper has an argument that there are…

  13. Pupil light reflex evoked by light-emitting diode and computer screen: Methodology and association with need for recovery in daily life.

    PubMed

    Wang, Yang; Zekveld, Adriana A; Wendt, Dorothea; Lunner, Thomas; Naylor, Graham; Kramer, Sophia E

    2018-01-01

    The pupil light reflex (PLR) has been widely used as a method for evaluating parasympathetic activity. The first aim of the present study is to develop a PLR measurement using a computer screen set-up and compare its results with the PLR generated by a more conventional setup using a light-emitting diode (LED). The parasympathetic nervous system, which is known to control the 'rest and digest' response of the human body, is considered to be associated with daily life fatigue. However, only a few studies have attempted to test the relationship between self-reported daily fatigue and physiological measurement of the parasympathetic nervous system. Therefore, the second aim of this study was to investigate the relationship between daily-life fatigue, assessed using the Need for Recovery scale, and parasympathetic activity, as indicated by the PLR parameters. A pilot study was conducted first to develop a PLR measurement set-up using a computer screen. PLRs evoked by light stimuli with different characteristics were recorded to confirm the influence of light intensity, flash duration, and color on the PLRs evoked by the system. In the subsequent experimental study, we recorded the PLR of 25 adult participants to light flashes generated by the screen set-up as well as by a conventional LED set-up. PLR parameters relating to parasympathetic and sympathetic activity were calculated from the pupil responses. We tested the split-half reliability across two consecutive blocks of trials, and the relationships between the parameters of PLRs evoked by the two set-ups. Participants rated their need for recovery prior to the PLR recordings. PLR parameters acquired in the screen and LED set-ups showed good reliability for amplitude-related parameters. The PLRs evoked by both set-ups were consistent, but showed systematic differences in the absolute values of all parameters. Additionally, higher need for recovery was associated with faster and larger constriction of the PLR. This study assessed the PLR generated by a computer screen and the PLR generated by an LED. The good reliability within set-ups and the consistency between the PLRs evoked by the set-ups indicate that both systems provide a valid way to evoke the PLR. A higher need for recovery was associated with faster and larger constricting PLRs, suggesting increased levels of parasympathetic nervous system activity in people experiencing higher levels of need for recovery on a daily basis.

  14. Status and Needs of Power Electronics for Photovoltaic Inverters: Summary Document

    NASA Astrophysics Data System (ADS)

    West, R.; Mauch, K.; Qin, Y. C.; Mohan, N.; Bonn, R.

    2002-05-01

    Photovoltaic inverters are the most mature of any DER inverter, and their mean time to first failure (MTFF) is about five years. This is an unacceptable MTFF and will inhibit the rapid expansion of PV. With all DER technologies (solar, wind, fuel cells, and microturbines), the inverter is still an immature product that will result in reliability problems in fielded systems. The increasing need for all of these technologies to have a reliable inverter provides a unique opportunity to address these needs with focused R&D projects. The requirements for these inverters are so similar that modular designs with universal features are obviously the best solution for a 'next generation' inverter. A 'next generation' inverter will have improved performance, higher reliability, and improved profitability. Sandia National Laboratories has estimated that the development of a 'next generation' inverter could require approximately 20 man-years of work over an 18- to 24-month time frame, and that a government-industry partnership will greatly improve the chances of success.

  15. Modified Y-TZP Core Design Improves All-ceramic Crown Reliability

    PubMed Central

    Silva, N.R.F.A.; Bonfante, E.A.; Rafferty, B.T.; Zavanelli, R.A.; Rekow, E.D.; Thompson, V.P.; Coelho, P.G.

    2011-01-01

    This study tested the hypothesis that all-ceramic core-veneer system crown reliability is improved by modification of the core design. We modeled a tooth preparation by reducing the height of proximal walls by 1.5 mm and the occlusal surface by 2.0 mm. The CAD-based tooth preparation was replicated and positioned in a dental articulator for core and veneer fabrication. Standard (0.5 mm uniform thickness) and modified (2.5 mm height lingual and proximal cervical areas) core designs were produced, followed by the application of veneer porcelain for a total thickness of 1.5 mm. The crowns were cemented to 30-day-aged composite dies and were either single-load-to-failure or step-stress-accelerated fatigue-tested. Use of level probability plots showed significantly higher reliability for the modified core design group. The fatigue fracture modes were veneer chipping not exposing the core for the standard group, and exposing the veneer core interface for the modified group. PMID:21057036

  16. 78 FR 44475 - Protection System Maintenance Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-24

    ... Protection System Maintenance--Phase 2 (Reclosing Relays)). 12. NERC states that the proposed Reliability... of the relay inputs and outputs that are essential to proper functioning of the protection system...] Protection System Maintenance Reliability Standard AGENCY: Federal Energy Regulatory Commission, Energy...

  17. Methods for Calculating Frequency of Maintenance of Complex Information Security System Based on Dynamics of Its Reliability

    NASA Astrophysics Data System (ADS)

    Varlataya, S. K.; Evdokimov, V. E.; Urzov, A. Y.

    2017-11-01

    This article describes a process for calculating the reliability of a complex information security system (CISS), using the technospheric security management model as an example, and shows how the system reliability parameter can be used to determine the frequency of its maintenance, which in turn allows one to assess man-made risks and to forecast natural and man-made emergencies. The relevance of this article is explained by the fact that CISS reliability is closely related to information security (IS) risks. Since reliability (or resiliency) is a probabilistic characteristic of the system showing the possibility of its failure (and, as a consequence, the emergence of threats to the protected information assets), it is seen as a component of the overall IS risk in the system. As is known, a certain acceptable level of IS risk is assigned by experts for a particular information system; when reliability is a risk-forming factor, maintaining an acceptable risk level requires routine analysis of the condition of the CISS and its elements and their timely servicing. The article presents a reliability parameter calculation for a CISS with a mixed type of element connection, and a formula for the dynamics of such a system's reliability is given. The chart of CISS reliability change is an S-shaped curve that can be divided into three periods: an almost invariable high level of reliability, a uniform reduction in reliability, and an almost invariable low level of reliability. Given a minimum acceptable level of reliability, the graph (or formula) can be used to determine the period of time during which the system meets requirements. Ideally, this period should not be longer than the first period of the graph. Thus, the proposed method of calculating the CISS maintenance frequency helps to solve a large and critical task in information asset risk management.
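
    A minimal numerical version of that idea, with an assumed S-shaped reliability curve standing in for the article's formula, is sketched below in Python: given a minimum acceptable reliability, the maintenance interval is the time at which the curve first drops below that threshold:

      # Sketch: choosing a maintenance interval from an S-shaped reliability
      # curve.  The logistic-style curve below is an assumed stand-in for the
      # article's reliability dynamics formula.
      import math

      def reliability(t, t_mid=400.0, steepness=0.02):
          """S-shaped decay from ~1 to ~0 centred at t_mid (time in days)."""
          return 1.0 / (1.0 + math.exp(steepness * (t - t_mid)))

      def maintenance_interval(r_min, step=1.0):
          t = 0.0
          while reliability(t) >= r_min:
              t += step
          return t

      r_min = 0.95   # minimum acceptable reliability set by experts (assumed)
      print(f"service the CISS roughly every {maintenance_interval(r_min):.0f} days "
            f"(reliability falls below {r_min} after that)")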

  18. Reliability testing of the Larsen and Sharp classifications for rheumatoid arthritis of the elbow.

    PubMed

    Jew, Nicholas B; Hollins, Anthony M; Mauck, Benjamin M; Smith, Richard A; Azar, Frederick M; Miller, Robert H; Throckmorton, Thomas W

    2017-01-01

    Two popular systems for classifying rheumatoid arthritis affecting the elbow are the Larsen and Sharp schemes. To our knowledge, no study has investigated the reliability of these 2 systems. We compared the intraobserver and interobserver agreement of the 2 systems to determine whether one is more reliable than the other. The radiographs of 45 patients diagnosed with rheumatoid arthritis affecting the elbow were evaluated. Anteroposterior and lateral radiographs were deidentified and distributed to 6 evaluators (4 fellowship-trained upper extremity surgeons and 2 orthopedic trainees). Each evaluator graded all 45 radiographs according to the Larsen and Sharp scoring methods on 2 occasions, at least 2 weeks apart. Overall intraobserver reliability was 0.93 (95% confidence interval [CI], 0.90-0.95) for the Larsen system and 0.92 (95% CI, 0.86-0.96) for the Sharp classification, both indicating substantial agreement. Overall interobserver reliability was 0.70 (95% CI, 0.60-0.80) for the Larsen classification and 0.68 (95% CI, 0.54-0.81) for the Sharp system, both indicating good agreement. There were no significant differences in the intraobserver or interobserver reliability of the systems overall and no significant differences in reliability between attending surgeons and trainees for either classification system. The Larsen and Sharp systems both show substantial intraobserver reliability and good interobserver agreement for the radiographic classification of rheumatoid arthritis affecting the elbow. Differences in training level did not result in substantial variances in reliability for either system. We conclude that both systems can be reliably used to evaluate rheumatoid arthritis of the elbow by observers of varying training levels. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  19. Reliability program requirements for aeronautical and space system contractors

    NASA Technical Reports Server (NTRS)

    1987-01-01

    General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems including critical ground support equipment are prescribed. The reliability program requirements require (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.

  20. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming.

    PubMed

    Rosenberg, Michael; Thornton, Ashleigh L; Lay, Brendan S; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results.

  1. When the Jeans Do Not Fit: How Stellar Feedback Drives Stellar Kinematics and Complicates Dynamical Modeling in Low-mass Galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El-Badry, Kareem; Quataert, Eliot; Wetzel, Andrew R.

    In low-mass galaxies, stellar feedback can drive gas outflows that generate non-equilibrium fluctuations in the gravitational potential. Using cosmological zoom-in baryonic simulations from the Feedback in Realistic Environments project, we investigate how these fluctuations affect stellar kinematics and the reliability of Jeans dynamical modeling in low-mass galaxies. We find that stellar velocity dispersion and anisotropy profiles fluctuate significantly over the course of galaxies’ starburst cycles. We therefore predict an observable correlation between star formation rate and stellar kinematics: dwarf galaxies with higher recent star formation rates should have systematically higher stellar velocity dispersions. This prediction provides an observational test of the role of stellar feedback in regulating both stellar and dark-matter densities in dwarf galaxies. We find that Jeans modeling, which treats galaxies as virialized systems in dynamical equilibrium, overestimates a galaxy’s dynamical mass during periods of post-starburst gas outflow and underestimates it during periods of net inflow. Short-timescale potential fluctuations lead to typical errors of ∼20% in dynamical mass estimates, even if full three-dimensional stellar kinematics, including the orbital anisotropy, are known exactly. When orbital anisotropy is not known a priori, typical mass errors arising from non-equilibrium fluctuations in the potential are larger than those arising from the mass-anisotropy degeneracy. However, Jeans modeling alone cannot reliably constrain the orbital anisotropy, and problematically, it often favors anisotropy models that do not reflect the true profile. If galaxies completely lose their gas and cease forming stars, fluctuations in the potential subside, and Jeans modeling becomes much more reliable.
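
    For context, the mass estimate that Jeans modeling relies on, and that the potential fluctuations described above bias, is the standard spherical Jeans estimator (quoted here in its usual form; the paper's notation may differ):

      M(<r) \;=\; -\,\frac{r\,\sigma_r^2(r)}{G}
      \left(\frac{d\ln\nu}{d\ln r} + \frac{d\ln\sigma_r^2}{d\ln r} + 2\beta(r)\right),
      \qquad
      \beta(r) \;=\; 1 - \frac{\sigma_\theta^2 + \sigma_\phi^2}{2\,\sigma_r^2}

    Here ν is the tracer number density, σ_r the radial velocity dispersion, and β the orbital anisotropy; the abstract's point is that the steady-state assumption behind this estimator breaks down during starburst-driven potential fluctuations.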

  2. Development of a Kinect Software Tool to Classify Movements during Active Video Gaming

    PubMed Central

    Rosenberg, Michael; Lay, Brendan S.; Ward, Brodie; Nathan, David; Hunt, Daniel; Braham, Rebecca

    2016-01-01

    While it has been established that using full body motion to play active video games results in increased levels of energy expenditure, there is little information on the classification of human movement during active video game play in relationship to fundamental movement skills. The aim of this study was to validate software utilising Kinect sensor motion capture technology to recognise fundamental movement skills (FMS) during active video game play. Two human assessors rated jumping and side-stepping and these assessments were compared to the Kinect Action Recognition Tool (KART), to establish a level of agreement and determine the number of movements completed during five minutes of active video game play, for 43 children (m = 12 years 7 months ± 1 year 6 months). During five minutes of active video game play, inter-rater reliability, when examining the two human raters, was found to be higher for the jump (r = 0.94, p < .01) than the sidestep (r = 0.87, p < .01), although both were excellent. Excellent reliability was also found between human raters and the KART system for the jump (r = 0.84, p < .01) and moderate reliability for the sidestep (r = 0.6983, p < .01) during game play, demonstrating that both humans and KART had higher agreement for jumps than sidesteps in the game play condition. The results of the study provide confidence that the Kinect sensor can be used to count the number of jumps and sidesteps during five minutes of active video game play with a similar level of accuracy as human raters. However, in contrast to humans, the KART system required a fraction of the time to analyse and tabulate the results. PMID:27442437

  3. The Strengths and Difficulties Questionnaire: psychometric properties of the parent and teacher version in children aged 4-7.

    PubMed

    Stone, Lisanne L; Janssens, Jan M A M; Vermulst, Ad A; Van Der Maten, Marloes; Engels, Rutger C M E; Otten, Roy

    2015-01-01

    The Strengths and Difficulties Questionnaire is one of the most employed screening instruments. Although there is a large research body investigating its psychometric properties, reliability and validity are not yet fully tested using modern techniques. Therefore, we investigate reliability, construct validity, measurement invariance, and predictive validity of the parent and teacher version in children aged 4-7. Besides, we intend to replicate previous studies by investigating test-retest reliability and criterion validity. In a Dutch community sample 2,238 teachers and 1,513 parents filled out questionnaires regarding problem behaviors and parenting, while 1,831 children reported on sociometric measures at T1. These children were followed-up during three consecutive years. Reliability was examined using Cronbach's alpha and McDonald's omega, construct validity was examined by Confirmatory Factor Analysis, and predictive validity was examined by calculating developmental profiles and linking these to measures of inadequate parenting, parenting stress and social preference. Further, mean scores and percentiles were examined in order to establish norms. Omega was consistently higher than alpha regarding reliability. The original five-factor structure was replicated, and measurement invariance was established on a configural level. Further, higher SDQ scores were associated with future indices of higher inadequate parenting, higher parenting stress and lower social preference. Finally, previous results on test-retest reliability and criterion validity were replicated. This study is the first to show SDQ scores are predictively valid, attesting to the feasibility of the SDQ as a screening instrument. Future research into predictive validity of the SDQ is warranted.
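
    For readers unfamiliar with the internal-consistency statistics compared above, a minimal sketch of Cronbach's alpha from a respondent-by-item score matrix (illustrative data; McDonald's omega additionally requires a fitted factor model and is not shown):

```python
# Cronbach's alpha for a respondents x items score matrix (illustrative data).
import numpy as np

scores = np.array([
    [2, 3, 2, 3],
    [1, 1, 2, 1],
    [3, 3, 3, 2],
    [2, 2, 3, 3],
    [1, 2, 1, 1],
])  # rows = respondents, columns = questionnaire items

k = scores.shape[1]
sum_item_vars = scores.var(axis=0, ddof=1).sum()  # sum of individual item variances
total_var = scores.sum(axis=1).var(ddof=1)        # variance of the total score
alpha = k / (k - 1) * (1 - sum_item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```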

  4. Advanced Rankine and Brayton cycle power systems - Materials needs and opportunities

    NASA Technical Reports Server (NTRS)

    Grisaffe, S. J.; Guentert, D. C.

    1974-01-01

    Conceptual advanced potassium Rankine and closed Brayton power conversion cycles offer the potential for improved efficiency over steam systems through higher operating temperatures. However, for utility service of at least 100,000 hours, materials technology advances will be needed for such high temperature systems. Improved alloys and surface protection must be developed and demonstrated to resist coal combustion gases as well as potassium corrosion or helium surface degradation at high temperatures. Extensions in fabrication technology are necessary to produce large components of high temperature alloys. Long-time property data must be obtained under environments of interest to assure high component reliability.

  5. Advanced Rankine and Brayton cycle power systems: Materials needs and opportunities

    NASA Technical Reports Server (NTRS)

    Grisaffe, S. J.; Guentert, D. C.

    1974-01-01

    Conceptual advanced potassium Rankine and closed Brayton power conversion cycles offer the potential for improved efficiency over steam systems through higher operating temperatures. However, for utility service of at least 100,000 hours, materials technology advances will be needed for such high temperature systems. Improved alloys and surface protection must be developed and demonstrated to resist coal combustion gases as well as potassium corrosion or helium surface degradation at high temperatures. Extensions in fabrication technology are necessary to produce large components of high temperature alloys. Long time property data must be obtained under environments of interest to assure high component reliability.

  6. Measuring eating disorder attitudes and behaviors: a reliability generalization study

    PubMed Central

    2014-01-01

    Background Although score reliability is a sample-dependent characteristic, researchers often only report reliability estimates from previous studies as justification for employing particular questionnaires in their research. The present study followed reliability generalization procedures to determine the mean score reliability of the Eating Disorder Inventory and its most commonly employed subscales (Drive for Thinness, Bulimia, and Body Dissatisfaction) and the Eating Attitudes Test as a way to better identify those characteristics that might impact score reliability. Methods Published studies that used these measures were coded based on their reporting of reliability information and additional study characteristics that might influence score reliability. Results Score reliability estimates were included in 26.15% of studies using the EDI and 36.28% of studies using the EAT. Mean Cronbach’s alphas for the EDI (total score = .91; subscales = .75 to .89), EAT-40 (total score = .81) and EAT-26 (total score = .86; subscales = .56 to .80) suggested variability in estimated internal consistency. Whereas some EDI subscales exhibited higher score reliability in clinical eating disorder samples than in nonclinical samples, other subscales did not exhibit these differences. Score reliability information for the EAT was primarily reported for nonclinical samples, making it difficult to characterize the effect of type of sample on these measures. However, there was a tendency for mean score reliability to be higher in the adult (vs. adolescent) samples and in female (vs. male) samples. Conclusions Overall, this study highlights the importance of assessing and reporting internal consistency during every test administration because reliability is affected by characteristics of the participants being examined. PMID:24764530

  7. Development of Fiber-Based Laser Systems for LISA

    NASA Technical Reports Server (NTRS)

    Numata, Kenji; Camp, Jordan

    2010-01-01

    We present efforts on fiber-based laser systems for the LISA mission at the NASA Goddard Space Flight Center. A fiber-based system has the advantage of higher robustness against external disturbances and easier implementation of redundancies. For a master oscillator, we are developing a ring fiber laser and evaluating two commercial products, a DBR linear fiber laser and a planar-waveguide external cavity diode laser. They all have performance comparable to a traditional NPRO in the LISA band. We are also performing reliability tests of a 2-W Yb fiber amplifier and radiation tests of fiber laser/amplifier components. We describe our progress to date and discuss the path to a working LISA laser system design.

  8. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  9. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates the reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and costs will provide a baseline for use in trade studies of future flight control system design.

  10. Integrated navigation, flight guidance, and synthetic vision system for low-level flight

    NASA Astrophysics Data System (ADS)

    Mehler, Felix E.

    2000-06-01

    Future military transport aircraft will require a new approach with respect to the avionics suite to fulfill an ever-changing variety of missions. The most demanding phases of these missions are typically the low-level flight segments, including tactical terrain following/avoidance, payload drop, and/or on-board autonomous landing at forward operating strips without ground-based infrastructure. As a consequence, individual components and systems must become more integrated to offer a higher degree of reliability, integrity, flexibility and autonomy over existing systems while reducing crew workload. The integration of digital terrain data not only introduces synthetic vision into the cockpit, but also enhances navigation and guidance capabilities. At DaimlerChrysler Aerospace AG Military Aircraft Division (Dasa-M), an integrated navigation, flight guidance and synthetic vision system, based on digital terrain data, has been developed to fulfill the requirements of the Future Transport Aircraft (FTA). The fusion of three independent navigation sensors provides a more reliable and precise solution to both the 4D-flight guidance and the display components, which comprise a Head-up and a Head-down Display with synthetic vision. This paper will present the system, its integration into the DLR's VFW 614 Advanced Technology Testing Aircraft System (ATTAS) and the results of the flight-test campaign.

  11. Scheduling for energy and reliability management on multiprocessor real-time systems

    NASA Astrophysics Data System (ADS)

    Qi, Xuan

    Scheduling algorithms for multiprocessor real-time systems have been studied for years, with many well-recognized algorithms proposed. However, it is still an evolving research area and many problems remain open due to their intrinsic complexities. With the emergence of multicore processors, it is necessary to re-investigate the scheduling problems and design/develop efficient algorithms for better system utilization, low scheduling overhead, high energy efficiency, and better system reliability. Focusing on cluster scheduling with optimal global schedulers, we study the utilization bound and scheduling overhead for a class of cluster-optimal schedulers. Then, taking energy/power consumption into consideration, we develop energy-efficient scheduling algorithms for real-time systems, especially for the proliferating embedded systems with limited energy budgets. As commonly deployed energy-saving techniques (e.g., dynamic voltage and frequency scaling (DVFS)) can significantly affect system reliability, we study schedulers that have intelligent mechanisms to recuperate system reliability to satisfy quality assurance requirements. Extensive simulation is conducted to evaluate the performance of the proposed algorithms in reducing scheduling overhead, saving energy, and improving reliability. The simulation results show that the proposed reliability-aware power management schemes can preserve system reliability while still achieving substantial energy savings.
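
    A minimal sketch of the DVFS-reliability interaction referred to above, using a commonly cited exponential fault-rate model (the thesis's actual model and parameters are not reproduced here; all constants below are illustrative assumptions):

```python
# Illustrative: transient-fault rates rise as frequency/voltage are scaled down, so
# DVFS-based energy saving can lower the probability that a task completes fault-free.
import math

LAMBDA0 = 1e-6   # fault rate at maximum frequency (per time unit, assumed)
D = 2.0          # sensitivity of fault rate to scaling (assumed)
F_MIN = 0.4      # minimum normalized frequency (assumed)

def task_reliability(wcet, f):
    """Probability that a task with worst-case execution time `wcet` (at f = 1.0)
    completes without a transient fault when run at normalized frequency f."""
    rate = LAMBDA0 * 10 ** (D * (1 - f) / (1 - F_MIN))  # scaled fault rate
    return math.exp(-rate * wcet / f)                   # execution time stretches to wcet/f

for f in (1.0, 0.8, 0.6):
    print(f"f = {f:.1f}: R = {task_reliability(wcet=10.0, f=f):.8f}")
```

    Reliability-aware power management schemes of the kind evaluated above typically reserve slack for recovery executions so that the original full-frequency reliability is preserved despite frequency scaling.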

  12. A novel redundant INS based on triple rotary inertial measurement units

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Li, Kui; Wang, Wei; Li, Peng

    2016-10-01

    Accuracy and reliability are two key performances of an inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for INS to achieve higher navigation accuracy with lower-class sensors. Therefore, the conflict between the accuracy and cost of INS can be eased. Traditional system redundancy and recently researched sensor redundancy are two primary means to improve the reliability of INS. However, how to make the best use of the redundant information from redundant sensors hasn’t been studied adequately, especially in rotational INS. This paper proposes a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide double sets of measurements along all three axes of the body frame to constitute a couple of INSs which make TRUSINS redundant. Experiments and simulations based on a prototype which is made up of six fiber-optic gyros with drift stability of 0.05° h⁻¹ show that TRUSINS can achieve positioning accuracy of about 0.256 n mile h⁻¹, which is ten times better than that of a normal non-rotational INS with the same level inertial sensors. The theoretical analysis and the experimental results show that due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate six sensor faults at most, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for applications where high accuracy and high reliability are required.

  13. REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, M. F.; Melatos, A.; Delaigle, A.

    2013-04-01

    We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by ∼6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly) and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
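
    For reference, a minimal sketch of the higher criticism statistic in its standard Donoho-Jin form, computed from a set of p-values (the modified form applied to F-/C-statistic data in this work is not reproduced):

```python
# Standard higher criticism statistic over n p-values (Donoho & Jin form).
import numpy as np

def higher_criticism(pvalues, alpha0=0.1):
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))      # search only the smallest alpha0 fraction of p-values
    return hc[:k].max()

rng = np.random.default_rng(0)
null_p = rng.uniform(size=1000)                       # noise only
sparse_p = np.concatenate([rng.uniform(size=990),     # noise plus a few weak signals
                           rng.uniform(0, 1e-3, size=10)])
print(higher_criticism(null_p), higher_criticism(sparse_p))
```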

  14. Development and Evaluation of a Questionnaire for Measuring Suboptimal Health Status in Urban Chinese

    PubMed Central

    Yan, Yu-Xiang; Liu, You-Qin; Li, Man; Hu, Pei-Feng; Guo, Ai-Min; Yang, Xing-Hua; Qiu, Jing-Jun; Yang, Shan-Shan; Shen, Jian; Zhang, Li-Ping; Wang, Wei

    2009-01-01

    Background Suboptimal health status (SHS) is characterized by ambiguous health complaints, general weakness, and lack of vitality, and has become a new public health challenge in China. It is believed to be a subclinical, reversible stage of chronic disease. Studies of intervention and prognosis for SHS are expected to become increasingly important. Consequently, a reliable and valid instrument to assess SHS is essential. We developed and evaluated a questionnaire for measuring SHS in urban Chinese. Methods Focus group discussions and a literature review provided the basis for the development of the questionnaire. Questionnaire validity and reliability were evaluated in a small pilot study and in a larger cross-sectional study of 3000 individuals. Analyses included tests for reliability and internal consistency, exploratory and confirmatory factor analysis, and tests for discriminative ability and convergent validity. Results The final questionnaire included 25 items on SHS (SHSQ-25), and encompassed 5 subscales: fatigue, the cardiovascular system, the digestive tract, the immune system, and mental status. Overall, 2799 of 3000 participants completed the questionnaire (93.3%). Test-retest reliability coefficients of individual items ranged from 0.89 to 0.98. Item-subscale correlations ranged from 0.51 to 0.72, and Cronbach’s α was 0.70 or higher for all subscales. Factor analysis established 5 distinct domains, as conceptualized in our model. One-way ANOVA showed statistically significant differences in scale scores between 3 occupation groups; these included total scores and subscores (P < 0.01). The correlation between the SHS scores and experienced stress was statistically significant (r = 0.57, P < 0.001). Conclusions The SHSQ-25 is a reliable and valid instrument for measuring sub-health status in urban Chinese. PMID:19749497

  15. Examiner Training and Reliability in Two Randomized Clinical Trials of Adult Dental Caries

    PubMed Central

    Banting, David W.; Amaechi, Bennett T.; Bader, James D.; Blanchard, Peter; Gilbert, Gregg H.; Gullion, Christina M.; Holland, Jan Carlton; Makhija, Sonia K.; Papas, Athena; Ritter, André V.; Singh, Mabi L.; Vollmer, William M.

    2013-01-01

    Objectives This report describes the training of dental examiners participating in two dental caries clinical trials and reports the inter- and intra-examiner reliability scores from the initial standardization sessions. Methods Study examiners were trained to use a modified ICDAS-II system to detect the visual signs of non-cavitated and cavitated dental caries in adult subjects. Dental caries was classified as no caries (S), non-cavitated caries (D1), enamel caries (D2) and dentine caries (D3). Three standardization sessions involving 60 subjects and 3604 tooth surface calls were used to calculate several measures of examiner reliability. Results The prevalence of dental caries observed in the standardization sessions ranged from 1.4% to 13.5% of the coronal tooth surfaces examined. Overall agreement between pairs of examiners ranged from 0.88 to 0.99. An intra-class coefficient threshold of 0.60 was surpassed for all but one examiner. Inter-examiner unweighted kappa values were low (0.23–0.35) but weighted kappas and the ratio of observed to maximum kappas were more encouraging (0.42–0.83). The highest kappa values occurred for the S/D1 vs. D2/D3 two-level classification of dental caries, for which seven of the eight examiners achieved observed to maximum kappa values over 0.90. Intra-examiner reliability was notably higher than inter-examiner reliability for all measures and dental caries classification systems employed. Conclusion The methods and results for the initial examiner training and standardization sessions for two large clinical trials are reported. Recommendations for others planning examiner training and standardization sessions are offered. PMID:22320292
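
    For reference, a minimal sketch of the unweighted and weighted kappa statistics reported above, using scikit-learn and illustrative caries codes rather than the trial data:

```python
# Unweighted vs. linearly weighted Cohen's kappa for two examiners' surface calls
# (illustrative codes: 0 = sound, 1 = non-cavitated, 2 = enamel, 3 = dentine caries).
from sklearn.metrics import cohen_kappa_score

examiner_1 = [0, 0, 1, 2, 3, 0, 1, 1, 0, 2]
examiner_2 = [0, 1, 1, 2, 3, 0, 0, 1, 0, 3]

print("unweighted kappa:", cohen_kappa_score(examiner_1, examiner_2))
print("weighted kappa:  ", cohen_kappa_score(examiner_1, examiner_2, weights="linear"))
```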

  16. Intra-and inter-observer reliability of nailfold videocapillaroscopy - A possible outcome measure for systemic sclerosis-related microangiopathy.

    PubMed

    Dinsdale, Graham; Moore, Tonia; O'Leary, Neil; Tresadern, Philip; Berks, Michael; Roberts, Christopher; Manning, Joanne; Allen, John; Anderson, Marina; Cutolo, Maurizio; Hesselstrand, Roger; Howell, Kevin; Pizzorni, Carmen; Smith, Vanessa; Sulli, Alberto; Wildt, Marie; Taylor, Christopher; Murray, Andrea; Herrick, Ariane L

    2017-07-01

    Our aim was to assess the reliability of nailfold capillary assessment in terms of image evaluability, image severity grade ('normal', 'early', 'active', 'late'), capillary density, capillary (apex) width, and presence of giant capillaries, and also to gain further insight into differences in these parameters between patients with systemic sclerosis (SSc), patients with primary Raynaud's phenomenon (PRP) and healthy control subjects. Videocapillaroscopy images (magnification 300×) were acquired from all 10 digits from 173 participants: 101 patients with SSc, 22 with PRP and 50 healthy controls. Ten capillaroscopy experts from 7 European centres evaluated the images. Custom image mark-up software allowed extraction of the following outcome measures: overall grade ('normal', 'early', 'active', 'late', 'non-specific', or 'ungradeable'), capillary density (vessels/mm), mean vessel apical width, and presence of giant capillaries. Observers analysed a median of 129 images each. Evaluability (i.e. the availability of measures) varied across outcome measures (e.g. 73.0% for density and 46.2% for overall grade in patients with SSc). Intra-observer reliability for evaluability was consistently higher than inter- (e.g. for density, intra-class correlation coefficient [ICC] was 0.71 within and 0.14 between observers). Conditional on evaluability, both intra- and inter-observer reliability were high for grade (ICC 0.93 and 0.78 respectively), density (0.91 and 0.64) and width (0.91 and 0.85). Evaluability is one of the major challenges in assessing nailfold capillaries. However, when images are evaluable, the high intra- and inter-reliabilities suggest that overall image grade, capillary density and apex width have potential as outcome measures in longitudinal studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Examiner training and reliability in two randomized clinical trials of adult dental caries.

    PubMed

    Banting, David W; Amaechi, Bennett T; Bader, James D; Blanchard, Peter; Gilbert, Gregg H; Gullion, Christina M; Holland, Jan Carlton; Makhija, Sonia K; Papas, Athena; Ritter, André V; Singh, Mabi L; Vollmer, William M

    2011-01-01

    This report describes the training of dental examiners participating in two dental caries clinical trials and reports the inter- and intra-examiner reliability scores from the initial standardization sessions. Study examiners were trained to use a modified International Caries Detection and Assessment System II system to detect the visual signs of non-cavitated and cavitated dental caries in adult subjects. Dental caries was classified as no caries (S), non-cavitated caries (D1), enamel caries (D2), and dentine caries (D3). Three standardization sessions involving 60 subjects and 3,604 tooth surface calls were used to calculate several measures of examiner reliability. The prevalence of dental caries observed in the standardization sessions ranged from 1.4 percent to 13.5 percent of the coronal tooth surfaces examined. Overall agreement between pairs of examiners ranged from 0.88 to 0.99. An intra-class coefficient threshold of 0.60 was surpassed for all but one examiner. Inter-examiner unweighted kappa values were low (0.23-0.35), but weighted kappas and the ratio of observed to maximum kappas were more encouraging (0.42-0.83). The highest kappa values occurred for the S/D1 versus D2/D3 two-level classification of dental caries, for which seven of the eight examiners achieved observed to maximum kappa values over 0.90. Intra-examiner reliability was notably higher than inter-examiner reliability for all measures and dental caries classifications employed. The methods and results for the initial examiner training and standardization sessions for two large clinical trials are reported. Recommendations for others planning examiner training and standardization sessions are offered. © 2011 American Association of Public Health Dentistry.

  18. The Impact Of Multimode Fiber Chromatic Dispersion On Data Communications

    NASA Astrophysics Data System (ADS)

    Hackert, Michael J.

    1990-01-01

    Maximum capability for the lowest cost is the goal of contemporary communications managers. With all of the competitive pressures that modern businesses are experiencing these days, communications needs must be met with the most information-carrying capacity for the lowest cost. Optical fiber communication systems meet these requirements while providing reliability, system integrity, and potential future upgradability. Consequently, optical fiber is finding numerous applications in addition to its traditional telephony plant. Fiber-based systems are meeting these requirements in building networks and computer interconnects at a lower cost than copper-based systems. A fiber type being chosen by industry to meet these needs in standard systems, such as FDDI, is multimode fiber. Multimode fiber systems offer cost advantages over single-mode fiber through lower fiber connection costs. Also, system designers can gain savings by using low-cost, high-reliability, wide-spectral-width sources such as LEDs instead of lasers and by operating at higher bit rates than used for multimode systems in the past. However, in order to maximize the cost savings while ensuring the system will operate as intended, the chromatic dispersion of the fiber must be taken into account. This paper explains how to do that and shows how to calculate multimode chromatic dispersion for each of the standard fiber sizes (50 μm, 62.5 μm, 85 μm, and 100 μm core diameter).
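
    A common first-order estimate of the effect described above (not necessarily the paper's exact calculation, and with illustrative numbers): the chromatic pulse spreading over a link of length L is

```latex
\Delta t_{\mathrm{chrom}} \;\approx\; |D(\lambda)|\,\Delta\lambda\,L
```

    where D(λ) is the fiber chromatic dispersion in ps/(nm·km), Δλ the source spectral width in nm, and L the link length in km; the link remains usable roughly while Δt_chrom stays a small fraction of the bit period. For example, at 850 nm with |D| ≈ 100 ps/(nm·km), an LED spectral width of 50 nm, and L = 1 km, Δt_chrom ≈ 5 ns, already a significant fraction of the 8 ns symbol period of FDDI's 125 Mbaud line signalling.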

  19. Reliability and Heat Transfer Performance of a Miniature High-Temperature Thermosyphon-Based Thermal Valve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alleman, Jeffrey L; Olsen, Michele L; Glatzmaier, Gregory C

    Latent heat thermal energy storage systems have the advantages of near-isothermal heat release and high energy density compared to sensible heat, generally resulting in higher power block efficiencies. Until now, there has been no highly effective and reliable method to passively extract that stored latent energy. Most modern attempts rely on external power supplied to a pump to move viscous heat transfer fluids from the phase change material (PCM) to the power block. In this work, the problem of latent heat dispatchability has been addressed with a redesigned thermosyphon geometry that can act as a 'thermal valve' capable of passively and efficiently controlling the release of heat from a thermal reservoir. A bench-scale prototype with a stainless steel casing and sodium working fluid was designed and tested to be reliable for more than fifty 'on/off' cycles at an operating temperature of 600 degrees C. The measured thermal resistances in the 'on' and 'off' states were 0.0395 K/W and 11.0 K/W respectively. This device demonstrated efficient, fast, reliable, and passive heat extraction from a PCM and may have application to other fields and industries using thermal processing.

  20. The Application of a Residual Risk Evaluation Technique Used for Expendable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Latimer, John A.

    2009-01-01

    This presentation provides a Residual Risk Evaluation Technique (RRET) developed by Kennedy Space Center (KSC) Safety and Mission Assurance (S&MA) Launch Services Division. This technique is one of many procedures used by S&MA at KSC to evaluate residual risks for each Expendable Launch Vehicle (ELV) mission. RRET is a straightforward technique that incorporates the proven methodology of risk management, fault tree analysis, and reliability prediction. RRET derives a system reliability impact indicator from the system baseline reliability and the system residual risk reliability values. The system reliability impact indicator provides a quantitative measure of the reduction in the system baseline reliability due to the identified residual risks associated with the designated ELV mission. An example is discussed to provide insight into the application of RRET.
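
    The summary above does not give the indicator's formula; one minimal, illustrative reading (an assumption for concreteness, not necessarily KSC's actual definition) combines the residual-risk reliabilities in series with the baseline and reports the relative reduction:

```python
# Illustrative sketch only: the actual RRET formula is not stated in this summary.
def reliability_impact(baseline_reliability, residual_risk_reliabilities):
    r_with_risks = baseline_reliability
    for r in residual_risk_reliabilities:
        r_with_risks *= r                      # series combination of independent risks
    return (baseline_reliability - r_with_risks) / baseline_reliability

# e.g. a 0.98 baseline degraded by three identified residual risks
print(reliability_impact(0.98, [0.999, 0.9995, 0.998]))   # ~0.35% relative reduction
```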

  1. Evaluation of the branched-chain DNA assay for measurement of RNA in formalin-fixed tissues.

    PubMed

    Knudsen, Beatrice S; Allen, April N; McLerran, Dale F; Vessella, Robert L; Karademos, Jonathan; Davies, Joan E; Maqsodi, Botoul; McMaster, Gary K; Kristal, Alan R

    2008-03-01

    We evaluated the branched-chain DNA (bDNA) assay QuantiGene Reagent System to measure RNA in formalin-fixed, paraffin-embedded (FFPE) tissues. The QuantiGene Reagent System does not require RNA isolation, avoids enzymatic preamplification, and has a simple workflow. Five selected genes were measured by bDNA assay; quantitative polymerase chain reaction (qPCR) was used as a reference method. Mixed-effect statistical models were used to partition the overall variance into components attributable to xenograft, sample, and assay. For FFPE tissues, the coefficients of reliability were significantly higher for the bDNA assay (93-100%) than for qPCR (82.4-95%). Correlations between qPCR(FROZEN), the gold standard, and bDNA(FFPE) ranged from 0.60 to 0.94, similar to those from qPCR(FROZEN) and qPCR(FFPE). Additionally, the sensitivity of the bDNA assay in tissue homogenates was 10-fold higher than in purified RNA. In 9- to 13-year-old blocks with poor RNA quality, the bDNA assay allowed the correct identification of the overexpression of known cancer genes. In conclusion, the QuantiGene Reagent System is considerably more reliable, reproducible, and sensitive than qPCR, providing an alternative method for the measurement of gene expression in FFPE tissues. It also appears to be well suited for the clinical analysis of FFPE tissues with diagnostic or prognostic gene expression biomarker panels for use in patient treatment and management.
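
    A minimal sketch of a variance-components "coefficient of reliability" of the kind reported above, here a simple one-way intraclass correlation from replicate measurements per specimen (illustrative data; the study fit mixed-effect models with additional terms):

```python
# One-way ICC: between-specimen variance relative to total, from replicate assays.
import numpy as np

# rows = specimens (e.g. xenografts), columns = replicate measurements (illustrative)
data = np.array([
    [5.1, 5.3, 5.0],
    [7.8, 8.1, 7.9],
    [3.2, 3.0, 3.3],
    [6.5, 6.4, 6.7],
])
n, k = data.shape
grand_mean = data.mean()
ms_between = k * ((data.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1) = {icc:.3f}")
```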

  2. Evaluation of the Branched-Chain DNA Assay for Measurement of RNA in Formalin-Fixed Tissues

    PubMed Central

    Knudsen, Beatrice S.; Allen, April N.; McLerran, Dale F.; Vessella, Robert L.; Karademos, Jonathan; Davies, Joan E.; Maqsodi, Botoul; McMaster, Gary K.; Kristal, Alan R.

    2008-01-01

    We evaluated the branched-chain DNA (bDNA) assay QuantiGene Reagent System to measure RNA in formalin-fixed, paraffin-embedded (FFPE) tissues. The QuantiGene Reagent System does not require RNA isolation, avoids enzymatic preamplification, and has a simple workflow. Five selected genes were measured by bDNA assay; quantitative polymerase chain reaction (qPCR) was used as a reference method. Mixed-effect statistical models were used to partition the overall variance into components attributable to xenograft, sample, and assay. For FFPE tissues, the coefficients of reliability were significantly higher for the bDNA assay (93–100%) than for qPCR (82.4–95%). Correlations between qPCR(FROZEN), the gold standard, and bDNA(FFPE) ranged from 0.60 to 0.94, similar to those from qPCR(FROZEN) and qPCR(FFPE). Additionally, the sensitivity of the bDNA assay in tissue homogenates was 10-fold higher than in purified RNA. In 9- to 13-year-old blocks with poor RNA quality, the bDNA assay allowed the correct identification of the overexpression of known cancer genes. In conclusion, the QuantiGene Reagent System is considerably more reliable, reproducible, and sensitive than qPCR, providing an alternative method for the measurement of gene expression in FFPE tissues. It also appears to be well suited for the clinical analysis of FFPE tissues with diagnostic or prognostic gene expression biomarker panels for use in patient treatment and management. PMID:18276773

  3. Standby battery requirements for telecommunications power

    NASA Astrophysics Data System (ADS)

    May, G. J.

    The requirements for standby power for telecommunications are changing as the network moves from conventional systems to Internet Protocol (IP) telephony. These new systems require higher power levels closer to the user, but the level of availability and reliability cannot be compromised if the network is to provide service in the event of a failure of the public utility. Many parts of these new networks are ac rather than dc powered, with UPS systems for back-up power. These generally have lower levels of reliability than dc systems, and the network needs to be designed with appropriate levels of redundancy so that overall reliability is not reduced. Mobile networks have different power requirements. Where there is a high density of nodes, continuity of service can be reasonably assured with short autonomy times. Furthermore, there is generally no requirement that these networks are the provider of last resort and therefore, specifications for continuity of power are directed towards revenue protection and overall reliability targets. As a result of these changes, battery requirements for reserve power are evolving. Shorter autonomy times are specified for parts of the network, although a large part will continue to need support for hours rather than minutes. Operational temperatures are increasing and battery solutions that provide longer life in extreme conditions are becoming important. Different battery technologies will be discussed in the context of these requirements. Conventional large flooded lead/acid cells, both with pasted and tubular plates, are used in larger central office applications, but the majority of requirements are met with valve-regulated lead/acid (VRLA) batteries. The different types of VRLA battery will be described and their suitability for various applications outlined. New developments in battery construction and battery materials have improved both performance and reliability in recent years. Alternative technologies are also being proposed for telecommunications power, including different battery chemistries (such as lithium batteries), flywheel energy storage, and fuel cells. These will be evaluated and the position of lead/acid batteries in the medium term for this important market will be assessed.

  4. 75 FR 80391 - Electric Reliability Organization Interpretations of Interconnection Reliability Operations and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    ... configuration to maintain system stability, acceptable voltage or power flows.\\12\\ \\12\\ In the Western... prevent system instability or cascading outages, and protect other facilities in response to transmission... nature used to address system reliability vulnerabilities to prevent system instability, cascading...

  5. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long-term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques which are presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.
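
    To make the 100 FIT criterion above concrete, a short worked conversion using the standard definition of the FIT unit (not taken from the paper):

```latex
\lambda = 100\ \mathrm{FIT}
        = \frac{100\ \text{failures}}{10^{9}\ \text{device-hours}}
        = 10^{-7}\ \mathrm{h}^{-1},
\qquad
P_{\mathrm{fail}}(5\ \mathrm{yr}) \approx 1 - e^{-\lambda t}
        = 1 - e^{-10^{-7}\times 43\,800}
        \approx 0.44\%
```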

  6. A SVM framework for fault detection of the braking system in a high speed train

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Li, Yan-Fu; Zio, Enrico

    2017-03-01

    In April 2015, the number of operating High Speed Trains (HSTs) in the world has reached 3603. An efficient, effective and very reliable braking system is evidently very critical for trains running at a speed around 300 km/h. Failure of a highly reliable braking system is a rare event and, consequently, informative recorded data on fault conditions are scarce. This renders the fault detection problem a classification problem with highly unbalanced data. In this paper, a Support Vector Machine (SVM) framework, including feature selection, feature vector selection, model construction and decision boundary optimization, is proposed for tackling this problem. Feature vector selection can largely reduce the data size and, thus, the computational burden. The constructed model is a modified version of the least square SVM, in which a higher cost is assigned to the error of classification of faulty conditions than the error of classification of normal conditions. The proposed framework is successfully validated on a number of public unbalanced datasets. Then, it is applied for the fault detection of braking systems in HST: in comparison with several SVM approaches for unbalanced datasets, the proposed framework gives better results.
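
    The framework above modifies a least-squares SVM so that misclassifying a faulty sample costs more than misclassifying a normal one; a minimal sketch of the same class-weighting idea using scikit-learn's standard SVC (not the authors' exact formulation, and with synthetic data):

```python
# Cost-sensitive SVM on a highly unbalanced, synthetic fault-detection dataset.
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic data: ~2% "faulty" samples (class 1), 98% normal (class 0).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Assign a higher misclassification cost to the rare faulty class.
clf = SVC(kernel="rbf", class_weight={0: 1, 1: 20}).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```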

  7. Reliability studies of Integrated Modular Engine system designs

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-01-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.
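
    As a reference for the binomial approximation mentioned above, a minimal k-out-of-n redundancy calculation with illustrative numbers (a generic textbook sketch, not the study's specific engine model):

```python
# k-out-of-n system reliability via the binomial expansion (illustrative numbers).
from math import comb

def k_of_n_reliability(n, k, r):
    """System succeeds if at least k of n identical, independent components succeed."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

# e.g. 8 identical modules of which any 6 suffice, each with reliability 0.98
print(k_of_n_reliability(n=8, k=6, r=0.98))
```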

  8. Reliability studies of integrated modular engine system designs

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-01-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  9. Reliability studies of integrated modular engine system designs

    NASA Astrophysics Data System (ADS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-06-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  10. Reliability studies of Integrated Modular Engine system designs

    NASA Astrophysics Data System (ADS)

    Hardy, Terry L.; Rapp, Douglas C.

    1993-06-01

    A study was performed to evaluate the reliability of Integrated Modular Engine (IME) concepts. Comparisons were made between networked IME systems and non-networked discrete systems using expander cycle configurations. Both redundant and non-redundant systems were analyzed. Binomial approximation and Markov analysis techniques were employed to evaluate total system reliability. In addition, Failure Modes and Effects Analyses (FMEA), Preliminary Hazard Analyses (PHA), and Fault Tree Analysis (FTA) were performed to allow detailed evaluation of the IME concept. A discussion of these system reliability concepts is also presented.

  11. Integrating laboratory robots with analytical instruments--must it really be so difficult?

    PubMed

    Kramer, G W

    1990-09-01

    Creating a reliable system from discrete laboratory instruments is often a task fraught with difficulties. While many modern analytical instruments are marvels of detection and data handling, attempts to create automated analytical systems incorporating such instruments are often frustrated by their human-oriented control structures and their egocentricity. The laboratory robot, while fully susceptible to these problems, extends such compatibility issues to the physical dimensions involving sample interchange, manipulation, and event timing. The workcell concept was conceived to describe the procedure and equipment necessary to carry out a single task during sample preparation. This notion can be extended to organize all operations in an automated system. Each workcell, no matter how complex its local repertoire of functions, must be minimally capable of accepting information (commands, data), returning information on demand (status, results), and being started, stopped, and reset by a higher level device. Even the system controller should have a mode where it can be directed by instructions from a higher level.
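
    The minimal workcell contract described above (accept commands and data, report status and results on demand, and be started, stopped, and reset by a higher-level device) can be captured in a small interface; a sketch with hypothetical method names:

```python
# Hypothetical sketch of the minimal workcell control interface described above.
from abc import ABC, abstractmethod

class Workcell(ABC):
    @abstractmethod
    def send(self, command: str, data=None):   # accept information (commands, data)
        ...

    @abstractmethod
    def status(self) -> dict:                  # return information on demand (status, results)
        ...

    @abstractmethod
    def start(self): ...

    @abstractmethod
    def stop(self): ...

    @abstractmethod
    def reset(self): ...                       # controllable by a higher-level device
```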

  12. A Preliminary Analysis of the Reliability and Validity of the Leader Observation System.

    DTIC Science & Technology

    1982-08-01

    ...a financial institution, a state agency, a medium-sized manufacturing plant, a campus police department, and the Navy and Army R.O.T.C. units of a... specifics of the study. The outside observers (N = 8) used in the study were graduate students in management. Three were assigned to the financial... managing interpersonal conflict between subordinates or others... routine financial reporting and bookkeeping... appealing to higher authority...

  13. Intrarater Reliability of Muscle Strength and Hamstring to Quadriceps Strength Imbalance Ratios During Concentric, Isometric, and Eccentric Maximal Voluntary Contractions Using the Isoforce Dynamometer.

    PubMed

    Mau-Moeller, Anett; Gube, Martin; Felser, Sabine; Feldhege, Frank; Weippert, Matthias; Husmann, Florian; Tischer, Thomas; Bader, Rainer; Bruhn, Sven; Behrens, Martin

    2017-08-17

    Objective: To determine intrasession and intersession reliability of strength measurements and hamstrings to quadriceps strength imbalance ratios (H/Q ratios) using the new isoforce dynamometer. Design: Repeated measures. Setting: Exercise science laboratory. Participants: Thirty healthy subjects (15 females, 15 males, 27.8 years). Main outcome measures: Coefficient of variation (CV) and intraclass correlation coefficients (ICC) were calculated for (1) strength parameters, that is peak torque, mean work, and mean power for concentric and eccentric maximal voluntary contractions; isometric maximal voluntary torque (IMVT); rate of torque development (RTD), and (2) H/Q ratios, that is conventional concentric, eccentric, and isometric H/Q ratios (Hcon/Qcon at 60 deg/s, 120 deg/s, and 180 deg/s, Hecc/Qecc at -60 deg/s and Hiso/Qiso) and functional eccentric antagonist to concentric agonist H/Q ratios (Hecc/Qcon and Hcon/Qecc). High reliability: CV <10%, ICC >0.90; moderate reliability: CV between 10% and 20%, ICC between 0.80 and 0.90; low reliability: CV >20%, ICC <0.80. Results: (1) Strength parameters: (a) high intrasession reliability for concentric, eccentric, and isometric measurements, (b) moderate-to-high intersession reliability for concentric and eccentric measurements and IMVT, and (c) moderate-to-high intrasession reliability but low intersession reliability for RTD. (2) H/Q ratios: (a) moderate-to-high intrasession reliability for conventional ratios, (b) high intrasession reliability for functional ratios, (c) higher intersession reliability for Hcon/Qcon and Hiso/Qiso (moderate to high) than Hecc/Qecc (low to moderate), and (d) higher intersession reliability for conventional H/Q ratios (low to high) than functional H/Q ratios (low to moderate). Conclusions: The results have confirmed the reliability of strength parameters and the most frequently used H/Q ratios.

  14. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program is specifically designed for the evaluation of fault-tolerant avionics systems. However, CARE III is general enough for use in the evaluation of other systems as well.

  15. Verification of Triple Modular Redundancy Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth

    2016-01-01

    If a system is required to be protected using triple modular redundancy (TMR), improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process and the complexity of digital designs, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems.
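
    For reference, the basic relations that such TMR verification protects: a bit-level 2-of-3 majority voter and the textbook reliability expression for a triplicated module with an ideal voter (a generic sketch, not the proposed verification method itself):

```python
# Triple modular redundancy: bitwise majority vote and idealized reliability gain.
def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 vote over three redundant module outputs."""
    return (a & b) | (a & c) | (b & c)

def tmr_reliability(r: float) -> float:
    """Reliability of a triplicated module with a perfect voter: R_TMR = 3R^2 - 2R^3."""
    return 3 * r**2 - 2 * r**3

print(hex(majority(0xF0F0, 0xF0F1, 0xF0F0)))   # a single-bit fault in one copy is masked
print(tmr_reliability(0.99), tmr_reliability(0.90))
```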

  16. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling would constitute inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate or completely redefine its role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  17. Electrochemical disinfection of repeatedly recycled blackwater in a free‐standing, additive‐free toilet

    PubMed Central

    Sellgren, Katelyn L.; Klem, Ethan J. D.; Piascik, Jeffrey R.; Stoner, Brian R.

    2017-01-01

    Abstract Decentralized, energy‐efficient waste water treatment technologies enabling water reuse are needed to sustainably address sanitation needs in water‐ and energy‐scarce environments. Here, we describe the effects of repeated recycling of disinfected blackwater (as flush liquid) on the energy required to achieve full disinfection with an electrochemical process in a prototype toilet system. The recycled liquid rapidly reached a steady state with total solids reliably ranging between 0.50 and 0.65% and conductivity between 20 and 23 mS/cm through many flush cycles over 15 weeks. The increase in accumulated solids was associated with increased energy demand and wide variation in the free chlorine contact time required to achieve complete disinfection. Further studies on the system at steady state revealed that running at higher voltage modestly improves energy efficiency, and established running parameters that reliably achieve disinfection at fixed run times. These results will guide prototype testing in the field. PMID:29242713

  18. Improvements of vacuum system in J-PARC 3 GeV synchrotron

    NASA Astrophysics Data System (ADS)

    Kamiya, J.; Hikichi, Y.; Namekawa, Y.; Takeishi, K.; Yanagibashi, T.; Kinsho, M.; Yamamoto, K.; Sato, A.

    2017-07-01

    The RCS vacuum system has been upgraded since the completion of its construction towards the objectives of both better vacuum quality and higher reliability of the components. For better vacuum quality, (1) the pressure of the injection beam line was improved to prevent the H- beam from converting to H0; (2) leakage in the beam injection area due to thermal expansion was eliminated by applying the adequate torque to the clamps; (3) a new in-situ degassing method for the kicker magnet was developed. For higher reliability of the components, (1) a considerable number of fluoroelastomer seals were exchanged for metal seals with low-spring-constant bellows and light clamps; (2) a TMP controller for long cables was developed to prevent controller failures caused by severe electrical noise; (3) a number of TMPs were installed instead of ion pumps in the RF cavity section as insurance against pump trouble.

  19. Reliable communication in the presence of failures

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses of the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by our approach.
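
    A minimal sketch of the standard vector-clock condition that a causal-delivery multicast of this kind must enforce (the textbook rule, not necessarily the exact mechanism used in ISIS):

```python
# Causal delivery check: a message m from process j, stamped with vector clock vc_m,
# may be delivered at process i (local clock vc_i) only when it is the next message
# expected from j and no causally earlier message from another process is missing.
def can_deliver(vc_m, vc_i, sender_j):
    next_from_sender = vc_m[sender_j] == vc_i[sender_j] + 1
    no_missing_deps = all(vc_m[k] <= vc_i[k] for k in range(len(vc_m)) if k != sender_j)
    return next_from_sender and no_missing_deps

print(can_deliver(vc_m=[1, 0, 0], vc_i=[0, 0, 0], sender_j=0))  # True: next message from p0
print(can_deliver(vc_m=[1, 1, 0], vc_i=[0, 0, 0], sender_j=1))  # False: depends on an undelivered p0 message
```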

  20. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  1. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  2. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
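
    A minimal sketch of the kind of scale-free topology analysis described above, using networkx to build a Barabási-Albert graph and a crude connectivity proxy for the impact of losing the most connected node (illustrative only; this is not the paper's reliability index or failure-propagation model):

```python
# Barabasi-Albert topology and a crude hub-failure impact proxy (illustrative).
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)   # scale-free test network
hub = max(G.degree, key=lambda kv: kv[1])[0]         # most connected node

G_failed = G.copy()
G_failed.remove_node(hub)
largest = max(nx.connected_components(G_failed), key=len)
frac = len(largest) / G.number_of_nodes()
print(f"fraction of nodes still in the largest component after hub failure: {frac:.3f}")
```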

  3. Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian

    2005-09-15

    The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.

  4. The use of lithium batteries in biomedical devices

    NASA Astrophysics Data System (ADS)

    Owens, Boone B.

    1989-06-01

    Lithium batteries have played an important role in the development of useful implantable biomedical devices. The cardiac pacemaker is the best known of these devices, and high-energy, long-life, reliable lithium primary cells have effectively replaced all of the alkaline cells previously used in these electronic systems. The recent development of higher power devices such as drug pumps and cardiac defibrillators requires the use of batteries with higher energy and power capabilities. High rate rechargeable batteries that can be configured as flat prismatic cells would be especially useful in some of these new applications. Lithium polymer electrolyte batteries may find a useful role in these new areas.

  5. Hawaii electric system reliability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability, but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  6. Hawaii Electric System Reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loose, Verne William; Silva Monroy, Cesar Augusto

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability, but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  7. Mathematical modeling and fuzzy availability analysis for serial processes in the crystallization system of a sugar plant

    NASA Astrophysics Data System (ADS)

    Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram

    2017-03-01

    The binary-state assumption (success or failure) used in conventional reliability analysis is inappropriate for the reliability analysis of complex industrial systems due to the lack of sufficient probabilistic information. For large complex systems, the uncertainty of each individual parameter enhances the uncertainty of the system reliability. In this paper, the concept of fuzzy reliability has been used for reliability analysis of the system, and the effect of the coverage factor and of the failure and repair rates of subsystems on fuzzy availability is analyzed for the fault-tolerant crystallization system of a sugar plant. Mathematical modeling of the system is carried out using the mnemonic rule to derive the Chapman-Kolmogorov differential equations. These governing differential equations are solved with the fourth-order Runge-Kutta method.
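    As a minimal illustration of the solution step described above, the sketch below integrates the Chapman-Kolmogorov equations of a two-state (up/down) Markov availability model with the classical fourth-order Runge-Kutta scheme. The paper's crystallization-system model has many more states and fuzzy parameters; the rates used here are purely illustrative.

```python
import numpy as np

# Minimal sketch, not the paper's full crystallization-system model:
# a two-state Markov availability model (up/down) with constant failure
# rate lam and repair rate mu, whose Chapman-Kolmogorov equations
#   dP_up/dt   = -lam*P_up + mu*P_down
#   dP_down/dt =  lam*P_up - mu*P_down
# are integrated with the classical fourth-order Runge-Kutta scheme.

lam, mu = 0.02, 0.5          # illustrative failure/repair rates [1/h]

def deriv(p):
    p_up, p_down = p
    return np.array([-lam * p_up + mu * p_down,
                      lam * p_up - mu * p_down])

def rk4_step(p, dt):
    k1 = deriv(p)
    k2 = deriv(p + 0.5 * dt * k1)
    k3 = deriv(p + 0.5 * dt * k2)
    k4 = deriv(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([1.0, 0.0])     # start in the up state
dt, t_end = 0.1, 200.0
for _ in range(int(t_end / dt)):
    p = rk4_step(p, dt)

print(f"availability at t={t_end} h: {p[0]:.4f}")
print(f"steady-state check mu/(lam+mu): {mu / (lam + mu):.4f}")
```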

  8. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    NASA Astrophysics Data System (ADS)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-01

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as the failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
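    The sketch below shows the kind of fault-tree arithmetic such an evaluation rests on, using hypothetical basic-event failure rates and an invented gate structure rather than the measured Zynq-7010 values or the Isograph model.

```python
import math

# Illustrative fault-tree arithmetic with hypothetical failure rates
# (not the measured Zynq-7010 values). Basic events are assumed
# exponentially distributed and non-repairable over the mission.

t_mission = 1000.0                                       # mission time [h]
lam = {"PS_core": 2e-6, "PL_fabric": 5e-6, "OCM": 1e-6}  # hypothetical rates [1/h]

# basic-event unavailability at t: Q = 1 - exp(-lambda * t)
Q = {name: 1.0 - math.exp(-rate * t_mission) for name, rate in lam.items()}

# hypothetical top event: system fails if PS_core fails OR (PL_fabric AND OCM both fail)
q_and = Q["PL_fabric"] * Q["OCM"]                        # AND gate, independent events
q_top = 1.0 - (1.0 - Q["PS_core"]) * (1.0 - q_and)       # OR gate

mttf_series = 1.0 / sum(lam.values())                    # MTTF if all events were in series

print(f"top-event unavailability at {t_mission:.0f} h: {q_top:.3e}")
print(f"series-system MTTF: {mttf_series:.3e} h")
```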

  9. Requirements for implementing real-time control functional modules on a hierarchical parallel pipelined system

    NASA Technical Reports Server (NTRS)

    Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald

    1989-01-01

    Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is a microcomputer system that provides sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low level control synchronizing primitives to higher level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on the current robot control systems is explored.

  10. Push-Pull Receptive Field Organization and Synaptic Depression: Mechanisms for Reliably Encoding Naturalistic Stimuli in V1

    PubMed Central

    Kremkow, Jens; Perrinet, Laurent U.; Monier, Cyril; Alonso, Jose-Manuel; Aertsen, Ad; Frégnac, Yves; Masson, Guillaume S.

    2016-01-01

    Neurons in the primary visual cortex are known for responding vigorously but with high variability to classical stimuli such as drifting bars or gratings. By contrast, natural scenes are encoded more efficiently by sparse and temporally precise spiking responses. We used a conductance-based model of the visual system in higher mammals to investigate how two specific features of the thalamo-cortical pathway, namely push-pull receptive field organization and fast synaptic depression, can contribute to this contextual reshaping of V1 responses. By comparing cortical dynamics evoked respectively by natural vs. artificial stimuli in a comprehensive parametric space analysis, we demonstrate that the reliability and sparseness of the spiking responses during natural vision is not a mere consequence of the increased bandwidth in the sensory input spectrum. Rather, it results from the combined impacts of fast synaptic depression and push-pull inhibition, the latter acting for natural scenes as a form of “effective” feed-forward inhibition as demonstrated in other sensory systems. Thus, the combination of feedforward-like inhibition with fast thalamo-cortical synaptic depression by simple cells receiving a direct structured input from thalamus composes a generic computational mechanism for generating a sparse and reliable encoding of natural sensory events. PMID:27242445

  11. An Investigation of Interrater Reliability for the Rorschach Performance Assessment System (R-PAS) in a Nonpatient U.S. Sample.

    PubMed

    Kivisalu, Trisha M; Lewey, Jennifer H; Shaffer, Thomas W; Canfield, Merle L

    2016-01-01

    The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS analyzes individualized communications given by respondents to each card to code a wide pool of possible variables. Due to the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used, and discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS, yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for simpler codes (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest R-PAS is an effective method with high interrater reliability, supporting its empirical basis.
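    For readers unfamiliar with the concordance statistics mentioned above, the sketch below computes percent agreement and a two-way random-effects intraclass correlation, ICC(2,1), from toy two-coder ratings; it is not the R-PAS data set, and the study itself used SPSS.

```python
import numpy as np

# Minimal sketch with toy data (not the R-PAS sample): percent agreement and
# a two-way random-effects intraclass correlation, ICC(2,1), computed from
# the usual ANOVA mean squares.

ratings = np.array([            # rows = responses, columns = two coders
    [3, 3], [2, 2], [4, 3], [1, 1], [5, 5],
    [2, 2], [3, 4], [4, 4], [1, 2], [3, 3],
], dtype=float)

n, k = ratings.shape
percent_agreement = np.mean(ratings[:, 0] == ratings[:, 1]) * 100

grand = ratings.mean()
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

ss_rows = k * np.sum((row_means - grand) ** 2)
ss_cols = n * np.sum((col_means - grand) ** 2)
ss_total = np.sum((ratings - grand) ** 2)
ss_err = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

icc_2_1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"percent agreement: {percent_agreement:.1f}%")
print(f"ICC(2,1): {icc_2_1:.3f}")
```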

  12. Geometric classification of scalp hair for valid drug testing, 6 more reliable than 8 hair curl groups

    PubMed Central

    Mkentane, K.; Gumedze, F.; Ngoepe, M.; Davids, L. M.; Khumalo, N. P.

    2017-01-01

    Introduction Curly hair is reported to contain a higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair based on 3 measurements: the curve diameter, curl index and number of waves. Materials and methods After ethical approval and informed consent, proximal virgin hair (6 cm) sampled from the scalp vertex of 48 healthy volunteers was evaluated. Three raters each scored hairs from the 48 volunteers on two occasions each for the 8- and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The Kappa statistic was used to assess intra- and inter-rater agreement. Results Each rater classified 480 hairs on each occasion. No rater classified any volunteer’s 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8 groups (k = 0.418) but improved for the 6 groups (k = 0.671). The intra-rater agreement also improved (k = 0.444 to 0.648 versus 0.599 to 0.836) for the 6 groups; that for the single evaluator who rated all volunteers was good (k = 0.754). Conclusions Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine. PMID:28570555
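    The sketch below shows, on toy labels, how a chance-corrected agreement statistic (Cohen's kappa for two raters) is computed; the study involved three raters and repeated sessions, so this is only an illustration of the statistic, not a reproduction of the analysis.

```python
from collections import Counter

# Toy two-rater example of the kappa statistic used in the study above
# (the study itself involved three raters and repeated sessions; this only
# shows how chance-corrected agreement is computed).

rater_a = ["I", "II", "II", "III", "IV", "IV", "V", "V", "VI", "I"]
rater_b = ["I", "II", "III", "III", "IV", "V", "V", "V", "VI", "II"]

n = len(rater_a)
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

count_a = Counter(rater_a)
count_b = Counter(rater_b)
categories = set(rater_a) | set(rater_b)
p_chance = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)

kappa = (p_observed - p_chance) / (1.0 - p_chance)
print(f"observed agreement {p_observed:.2f}, chance agreement {p_chance:.2f}, kappa {kappa:.2f}")
```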

  13. On the capacity of MIMO-OFDM based diversity and spatial multiplexing in Radio-over-Fiber system

    NASA Astrophysics Data System (ADS)

    El Yahyaoui, Moussa; El Moussati, Ali; El Zein, Ghaïs

    2017-11-01

    This paper proposes a realistic, system-level simulation to predict the behavior of a Radio over Fiber (RoF) system before its realization. In this work we consider a 2 × 2 Multiple-Input Multiple-Output (MIMO) Orthogonal Frequency Division Multiplexing (OFDM) RoF system at 60 GHz. This system is based on Spatial Diversity (SD), which increases reliability (decreases the probability of error), and Spatial Multiplexing (SMX), which increases the data rate but not necessarily the reliability. The 60 GHz MIMO channel model employed in this work is based on extensive measured data and statistical analysis, namely the Triple-S and Valenzuela (TSV) model. To the authors' best knowledge, this is the first time that this type of TSV channel model has been employed for a 60 GHz MIMO-RoF system. We have evaluated and compared the performance of this system according to the diversity technique, modulation schemes, and channel coding rate for a Line-Of-Sight (LOS) desktop environment. Coded SMX is proposed as an intermediate system to improve the Signal to Noise Ratio (SNR) and the data rate. The resulting 2 × 2 MIMO-OFDM SMX system achieves a data rate of up to 70 Gb/s with 64QAM and a Forward Error Correction (FEC) limit of 10^-3 over 25-km fiber transmission followed by 3-m wireless transmission using 7 GHz bandwidth of the millimeter wave band.
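    As a simple illustration of the multiplexing gain mentioned above, the sketch below estimates the ergodic capacity of a 2 × 2 spatial-multiplexing link against a single-antenna reference over an i.i.d. Rayleigh channel; the paper itself uses the measurement-based TSV channel model at 60 GHz, which is not reproduced here.

```python
import numpy as np

# Hedged illustration: ergodic capacity of a 2x2 MIMO link with equal power
# allocation over an i.i.d. Rayleigh channel, compared with a 1x1 link.
# The Rayleigh channel is used only because it is easy to generate; it is
# not the TSV 60 GHz model of the paper.

rng = np.random.default_rng(0)
nt = nr = 2
snr_db = 20.0
snr = 10 ** (snr_db / 10)
trials = 20000

cap_mimo = 0.0
cap_siso = 0.0
for _ in range(trials):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    cap_mimo += np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)
    h = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    cap_siso += np.log2(1 + snr * abs(h) ** 2)

print(f"2x2 spatial multiplexing: {cap_mimo / trials:.2f} bit/s/Hz")
print(f"1x1 reference link:       {cap_siso / trials:.2f} bit/s/Hz")
```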

  14. The reliability-quality relationship for quality systems and quality risk management.

    PubMed

    Claycamp, H Gregg; Rahaman, Faiad; Urban, Jason M

    2012-01-01

    Engineering reliability typically refers to the probability that a system, or any of its components, will perform a required function for a stated period of time and under specified operating conditions. As such, reliability is inextricably linked with time-dependent quality concepts, such as maintaining a state of control and predicting the chances of losses from failures for quality risk management. Two popular current good manufacturing practice (cGMP) and quality risk management tools, failure mode and effects analysis (FMEA) and root cause analysis (RCA) are examples of engineering reliability evaluations that link reliability with quality and risk. Current concepts in pharmaceutical quality and quality management systems call for more predictive systems for maintaining quality; yet, the current pharmaceutical manufacturing literature and guidelines are curiously silent on engineering quality. This commentary discusses the meaning of engineering reliability while linking the concept to quality systems and quality risk management. The essay also discusses the difference between engineering reliability and statistical (assay) reliability. The assurance of quality in a pharmaceutical product is no longer measured only "after the fact" of manufacturing. Rather, concepts of quality systems and quality risk management call for designing quality assurance into all stages of the pharmaceutical product life cycle. Interestingly, most assays for quality are essentially static and inform product quality over the life cycle only by being repeated over time. Engineering process reliability is the fundamental concept that is meant to anticipate quality failures over the life cycle of the product. Reliability is a well-developed theory and practice for other types of manufactured products and manufacturing processes. Thus, it is well known to be an appropriate index of manufactured product quality. This essay discusses the meaning of reliability and its linkages with quality systems and quality risk management.
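    As a toy illustration of the FMEA-style ranking mentioned above, the sketch below orders hypothetical failure modes by their risk priority number (RPN = severity × occurrence × detection); the failure modes and scores are invented and not drawn from any real quality system.

```python
# Toy FMEA-style ranking: each failure mode is scored for severity (S),
# occurrence (O), and detection (D) on 1-10 scales and ranked by the
# risk priority number RPN = S * O * D. Scores are hypothetical.

failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("blend non-uniformity",      7, 4, 5),
    ("tablet weight drift",       5, 6, 3),
    ("label mix-up",              9, 2, 4),
    ("sterilization cycle fault", 10, 2, 6),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:<26} RPN = {s * o * d:3d}")
```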

  15. The Development of a Web-Based Attendance System with RFID for Higher Education Institution in Binus University

    NASA Astrophysics Data System (ADS)

    Kurniali, S.; Mayliana

    2014-03-01

    This study focuses on the development of a web-based attendance system with RFID at an Indonesian higher education institution. The development of this system is motivated by the fact that students' attendance records are one of the important elements reflecting their academic achievement, yet the current manual practice is cumbersome. Building on the new RFID-based student card, a web-based attendance system has been built to record and report not only the students' attendance but also the lecturer's attendance and the topics taught in each class. The development of this system was encouraged by the senior management. The system can be accessed easily through the learning management system and can generate reports in real time. This paper discusses in detail the development of the system through its maintenance phase. The resulting system proved reliable in supporting the related business processes and encouraged wider use of the RFID card. Considered a successful implementation, this work can serve as input for others who want to implement a similar system.

  16. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    NASA Technical Reports Server (NTRS)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  17. Study on Distribution Reliability with Parallel and On-site Distributed Generation Considering Protection Miscoordination and Tie Line

    NASA Astrophysics Data System (ADS)

    Chaitusaney, Surachai; Yokoyama, Akihiko

    In a distribution system, Distributed Generation (DG) is expected to improve system reliability by acting as backup generation. However, the DG contribution to fault current may cause the loss of existing protection coordination, e.g. recloser-fuse coordination and breaker-breaker coordination. This problem can drastically deteriorate system reliability, and it becomes more serious and complicated when there are several DG sources in the system. Hence, this conflict between the reliability benefit and the protection impact requires detailed investigation before DG is installed or expanded. A model of the composite DG fault current is proposed to find the threshold beyond which existing protection coordination is lost. Cases of protection miscoordination are described, together with their consequences. Since a distribution system may be tied to another system, the issues of tie lines and on-site DG are integrated into this study. Reliability indices are evaluated and compared on the distribution reliability test system RBTS Bus 2.
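    The distribution reliability indices typically compared in such studies (e.g. on RBTS Bus 2) are straightforward to compute from outage records; the sketch below shows SAIFI, SAIDI and CAIDI for a hypothetical feeder, not the paper's test-system results.

```python
# Standard distribution reliability indices computed from hypothetical
# outage records. Each record: (customers interrupted, outage duration in hours).

outages = [(120, 1.5), (45, 0.5), (300, 3.0), (80, 2.0)]   # illustrative data
customers_served = 1200                                     # total customers on the feeder

total_interruptions = sum(n for n, _ in outages)
total_customer_hours = sum(n * d for n, d in outages)

saifi = total_interruptions / customers_served       # interruptions per customer per year
saidi = total_customer_hours / customers_served      # interruption hours per customer per year
caidi = saidi / saifi                                # average outage duration per interruption

print(f"SAIFI = {saifi:.3f} int./cust.yr, SAIDI = {saidi:.3f} h/cust.yr, CAIDI = {caidi:.2f} h")
```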

  18. A Passive System Reliability Analysis for a Station Blackout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunett, Acacia; Bucknor, Matthew; Grabaskas, David

    2015-05-03

    The latest iterations of advanced reactor designs have included increased reliance on passive safety systems to maintain plant integrity during unplanned sequences. While these systems are advantageous in reducing the reliance on human intervention and availability of power, the phenomenological foundations on which these systems are built require a novel approach to a reliability assessment. Passive systems possess the unique ability to fail functionally without failing physically, a result of their explicit dependency on existing boundary conditions that drive their operating mode and capacity. Argonne National Laboratory is performing ongoing analyses that demonstrate various methodologies for the characterization of passive system reliability within a probabilistic framework. Two reliability analysis techniques are utilized in this work. The first approach, the Reliability Method for Passive Systems, provides a mechanistic technique employing deterministic models and conventional static event trees. The second approach, a simulation-based technique, utilizes discrete dynamic event trees to treat time-dependent phenomena during scenario evolution. For this demonstration analysis, both reliability assessment techniques are used to analyze an extended station blackout in a pool-type sodium fast reactor (SFR) coupled with a reactor cavity cooling system (RCCS). This work demonstrates the entire process of a passive system reliability analysis, including identification of important parameters and failure metrics, treatment of uncertainties and analysis of results.

  19. Impact of relationships between test and training animals and among training animals on reliability of genomic prediction.

    PubMed

    Wu, X; Lund, M S; Sun, D; Zhang, Q; Su, G

    2015-10-01

    One of the factors affecting the reliability of genomic prediction is the relationship among the animals of interest. This study investigated the reliability of genomic prediction in various scenarios with regard to the relationship between test and training animals, and among animals within the training data set. Different training data sets were generated from EuroGenomics data and a group of Nordic Holstein bulls (born in 2005 and afterwards) as a common test data set. Genomic breeding values were predicted using a genomic best linear unbiased prediction model and a Bayesian mixture model. The results showed that a closer relationship between test and training animals led to a higher reliability of genomic predictions for the test animals, while a closer relationship among training animals resulted in a lower reliability. In addition, the Bayesian mixture model in general led to a slightly higher reliability of genomic prediction, especially for the scenario of distant relationships between training and test animals. Therefore, to prevent a decrease in reliability, constant updates of the training population with animals from more recent generations are required. Moreover, a training population consisting of less-related animals is favourable for reliability of genomic prediction. © 2015 Blackwell Verlag GmbH.

  20. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    (CONTINUOUS EMISSION MONITORING, Alternative Monitoring Systems) § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous emission monitoring system, the owner or operator shall...

  1. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  2. The reliability analysis of a separated, dual fail operational redundant strapdown IMU. [inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology for quantitatively analyzing the reliability of redundant avionics systems, in general, and the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU), in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results, showing the impact of major system parameters on the reliability of the RSDIMU system, are presented and discussed.

  3. Reliability issues of free-space communications systems and networks

    NASA Astrophysics Data System (ADS)

    Willebrand, Heinz A.

    2003-04-01

    Free space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. However, more recently some wireline and wireless carriers have started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market when compared to the requirements in the enterprise market segment. This paper outlines some of the aspects that are important to ensure carrier-class system reliability.

  4. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  5. Methodology for Software Reliability Prediction. Volume 1.

    DTIC Science & Technology

    1987-11-01

    System categories addressed include manned and unmanned spacecraft, batch systems, airborne avionics, event control, and real-time closed-loop operations. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were ...

  6. Pocket Handbook on Reliability

    DTIC Science & Technology

    1975-09-01

    Topics include the exponential and Weibull distributions, estimating reliability, confidence intervals, reliability growth, OC curves, and Bayesian analysis. The handbook is an introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future ...

  7. Management of ATM-based networks supporting multimedia medical information systems

    NASA Astrophysics Data System (ADS)

    Whitman, Robert A.; Blaine, G. James; Fritz, Kevin; Goodgold, Ken; Heisinger, Patrick

    1997-05-01

    Medical information systems are acquiring the ability to collect and deliver many different types of medical information. In support of the increased network demands necessitated by these expanded capabilities, asynchronous transfer mode (ATM) based networks are being deployed in medical care systems. While ATM supplies a much greater line rate than currently deployed networks, the management and standards surrounding ATM are yet to mature. This paper explores the management and control issues surrounding an ATM network supporting medical information systems, and examines how management impacts network performance and robustness. A multivendor ATM network at the BJC Health System/Washington University and the applications using the network are discussed. Performance information for specific applications is presented and analyzed. Network management's influence on application reliability is outlined. The information collected is used to show how ATM network standards and management tools influence network reliability and performance. Performance of current applications using the ATM network is discussed. Special attention is given to issues encountered in implementation of hypertext transfer protocol over ATM internet protocol (IP) communications. A classical IP ATM implementation yields greater than twenty percent higher network performance over LANE. Maximum performance for a host's suite of applications can be obtained by establishing multiple individually engineered IP links through its ATM network connection.

  8. Tightly coupled integration of ionosphere-constrained precise point positioning and inertial navigation systems.

    PubMed

    Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald

    2015-03-10

    The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS.
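    The sketch below is a deliberately minimal one-dimensional sensor-fusion example in the spirit of GNSS/INS integration: a Kalman filter propagates position and velocity with an accelerometer input and corrects with noisy position fixes. It is not the paper's IC-PPP/INS tight coupling, and all noise parameters are invented.

```python
import numpy as np

# Minimal 1-D sensor-fusion sketch (not the paper's IC-PPP/INS tight coupling):
# a Kalman filter propagates position and velocity with an accelerometer input
# (INS-like prediction) and corrects with noisy position fixes (GNSS-like update).

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for (position, velocity)
B = np.array([[0.5 * dt ** 2], [dt]])     # control matrix for acceleration
H = np.array([[1.0, 0.0]])                # only position is observed
Q = 1e-3 * np.eye(2)                      # process noise (simplified IMU error model)
R = np.array([[4.0]])                     # position measurement variance [m^2]

rng = np.random.default_rng(5)
x_true = np.zeros(2)
x_est = np.zeros(2)
P = np.eye(2)

for step in range(200):
    accel = 0.2 * np.sin(0.05 * step)                 # true acceleration
    u = np.array([[accel]])
    x_true = F @ x_true + (B @ u).ravel()
    x_est = F @ x_est + (B @ u).ravel()               # prediction with IMU input
    P = F @ P @ F.T + Q
    if step % 10 == 0:                                # GNSS-like fix every second
        z = x_true[0] + rng.normal(0.0, 2.0)          # noisy position measurement
        innovation = z - H @ x_est
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + (K @ innovation).ravel()
        P = (np.eye(2) - K @ H) @ P

print(f"final position error: {abs(x_est[0] - x_true[0]):.2f} m")
```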

  9. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. The numerous factors that can potentially degrade system reliability, and the ways in which these factors, which are peculiar to highly reliable fault tolerant systems, are accounted for in credible reliability assessments, are discussed. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  10. Overview of RICOR's reliability theoretical analysis, accelerated life demonstration test results and verification by field data

    NASA Astrophysics Data System (ADS)

    Vainshtein, Igor; Baruch, Shlomi; Regev, Itai; Segal, Victor; Filis, Avishai; Riabzev, Sergey

    2018-05-01

    The growing demand for EO applications that operate around the clock, 24 hours a day and 7 days a week, such as border surveillance systems, emphasizes the need for a highly reliable cryocooler with increased operational availability and optimized system-level Integrated Logistic Support (ILS). To meet this need, RICOR developed linear and rotary cryocoolers which successfully achieved this goal. Cryocooler MTTF was analyzed by theoretical reliability evaluation methods, demonstrated by normal and accelerated life tests at the cryocooler level, and finally verified by field data analysis derived from cryocoolers operating at the system level. The following paper reviews theoretical reliability analysis methods together with reliability test results derived from standard and accelerated life demonstration tests performed at RICOR's advanced reliability laboratory. To summarize the work process, reliability verification data are presented as feedback from fielded systems.
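    The sketch below shows the generic accelerated-life arithmetic behind such demonstrations: an Arrhenius acceleration factor translates test hours at an elevated temperature into equivalent use-condition hours, from which a crude exponential-model MTTF is estimated. The activation energy, temperatures and test data are illustrative and are not RICOR's values.

```python
import math

# Generic accelerated-life arithmetic (illustrative values, not RICOR's):
# an Arrhenius acceleration factor translates test hours at an elevated
# temperature into equivalent hours at the use temperature.

k_boltzmann = 8.617e-5        # eV/K
ea = 0.7                      # assumed activation energy [eV]
t_use = 273.15 + 23.0         # use temperature [K]
t_stress = 273.15 + 71.0      # accelerated test temperature [K]

af = math.exp((ea / k_boltzmann) * (1.0 / t_use - 1.0 / t_stress))

test_hours = 8000.0           # hours accumulated per unit on test
n_units = 10
failures = 1                  # observed failures during the test

# crude exponential-model MTTF estimate referred to use conditions
mttf_use = af * (n_units * test_hours) / max(failures, 1)
print(f"acceleration factor: {af:.1f}")
print(f"estimated use-condition MTTF: {mttf_use:.3e} h")
```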

  11. Reliability analysis of the AOSpine thoracolumbar spine injury classification system by a worldwide group of naïve spinal surgeons.

    PubMed

    Kepler, Christopher K; Vaccaro, Alexander R; Koerner, John D; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, Shanmuganathan; Aarabi, Bizhan; Vialle, Luiz R; Fehlings, Michael G; Schroeder, Gregory D; Reinhold, Maximilian; Schnake, Klaus John; Bellabarba, Carlo; Cumhur Öner, F

    2016-04-01

    The aims of this study were (1) to demonstrate that the AOSpine thoracolumbar spine injury classification system can be reliably applied by an international group of surgeons and (2) to delineate those injury types which are difficult for spine surgeons to classify reliably. A previously described classification system of thoracolumbar injuries, which consists of a morphologic classification of the fracture, a grading system for the neurologic status and relevant patient-specific modifiers, was applied to 25 cases by 100 spinal surgeons from across the world twice independently, in grading sessions 1 month apart. The results were analyzed for classification reliability using the Kappa coefficient (κ). The overall Kappa coefficient for all cases was 0.56, which represents moderate reliability. Kappa values describing interobserver agreement were 0.80 for type A injuries, 0.68 for type B injuries and 0.72 for type C injuries, all representing substantial reliability. The lowest level of agreement for specific subtypes was for fracture subtype A4 (Kappa = 0.19). Intraobserver analysis demonstrated an overall average Kappa statistic for subtype grading of 0.68, also representing substantial reproducibility. In a worldwide sample of spinal surgeons without previous exposure to the recently described AOSpine Thoracolumbar Spine Injury Classification System, we demonstrated moderate interobserver and substantial intraobserver reliability. These results suggest that most spine surgeons can apply this system to spine trauma patients at least as reliably as previously described systems.

  12. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on the development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  13. Trial application of reliability technology to emergency diesel generators at the Trojan Nuclear Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.M.; Boccio, J.L.; Karimian, S.

    1986-01-01

    In this paper, a trial application of reliability technology to the emergency diesel generator system at the Trojan Nuclear Power Plant is presented. An approach for formulating a reliability program plan for this system is being developed. The trial application has shown that a reliability program process, using risk- and reliability-based techniques, can be interwoven into current plant operational activities to help in controlling, analyzing, and predicting faults that can challenge safety systems. With the cooperation of the utility, Portland General Electric Co., this reliability program can eventually be implemented at Trojan to track its effectiveness.

  14. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.
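    As a minimal illustration of the probabilistic step, the sketch below estimates the failure probability of a toy limit state (deployment force versus required force) by Monte Carlo simulation and converts it to an equivalent reliability index beta; the distributions and parameters are invented and do not represent the 3USAT design.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of the probabilistic step: Monte Carlo estimation of the
# failure probability for a toy limit state g = deployment_force - required_force,
# followed by the equivalent reliability index beta = -Phi^{-1}(Pf).
# Distributions and parameters are illustrative, not the 3USAT values.

rng = np.random.default_rng(7)
n = 1_000_000

deployment_force = rng.normal(loc=12.0, scale=1.5, size=n)                    # N, assumed normal
required_force = rng.lognormal(mean=np.log(7.0), sigma=0.15, size=n)          # N, assumed lognormal

g = deployment_force - required_force          # limit-state function, failure if g < 0
pf = np.mean(g < 0.0)
beta = -norm.ppf(pf) if pf > 0 else float("inf")

print(f"estimated failure probability: {pf:.2e}")
print(f"equivalent reliability index beta: {beta:.2f}")
```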

  15. Approach to developing reliable space reactor power systems

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Shinbrot, Charles H.

    1991-01-01

    During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.

  16. A new method for computing the reliability of consecutive k-out-of-n:F systems

    NASA Astrophysics Data System (ADS)

    Gökdere, Gökhan; Gürcan, Mehmet; Kılıç, Muhammet Burak

    2016-01-01

    Consecutive k-out-of-n system models have been applied to reliability evaluation in many physical systems, such as telecommunications, integrated circuit design, microwave relay stations, oil pipeline systems, vacuum systems in accelerators, computer ring networks, and spacecraft relay stations. These systems are characterized by logical connections among components placed in a line or a circle. In the literature, a great deal of attention has been paid to the reliability evaluation of consecutive k-out-of-n systems. In this paper, we propose a new method to compute the reliability of consecutive k-out-of-n:F systems with n linearly or circularly arranged components. The proposed method provides a simple way of determining the system failure probability. We also provide R code, based on the proposed method, to compute the reliability of linear and circular systems with a large number of components.
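    For reference, the sketch below evaluates a linear consecutive k-out-of-n:F system with the standard dynamic-programming recursion over the length of the trailing run of failed components; it is written in Python rather than the R code the paper provides, and it is not the authors' proposed method.

```python
# Standard dynamic-programming evaluation of a linear consecutive k-out-of-n:F
# system (not the authors' proposed method). state[j] = probability that the
# system still works and exactly j consecutive components at the end of the
# line have failed (0 <= j < k).

def linear_consecutive_kofn_F(p, k):
    state = [1.0] + [0.0] * (k - 1)
    for pi in p:                      # pi = reliability of component i
        new = [0.0] * k
        new[0] = sum(state) * pi      # component works: the failure run resets
        for j in range(1, k):
            new[j] = state[j - 1] * (1.0 - pi)   # component fails: run grows but stays < k
        state = new
    return sum(state)

# example: 10 components, each with reliability 0.9; the system fails if any
# 3 consecutive components fail
print(linear_consecutive_kofn_F([0.9] * 10, k=3))
```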

  17. Development and Validation of a Practical Instrument for Injury Prevention: The Occupational Safety and Health Monitoring and Assessment Tool (OSH-MAT).

    PubMed

    Sun, Yi; Arning, Martin; Bochmann, Frank; Börger, Jutta; Heitmann, Thomas

    2018-06-01

    The Occupational Safety and Health Monitoring and Assessment Tool (OSH-MAT) is a practical instrument that is currently used in the German woodworking and metalworking industries to monitor safety conditions at workplaces. The 12-item scoring system has three subscales rating technical, organizational, and personnel-related conditions in a company. Each item has a rating value ranging from 1 to 9, with higher values indicating a higher standard of safety conditions. The reliability of this instrument was evaluated in a cross-sectional survey among 128 companies and its validity among 30,514 companies. The inter-rater reliability of the instrument was examined independently and simultaneously by two well-trained safety engineers. Agreement between the double ratings was quantified by the intraclass correlation coefficient and the absolute agreement of the rating values. The content validity of the OSH-MAT was evaluated by quantifying the association between OSH-MAT values and 5-year average injury rates by Poisson regression analysis, adjusted for company size and industrial sector. The construct validity of the OSH-MAT was examined by principal component factor analysis. Our analysis indicated good to very good inter-rater reliability (intraclass correlation coefficient = 0.64-0.74) of OSH-MAT values, with an absolute agreement of between 72% and 81%. Factor analysis identified three component subscales that exactly matched the structure theory of this instrument. The Poisson regression analysis demonstrated a statistically significant exposure-response relationship between OSH-MAT values and the 5-year average injury rates. These analyses indicate that the OSH-MAT is a valid and reliable instrument that can be used effectively to monitor safety conditions at workplaces.
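    The sketch below fits the kind of Poisson regression with an exposure offset described above, on synthetic data rather than the OSH-MAT survey data, using the statsmodels GLM interface.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch with synthetic data (not the OSH-MAT survey data): a Poisson
# regression of injury counts on a safety score, using company size as an
# exposure offset, analogous in form to the validation model described above.

rng = np.random.default_rng(3)
n_companies = 500
score = rng.uniform(1, 9, n_companies)              # OSH-MAT-like rating, 1..9
employees = rng.integers(20, 500, n_companies)      # exposure proxy

true_rate = np.exp(-1.0 - 0.25 * score)             # assumed injuries per employee-year
injuries = rng.poisson(true_rate * employees)

X = sm.add_constant(score)
model = sm.GLM(injuries, X, family=sm.families.Poisson(), exposure=employees)
result = model.fit()
print(result.params)        # negative slope -> higher scores, lower injury rates
```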

  18. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  19. On the matter of the reliability of the chemical monitoring system based on the modern control and monitoring devices

    NASA Astrophysics Data System (ADS)

    Andriushin, A. V.; Dolbikova, N. S.; Kiet, S. V.; Merzlikina, E. I.; Nikitina, I. S.

    2017-11-01

    The reliability of the main equipment of any power station depends on correct water chemistry. In order to provide it, it is necessary to monitor the heat carrier quality, which, in its turn, is provided by the chemical monitoring system. Thus, the monitoring system reliability plays an important part in providing the reliability of the main equipment. The monitoring system reliability is determined by the reliability and structure of its hardware and software, consisting of sensors, controllers, HMI and so on [1,2]. Power plant personnel dealing with the measuring equipment must be informed promptly about any breakdowns in the monitoring system so that they can remove the fault quickly. A computer consultant system for the personnel maintaining the sensors and other chemical monitoring equipment can help to notice faults quickly and identify their possible causes. Some technical solutions for such a system are considered in the present paper. The experimental results were obtained on a laboratory workbench representing a physical model of a part of the chemical monitoring system.

  20. 18 CFR 40.2 - Mandatory Reliability Standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 40.2 Mandatory Reliability Standards. (a) Each applicable user, owner or operator of the Bulk-Power System must comply with Commission-approved Reliability Standards developed by the Electric...

  1. Requirements for future automotive batteries - a snapshot

    NASA Astrophysics Data System (ADS)

    Karden, Eckhard; Shinn, Paul; Bostock, Paul; Cunningham, James; Schoultz, Evan; Kok, Daniel

    Introduction of new fuel economy, performance, safety, and comfort features in future automobiles will bring up many new, power-hungry electrical systems. As a consequence, demands on automotive batteries will grow substantially, e.g. regarding reliability, energy throughput (shallow-cycle life), charge acceptance, and high-rate partial state-of-charge (HRPSOC) operation. As higher voltage levels are mostly not an economically feasible alternative for the short term, the existing 14 V electrical system will have to fulfil these new demands, utilizing advanced 12 V energy storage devices. The well-established lead-acid battery technology is expected to keep playing a key role in this application. Compared to traditional starting-lighting-ignition (SLI) batteries, significant technological progress has been achieved or can be expected, which improve both performance and service life. System integration of the storage device into the vehicle will become increasingly important. Battery monitoring systems (BMS) are expected to become a commodity, penetrating the automotive volume market from both highly equipped premium cars and dedicated fuel-economy vehicles (e.g. stop/start). Battery monitoring systems will allow for more aggressive battery operating strategies, at the same time improving the reliability of the power supply system. Where a single lead-acid battery cannot fulfil the increasing demands, dual-storage systems may form a cost-efficient extension. They consist either of two lead-acid batteries or of a lead-acid battery plus another storage device.

  2. System engineering of complex optical systems for mission assurance and affordability

    NASA Astrophysics Data System (ADS)

    Ahmad, Anees

    2017-08-01

    Affordability and reliability are equally important as the performance and development time for many optical systems for military, space and commercial applications. These characteristics are even more important for the systems meant for space and military applications where total lifecycle costs must be affordable. Most customers are looking for high performance optical systems that are not only affordable but are designed with "no doubt" mission assurance, reliability and maintainability in mind. Both US military and commercial customers are now demanding an optimum balance between performance, reliability and affordability. Therefore, it is important to employ a disciplined systems design approach for meeting the performance, cost and schedule targets while keeping affordability and reliability in mind. The US Missile Defense Agency (MDA) now requires all of their systems to be engineered, tested and produced according to the Mission Assurance Provisions (MAP). These provisions or requirements are meant to ensure complex and expensive military systems are designed, integrated, tested and produced with the reliability and total lifecycle costs in mind. This paper describes a system design approach based on the MAP document for developing sophisticated optical systems that are not only cost-effective but also deliver superior and reliable performance during their intended missions.

  3. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since system development cost is roughly inversely proportional to the achieved failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components can repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
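    The sketch below reproduces the arithmetic behind this argument: with truly independent units of failure probability p, N-fold redundancy gives p^N, while a simple beta-factor common-cause model shows why identical redundancy saturates beyond two units. The beta value is illustrative.

```python
# Illustrative arithmetic for the redundancy argument above. With truly
# independent units of failure probability p, N-fold redundancy gives p**N.
# A simple beta-factor common-cause model (beta = fraction of failures that
# hit all identical units at once) shows why identical redundancy saturates.

p_unit = 0.1          # failure probability of one unit over the mission
beta = 0.05           # assumed common-cause fraction (illustrative)

for n_units in (1, 2, 3, 4):
    independent = p_unit ** n_units
    common_cause = beta * p_unit + (1 - beta * p_unit) * ((1 - beta) * p_unit) ** n_units
    print(f"{n_units} unit(s): independent {independent:.1e}, "
          f"with common cause {common_cause:.1e}")
```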

  4. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, T.; Toon, J.; Conner, A.; Adams, T.; Miranda, D.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) programs subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  5. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  6. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.

  7. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... The Electric Reliability Organization shall conduct assessments of the adequacy of the Bulk-Power System in... assessments as determined by the Commission of the reliability of the Bulk-Power System in North America and...

  8. Network reliability maximization for stochastic-flow network subject to correlated failures using genetic algorithm and tabu search

    NASA Astrophysics Data System (ADS)

    Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun

    2018-07-01

    Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors respect the network reliability maximization by finding the optimal multi-state resource assignment, which is one resource to each arc. However, a disaster may cause correlated failures for the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are adopted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
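
    As a minimal illustration of the path-based reliability idea above (a sketch only, using inclusion-exclusion over hypothetical minimal paths with independent, binary arcs, rather than the article's recursive sum of disjoint products and correlated multi-state resources):

      from itertools import combinations

      def path_reliability(minimal_paths, arc_rel):
          """Two-terminal reliability from minimal paths by inclusion-exclusion.

          minimal_paths: list of sets of arc labels; arc_rel: dict arc -> P(arc works).
          Assumes independent, binary (work/fail) arcs -- a simplification of the
          multi-state, correlated-failure setting treated in the article.
          """
          n = len(minimal_paths)
          total = 0.0
          for k in range(1, n + 1):
              for subset in combinations(minimal_paths, k):
                  union = set().union(*subset)          # arcs that must all work
                  prob = 1.0
                  for arc in union:
                      prob *= arc_rel[arc]
                  total += (-1) ** (k + 1) * prob       # inclusion-exclusion sign
          return total

      # Example: two arc-disjoint minimal paths between source and sink
      print(path_reliability([{"a", "b"}, {"c", "d"}],
                             {"a": 0.9, "b": 0.9, "c": 0.8, "d": 0.8}))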

  9. The 25 mA continuous-wave surface-plasma source of H⁻ ions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belchenko, Yu., E-mail: belchenko@inp.nsk.su; Gorbovsky, A.; Sanin, A.

    An ion source with the Penning geometry of electrodes, producing a continuous-wave beam of H⁻ ions with current up to 25 mA, was developed. Several improvements were introduced to increase source intensity, reliability, and lifetime. A collar around the emission aperture improves electron filtering. The aperture diameters of the ion-optical system electrodes were increased to generate a beam of higher intensity. An optimization of the electrode temperatures was performed.

  10. A Microstructure-Based Time-Dependent Crack Growth Model for Life and Reliability Prediction of Turbopropulsion Systems

    NASA Astrophysics Data System (ADS)

    Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.

    2014-01-01

    The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔK_th or K_th) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
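
    As an illustration of the increment-summation idea described above (a generic cyclic-plus-dwell crack growth form with made-up constants, not the calibrated IN 718 model or the DARWIN® implementation):

      def crack_growth_over_mission(a0, segments, C=1e-11, m=3.0, A=1e-9, n=2.0):
          """Sum cycle-dependent and time-dependent crack increments over a mission.

          segments: list of (delta_K, K_max, cycles, dwell_seconds) tuples.
          da/dN = C * delta_K**m (cyclic) and da/dt = A * K_max**n (dwell) are
          generic illustrative forms; the constants are not from the paper.
          """
          a = a0
          for delta_K, K_max, cycles, dwell in segments:
              a += C * delta_K**m * cycles      # cycle-dependent increment
              a += A * K_max**n * dwell         # time-dependent (dwell) increment
          return a

      # One hypothetical mission: climb, cruise with a long dwell, descent
      mission = [(30.0, 35.0, 1, 0.0), (10.0, 40.0, 1, 3600.0), (25.0, 30.0, 1, 0.0)]
      print(crack_growth_over_mission(a0=0.5e-3, segments=mission))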

  11. The F-12 series aircraft approach to design for control system reliability

    NASA Technical Reports Server (NTRS)

    Schenk, F. L.; Mcmaster, J. R.

    1976-01-01

    The F-12 series aircraft control system design philosophy is reviewed as it pertains to functional reliability. The basic control system, i.e., cables, mixer, feel system, trim devices, and hydraulic systems are described and discussed. In addition, the implementation of the redundant stability augmentation system in the F-12 aircraft is described. Finally, the functional reliability record that has been achieved is presented.

  12. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
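
    Under the constant-failure-rate assumption used in that analysis, the basic trade between added redundancy and reliability can be sketched as follows (hypothetical MTBF and mission values, not the ISS database figures):

      import math

      def unit_reliability(mtbf_hours, mission_hours):
          """R(t) = exp(-t / MTBF) for a constant failure rate."""
          return math.exp(-mission_hours / mtbf_hours)

      def redundant_reliability(r_unit, n_units):
          """Probability that at least one of n identical, independent units survives."""
          return 1.0 - (1.0 - r_unit) ** n_units

      mission = 365 * 24.0                       # 1-year deep-space mission, in hours
      r1 = unit_reliability(mtbf_hours=5000.0, mission_hours=mission)
      for n in (1, 2, 3):
          print(n, "unit(s):", round(redundant_reliability(r1, n), 4),
                "-- each added unit also adds its mass to the ESM")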

  13. The application of emulation techniques in the analysis of highly reliable, guidance and control computer systems

    NASA Technical Reports Server (NTRS)

    Migneault, Gerard E.

    1987-01-01

    Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.

  14. Small core fiber coupled 60-W laser diode

    NASA Astrophysics Data System (ADS)

    Fernie, Douglas P.; Mannonen, Ilkka; Raven, Anthony L.

    1995-05-01

    Semiconductor laser diodes are compact, efficient and reliable sources of laser light and 25 W fiber coupled systems developed by Diomed have been in clinical use for over three years. For certain applications, particularly in the treatment of benign prostatic hyperplasia and flexible endoscopy, higher powers are desirable. In these applications the use of flexible optical fibers of no more than 600 micrometers core diameter is essential for compatibility with most commercial delivery fibers and instrumentation. A high power 60 W diode laser system for driving these small core fibers has been developed. The design requirements for medical applications are analyzed and system performance and results of use in gastroenterology and urology with small core fibers will be presented.

  15. Reliability of human-supervised formant-trajectory measurement for forensic voice comparison.

    PubMed

    Zhang, Cuiling; Morrison, Geoffrey Stewart; Ochoa, Felipe; Enzinger, Ewald

    2013-01-01

    Acoustic-phonetic approaches to forensic voice comparison often include human-supervised measurement of vowel formants, but the reliability of such measurements is a matter of concern. This study assesses the within- and between-supervisor variability of three sets of formant-trajectory measurements made by each of four human supervisors. It also assesses the validity and reliability of forensic-voice-comparison systems based on these measurements. Each supervisor's formant-trajectory system was fused with a baseline mel-frequency cepstral-coefficient system, and performance was assessed relative to the baseline system. Substantial improvements in validity were found for all supervisors' systems, but some supervisors' systems were more reliable than others.

  16. Reliability/safety analysis of a fly-by-wire system

    NASA Technical Reports Server (NTRS)

    Brock, L. D.; Goddman, H. A.

    1980-01-01

    An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.
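
    The kind of composition such a diagram encodes can be sketched with generic series/parallel reliability algebra (an illustration only, not the paper's tool or the F-8 system model):

      def series(*rel):
          """All blocks must work: product of block reliabilities."""
          out = 1.0
          for r in rel:
              out *= r
          return out

      def parallel(*rel):
          """At least one redundant block must work."""
          fail = 1.0
          for r in rel:
              fail *= (1.0 - r)
          return 1.0 - fail

      # Hypothetical triplex channel (sensor * computer) feeding a shared actuator
      channel = series(0.999, 0.995)
      print(series(parallel(channel, channel, channel), 0.9999))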

  17. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

    This dissertation presents several studies of methods for assessing electric power system reliability and economy. More precisely, several algorithms for evaluating power system reliability and economy are studied, and two novel algorithms are applied to this field with their simulation results compared against conventional results. As electrical power systems develop towards extra-high voltage, long transmission distances, large capacity, and regional networking, new equipment and electricity market structures have gradually been established, and the consequences of power outages have become increasingly serious. The electrical power system therefore requires the highest possible reliability because of its complexity and security requirements. In this dissertation, the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach allows complex dynamic models to be defined while remaining as readable as conventional methods. The method is applied to the IEEE reliability test system, and the simulation results obtained are close to IEEE experimental data, indicating that it can be used for future studies of system reliability. Besides reliability, modern power systems are expected to be more economical. This dissertation therefore presents a novel evolutionary algorithm, the quantum evolutionary membrane algorithm (QEPS), which combines the concepts of quantum-inspired evolutionary algorithms and membrane computing to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests, and a conventional evolutionary algorithm is used to solve the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, namely the minimum cost of electricity supplied by the wind farm system.

  18. Emergency Severity Index version 4: a valid and reliable tool in pediatric emergency department triage.

    PubMed

    Green, Nicole A; Durani, Yamini; Brecher, Deena; DePiero, Andrew; Loiselle, John; Attia, Magdy

    2012-08-01

    The Emergency Severity Index version 4 (ESI v.4) is the most recently implemented 5-level triage system. The validity and reliability of this triage tool in the pediatric population have not been extensively established. The goals of this study were to assess the validity of ESI v.4 in predicting hospital admission, emergency department (ED) length of stay (LOS), and number of resources utilized, as well as its reliability in a prospective cohort of pediatric patients. The first arm of the study was a retrospective chart review of 780 pediatric patients presenting to a pediatric ED to determine the validity of ESI v.4. Abstracted data included acuity level assigned by the triage nurse using the ESI v.4 algorithm, disposition (admission vs discharge), LOS, and number of resources utilized in the ED. To analyze the validity of ESI v.4, patients were divided into 2 groups for comparison: higher-acuity patients (ESI levels 1, 2, and 3) and lower-acuity patients (ESI levels 4 and 5). Pearson χ² analysis was performed for categorical variables. For continuous variables, we conducted a comparison of means based on parametric distribution of variables. The second arm was a prospective cohort study to determine the interrater reliability of ESI v.4 among and between pediatric triage (PT) nurses and pediatric emergency medicine (PEM) physicians. Three raters (2 PT nurses and 1 PEM physician) independently assigned triage scores to 100 patients; κ and intraclass correlation coefficients were calculated among PT nurses and between the primary PT nurses and physicians. In the validity arm, the distribution of ESI score levels among the 780 cases was as follows: ESI 1: 2 (0.25%); ESI 2: 73 (9.4%); ESI 3: 289 (37%); ESI 4: 251 (32%); and ESI 5: 165 (21%). Hospital admission rates by ESI level were 1: 100%, 2: 42%, 3: 14.9%, 4: 1.2%, and 5: 0.6%. The admission rate of the higher-acuity group (76/364, 21%) was significantly greater than that of the lower-acuity group (4/415, 0.96%), P < 0.001. The mean ED LOS (in minutes) for the higher-acuity group was 257 (SD, 132) versus 143 (SD, 81) in the lower-acuity group, P < 0.001. The higher-acuity group also had significantly greater use of resources than the lower-acuity group, P < 0.001. The percentage of low-acuity patients receiving no resources was 54%, compared with only 26% in the higher-acuity group. Conversely, a greater percentage of higher-acuity patients utilized 2 or more resources than the lower-acuity cohorts, 43% vs 12%, respectively, P < 0.001. In the prospective reliability arm of the study, 15 PT nurses and 8 PEM attending physicians participated in the study; κ among nurses was 0.92 and between the primary triage nurses and physicians was 0.78, P < 0.001. The intraclass correlation coefficient was 0.96 for PT nurses and 0.91 between the primary triage nurse and physicians, P < 0.001. Emergency Severity Index v.4 is a valid predictor of hospital admission, ED LOS, and resource utilization in the pediatric ED population. It is a reliable pediatric triage instrument with high agreement among PT nurses and between PT nurses and PEM physicians.

  19. Reliability Modeling of Microelectromechanical Systems Using Neural Networks

    NASA Technical Reports Server (NTRS)

    Perera, J. Sebastian

    2000-01-01

    Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data, the validation data are used to verify that the neural network provides accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.
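
    A minimal sketch of the train/validate split described above, using scikit-learn on synthetic attribute data (the actual microengine attributes and cycles-to-failure data are not reproduced here):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))          # synthetic component attributes
      y = 100.0 * np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1]) + rng.normal(scale=5.0, size=200)
      # y stands in for cycles to failure; the real study used measured microengine data.

      X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
      model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
      model.fit(X_train, y_train)            # train on the majority partition
      print("validation R^2:", round(model.score(X_val, y_val), 3))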

  20. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
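
    One common mathematical form for such a reliability growth model is the Crow-AMSAA (power-law NHPP) model; the sketch below fits its growth parameter to a hypothetical failure-time list (illustrative numbers only, not shuttle or ISS data):

      import math

      def crow_amsaa_fit(failure_times, T):
          """MLE of the power-law NHPP (Crow-AMSAA) parameters for a test observed
          on [0, T]. beta < 1 indicates reliability growth (decreasing failure rate)."""
          n = len(failure_times)
          beta = n / sum(math.log(T / t) for t in failure_times)
          lam = n / T ** beta
          return beta, lam

      # Hypothetical failure times (hours); later failures are more spread out,
      # which is the signature of a decreasing failure rate.
      times = [40, 110, 300, 700, 1600, 3500]
      beta, lam = crow_amsaa_fit(times, T=5000.0)
      print("beta =", round(beta, 2), "(beta < 1 implies growth); lambda =", lam)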

  1. An approximation formula for a class of Markov reliability models

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    A way of considering a small but often used class of reliability models and algebraically approximating system reliability is shown. The models considered are appropriate for redundant reconfigurable digital control systems that operate for a short period of time without maintenance, and for such systems the method gives a formula in terms of component fault rates, system recovery rates, and system operating time.
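
    A small numerical illustration of the kind of model involved (a generic reconfigurable-duplex Markov chain with fault rate lambda and recovery rate delta, integrated directly rather than through the paper's algebraic approximation):

      import numpy as np
      from scipy.linalg import expm

      lam, delta, T = 1e-4, 3600.0, 10.0   # fault rate (/h), recovery rate (/h), operating hours

      # States: 0 = duplex ok, 1 = recovering after first fault,
      #         2 = simplex after successful recovery, 3 = system failure (absorbing).
      Q = np.array([
          [-2 * lam,        2 * lam,   0.0,   0.0],
          [     0.0, -(delta + lam), delta,   lam],   # 2nd fault during recovery fails the system
          [     0.0,            0.0,  -lam,   lam],
          [     0.0,            0.0,   0.0,   0.0],
      ])
      p0 = np.array([1.0, 0.0, 0.0, 0.0])
      pT = p0 @ expm(Q * T)
      print("P(system failure by T) ~", pT[3])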

  2. Validation and Improvement of Reliability Methods for Air Force Building Systems

    DTIC Science & Technology

    focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force...probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a...Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that

  3. A novel approach for analyzing fuzzy system reliability using different types of intuitionistic fuzzy failure rates of components.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-03-01

    This paper addresses fuzzy system reliability analysis using different types of intuitionistic fuzzy numbers. Until now, the literature on fuzzy system reliability has assumed that the failure rates of all components of a system follow the same type of fuzzy set or intuitionistic fuzzy set. In practical problems, however, this situation rarely occurs. Therefore, in the present paper a new algorithm is introduced to construct the membership and non-membership functions of the fuzzy reliability of a system whose components follow different types of intuitionistic fuzzy failure rates. Functions of intuitionistic fuzzy numbers are calculated to construct the membership and non-membership functions of fuzzy reliability via non-linear programming techniques. Using the proposed algorithm, membership and non-membership functions of fuzzy reliability are constructed for a series system and a parallel system. Our study generalizes various works in the literature. Numerical examples are given to illustrate the proposed algorithm. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
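
    A minimal sketch of the alpha-cut idea behind such constructions (interval arithmetic on hypothetical triangular fuzzy failure rates of a series system; the paper's intuitionistic non-membership functions and non-linear-programming step are not reproduced):

      import math

      def tri_cut(a, b, c, alpha):
          """Alpha-cut [lower, upper] of a triangular fuzzy number (a, b, c)."""
          return a + alpha * (b - a), c - alpha * (c - b)

      def series_reliability_cut(fuzzy_rates, t, alpha):
          """Alpha-cut of the fuzzy reliability R = exp(-sum(lambda_i) * t) of a series system.

          Reliability decreases monotonically in each failure rate, so the interval
          endpoints come from the opposite endpoints of the rate intervals.
          """
          lows = [tri_cut(*rate, alpha)[0] for rate in fuzzy_rates]
          highs = [tri_cut(*rate, alpha)[1] for rate in fuzzy_rates]
          return math.exp(-sum(highs) * t), math.exp(-sum(lows) * t)

      rates = [(0.8e-4, 1.0e-4, 1.3e-4), (0.4e-4, 0.5e-4, 0.7e-4)]   # hypothetical /hour
      for alpha in (0.0, 0.5, 1.0):
          print(alpha, series_reliability_cut(rates, t=1000.0, alpha=alpha))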

  4. Reliability of Phase Velocity Measurements of Flexural Acoustic Waves in the Human Tibia In-Vivo.

    PubMed

    Vogl, Florian; Schnüriger, Karin; Gerber, Hans; Taylor, William R

    2016-01-01

    Axial-transmission acoustics has been shown to be a promising technique for measuring individual bone properties and detecting bone pathologies. With the ultimate goal being the in-vivo application of such systems, quantifying the key aspects governing reliability is crucial to bring this method towards clinical use. This work presents a systematic reliability study quantifying the sources of variability, and their magnitudes, for in-vivo measurements using axial-transmission acoustics. 42 healthy subjects were measured by an experienced operator twice per week over a four-month period, resulting in over 150000 wave measurements. In a complementary study to assess the influence of different operators performing the measurements, 10 novice operators were trained, and each measured 5 subjects on a single occasion, using the same measurement protocol as in the first part of the study. The estimated standard error for the measurement protocol used to collect the study data was ∼ 17 m/s (∼ 4% of the grand mean) and the index of dependability, as a measure of reliability, was Φ = 0.81. It was shown that the method is suitable for multi-operator use and that the reliability can be improved efficiently by additional measurements with device repositioning, while additional measurements without repositioning cannot improve the reliability substantially. Phase velocity values were found to be significantly higher in males than in females (p < 10⁻⁵) and an intra-class correlation coefficient of r = 0.70 was found between the legs of each subject. The high reliability of this non-invasive approach and its intrinsic sensitivity to mechanical properties opens perspectives for the rapid and inexpensive clinical assessment of bone pathologies, as well as for monitoring programmes without any radiation exposure for the patient.
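
    The index of dependability reported there comes from generalizability theory; a minimal sketch of its variance-component form (hypothetical variance components, not the study's estimates) shows why averaging repeated measurements raises the coefficient:

      def phi_dependability(var_subject, var_error_components, n_measurements):
          """Index of dependability Phi = var_subject / (var_subject + absolute error),
          where the absolute error variance shrinks as 1/n with repeated measurements."""
          abs_error = sum(var_error_components) / n_measurements
          return var_subject / (var_subject + abs_error)

      # Hypothetical variance components in (m/s)^2: repositioning and residual error
      components = [150.0, 60.0]
      print(round(phi_dependability(850.0, components, n_measurements=1), 2))
      print(round(phi_dependability(850.0, components, n_measurements=3), 2))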

  5. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has become a widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to accurately evaluate the reliability of a digital RPS. Methods based on probability estimation carry uncertainties and cannot reflect the reliability status of the RPS dynamically or support maintenance and troubleshooting. In this paper, a reliability quantitative analysis method based on extenics is proposed for the digital RPS (safety-critical), by which the relationship between the reliability and response time of the RPS is constructed. The reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method as an example. The results show that the proposed method is capable of estimating the RPS reliability effectively and providing support for maintenance and troubleshooting of the digital RPS.

  6. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.

  7. Measuring competence in endoscopic sinus surgery.

    PubMed

    Syme-Grant, J; White, P S; McAleer, J P G

    2008-02-01

    Competence based education is currently being introduced into higher surgical training in the UK. Valid and reliable performance assessment tools are essential to ensure competencies are achieved. No such tools have yet been reported in the UK literature. We sought to develop and pilot test an Endoscopic Sinus Surgery Competence Assessment Tool (ESSCAT). The ESSCAT was designed for in-theatre assessment of higher surgical trainees in the UK. The ESSCAT rating matrix was developed through task analysis of ESS procedures. All otolaryngology consultants and specialist registrars in Scotland were given the opportunity to contribute to its refinement. Two cycles of in-theatre testing were used to ensure utility and gather quantitative data on validity and reliability. Videos of trainees performing surgery were used in establishing inter-rater reliability. National consultation, the consensus derived minimum standard of performance, Cronbach's alpha = 0.89 and demonstration of trainee learning (p = 0.027) during the in vivo application of the ESSCAT suggest a high level of validity. Inter-rater reliability was moderate for competence decisions (Cohen's Kappa = 0.5) and good for total scores (Intra-Class Correlation Co-efficient = 0.63). Intra-rater reliability was good for both competence decisions (Kappa = 0.67) and total scores (Kendall's Tau-b = 0.73). The ESSCAT generates a valid and reliable assessment of trainees' in-theatre performance of endoscopic sinus surgery. In conjunction with ongoing evaluation of the instrument we recommend the use of the ESSCAT in higher specialist training in otolaryngology in the UK.

  8. Evaluation methodologies for an advanced information processing system

    NASA Technical Reports Server (NTRS)

    Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.

    1984-01-01

    The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of this law such as the mean and variance provide measures of merit in the AIPS performability evaluations.

  9. Flight control electronics reliability/maintenance study

    NASA Technical Reports Server (NTRS)

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.

    1977-01-01

    Collection and analysis of data are reported that concern the reliability and maintenance experience of flight control system electronics currently in use on passenger carrying jet aircraft. Two airlines' B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.

  10. Reliability and Validity of the Arthroscopic International Cartilage Repair Society Classification System: Correlation With Histological Assessment of Depth.

    PubMed

    Dwyer, Tim; Martin, C Ryan; Kendra, Rita; Sermer, Corey; Chahal, Jaskarndip; Ogilvie-Harris, Darrell; Whelan, Daniel; Murnaghan, Lucas; Nauth, Aaron; Theodoropoulos, John

    2017-06-01

    To determine the interobserver reliability of the International Cartilage Repair Society (ICRS) grading system of chondral lesions in cadavers, to determine the intraobserver reliability of the ICRS grading system comparing arthroscopy and video assessment, and to compare the arthroscopic ICRS grading system with histological grading of lesion depth. Eighteen lesions in 5 cadaveric knee specimens were arthroscopically graded by 7 fellowship-trained arthroscopic surgeons using the ICRS classification system. The arthroscopic video of each lesion was sent to the surgeons 6 weeks later for repeat grading and determination of intraobserver reliability. Lesions were biopsied, and the depth of the cartilage lesion was assessed. Reliability was calculated using intraclass correlations. The interobserver reliability was 0.67 (95% confidence interval, 0.5-0.89) for the arthroscopic grading, and the intraobserver reliability with the video grading was 0.8 (95% confidence interval, 0.67-0.9). A high correlation was seen between the arthroscopic grading of depth and the histological grading of depth (0.91); on average, surgeons graded lesions using arthroscopy a mean of 0.37 (range, 0-0.86) deeper than the histological grade. The arthroscopic ICRS classification system has good interobserver and intraobserver reliability. A high correlation with histological assessment of depth provides evidence of validity for this classification system. As cartilage lesions are treated on the basis of the arthroscopic ICRS classification, it is important to ascertain the reliability and validity of this method. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  11. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend to even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.

  12. Trust and reliance on an automated combat identification system.

    PubMed

    Wang, Lu; Jamieson, Greg A; Hollands, Justin G

    2009-06-01

    We examined the effects of aid reliability and reliability disclosure on human trust in and reliance on a combat identification (CID) aid. We tested whether trust acts as a mediating factor between belief in and reliance on a CID aid. Individual CID systems have been developed to reduce friendly fire incidents. However, these systems cannot positively identify a target that does not have a working transponder. Therefore, when the feedback is "unknown", the target could be hostile, neutral, or friendly. Soldiers have difficulty relying on this type of imperfect automation appropriately. In manual and aided conditions, 24 participants completed a simulated CID task. The reliability of the aid varied within participants, half of whom were told the aid reliability level. We used the difference in response bias values across conditions to measure automation reliance. Response bias varied more appropriately with the aid reliability level when it was disclosed than when not. Trust in aid feedback correlated with belief in aid reliability and reliance on aid feedback; however, belief was not correlated with reliance. To engender appropriate reliance on CID systems, users should be made aware of system reliability. The findings can be applied to the design of information displays for individual CID systems and soldier training.
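
    Response bias in this kind of task is commonly quantified with the signal-detection criterion c; a hedged sketch with hypothetical hit/false-alarm counts (standard formula, not the study's data):

      from scipy.stats import norm

      def criterion_c(hits, misses, false_alarms, correct_rejections):
          """Signal-detection response bias c = -(z(H) + z(FA)) / 2.

          A log-linear correction (+0.5 per cell) keeps the z-scores finite
          when a cell count is zero.
          """
          h = (hits + 0.5) / (hits + misses + 1.0)
          fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return -0.5 * (norm.ppf(h) + norm.ppf(fa))

      # Hypothetical aided vs. manual blocks; a shift in c across reliability
      # conditions is the kind of reliance measure the study describes.
      print("aided :", round(criterion_c(40, 10, 5, 45), 2))
      print("manual:", round(criterion_c(35, 15, 15, 35), 2))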

  13. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  14. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions of dollars in repairs and losses. To address these reliability concerns, power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection point between the power providers and the consumers, where most electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and to measure the potential improvement of its reliability. The software used in this work is DISREL, intelligent power distribution software developed by General Reliability Co.
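
    The distribution indices named above have simple definitional forms; the sketch below computes SAIFI and SAIDI from made-up outage records (EUE additionally needs unserved-energy data, so it is omitted; these are not the IEEE 34-node results):

      def distribution_indices(outages, total_customers):
          """SAIFI, SAIDI and CAIDI from a list of interruption events.

          outages: list of (customers_interrupted, duration_minutes) tuples.
          SAIFI = customer interruptions / customers served
          SAIDI = customer-minutes interrupted / customers served
          """
          ci = sum(c for c, _ in outages)
          cmi = sum(c * d for c, d in outages)
          saifi = ci / total_customers
          saidi = cmi / total_customers
          caidi = saidi / saifi if saifi else 0.0
          return saifi, saidi, caidi

      events = [(1200, 90.0), (300, 45.0), (2500, 15.0)]     # hypothetical feeder outages
      print(distribution_indices(events, total_customers=10000))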

  15. Reliability of McConnell's classification of patellar orientation in symptomatic and asymptomatic subjects.

    PubMed

    Watson, C J; Propps, M; Galt, W; Redding, A; Dobbs, D

    1999-07-01

    Test-retest reliability study with blinded testers. To determine the intratester reliability of the McConnell classification system and to determine whether the intertester reliability of this system would be improved by one-on-one training of the testers, increasing the variability and numbers of subjects, blinding the testers to the absence or presence of patellofemoral pain syndrome, and adhering to the McConnell classification system as it is taught in the "McConnell Patellofemoral Treatment Plan" continuing education course. The McConnell classification system is currently used by physical therapy clinicians to quantify static patellar orientation. The measurements generated from this system purportedly guide the therapist in the application of patellofemoral tape and in assessment of the efficacy of treatment interventions on changing patellar orientation. Fifty-six subjects (age range, 21-65 years) provided a total of 101 knees for assessment. Seventy-six knees did not produce symptoms. A researcher who did not participate in the measuring process determined that 17 subjects had patellofemoral pain syndrome in 25 knees. Two testers concurrently measured static patellar orientation (anterior/posterior and medial/lateral tilt, medial/lateral glide, and patellar rotation) on subjects, using the McConnell classification system. Repeat measures were performed 3-7 days later. A kappa (κ) statistic was used to assess the degree of agreement within each tester and between testers. The kappa coefficients for intratester reliability varied from -0.06 to 0.35. Intertester reliability ranged from -0.03 to 0.19. The McConnell classification system, in its current form, does not appear to be very reliable. Intratester reliability ranged from poor to fair, and intertester reliability was poor to slight. This system should not be used as a measurement tool or as a basis for treatment decisions.
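
    The kappa statistic used in that study has a compact form; a sketch of generic Cohen's kappa on a hypothetical 2x2 agreement table (not the study's ratings):

      def cohens_kappa(table):
          """Cohen's kappa from a square agreement table (rows: rater A, cols: rater B)."""
          n = sum(sum(row) for row in table)
          k = len(table)
          p_obs = sum(table[i][i] for i in range(k)) / n
          p_exp = sum(
              (sum(table[i]) / n) * (sum(table[j][i] for j in range(k)) / n)
              for i in range(k)
          )
          return (p_obs - p_exp) / (1.0 - p_exp)

      # Hypothetical agreement on "lateral tilt present / absent" for 100 knees
      print(round(cohens_kappa([[40, 15], [20, 25]]), 2))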

  16. A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques

    NASA Astrophysics Data System (ADS)

    Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.

    2017-05-01

    With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.

  17. Power System Reliability Assessment by Analysing Voltage Dips on the Blue Horizon Bay 22KV Overhead Line in the Nelson Mandela Bay Municipality

    NASA Astrophysics Data System (ADS)

    Lamour, B. G.; Harris, R. T.; Roberts, A. G.

    2010-06-01

    Power system reliability problems are very difficult to solve because the power systems are complex and geographically widely distributed and influenced by numerous unexpected events. It is therefore imperative to employ the most efficient optimization methods in solving the problems relating to reliability of the power system. This paper presents a reliability analysis and study of the power interruptions resulting from severe power outages in the Nelson Mandela Bay Municipality (NMBM), South Africa and includes an overview of the important factors influencing reliability, and methods to improve the reliability. The Blue Horizon Bay 22 kV overhead line, supplying a 6.6 kV residential sector has been selected. It has been established that 70% of the outages, recorded at the source, originate on this feeder.

  18. Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems

    NASA Technical Reports Server (NTRS)

    Shu, Wei Wennie; Porter, John

    2000-01-01

    The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.

  19. 75 FR 14097 - Revision to Electric Reliability Organization Definition of Bulk Electric System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-24

    ... Commission 18 CFR Part 40 [Docket No. RM09-18-000; 130 FERC ] 61,204] Revision to Electric Reliability... Reliability Organization (ERO) to revise its definition of the term ``bulk electric system'' to include all... compliance with mandatory Reliability Standards. The Commission believes that a 100 kV threshold for...

  20. A Systems Engineering Approach to Aircraft Kinetic Kill Countermeasures Technology: Development of an Active Aircraft Defense System for the C/KC-135 Aircraft. Volume 1

    DTIC Science & Technology

    1995-12-01

    comparison among all candidate systems, the reliability of each aircraft defense would evenly affect the evaluation of each system, and would have the...more reliable data. Obviously, the more reliable and accurate the data evaluated through the hierarchy chart, the better the results.

  1. A reliable in vitro fruiting system for armillaria mellea for evaluation of agrobacterium tumefaciens transformation vectors

    USDA-ARS?s Scientific Manuscript database

    Armillaria mellea is a serious pathogen of horticultural and agricultural systems in Europe and North America. The lack of a reliable in vitro fruiting system has hindered research, and necessitated dependence on intermittently available wild-collected basidiospores. Here we describe a reliable, rep...

  2. ADVANCEMENT OF THE RHIC BEAM ABORT KICKER SYSTEM.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Ahrens, L.; Mi, J.; Oerter, B.; Sandberg, J.; Warburton, D.

    2003-05-12

    As one of the most critical systems for RHIC operation, the beam abort kicker system has to be highly available, reliable, and stable over the entire operating range. Throughout RHIC commissioning and operation, consistent effort has been spent coping with immediate issues as well as inherited design issues. Major design changes have been implemented to achieve higher operating voltage, longer high-voltage hold-off time, fast retriggering and redundant triggering, improved system protection, etc. A recent system test demonstrated for the first time that both the blue ring and yellow ring beam abort systems have achieved more than 24 hours of hold-off time at the desired operating voltage. In this paper, we report on breakdown, thyratron reverse arcing, and the construction of a fast re-trigger system to reduce beam spread in the event of a premature discharge.

  3. Digital Low Level RF Systems for Fermilab Main Ring and Tevatron

    NASA Astrophysics Data System (ADS)

    Chase, B.; Barnes, B.; Meisner, K.

    1997-05-01

    At Fermilab, a new Low Level RF system is successfully installed and operating in the Main Ring. Installation is proceeding for a Tevatron system. This upgrade replaces aging CAMAC/NIM components for an increase in accuracy, reliability, and flexibility. These VXI systems are based on a custom three channel direct digital synthesizer(DDS) module. Each synthesizer channel is capable of independent or ganged operation for both frequency and phase modulation. New frequency and phase values are computed at a 100kHz rate on the module's Analog Devices ADSP21062 (SHARC) digital signal processor. The DSP concurrently handles feedforward, feedback, and beam manipulations. Higher level state machines and the control system interface are handled at the crate level using the VxWorks operating system. This paper discusses the hardware, software and operational aspects of these LLRF systems.

  4. B-52 stability augmentation system reliability

    NASA Technical Reports Server (NTRS)

    Bowling, T. C.; Key, L. W.

    1976-01-01

    The B-52 SAS (Stability Augmentation System) was developed and retrofitted to nearly 300 aircraft. It actively controls B-52 structural bending, provides improved yaw and pitch damping through sensors and electronic control channels, and puts complete reliance on hydraulic control power for rudder and elevators. The system has experienced over 300,000 flight hours and has exhibited service reliability comparable to the results of the reliability test program. Development experience points out numerous lessons with potential application in the mechanization and development of advanced technology control systems of high reliability.

  5. Operator adaptation to changes in system reliability under adaptable automation.

    PubMed

    Chavaillaz, Alain; Sauer, Juergen

    2017-09-01

    This experiment examined how operators coped with a change in system reliability between training and testing. Forty participants were trained for 3 h on a complex process control simulation modelling six levels of automation (LOA). In training, participants either experienced a high- (100%) or low-reliability system (50%). The impact of training experience on operator behaviour was examined during a 2.5 h testing session, in which participants either experienced a high- (100%) or low-reliability system (60%). The results showed that most operators did not often switch between LOA. Most chose an LOA that relieved them of most tasks but maintained their decision authority. Training experience did not have a strong impact on the outcome measures (e.g. performance, complacency). Low system reliability led to decreased performance and self-confidence. Furthermore, complacency was observed under high system reliability. Overall, the findings suggest benefits of adaptable automation because it accommodates different operator preferences for LOA. Practitioner Summary: The present research shows that operators can adapt to changes in system reliability between training and testing sessions. Furthermore, it provides evidence that each operator has his/her preferred automation level. Since this preference varies strongly between operators, adaptable automation seems to be suitable to accommodate these large differences.

  6. 30% CPV Module Milestone

    NASA Astrophysics Data System (ADS)

    Gordon, Robert; Kinsey, Geoff; Nayaak, Adi; Garboushian, Vahan

    2010-10-01

    Concentrating photovoltaics (CPV) has held out the promise of low-cost solar electricity for several decades now. Steady progress towards this goal in the 1980s and 1990s gradually produced more efficient and reliable systems. System efficiency is regarded as the largest factor in lowering the electricity cost, and the relatively recent advent of the terrestrial multi-junction solar cell has pressed this race forward dramatically. CPV systems are now exhibiting impressive AC field efficiencies of 25% and more, approximately twice that of the best flat-plate systems available today. Amonix Inc. has just tested their latest-generation multi-junction module design, achieving over 31% DC efficiency at near PVUSA test conditions. Incorporating this design into their next MegaModule is forthcoming, but the expected AC system field efficiency should be significantly higher than current 25% levels.

  7. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  8. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, John R.; Stolz, Christopher J.

    1993-08-01

    Laser system performance and reliability depends on the related performance and reliability of the optical components which define the cavity and transport subsystems. High-average-power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.

  9. High-efficiency high-reliability optical components for a large, high-average-power visible laser system

    NASA Astrophysics Data System (ADS)

    Taylor, J. R.; Stolz, C. J.

    1992-12-01

    Laser system performance and reliability depends on the related performance and reliability of the optical components which define the cavity and transport subsystems. High-average-power and long transport lengths impose specific requirements on component performance. The complexity of the manufacturing process for optical components requires a high degree of process control and verification. Qualification has proven effective in ensuring confidence in the procurement process for these optical components. Issues related to component reliability have been studied and provide useful information to better understand the long term performance and reliability of the laser system.

  10. A novel computer system for the evaluation of nasolabial morphology, symmetry and aesthetics after cleft lip and palate treatment. Part 1: General concept and validation.

    PubMed

    Pietruski, Piotr; Majak, Marcin; Debski, Tomasz; Antoszewski, Boguslaw

    2017-04-01

    The need for a widely accepted method suitable for a multicentre quantitative evaluation of facial aesthetics after surgical treatment of cleft lip and palate (CLP) has been emphasized for years. The aim of this study was to validate a novel computer system 'Analyse It Doc' (A.I.D.) as a tool for objective anthropometric analysis of the nasolabial region. An indirect anthropometric analysis of facial photographs was conducted with the A.I.D. system and Adobe Photoshop/ImageJ software. Intra-rater and inter-rater reliability and the time required for the analysis were estimated separately for each method and compared. Analysis with A.I.D. system was nearly 10-fold faster than that with the reference evaluation method. The A.I.D. system provided strong inter-rater and intra-rater correlations for linear, angular and area measurements of the nasolabial region, as well as a significantly higher accuracy and reproducibility of angular measurements in submental view. No statistically significant inter-method differences were found for other measurements. The hereby presented novel computer system is suitable for simple, time-efficient and reliable multicenter photogrammetric analyses of the nasolabial region in CLP patients and healthy subjects. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  11. [Study of the relationship between human quality and reliability].

    PubMed

    Long, S; Wang, C; Wang, L i; Yuan, J; Liu, H; Jiao, X

    1997-02-01

    To clarify the relationship between human quality and reliability, 1925 experiments in 20 subjects were carried out to study the relationship between disposition character, digital memory, graphic memory, multi-reaction time, and education level and simulated aircraft operation. The effects of task difficulty and environmental factors on human reliability were also studied. The results showed that human quality can be predicted and evaluated through experimental methods: the better the human quality, the higher the human reliability.

  12. To the question about the states of workability for automatic control systems with complicated structure

    NASA Astrophysics Data System (ADS)

    Kuznetsov, P. A.; Kovalev, I. V.; Losev, V. V.; Kalinin, A. O.; Murygin, A. V.

    2016-04-01

    The article discusses the reliability of automated control systems and analyzes approaches to classifying their health states. This can be the traditional binary approach, which operates with the concept of "serviceability", or other ways of estimating the system state. The article presents one such option, which provides a selective evaluation of component reliability within the overall system. Descriptions of various automatic control systems and their elements are introduced from the point of view of health and risk, together with a mathematical method for determining how an object transitions from state to state; the states differ from each other in the implementation of the objective function. The interplay of elements in different states and the aggregate state of elements connected in series or in parallel are explored. Tables of the various logic states and the principles of their calculation for series and parallel connections are given. Through simulation, the proposed approach is illustrated by finding the probability of the system reaching given states for parallel and serially connected elements with different probabilities of moving from state to state. In general, the material of this article will be useful for analyzing the reliability of automated control systems and for engineering highly reliable systems. The proposed mechanism for determining the state of the system provides more detailed information about it and allows a selective approach to the reliability of the system as a whole. Such detailed results from assessing the reliability of automated control systems allow the engineer to make an informed decision when designing means of improving reliability.

  13. Interrater reliability of a Pilates movement-based classification system.

    PubMed

    Yu, Kwan Kenny; Tulloch, Evelyn; Hendrick, Paul

    2015-01-01

    To determine the interrater reliability for identification of a specific movement pattern using a Pilates Classification system. Videos of 5 subjects performing specific movement tasks were sent to raters trained in the DMA-CP classification system. Ninety-six raters completed the survey. Interrater reliability for the detection of a directional bias was excellent (Pi = 0.92 and K(free) = 0.89). Interrater reliability for classifying an individual into a specific subgroup was moderate (Pi = 0.64, K(free) = 0.55); however, raters who had completed levels 1-4 of the DMA-CP training and reported using the assessment daily demonstrated excellent reliability (Pi = 0.89 and K(free) = 0.87). The reliability of the classification system demonstrated almost perfect agreement in determining the existence of a specific movement pattern and in classifying into a subgroup for experienced raters. There was a trend toward greater reliability with increased levels of training and experience of the raters. Copyright © 2014 Elsevier Ltd. All rights reserved.
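
    The free-marginal kappa (K(free)) reported above can be computed from a subjects-by-raters table of categorical ratings. The sketch below is an illustration of that statistic; the ratings and the number of categories are invented, not the study data.

```python
# Free-marginal multirater kappa: chance agreement is 1/k for k categories.
from collections import Counter

def free_marginal_kappa(ratings, k):
    """ratings: list of per-subject lists of labels; k: number of categories."""
    per_subject = []
    for subject in ratings:
        n = len(subject)
        pairs_agree = sum(c * (c - 1) for c in Counter(subject).values())
        per_subject.append(pairs_agree / (n * (n - 1)))
    p_obs = sum(per_subject) / len(per_subject)
    p_exp = 1.0 / k
    return (p_obs - p_exp) / (1.0 - p_exp)

# 4 subjects rated by 5 raters into 3 hypothetical subgroups
data = [["A", "A", "A", "B", "A"], ["B", "B", "B", "B", "B"],
        ["C", "C", "A", "C", "C"], ["A", "A", "A", "A", "A"]]
print(round(free_marginal_kappa(data, k=3), 3))   # 0.7
```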

  14. Laser System Reliability

    DTIC Science & Technology

    1977-03-01

    system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade-offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade-offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at

  15. Tracking wildlife by satellite: Current systems and performance

    USGS Publications Warehouse

    Harris, Richard B.; Fancy, Steven G.; Douglas, David C.; Garner, Gerald W.; Amstrup, Steven C.; McCabe, Thomas R.; Pank, Larry F.

    1990-01-01

    Since 1984, the U.S. Fish and Wildlife Service has used the Argos Data Collection and Location System (DCLS) and Tiros-N series satellites to monitor movements and activities of 10 species of large mammals in Alaska and the Rocky Mountain region. Reliability of the entire system was generally high. Data were received from instrumented caribou (Rangifer tarandus) during 91% of 318 possible transmitter-months. Transmitters failed prematurely on 5 of 45 caribou, 2 of 6 muskoxen (Ovibos moschatus), and 1 of 2 gray wolves (Canis lupus). Failure rates were considerably higher for polar (Ursus maritimus) and brown (U. arctos) bears than for caribou (Rangifer tarandus). Efficiency of gathering both locational and sensor data was related to both latitude and topography. Mean error of locations was estimated to be 954 m (median = 543 m) for transmitters on captive animals; 90% of locations were <1,732 m from the true location. Argos's new location class zero processing provided many more locations than normal processing, but mean location error was much higher than for locations estimated normally. Locations were biased when animals were at elevations other than those used in Argos's calculations. Long-term and short-term indices of animal activity were developed and evaluated. For several species, the long-term index was correlated with movement patterns and the short-term index was calibrated to specific activity categories (e.g., lying, feeding, walking). Data processing and sampling considerations were evaluated. Algorithms for choosing the most reliable among a series of reported locations were investigated. Applications of satellite telemetry data and problems with lack of independence among locations are discussed.

  16. The design and performance of a 2.5-GHz telecommand link for wireless biomedical monitoring.

    PubMed

    Crumley, G C; Evans, N E; Scanlon, W G; Burns, J B; Trouton, T G

    2000-12-01

    This paper details the implementation and operational performance of a minimum-power 2.45-GHz pulse receiver and a companion on-off keyed transmitter for use in a semi-active, duplex RF biomedical transponder. A 50-ohm microstrip stub-matched zero-bias diode detector forms the heart of a body-worn receiver that has a CMOS baseband amplifier consuming 20 microA from +3 V and achieves a tangential sensitivity of -53 dBm. The base transmitter generates 0.5 W of peak RF output power into 50 ohms. Both linear and right-hand circularly polarized Tx-Rx antenna sets were employed in system reliability trials carried out in a hospital Coronary Care Unit. For transmitting antenna heights between 0.3 and 2.2 m above floor level, transponder interrogations were 95% reliable within the 67-m2 area of the ward, falling to an average of 46% in the surrounding rooms and corridors. Overall, the circular antenna set gave the higher reliability and lower propagation power decay index.

  17. Salivary Cortisol Protocol Adherence and Reliability by Sociodemographic Features: the Multi-Ethnic Study of Atherosclerosis

    PubMed Central

    Golden, Sherita Hill; Sánchez, Brisa N.; DeSantis, Amy S.; Wu, Meihua; Castro, Cecilia; Seeman, Teresa E.; Tadros, Sameh; Shrager, Sandi; Diez Roux, Ana V.

    2014-01-01

    Collection of salivary cortisol has become increasingly popular in large population-based studies. However, the impact of protocol compliance on day-to-day reliabilities of measures, and the extent to which reliabilities differ systematically according to socio-demographic characteristics, has not been well characterized in large-scale population-based studies to date. Using data on 935 men and women from the Multi-ethnic Study of Atherosclerosis, we investigated whether sampling protocol compliance differs systematically according to socio-demographic factors and whether compliance was associated with cortisol estimates, as well as whether associations of cortisol with both compliance and socio-demographic characteristics were robust to adjustments for one another. We further assessed the day-to-day reliability for cortisol features and the extent to which reliabilities vary according to socio-demographic factors and sampling protocol compliance. Overall, we found higher compliance among persons with higher levels of income and education. Lower compliance was significantly associated with a less pronounced cortisol awakening response (CAR) but was not associated with any other cortisol features, and adjustment for compliance did not affect associations of socio-demographic characteristics with cortisol. Reliability was higher for area under the curve (AUC) and wake up values than for other features, but generally did not vary according to socio-demographic characteristics, with few exceptions. Our findings regarding intra-class correlation coefficients (ICCs) support prior research indicating that multiple day collection is preferable to single day collection, particularly for CAR and slopes, more so than wakeup and AUC. There were few differences in reliability by socio-demographic characteristics. Thus, it is unlikely that group-specific sampling protocols are warranted. PMID:24703168

  18. Validation of the Dementia Care Assessment Packet-Instrumental Activities of Daily Living

    PubMed Central

    Lee, Seok Bum; Park, Jeong Ran; Yoo, Jeong-Hwa; Park, Joon Hyuk; Lee, Jung Jae; Yoon, Jong Chul; Jhoo, Jin Hyeong; Lee, Dong Young; Woo, Jong Inn; Han, Ji Won; Huh, Yoonseok; Kim, Tae Hui

    2013-01-01

    Objective We aimed to evaluate the psychometric properties of the IADL measure included in the Dementia Care Assessment Packet (DCAP-IADL) in dementia patients. Methods The study involved 112 dementia patients and 546 controls. The DCAP-IADL was scored in two ways: observed score (OS) and predicted score (PS). The reliability of the DCAP-IADL was evaluated by testing its internal consistency, inter-rater reliability and test-retest reliability. Discriminant validity was evaluated by comparing the mean OS and PS between dementia patients and controls by ANCOVA. Pearson or Spearman correlation analysis was performed with other instruments to assess concurrent validity. Receiver operating characteristic curve analysis was performed to examine diagnostic accuracy. Results Cronbach's α coefficients of the DCAP-IADL were above 0.7. The values in dementia patients were much higher (OS=0.917, PS=0.927), indicating excellent degrees of internal consistency. Inter-rater reliabilities and test-retest reliabilities were statistically significant (p<0.05). PS exhibited higher reliabilities than OS. The mean OS and PS of dementia patients were significantly higher than those of the non-demented group after controlling for age, sex and education level. The DCAP-IADL was significantly correlated with other IADL instruments and the MMSE-KC (p<0.001). Areas under the curves of the DCAP-IADL were above 0.9. Conclusion The DCAP-IADL is a reliable and valid instrument for evaluating instrumental activities of daily living in the elderly, and may also be useful for screening for dementia. Moreover, administering the PS may enable the DCAP-IADL to overcome the differences in gender, culture and lifestyle that hinder accurate evaluation of the elderly in previous IADL instruments. PMID:24302946
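
    Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from an item-score matrix. The sketch below uses invented scores purely for illustration.

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 1],
                   [3, 3, 4, 3]])
print(round(cronbach_alpha(scores), 3))
```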

  19. Joint mobilization forces and therapist reliability in subjects with knee osteoarthritis

    PubMed Central

    Tragord, Bradley S; Gill, Norman W; Silvernail, Jason L; Teyhen, Deydre S; Allison, Stephen C

    2013-01-01

    Objectives: This study determined biomechanical force parameters and reliability among clinicians performing knee joint mobilizations. Methods: Sixteen subjects with knee osteoarthritis and six therapists participated in the study. Forces were recorded using a capacitive-based pressure mat for three techniques at two grades of mobilization, each with two trials of 15 seconds. Dosage (force–time integral), amplitude, and frequency were also calculated. Analysis of variance was used to analyze grade differences, intraclass correlation coefficients determined reliability, and correlations assessed force associations with subject and rater variables. Results: Grade IV mobilizations produced higher mean forces (P<0.001) and higher dosage (P<0.001), while grade III produced higher maximum forces (P = 0.001). Grade III forces (Newtons) by technique (mean, maximum) were: extension 48, 81; flexion 41, 68; and medial glide 21, 34. Grade IV forces (Newtons) by technique (mean, maximum) were: extension 58, 78; flexion 44, 60; and medial glide 22, 30. Frequency (Hertz) ranged between 0.9–1.1 (grade III) and 1.4–1.6 (grade IV). Intra-clinician reliability was excellent (>0.90). Inter-clinician reliability was moderate for force and dosage, and poor for amplitude and frequency. Discussion: Force measurements were consistent with previously reported ranges and clinical constructs. Grade III and grade IV mobilizations can be distinguished from each other with differences for force and frequency being small, and dosage and amplitude being large. Intra-clinician reliability was excellent for all biomechanical parameters and inter-clinician reliability for dosage, the main variable of clinical interest, was moderate. This study quantified the applied forces among multiple clinicians, which may help determine optimal dosage and standardize care. PMID:24421632
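
    The force descriptors used above (mean force, maximum force, dosage as the force-time integral, and oscillation frequency) can be derived from a sampled force trace. The sketch below uses a synthetic ~1 Hz trace with assumed values, not the study's instrumentation data.

```python
import numpy as np

fs = 100.0                                       # assumed sampling rate, Hz
t = np.arange(0, 15.0, 1.0 / fs)                 # one 15-second trial
force = 48 + 30 * np.sin(2 * np.pi * 1.0 * t)    # synthetic grade-III-like trace, N

mean_force = force.mean()
max_force = force.max()
dosage = np.sum(0.5 * (force[1:] + force[:-1])) / fs   # trapezoidal force-time integral, N*s

spectrum = np.abs(np.fft.rfft(force - mean_force))
freqs = np.fft.rfftfreq(force.size, d=1.0 / fs)
dominant_hz = freqs[spectrum.argmax()]

print(f"mean {mean_force:.1f} N, max {max_force:.1f} N, "
      f"dosage {dosage:.0f} N*s, frequency {dominant_hz:.2f} Hz")
```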

  20. Maximally reliable spatial filtering of steady state visual evoked potentials.

    PubMed

    Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M

    2015-04-01

    Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis". Copyright © 2015 Elsevier Inc. All rights reserved.
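
    The core idea, maximizing trial-to-trial covariance relative to within-trial covariance via a generalized eigenvalue problem, can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' released MATLAB implementation, and the data are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 20, 32, 500
# synthetic data: one reproducible source mixed into the channels, plus noise
source = np.sin(np.linspace(0, 20 * np.pi, n_samples))
mixing = rng.normal(size=n_channels)
data = np.stack([np.outer(mixing, source)
                 + rng.normal(scale=2.0, size=(n_channels, n_samples))
                 for _ in range(n_trials)])

within = np.zeros((n_channels, n_channels))
between = np.zeros((n_channels, n_channels))
for i in range(n_trials):
    within += data[i] @ data[i].T                # within-trial covariance (pooled)
    for j in range(n_trials):
        if i != j:
            between += data[i] @ data[j].T       # trial-to-trial covariance
within /= n_trials
between /= n_trials * (n_trials - 1)
between = (between + between.T) / 2              # symmetrize

eigvals, W = eigh(between, within)               # generalized eigendecomposition
order = np.argsort(eigvals)[::-1]                # columns of W = spatial filters
print("reliability spectrum (top 5):", np.round(eigvals[order][:5], 3))
```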

  1. Requirements and approach for a space tourism launch system

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000/passenger round-trip, goals should be about $50/pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to satisfy the traditional spacelift market is also shown.
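
    The quoted cost goals are mutually consistent if a passenger plus baggage is taken as roughly 300 lb, an assumption inferred here from the numbers rather than stated in the abstract; a trivial check:

```python
# Back-of-the-envelope check of the round-trip price goals (assumed 300 lb
# per passenger plus baggage; cost-per-pound figures are from the abstract).
def ticket_price(cost_per_lb, passenger_plus_baggage_lb=300):
    return cost_per_lb * passenger_plus_baggage_lb

for cost in (240, 50):
    print(f"${cost}/lb -> ${ticket_price(cost):,.0f} per passenger round trip")
```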

  2. Teaching DSM-III to clinicians. Some problems of the DSM-III system reducing reliability, using the diagnosis and classification of depressive disorders as an example.

    PubMed

    Malt, U F

    1986-01-01

    Experiences from teaching DSM-III to more than three hundred Norwegian psychiatrists and clinical psychologists suggest that reliable DSM-III diagnoses can be achieved within a few hours of training, with reference only to the decision trees and the diagnostic criteria. The diagnoses provided are more reliable than the corresponding ICD diagnoses with which the participants were more familiar. The three main sources of reduced reliability of DSM-III diagnoses are: poor knowledge of the criteria, which is often connected with failure to obtain key diagnostic information during the clinical interview; unfamiliar concepts; and vague or ambiguous criteria. The first two issues are related to the quality of the teaching of DSM-III. The third source of reduced reliability reflects unsolved validity issues. Using the classification of five affective case stories as examples, these sources of diagnostic pitfalls reducing reliability, and ways to overcome them when teaching the DSM-III system, are discussed. It is concluded that the DSM-III system of classification is easy to teach and that, from a reliability point of view, it is superior to the other classification systems available. The current version of the DSM-III system, however, partly owes its high degree of reliability to broad and heterogeneous diagnostic categories such as the concept of major depression, which may have questionable validity. Thus, future revisions of the DSM-III system should, above all, address the issue of validity.

  3. How to harvest efficient laser from solar light

    NASA Astrophysics Data System (ADS)

    Zhao, Changming; Guan, Zhe; Zhang, Haiyang

    2018-02-01

    Solar-pumped solid-state lasers (SPSSL) are solid-state lasers that transform sunlight directly into laser output. Their advantages include the fewest energy-conversion steps, higher energy-conversion efficiency, simpler structure, higher reliability, and longer lifetime, which makes them suitable for unmanned space systems, where sunlight is the only available source of energy. In order to increase the output power and improve the efficiency of SPSSL, we conducted intensive studies on the selection of laser materials suited to solar pumping, a high-efficiency, large-aperture focusing optical system, the optimization of a concave cavity as the second focusing stage, and laser material bonding and surface processing. Using a bonded and grooved Nd:YAG rod as the laser material, a large-aperture Fresnel lens as the first-stage focusing element, and a concave cavity as the second-stage focusing element, we obtained a collection efficiency of 32.1 W/m2, the highest collection efficiency reported to date.

  4. Influence of finishing systems on hydrophilic and lipophilic oxygen radical absorbance capacity (ORAC) in beef.

    PubMed

    Wu, C; Duckett, S K; Neel, J P S; Fontenot, J P; Clapham, W M

    2008-11-01

    The aim of this research was to: (1) develop a reliable extraction procedure and assay to determine antioxidant activity in meat products, and (2) assess the effect of beef finishing system (forage-finished: alfalfa, pearl millet or mixed pastures vs. concentrate-finished) on longissimus muscle antioxidant activity. The effect of extraction method (ethanol concentration and extraction time), protein removal, and sample preparation method (pulverization or freeze drying) were first evaluated to develop an antioxidant assay for meat products. Beef extracts prepared with low ethanol concentrations (20%) demonstrated higher hydrophilic ORAC. Protein removal prior to extraction reduced hydrophilic ORAC values. Sample preparation method influenced both hydrophilic and lipophilic ORAC, with pulverized samples containing higher hydrophilic and lipophilic ORAC values. Beef cattle finishing system (Forage: alfalfa, pearl millet, or natural pasture vs. concentrates) had little impact on muscle hydrophilic ORAC, but muscle from forage finished beef contained greater lipophilic ORAC. In addition, broiling of steaks reduced hydrophilic ORAC.

  5. Validity of the Family Asthma Management System Scale with an Urban African-American Sample

    PubMed Central

    Klinnert, Mary D.; Holsey, Chanda Nicole; McQuaid, Elizabeth L.

    2011-01-01

    Objective To examine the reliability and validity of the Family Asthma Management System Scale for low-income African-American children with poor asthma control and caregivers under stress. The FAMSS assesses eight aspects of asthma management from a family systems perspective. Methods Forty-three children, ages 8–13, and caregivers were interviewed with the FAMSS; caregivers completed measures of primary care quality, family functioning, parenting stress, and psychological distress. Children rated their relatedness with the caregiver, and demonstrated inhaler technique. Medical records were reviewed for dates of outpatient visits for asthma. Results The FAMSS demonstrated good internal consistency. Higher scores were associated with adequate inhaler technique, recent outpatient care, less parenting stress and better family functioning. Higher scores on the Collaborative Relationship with Provider subscale were associated with greater perceived primary care quality. Conclusions The FAMSS demonstrated relevant associations with asthma management criteria and family functioning for a low-income, African-American sample. PMID:19776230

  6. Rear-end vision-based collision detection system for motorcyclists

    NASA Astrophysics Data System (ADS)

    Muzammel, Muhammad; Yusoff, Mohd Zuki; Meriaudeau, Fabrice

    2017-05-01

    In many countries, the motorcyclist fatality rate is much higher than that of other vehicle drivers. Among many other factors, motorcycle rear-end collisions also contribute to these rider fatalities. To increase the safety of motorcyclists and minimize their road fatalities, this paper introduces a vision-based rear-end collision detection system. The binary road detection scheme contributes significantly to reducing false detections and helps to achieve reliable results even when shadows and different lane markers are present on the road. The methodology is based on Harris corner detection and the Hough transform. To validate this methodology, two types of dataset are used: (1) self-recorded datasets (obtained by placing a camera at the rear end of a motorcycle) and (2) online datasets (recorded by placing a camera at the front of a car). The method achieved 95.1% accuracy for the self-recorded dataset and gives reliable results for rear-end vehicle detection under different road scenarios. The technique also performs well on the online car datasets. The proposed technique's high detection accuracy using a monocular vision camera, coupled with its low computational complexity, makes it a suitable candidate for a motorbike rear-end collision detection system.

  7. High-reliability microcontroller nerve stimulator for assistance in regional anaesthesia procedures.

    PubMed

    Ferri, Carlos A; Quevedo, Antonio A F

    2017-07-01

    In recent decades, the use of nerve stimulators to aid in regional anaesthesia has been shown to benefit the patient, since it allows better location of the nerve plexus, leading to correct positioning of the needle through which the anaesthetic is applied. However, most of the nerve stimulators available on the market for this purpose do not have the minimum recommended features of a good stimulator, and this can lead to risks to the patient. Thus, this study aims to develop a device, using embedded electronics, that meets all the characteristics required for a successful blockade. The system is made of modules for the generation and overall control of the current pulse and for the patient and user interfaces. The results show that the designed system meets the required specifications for a good and reliable nerve stimulator. Linearity proved satisfactory, ensuring accuracy of the electrical current amplitude over a wide range of body impedances. Field tests have proven very successful. The anaesthesiologist who used the system reported that, in all cases, plexus blocking was achieved with higher quality and faster anaesthetic diffusion, and without the need for an additional dose, when compared with the same procedure without the use of the device.

  8. Space Transportation System Availability Requirement and Its Influencing Attributes Relationships

    NASA Technical Reports Server (NTRS)

    Rhodes, Russell E.; Adams, Timothy C.; McCleskey, Carey M.

    2008-01-01

    It is important that engineering and management accept the need for an availability requirement that is derived together with its influencing attributes. The intent of this paper is to provide visibility into the relationships of these major attribute drivers (variables) to each other and to the resulting system inherent availability. It is also important to bound these variables, giving engineering the insight required to control the system's engineering solution; in effect, the influencing attributes become design requirements themselves. These variables drive the need to integrate similar discipline functions or to select technologies that allow control of the total parts count. Selecting a reliability requirement places a constraint on parts count for a given availability requirement; conversely, if the parts count is allowed to increase, the system reliability requirement is driven higher. These relationships also provide an understanding of how mean repair time (or mean down time) relates to maintainability (e.g., accessibility for repair), and of how mean time between failures (i.e., hardware reliability) relates to availability. The importance of achieving a strong availability requirement is driven by the need for affordability, by the choice of a two-launch solution for a single space application, and by the need to control the spare parts count required to support a long stay in orbit or on the surface of the Moon. Understanding these requirements before starting the architectural design concept avoids the considerable time and money required to iterate the design through the redesign and assessment process needed to achieve the results required of the customer's space transportation system. In fact, the schedule impact on delivering a system that meets the customer's needs, goals, and objectives may force the customer to compromise the desired operational goals and objectives, resulting in considerably increased life-cycle cost of the fielded space transportation system.
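
    A minimal sketch of the relationships described above, with illustrative numbers rather than values from the paper: inherent availability as a function of MTBF and mean down time, where system MTBF is in turn driven by total parts count for constant-failure-rate parts in series.

```python
def inherent_availability(mtbf_hours, mdt_hours):
    """Inherent availability = MTBF / (MTBF + mean down time)."""
    return mtbf_hours / (mtbf_hours + mdt_hours)

def system_mtbf(part_failure_rate_per_hour, parts_count):
    """Constant-failure-rate parts in series: rates add, MTBF = 1 / total rate."""
    return 1.0 / (part_failure_rate_per_hour * parts_count)

for parts in (10_000, 20_000, 40_000):            # assumed parts counts
    mtbf = system_mtbf(1e-7, parts)                # assumed per-part failure rate
    print(f"{parts:>6} parts: MTBF {mtbf:6.0f} h, "
          f"availability {inherent_availability(mtbf, mdt_hours=24):.4f}")
```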

  10. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  11. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
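
    A toy version of the idea, not NASA's model: with a constant failure rate for the propulsion system, the probability of completing each successive orbit correction is the reliability evaluated at the cumulative operating time through that burn. The rate and burn times below are assumptions.

```python
import math

failure_rate_per_hour = 2e-5                         # assumed constant failure rate
burn_end_times_hours = [100, 800, 2000, 4000, 8000]  # assumed cumulative mission times

for k, t in enumerate(burn_end_times_hours, start=1):
    reliability = math.exp(-failure_rate_per_hour * t)
    print(f"correction {k}: P(success through this burn) = {reliability:.4f}")
```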

  12. Radiation tolerant power converter controls

    NASA Astrophysics Data System (ADS)

    Todd, B.; Dinius, A.; King, Q.; Uznanski, S.

    2012-11-01

    The Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) is the world's most powerful particle collider. The LHC has several thousand magnets, both warm and super-conducting, which are supplied with current by power converters. Each converter is controlled by a purpose-built electronic module called a Function Generator Controller (FGC). The FGC allows remote control of the power converter and forms the central part of a closed-loop control system where the power converter voltage is set, based on the converter output current and magnet-circuit characteristics. Some power converters and FGCs are located in areas which are exposed to beam-induced radiation. There are numerous radiation induced effects, some of which lead to a loss of control of the power converter, having a direct impact upon the accelerator's availability. Following the first long shut down (LS1), the LHC will be able to run with higher intensity beams and higher beam energy. This is expected to lead to significantly increased radiation induced effects in materials close to the accelerator, including the FGC. Recent radiation tests indicate that the current FGC would not be sufficiently reliable. A so-called FGClite is being designed to work reliably in the radiation environment in the post-LS1 era. This paper outlines the concepts of power converter controls for machines such as the LHC, introduces the risks related to radiation and a radiation tolerant project flow. The FGClite is then described, with its key concepts and challenges: aiming for high reliability in a radiation field.

  13. Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System

    PubMed Central

    Fox, Mark; Papadopoulos, Amy; Crump, Cindy

    2013-01-01

    Abstract Objective: The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems. PMID:23611640

  14. Surgeon Reliability for the Assessment of Lumbar Spinal Stenosis on MRI: The Impact of Surgeon Experience.

    PubMed

    Marawar, Satyajit V; Madom, Ian A; Palumbo, Mark; Tallarico, Richard A; Ordway, Nathaniel R; Metkar, Umesh; Wang, Dongliang; Green, Adam; Lavelle, William F

    2017-01-01

    The treating surgeon's visual assessment of axial MRI images to ascertain the degree of stenosis has a critical impact on surgical decision-making. The purpose of this study was to prospectively analyze the impact of surgeon experience on the inter-observer and intra-observer reliability of assessing the severity of spinal stenosis on MRIs by spine surgeons directly involved in surgical decision-making. Seven fellowship-trained spine surgeons reviewed MRI studies of 30 symptomatic patients with lumbar stenosis and graded the stenosis in the central canal, the lateral recess and the foramen at T12-L1 to L5-S1 as none, mild, moderate or severe. No specific instructions were provided as to what constituted mild, moderate, or severe stenosis. Two surgeons were "senior" (>15 years of practice experience), two were "intermediate" (>4 years of practice experience), and three were "junior" (<1 year of practice experience). The concordance correlation coefficient (CCC) was calculated to assess inter-observer reliability. Seven MRI studies were duplicated and randomly re-read to evaluate intra-observer reliability. Surgeon experience was found to be a strong predictor of inter-observer reliability. Senior inter-observer reliability was significantly higher than that of the "junior" group when assessing central (p<0.001), foraminal (p=0.005) and lateral (p=0.001) stenosis. The senior group also showed significantly higher inter-observer reliability than the intermediate group when assessing foraminal stenosis (p=0.036). For intra-observer reliability, the results were contrary to those found for inter-observer reliability. Inter-observer reliability of assessing stenosis on MRIs increases with surgeon experience. The lower intra-observer reliability values among the senior group, although not clearly explained, may be due to the small number of MRIs evaluated and the quality of the MRI images. Level of evidence: Level 3.
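
    Lin's concordance correlation coefficient for a pair of raters can be sketched as below; the multi-rater CCC used in the study generalizes this pairwise form. The grades are invented (0 = none, 1 = mild, 2 = moderate, 3 = severe).

```python
import numpy as np

def lin_ccc(x, y):
    """Pairwise concordance correlation coefficient."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population (biased) variances
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

rater_a = [2, 3, 1, 0, 2, 3, 1, 2]
rater_b = [2, 2, 1, 1, 2, 3, 0, 2]
print(round(lin_ccc(rater_a, rater_b), 3))
```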

  15. Achieving Reliable Communication in Dynamic Emergency Responses

    PubMed Central

    Chipara, Octav; Plymoth, Anders N.; Liu, Fang; Huang, Ricky; Evans, Brian; Johansson, Per; Rao, Ramesh; Griswold, William G.

    2011-01-01

    Emergency responses require the coordination of first responders to assess the condition of victims, stabilize their condition, and transport them to hospitals based on the severity of their injuries. WIISARD is a system designed to facilitate the collection of medical information and its reliable dissemination during emergency responses. A key challenge in WIISARD is to deliver data with high reliability as first responders move and operate in a dynamic radio environment fraught with frequent network disconnections. The initial WIISARD system employed a client-server architecture and an ad-hoc routing protocol was used to exchange data. The system had low reliability when deployed during emergency drills. In this paper, we identify the underlying causes of unreliability and propose a novel peer-to-peer architecture that in combination with a gossip-based communication protocol achieves high reliability. Empirical studies show that compared to the initial WIISARD system, the redesigned system improves reliability by as much as 37% while reducing the number of transmitted packets by 23%. PMID:22195075

  16. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
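
    A brief sketch contrasting a decreasing-hazard Weibull reliability model (shape < 1) with the constant-failure-rate (exponential) model, in the spirit of the comparison described above; the parameters are illustrative only.

```python
import math

def weibull_reliability(t, shape, scale):
    return math.exp(-((t / scale) ** shape))

def exponential_reliability(t, rate):
    return math.exp(-rate * t)

scale_hours, shape = 100_000.0, 0.8        # assumed Weibull parameters
rate = 1.0 / scale_hours                   # exponential model with matching scale

for t in (1_000, 10_000, 50_000):
    print(f"t={t:>6} h  Weibull R={weibull_reliability(t, shape, scale_hours):.4f}  "
          f"exponential R={exponential_reliability(t, rate):.4f}")
```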

  17. The vital role of manufacturing quality in the reliability of PV modules

    NASA Astrophysics Data System (ADS)

    Rusch, Peter

    2014-10-01

    The influence of manufacturing quality on the reliability of PV modules coming out of today's factories has been, and still is, underestimated among investors and buyers. The main reason is perception. Contrary to popular belief, PV modules are not a commodity. Module quality does differ among module brands. Certification alone does not guarantee the quality or reliability of a module. Cost reductions in manufacturing have unequivocally affected module quality. And the use of new, cheaper materials has had a measurable impact on module reliability. The need for meaningful manufacturing quality standards has been understood by the leading technical institutions and important industry players. The fact that most leading PV panel manufacturers have been certified according to ISO 9001 has led to some level of improvement and higher effectiveness. The new ISO 9001 PV QMS standards will be a major step in providing a tool to assess PV manufacturers' quality management systems. The current lack of sufficient standards still has a negative influence on the quality of modules being installed today. Today every manufacturer builds its modules in its own way, with little standardization or adherence to the quality processes and methods that are commonplace in other manufacturing industries. Although photovoltaic technology is to a great extent mature, the way modules are produced has changed significantly over the past few years, and it continues to change at a rapid pace. Investors, financiers and lenders stand the most to gain from PV systems over the long term, but also the most to lose. Investors, developers, EPC, O&M and solar asset management companies must all manage manufacturing quality more proactively or they will face unexpected risks and failures down the road. Manufacturing quality deserves more transparency and attention, as it is a major driver of module performance and reliability. This paper explains the benefits of good manufacturing quality and the dangers of poor manufacturing quality. The paper also explains why buyers and long-term investors need to pay close attention to the day-to-day manufacturing quality of module manufacturers. We demonstrate how these quality risks can be assessed and mitigated by independent diligence, professional contracting and smart quality assurance processes that can easily be built into any module procurement process. We highlight the steps to ensure that every module used in a PV system is built to quality standards that support the long-term reliability of a PV system.

  18. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
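
    A small conjugate-Bayes sketch of the kind of rate estimation such a course covers (for example, a leak or failure rate): a Gamma prior on the rate updated by a Poisson count likelihood. The prior parameters and observed data are assumptions, not figures from the proceedings.

```python
# Gamma(alpha, beta) prior on an event rate; Poisson likelihood for the counts.
alpha_prior, beta_prior = 2.0, 1000.0        # prior: ~2 events per 1000 exposure-hours
events_observed, exposure_hours = 3, 5200.0  # hypothetical observations

alpha_post = alpha_prior + events_observed
beta_post = beta_prior + exposure_hours

print(f"posterior: Gamma(alpha={alpha_post}, beta={beta_post})")
print(f"posterior mean rate: {alpha_post / beta_post:.5f} events/hour")
```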

  19. The Typical General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    Turnbull, Andrew

    1999-01-01

    The reliability of General Aviation aircraft is unknown. In order to "assist the development of future GA reliability and safety requirements", a reliability study needs to be performed. Before any studies on General Aviation aircraft reliability begins, a definition of a typical aircraft that encompasses most of the general aviation characteristics needs to be defined. In this report, not only is the typical general aviation aircraft defined for the purpose of the follow-on reliability study, but it is also separated, or "sifted" into several different categories where individual analysis can be performed on the reasonably independent systems. In this study, the typical General Aviation aircraft is a four-place, single engine piston, all aluminum fixed-wing certified aircraft with a fixed tricycle landing gear and a cable operated flight control system. The system breakdown of a GA aircraft "sifts" the aircraft systems and components into five categories: Powerplant, Airframe, Aircraft Control Systems, Cockpit Instrumentation Systems, and the Electrical Systems. This breakdown was performed along the lines of a failure of the system. Any component that caused a system to fail was considered a part of that system.

  20. Verification of Triple Modular Redundancy (TMR) Insertion for Reliable and Trusted Systems

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    We propose a method for TMR insertion verification that satisfies the process for reliable and trusted systems. If a system is expected to be protected using TMR, improper insertion can jeopardize the reliability and security of the system. Due to the complexity of the verification process, there are currently no available techniques that can provide complete and reliable confirmation of TMR insertion. This manuscript addresses the challenge of confirming that TMR has been inserted without corruption of functionality and with correct application of the expected TMR topology. The proposed verification method combines the usage of existing formal analysis tools with a novel search-detect-and-verify tool. Keywords: Field Programmable Gate Array (FPGA), Triple Modular Redundancy (TMR), Verification, Trust, Reliability.
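
    The structure that such verification must confirm is three redundant copies of a function feeding a majority voter. The sketch below is a generic bit-wise 2-of-3 voter for illustration, not the proposed search-detect-and-verify tool.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bit-wise 2-of-3 majority: masks an upset in any single copy."""
    return (a & b) | (a & c) | (b & c)

# one copy corrupted by a bit flip; the voter output is still correct
copy_a, copy_b, copy_c = 0b1011, 0b1111, 0b1011
print(bin(majority_vote(copy_a, copy_b, copy_c)))   # 0b1011
```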

  1. Demand Response For Power System Reliability: FAQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirby, Brendan J

    2006-12-01

    Demand response is the most underutilized power system reliability resource in North America. Technological advances now make it possible to tap this resource to both reduce costs and improve reliability. Misconceptions concerning response capabilities tend to force loads to provide responses that they are less able to provide and often prohibit them from providing the most valuable reliability services. Fortunately this is beginning to change, with some ISOs making more extensive use of load response. This report is structured as a series of short questions and answers that address load response capabilities and power system reliability needs. Its objective is to further the use of responsive load as a bulk power system reliability resource in providing the fastest and most valuable ancillary services.

  2. PEM-INST-001: Instructions for Plastic Encapsulated Microcircuit (PEM) Selection, Screening, and Qualification

    NASA Technical Reports Server (NTRS)

    Teverovsky, Alexander; Sahu, Kusum

    2003-01-01

    Potential users of plastic encapsulated microcircuits (PEMs) need to be reminded that, unlike the military system of producing robust high-reliability microcircuits that are designed to perform acceptably in a variety of harsh environments, PEMs are primarily designed for use in benign environments where equipment is easily accessed for repair or replacement. The methods of analysis applied to military products to demonstrate high reliability cannot always be applied to PEMs. This makes it difficult for users to characterize PEMs for two reasons: 1. Due to the major differences in design and construction, the standard test practices used to ensure that military devices are robust and have high reliability often cannot be applied to PEMs, which have a smaller operating temperature range and are typically more fragile and susceptible to moisture absorption. In contrast, high-reliability military microcircuits usually utilize large, robust, high-temperature packages that are hermetically sealed. 2. Unlike the military high-reliability system, users of PEMs have little visibility into commercial manufacturers' proprietary designs, materials, die traceability, and production processes and procedures. There is no central authority that monitors commercial PEM product for quality, and there are no controls in place that can be imposed across all commercial manufacturers to provide confidence to high-reliability users that a common acceptable level of quality exists for all PEM manufacturers. Consequently, there is no guaranteed control over the type of reliability that is built into commercial product, and there is no guarantee that different lots from the same manufacturer are equally acceptable. And regarding application, there is no guarantee that commercial products intended for use in benign environments will provide acceptable performance and reliability in harsh space environments. The qualification and screening processes contained in this document are intended to detect poor-quality lots and screen out early random failures from use in space flight hardware. However, since it cannot be guaranteed that quality appropriate for space applications was designed and built into PEMs, users cannot screen in quality that may not exist. It must be understood that, due to the variety of materials, processes, and technologies used to design and produce PEMs, this test process may not accelerate and detect all failure mechanisms. While the tests herein will increase user confidence that PEMs with otherwise unknown reliability can be used in space environments, such testing may not guarantee the same level of reliability offered by military microcircuits. PEMs should only be used where, due to performance needs, there are no alternatives in the military high-reliability market, and where projects are willing to accept higher risk.

  3. Inter-rater reliability of PATH observations for assessment of ergonomic risk factors in hospital work.

    PubMed

    Park, Jung-Keun; Boyer, Jon; Tessler, Jamie; Casey, Jeffrey; Schemm, Linda; Gore, Rebecca; Punnett, Laura

    2009-07-01

    This study examined the inter-rater reliability of expert observations of ergonomic risk factors by four analysts. Ten jobs were observed at a hospital using a newly expanded version of the PATH method (Buchholz et al. 1996), to which selected upper extremity exposures had been added. Two of the four raters simultaneously observed each worker onsite for a total of 443 observation pairs containing 18 categorical exposure items each. For most exposure items, kappa coefficients were 0.4 or higher. For some items, agreement was higher both for the jobs with less rapid hand activity and for the analysts with a higher level of ergonomic job analysis experience. These upper extremity exposures could be characterised reliably with real-time observation, given adequate experience and training of the observers. The revised version of PATH is applicable to the analysis of jobs where upper extremity musculoskeletal strain is of concern.

  4. An electrophysiological study of the impact of a Forward Collision Warning System in a simulator driving task.

    PubMed

    Bueno, Mercedes; Fabrigoule, Colette; Deleurence, Philippe; Ndiaye, Daniel; Fort, Alexandra

    2012-08-27

    Driver distraction has been identified as the most important contributing factor in rear-end collisions. In this context, Forward Collision Warning Systems (FCWS) have been developed specifically to warn drivers of potential rear-end collisions. The main objective of this work is to evaluate the impact of a surrogate FCWS and of its reliability according to the driver's attentional state by recording both behavioral and electrophysiological data. Participants drove following a lead motorcycle in a simplified simulator with or without a warning system which gave forewarning of the preceding vehicle braking. Participants had to perform this driving task either alone (simple task) or simultaneously with a secondary cognitive task (dual task). Behavioral and electrophysiological data contributed to revealing a positive effect of the warning system. Participants were faster in detecting the brake light when the system was perfect or imperfect, and the time and attentional resources allocation required for processing the target at higher cognitive level were reduced when the system was completely reliable. When both tasks were performed simultaneously, warning effectiveness was considerably affected at both performance and neural levels; however, the analysis of the brain activity revealed fewer differences between distracted and undistracted drivers when using the warning system. These results show that electrophysiological data could be a valuable tool to complement behavioral data and to have a better understanding of how these systems impact the driver. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. HAL/S - The programming language for Shuttle

    NASA Technical Reports Server (NTRS)

    Martin, F. H.

    1974-01-01

    HAL/S is a higher order language and system, now operational, adopted by NASA for programming Space Shuttle on-board software. Program reliability is enhanced through language clarity and readability, modularity through program structure, and protection of code and data. Salient features of HAL/S include output orientation, automatic checking (with strictly enforced compiler rules), the availability of linear algebra, real-time control, a statement-level simulator, and compiler transferability (for applying HAL/S to additional object and host computers). The compiler is described briefly.

  6. Realistic Specific Power Expectations for Advanced Radioisotope Power Systems

    NASA Technical Reports Server (NTRS)

    Mason, Lee S.

    2006-01-01

    Radioisotope Power Systems (RPS) are being considered for a wide range of future NASA space science and exploration missions. Generally, RPS offer the advantages of high reliability, long life, and predictable power production regardless of operating environment. Previous RPS, in the form of Radioisotope Thermoelectric Generators (RTG), have been used successfully on many NASA missions including Apollo, Viking, Voyager, and Galileo. NASA is currently evaluating design options for the next generation of RPS. Of particular interest is the use of advanced, higher efficiency power conversion to replace the previous thermoelectric devices. Higher efficiency reduces the quantity of radioisotope fuel and potentially improves the RPS specific power (watts per kilogram). Power conversion options include Segmented Thermoelectric (STE), Stirling, Brayton, and Thermophotovoltaic (TPV). This paper offers an analysis of the advanced 100 watt-class RPS options and provides credible projections for specific power. Based on the analysis presented, RPS specific power values greater than 10 W/kg appear unlikely.

  7. Systems Issues In Terrestrial Fiber Optic Link Reliability

    NASA Astrophysics Data System (ADS)

    Spencer, James L.; Lewin, Barry R.; Lee, T. Frank S.

    1990-01-01

    This paper reviews fiber optic system reliability issues from three different viewpoints - availability, operating environment, and evolving technologies. Present availability objectives for interoffice links and for the distribution loop must be re-examined for applications such as the Synchronous Optical Network (SONET), Fiber-to-the-Home (FTTH), and analog services. The hostile operating environments of emerging applications (such as FTTH) must be carefully considered in system design as well as reliability assessments. Finally, evolving technologies might require the development of new reliability testing strategies.

  8. Reliability Analysis of the MSC System

    NASA Astrophysics Data System (ADS)

    Kim, Young-Soo; Lee, Do-Kyoung; Lee, Chang-Ho; Woo, Sun-Hee

    2003-09-01

    MSC (Multi-Spectral Camera) is the payload of KOMPSAT-2, which is being developed for earth imaging in the optical and near-infrared regions. The design of the MSC is complete, and its reliability has been assessed from the part level up to the MSC system level. The reliability was analyzed for the worst case, and the analysis showed that the value complies with the required value of 0.9. In this paper, a method for calculating the reliability of the MSC system is described, and the assessment results are presented and discussed.
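
    A hedged sketch of a part-to-system reliability roll-up of the kind described above: subsystem reliabilities (assumed values, not MSC data) combined in series and checked against the 0.9 requirement.

```python
from math import prod

subsystem_reliability = {           # illustrative worst-case subsystem values
    "optics": 0.985, "detector": 0.975, "electronics": 0.968, "thermal": 0.990,
}
system_reliability = prod(subsystem_reliability.values())
verdict = "meets" if system_reliability >= 0.9 else "misses"
print(f"worst-case system reliability: {system_reliability:.3f} ({verdict} the 0.9 requirement)")
```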

  9. Analytical Assessment of the Reciprocating Feed System

    NASA Technical Reports Server (NTRS)

    Eddleman, David E.; Blackmon, James B.; Morton, Christopher D.

    2006-01-01

    A preliminary analysis tool has been created in Microsoft Excel to determine deliverable payload mass, total system mass, and performance of spacecraft systems using various types of propellant feed systems. These mass estimates are conducted by inserting into the user interface the basic mission parameters (e.g., thrust, burn time, specific impulse, mixture ratio, etc.), system architecture (e.g., propulsion system type and characteristics, propellants, pressurization system type, etc.), and design properties (e.g., material properties, safety factors, etc.). Different propellant feed and pressurization systems are available for comparison in the program. This gives the user the ability to compare conventional pressure fed, reciprocating feed system (RFS), autogenous pressurization thrust augmentation (APTA RFS), and turbopump systems with the deliverable payload, inert mass, and total system mass being the primary comparison metrics. Analyses of several types of missions and spacecraft were conducted and it was found that the RFS offers a performance improvement, especially in terms of delivered payload, over conventional pressure fed systems. Furthermore, it is competitive with a turbopump system at low to moderate chamber pressures, up to approximately 1,500 psi. Various example cases estimating the system mass and deliverable payload of several types of spacecraft are presented that illustrate the potential system performance advantages of the RFS. In addition, a reliability assessment of the RFS was conducted, comparing it to simplified conventional pressure fed and turbopump systems, based on MIL-STD 756B; these results showed that the RFS offers higher reliability, and thus substantially longer periods between system refurbishment, than turbopump systems, and is competitive with conventional pressure fed systems. This is primarily the result of the intrinsic RFS fail-operational capability with three run tanks, since the system can operate with just two run tanks.
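
    A sketch of why the fail-operational run-tank arrangement helps (illustrative only, not the MIL-STD-756B models used in the paper): a system that still operates with two of its three run tanks compared with a single-string element.

```python
from math import comb

def k_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent elements survive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_tank = 0.98                          # assumed per-tank mission reliability
print(f"single tank     : {p_tank:.4f}")
print(f"2-of-3 run tanks: {k_of_n_reliability(2, 3, p_tank):.6f}")
```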

  10. Reliability of the one-crossing approximation in describing the Mott transition

    NASA Astrophysics Data System (ADS)

    Vildosola, V.; Pourovskii, L. V.; Manuel, L. O.; Roura-Bas, P.

    2015-12-01

    We assess the reliability of the one-crossing approximation (OCA) approach in a quantitative description of the Mott transition in the framework of the dynamical mean field theory (DMFT). The OCA approach has been applied in conjunction with DMFT to a number of heavy-fermion, actinide, transition metal compounds and nanoscale systems. However, several recent studies in the framework of impurity models pointed out serious deficiencies of OCA and raised questions regarding its reliability. Here we consider a single band Hubbard model on the Bethe lattice at finite temperatures and compare the results of OCA to those of a numerically exact quantum Monte Carlo (QMC) method. The temperature-local repulsion U phase diagram for the particle-hole symmetric case obtained by OCA is in good agreement with that of QMC, with the metal-insulator transition captured very well. We find, however, that the insulator to metal transition is shifted to higher values of U and, simultaneously, correlations in the metallic phase are significantly overestimated. This counter-intuitive behaviour is due to simultaneous underestimations of the Kondo scale in the metallic phase and the size of the insulating gap. We trace the underestimation of the insulating gap to that of the second moment of the high-frequency expansion of the impurity spectral density. Calculations of the system away from the particle-hole symmetric case are also presented and discussed.

  11. Upper and lower bounds for semi-Markov reliability models of reconfigurable systems

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    This paper determines the information required about system recovery to compute the reliability of a class of reconfigurable systems. Upper and lower bounds are derived for these systems. The class consists of those systems that satisfy five assumptions: the components fail independently at a low constant rate, fault occurrence and system reconfiguration are independent processes, the reliability model is semi-Markov, the recovery functions which describe system configuration have small means and variances, and the system is well designed. The bounds are easy to compute, and examples are included.
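
    White's actual upper and lower bounds are not reproduced in the abstract. As a rough, hypothetical illustration of the kind of quantity such bounds are built from, the sketch below estimates the probability that a second, near-coincident fault arrives before a recovery with small mean and variance completes, under constant component failure rates; the gamma recovery distribution, rates, and moments are assumptions made only for this example.

```python
import math, random

# Hypothetical system: n identical components with constant failure rate lam each,
# and a recovery (reconfiguration) time D with a small mean and variance.
n = 5
lam = 1e-4            # failures per hour, per component
mean_D = 0.5          # hours, mean recovery time
var_D = 0.05          # hours^2, recovery-time variance

# First-order estimate: during a recovery, the remaining n-1 components fail at
# aggregate rate (n-1)*lam, so P(near-coincident fault) ~= (n-1)*lam*E[D]
# whenever that product is small.
p_first_order = (n - 1) * lam * mean_D

# Cross-check by simulation, drawing D from a gamma distribution matched to the
# assumed mean and variance (the true recovery distribution is unknown here).
shape = mean_D**2 / var_D
scale = var_D / mean_D
random.seed(0)
trials = 200_000
hits = sum(random.expovariate((n - 1) * lam) < random.gammavariate(shape, scale)
           for _ in range(trials))
print(f"first-order estimate : {p_first_order:.2e}")
print(f"simulated estimate   : {hits / trials:.2e}")
```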

  12. Measurement of stain on extracted teeth using spectrophotometry and digital image analysis.

    PubMed

    Lath, D L; Smith, R N; Guan, Y H; Karmo, M; Brook, A H

    2007-08-01

    The aim of this study was to assess the reliability of, and to validate, a customized image analysis system designed for use within clinical trials of general dental hygiene and whitening products, for the measurement of stain levels on extracted teeth, and to compare it with reflectance spectrophotometry. Twenty non-carious extracted teeth were soaked in an artificial saliva, brushed for 1 min using an electric toothbrush and a standard toothpaste, bleached using a 5.3% hydrogen peroxide solution and cycled for 6 h daily through a tea solution. CIE L* values were obtained after each treatment step using the customized image analysis system and a reflectance spectrophotometer. A statistical analysis was carried out in SPSS. Fleiss' coefficient of reliability for intra-operator repeatability of the image analysis system and spectrophotometry was 0.996 and 0.946 respectively. CIE L* values were consistently higher using the image analysis compared with spectrophotometry, and t-tests for each treatment step showed significant differences (P < 0.05) between the two methods. Limits of agreement between the methods were -27.95 to +2.07, with the 95% confidence interval of the mean difference calculated as -14.26 to -11.84. The combined results for all treatment steps showed a significant difference between the methods for the CIE L* values (P < 0.05). The image analysis system has proven to be a reliable method for assessment of changes in stain level on extracted teeth. The method has been validated against reflectance spectrophotometry. This method may be used for pilot in vitro studies/trials of oral hygiene and whitening products, before expensive in vivo tests are carried out.
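
    The limits-of-agreement comparison reported here follows Bland and Altman's method: mean difference between the two methods plus or minus 1.96 standard deviations of the differences. A minimal sketch is shown below; the paired CIE L* readings are made up for illustration, since the study's raw data are not given in the abstract.

```python
import numpy as np

# Hypothetical paired CIE L* readings for the same teeth from the two methods.
image_L   = np.array([72.1, 65.4, 80.2, 70.8, 68.9, 77.3, 74.0, 69.5])
spectro_L = np.array([60.3, 52.8, 66.5, 58.9, 55.1, 63.0, 61.2, 57.6])

diff = image_L - spectro_L
mean_diff = diff.mean()
sd_diff = diff.std(ddof=1)

# Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD of the differences.
loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff
# 95% confidence interval for the mean difference itself.
se = sd_diff / np.sqrt(len(diff))
ci_low, ci_high = mean_diff - 1.96 * se, mean_diff + 1.96 * se

print(f"mean difference      : {mean_diff:+.2f} L* units")
print(f"limits of agreement  : {loa_low:+.2f} to {loa_high:+.2f}")
print(f"95% CI of mean diff. : {ci_low:+.2f} to {ci_high:+.2f}")
```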

  13. Enabling aspects of fiber optic acoustic sensing in harsh environments

    NASA Astrophysics Data System (ADS)

    Saxena, Indu F.

    2013-05-01

    The advantages of optical fiber sensors in harsh electromagnetic and physical stress environments make them uniquely suited for structural health monitoring and non-destructive testing. In addition to aerospace applications, they are making a strong footprint in geophysical monitoring and exploration applications in higher temperature and pressure environments, owing to the high-temperature resilience of fused silica glass sensors. Deeper oil searches and geothermal exploration and harvesting are possible with these novel capabilities. Progress in the components and technologies that are enabling these systems to become fieldworthy is reviewed, and emerging techniques that could leapfrog system performance and reliability are summarized.

  14. Large-scale-system effectiveness analysis. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, A.D.; Ayoub, A.K.; Foster, J.W.

    1979-11-01

    The objective of the research project has been the investigation and development of methods for calculating system reliability indices that have absolute, measurable significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization that includes the economic consequences of consumer service interruptions. A further area of investigation has been the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.
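
    The report itself is not reproduced here, but the kind of Monte Carlo generating-system adequacy calculation it mentions can be sketched as follows: sample the availability of each generating unit, compare available capacity with the load, and estimate a loss-of-load probability. The unit list, forced outage rates, and peak load below are hypothetical.

```python
import random

# Hypothetical generating units: (capacity in MW, forced outage rate).
units = [(400, 0.06), (400, 0.06), (250, 0.05), (250, 0.05),
         (150, 0.04), (150, 0.04), (100, 0.03)]
peak_load_mw = 1200.0

def sampled_capacity(rng: random.Random) -> float:
    """Draw one system state: each unit is available unless it is forced out."""
    return sum(cap for cap, forced_out in units if rng.random() >= forced_out)

def loss_of_load_probability(trials: int = 100_000, seed: int = 1) -> float:
    """Fraction of sampled states in which available capacity falls short of the load."""
    rng = random.Random(seed)
    shortfalls = sum(sampled_capacity(rng) < peak_load_mw for _ in range(trials))
    return shortfalls / trials

print(f"estimated LOLP at peak load: {loss_of_load_probability():.4f}")
```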

  15. Multiprocessor switch with selective pairing

    DOEpatents

    Gara, Alan; Gschwind, Michael K; Salapura, Valentina

    2014-03-11

    System, method and computer program product for a multiprocessing system to offer selective pairing of processor cores for increased processing reliability. A selective pairing facility is provided that selectively connects, i.e., pairs, multiple microprocessor or processor cores to provide one highly reliable thread (or thread group). Each pair of microprocessor or processor cores that provides one highly reliable thread connects with system components such as a memory "nest" (or memory hierarchy), an optional system controller, an optional interrupt controller, and optional I/O or peripheral devices. The memory nest is attached to the selective pairing facility via a switch or a bus.

  16. Preliminary Modelling of Mass Flux at the Surface of Plant Leaves within the MELiSSA Higher Plant Compartments

    NASA Astrophysics Data System (ADS)

    Holmberg, Madeleine; Paille, Christel; Lasseur, Christophe

    The ESA project Micro Ecological Life Support System Alternative (MELiSSA) is an ecosystem of micro-organisms and higher plants, constructed with the objective of being operated as a tool to understand artificial ecosystems to be used for a long-term or permanent manned planetary base (e.g. Moon or Mars). The purpose of such a system is to provide for generation of food, water recycling, atmospheric regeneration and waste management within defined standards of quality and reliability. As MELiSSA consists of individual compartments which are connected to each other, the robustness of the system is fully dependent on the control of each compartment, as well as the flow management between them. Quality of consumables and reliability of the ecosystem rely on the knowledge, understanding and control of each of the components. This includes the full understanding of all processes related to the higher plants. To progress in that direction, this paper focuses on the mechanical processes driving the gas and liquid exchanges between the plant leaf and its environment. The process responsible for the mass transfer on the surface of plant leaves is diffusion. The diffusion flux is dependent on the behaviour of the stoma of the leaf and also on the leaf boundary layer (BL). In this paper, the physiology of the leaf is briefly examined in order to relate parameters such as light quality, light quantity, CO2 concentration, temperature, leaf water potential, humidity, vapour pressure deficit (VPD) gradients and pollutants to the opening or closing of stomata. The diffusion process is described theoretically and the description is compared to empirical approaches. The variables of the BL are examined and the effect airflow in the compartment has on the BL is investigated. Also presented is the impact changes in different environmental parameters may have on the fluid exchanges. Finally, some tests, to evaluate the accuracy of the concluded model, are suggested.

  17. Sensitivities and Tipping Points of Power System Operations to Fluctuations Caused by Water Availability and Fuel Prices

    NASA Astrophysics Data System (ADS)

    O'Connell, M.; Macknick, J.; Voisin, N.; Fu, T.

    2017-12-01

    The western US electric grid is highly dependent upon water resources for reliable operation. Hydropower and water-cooled thermoelectric technologies represent 67% of generating capacity in the western region of the US. While water resources provide a significant amount of generation and reliability for the grid, these same resources can represent vulnerabilities during times of drought or low flow conditions. A lack of water affects water-dependent technologies and can force more expensive generators to run in order to meet electric grid demand, resulting in higher electricity prices and a higher cost of operating the grid. A companion study assesses the impact of changes in water availability and air temperatures on power operations by directly derating hydro and thermo-electric generators. In this study we assess the sensitivities and tipping points of water availability compared with higher fuel prices in electricity sector operations. We evaluate the impacts of varying electricity prices by modifying fuel prices for coal and natural gas. We then analyze the difference in simulation results between changes in fuel prices in combination with water availability and air temperature variability. We simulate three fuel price scenarios for a 2010 baseline scenario along with 100 historical and future hydro-climate conditions. We use the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions under each combination of fuel price and water constraint. Some of the metrics evaluated are total production cost, generation type mix, emissions, transmission congestion, and reserve procurement. These metrics give insight into how strained the system is, how much flexibility it still has, and to what extent water resource availability or fuel prices drive changes in electricity sector operations. This work will provide insights into current electricity operations as well as future cases of increased penetration of variable renewable generation technologies such as wind and solar.

  18. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical component affecting it. A weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as the possibility of failure of bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and Tω (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. The paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the "automatic gun" weapon system is presented as an example, and the result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
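
    The full Tω-based intuitionistic machinery is beyond the scope of an abstract, but the underlying step, propagating interval-valued failure possibilities of bottom events through AND/OR gates at a chosen α-cut, can be sketched as below. The tree structure and intervals are hypothetical, and plain interval arithmetic is used rather than the paper's weakest-t-norm arithmetic or its membership/non-membership pairs.

```python
# Hypothetical bottom-event failure-possibility intervals at one alpha-cut.
basic = {
    "firing_pin_jam":   (0.002, 0.006),
    "feed_mechanism":   (0.004, 0.010),
    "electrical_fault": (0.001, 0.003),
}

def and_gate(intervals):
    """AND gate: all inputs must fail; interval product, increasing in each input."""
    lo = hi = 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

def or_gate(intervals):
    """OR gate: any input failing fails the gate; 1 - prod(1 - p) per bound."""
    lo = hi = 1.0
    for a, b in intervals:
        lo *= 1.0 - a
        hi *= 1.0 - b
    return 1.0 - lo, 1.0 - hi

# Hypothetical top event: malfunction = electrical fault OR (pin jam AND feed jam).
mech = and_gate([basic["firing_pin_jam"], basic["feed_mechanism"]])
top_fail = or_gate([basic["electrical_fault"], mech])
top_reliability = (1.0 - top_fail[1], 1.0 - top_fail[0])

print(f"top-event fault interval    : [{top_fail[0]:.6f}, {top_fail[1]:.6f}]")
print(f"system reliability interval : [{top_reliability[0]:.6f}, {top_reliability[1]:.6f}]")
```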

  19. Reliability and Normative Data for the Dynamic Visual Acuity Test for Vestibular Screening.

    PubMed

    Riska, Kristal M; Hall, Courtney D

    2016-06-01

    The purpose of this study was to determine the reliability of computerized dynamic visual acuity (DVA) testing and to determine reference values for younger and older adults. A primary function of the vestibular system is to maintain gaze stability during head motion. The DVA test quantifies gaze stabilization with the head moving versus stationary. Commercially available computerized systems allow clinicians to incorporate DVA into their assessment; however, information regarding the reliability and normative values of these systems is sparse. Forty-six healthy adults, grouped by age, with normal vestibular function were recruited. Each participant completed computerized DVA testing including static visual acuity, minimum perception time, and DVA using the NeuroCom inVision System. Testing was performed by two examiners in the same session and then repeated at a follow-up session 3 to 14 days later. Intraclass correlation coefficients (ICCs) were used to determine inter-rater and test-retest reliability. ICCs for inter-rater reliability ranged from 0.323 to 0.937 and from 0.434 to 0.909 for horizontal and vertical head movements, respectively. ICCs for test-retest reliability ranged from 0.154 to 0.856 and from 0.377 to 0.9062 for horizontal and vertical head movements, respectively. Overall, raw scores (left/right DVA and up/down DVA) were more reliable than DVA loss scores. The commercially available DVA system showed poor-to-fair reliability for DVA loss scores. The use of a convergence paradigm and the lack of a forced-choice paradigm may contribute to the poor reliability.
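
    Inter-rater agreement of the kind reported here is an intraclass correlation computed from a two-way ANOVA decomposition of a subjects-by-raters matrix. The abstract does not state which ICC form was used; the sketch below shows ICC(2,1) (two-way random effects, absolute agreement, single measure) as one common choice, with a made-up score matrix.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """Shrout & Fleiss ICC(2,1): two-way random effects, absolute agreement, single measure."""
    n, k = ratings.shape                      # n subjects, k raters
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    ms_subjects = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    ms_raters = n * ((rater_means - grand) ** 2).sum() / (k - 1)
    resid = ratings - subj_means[:, None] - rater_means[None, :] + grand
    ms_error = (resid ** 2).sum() / ((n - 1) * (k - 1))

    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Hypothetical DVA loss scores (logMAR) for 8 participants rated by 2 examiners.
scores = np.array([
    [0.18, 0.20], [0.25, 0.22], [0.10, 0.14], [0.31, 0.28],
    [0.05, 0.09], [0.22, 0.25], [0.15, 0.13], [0.27, 0.30],
])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")
```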

  20. Assuring long-term reliability of concentrator PV systems

    NASA Astrophysics Data System (ADS)

    McConnell, R.; Garboushian, V.; Brown, J.; Crawford, C.; Darban, K.; Dutra, D.; Geer, S.; Ghassemian, V.; Gordon, R.; Kinsey, G.; Stone, K.; Turner, G.

    2009-08-01

    Concentrator PV (CPV) systems have attracted significant interest because these systems incorporate the world's highest efficiency solar cells and they are targeting the lowest cost production of solar electricity for the world's utility markets. Because these systems are just entering solar markets, manufacturers and customers need to assure their reliability for many years of operation. There are three general approaches for assuring CPV reliability: 1) field testing and development over many years leading to improved product designs, 2) testing to internationally accepted qualification standards (especially for new products) and 3) extended reliability tests to identify critical weaknesses in a new component or design. Amonix has been a pioneer in all three of these approaches. Amonix has an internal library of field failure data spanning over 15 years that serves as the basis for its seven generations of CPV systems. An Amonix product served as the test CPV module for the development of the world's first qualification standard completed in March 2001. Amonix staff has served on international standards development committees, such as the International Electrotechnical Commission (IEC), in support of developing CPV standards needed in today's rapidly expanding solar markets. Recently Amonix employed extended reliability test procedures to assure reliability of multijunction solar cell operation in its seventh generation high concentration PV system. This paper will discuss how these three approaches have all contributed to assuring reliability of the Amonix systems.

  1. Use of seatbelts in cars with automatic belts.

    PubMed Central

    Williams, A F; Wells, J K; Lund, A K; Teed, N J

    1992-01-01

    Use of seatbelts in late model cars with automatic or manual belt systems was observed in suburban Washington, DC, Chicago, Los Angeles, and Philadelphia. In cars with automatic two-point belt systems, the use of shoulder belts by drivers was substantially higher than in the same model cars with manual three-point belts. This finding was true in varying degrees whatever the type of automatic belt, including cars with detachable nonmotorized belts, cars with detachable motorized belts, and especially cars with nondetachable motorized belts. Most of these automatic shoulder belt systems include manual lap belts. Use of lap belts was lower in cars with automatic two-point belt systems than in the same model cars with manual three-point belts; precisely how much lower could not be reliably estimated in this survey. Use of shoulder and lap belts was slightly higher in General Motors cars with detachable automatic three-point belts compared with the same model cars with manual three-point belts; in Hondas there was no difference in the rates of use of manual three-point belts and the rates of use of automatic three-point belts. PMID:1561301

  2. Life cycle assessment of overhead and underground primary power distribution.

    PubMed

    Bumby, Sarah; Druzhinina, Ekaterina; Feraldi, Rebe; Werthmann, Danae; Geyer, Roland; Sahl, Jack

    2010-07-15

    Electrical power can be distributed in overhead or underground systems, both of which generate a variety of environmental impacts at all stages of their life cycles. While there is considerable literature discussing the trade-offs between both systems in terms of aesthetics, safety, cost, and reliability, environmental assessments are relatively rare and limited to power cable production and end-of-life management. This paper assesses environmental impacts from overhead and underground medium voltage power distribution systems as they are currently built and managed by Southern California Edison (SCE). It uses process-based life cycle assessment (LCA) according to ISO 14044 (2006) and SCE-specific primary data to the extent possible. Potential environmental impacts have been calculated using a wide range of midpoint indicators, and robustness of the results has been investigated through sensitivity analysis of the most uncertain and potentially significant parameters. The studied underground system has higher environmental impacts in all indicators and for all parameter values, mostly due to its higher material intensity. For both systems and all indicators the majority of impact occurs during cable production. Promising strategies for impact reduction are thus cable failure rate reduction for overhead and cable lifetime extension for underground systems.

  3. Concurrent validity and reliability of the Alberta Infant Motor Scale in premature infants.

    PubMed

    Almeida, Kênnea Martins; Dutra, Maria Virginia Peixoto; Mello, Rosane Reis de; Reis, Ana Beatriz Rodrigues; Martins, Priscila Silveira

    2008-01-01

    To verify the concurrent validity and interobserver reliability of the Alberta Infant Motor Scale (AIMS) in premature infants followed up at the outpatient clinic of Instituto Fernandes Figueira, Fundação Oswaldo Cruz (IFF/Fiocruz), in Rio de Janeiro, Brazil. A total of 88 premature infants were enrolled at the follow-up clinic at IFF/Fiocruz between February and December of 2006. For the concurrent validity study, 46 infants were assessed at either 6 (n = 26) or 12 (n = 20) months' corrected age using the AIMS and the second edition of the Bayley Scales of Infant Development, by two different observers, with Pearson's correlation coefficient applied to analyze the results. For the reliability study, 42 infants between 0 and 18 months were assessed using the Alberta Infant Motor Scale by two different observers, and the results were analyzed using the intraclass correlation coefficient. The concurrent validity study found a high and statistically significant correlation between the two scales (r = 0.95, p < 0.01) for the entire population of infants, with higher values at 12 months (r = 0.89) than at 6 months (r = 0.74). The interobserver reliability study found satisfactory intraclass correlation coefficients at all ages tested, varying from 0.76 to 0.99. The AIMS is a valid and reliable instrument for the evaluation of motor development in high-risk infants within the Brazilian public health system.

  4. Customer-Driven Reliability Models for Multistate Coherent Systems

    DTIC Science & Technology

    1992-01-01

    Report documentation page (OCR) for a 1992 doctoral dissertation submitted to the Graduate College of the University of Oklahoma: "Customer-Driven Reliability Models for Multistate Coherent Systems," by Boedigheimer. No abstract text is recoverable from the scanned page.

  5. Older males signal more reliably.

    PubMed Central

    Proulx, Stephen R; Day, Troy; Rowe, Locke

    2002-01-01

    The hypothesis that females prefer older males because they have higher mean fitness than younger males has been the centre of recent controversy. These discussions have focused on the success of a female who prefers males of a particular age class when age cues, but not quality cues, are available. Thus, if the distribution of male quality changes with age, such that older males have on average genotypes with higher fitness than younger males, then a female who mates with older males has fitter offspring, which allows the female preference to spread through a genetic correlation. We develop a general model for male display in a species with multiple reproductive bouts that allows us to identify the conditions that promote reliable signalling within an age class. Because males have opportunities for future reproduction, they will reduce their levels of advertising compared with a semelparous species. In addition, because higher-quality males have more future reproduction, they will reduce their advertising more than low-quality males. Thus, the conditions for reliable signalling in a semelparous organism are generally not sufficient to produce reliable signalling in species with multiple reproductive bouts. This result is due to the possibility of future reproduction so that, as individuals age and the opportunities for future reproduction fade, signalling becomes more reliable. This provides a novel rationale for female preference for older mates; older males reveal more information in their sexual displays. PMID:12495495

  6. Evaluating reliability of WSN with sleep/wake-up interfering nodes

    NASA Astrophysics Data System (ADS)

    Distefano, Salvatore

    2013-10-01

    A wireless sensor network (WSN) is a distributed system composed of autonomous sensor nodes, wirelessly connected and randomly scattered over a geographical area to cooperatively monitor physical or environmental conditions. Adequate techniques and strategies are required to manage a WSN so that it works properly, observing specific quantities and metrics to evaluate its operational conditions. Among these, one of the most important is reliability. Considering a WSN as a system composed of sensor nodes, the system reliability approach can be applied, expressing the WSN reliability in terms of the reliability of its nodes. More specifically, since standby power management policies are often applied at node level and interferences among nodes may arise, a WSN can be treated as a dynamic system. In this article we therefore consider the WSN reliability evaluation problem from the dynamic system reliability perspective. Static-structural interactions are specified by the WSN topology, while sleep/wake-up standby policies and interferences due to wireless communications are treated as dynamic aspects. To represent and evaluate the WSN reliability, we use dynamic reliability block diagrams and Petri nets. The proposed technique overcomes the limits of Markov models for non-linear discharge processes, which cannot adequately represent aging. To demonstrate the effectiveness of the technique, we investigate some specific WSN topologies, providing guidelines for their representation and evaluation.

  7. Mechanical system reliability for long life space systems

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1994-01-01

    The creation of a compendium of mechanical limit states was undertaken in order to provide a reference base for the application of first-order reliability methods to mechanical systems in the context of the development of a system level design methodology. The compendium was conceived as a reference source specific to the problem of developing the noted design methodology, and not an exhaustive or exclusive compilation of mechanical limit states. The compendium is not intended to be a handbook of mechanical limit states for general use. The compendium provides a diverse set of limit-state relationships for use in demonstrating the application of probabilistic reliability methods to mechanical systems. The compendium is to be used in the reliability analysis of moderately complex mechanical systems.

  8. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to follow exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
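
    The paper's closed-form R_Y(t) and MTTF expressions are not reproduced in the abstract. As a much smaller illustration of the same Markov machinery, the sketch below computes the MTTF of a simplified 2-primary / 1-warm-standby system with exponential failures, one repair station, and no switching failures, using the standard relation m = (−Q_T)⁻¹·1 over the transient states. All rates and the reduced model are hypothetical.

```python
import numpy as np

# Hypothetical rates (per hour): active-unit failure, warm-standby failure, repair.
lam, lam_w, mu = 1e-3, 2e-4, 5e-2

# Transient states of a simplified M=2 primaries / W=1 warm standby model:
#   state 0: both primaries up, standby available
#   state 1: both primaries up, standby consumed (one unit in repair)
# The absorbing state (fewer than 2 operational units, i.e. system failure) is dropped.
Q = np.array([
    [-(2 * lam + lam_w),  2 * lam + lam_w],
    [ mu,                -(2 * lam + mu) ],
])

# Mean time to absorption from each transient state: solve (-Q) m = 1.
m = np.linalg.solve(-Q, np.ones(2))
print(f"MTTF starting from the fully-up state: {m[0]:.0f} hours")
```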

  9. [Reliability of a positron emission tomography system (CTI:PT931/04-12)].

    PubMed

    Watanuki, Shoichi; Ishii, Keizo; Itoh, Masatoshi; Orihara, Hikonojyo

    2002-05-01

    The maintenance data of a PET system (PT931/04-12, CTI Inc.) were analyzed to evaluate its reliability. We examined whether the initial performance in system resolution and efficiency had been maintained. The reliability of the PET system was evaluated from the MTTF (mean time to failure) and MTBF (mean time between failures) of each part of the system, obtained from 13 years of maintenance data. The initial performance was maintained for the resolution, but the efficiency decreased to 72% of its initial value. 83% of the system troubles involved the detector block (DB) and the DB control module (BC). The MTTF of the DB and BC were 2,733 and 3,314 days, and the MTBF of the DB and BC per detector ring were 38 and 114 days. The MTBF of the system was 23 days. We found a seasonal dependence in the number of DB and BC troubles, which suggests the troubles may be related to humidity. The reliability of the PET system strongly depends on the MTBF of the DB and BC. Improving the quality of these parts and optimizing the operating environment may increase the reliability of the PET system. For the popularization of PET, it is useful to evaluate the reliability of the system and to show it to users.
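
    The MTBF figures quoted are presumably obtained directly from the maintenance log as accumulated observation time divided by the number of failures; the sketch below shows that arithmetic on a hypothetical log (dates and counts are not the study's data).

```python
from datetime import date

# Hypothetical maintenance log for one subsystem: observation window and failure dates.
start, end = date(1989, 1, 1), date(2001, 12, 31)
failure_dates = [date(1990, 6, 3), date(1993, 2, 11), date(1996, 9, 20),
                 date(1999, 4, 2), date(2001, 7, 15)]

observation_days = (end - start).days
mtbf_days = observation_days / len(failure_dates)

print(f"observation window : {observation_days} days")
print(f"failures observed  : {len(failure_dates)}")
print(f"MTBF               : {mtbf_days:.0f} days")
```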

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reason, J.

    Transmission terminations available today are very reliable, but they need to be. In the field, they are continually exposed to pollution and extremes of ambient temperature. In many cases, they are in the rifle sights of vandals. In contrast, cable joints - often cited as the weakest links from an electrical viewpoint - are generally protected from physical damage underground, and many of the short cable systems being installed in the US today can be built without joints. All cable systems need terminations - mostly to air-insulated equipment. At 69 through 138 kV, there is intense competition among manufacturers to supply terminations for solid-dielectric cable that are low in cost, reliable, and require a minimum of skill to install. Some utilities are looking also for terminations that fit a range of cable sizes; terminations that do not contain liquid that can leak out; and terminations that are shatter-proof. All of these improvements are available in the US up to 69 kV. For higher voltages, they are on the horizon, if not already in use, overseas. 16 figs.

  11. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient.

    PubMed

    Shi, Fengjian; Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-10-16

    In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster-Shafer evidence theory (D-S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the evidence is independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D-S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on rank correlation coefficient is proposed. The model first uses rank correlation coefficient to measure the dependence degree between different evidence. Then, total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method.
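
    The paper's exact discounting formula is not given in the abstract; the sketch below only illustrates the general pipeline it describes: measure dependence between two bodies of evidence with a rank correlation coefficient, turn that into a discount factor, and combine the discounted masses with Dempster's rule. The frame of discernment, the sensor readings, the masses, and the mapping from correlation to discount are all hypothetical.

```python
from itertools import product
from scipy.stats import spearmanr

THETA = frozenset({"A", "B", "C"})  # hypothetical frame of discernment

def discount(m: dict, alpha: float) -> dict:
    """Shafer discounting: scale masses by alpha, move the remainder to the full frame."""
    out = {s: alpha * v for s, v in m.items()}
    out[THETA] = out.get(THETA, 0.0) + (1.0 - alpha)
    return out

def dempster(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions over subsets of THETA."""
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical sensor readings used to gauge dependence between the two sources.
sensor1 = [0.91, 0.85, 0.78, 0.88, 0.95, 0.80]
sensor2 = [0.89, 0.83, 0.80, 0.86, 0.93, 0.81]
rho, _ = spearmanr(sensor1, sensor2)

# Hypothetical mapping: the more dependent the evidence, the heavier the discount.
alpha = 1.0 - 0.5 * max(rho, 0.0)

m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.3, THETA: 0.1}
m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.3, THETA: 0.2}
fused = dempster(discount(m1, alpha), discount(m2, alpha))
for s, v in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(f"m({set(s)}) = {v:.3f}")
```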

  12. On the design and assessment of a 2.45 GHz radio telecommand system for remote patient monitoring.

    PubMed

    Crumley, G C; Evans, N E; Burns, J B; Trouton, T G

    1998-12-01

    This paper discusses the design and operational assessment of a minimum-power, 2.45 GHz portable pulse receiver and associated base transmitter comprising the interrogation link in a duplex, cross-band RF transponder designed for short-range, remote patient monitoring. A tangential receiver sensitivity of -53 dBm was achieved using a 50 ohm microstrip stub-matched zero-bias diode detector and a CMOS baseband amplifier consuming 20 microA from +3 V. The base transmitter generated an on-off keyed peak output of 0.5 W into 50 ohms. Both linear and right-hand circularly-polarised antennas were employed in system evaluations carried out within an operational Coronary Care Unit ward. For transmitting antenna heights of between 0.3 and 2.2 m above floor level, transponder interrogations were 95% reliable within the 82 m2 area of the ward, falling to an average of 46% in the surrounding rooms and corridors. Separating the polarisation modes by using the circular antenna set gave the higher overall reliability.

  13. Research on the Fusion of Dependent Evidence Based on Rank Correlation Coefficient

    PubMed Central

    Su, Xiaoyan; Qian, Hong; Yang, Ning; Han, Wenhua

    2017-01-01

    In order to meet the higher accuracy and system reliability requirements, the information fusion for multi-sensor systems is an increasing concern. Dempster–Shafer evidence theory (D–S theory) has been investigated for many applications in multi-sensor information fusion due to its flexibility in uncertainty modeling. However, classical evidence theory assumes that the evidence is independent of each other, which is often unrealistic. Ignoring the relationship between the evidence may lead to unreasonable fusion results, and even lead to wrong decisions. This assumption severely prevents D–S evidence theory from practical application and further development. In this paper, an innovative evidence fusion model to deal with dependent evidence based on rank correlation coefficient is proposed. The model first uses rank correlation coefficient to measure the dependence degree between different evidence. Then, total discount coefficient is obtained based on the dependence degree, which also considers the impact of the reliability of evidence. Finally, the discount evidence fusion model is presented. An example is illustrated to show the use and effectiveness of the proposed method. PMID:29035341

  14. Near-misses are an opportunity to improve patient safety: adapting strategies of high reliability organizations to healthcare.

    PubMed

    Van Spall, Harriette; Kassam, Alisha; Tollefson, Travis T

    2015-08-01

    Near-miss investigations in high reliability organizations (HROs) aim to mitigate risk and improve system safety. Healthcare settings have a higher rate of near-misses and subsequent adverse events than most high-risk industries, but near-misses are not systematically reported or analyzed. In this review, we will describe the strategies for near-miss analysis that have facilitated a culture of safety and continuous quality improvement in HROs. Near-miss analysis is routine and systematic in HROs such as aviation. Strategies implemented in aviation include the Commercial Aviation Safety Team, which undertakes systematic analyses of near-misses, so that findings can be incorporated into Standard Operating Procedures (SOPs). Other strategies resulting from incident analyses include Crew Resource Management (CRM) for enhanced communication, situational awareness training, adoption of checklists during operations, and built-in redundancy within systems. Health care organizations should consider near-misses as opportunities for quality improvement. The systematic reporting and analysis of near-misses, commonplace in HROs, can be adapted to health care settings to prevent adverse events and improve clinical outcomes.

  15. 18 CFR 39.8 - Delegation to a Regional Entity.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... agreement promotes effective and efficient administration of Bulk-Power System reliability. (d) The... Interconnection-wide basis promotes effective and efficient administration of Bulk-Power System reliability and... THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT...

  16. Software reliability models for fault-tolerant avionics computers and related topics

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1987-01-01

    Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.

  17. 75 FR 71613 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... Reliability Standards. The proposed Reliability Standards were designed to prevent instability, uncontrolled... Reliability Standards.\\2\\ The proposed Reliability Standards were designed to prevent instability... the SOLs, which if exceeded, could expose a widespread area of the bulk electric system to instability...

  18. Software Reliability Issues Concerning Large and Safety Critical Software Systems

    NASA Technical Reports Server (NTRS)

    Kamel, Khaled; Brown, Barbara

    1996-01-01

    This research was undertaken to provide NASA with a survey of state-of-the-art techniques used in industry and academia to provide safe, reliable, and maintainable software to drive large systems. Such systems must match the complexity and strict safety requirements of NASA's shuttle system. In particular, the Launch Processing System (LPS) is being considered for replacement. The LPS is responsible for monitoring and commanding the shuttle during test, repair, and launch phases. NASA built this system in the 1970s using mostly hardware techniques to provide for increased reliability, but it did so often using custom-built equipment, which has not been able to keep up with current technologies. This report surveys the major techniques used in industry and academia to ensure reliability in large and critical computer systems.

  19. In Space Nuclear Power as an Enabling Technology for Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Sackheim, Robert L.; Houts, Michael

    2000-01-01

    Deep Space Exploration missions, both for scientific and Human Exploration and Development of Space (HEDS) purposes, appear to be as weight limited today as they would have been 35 years ago. Right behind the weight constraints is the nearly equally important mission limitation of cost. Launch vehicles, upper stages and in-space propulsion systems also cost about the same today, with the same efficiency, as they have had for many years (excluding the impact of inflation). These dual mission constraints combine to force either very expensive mega-system missions or very lightweight but high-risk/low-margin planetary spacecraft designs, such as the recent unsuccessful attempts at an extremely low cost mission to Mars during the 1998-99 opportunity (i.e., Mars Climate Orbiter and the Mars Polar Lander). When one considers spacecraft missions to the outer heliopause or even the outer planets, the enormous weight and cost constraints will impose even more daunting concerns for mission cost, risk and the ability to establish adequate mission margins for success. This paper will discuss the benefits of using a safe in-space nuclear reactor as the basis for providing both sufficient electric power and high performance space propulsion that will greatly reduce mission risk and significantly increase weight (IMLEO) and cost margins. Weight and cost margins are increased by enabling much higher payload fractions of IMLEO and redundant design features for a given launch vehicle. The paper will also discuss and summarize the recent advances in nuclear reactor technology, the safety of modern reactor designs and operating practice and experience, as well as advances in reactor-coupled power generation and high performance nuclear thermal and electric propulsion technologies. It will be shown that these nuclear power and propulsion technologies are major enabling capabilities for higher reliability, higher margin and lower cost deep space missions designed to reliably reach the outer planets for scientific exploration.

  20. Improving the driver-automation interaction: an approach using automation uncertainty.

    PubMed

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  1. Tightly Coupled Integration of Ionosphere-Constrained Precise Point Positioning and Inertial Navigation Systems

    PubMed Central

    Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald

    2015-01-01

    The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS. PMID:25763647

  2. Report on the status of linear drive coolers for the Department of Defense Standard Advanced Dewar Assembly (SADA)

    NASA Astrophysics Data System (ADS)

    Salazar, William

    2003-01-01

    The Standard Advanced Dewar Assembly (SADA) is the critical module in the Department of Defense (DoD) standardization effort of scanning second-generation thermal imaging systems. DoD has established a family of SADA's to address requirements for high performance (SADA I), mid-to-high performance (SADA II), and compact class (SADA III) systems. SADA's consist of the Infrared Focal Plane Array (IRFPA), Dewar, Command and Control Electronics (C&CE), and the cryogenic cooler. SADA's are used in weapons systems such as Comanche and Apache helicopters, the M1 Abrams Tank, the M2 Bradley Fighting Vehicle, the Line of Sight Antitank (LOSAT) system, the Improved Target Acquisition System (ITAS), and Javelin's Command Launch Unit (CLU). DOD has defined a family of tactical linear drive coolers in support of the family of SADA's. The Stirling linear drive cryo-coolers are utilized to cool the SADA's Infrared Focal Plane Arrays (IRFPAs) to their operating cryogenic temperatures. These linear drive coolers are required to meet strict cool-down time requirements along with lower vibration output, lower audible noise, and higher reliability than currently fielded rotary coolers. This paper will (1) outline the characteristics of each cooler, (2) present the status and results of qualification tests, and (3) present the status and test results of efforts to increase linear drive cooler reliability.

  3. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
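
    MC-HARP itself is a Fortran code with behavioral decomposition and variance reduction; as a toy illustration of the Weibull capability mentioned, the sketch below estimates the mission reliability of a triple-modular-redundant (2-of-3) arrangement with Weibull component lifetimes by plain sampling, without any variance reduction. The shape, scale, mission time, and voting structure are hypothetical.

```python
import random

# Hypothetical Weibull lifetime parameters for each redundant module:
# shape > 1 gives an increasing (wear-out) failure rate, which a constant-rate
# exponential model cannot represent.
shape, scale_hours = 1.8, 50_000.0
mission_hours = 10_000.0

def module_survives(rng: random.Random) -> bool:
    """Sample one Weibull lifetime and check that it exceeds the mission time."""
    return rng.weibullvariate(scale_hours, shape) > mission_hours

def tmr_reliability(trials: int = 200_000, seed: int = 7) -> float:
    """2-of-3 majority voting: the system works if at least two modules survive."""
    rng = random.Random(seed)
    good = sum(sum(module_survives(rng) for _ in range(3)) >= 2 for _ in range(trials))
    return good / trials

print(f"estimated mission reliability of the 2-of-3 system: {tmr_reliability():.4f}")
```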

  4. Renewal of the Control System and Reliable Long Term Operation of the LHD Cryogenic System

    NASA Astrophysics Data System (ADS)

    Mito, T.; Iwamoto, A.; Oba, K.; Takami, S.; Moriuchi, S.; Imagawa, S.; Takahata, K.; Yamada, S.; Yanagi, N.; Hamaguchi, S.; Kishida, F.; Nakashima, T.

    The Large Helical Device (LHD) is a heliotron-type fusion plasma experimental machine which consists of a fully superconducting magnet system cooled by a helium refrigerator with a total equivalent cooling capacity of 9.2 kW at 4.4 K. Seventeen plasma experimental campaigns have been performed successfully since 1997 with a high reliability of 99%. However, sixteen years have passed since the beginning of system operation, and improvements are being implemented to prevent serious failures and to pursue further reliability. The LHD cryogenic control system was designed and developed at construction time as an open system utilizing the latest control equipment, namely VME controllers and UNIX workstations. Since then, a generational change of control equipment has taken place. Down-sizing of the control devices from VME controllers to compact PCI controllers has been planned in order to simplify the system configuration and to improve system reliability. The new system is composed of a compact PCI controller and remote I/O connected with EtherNet/IP. Redundancy is achieved by doubling the CPU, LAN, and remote I/O, respectively. The smooth renewal of the LHD cryogenic control system and the further improvement of the cryogenic system reliability are reported.

  5. Surfing for mouth guards: assessing quality of online information.

    PubMed

    Magunacelaya, Macarena B; Glendor, Ulf

    2011-10-01

    The Internet is an easily accessible and commonly used source of health-related information, but evaluations of the quality of this information within the dental trauma field are still lacking. The aims of this study are (i) to present the most current scientific knowledge regarding mouth guards used in sport activities, (ii) to suggest a scoring system to evaluate the quality of information pertaining to mouth guard protection related to World Wide Web sites and (iii) to employ this scoring system when seeking reliable mouth guard-related websites. First, an Internet search using the keywords 'athletic injuries/prevention and control' and 'mouth protector' or 'mouth guards' in English was performed on PubMed, Cochrane, SvedMed+ and Web of Science to identify scientific knowledge about mouth guards. Second, an Internet search using the keywords 'consumer health information Internet', 'Internet information public health' and 'web usage-seeking behaviour' was performed on PubMed and Web of Science to obtain scientific articles seeking to evaluate the quality of health information on the Web. Based on the articles found in the second search, two scoring systems were selected. Then, an Internet search using the keywords 'mouth protector', 'mouth guards' and 'gum shields' in English was performed on the search engines Google, MSN and Yahoo. The websites selected were evaluated for reliability and accuracy. Of the 223 websites retrieved, 39 were designated valid and evaluated. Nine sites scored 22 or higher. The mean total score of the 39 websites was 14.2. Fourteen websites scored higher than the mean total score, and 25 websites scored less. The highest total score, presented by a Public Institution Web site (Health Canada), was 31 from a maximum possible score of 34, and the lowest score was 0. This study shows that there is a high amount of information about mouth guards on the Internet but that the quality of this information varies. It should be the responsibility of health care professionals to suggest and provide reliable Internet URL addresses to patients. In addition, an appropriate search terminology and search strategy should be made available to persons who want to search beyond the recommended sites. © 2011 John Wiley & Sons A/S.

  6. A proposed simple method for measurement in the anterior chamber angle: biometric gonioscopy.

    PubMed

    Congdon, N G; Spaeth, G L; Augsburger, J; Klancnik, J; Patel, K; Hunter, D G

    1999-11-01

    To design a system of gonioscopy that will allow greater interobserver reliability and more clearly defined screening cutoffs for angle closure than current systems while being simple to teach and technologically appropriate for use in rural Asia, where the prevalence of angle-closure glaucoma is highest. Clinic-based validation and interobserver reliability trial. Study 1: 21 patients 18 years of age and older recruited from a university-based specialty glaucoma clinic; study 2: 32 patients 18 years of age and older recruited from the same clinic. In study 1, all participants underwent conventional gonioscopy by an experienced observer (GLS) using the Spaeth system and in the same eye also underwent Scheimpflug photography, ultrasonographic measurement of anterior chamber depth and axial length, automatic refraction, and biometric gonioscopy with measurement of the distance from iris insertion to Schwalbe's line using a reticule based in the slit-lamp ocular. In study 2, all participants underwent both conventional gonioscopy and biometric gonioscopy by an experienced gonioscopist (NGC) and a medical student with no previous training in gonioscopy (JK). Study 1: The association between biometric gonioscopy and conventional gonioscopy, Scheimpflug photography, and other factors known to correlate with the configuration of the angle. Study 2: Interobserver agreement using biometric gonioscopy compared to that obtained with conventional gonioscopy. In study 1, there was an independent, monotonic, statistically significant relationship between biometric gonioscopy and both Spaeth angle (P = 0.001, t test) and Spaeth insertion (P = 0.008, t test) grades. Biometric gonioscopy correctly identified six of six patients with occludable angles according to Spaeth criteria. Biometric gonioscopic grade was also significantly associated with the anterior chamber angle as measured by Scheimpflug photography (P = 0.005, t test). In study 2, the intraclass correlation coefficient between graders for biometric gonioscopy (0.97) was higher than for Spaeth angle grade (0.72) or Spaeth insertion grade (0.84). Biometric gonioscopy correlates well with other measures of the anterior chamber angle, shows a higher degree of interobserver reliability than conventional gonioscopy, and can readily be learned by an inexperienced observer.

  7. Practical application of power conditioning to electric propulsion for passenger vehicles

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Lee, F. C.; Nehl, T. W.; Overton, B. P.

    1980-01-01

    A functional model 15 HP, 120 volt, 4-pole, 7600 r.p.m. samarium-cobalt permanent magnet type brushless dc motor-transistorized power conditioner unit was designed, fabricated and tested for specific use in propulsion of electric passenger vehicles. This new brushless motor system, including its power conditioner package, has a number of important advantages over existing systems such as reduced weight and volume, higher reliability, and potential for improvements in efficiencies. These advantages are discussed in this paper in light of the substantial test data collected during experimentation with the newly developed conditioner motor propulsion system. Details of the power conditioner design philosophy and particulars are given in the paper. Also, described here are the low level electronic design and operation in relation to the remainder of the system.

  8. A wireless remote high-power laser device for optogenetic experiments

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Gong, Q.; Li, Y. Y.; Li, A. Z.; Zhang, Y. G.; Cao, C. F.; Xu, H. X.; Cui, J.; Gao, J. J.

    2015-04-01

    Optogenetics affords the ability to stimulate genetically targeted neurons in a relatively innocuous manner. Reliable and targetable tools have enabled versatile new classes of investigation in the study of neural systems. However, current hardware systems are generally limited to acute measurements or require external tethering of the system to the light source. Here we provide a low-cost, high-power, remotely controlled blue laser diode (LD) stimulator for the application of optogenetics in neuroscience, focusing on wearable and intelligent devices, which can be carried by monkeys, rats and any other animals under study. Compared with the conventional light emitting diode (LED) device, this LD stimulator has higher efficiency, output power, and stability. Our system is fully wirelessly controlled and suitable for experiments with a large number of animals.

  9. Research Review: Test-retest reliability of standardized diagnostic interviews to assess child and adolescent psychiatric disorders: a systematic review and meta-analysis.

    PubMed

    Duncan, Laura; Comeau, Jinette; Wang, Li; Vitoroulis, Irene; Boyle, Michael H; Bennett, Kathryn

    2018-02-19

    A better understanding of factors contributing to the observed variability in estimates of test-retest reliability in published studies on standardized diagnostic interviews (SDI) is needed. The objectives of this systematic review and meta-analysis were to estimate the pooled test-retest reliability for parent and youth assessments of seven common disorders, and to examine sources of between-study heterogeneity in reliability. Following a systematic review of the literature, multilevel random effects meta-analyses were used to analyse 202 reliability estimates (Cohen's kappa, κ) from 31 eligible studies and 5,369 assessments of 3,344 children and youth. Pooled reliability was moderate at κ = .58 (95% CI 0.53-0.63) and between-study heterogeneity was substantial (Q = 2,063, df = 201, p < .001, and I² = 79%). In subgroup analysis, reliability varied across informants for specific types of psychiatric disorder (κ = .53-.69 for parent vs. κ = .39-.68 for youth), with estimates significantly higher for parents on attention deficit hyperactivity disorder, oppositional defiant disorder and the broad groupings of externalizing and any disorder. Reliability was also significantly higher in studies with indicators of poor or fair study methodology quality (sample size <50, retest interval <7 days). Our findings raise important questions about the meaningfulness of published evidence on the test-retest reliability of SDIs and the usefulness of these tools in both clinical and research contexts. Potential remedies include the introduction of standardized study and reporting requirements for reliability studies, and exploration of other approaches to assessing and classifying child and adolescent psychiatric disorder. © 2018 Association for Child and Adolescent Mental Health.
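
    The κ statistic pooled in this meta-analysis is the standard chance-corrected agreement between the two administrations of the interview. A minimal sketch of its computation on a hypothetical 2x2 test-retest diagnosis table follows; the counts are illustrative only.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: test, columns: retest)."""
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(len(table))) / n
    p_chance = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical test-retest table for one disorder (present/absent on each occasion).
table = [
    [18,  7],   # disorder present at test: 18 also present at retest, 7 not
    [ 9, 86],   # disorder absent at test : 9 present at retest, 86 not
]
print(f"Cohen's kappa = {cohens_kappa(table):.2f}")
```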

  10. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

    Physics-of-failure approaches have gained widespread acceptance within the electronics reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  11. Development and validation of the coronary heart disease scale under the system of quality of life instruments for chronic diseases QLICD-CHD: combinations of classical test theory and Generalizability Theory.

    PubMed

    Wan, Chonghua; Li, Hezhan; Fan, Xuejin; Yang, Ruixue; Pan, Jiahua; Chen, Wenru; Zhao, Rong

    2014-06-04

    Quality of life (QOL) in patients with coronary heart disease (CHD) is now a worldwide concern, yet disease-specific instruments are scarce and none has been developed using the modular approach. This paper aims to develop the CHD scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-CHD) by the modular approach and to validate it with both classical test theory and Generalizability Theory. The QLICD-CHD was developed based on programmed decision procedures with multiple nominal and focus group discussions, in-depth interviews, pre-testing and quantitative statistical procedures. 146 inpatients with CHD provided data by completing QOL measurements three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability and responsiveness employing correlation analysis, factor analyses, multi-trait scaling analysis, t-tests, and the G studies and D studies of Generalizability Theory analysis. Multi-trait scaling analysis, correlation and factor analyses confirmed good construct validity and criterion-related validity when using SF-36 as a criterion. The internal consistency α and test-retest reliability coefficients (Pearson r and intra-class correlations, ICC) for the overall instrument and all domains were higher than 0.70 and 0.80 respectively. The overall score and all domains except the social domain showed statistically significant changes after treatment, with moderate effect sizes (standardized response mean, SRM) ranging from 0.32 to 0.67. G-coefficients and indexes of dependability (Ф coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-CHD has good validity and reliability and moderate responsiveness, and can be used as a quality of life instrument for patients with CHD. However, to obtain better reliability, the number of items in the social domain should be increased or the quality, rather than the quantity, of the items should be improved.

  12. High-reliability computing for the smarter planet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Heather M; Graham, Paul; Manuzzato, Andrea

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability grows. Already critical infrastructure is failing too frequently. In this paper, we will introduce the Cross-Layer Reliability concept for designing more reliable computer systems.

  13. Reliability of TMS metrics in patients with chronic incomplete spinal cord injury.

    PubMed

    Potter-Baker, K A; Janini, D P; Frost, F S; Chabra, P; Varnerin, N; Cunningham, D A; Sankarasubramanian, V; Plow, E B

    2016-11-01

    Test-retest reliability analysis in individuals with chronic incomplete spinal cord injury (iSCI). The purpose of this study was to examine the reliability of neurophysiological metrics acquired with transcranial magnetic stimulation (TMS) in individuals with chronic incomplete tetraplegia. Cleveland Clinic Foundation, Cleveland, Ohio, USA. TMS metrics of corticospinal excitability, output, inhibition and motor map distribution were collected in muscles with a higher MRC grade and muscles with a lower MRC grade on the more affected side of the body. Metrics denoting upper limb function were also collected. All metrics were collected at two sessions separated by a minimum of two weeks. Reliability between sessions was determined using Spearman's correlation coefficients and concordance correlation coefficients (CCCs). We found that TMS metrics acquired in higher MRC grade muscles were approximately two times more reliable than those collected in lower MRC grade muscles. TMS metrics of motor map output, however, demonstrated poor reliability regardless of muscle choice (P=0.34; CCC=0.51). Correlation analysis indicated that patients with more baseline impairment and/or those in a more chronic phase of iSCI demonstrated greater variability of metrics. In iSCI, reliability of TMS metrics varies depending on the grade of the tested muscle. Variability is also influenced by factors such as baseline motor function and time post SCI. Future studies that use TMS metrics in longitudinal study designs to understand functional recovery should be cautious, as choice of muscle and clinical characteristics can influence reliability.
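
    As a minimal illustration of the concordance statistic referenced above, the sketch below computes Lin's concordance correlation coefficient between two hypothetical test sessions; it is a generic formula, not the authors' analysis pipeline.

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between two sessions."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                 # population (1/n) variances
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical motor-evoked-potential amplitudes (mV) at session 1 and session 2
    s1 = [0.8, 1.2, 0.5, 1.9, 1.1]
    s2 = [0.9, 1.1, 0.6, 1.7, 1.3]
    print(round(concordance_ccc(s1, s2), 3))
    ```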

  14. Reliability theory for repair service organization simulation and increase of innovative attraction of industrial enterprises

    NASA Astrophysics Data System (ADS)

    Dolzhenkova, E. V.; Iurieva, L. V.

    2018-05-01

    The study presents the author's algorithm for simulating the organization of an industrial enterprise repair service based on reliability theory, as well as the results of its application. Monitoring of the repair service organization is proposed on the basis of the enterprise's state indexes for the main resources (equipment, labour, finances, repair areas), which allows quantitative evaluation of the reliability level as a summary rating of these parameters and ensures an appropriate level of operational reliability of the serviced technical objects. Under conditions of tough competition, the higher the efficiency of production and of the repair service itself, the higher the innovative attractiveness of the industrial enterprise. The results of the calculations show that, in order to prevent inefficient production losses and to reduce repair costs, it is advisable to apply reliability theory. The overall reliability rating calculated on the basis of the author's algorithm has low values. Processing of the statistical data forms the reliability characteristics for the different workshops and services of an industrial enterprise, which makes it possible to define the failure rates of the various units of equipment and to establish the reliability indexes necessary for subsequent mathematical simulation. The proposed simulation algorithm contributes to increasing the efficiency of the repair service organization and improving the innovative attractiveness of an industrial enterprise.

  15. Model of load balancing using reliable algorithm with multi-agent system

    NASA Astrophysics Data System (ADS)

    Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.

    2017-04-01

    Massive technology development scales with the growth of internet users, which increases network traffic activity and, in turn, the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue in a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consists of defining the identification system, specification requirements, network topology and system infrastructure design. The simulation sent 1,800 requests over 10 s from users to the servers and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the simulation show that the LFB method with a mobile agent can perform load balancing efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
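
    The abstract does not spell out the algorithm, so the sketch below shows one plausible reading of least-time-first-byte routing: send each request to the backend whose smoothed time-to-first-byte, as reported by a monitoring agent, is currently lowest. The class, server names, and smoothing factor are hypothetical, not taken from the paper.

    ```python
    class LeastFirstByteBalancer:
        """Pick the backend with the lowest moving-average time-to-first-byte (TTFB)."""

        def __init__(self, servers, alpha=0.3):
            self.ttfb = {s: 0.0 for s in servers}   # smoothed TTFB per backend (seconds)
            self.alpha = alpha                       # smoothing factor for new samples

        def report(self, server, measured_ttfb):
            """Fold in a TTFB measurement collected by a monitoring agent."""
            old = self.ttfb[server]
            self.ttfb[server] = (1 - self.alpha) * old + self.alpha * measured_ttfb

        def choose(self):
            """Route the next request to the currently fastest backend."""
            return min(self.ttfb, key=self.ttfb.get)

    lb = LeastFirstByteBalancer(["backend-1", "backend-2", "backend-3"])
    lb.report("backend-1", 0.040)
    lb.report("backend-2", 0.120)
    lb.report("backend-3", 0.075)
    print(lb.choose())   # -> backend-1
    ```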

  16. Evaluation of ZFS as an efficient WLCG storage backend

    NASA Astrophysics Data System (ADS)

    Ebert, M.; Washbrook, A.

    2017-10-01

    A ZFS-based software RAID system was tested for performance against a hardware RAID system providing storage based on the traditional Linux file systems XFS and EXT4. These tests were done for a healthy RAID array as well as for a degraded RAID array and during the rebuild of a RAID array. It was found that ZFS performs better in almost all test scenarios. In addition, distinct features of ZFS were tested for WLCG data storage use, like compression and higher RAID levels with triple redundancy information. The long term reliability was observed after converting all production storage servers at the Edinburgh WLCG Tier-2 site to ZFS, resulting in about 1.2 PB of ZFS-based storage at this site.

  17. Reliability of intracerebral hemorrhage classification systems: A systematic review.

    PubMed

    Rannikmäe, Kristiina; Woodfield, Rebecca; Anderson, Craig S; Charidimou, Andreas; Chiewvit, Pipat; Greenberg, Steven M; Jeng, Jiann-Shing; Meretoja, Atte; Palm, Frederic; Putaala, Jukka; Rinkel, Gabriel Je; Rosand, Jonathan; Rost, Natalia S; Strbian, Daniel; Tatlisumak, Turgut; Tsai, Chung-Fen; Wermer, Marieke Jh; Werring, David; Yeh, Shin-Joe; Al-Shahi Salman, Rustam; Sudlow, Cathie Lm

    2016-08-01

    Accurately distinguishing non-traumatic intracerebral hemorrhage (ICH) subtypes is important since they may have different risk factors, causal pathways, management, and prognosis. We systematically assessed the inter- and intra-rater reliability of ICH classification systems. We sought all available reliability assessments of anatomical and mechanistic ICH classification systems from electronic databases and personal contacts until October 2014. We assessed included studies' characteristics, reporting quality and potential for bias; summarized reliability with kappa value forest plots; and performed meta-analyses of the proportion of cases classified into each subtype. We included 8 of 2152 studies identified. Inter- and intra-rater reliabilities were substantial to perfect for anatomical and mechanistic systems (inter-rater kappa values: anatomical 0.78-0.97 [six studies, 518 cases], mechanistic 0.89-0.93 [three studies, 510 cases]; intra-rater kappas: anatomical 0.80-1 [three studies, 137 cases], mechanistic 0.92-0.93 [two studies, 368 cases]). Reporting quality varied but no study fulfilled all criteria and none was free from potential bias. All reliability studies were performed with experienced raters in specialist centers. Proportions of ICH subtypes were largely consistent with previous reports suggesting that included studies are appropriately representative. Reliability of existing classification systems appears excellent but is unknown outside specialist centers with experienced raters. Future reliability comparisons should be facilitated by studies following recently published reporting guidelines. © 2016 World Stroke Organization.

  18. Study on data acquisition system based on reconfigurable cache technology

    NASA Astrophysics Data System (ADS)

    Zhang, Qinchuan; Li, Min; Jiang, Jun

    2018-03-01

    Waveform capture rate is one of the key features of digital acquisition systems, representing the waveform processing capability of the system per unit time. The higher the waveform capture rate, the greater the chance of capturing elusive events and the more reliable the test result. First, this paper analyzes the impact of several factors on the waveform capture rate of the system; then a novel technology based on a reconfigurable cache is proposed to optimize the system architecture. The simulation results show that the signal-to-noise ratio of the signal and the capacity and structure of the cache have significant effects on the waveform capture rate. Finally, the technology is demonstrated in engineering practice, and the results show that the waveform capture rate of the system is improved substantially without a significant increase in system cost; the proposed technology has broad application prospects.

  19. Feasibility study of self-powered magnetorheological damper systems

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Liao, Wei-Hsin

    2012-04-01

    This paper provides a feasibility study of self-powered magnetorheological (MR) damper systems, which convert vibration and shock energy into electrical energy to power the damper under control. The self-powered feature brings merits such as higher reliability, energy saving, and less maintenance for MR damper systems. A self-powered MR damper system is proposed and modeled, and a criterion for whether the MR damper system is self-powered is proposed. A prototype MR damper with power generation is designed, fabricated, and tested, and its model is experimentally validated. The damper is then applied to a 2 DOF suspension system under an on-off skyhook controller to obtain the self-powered working range and vibration control performance. Effects of key factors on self-powered MR damper systems are studied, and design considerations are given in order to increase the self-powered working range.
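
    The on-off skyhook law mentioned above has a standard textbook form; the sketch below shows that generic law rather than the authors' controller, with hypothetical velocities and damping coefficients.

    ```python
    def skyhook_on_off(v_sprung, v_relative, c_high, c_low):
        """Classic on-off skyhook law for a semi-active (e.g. MR) damper.

        v_sprung   : absolute velocity of the sprung mass
        v_relative : relative velocity across the damper (sprung minus unsprung)
        Returns the commanded damping coefficient.
        """
        # Demand high damping only when the damper force can oppose sprung-mass motion.
        return c_high if v_sprung * v_relative > 0 else c_low

    # Hypothetical values: sprung mass moving up while the damper is extending
    print(skyhook_on_off(v_sprung=0.2, v_relative=0.1, c_high=2500.0, c_low=300.0))
    ```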

  20. The development of a multi-target compiler-writing system for flight software development

    NASA Technical Reports Server (NTRS)

    Feyock, S.; Donegan, M. K.

    1977-01-01

    A wide variety of systems designed to assist the user in the task of writing compilers has been developed. A survey of these systems reveals that none is entirely appropriate to the purposes of the MUST project, which involves the compilation of one or at most a small set of higher-order languages to a wide variety of target machines offering little or no software support. This requirement dictates that any compiler writing system employed must provide maximal support in the areas of semantics specification and code generation, the areas in which existing compiler writing systems as well as theoretical underpinnings are weakest. This paper describes an ongoing research and development effort to create a compiler writing system which will overcome these difficulties, thus providing a software system which makes possible the fast, trouble-free creation of reliable compilers for a wide variety of target computers.

  1. Development and application of a T7 RNA polymerase-dependent expression system for antibiotic production improvement in Streptomyces.

    PubMed

    Wei, Junhong; Tian, Jinjin; Pan, Guoqing; Xie, Jie; Bao, Jialing; Zhou, Zeyang

    2017-06-01

    To develop a reliable and easy-to-use expression system for improving antibiotic production in Streptomyces, a two-compound T7 RNA polymerase-dependent gene expression system was developed. In this system, the T7 RNA polymerase coding sequence was optimized based on the codon usage of Streptomyces coelicolor. To evaluate the functionality of this system, we constructed an activator gene overexpression strain for enhancement of actinorhodin production. By overexpressing the positive regulator actII-ORF4 with this system, the maximum actinorhodin yield of the engineered strain was 15-fold higher and the fermentation time was decreased by 48 h. The modified two-compound T7 expression system both improves antibiotic production and accelerates the fermentation process in Streptomyces. This provides a general and useful strategy for strain improvement of important antibiotic-producing Streptomyces strains.

  2. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
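
    DATMAN itself is not shown here; as a minimal illustration of the Bayesian updating it performs, the sketch below applies a conjugate gamma-Poisson update to a constant failure rate given observed failures and operating time. The prior parameters and data are made up.

    ```python
    from scipy import stats

    def update_failure_rate(prior_alpha, prior_beta, failures, exposure_hours):
        """Conjugate Bayesian update of a constant failure rate (per hour).

        Prior:      lambda ~ Gamma(alpha, rate=beta)
        Likelihood: failures ~ Poisson(lambda * exposure_hours)
        Posterior:  lambda ~ Gamma(alpha + failures, rate=beta + exposure_hours)
        """
        post_alpha = prior_alpha + failures
        post_beta = prior_beta + exposure_hours
        posterior = stats.gamma(a=post_alpha, scale=1.0 / post_beta)
        return posterior.mean(), posterior.interval(0.90)

    # Hypothetical numbers: vague prior, 2 failures observed over 10,000 component-hours
    mean_rate, ci90 = update_failure_rate(prior_alpha=0.5, prior_beta=100.0,
                                          failures=2, exposure_hours=10_000.0)
    print(mean_rate, ci90)
    ```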

  3. Research Methods Tutor: evaluation of a dialogue-based tutoring system in the classroom.

    PubMed

    Arnott, Elizabeth; Hastings, Peter; Allbritton, David

    2008-08-01

    Research Methods Tutor (RMT) is a dialogue-based intelligent tutoring system for use in conjunction with undergraduate psychology research methods courses. RMT includes five topics that correspond to the curriculum of introductory research methods courses: ethics, variables, reliability, validity, and experimental design. We evaluated the effectiveness of the RMT system in the classroom using a nonequivalent control group design. Students in three classes (n = 83) used RMT, and students in two classes (n = 53) did not use RMT. Results indicated that the use of RMT yielded strong learning gains of 0.75 standard deviations above classroom instruction alone. Further, the dialogue-based tutoring condition of the system resulted in higher gains than did the textbook-style condition (CAI version) of the system. Future directions for RMT include the addition of new topics and tutoring elements.

  4. Power law-based local search in spider monkey optimisation for lower order system modelling

    NASA Astrophysics Data System (ADS)

    Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala

    2017-01-01

    The nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a lower order approximation that retains almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.

  5. Photovoltaic performance and reliability workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, B

    1996-10-01

    This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop held at the Sheraton Denver West Hotel on September 4-6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; need for activities to separate variables by testing individual components of PV systems (e.g. cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

  6. Hierarchical specification of the SIFT fault tolerant flight control system

    NASA Technical Reports Server (NTRS)

    Melliar-Smith, P. M.; Schwartz, R. L.

    1981-01-01

    The specification and mechanical verification of the Software Implemented Fault Tolerance (SIFT) flight control system is described. The methodology employed in the verification effort is discussed, and a description of the hierarchical models of the SIFT system is given. To meet NASA's objective for the reliability of safety critical flight control systems, the SIFT computer must achieve a reliability well beyond the levels at which reliability can actually be measured. The methodology employed to demonstrate rigorously that the SIFT computer meets its reliability requirements is described. The hierarchy of design specifications from very abstract descriptions of system function down to the actual implementation is explained. The most abstract design specifications can be used to verify that the system functions correctly and with the desired reliability, since almost all details of the realization were abstracted out. A succession of lower level models refine these specifications to the level of the actual implementation, and can be used to demonstrate that the implementation has the properties claimed of the abstract design specifications.

  7. Sensitivity Analysis of ProSEDS (Propulsive Small Expendable Deployer System) Data Communication System

    NASA Technical Reports Server (NTRS)

    Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.

    1999-01-01

    This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate goal of achieving a reliable spacecraft data communication system. A sensitivity analysis of the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, will be investigated. General issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will also be shown. The performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay and reliability, will be introduced and discussed.

  8. Reliability analysis of a phasor measurement unit using a generalized fuzzy lambda-tau (GFLT) technique.

    PubMed

    Komal

    2018-05-01

    Nowadays power consumption is increasing day by day. To fulfill failure-free power requirements, planning and implementation of an effective and reliable power management system is essential. The phasor measurement unit (PMU) is one of the key devices in wide area measurement and control systems, and its reliable performance assures failure-free power supply for any power system. The purpose of the present study is therefore to analyse the reliability of a PMU used for controllability and observability of power systems, utilizing the available uncertain data. In this paper, a generalized fuzzy lambda-tau (GFLT) technique has been proposed for this purpose. In GFLT, the components' uncertain failure and repair rates are fuzzified using fuzzy numbers of different shapes, such as triangular, normal, Cauchy, sharp gamma and trapezoidal. To select a suitable fuzzy number for quantifying data uncertainty, system experts' opinions have been considered. The GFLT technique applies fault trees, the lambda-tau method, data fuzzified using different membership functions, and alpha-cut based fuzzy arithmetic operations to compute some important reliability indices. Furthermore, ranking of critical components of the system using the RAM-Index and sensitivity analysis have also been performed. The developed technique may help to improve system performance significantly and can be applied to analyse the fuzzy reliability of other engineering systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
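
    Purely as an illustration of alpha-cut arithmetic on fuzzy failure rates (one ingredient of the lambda-tau approach), the sketch below combines two components in series using triangular fuzzy numbers; it is not the GFLT technique itself, and all rates are hypothetical.

    ```python
    def alpha_cut_triangular(tfn, alpha):
        """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at a given alpha-cut."""
        a, m, b = tfn
        return (a + alpha * (m - a), b - alpha * (b - m))

    def series_failure_rate(tfns, alpha):
        """OR-gate (series system): the interval failure rate is the sum of component intervals."""
        lo = sum(alpha_cut_triangular(t, alpha)[0] for t in tfns)
        hi = sum(alpha_cut_triangular(t, alpha)[1] for t in tfns)
        return lo, hi

    # Hypothetical triangular fuzzy failure rates (per hour) of two PMU subsystems
    comp1 = (1.0e-5, 2.0e-5, 3.0e-5)
    comp2 = (0.5e-5, 1.0e-5, 2.0e-5)
    print(series_failure_rate([comp1, comp2], alpha=0.8))
    ```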

  9. Interrater and intrarater reliability of FDI criteria applied to photographs of posterior tooth-colored restorations.

    PubMed

    Kim, Dohyun; Ahn, So-Yeon; Kim, Junyoung; Park, Sung-Ho

    2017-07-01

    Since 2007, the FDI World Dental Federation (FDI) criteria have been used for the clinical evaluation of dental restorations. However, the reliability of the FDI criteria has not been sufficiently addressed. The purpose of this study was to assess and compare the interrater and intrarater reliability of the FDI criteria by evaluating posterior tooth-colored restorations photographically. A total of 160 clinical photographs of posterior tooth-colored restorations were evaluated independently by 5 raters with 9 of the FDI criteria suitable for photographic evaluation. The raters recorded the score of each restoration by using 5 grades, and the score was dichotomized into the clinical evaluation scores. After 1 month, 2 of the raters reevaluated the same set of 160 photographs in random order. To estimate the interrater reliability among the 5 raters, the proportion of agreement was calculated, and the Fleiss multirater kappa statistic was used. For the intrarater reliability, the proportion of agreement was calculated, and the Cohen standard kappa statistic was used for each of the 2 raters. The interrater proportion of agreement was 0.41 to 0.57, and the kappa value was 0.09 to 0.39. Overall, the intrarater reliability was higher than the interrater reliability, and rater 1 demonstrated higher intrarater reliability than rater 2. The proportion of agreement and kappa values increased when the 5 scores were dichotomized. The reliability was relatively lower for the esthetic properties compared with the functional or biological properties. Within the limitations of this study, the FDI criteria presented slight to fair interrater reliability and fair to excellent intrarater reliability in the photographic evaluation of posterior tooth-colored restorations. The reliability was improved by simplifying the evaluation scores. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  10. Technical Concept Document. Central Archive for Reusable Defense Software (CARDS)

    DTIC Science & Technology

    1994-02-28

    Informal technical report for the Software Technology for Adaptable, Reliable Systems (STARS) program: Technical Concept Document, Central Archive for Reusable Defense Software (CARDS), February 1994. This document was developed under the STARS program in accordance with the DFARS Special Works Clause.

  11. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
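
    To make the first stage concrete, the sketch below solves a small continuous-time Markov reliability model with an absorbing failed state; it is a toy example with hypothetical rates, far simpler than the model described above.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Hypothetical 3-state Markov reliability model: OK -> degraded -> failed,
    # plus a direct OK -> failed transition (rates in failures per hour).
    lam_ok_deg, lam_deg_fail, lam_ok_fail = 1e-4, 5e-4, 1e-5

    # Generator matrix Q, states ordered [OK, degraded, failed]; each row sums to zero.
    Q = np.array([
        [-(lam_ok_deg + lam_ok_fail), lam_ok_deg, lam_ok_fail],
        [0.0, -lam_deg_fail, lam_deg_fail],
        [0.0, 0.0, 0.0],                      # failed is absorbing
    ])

    p0 = np.array([1.0, 0.0, 0.0])            # start fully operational
    t = 1000.0                                # mission time in hours
    p_t = p0 @ expm(Q * t)                    # state probabilities at time t
    reliability = p_t[0] + p_t[1]             # probability of not being failed
    print(round(reliability, 4))
    ```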

  12. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  13. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
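
    As a minimal illustration of the combinatorial approach, the sketch below evaluates the reliability of a k-out-of-n parallel arrangement of identical converter units with exponential lifetimes; the rates, unit counts, and mission time are hypothetical, not values from the study.

    ```python
    from math import comb, exp

    def unit_reliability(failure_rate, mission_hours):
        """Exponential-lifetime reliability of a single converter unit."""
        return exp(-failure_rate * mission_hours)

    def k_out_of_n_reliability(k, n, r):
        """System survives if at least k of n identical, independent units survive."""
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    # Hypothetical: 3 parallel units, 2 needed, lambda = 2e-5 per hour, 7-year mission
    r = unit_reliability(2e-5, 7 * 8760)
    print(round(k_out_of_n_reliability(2, 3, r), 4))
    ```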

  14. Concurrent validity and reliability of using ground reaction force and center of pressure parameters in the determination of leg movement initiation during single leg lift.

    PubMed

    Aldabe, Daniela; de Castro, Marcelo Peduzzi; Milosavljevic, Stephan; Bussey, Melanie Dawn

    2016-09-01

    Evaluating postural adjustments during single leg lift requires identification of the initiation of heel lift (T1). Measuring T1 by means of a motion analysis system is the most reliable approach. However, this method requires considerable workspace, expensive cameras, and substantial time for processing data and setting up the laboratory. The use of ground reaction force (GRF) and centre of pressure (COP) data is an alternative method, as its data processing and set-up are less time consuming. Further, kinetic data are normally collected at sampling frequencies above 1000 Hz, whereas kinematic data are commonly captured at 50-200 Hz. This study describes the concurrent validity and reliability of GRF and COP measurements in determining T1, using a motion analysis system as the reference standard. Kinematic and kinetic data during single leg lift were collected from ten participants. GRF and COP data were collected using one and two force plates. Displacement of a single heel marker was captured by means of ten Vicon© cameras. Kinetic and kinematic data were collected at a sampling frequency of 1000 Hz. Data were analysed in two stages: identification of key events in the kinetic data, and assessment of the concurrent validity of T1 based on the chosen key events against T1 provided by the kinematic data. The key event presenting the least systematic bias, along with narrow 95% CI and limits of agreement against the reference standard T1, was the Baseline COPy event. The Baseline COPy event was obtained using one force plate and presented excellent between-tester reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
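
    The Baseline COPy event is defined in the paper; the sketch below only illustrates the generic idea of detecting movement initiation as the first sample at which a force-plate signal leaves a baseline band (mean ± k standard deviations). The signal, threshold multiplier, and sampling rate are hypothetical.

    ```python
    import numpy as np

    def detect_onset(signal, fs, baseline_s=0.5, k=3.0):
        """Return the time (s) when the signal first leaves its baseline band.

        signal     : 1-D force-plate or COP trace sampled at fs Hz
        baseline_s : initial quiet-standing window used to estimate the baseline
        k          : number of baseline standard deviations defining the band
        """
        x = np.asarray(signal, float)
        n_base = int(baseline_s * fs)
        mu, sd = x[:n_base].mean(), x[:n_base].std()
        outside = np.abs(x - mu) > k * sd
        return float(np.argmax(outside)) / fs if outside.any() else None

    # Hypothetical 1 kHz COP trace: flat baseline followed by a ramp
    fs = 1000
    trace = np.concatenate([np.random.normal(0, 0.2, fs), np.linspace(0, 10, fs)])
    print(detect_onset(trace, fs))
    ```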

  15. Systems engineering principles for the design of biomedical signal processing systems.

    PubMed

    Faust, Oliver; Acharya U, Rajendra; Sputh, Bernhard H C; Min, Lim Choo

    2011-06-01

    Systems engineering aims to produce reliable systems which function according to specification. In this paper we follow a systems engineering approach to design a biomedical signal processing system. We discuss requirements capturing, specification definition, implementation and testing of a classification system. These steps are executed as formally as possible. The requirements, which motivate the system design, are based on diabetes research. The main requirement for the classification system is to be a reliable component of a machine which controls diabetes. Reliability is very important, because uncontrolled diabetes may lead to hyperglycaemia (raised blood sugar) and over a period of time may cause serious damage to many of the body systems, especially the nerves and blood vessels. In a second step, these requirements are refined into a formal CSP‖B model. The formal model expresses the system functionality in a clear and semantically strong way. Subsequently, the proven system model was translated into an implementation. This implementation was tested with use cases and failure cases. Formal modeling and automated model checking gave us deep insight into the system functionality. This insight enabled us to create a reliable and trustworthy implementation. With extensive tests we established trust in the reliability of the implementation. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  16. 76 FR 73608 - Reliability Technical Conference, North American Electric Reliability Corporation, Public Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... or municipal authority play in forming your bulk power system reliability plans? b. Do you support..., North American Electric Reliability Corporation (NERC) Nick Akins, CEO of American Electric Power (AEP..., EL11-62-000] Reliability Technical Conference, North American Electric Reliability Corporation, Public...

  17. The Role of Margin in Link Design and Optimization

    NASA Technical Reports Server (NTRS)

    Cheung, K.

    2015-01-01

    Link analysis is a systems engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); they are integrated into the link budget calculation, which provides estimates of signal power and noise power at the receiver. Link margin is then applied to counteract the fluctuations of signal and noise power and ensure reliable data delivery from transmitter to receiver. (Link margin is dictated by the link margin policy or requirements.) A simple link budgeting approach assumes link parameters to be deterministic values and typically adopts a rule-of-thumb policy of 3 dB link margin. This policy works for most S- and X-band links due to their insensitivity to weather effects, but for higher frequency links such as Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB link margin would guarantee link closure. Statistical link analysis that adopts a 2-sigma or 3-sigma link margin incorporates link uncertainties into the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) Link reliability can therefore be quantified statistically even for higher frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required SNR that would meet the BER or FER requirement in the statistical sense. In this paper, we perform an in-depth analysis of the relationship between the BER/FER requirement, the operating SNR, and the coding performance curve in the case when the channel coherence time of link fluctuation is comparable to or larger than the duration of a codeword. We compute the "true" SNR design point that would meet the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver and the shape of the coding performance curve. This analysis yields a number of valuable insights into the design choices of coding scheme and link margin for reliable data delivery of a communication system, space and ground. We illustrate the aforementioned analysis using a number of standard NASA error-correcting codes.
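
    To make the statistical-margin idea concrete, the sketch below treats the received SNR in dB as Gaussian with a mean from a simplified link budget and a user-supplied sigma, and reports the probability of staying above the decoder threshold. The Gaussian assumption and all numbers are illustrative, not the DSN's actual models.

    ```python
    from math import erf, sqrt

    def prob_link_closure(mean_snr_db, sigma_db, threshold_snr_db):
        """Probability that the received SNR exceeds the decoder threshold,
        assuming the SNR (in dB) is Gaussian with the given mean and sigma."""
        z = (mean_snr_db - threshold_snr_db) / sigma_db
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    # Illustrative Ka-band case: a 2-sigma margin puts the mean SNR two sigmas
    # above threshold, i.e. roughly 97.7% probability of link closure.
    print(round(prob_link_closure(mean_snr_db=5.0, sigma_db=1.5, threshold_snr_db=2.0), 4))
    ```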

  18. Improving Metrological Reliability of Information-Measuring Systems Using Mathematical Modeling of Their Metrological Characteristics

    NASA Astrophysics Data System (ADS)

    Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.

    2018-05-01

    The algorithms for improving the metrological reliability of analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure the optimum values of their metrological reliability indices for a given analogue circuit block solution.

  19. 78 FR 77574 - Protection System Maintenance Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-24

    ... protection system component type, except that the maintenance program for all batteries associated with the... Electric System reliability and promoting efficiency through consolidation [of protection system-related... ITC that PRC-005-2 promotes efficiency by consolidating protection system maintenance requirements...

  20. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
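
    The framework itself is not described in implementation detail here; as a toy stand-in for the kind of what-if analysis it supports, the sketch below Monte Carlo-simulates independent drive failures in a single redundancy group and estimates the probability of data loss within an observation window. The drive counts, rates, no-rebuild assumption, and independence are all hypothetical simplifications.

    ```python
    import random

    def p_data_loss(n_drives, tolerated_failures, mttf_hours, window_hours, trials=100_000):
        """Monte Carlo estimate of the chance that more than `tolerated_failures`
        drives in one redundancy group fail within the observation window
        (exponential lifetimes, no rebuilds -- a deliberately pessimistic toy model)."""
        rate = 1.0 / mttf_hours
        losses = 0
        for _ in range(trials):
            failures = sum(1 for _ in range(n_drives)
                           if random.expovariate(rate) < window_hours)
            if failures > tolerated_failures:
                losses += 1
        return losses / trials

    # Hypothetical 10-drive group tolerating 2 failures, 1.2M-hour MTTF, 5-year window
    print(p_data_loss(n_drives=10, tolerated_failures=2,
                      mttf_hours=1_200_000, window_hours=5 * 8760))
    ```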
