Science.gov

Sample records for achieve reliable operation

  1. Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2010-01-01

    This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

  2. Achieving High Reliability Operations Through Multi-Program Integration

    SciTech Connect

    Holly M. Ashley; Ronald K. Farris; Robert E. Richards

    2009-04-01

    Over the last 20 years the Idaho National Laboratory (INL) has adopted a number of operations and safety-related programs, each of which has periodically taken its turn in the limelight. As new programs have come along, there has been natural competition for resources, focus, and commitment. In the last few years, the INL has made real progress in integrating all these programs and is starting to realize important synergies. Contributing to this integration are both collaborative individuals and an emerging shared vision and goal of the INL fully maturing in its high reliability operations. This goal is so powerful because the concept of high reliability operations (and the resulting organizations) is a masterful amalgam and orchestrator of the best of all the participating programs (i.e., conduct of operations, behavior-based safety, human performance, voluntary protection, quality assurance, and integrated safety management). This paper is a brief recounting of the lessons learned, thus far, at the INL in bringing previously competing programs into harmony under the goal (umbrella) of seeking to perform regularly as a high reliability organization. In addition to a brief diagram-illustrated historical review, the authors share the INL's primary successes (things already effectively stopped or started) and the gaps yet to be bridged.

  3. Achieving reliable operation of a PSG-5000 delivery-water heater's tube system

    NASA Astrophysics Data System (ADS)

    Vasilenko, G. V.; Meshcheryakov, I. M.

    2010-01-01

    We analyze the factors that caused damage during the first period of operation of the 12Kh18N1 austenitic-steel tube system of the delivery-water heater used as part of a T-180/210-130 turbine unit operating in combination with a high-pressure drum boiler. Technical solutions undertaken to achieve reliable operation of the heater are considered.

  4. Operational safety reliability research

    SciTech Connect

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that, during a plant's operating lifetime, the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program, applicable to maintaining an acceptable level of safety during the operating lifetime of the plant.

  5. Reliability achievement in high technology space systems

    NASA Technical Reports Server (NTRS)

    Lindstrom, D. L.

    1981-01-01

    The production of failure-free hardware is discussed. The elements required to achieve such hardware are: technical expertise to design, analyze, and fully understand the design; use of high reliability parts and materials control in the manufacturing process; and testing to understand the system and weed out defects. The durability of the Hughes family of satellites is highlighted.

  6. Achieving TASAR Operational Readiness

    NASA Technical Reports Server (NTRS)

    Wing, David J.

    2015-01-01

    NASA has been developing and testing the Traffic Aware Strategic Aircrew Requests (TASAR) concept for aircraft operations featuring a NASA-developed cockpit automation tool, the Traffic Aware Planner (TAP), which computes traffic/hazard-compatible route changes to improve flight efficiency. The TAP technology is anticipated to save fuel and flight time and thereby provide immediate and pervasive benefits to the aircraft operator, as well as improving flight schedule compliance, passenger comfort, and pilot and controller workload. Previous work has indicated the potential for significant benefits for TASAR-equipped aircraft, and a flight trial of the TAP software application in the National Airspace System has demonstrated its technical viability. This paper reviews previous and ongoing activities to prepare TASAR for operational use.

  7. Reliability measurement for operational avionics software

    NASA Technical Reports Server (NTRS)

    Thacker, J.; Ovadia, F.

    1979-01-01

    Quantitative measures of reliability for operational software in embedded avionics computer systems are presented. Analysis is carried out on data collected during flight testing and from both static and dynamic simulation testing. Failure rate is found to be a useful statistic for estimating software quality and recognizing reliability trends during the operational phase of software development.
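
    The failure-rate statistic described above reduces to a simple point estimate: failures observed divided by operating time. A minimal sketch of that arithmetic (the function names and the 4-failures-in-200-hours figures are illustrative, not from the paper):

```python
def failure_rate(failures: int, hours: float) -> float:
    """Point estimate of the software failure rate: failures per operating hour."""
    if hours <= 0:
        raise ValueError("operating time must be positive")
    return failures / hours

def mtbf(failures: int, hours: float) -> float:
    """Mean time between failures: the reciprocal of the failure rate."""
    if failures == 0:
        raise ValueError("no failures observed; MTBF is unbounded")
    return hours / failures

# Illustrative numbers: 4 failures over 200 flight-test hours
rate = failure_rate(4, 200.0)   # 0.02 failures per hour
```

    Tracking this estimate over successive test phases is what makes the reliability trend visible during operational development.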

  8. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
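
    The redundancy trade-off described above can be sketched numerically. Below, `parallel_reliability` assumes fully independent failures, while `with_common_cause` uses a simple beta-factor model in which a fraction of each unit's failure probability defeats all copies at once; both functions are illustrative assumptions, not the paper's actual reliability or cost model:

```python
def parallel_reliability(r: float, n: int) -> float:
    """Reliability of n identical redundant units (any one suffices),
    assuming fully independent failures."""
    return 1.0 - (1.0 - r) ** n

def with_common_cause(r: float, n: int, beta: float) -> float:
    """Beta-factor sketch: a fraction `beta` of each unit's failure
    probability is common-cause and defeats all n copies at once."""
    q = 1.0 - r                          # single-unit failure probability
    independent = ((1.0 - beta) * q) ** n
    common = beta * q
    return 1.0 - (independent + common)
```

    With r = 0.9 per system, triple redundancy gives 0.999 under independence, but a 10% common-cause fraction drops this to roughly 0.989, illustrating why the abstract warns that adding spares "may be defeated by common cause failures."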

  9. Operational reliability of standby safety systems

    SciTech Connect

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.
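
    The core quantity in such a study, the failure probability on demand, can be estimated from pooled operating experience. A hedged sketch (the Jeffreys-prior estimator shown is a common choice in NRC-sponsored system studies, but the INEL analysis itself is more elaborate, and these function names are illustrative):

```python
def mle_unreliability(failures: int, demands: int) -> float:
    """Maximum-likelihood estimate of the failure-on-demand probability."""
    return failures / demands

def jeffreys_unreliability(failures: int, demands: int) -> float:
    """Bayes posterior mean under the Jeffreys prior Beta(0.5, 0.5),
    which stays nonzero even when no failures have been observed."""
    return (failures + 0.5) / (demands + 1.0)
```

    The Bayesian form matters for standby safety systems precisely because demands are rare: a zero-failure record should yield a small but nonzero unreliability estimate, not zero.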

  10. Reliable multicast protocol specifications protocol operations

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd; Whetten, Brian

    1995-01-01

    This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.

  11. The impact of reliability on naval aviation operations

    NASA Astrophysics Data System (ADS)

    Lashbrooke, D. P.

    The Gulf War illustrated the effectiveness of Naval helicopters, as well as the impact of reliability on rapidly fitted new equipment. Poor reliability can lead to reduced effectiveness, inadequate spares, high cost, and increased risk to human life. Only limited improvements can be achieved in service, so the Navy's Aircraft Support Executive has developed ways of targeting the equipment having the most adverse effect. The Merlin will need much higher standards of reliability before it enters service because of its complexity and the cramped confines from which it will operate.

  12. Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program

    NASA Technical Reports Server (NTRS)

    Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward

    1993-01-01

    A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.

  13. Impact of staffing parameters on operational reliability

    SciTech Connect

    Hahn, H.A.; Houghton, F.K.

    1993-01-01

    This paper reports on a project related to human resource management of the Department of Energy's (DOE's) High-Level Waste (HLW) Tank program. The safety and reliability of waste tank operations is impacted by several issues, including not only the design of the tanks themselves but also how operations and operational personnel are managed. As demonstrated by management assessments performed by the Tiger Teams, DOE believes that the effective use of human resources impacts environment, safety, and health concerns. For the purposes of the current paper, human resource management activities are identified as "Staffing" and include the process of developing the functional responsibilities and qualifications of technical and administrative personnel. This paper discusses the importance of staffing plans and management in the overall view of safety and reliability. The work activities and procedures associated with the project are described, along with a review of the results of these activities, including a summary of the literature and a preliminary analysis of the data. We conclude that although the identification of staffing issues and the development of staffing plans contribute to the overall reliability and safety of the HLW tanks, the relationship is not well understood and is in need of further development.

  15. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... restoration from Blackstart Resources and require reliability coordinators to establish plans and prepare personnel to enable effective coordination of the system restoration process. The Commission also...

  16. A strategy for achieving high reliability for reusable launch vehicles (RLVs)

    NASA Astrophysics Data System (ADS)

    Sholtis, Joseph A.

    2002-01-01

    Expendable launch vehicles (ELVs) have been used since the early 1960s to put numerous payloads, including humans, into space. Yet, in spite of their widespread use since that time, ELV reliability has not improved much. Why has this been the case? And, more importantly, what might be done to substantially improve the reliability of future reusable launch vehicles (RLVs) to levels needed for commercial viability, i.e., approaching that of the U.S. commercial airline industry? This paper attempts to answer these questions by reviewing the history of launch vehicles, identifying factors important to their reliability and safety, and, in doing so, offering a potential strategy for achieving high RLV reliability. The conclusion reached is that there is every reason to believe that high reliability (~0.99999 per mission) is achievable for future RLVs, if key features to enhance their inherent robustness, forgiveness, and recoverability are considered and integrated into RLV design and operation at the outset. It is hoped that this paper will serve as a catalyst for further discussions intended to ensure that high reliability is realized for RLVs.
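
    The ~0.99999 figure can be given an operational reading: if missions are independent with per-mission reliability r, the number of missions to the first failure is geometric with mean 1/(1 - r). A small sketch of that arithmetic (the function name is illustrative):

```python
def mean_missions_between_failures(r: float) -> float:
    """With per-mission reliability r and independent missions, the number
    of missions to the first failure is geometric with mean 1 / (1 - r)."""
    if not 0.0 <= r < 1.0:
        raise ValueError("r must be in [0, 1)")
    return 1.0 / (1.0 - r)

# r = 0.99999, the target cited above: about one failure per 100,000 missions
target = mean_missions_between_failures(0.99999)
```

    By comparison, a per-mission reliability of 0.99 would mean a failure roughly every 100 missions, which frames how large a gap the proposed strategy must close.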

  17. The reliable operation of CRYEBIS 1

    NASA Astrophysics Data System (ADS)

    Faure, J.

    1989-06-01

    CRYEBIS has been built at ORSAY UNIVERSITY by the ARIANER group as a heavy-ion source for SATURNE. It was also foreseen to use it to ionise polarised hydrogen and deuterium atoms. In spite of very encouraging results at the very beginning of the experiments in ORSAY, CRYEBIS 1 delivered very modest heavy-ion beams without any reliability. It was decided, in 1980, to set it up at SACLAY, near SATURNE, to try to improve its performance and to bring it to the reliability required in the vicinity of an accelerator running 24 hours a day.

  18. 76 FR 16240 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-23

    .... Reliability Coordinator's Procedures for 37 Selecting the SOLs for Evaluation by the Interchange Distribution..., Order No. 693, 72 FR 16416 (Apr. 4, 2007), FERC Stats. & Regs. ] 31,242, order on reh'g, Order No. 693-A... reliability coordinator to use analyses and assessments as methods of achieving the stated goal....

  19. Achieving Operability via the Mission System Paradigm

    NASA Technical Reports Server (NTRS)

    Hammer, Fred J.; Kahr, Joseph R.

    2006-01-01

    In the past, flight and ground systems have been developed largely independently, with the flight system taking the lead and dominating the development process. Operability issues have been addressed poorly in planning, requirements, design, I&T, and system-contracting activities. In many cases, as documented in lessons learned, this has resulted in significant avoidable increases in cost and risk. With complex missions and systems, operability is being recognized as an important end-to-end design issue. Nevertheless, lessons learned and operability concepts remain, in many cases, poorly understood and sporadically applied. A key to effective application of operability concepts is adopting a 'mission system' paradigm. In this paradigm, flight and ground systems are treated, from an engineering and management perspective, as inter-related elements of a larger mission system. The mission system consists of flight hardware, flight software, telecom services, the ground data system, testbeds, flight teams, science teams, and flight operations processes, procedures, and facilities. The system is designed in functional layers, which span flight and ground. It is designed in response to project-level requirements, mission design, and an operations concept, and is developed incrementally, with early and frequent integration of flight and ground components.

  20. Challenges in Achieving Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Cate, Karen Tung

    2012-01-01

    In the past few years much of the global ATM research community has proposed advanced systems based on Trajectory-Based Operations (TBO). The concept of TBO uses four-dimensional aircraft trajectories as the base information for managing safety and capacity. Both the US and European advanced ATM programs call for the sharing of trajectory data across different decision support tools for successful operations. However, the actual integration of TBO systems presents many challenges. Trajectory predictors are built to meet the specific needs of a particular system and are not always compatible with others. Two case studies are presented that examine the challenges of introducing a new concept into two legacy systems with regard to their trajectory prediction software. The first case describes the issues in integrating a new decision support tool with a legacy operational system that overlaps with it in domain space. These tools perform similar functions but are driven by different requirements. The difference in the resulting trajectories can lead to conflicting advisories. The second case looks at integrating this same new tool with a legacy system originally developed as an integrated system, but one that diverged many years ago. Both cases illustrate how the lack of common architecture concepts for the trajectory predictors added cost and complexity to the integration efforts.

  1. Technical information report: Plasma melter operation, reliability, and maintenance analysis

    SciTech Connect

    Hendrickson, D.W.

    1995-03-14

    This document provides a technical report on the operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A description is provided of a process designed to minimize maintenance and downtime; it includes material and energy balances, equipment sizes and arrangement, startup/operation/maintenance/shutdown cycle descriptions, and the basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided, including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided, including a list of failure modes, responses, and consequences.

  2. Achieving Operational Hydrologic Monitoring of Mosquitoborne Disease

    PubMed Central

    Day, Jonathan F.

    2005-01-01

    Mosquitoes and mosquitoborne disease transmission are sensitive to hydrologic variability. If local hydrologic conditions can be monitored or modeled at the scales at which these conditions affect the population dynamics of vector mosquitoes and the diseases they transmit, a means for monitoring or modeling mosquito populations and mosquitoborne disease transmission may be realized. We review how hydrologic conditions have been associated with mosquito abundances and mosquitoborne disease transmission and discuss the advantages of different measures of hydrologic variability. We propose that the useful application of any measure of hydrologic conditions requires additional consideration of the scales for both the hydrologic measurement and the vector control interventions that will be used to mitigate an outbreak of vectorborne disease. Our efforts to establish operational monitoring of St. Louis encephalitis virus and West Nile virus transmission in Florida are also reviewed. PMID:16229760

  3. Development of Achievement Test: Validity and Reliability Study for Achievement Test on Matter Changing

    ERIC Educational Resources Information Center

    Kara, Filiz; Celikler, Dilek

    2015-01-01

    For "Matter Changing" unit included in the Secondary School 5th Grade Science Program, it is intended to develop a test conforming the gains described in the program, and that can determine students' achievements. For this purpose, a multiple-choice test of 48 questions is arranged, consisting of 8 questions for each gain included in the…

  4. Factors that Affect Operational Reliability of Turbojet Engines

    NASA Technical Reports Server (NTRS)

    1956-01-01

    The problem of improving operational reliability of turbojet engines is studied in a series of papers. Failure statistics for this engine are presented, the theory and experimental evidence on how engine failures occur are described, and the methods available for avoiding failure in operation are discussed. The individual papers of the series are Objectives, Failure Statistics, Foreign-Object Damage, Compressor Blades, Combustor Assembly, Nozzle Diaphragms, Turbine Buckets, Turbine Disks, Rolling Contact Bearings, Engine Fuel Controls, and Summary Discussion.

  5. SLAC modulator operation and reliability in the SLC Era

    SciTech Connect

    Donaldson, A.R.; Ashton, J.R.

    1992-06-01

    The operation and reliability of the 244 modulators in the SLAC linac are discussed, with an emphasis on the past three years of operation. The linac modulators were designed and built in the 1960s, upgraded for the SLAC Linear Collider (SLC) in the mid 1980s, and despite their age are still reliable accelerator components. The 1960s modulator operated at 65 MW peak and 83 kW average power. The upgrade resulted in 150 MW peak output at an average power of 87 kW, a modest increase since the repetition rate was dropped from 360 to 120 Hz. In the present accelerator configuration, the linac operates as a source of electrons and positrons for a single-pass collider. The classic collider is a storage ring filled with oppositely charged, counter-rotating particles which are allowed to collide until an accelerator fault occurs and the stored beams are aborted. A reasonable storage ring can store and collide particles for as long as eight hours with a 10- or 20-minute filling time. A single-pass collider, on the other hand, can only produce e- and e+ collisions at whatever rate the source operates. To be effective the SLC must operate at 120 Hz with a very high degree of reliability and on a continuous basis. Fortunately, the linac has a modest excess of modulator/klystron systems, which allows some measure of redundancy and hence some freedom from the constraint that all 244 modulator/klystrons operate simultaneously. Nonetheless, high importance is placed on modulator MTBF and MTTR or, in the parlance of reliability experts and accelerator physicists, availability. This is especially true of the modulators associated with the fundamental requirements of a collider such as injection, compression, and positron production.
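
    The availability language used here has a standard quantitative form, and the abstract's point about excess modulator/klystron stations can be sketched with a simple binomial model (assuming independent stations, which real common-cause faults would violate; the numbers and function names are illustrative):

```python
from math import comb

def availability(mtbf: float, mttr: float) -> float:
    """Steady-state availability: the fraction of scheduled time a
    station is up, A = MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

def linac_availability(a: float, stations: int, spares: int) -> float:
    """Probability that at least (stations - spares) of `stations`
    independent stations are up, each with availability `a`."""
    return sum(comb(stations, down) * (1.0 - a) ** down * a ** (stations - down)
               for down in range(spares + 1))
```

    For example, requiring all 244 stations up at a per-station availability of 0.99 yields only about 0.99^244 ≈ 0.09, while tolerating four down stations raises this to roughly 0.9, which is why even a modest excess of modulator/klystron systems matters so much.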

  7. Spaceflight tracking and data network operational reliability assessment for Skylab

    NASA Technical Reports Server (NTRS)

    Seneca, V. I.; Mlynarczyk, R. H.

    1974-01-01

    Data on the spaceflight communications equipment status during the Skylab mission were subjected to an operational reliability assessment. Reliability models were revised to reflect pertinent equipment changes accomplished prior to the beginning of the Skylab missions. Appropriate adjustments were made to fit the data to the models. The availabilities are based on the failure events resulting in a station's inability to support a function or functions, and the MTBFs are based on all events, including 'can support' and 'cannot support'. Data were received from eleven land-based stations and one ship.

  8. ADVANCED COMPRESSOR ENGINE CONTROLS TO ENHANCE OPERATION, RELIABILITY AND INTEGRITY

    SciTech Connect

    Gary D. Bourn; Jess W. Gingrich; Jack A. Smith

    2004-03-01

    This document is the final report for the 'Advanced Compressor Engine Controls to Enhance Operation, Reliability, and Integrity' project. SwRI conducted this project for DOE in conjunction with Cooper Compression, under DOE contract number DE-FC26-03NT41859. This report addresses an investigation of engine controls for integral compressor engines and the development of control strategies that implement closed-loop NOx emissions feedback.

  9. ANALYSIS OF AVAILABILITY AND RELIABILITY IN RHIC OPERATIONS.

    SciTech Connect

    PILAT, F.; INGRASSIA, P.; MICHNOFF, R.

    2006-06-26

    RHIC has been successfully operated for 5 years as a collider for different species, ranging from heavy ions, including gold and copper, to polarized protons. We present a critical analysis of reliability data for RHIC that not only identifies the principal factors limiting availability but also evaluates critical choices made at design time and assesses their impact on present machine performance. RHIC availability data are typical when compared to similar high-energy colliders. The critical analysis of operations data is the basis for studies and plans to improve RHIC machine availability beyond the 50-60% typical of high-energy colliders.

  10. USING SEQUENCING TO IMPROVE OPERATIONAL EFFICIENCY AND RELIABILITY

    SciTech Connect

    D OTTAVIO,T.; NIEDZIELA, J.

    2007-10-15

    Operation of an accelerator requires the efficient and reproducible execution of many different types of procedures. Some procedures, like beam acceleration, magnet quench recovery, and species switching can be quite complex. To improve accelerator reliability and efficiency, automated execution of procedures is required. Creation of a single robust sequencing application permits the streamlining of this process and offers many benefits in sequence creation, editing, and control. In this paper, we present key features of a sequencer application commissioned at the Collider-Accelerator Department of Brookhaven National Laboratory during the 2007 run. Included is a categorization of the different types of sequences in use, a discussion of the features considered desirable in a good sequencer, and a description of the tools created to aid in sequence construction and diagnosis. Finally, highlights from our operational experience are presented, with emphasis on Operations control of the sequencer, and the alignment of sequence construction with existing operational paradigms.

  11. Reliability of Operation at SLAC in the LCLS Era

    SciTech Connect

    Wienands, U.; Allen, W.B.; Colocho, W.; Erickson, R.; Stanek, M.; /SLAC

    2009-06-19

    LCLS hardware availability has been above 90% for the first two commissioning runs of the accelerator. In this paper we compare the reliability data for LCLS (availability, MTBF and MTTR) to those of PEP-II, the e+e- collider operating previously at SLAC. It may be seen that the linac availability is not significantly different now than it was before, while the availability of the whole LCLS facility is significantly higher than that of the PEP-II facility as a whole (which was about 87%). Most of the improvement is in the MTTR. Ways to improve availability towards the goal of 95% are discussed.

  12. 75 FR 71613 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... developing and enforcing mandatory Reliability Standards. The proposed Reliability Standards were designed to... mandatory Reliability Standards.\\2\\ The proposed Reliability Standards were designed to prevent instability..., Order No. 693, 72 FR 16416 (Apr. 4, 2007), FERC Stats. & Regs. ] 31,242, order on reh'g, Order No....

  13. Achieving a high-reliability organization through implementation of the ARCC model for systemwide sustainability of evidence-based practice.

    PubMed

    Melnyk, Bernadette Mazurek

    2012-01-01

    High-reliability health care organizations are those that provide care that is safe and that minimizes errors while achieving exceptional performance in quality and safety. This article presents major concepts and characteristics of a patient safety culture and a high-reliability health care organization and explains how building a culture of evidence-based practice can assist organizations in achieving high reliability. The ARCC (Advancing Research and Clinical practice through close Collaboration) model for systemwide implementation and sustainability of evidence-based practice is highlighted as a key strategy in achieving high reliability in health care organizations.

  14. Improved models for increasing wind penetration, economics and operating reliability

    NASA Astrophysics Data System (ADS)

    Schlueter, R. A.; Park, G. L.; Sigari, G.; Costi, T.

    1984-04-01

    The need for wind power prediction in order to enable larger wind power penetrations and improve the economics and reliability of power system operation is discussed. Methods for estimating turbulence and predicting diurnal wind power variation are reviewed. A method is presented to predict meteorological-event-induced wind power variation from measurements of wind speed at reference meteorological towers that encircle all wind turbine clusters and from sites within the wind turbine clusters. The methodology uses a recursive least squares model and requires: (1) detection of event propagation direction; and (2) determination of delays between groups of measurements at reference meteorological towers and those measurements at towers in the array. Proper filtering of the data and methods for switching reference sites and delays for the transition from one frontal system to another are also discussed. The performance of the prediction methodology on data sets from both sites was quite good and indicates that prediction of wind power one or more hours ahead for meteorological events is feasible.
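
    The recursive least squares model mentioned above has a compact standard form. A sketch with NumPy (the class name, forgetting factor, and feature layout are illustrative; the paper's actual model also handles reference-tower switching and propagation delays, which are omitted here):

```python
import numpy as np

class RecursiveLeastSquares:
    """Minimal recursive-least-squares estimator for y ~ w.x with a
    forgetting factor; a sketch, not the authors' implementation."""

    def __init__(self, n_features: int, lam: float = 0.99):
        self.w = np.zeros(n_features)       # weight estimate
        self.P = np.eye(n_features) * 1e3   # inverse-correlation matrix
        self.lam = lam                      # forgetting factor, 0 < lam <= 1

    def update(self, x: np.ndarray, y: float) -> float:
        """Fold in one (x, y) sample; returns the pre-update prediction."""
        y_hat = float(self.w @ x)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        self.w = self.w + k * (y - y_hat)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return y_hat

    def predict(self, x: np.ndarray) -> float:
        return float(self.w @ x)
```

    The forgetting factor is what lets the estimator track a transition from one frontal system to another: older samples are geometrically down-weighted, so the fitted weights follow the current regime.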

  15. 76 FR 58101 - Electric Reliability Organization Interpretation of Transmission Operations Reliability Standard

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-20

    ... Reliability Standard, Notice of Proposed Rulemaking, 76 FR 23222 (Apr. 26, 2011), FERC Stats. & Regs. ¶ 32,674... No. 486, 52 FR 47897 (Dec. 17, 1987), FERC Stats. & Regs. Preambles 1986-1990 ¶ 30,783 (1987). \\24... Federal Energy Regulatory Commission 18 CFR Part 40 Electric Reliability Organization Interpretation...

  16. 76 FR 23222 - Electric Reliability Organization Interpretation of Transmission Operations Reliability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ..., 52 FR 47897 (Dec. 17, 1987), FERC Stats. & Regs. Preambles 1986-1990 ¶ 30,783 (1987). \\23\\ 18 CFR 380... Energy Regulatory Commission 18 CFR Part 40 Electric Reliability Organization Interpretation of.... Background 2. Section 215 of the FPA requires a Commission-certified Electric Reliability Organization...

  17. PLATEAUING COSMIC RAY DETECTORS TO ACHIEVE OPTIMUM OPERATING VOLTAGE

    SciTech Connect

    Knoff, E.N.; Peterson, R.S.

    2008-01-01

    Through QuarkNet, students across the country have access to cosmic ray detectors in their high school classrooms. These detectors operate using a scintillator material and a photomultiplier tube (PMT). A data acquisition (DAQ) board counts cosmic ray hits from the counters. Through an online e-Lab, students can analyze and share their data. In order to collect viable data, the PMTs should operate at their plateau voltages. In these plateau ranges, the number of counts per minute remains relatively constant with small changes in PMT voltage. We sought to plateau the counters in the test array and to clarify the plateauing procedure itself. In order to most effectively plateau the counters, the counters should be stacked and programmed to record the number of coincident hits as well as their singles rates. We also changed the threshold value that a signal must exceed in order to record a hit and replateaued the counters. For counter 1, counter 2, and counter 3, we found plateau voltages around 1V. The singles rate plateau was very small, while the coincidence plateau was very long. The plateau voltages corresponded to a singles rate of 700–850 counts per minute. We found very little effect of changing the threshold voltages. Our chosen plateau voltages produced good performance studies on the e-Lab. Keeping in mind the nature of the experiments conducted by the high school students, we recommend a streamlined plateauing process. Because changing the threshold did not drastically affect the plateau voltage or the performance study, students should choose a threshold value, construct plateau graphs, and analyze their data using a performance study. Even if the counters operate slightly off their plateau voltage, they should deliver good performance studies and return reliable results.
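    The plateauing procedure above amounts to scanning the PMT bias and choosing a voltage in the flattest part of the counts-per-minute curve. A minimal sketch of that selection step follows; the function name and the midpoint choice are illustrative assumptions, not part of the QuarkNet procedure.

```python
def find_plateau(voltages, counts):
    """Return an operating voltage in the flattest part of a plateau scan.

    voltages -- PMT bias settings from the scan (monotonically increasing)
    counts   -- counts per minute recorded at each setting
    """
    # Slope of the rate curve between each pair of adjacent settings.
    slopes = [abs((counts[i + 1] - counts[i]) / (voltages[i + 1] - voltages[i]))
              for i in range(len(counts) - 1)]
    # The plateau is the segment where the rate changes least with voltage.
    i = min(range(len(slopes)), key=lambda j: slopes[j])
    # Operate in the middle of that flattest segment.
    return (voltages[i] + voltages[i + 1]) / 2
```

    As the abstract notes, operating slightly off the chosen voltage still yields good performance studies, so a coarse scan like this is adequate for classroom use.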

  18. 76 FR 23171 - Electric Reliability Organization Interpretations of Interconnection Reliability Operations and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ... ``Current Status of Bulk Electric Systems elements (transmission or generation including critical... Energy Regulatory Commission 18 CFR Part 40 Electric Reliability Organization Interpretations of... Federal Power Act, the Federal Energy Regulatory Commission hereby approves the North American...

  19. Recent scientific and operational achievements of D/V Chikyu

    NASA Astrophysics Data System (ADS)

    Taira, Asahiko; Toczko, Sean; Eguchi, Nobu; Kuramoto, Shin'ichi; Kubo, Yusuke; Azuma, Wataru

    2014-12-01

    The D/V Chikyu, a scientific drilling vessel, is equipped with industry-standard riser capabilities. Riser drilling technology enables remarkable drilling and downhole logging capabilities and provides unprecedented hole stability, enabling the shipboard team to retrieve high-quality wireline logging data as well as well-preserved core samples. The 11 March 2011 Tohoku-Oki mega-earthquake and tsunami caused more than 18,000 casualties in NE Japan. Chikyu, docked in the Port of Hachinohe, was damaged by the tsunami. By April 2012, the ship was back in operation, drilling the toe of the Japan Trench fault zone where topographic surveys suggested there had been up to 50 m of eastward motion, the largest earthquake rupture ever recorded. During Integrated Ocean Drilling Program (IODP) Expeditions 343 and 343T, Chikyu drilled 850 m below sea floor (mbsf) in more than 6,900 m of water depth and recovered core samples of a highly brecciated shear zone composed of pelagic claystone. A subseafloor observatory, designed to detect temperature signatures caused by fault friction during the earthquake, was installed and later successfully recovered. The recovered temperature loggers recorded data from which the level of friction during the mega-earthquake slip could be determined. Following Exp. 343, Chikyu began IODP Exp. 337, a riser drilling expedition into the Shimokita coal beds off Hachinohe, to study the deep subsurface biosphere in sedimentary units including Paleogene-Neogene coal beds. New records in scientific ocean drilling were achieved for deepest penetration (drilling reached 2,466 mbsf) and sample recovery. Currently Chikyu is conducting deep riser drilling at the Nankai Trough in the final stage of the NanTroSEIZE campaign. During the years 2011 to 2013, including drilling in the Okinawa hydrothermal system, Chikyu's operational and scientific achievements have demonstrated that the ship's capabilities are vital for opening new frontiers in the earth and biological sciences.

  20. DG Planning with Amalgamation of Operational and Reliability Considerations

    NASA Astrophysics Data System (ADS)

    Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan

    2016-04-01

    Distributed generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach that provides the policy maker with a set of solutions for DG placement to optimize the reliability and real power loss of the system. The optimal location of a distributed generator is evaluated using performance indices derived from a reliability index and the real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system, with conventional and wind distributed generators considered individually.

  1. How to achieve an economic and reliable CP design using DnV RP B401

    SciTech Connect

    Thomason, W.H.; Rippon, I.; Foong, J.

    1995-11-01

    The 1993 revision of Det Norske Veritas Industri Norge AS's Recommended Practice RP B401, Cathodic Protection Design, offers the operator the opportunity to use his own experience and data to justify more or less conservative designs. Examples of the use of this option to achieve an economic Southern North Sea CP design are presented. These examples include cost comparisons of actual bids received for CP/coating systems using different coating types. Some comparisons with NACE's RP-0176-94, Corrosion Control of Steel Fixed Offshore Platforms Associated with Petroleum Production, are also made.

  2. The Stories Clinicians Tell: Achieving High Reliability and Improving Patient Safety

    PubMed Central

    Cohen, Daniel L; Stewart, Kevin O

    2016-01-01

    The patient safety movement has been deeply affected by the stories patients have shared that have identified numerous opportunities for improvements in safety. These stories have identified system and/or human inefficiencies or dysfunctions, possibly even failures, often resulting in patient harm. Although patients’ stories tell us much, less commonly heard are the stories of clinicians and how their personal observations regarding the environments they work in and the circumstances and pressures under which they work may degrade patient safety and lead to harm. If the health care industry is to function like a high-reliability industry, to improve its processes and achieve the outcomes that patients rightly deserve, then leaders and managers must seek and value input from those on the front lines—both clinicians and patients. Stories from clinicians provided in this article address themes that include incident identification, disclosure and transparency, just culture, the impact of clinical workload pressures, human factors liabilities, clinicians as secondary victims, the impact of disruptive and punitive behaviors, factors affecting professional morale, and personal failings. PMID:26580146

  3. Fire Extinguisher Control System Provides Reliable Cold Weather Operation

    NASA Technical Reports Server (NTRS)

    Branum, J. C.

    1967-01-01

    A fast-acting, pneumatically and centrally controlled fire extinguisher (firex) system is effective in freezing climates. The easy-to-operate system provides a fail-dry function that is activated by an electrical power failure.

  4. How Long Can the Hubble Space Telescope Operate Reliably?

    NASA Technical Reports Server (NTRS)

    Xapsos, M. A.; Stauffer, C.; Jordan, T.; Poivey, C.; Lum, G.; Haskins, D. N.; Pergosky, A. M.; Smith, D. C.; LaBel, K. A.

    2014-01-01

    Total ionizing dose exposure of electronic parts in the Hubble Space Telescope is analyzed using 3-D ray trace and Monte Carlo simulations. Results are discussed along with other potential failure mechanisms for science operations.

  5. Balancing low cost with reliable operation in the rotordynamic design of the ALS Liquid Hydrogen Fuel Turbopump

    NASA Technical Reports Server (NTRS)

    Greenhill, L. M.

    1990-01-01

    The Air Force/NASA Advanced Launch System (ALS) Liquid Hydrogen Fuel Turbopump (FTP) has primary design goals of low cost and high reliability, with performance and weight having less importance. This approach is atypical compared with other rocket engine turbopump design efforts, such as on the Space Shuttle Main Engine (SSME), which emphasized high performance and low weight. Similar to the SSME turbopumps, the ALS FTP operates supercritically, which implies that stability and bearing loads strongly influence the design. In addition, the use of low cost/high reliability features in the ALS FTP such as hydrostatic bearings, relaxed seal clearances, and unshrouded turbine blades also have a negative influence on rotordynamics. This paper discusses the analysis conducted to achieve a balance between low cost and acceptable rotordynamic behavior, to ensure that the ALS FTP will operate reliably without subsynchronous instabilities or excessive bearing loads.

  6. Using Information from Operating Experience to Inform Human Reliability Analysis

    SciTech Connect

    Bruce P. Hallbert; David I. Gertman; Julie Marble; Erasmia Lois; Nathan Siu

    2004-06-01

    This paper reports on efforts being sponsored by the U.S. NRC and performed by INEEL to develop a technical basis and perform work to extract information from sources for use in HRA. The objectives of this work are to: 1) develop a method for conducting risk-informed event analysis of human performance information that stems from operating experience at nuclear power plants and for compiling and documenting the results in a structured manner; 2) provide information from these analyses for use in risk-informed and performance-based regulatory activities; 3) create methods for information extraction and a repository for this information that, likewise, support HRA methods and their applications.

  7. Reliability of lead-calcium automotive batteries in practical operations

    NASA Astrophysics Data System (ADS)

    Burghoff, H.-G.; Richter, G.

    In order to reach a statistically sound conclusion on the suitability of maintenance-free, lead-calcium automotive batteries for practical operations, the failure behaviour of such batteries has been observed in a large-scale experiment carried out by Mercedes-Benz AG and Robert Bosch GmbH in different climatic zones of North America. The results show that the average failure behaviour is not significantly different from that of batteries from other manufacturers using other grid alloy systems and operated under otherwise identical conditions; the cumulative failure probability after 30 months is 17%. The principal causes of failure are: (i) early failure: transport damage, filling errors, and short-circuits due to the outer plates being pushed up during plate-block assembly (manufacturing defect); (ii) statistical failure: short-circuits due to growth of positive plates caused by a reduction in the mechanical strength of the cast positive grid as a result of corrosion; (iii) late failure due to an increased occurrence of short-circuits, especially frequent in the outer cell facing the engine of the vehicle (subjected to high temperature), and to defects caused by capacity decay. As expected, the batteries exhibit extremely low water loss in each cell. The poor cyclical performance of stationary batteries, caused by acid stratification and well known from laboratory tests, has no detrimental effect on the batteries in use. After a thorough analysis of the corrosion process, the battery manufacturer changed the grid alloy and the method of its production, and thus limited the corrosion problem with cast lead-calcium grids and with it the possibility of plate growth. The mathematical methods used in this study, and in particular the characteristic factors derived from them, have proven useful for assessing the suitability of automotive batteries.

  8. The reliability analysis of a separated, dual fail operational redundant strapdown IMU. [inertial measurement unit

    NASA Technical Reports Server (NTRS)

    Motyka, P.

    1983-01-01

    A methodology for quantitatively analyzing the reliability of redundant avionics systems in general, and of the dual, separated Redundant Strapdown Inertial Measurement Unit (RSDIMU) in particular, is presented. The RSDIMU is described and a candidate failure detection and isolation system is presented. A Markov reliability model is employed. The operational states of the system are defined and the single-step state transition diagrams discussed. Graphical results showing the impact of major system parameters on the reliability of the RSDIMU system are presented and discussed.
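    A discrete-time Markov reliability model of the kind referenced above can be propagated with a simple matrix recursion. The sketch below is illustrative only: the three-state structure (full-up, fail-operational, system failed) and the transition probabilities are assumptions, not the paper's actual RSDIMU model.

```python
import numpy as np

def reliability(Q, p0, steps):
    """Propagate a discrete-time Markov reliability model.

    Q     -- single-step state transition matrix (rows sum to 1)
    p0    -- initial state probability vector
    steps -- number of mission time steps to propagate
    The last state is taken as the absorbing system-failure state, so
    reliability at each step is the probability mass outside that state.
    """
    p = np.array(p0, dtype=float)
    rel = []
    for _ in range(steps):
        p = p @ Q                  # one-step state transition
        rel.append(1.0 - p[-1])    # reliability = 1 - P(system failed)
    return rel
```

    For a fail-operational architecture, the transition matrix routes probability from the full-up state through the degraded-but-operational state before reaching failure, which is what makes the redundancy pay off in the computed reliability curve.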

  9. Resource reliability, accessibility and governance: pillars for managing water resources to achieve water security in Nepal

    NASA Astrophysics Data System (ADS)

    Biggs, E. M.; Duncan, J.; Atkinson, P.; Dash, J.

    2013-12-01

    As one of the world's most water-abundant countries, Nepal has plenty of water yet resources are both spatially and temporally unevenly distributed. With a population heavily engaged in subsistence farming, whereby livelihoods are entirely dependent on rain-fed agriculture, changes in freshwater resources can substantially impact upon survival. The two main sources of water in Nepal come from monsoon precipitation and glacial runoff. The former is essential for sustaining livelihoods where communities have little or no access to perennial water resources. Much of Nepal's population live in the southern Mid-Hills and Terai regions where dependency on the monsoon system is high and climate-environment interactions are intricate. Any fluctuations in precipitation can severely affect essential potable resources and food security. As the population continues to expand in Nepal, and pressures build on access to adequate and clean water resources, there is a need for institutions to cooperate and increase the effectiveness of water management policies. This research presents a framework detailing three fundamental pillars for managing water resources to achieve sustainable water security in Nepal. These are (i) resource reliability; (ii) adequate accessibility; and (iii) effective governance. Evidence is presented which indicates that water resources are adequate in Nepal to sustain the population. In addition, aspects of climate change are having less impact than previously perceived e.g. results from trend analysis of precipitation time-series indicate a decrease in monsoon extremes and interannual variation over the last half-century. However, accessibility to clean water resources and the potential for water storage is limiting the use of these resources. This issue is particularly prevalent given the heterogeneity in spatial and temporal distributions of water. Water governance is also ineffective due to government instability and a lack of continuity in policy

  10. Operational reliability of end packing of water and chemical pumps

    SciTech Connect

    Golobev, A.I.

    1984-05-01

    The multiplicity of designs of end packings for water and chemical pumps is explained by the diversity of their operational conditions and packing specifications. The following groups of packings with common constructional features can be identified: packings for chemically neutral media; packings for chemically active media; packings for highly active media; packings for highly abrasive media; and packings for high-temperature and low-temperature media. Examples are given of some designs of end packings. These packings extensively use siliconized graphites as the friction-pair material. The material of the friction-pair rings should possess antifriction properties, corrosion resistance, thermal strength and erosion resistance. Rubber rings of circular section are most often used as secondary seals in the design of end packings. Among the main drawbacks of rubber seals is their tendency to age. Bellows made of rubber, Teflon and metal represent more effective secondary seals. Springs used in sealing systems absorb all of the vibrations of the packings; they experience variable stresses and undergo fatigue failure. The paper describes the failure modes of each component of end seals in more detail and suggests methods for alleviating the problems associated with each one.

  11. Reliability of High Power Laser Diode Arrays Operating in Long Pulse Mode

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Meadows, Byron L.; Barnes, Bruce W.; Lockard, George E.; Singh, Upendra N.; Kavaya, Michael J.; Baker, Nathaniel R.

    2006-01-01

    Reliability and lifetime of quasi-CW laser diode arrays are greatly influenced by their thermal characteristics. This paper examines the thermal properties of laser diode arrays operating in long pulse duration regime.

  12. Inspiration, Perspiration, and Time: Operations and Achievement in Edison Schools

    ERIC Educational Resources Information Center

    Gill, Brian P.; Hamilton, Laura S.; Lockwood, J. R.; Marsh, Julie A.; Zimmer, Ron W.; Hill, Deanna; Pribesh, Shana

    2005-01-01

    New forms of governing and managing public schools have proliferated in recent years, spawning the establishment and growth of companies that operate public schools under contract. Among these education management organizations, or EMOs, the largest and most visible is Edison Schools, Inc., with a nationwide network in 2004-2005 of 103 managed…

  13. Modeling of a bubble-memory organization with self-checking translators to achieve high reliability.

    NASA Technical Reports Server (NTRS)

    Bouricius, W. G.; Carter, W. C.; Hsieh, E. P.; Wadia, A. B.; Jessep, D. C., Jr.

    1973-01-01

    Study of the design and modeling of a highly reliable bubble-memory system that has the capabilities of: (1) correcting a single 16-adjacent bit-group error resulting from failures in a single basic storage module (BSM), and (2) detecting with a probability greater than 0.99 any double errors resulting from failures in BSM's. The results of the study justify the design philosophy adopted of employing memory data encoding and a translator to correct single group errors and detect double group errors to enhance the overall system reliability.

  14. Independent transmission system operators and their role in maintaining reliability in a restructured electric power industry

    SciTech Connect

    1998-01-01

    This report summarizes the current status of proposals to form Independent System Operators (ISOs) to operate high-voltage transmission systems in the United States and reviews their potential role in maintaining bulk power system reliability. As background information, the likely new industry structure, nature of deregulated markets, and institutional framework for bulk power system reliability are reviewed. The report identifies issues related to the formation of ISOs and their roles in markets and in reliability, and describes potential policy directions for encouraging the formation of effective ISOs and ensuring bulk system reliability. Two appendices are provided, which address: (1) system operation arrangements in other countries, and (2) summaries of regional U.S. ISO proposals.

  15. Is It Really Possible to Test All Educationally Significant Achievements with High Levels of Reliability?

    ERIC Educational Resources Information Center

    Davis, Andrew

    2015-01-01

    PISA claims that it can extend its reach from its current core subjects of Reading, Science, Maths and problem-solving. Yet given the requirement for high levels of reliability for PISA, especially in the light of its current high stakes character, proposed widening of its subject coverage cannot embrace some important aspects of the social and…

  16. Reliability and Validity of the "Achievement Emotions Questionnaire": A Study of Argentinean University Students

    ERIC Educational Resources Information Center

    Paoloni, Paola Verónica; Vaja, Arabela Beatriz; Muñoz, Verónica Lilian

    2014-01-01

    Introduction: This paper aims at describing the psychometric features of the Achievement Emotions Questionnaire (AEQ), focusing specifically on the section that measures class emotions. From a theoretical perspective, this instrument was designed based on the control-value theory of achievement emotions. Therefore, a description of the…

  17. A study of the longevity and operational reliability of Goddard Spacecraft, 1960-1980

    NASA Technical Reports Server (NTRS)

    Shockey, E. F.

    1981-01-01

    Compiled data regarding the design lives and the lifetimes actually achieved by 104 orbiting satellites launched by the Goddard Space Flight Center between 1960 and 1980 are analyzed. Historical trends over the entire 21-year period are reviewed, and the more recent data are subjected to an examination of several key parameters. An empirical reliability function is derived and compared with various mathematical models. Data from related studies are also discussed. The results provide insight into the reliability history of Goddard spacecraft and guidance for estimating the reliability of future programs.

  18. 76 FR 23470 - Version One Regional Reliability Standard for Transmission Operations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-27

    ... Regional Reliability Standard for Transmission Operations, Notice of Proposed Rulemaking, 75 FR 81,157 (Dec... associated scheduled flows on major WECC transfer paths do not exceed system operating limits for more than... actual flows and associated scheduled flows on major WECC transfer paths do not exceed system...

  19. Laser welding of automotive aluminum alloys to achieve defect-free, structurally sound and reliable welds

    SciTech Connect

    DebRoy, T.

    2000-11-17

    The objective of this program was to seek improved process control and weldment reliability during laser welding of automotive aluminum alloys while retaining the high speed and accuracy of the laser beam welding process. The effects of various welding variables on the loss of alloying elements and the formation of porosity and other geometric weld defects such as underfill and overfill were studied both experimentally and theoretically.

  20. Operation Reliability Assessment for Cutting Tools by Applying a Proportional Covariate Model to Condition Monitoring Information

    PubMed Central

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-01-01

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools. PMID:23201980

  1. Operation reliability assessment for cutting tools by applying a proportional covariate model to condition monitoring information.

    PubMed

    Cai, Gaigai; Chen, Xuefeng; Li, Bing; Chen, Baojia; He, Zhengjia

    2012-09-25

    The reliability of cutting tools is critical to machining precision and production efficiency. The conventional statistic-based reliability assessment method aims at providing a general and overall estimation of reliability for a large population of identical units under given and fixed conditions. However, it has limited effectiveness in depicting the operational characteristics of a cutting tool. To overcome this limitation, this paper proposes an approach to assess the operation reliability of cutting tools. A proportional covariate model is introduced to construct the relationship between operation reliability and condition monitoring information. The wavelet packet transform and an improved distance evaluation technique are used to extract sensitive features from vibration signals, and a covariate function is constructed based on the proportional covariate model. Ultimately, the failure rate function of the cutting tool being assessed is calculated using the baseline covariate function obtained from a small sample of historical data. Experimental results and a comparative study show that the proposed method is effective for assessing the operation reliability of cutting tools.
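    The failure-rate construction described in the two entries above can be sketched as a baseline hazard scaled by a function of condition-monitoring features. The sketch below uses a Weibull baseline and an exponential covariate link as stand-ins; the shape, scale, and weights are illustrative assumptions, and the paper's actual covariate function is built from wavelet packet features of vibration signals.

```python
import math

def failure_rate(t, features, beta, shape=2.0, scale=1000.0):
    """Hazard estimate: a baseline hazard scaled by monitoring features.

    The baseline is a Weibull hazard (shape and scale are illustrative),
    and `beta` weights the extracted condition features; both would be
    fitted from a small sample of historical run-to-failure data.
    """
    h0 = (shape / scale) * (t / scale) ** (shape - 1)       # baseline Weibull hazard
    covariate = math.exp(sum(b * z for b, z in zip(beta, features)))
    return h0 * covariate                                   # feature-scaled failure rate
```

    The appeal of this structure, per the abstract, is that a small historical sample fixes the baseline while live monitoring features individualize the assessment to the specific tool.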

  2. Achieving cost reductions in EOSDIS operations through technology evolution

    NASA Technical Reports Server (NTRS)

    Newsome, Penny; Moe, Karen; Harberts, Robert

    1996-01-01

    The Earth Observing System (EOS) Data and Information System (EOSDIS) mission includes the cost-effective management and distribution of large amounts of data to the earth science community. The effect of the introduction of new information system technologies on the evolution of EOSDIS is considered. One of the steps taken by NASA to enable the introduction of new information system technologies into EOSDIS is the funding of technology development through prototyping. Recent and ongoing prototyping efforts and their potential impact on the performance and cost-effectiveness of EOSDIS are discussed. The technology evolution process as it relates to the effective operation of EOSDIS is described, and methods are identified for supporting the transfer of relevant technology to EOSDIS components.

  3. The importance of thermal-vacuum testing in achieving high reliability of spacecraft mechanisms

    NASA Technical Reports Server (NTRS)

    Parker, K.

    1984-01-01

    The work performed on thermal-vacuum testing of complex spacecraft mechanisms is described. The objective of these tests is to assess mechanism reliability by monitoring performance in an environment that closely resembles the one that will occur during flight. To be both valid and cost-effective, the tests are performed in a detailed, formally controlled manner. A review of the major test observations is given, during which several failure modes were detected. Full confidence now exists in many mechanism and component designs, and much valuable data has been obtained.

  4. Quantifying the Operational Benefits of Conventional and Advanced Pumped Storage Hydro on Reliability and Efficiency: Preprint

    SciTech Connect

    Krad, I.; Ela, E.; Koritarov, V.

    2014-07-01

    Pumped storage hydro (PSH) plants have significant potential to provide reliability and efficiency benefits in future electric power systems with high penetrations of variable generation. New PSH technologies, such as adjustable-speed PSH, have been introduced that can also present further benefits. This paper demonstrates and quantifies some of the reliability and efficiency benefits afforded by PSH plants by utilizing the Flexible Energy Scheduling Tool for the Integration of Variable generation (FESTIV), an integrated power system operations tool that evaluates both reliability and production costs.

  5. Operational Impacts of Operating Reserve Demand Curves on Production Cost and Reliability: Preprint

    SciTech Connect

    Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik; Gao, Wenzhong

    2015-10-27

    The electric power industry landscape is continually evolving. As emerging technologies such as wind, solar, electric vehicles, and energy storage systems become more cost-effective and present in the system, traditional power system operating strategies will need to be reevaluated. The presence of wind and solar generation (commonly referred to as variable generation) may result in an increase in the variability and uncertainty of the net load profile. One mechanism to mitigate this is to schedule and dispatch additional operating reserves. These operating reserves aim to ensure that there is enough capacity online in the system to account for the increased variability and uncertainty occurring at finer temporal resolutions. A new operating reserve strategy, referred to as flexibility reserve, has been introduced in some regions. A similar implementation is explored in this paper, and its implications on power system operations are analyzed.
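    A simple way to see how such a reserve could be sized is to hold enough capacity to cover a chosen percentile of historical net-load forecast errors. This empirical-percentile rule is an illustrative assumption, not the flexibility-reserve formulation studied in the paper.

```python
def reserve_requirement(net_load_errors, coverage=0.95):
    """Size an operating reserve to cover a percentile of net-load forecast errors.

    net_load_errors -- historical net-load forecast errors (MW), signed
    coverage        -- fraction of error magnitudes the reserve should cover
    """
    errs = sorted(abs(e) for e in net_load_errors)        # error magnitudes, ascending
    k = min(len(errs) - 1, int(coverage * len(errs)))     # index of the coverage percentile
    return errs[k]                                        # reserve level in MW
```

    Raising `coverage` buys reliability at the cost of keeping more capacity online, which is exactly the production-cost-versus-reliability trade-off the paper analyzes.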

  6. Operational Impacts of Operating Reserve Demand Curves on Production Cost and Reliability

    SciTech Connect

    Krad, Ibrahim; Ibanez, Eduardo; Ela, Erik; Gao, Wenzhong

    2015-11-02

    The electric power industry landscape is continually evolving. As emerging technologies such as wind, solar, electric vehicles, and energy storage systems become more cost-effective and present in the system, traditional power system operating strategies will need to be reevaluated. The presence of wind and solar generation (commonly referred to as variable generation) may result in an increase in the variability and uncertainty of the net load profile. One mechanism to mitigate this is to schedule and dispatch additional operating reserves. These operating reserves aim to ensure that there is enough capacity online in the system to account for the increased variability and uncertainty occurring at finer temporal resolutions. A new operating reserve strategy, referred to as flexibility reserve, has been introduced in some regions. A similar implementation is explored in this paper, and its implications on power system operations are analyzed.

  7. Wind turbine reliability : understanding and minimizing wind turbine operation and maintenance costs.

    SciTech Connect

    Not Available

    2004-11-01

    Wind turbine system reliability is a critical factor in the success of a wind energy project. Poor reliability directly affects both the project's revenue stream through increased operation and maintenance (O&M) costs and reduced availability to generate power due to turbine downtime. Indirectly, the acceptance of wind-generated power by the financial and developer communities as a viable enterprise is influenced by the risk associated with the capital equipment reliability; increased risk, or at least the perception of increased risk, is generally accompanied by increased financing fees or interest rates. Cost of energy (COE) is a key project evaluation metric, both in commercial applications and in the U.S. federal wind energy program. To reflect this commercial reality, the wind energy research community has adopted COE as a decision-making and technology evaluation metric. The COE metric accounts for the effects of reliability through levelized replacement cost and unscheduled maintenance cost parameters. However, unlike the other cost contributors, such as initial capital investment and scheduled maintenance and operating expenses, costs associated with component failures are necessarily speculative. They are based on assumptions about the reliability of components that in many cases have not been operated for a complete life cycle. Due to the logistical and practical difficulty of replacing major components in a wind turbine, unanticipated failures (especially serial failures) can have a large impact on the economics of a project. The uncertainty associated with long-term component reliability has direct bearing on the confidence level associated with COE projections. In addition, wind turbine technology is evolving. New materials and designs are being incorporated in contemporary wind turbines with the ultimate goal of reducing weight, controlling loads, and improving energy capture. While the goal of these innovations is reduction in the COE, there is a

  8. APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS

    NASA Astrophysics Data System (ADS)

    Mehran, Babak; Nakamura, Hideki

Evaluation of impacts of congestion improvement schemes on travel time reliability is very significant for road authorities since travel time reliability represents operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes based on travel time variation modeling as a function of demand, capacity, weather conditions and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying the Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
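The buffer time index used above as the reliability measure has a standard definition: the gap between the 95th-percentile and the mean travel time, normalized by the mean. A minimal sketch follows; the `buffer_time_index` helper and its nearest-rank percentile rule are illustrative, not the authors' code.

```python
def buffer_time_index(travel_times):
    """Buffer time index = (95th-percentile travel time - mean) / mean.

    Uses a simple nearest-rank estimate of the 95th percentile.
    """
    ordered = sorted(travel_times)
    idx = min(len(ordered) - 1, int(round(0.95 * (len(ordered) - 1))))
    p95 = ordered[idx]
    mean = sum(ordered) / len(ordered)
    return (p95 - mean) / mean
```

An index of 0.3 means a traveller should budget 30% extra time over the average trip to arrive on time 95% of the time; constant travel times give an index of zero.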

  9. Optimising contraction and alignment of cellular collagen hydrogels to achieve reliable and consistent engineered anisotropic tissue.

    PubMed

    O'Rourke, Caitriona; Drake, Rosemary A L; Cameron, Grant W W; Loughlin, A Jane; Phillips, James B

    2015-11-01

    Engineered anisotropic tissue constructs containing aligned cell and extracellular matrix structures are useful as in vitro models and for regenerative medicine. They are of particular interest for nervous system modelling and regeneration, where tracts of aligned neurons and glia are required. The self-alignment of cells and matrix due to tension within tethered collagen gels is a useful tool for generating anisotropic tissues, but requires an optimal balance between cell density, matrix concentration and time to be achieved for each specific cell type. The aim of this study was to develop an assay system based on contraction of free-floating cellular gels in 96-well plates that could be used to investigate cell-matrix interactions and to establish optimal parameters for subsequent self-alignment of cells in tethered gels. Using C6 glioma cells, the relationship between contraction and alignment was established, with 60-80% contraction in the 96-well plate assay corresponding to alignment throughout tethered gels made using the same parameters. The assay system was used to investigate the effect of C6 cell density, collagen concentration and time. It was also used to show that blocking α1 integrin reduced the contraction and self-alignment of these cells, whereas blocking α2 integrin had little effect. The approach was validated by using primary astrocytes in the assay system under culture conditions that modified their ability to contract collagen gels. This detailed investigation describes a robust assay for optimising cellular self-alignment and provides a useful reference framework for future development of self-aligned artificial tissue.

  10. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
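The paper's caution about human redundancy can be made concrete with the textbook two-unit cold-standby model. The sketch below assumes a constant failure rate and adds a `switch_prob` term for the chance that the standby (e.g. a backup operator) actually takes over successfully; the function and its parameters are illustrative, not the authors' model.

```python
import math

def standby_reliability(lam, t, switch_prob=1.0):
    """Reliability at time t of a two-unit cold-standby system.

    lam:         constant failure rate of the active unit.
    switch_prob: probability the standby takes over successfully
                 when the active unit fails (1.0 = perfect switching).
    """
    # Perfect-switch cold standby: R(t) = exp(-lam*t) * (1 + lam*t).
    # An imperfect switch scales the standby's contribution.
    return math.exp(-lam * t) * (1.0 + switch_prob * lam * t)
```

With `switch_prob` near zero the "redundant" system is barely better than a single unit, which is exactly the non-conservative estimate the abstract warns about when human backups are modeled like technical components.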

  11. Reliable operation of 976nm high power DFB broad area diode lasers with over 60% power conversion efficiency

    NASA Astrophysics Data System (ADS)

    Crump, P.; Schultz, C. M.; Wenzel, H.; Knigge, S.; Brox, O.; Maaßdorf, A.; Bugge, F.; Erbert, G.

    2011-02-01

Diode lasers that deliver high continuous wave optical output powers (> 5 W) within a narrow, temperature-stable spectral window are required for many applications. One technical solution is to bury Bragg-gratings within the semiconductor itself, using epitaxial overgrowth techniques to form distributed-feedback broad-area (DFB-BA) lasers. However, such stabilization is only of interest when reliability, operating power and power conversion efficiency are not compromised. Results will be presented from the ongoing optimization of such DFB-BA lasers at the Ferdinand-Braun-Institut (FBH). Our development work focused on 976 nm devices with 90 μm stripe width, as required for pumping Nd:YAG, as well as for direct applications. Such devices operate with a narrow spectral width of < 1 nm (95% power content) to over 10 W continuous wave (CW) optical output. Further optimization of epitaxial growth and device design has now largely eliminated the excess optical loss and electrical resistance typically associated with the overgrown grating layer. These developments have enabled, for the first time, DFB-BA lasers with peak CW power conversion efficiency of > 60% with < 1 nm spectral width (95% power content). Reliable operation has also been demonstrated, with 90 μm stripe devices operating for over 4000 hours to date without failure at 7 W (CW). We detail the technological developments required to achieve these results and discuss the options for further improvements.

  12. Long term reliability and machine operation diagnosis with fiber optic sensors at large turbine generators

    NASA Astrophysics Data System (ADS)

    Bosselmann, T.; Strack, S.; Villnow, M.; Weidner, J. R.; Willsch, M.

    2013-05-01

The increasing share of renewable energy in electric power generation demands higher flexibility in the operation of conventional power plants. The turbo generator has to withstand the influence of frequent start-stop operation on thermal movement and vibration of the stator end windings. Large indirectly cooled turbo generators have been equipped with FBG strain and temperature sensors to monitor the influence of peak load operation. Fiber optic accelerometers have measured the vibration of the end windings at several turbine generators over many years of operation. The long-term reliability of fiber optic vibration, temperature and strain sensors has been successfully proven during years of online operation. The analysis of these data in correlation with significant operating parameters yields important diagnostic information.

  13. An operations concept methodology to achieve low-cost mission operations

    NASA Technical Reports Server (NTRS)

    Ledbetter, Kenneth W.; Wall, Stephen D.

    1993-01-01

    Historically, the Mission Operations System (MOS) for a space mission has been designed last because it is needed last. This has usually meant that the ground system must adjust to the flight vehicle design, sometimes at a significant cost. As newer missions have increasingly longer flight operations lifetimes, the MOS becomes proportionally more difficult and more resource-consuming. We can no longer afford to design the MOS last. The MOS concept may well drive the spacecraft, instrument, and mission designs, as well as the ground system. A method to help avoid these difficulties, responding to the changing nature of mission operations is presented. Proper development and use of an Operations Concept document results in a combined flight and ground system design yielding enhanced operability and producing increased flexibility for less cost.

  14. Operating experience and reliability improvements on the 5 kW CW klystron at Jefferson Lab

    SciTech Connect

    Nelson, R.; Holben, S.

    1997-06-01

    With substantial operating hours on the RF system, considerable information on reliability of the 5 kW CW klystrons has been obtained. High early failure rates led to examination of the operating conditions and failure modes. Internal ceramic contamination caused premature failure of gun potting material and ultimate tube demise through arcing or ceramic fracture. A planned course of reporting and reconditioning of approximately 300 klystrons, plus careful attention to operating conditions and periodic analysis of operational data, has substantially reduced the failure rate. It is anticipated that implementation of planned supplemental monitoring systems for the klystrons will allow most catastrophic failures to be avoided. By predicting end of life, tubes can be changed out before they fail, thus minimizing unplanned downtime. Initial tests have also been conducted on this same klystron operated at higher voltages with resultant higher output power. The outcome of these tests will provide information to be considered for future upgrades to the accelerator.

  15. Wind turbine reliability: understanding and minimizing wind turbine operation and maintenance costs.

    SciTech Connect

    Walford, Christopher A. (Global Energy Concepts. Kirkland, WA)

    2006-03-01

    Wind turbine system reliability is a critical factor in the success of a wind energy project. Poor reliability directly affects both the project's revenue stream through increased operation and maintenance (O&M) costs and reduced availability to generate power due to turbine downtime. Indirectly, the acceptance of wind-generated power by the financial and developer communities as a viable enterprise is influenced by the risk associated with the capital equipment reliability; increased risk, or at least the perception of increased risk, is generally accompanied by increased financing fees or interest rates. This paper outlines the issues relevant to wind turbine reliability for wind turbine power generation projects. The first sections describe the current state of the industry, identify the cost elements associated with wind farm O&M and availability and discuss the causes of uncertainty in estimating wind turbine component reliability. The latter sections discuss the means for reducing O&M costs and propose O&M related research and development efforts that could be pursued by the wind energy research community to reduce cost of energy.

  16. Achieving Lights-Out Operation of SMAP Using Ground Data System Automation

    NASA Technical Reports Server (NTRS)

    Sanders, Antonio

    2013-01-01

    The approach used in the SMAP ground data system to provide reliable, automated capabilities to conduct unattended operations has been presented. The impacts of automation on the ground data system architecture were discussed, including the three major automation patterns identified for SMAP and how these patterns address the operations use cases. The architecture and approaches used by SMAP will set the baseline for future JPL Earth Science missions.

  17. Highly Reliable Operation of Red Laser Diodes for POF Data Links

    NASA Astrophysics Data System (ADS)

    Ohgoh, Tsuyoshi; Mukai, Atsushi; Mukaiyama, Akihiro; Asano, Hideki; Hayakawa, Toshiro

Laser diodes for plastic optical fiber (POF) data links are required to operate stably for more than 100,000 h at 60°C and 5 mW, with transmission speeds beyond 1 Gbps. By optimizing crystal growth conditions and device structures, we have successfully fabricated highly reliable laser diodes with 1.25 Gbps transmission speed. The median lifetime for 5 mW operation at 60°C was estimated to be more than 800,000 h. These results indicate that 660 nm band laser diodes are very promising light sources for POF data links.

  18. Improving Reliability of Service Operation Using FMEA Review and New Opportunity for Investigations

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra

    2016-01-01

Despite the service sector's growing contribution to the global economy, investigation of the application status of service FMEA to support reliable service operation is very limited in the literature. Motivated by this situation, the paper presents an initial survey on the status of, and research gaps in, developing and applying FMEA in service sectors. A systematic preliminary survey using specific criteria was undertaken. Our study indicated that the development and application of service FMEA only partially address the characteristics of service operations, and that it is still applied mainly to goods production and profit-oriented operations. Opportunities for further investigation pertaining to advancement of its decision-supporting tool for service risk appraisal, its modification to cope with sustainability-related requirements, and the application of service FMEA in not-for-profit operations are presented as new avenues for further investigation.

  19. Operations & Maintenance Best Practices - A Guide to Achieving Operational Efficiency Release 3.0

    SciTech Connect

    2010-08-01

    This Operations and Maintenance (O&M) Best Practices Guide was developed under the direction of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The mission of FEMP is to facilitate the Federal Government’s implementation of sound, cost effective energy management and investment practices to enhance the nation’s energy security and environmental stewardship.

  20. Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Gil, Joon-Min

    2015-03-01

    The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of services by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the controls and automation of the networks, which indicates that there is a lack of management plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantage of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault-tolerance of SDN in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes based on four principal SDN breakdown scenarios derived from the failures of the controller, SDN nodes, and connected links. The abovementioned SDN troubles significantly reduce the network reachability to remote devices (e.g., 3D printers, super high-definition cameras, etc.) and the reliability of relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink operations and management overheads of SDN, which leads to the enhancement of responsiveness and reliability of SDN for remote 3D printing and control processes.

  1. Renewal of the Control System and Reliable Long Term Operation of the LHD Cryogenic System

    NASA Astrophysics Data System (ADS)

    Mito, T.; Iwamoto, A.; Oba, K.; Takami, S.; Moriuchi, S.; Imagawa, S.; Takahata, K.; Yamada, S.; Yanagi, N.; Hamaguchi, S.; Kishida, F.; Nakashima, T.

The Large Helical Device (LHD) is a heliotron-type fusion plasma experimental machine which consists of a fully superconducting magnet system cooled by a helium refrigerator having a total equivalent cooling capacity of 9.2 kW@4.4 K. Seventeen plasma experimental campaigns have been performed successfully since 1997 with a high reliability of 99%. However, sixteen years have passed since the beginning of system operation, and improvements are being implemented to prevent serious failures and to pursue further reliability. The LHD cryogenic control system was designed and developed at construction time as an open system utilizing the latest control equipment: VME controllers and UNIX workstations. Since then, the generation change of control equipment has advanced. Down-sizing of control devices from VME controllers to compact PCI controllers has been planned in order to simplify the system configuration and to improve system reliability. The new system is composed of compact PCI controllers and remote I/O connected with EtherNet/IP. Making the system redundant becomes possible by doubling the CPU, LAN, and remote I/O respectively. The smooth renewal of the LHD cryogenic control system and the further improvement of the cryogenic system reliability are reported.

  2. Progress in reliability of fast reactor operation and new trends to increased inherent safety

    SciTech Connect

    Merk, Bruno; Stanculescu, Alexander; Chellapandi, Perumal; Hill, Robert

    2015-06-01

    The reasons for the renewed interest in fast reactors and an overview of the progress in sodium cooled fast reactor operation in the last ten years are given. The excellent operational performance of sodium cooled fast reactors in this period is highlighted as a sound basis for the development of new fast reactors. The operational performance of the BN-600 is compared and evaluated against the performance of German light water reactors to assess the reliability. The relevance of feedback effects for safe reactor design is described, and a new method for the enhancement of feedback effects in fast reactors is proposed. Experimental reactors demonstrating the inherent safety of advanced sodium cooled fast reactor designs are described and the potential safety improvements resulting from the use of fine distributed moderating material are discussed.

  3. Operations & Maintenance Best Practices - A Guide to Achieving Operational Efficiency (Release 3)

    SciTech Connect

    Sullivan, Greg; Pugh, Ray; Melendez, Aldo P.; Hunt, W. D.

    2010-08-04

    This guide highlights operations and maintenance programs targeting energy and water efficiency that are estimated to save 5% to 20% on energy bills without a significant capital investment. The purpose of this guide is to provide you, the Operations and Maintenance (O&M)/Energy manager and practitioner, with useful information about O&M management, technologies, energy and water efficiency, and cost-reduction approaches. To make this guide useful and to reflect your needs and concerns, the authors met with O&M and Energy managers via Federal Energy Management Program (FEMP) workshops. In addition, the authors conducted extensive literature searches and contacted numerous vendors and industry experts. The information and case studies that appear in this guide resulted from these activities. It needs to be stated at the outset that this guide is designed to provide information on effective O&M as it applies to systems and equipment typically found at Federal facilities. This guide is not designed to provide the reader with step-by-step procedures for performing O&M on any specific piece of equipment. Rather, this guide first directs the user to the manufacturer's specifications and recommendations. In no way should the recommendations in this guide be used in place of manufacturer's recommendations. The recommendations in this guide are designed to supplement those of the manufacturer, or, as is all too often the case, provide guidance for systems and equipment for which all technical documentation has been lost. As a rule, this guide will first defer to the manufacturer's recommendations on equipment operation and maintenance.

  4. Reliable assessment of laparoscopic performance in the operating room using videotape analysis.

    PubMed

    Chang, Lily; Hogle, Nancy J; Moore, Brianna B; Graham, Mark J; Sinanan, Mika N; Bailey, Robert; Fowler, Dennis L

    2007-06-01

The Global Operative Assessment of Laparoscopic Skills (GOALS) is a valid assessment tool for objectively evaluating the technical performance of laparoscopic skills in surgery residents. We hypothesized that GOALS would reliably differentiate between an experienced (expert) and an inexperienced (novice) laparoscopic surgeon (construct validity) based on a blinded videotape review of a laparoscopic cholecystectomy procedure. Ten board-certified surgeons actively engaged in the practice and teaching of laparoscopy reviewed and evaluated the videotaped operative performance of one novice and one expert laparoscopic surgeon using GOALS. Each reviewer recorded a score for both the expert and the novice videotape reviews in each of the 5 domains in GOALS (depth perception, bimanual dexterity, efficiency, tissue handling, and overall competence). The scores for the expert and the novice were compared and statistically analyzed using single-factor analysis of variance (ANOVA). The expert scored significantly higher than the novice did in the domains of depth perception (p = .005), bimanual dexterity (p = .001), efficiency (p = .001), and overall competence (p = .001); there was no difference between the two for tissue handling. Interrater reliability for the reviewers of the novice tape was Cronbach alpha = .93, and for the expert tape Cronbach alpha = .87. The Global Operative Assessment of Laparoscopic Skills is a valid, objective assessment tool for evaluating technical surgical performance when used to blindly evaluate an intraoperative videotape recording of a laparoscopic procedure.
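The Cronbach alpha values reported above have a simple closed form: alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals). A minimal sketch (the `cronbach_alpha` helper is illustrative, not the study's statistics code; here raters play the role of "items" and subjects the role of observations):

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha for interrater consistency.

    ratings: one list per rater, each containing that rater's scores
             across the same set of subjects (same order).
    """
    k = len(ratings)            # number of raters
    n = len(ratings[0])         # number of subjects rated

    def var(xs):
        # Unbiased sample variance.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rater_vars = sum(var(r) for r in ratings)
    totals = [sum(r[j] for r in ratings) for j in range(n)]
    return k / (k - 1) * (1 - rater_vars / var(totals))
```

Raters who rank subjects identically yield alpha = 1.0; values like the study's .87-.93 indicate the ten reviewers scored the tapes very consistently.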

  5. The Achievement of Therapeutic Objectives Scale: Interrater Reliability and Sensitivity to Change in Short-Term Dynamic Psychotherapy and Cognitive Therapy

    ERIC Educational Resources Information Center

    Valen, Jakob; Ryum, Truls; Svartberg, Martin; Stiles, Tore C.; McCullough, Leigh

    2011-01-01

    This study examined interrater reliability and sensitivity to change of the Achievement of Therapeutic Objectives Scale (ATOS; McCullough, Larsen, et al., 2003) in short-term dynamic psychotherapy (STDP) and cognitive therapy (CT). The ATOS is a process scale originally developed to assess patients' achievements of treatment objectives in STDP,…

  6. Turbine Reliability and Operability Optimization through the use of Direct Detection Lidar Final Technical Report

    SciTech Connect

    Johnson, David K; Lewis, Matthew J; Pavlich, Jane C; Wright, Alan D; Johnson, Kathryn E; Pace, Andrew M

    2013-02-01

    The goal of this Department of Energy (DOE) project is to increase wind turbine efficiency and reliability with the use of a Light Detection and Ranging (LIDAR) system. The LIDAR provides wind speed and direction data that can be used to help mitigate the fatigue stress on the turbine blades and internal components caused by wind gusts, sub-optimal pointing and reactionary speed or RPM changes. This effort will have a significant impact on the operation and maintenance costs of turbines across the industry. During the course of the project, Michigan Aerospace Corporation (MAC) modified and tested a prototype direct detection wind LIDAR instrument; the resulting LIDAR design considered all aspects of wind turbine LIDAR operation from mounting, assembly, and environmental operating conditions to laser safety. Additionally, in co-operation with our partners, the National Renewable Energy Lab and the Colorado School of Mines, progress was made in LIDAR performance modeling as well as LIDAR feed forward control system modeling and simulation. The results of this investigation showed that using LIDAR measurements to change between baseline and extreme event controllers in a switching architecture can reduce damage equivalent loads on blades and tower, and produce higher mean power output due to fewer overspeed events. This DOE project has led to continued venture capital investment and engagement with leading turbine OEMs, wind farm developers, and wind farm owner/operators.

  7. Post-event human decision errors: operator action tree/time reliability correlation

    SciTech Connect

    Hall, R E; Fragola, J; Wreathall, J

    1982-11-01

    This report documents an interim framework for the quantification of the probability of errors of decision on the part of nuclear power plant operators after the initiation of an accident. The framework can easily be incorporated into an event tree/fault tree analysis. The method presented consists of a structure called the operator action tree and a time reliability correlation which assumes the time available for making a decision to be the dominating factor in situations requiring cognitive human response. This limited approach decreases the magnitude and complexity of the decision modeling task. Specifically, in the past, some human performance models have attempted prediction by trying to emulate sequences of human actions, or by identifying and modeling the information processing approach applicable to the task. The model developed here is directed at describing the statistical performance of a representative group of hypothetical individuals responding to generalized situations.
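Time reliability correlations of the kind described are often represented by a lognormal response-time distribution, so the probability of non-response by time t is the lognormal survival function. The sketch below is a generic illustration of that shape; the `nonresponse_probability` helper, its median time, and the `sigma` spread are placeholder values, not figures from the report.

```python
import math

def nonresponse_probability(t, median_time, sigma=0.8):
    """Probability the crew has NOT completed the correct decision by
    time t, assuming a lognormal response-time distribution.

    median_time: time by which half of crews respond (same units as t).
    sigma:       log-space standard deviation (spread of the TRC).
    """
    if t <= 0:
        return 1.0
    z = (math.log(t) - math.log(median_time)) / sigma
    # Survival function of the standard normal via erfc.
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

At the median time the non-response probability is 0.5 by construction, and it falls off rapidly as the available time grows, which is the "time available dominates" assumption the abstract describes.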

  8. Investigation of the impact of main control room digitalization on operators cognitive reliability in nuclear power plants.

    PubMed

    Zhou, Yong; Mu, Haiying; Jiang, Jianjun; Zhang, Li

    2012-01-01

Currently, there is a trend in nuclear power plants (NPPs) toward introducing digital and computer technologies into main control rooms (MCRs). Safe generation of electric power in NPPs requires reliable performance of cognitive tasks such as fault detection, diagnosis, and response planning. The digitalization of MCRs has dramatically changed the whole operating environment and the ways operators interact with the plant systems. If the design and implementation of the digital technology are incompatible with operators' cognitive characteristics, they may have negative effects on operators' cognitive reliability. First, on the basis of three essential prerequisites for successful cognitive tasks, a causal model is constructed to reveal the typical human performance issues arising from digitalization. The cognitive mechanisms by which these issues impact cognitive reliability are analyzed in detail. Then, Bayesian inference is used to quantify and prioritize the influences of these factors. The results suggest that interface management and unbalanced workload distribution have more significant impacts on operators' cognitive reliability than the other factors.

  9. Utilizing clad piping to improve process plant piping integrity, reliability, and operations

    SciTech Connect

    Chakravarti, B.

    1996-07-01

During the past four years, carbon steel piping clad with type 304L (UNS S30403) stainless steel has been used with exceptional success to solve the flow accelerated corrosion (FAC) problem in nuclear power plants. The product is designed to allow "like for like" replacement of damaged carbon steel components, where the carbon steel remains the pressure boundary and the type 304L (UNS S30403) stainless steel provides the corrosion allowance. More than 3000 feet of piping and 500 fittings in sizes from 6 to 36-in. NPS have been installed in the extraction steam and other lines of these power plants to improve reliability, eliminate inspection programs, reduce O&M costs and provide operational benefits. This concept of utilizing clad piping to solve various corrosion problems in industrial and process plants, by conservatively selecting a high alloy material as cladding, can provide similar, significant benefits in controlling corrosion problems, minimizing maintenance cost, and improving operation and reliability to control performance and risks in a highly cost effective manner. This paper will present various material combinations and applications that appear ideally suited for use of clad piping components in process plants.

  10. School Achievement and Personality. Description of School Achievement in Terms of Ability, Trait, Situational and Background Variables. II: Operations at the Variable Level.

    ERIC Educational Resources Information Center

    Niskanen, Erkki A.

    This monograph contains the second section, operations at the variable level, of a report of studies done in Helsinki, Finland, describing school achievement in terms of ability, trait, situational, and background variables. The report (1) investigates the structure of school achievement, (2) describes school achievement in terms of selected…

  11. Improving Secondary Students' Academic Achievement through a Focus on Reform Reliability: 4- and 9-Year Findings from the High Reliability Schools Project

    ERIC Educational Resources Information Center

    Stringfield, Sam; Reynolds, David; Schaffer, Eugene C.

    2008-01-01

    The authors describe a reform effort in which characteristics derived from High Reliability Organization research were used to shape whole school reform. Longitudinal analyses of outcome data from 12 Welsh secondary schools indicated that 4 years after the effort was initiated, student outcomes at the sites were strongly positive. Additional…

  12. The National Aeronautics and Space Administration Nondestructive Evaluation Program for Safe and Reliable Operations

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2005-01-01

    The National Aeronautics and Space Administration (NASA) Nondestructive Evaluation (NDE) Program is presented. As a result of the loss of seven astronauts and the Space Shuttle Columbia on February 1, 2003, NASA has undergone many changes in its organization. NDE is one of the key areas that the Columbia Accident Investigation Board (CAIB) recognized as needing to be strengthened, by establishing NDE as a discipline with Independent Technical Authority (ITA). The current NASA NDE system and activities are presented, including the latest developments in inspection technologies being applied to the Space Transportation System (STS). The unfolding trends and directions in NDE for the future are discussed as they apply to assuring safe and reliable operations.

  13. Effect of system workload on operating system reliability - A study on IBM 3081

    NASA Technical Reports Server (NTRS)

    Iyer, R. K.; Rossetti, D. J.

    1985-01-01

    This paper presents an analysis of operating system failures on an IBM 3081 running VM/SP. Three broad categories of software failures are found: error handling, program control or logic, and hardware-related; more than 25 percent of software failures occur in the hardware/software interface. Measurements show that results on software reliability cannot be considered representative unless the system workload is taken into account. The overall CPU execution rate, although measured to be close to 100 percent most of the time, is not found to correlate strongly with the occurrence of failures. Possible reasons for the observed workload-failure dependency, based on detailed investigations of the failure data, are discussed.
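    The workload-failure dependence described above can be illustrated with a toy correlation check. The data below are synthetic stand-ins, not the paper's IBM 3081 measurements: a CPU utilization series near saturation and a hypothetical interactive-workload proxy that actually drives the failures.

    ```python
    import random
    import statistics

    random.seed(0)

    # Synthetic hourly samples: CPU utilization near saturation, plus an
    # interactive-workload proxy that (by construction) drives the failures.
    cpu_util = [min(1.0, random.gauss(0.95, 0.04)) for _ in range(200)]
    interactive_load = [random.random() for _ in range(200)]
    failures = [1 if load > 0.9 and random.random() < 0.8 else 0
                for load in interactive_load]

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # Raw CPU rate barely correlates with failures; the workload proxy does.
    r_cpu = pearson(cpu_util, failures)
    r_int = pearson(interactive_load, failures)
    ```

    The point mirrors the abstract's finding: a near-constant CPU execution rate carries little information about failures, so reliability results are only representative when a meaningful workload measure is included.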

  14. Reliability issues for a bolometer detector for ITER at high operating temperatures.

    PubMed

    Meister, H; Kannamüller, M; Koll, J; Pathak, A; Penzel, F; Trautmann, T; Detemple, P; Schmitt, S; Langer, H

    2012-10-01

    The first detector prototypes for the ITER bolometer diagnostic featuring a 12.5 μm thick Pt-absorber have been realized and characterized in laboratory tests. The results show linear dependencies of the calibration parameters and are in line with measurements of prototypes with thinner absorbers. However, thermal cycling tests up to 450 °C of the prototypes with thick absorbers demonstrated that their reliability at these elevated operating temperatures is not yet sufficient. Profilometer measurements showed a deflection of the membrane, hinting at stresses due to the deposition processes of the absorber. Finite element analysis (FEA) managed to reproduce the deflection and identified the highest stresses in the membrane in the region around the corners of the absorber. FEA was further used to identify changes in the geometry of the absorber with a positive impact on the intrinsic stresses of the membrane. However, further improvements are still necessary.

  15. Pre-operative Thresholds for Achieving Meaningful Clinical Improvement after Arthroscopic Treatment of Femoroacetabular Impingement

    PubMed Central

    Nwachukwu, Benedict U.; Fields, Kara G.; Nawabi, Danyal H.; Kelly, Bryan T.; Ranawat, Anil S.

    2016-01-01

    Objectives: Knowledge of the thresholds and determinants for successful femoroacetabular impingement (FAI) treatment is evolving. The primary purpose of this study was to define pre-operative outcome score thresholds that can be used to predict the patients most likely to achieve a meaningful clinically important difference (MCID) after arthroscopic FAI treatment. Secondarily, determinants of achieving MCID were evaluated. Methods: A prospective institutional hip arthroscopy registry was reviewed to identify patients with FAI treated with arthroscopic labral surgery, acetabular rim trimming, and femoral osteochondroplasty. The modified Harris Hip Score (mHHS), the Hip Outcome Score (HOS), and the international Hip Outcome Tool (iHOT-33) were administered at baseline and at one year post-operatively. MCID was calculated using a distribution-based method. A receiver operating characteristic (ROC) analysis was used to calculate cohort-based threshold values predictive of achieving MCID. Area under the curve (AUC) was used to define predictive ability (strength of association), with AUC >0.7 considered acceptably predictive. Univariate and multivariable analyses were used to analyze demographic, radiographic, and intra-operative factors associated with achieving MCID. Results: There were 374 patients (mean ± SD age, 32.9 ± 10.5), and 56.4% were female. The MCID for mHHS, HOS activities of daily living (HOS-ADL), HOS Sports, and iHOT-33 was 8.2, 8.4, 14.5, and 12.0, respectively. ROC analysis (threshold, % achieving MCID, strength of association) for these tools in our population was: mHHS (61.6, 78%, 0.68), HOS-ADL (83.8, 68%, 0.84), HOS-Sports (63.9, 64%, 0.74), and iHOT-33 (54.3, 82%, 0.65). Likelihood of achieving MCID declined above and increased below these thresholds. In univariate analysis, female sex, femoral version, lower acetabular Outerbridge score, and increasing CT sagittal center edge angle (CEA) were predictive of achieving MCID. In multivariable analysis
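    The cohort-threshold method described above can be sketched with a minimal ROC/Youden computation. The baseline scores and MCID outcomes below are made up for illustration (not the study's data), and the sketch assumes, as the abstract implies, that lower baseline scores predict achieving MCID.

    ```python
    # Minimal ROC sketch: AUC by the Mann-Whitney rank formulation, plus the
    # cutoff maximizing Youden's J. Direction: lower baseline score => more
    # likely to achieve MCID.

    def roc_auc(scores, achieved):
        pos = [s for s, a in zip(scores, achieved) if a]
        neg = [s for s, a in zip(scores, achieved) if not a]
        # A "win" is a positive (MCID achieved) scoring below a negative.
        wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def youden_threshold(scores, achieved):
        best_t, best_j = None, -1.0
        for t in sorted(set(scores)):
            tp = sum(1 for s, a in zip(scores, achieved) if a and s <= t)
            fn = sum(1 for s, a in zip(scores, achieved) if a and s > t)
            fp = sum(1 for s, a in zip(scores, achieved) if not a and s <= t)
            tn = sum(1 for s, a in zip(scores, achieved) if not a and s > t)
            j = tp / (tp + fn) + tn / (fp + tn) - 1  # sensitivity + specificity - 1
            if j > best_j:
                best_t, best_j = t, j
        return best_t, best_j

    baseline = [40, 45, 50, 55, 58, 62, 66, 70, 75, 80]   # illustrative mHHS-like scores
    mcid     = [1,  1,  1,  1,  1,  0,  1,  0,  0,  0]    # 1 = achieved MCID at 1 year
    auc = roc_auc(baseline, mcid)
    thr, j = youden_threshold(baseline, mcid)
    ```

    Patients at or below the returned threshold are the ones most likely to achieve MCID, which is exactly how the study's per-instrument cutoffs (e.g. mHHS 61.6) are meant to be read.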

  16. Renewable Resource Integration Project - Scoping Study of Strategic Transmission, Operations, and Reliability Issues

    SciTech Connect

    Eto, Joseph; Budhraja, Vikram; Ballance, John; Dyer, Jim; Mobasheri, Fred; Eto, Joseph

    2008-07-01

    California is on a path to increase utilization of renewable resources and will need to integrate approximately 30,000 megawatts (MW) of new renewable generation in the next 20 years. Renewable resources are typically located in remote areas, not near the load centers. Nearly two-thirds, or 20,000 MW, of the new renewable resources needed are likely to be delivered to Los Angeles Basin transmission gateways. Integration of renewable resources requires interconnection to the power grid, expansion of the transmission system capability between the backbone power grid and transmission gateways, and an increase in delivery capacity from transmission gateways to the local load centers. To scope the transmission, operations, and reliability issues for renewables integration, this research focused on the Los Angeles Basin Area transmission gateways, where most of the new renewables are likely. Necessary actions for successful renewables integration include: (1) Expand Los Angeles Basin Area transmission gateway and nomogram limits by 10,000 to 20,000 MW; (2) Upgrade the local transmission network for deliverability to load centers; (3) Secure additional storage, demand management, automatic load control, dynamic pricing, and other resources that meet the regulation and ramping needed in real-time operations; (4) Enhance local voltage support; and (5) Expand deliverability from Los Angeles to San Diego and Northern California.

  17. REMOTES: reliable and modular telescope solution for seamless operation and monitoring of various observation facilities

    NASA Astrophysics Data System (ADS)

    Jakubec, M.; Skala, P.; Sedlacek, M.; Nekola, M.; Strobl, J.; Blazek, M.; Hudec, R.

    2012-09-01

    Astronomers often need to put several pieces of equipment together and deploy them at a particular location. This task can prove to be a tough challenge, especially for distant observing facilities with intricate operating conditions, poor communication infrastructure, and an unreliable power source. To make the task even more complicated, astronomers also expect secure and reliable operation in both attended and unattended modes, comfortable software with a user-friendly interface, and full supervision over the observation site at all times. During reconstruction of the D50 robotic telescope facility, we faced many of the issues mentioned above. To address them, we based our solution on a flexible group of hardware modules controlling the equipment of the observation site, connected together by an Ethernet network and orchestrated by our management software. This approach is both affordable and powerful enough to fulfill all of the observation requirements at the same time. We quickly realized that the outcome of this project could also be useful for other observation facilities, because they probably face the same issues we have solved during our project. In this contribution, we will point out the key features and benefits of the solution for observers. We will demonstrate how the solution works at our observing location. We will also discuss typical management and maintenance scenarios and how we have supported them in our solution. Finally, the overall architecture and technical aspects of the solution will be presented and particular design and technology decisions will be clarified.

  18. Field Operations Program Chevrolet S-10 (Lead-Acid) Accelerated Reliability Testing - Final Report

    SciTech Connect

    J. Francfort; J. Argueta; M. Wehrey; D. Karner; L. Tyree

    1999-07-01

    This report summarizes the Accelerated Reliability testing of five lead-acid battery-equipped Chevrolet S-10 electric vehicles by the US Department of Energy's Field Operations Program and the Program's testing partners, Electric Transportation Applications (ETA) and Southern California Edison (SCE). ETA and SCE operated the S-10s with the goal of placing 25,000 miles on each vehicle within 1 year, providing an accelerated life-cycle analysis. The testing was performed according to established and published test procedures. The S-10s' average ranges were highest during summer months; changes in ambient temperature from night to day and from season to season impacted range by as much as 10 miles. Drivers also noted that excessive use of power during acceleration had a dramatic effect on vehicle range. The spirited performance of the S-10s created a great temptation to inexperienced electric vehicle drivers to ''have a good time'' and to fully utilize the S-10's acceleration capability. The price of injudicious use of power is greatly reduced range and a long-term reduction in battery life. The range using full-power accelerations followed by rapid deceleration in city driving has been 20 miles or less.

  19. Evaluating student's academic achievement by a non-additive aggregation operator

    NASA Astrophysics Data System (ADS)

    Abdullah, Siti Rohana Goh; Kasim, Maznah Mat; Ramli, Mohammad Fadzli; Sakib, Elyana

    2014-07-01

    In the context of multi-criteria decision making (MCDM), the average method used in the Integrated Students Information System (ISIS) can be classified as an additive measure, where students' academic achievement is aggregated on the assumption that there is no interaction among the evaluation criteria, i.e. that the criteria are independent. This method is not suitable if schools look for equilibrium in their students' achievement. Thus, a non-additive aggregation operator was chosen to analyze students' academic achievements, further taking into account the interactions between the subjects. The measures of interaction were represented as λ-fuzzy measures. The effectiveness of this non-additive measure can be recognized by comparing the new ranking, obtained with the non-additive aggregation operator, against the current ranking approach based on global scores from the average score method. Throughout this study, it could be postulated that employing non-additive aggregation operators to obtain an overall evaluation is more suitable, because this method is able to deal with interactions among subjects, whereas the average method assumes that there is no interaction between subjects, i.e. that the subjects are independent.
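    The λ-fuzzy-measure aggregation described above is usually realized as a Choquet integral. A minimal sketch follows; the subject "densities" (importance weights) are illustrative, not taken from the study.

    ```python
    # Choquet integral over a lambda-fuzzy measure (Sugeno lambda-measure).
    # The measure of a subset A is g(A) = (prod_{i in A}(1 + L*g_i) - 1) / L,
    # where L solves 1 + L = prod_i (1 + L*g_i), L != 0.

    def solve_lambda(densities, iters=200):
        """Bisection for lambda; assumes sum(densities) != 1
        (if the densities sum to 1 the measure is additive, L = 0)."""
        def f(L):
            prod = 1.0
            for g in densities:
                prod *= 1.0 + L * g
            return prod - (1.0 + L)
        if sum(densities) > 1.0:
            lo, hi = -0.999999, -1e-9   # root lies in (-1, 0)
        else:
            lo, hi = 1e-9, 50.0          # root lies in (0, inf)
        for _ in range(iters):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    def fuzzy_measure(subset, densities, lam):
        prod = 1.0
        for i in subset:
            prod *= 1.0 + lam * densities[i]
        return (prod - 1.0) / lam

    def choquet(scores, densities, lam):
        # Sort subjects by score (descending) and sum score * measure increments.
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        total, prev = 0.0, 0.0
        for k in range(len(order)):
            g = fuzzy_measure(order[:k + 1], densities, lam)
            total += scores[order[k]] * (g - prev)
            prev = g
        return total

    densities = [0.4, 0.3, 0.2]          # illustrative importance of three subjects
    lam = solve_lambda(densities)
    balanced = choquet([80, 80, 80], densities, lam)
    uneven   = choquet([95, 90, 40], densities, lam)
    ```

    The full-set measure is 1 by construction, so a student with equal scores aggregates to exactly that score, while interaction effects shift the ranking of uneven profiles relative to the plain average.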

  20. Instrumentation and Control Needs for Reliable Operation of Lunar Base Surface Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Turso, James; Chicatelli, Amy; Bajwa, Anupa

    2005-01-01

    needed to enable this critical functionality of autonomous operation. It will be imperative to consider instrumentation and control requirements in parallel with system configuration development, so as to identify control-related, as well as integrated system-related, problem areas early and avoid potentially expensive work-arounds. This paper presents an overview of the enabling technologies necessary for the development of reliable, autonomous lunar base nuclear power systems, with an emphasis on system architectures and off-the-shelf algorithms rather than hardware. Autonomy needs are presented in the context of a hypothetical lunar base nuclear power system. The scenarios and applications presented are hypothetical in nature, based on information from open-literature sources, and only intended to provoke thought and provide motivation for the use of autonomous, intelligent control and diagnostics.

  1. Reliability and Validity Evidence of Scores on the Achievement Goal Tendencies Questionnaire in a Sample of Spanish Students of Compulsory Secondary Education

    ERIC Educational Resources Information Center

    Ingles, Candido J.; Garcia-Fernandez, Jose M.; Castejon, Juan L.; Valle, Antonio; Delgado, Beatriz; Marzo, Juan C.

    2009-01-01

    This study examined the reliability and validity evidence drawn from the scores of the Spanish version of the Achievement Goal Tendencies Questionnaire (AGTQ) using a sample of 2,022 (51.1% boys) Spanish students from grades 7 to 10. Confirmatory factor analysis replicated the correlated three-factor structure of the AGTQ in this sample: Learning…

  2. Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation

    SciTech Connect

    Eto, Joseph H.; Undrill, John; Mackin, Peter; Daschmans, Ron; Williams, Ben; Haney, Brian; Hunt, Randall; Ellis, Jeff; Illian, Howard; Martinez, Carlos; O'Malley, Mark; Coughlin, Katie; LaCommare, Kristina Hamachi

    2010-12-20

    An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances, and by restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation, which builds on existing industry practices for frequency control after unexpected loss of a large amount of generation. The report introduces a set of metrics or tools for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric used by this report to assess the adequacy of the primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish the frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
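    The frequency-nadir idea reduces to a simple computation on a post-disturbance frequency trace. The sketch below uses a toy analytic trace and an assumed under-frequency load-shedding (UFLS) set point, not values from the report.

    ```python
    import math

    F0 = 60.0             # scheduled frequency, Hz
    UFLS_SETPOINT = 59.5  # assumed highest under-frequency load-shedding set point

    def frequency_trace(t):
        """Toy interconnection response to loss of generation at t = 0:
        frequency dips, is arrested by primary frequency response, then recovers."""
        return F0 - 0.4 * (math.exp(-0.15 * t) - math.exp(-1.2 * t)) / 0.6

    samples = [frequency_trace(0.1 * k) for k in range(600)]  # 60 s at 10 samples/s
    nadir = min(samples)
    nadir_time = 0.1 * samples.index(nadir)
    # The reliability criterion from the report: the nadir must stay above
    # the highest UFLS set point, so this margin must remain positive.
    nadir_margin = nadir - UFLS_SETPOINT
    ```

    On real systems the trace would come from wide-area synchrophasor measurements rather than a closed-form curve, but the metric (nadir versus highest UFLS set point) is computed the same way.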

  3. Charging performance of automotive batteries-An underestimated factor influencing lifetime and reliable battery operation

    NASA Astrophysics Data System (ADS)

    Sauer, Dirk Uwe; Karden, Eckhard; Fricke, Birger; Blanke, Holger; Thele, Marc; Bohlen, Oliver; Schiffer, Julia; Gerschler, Jochen Bernhard; Kaiser, Rudi

    Dynamic charge acceptance and charge acceptance under constant-voltage charging conditions are essential for lead-acid battery operation for two reasons: energy efficiency in applications with limited charging time (e.g. PV systems or regenerative braking in vehicles) and avoidance of accelerated ageing due to sulphation. Laboratory tests often use charge regimes which are beneficial for battery life but which differ significantly from the operating conditions in the field. Lead-acid batteries in applications with limited charging time and partial-state-of-charge operation are rarely fully charged due to their limited charge acceptance; therefore, they suffer from sulphation and early capacity loss. However, when appropriate charging strategies are applied, most of the lost capacity, and thus performance for the user, may be recovered. The paper presents several aspects of charging regimes and charge acceptance. Theoretical and experimental investigations show that temperature is the most critical parameter: full charging within short times can be achieved only at elevated temperatures. A strong dependency of the charge acceptance during charging pulses on the pre-treatment of the battery can be observed, which is not yet fully understood. These effects have a significant impact on the fuel efficiency of micro-hybrid electric vehicles.

  4. A new topology of fuel cell hybrid power source for efficient operation and high reliability

    NASA Astrophysics Data System (ADS)

    Bizon, Nicu

    2011-03-01

    This paper analyzes a new fuel cell Hybrid Power Source (HPS) topology designed to mitigate the current ripple of the fuel cell inverter system. In the operation of an inverter system that is grid-connected or supplies AC motors in vehicle applications, a current ripple normally appears at the DC port of the fuel cell HPS. Consequently, if mitigation measures are not applied, this ripple is propagated back to the fuel cell stack. Other features of the proposed fuel cell HPS are Maximum Power Point (MPP) tracking, high reliability in operation under sharp power pulses, and improved energy efficiency in high-power applications. The topology uses an inverter system directly powered from the fuel cell stack and a controlled buck current source as a low-power source used for ripple mitigation. The low-frequency (LF) ripple mitigation is based on active control: an anti-ripple current, whose LF power spectrum is almost the same as that of the inverter ripple, is injected at the HPS output node. Consequently, the fuel cell current ripple is mitigated by the designed active control. The ripple mitigation performance is evaluated by indicators defined to measure the mitigation ratio of the low-frequency harmonics. It is shown that good performance is obtained using hysteretic current control, and better still with a dedicated nonlinear controller. Two ways to design the nonlinear control law are proposed. The first is based on simulation trials that help to draw the characteristic of ripple mitigation ratio vs. fuel cell current ripple; the second is based on a Fuzzy Logic Controller (FLC). The ripple factor is up to 1% in both cases.
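    The hysteretic anti-ripple scheme can be illustrated with a bang-bang tracking simulation. All parameters (ripple amplitude and frequency, hysteresis band, slew rate) are illustrative assumptions, not the paper's values.

    ```python
    import math

    # Hysteretic current control sketch: an auxiliary buck source injects the
    # negative of the inverter's low-frequency ripple so the residual ripple
    # seen by the fuel cell shrinks.
    F_RIPPLE = 100.0   # Hz (2x a 50 Hz grid frequency)
    AMP = 2.0          # A, inverter ripple amplitude
    BAND = 0.05        # A, hysteresis band
    SLOPE = 4000.0     # A/s, current slew rate of the buck source inductor
    DT = 1e-5          # s, simulation step

    i_anti, state = 0.0, +1
    residual_peak = 0.0
    for k in range(int(0.1 / DT)):            # simulate 0.1 s
        t = k * DT
        ripple = AMP * math.sin(2 * math.pi * F_RIPPLE * t)
        ref = -ripple                          # anti-ripple reference
        if i_anti < ref - BAND:
            state = +1                         # switch on: raise current
        elif i_anti > ref + BAND:
            state = -1                         # switch off: lower current
        i_anti += state * SLOPE * DT
        if t > 0.02:                           # ignore the start-up transient
            residual_peak = max(residual_peak, abs(ripple + i_anti))

    mitigation_ratio = residual_peak / AMP     # residual vs. original ripple
    ```

    The residual is bounded by roughly the hysteresis band plus one switching step, which is why tightening the band (or moving to a dedicated nonlinear/FLC law, as the paper does) improves the mitigation ratio.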

  5. UAV Research at NASA Langley: Towards Safe, Reliable, and Autonomous Operations

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.

    2016-01-01

    Unmanned Aerial Vehicles (UAV) are fundamental components in several aspects of research at NASA Langley, such as flight dynamics, mission-driven airframe design, airspace integration demonstrations, atmospheric science projects, and more. In particular, NASA Langley Research Center (Langley) is using UAVs to develop and demonstrate innovative capabilities that meet the autonomy and robotics challenges that are anticipated in science, space exploration, and aeronautics. These capabilities will enable new NASA missions such as asteroid rendezvous and retrieval (ARRM), Mars exploration, in-situ resource utilization (ISRU), pollution measurements in historically inaccessible areas, and the integration of UAVs into our everyday lives: all missions of increasing complexity, distance, pace, and/or accessibility. Building on decades of NASA experience and success in the design, fabrication, and integration of robust and reliable automated systems for space and aeronautics, Langley's Autonomy Incubator seeks to bridge the gap between automation and autonomy by enabling safe autonomous operations via onboard sensing and perception systems in both data-rich and data-deprived environments. The Autonomy Incubator is focused on the challenge of mobility and manipulation in dynamic and unstructured environments by integrating technologies such as computer vision, visual odometry, real-time mapping, path planning, object detection and avoidance, object classification, adaptive control, sensor fusion, machine learning, and natural human-machine teaming. These technologies are implemented in an architectural framework developed in-house for easy integration and interoperability of cutting-edge hardware and software.

  6. Design considerations in achieving 1 MW CW operation with a whispering-gallery-mode gyrotron

    SciTech Connect

    Felch, K.; Feinstein, J.; Hess, C.; Huey, H.; Jongewaard, E.; Jory, H.; Neilson, J.; Pendleton, R.; Pirkle, D.; Zitelli, L. )

    1989-09-01

    Varian is developing high-power, CW gyrotrons at frequencies in the range of 100 GHz to 150 GHz for use in electron cyclotron heating applications. Early test vehicles utilizing a TE{sub 15,2,1} interaction cavity have achieved short-pulse power levels of 820 kW and average power levels of 80 kW at 140 GHz. Present tests are aimed at reaching 400 kW under CW operating conditions and up to 1 MW for short pulse durations. Work is also underway on modifications to the present design that will enable power levels of up to 1 MW CW to be achieved. 7 refs., 2 figs.

  7. Reliable operation of the Brookhaven EBIS for highly charged ion production for RHIC and NSRL

    SciTech Connect

    Beebe, E. Alessi, J. Binello, S. Kanesue, T. McCafferty, D. Morris, J. Okamura, M. Pikin, A. Ritter, J. Schoepfer, R.

    2015-01-09

    An Electron Beam Ion Source for the Relativistic Heavy Ion Collider (RHIC EBIS) was commissioned at Brookhaven in September 2010, and since then it has routinely supplied ions for RHIC and the NASA Space Radiation Laboratory (NSRL) as the main source of highly charged ions from helium to uranium. Using three external primary ion sources for 1+ injection into the EBIS and an electrostatic injection beam line, the ion species at the EBIS exit can be switched in 0.2 s. A total of 16 different ion species have been produced to date. The length and the capacity of the ion trap have been increased by 20%, compared with the original design, by extending the trap by two more drift tubes. The fraction of Au{sup 32+} in the EBIS Au spectrum is approximately 12% for 70-80% electron beam neutralization and 8-pulse operation in a 5 Hz train and a 4-5 s super cycle. For single-pulse-per-super-cycle operation and 25% electron beam neutralization, the EBIS achieves the theoretical Au{sup 32+} fractional output of 18%. Long-term stability has been very good, with availability of the beam from the RHIC EBIS during the 2012 and 2014 RHIC runs of approximately 99.8%.

  8. Using operational data to estimate the reliable yields of water-supply wells

    NASA Astrophysics Data System (ADS)

    Misstear, Bruce D. R.; Beeson, Sarah

    The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere.
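    The multi-constraint idea can be sketched as "the most restrictive limit wins": drawdown to the deepest advisable pumping water level (DAPWL), pump capacity, and the licence. The numbers, and the assumption of a linear specific capacity, are illustrative simplifications (the abstract notes that fissure-flow aquifers can deviate substantially from such analytical behaviour).

    ```python
    # Sketch of a reliable-yield estimate from operational data. All levels are
    # in metres below a common datum; yields are in cubic metres per day.

    def reliable_yield(rest_water_level_m, dapwl_m, specific_capacity_m3d_per_m,
                       pump_capacity_m3d, licence_m3d, interference_allowance_m=0.0):
        # Drawdown available before reaching the deepest advisable pumping
        # water level, net of interference from neighbouring wells.
        available_drawdown = dapwl_m - rest_water_level_m - interference_allowance_m
        if available_drawdown <= 0:
            return 0.0
        # Simplifying assumption: yield scales linearly with drawdown via the
        # specific capacity derived from step tests / operational records.
        hydraulic_yield = specific_capacity_m3d_per_m * available_drawdown
        # The reliable yield is capped by the pump and the abstraction licence.
        return min(hydraulic_yield, pump_capacity_m3d, licence_m3d)

    yield_m3d = reliable_yield(
        rest_water_level_m=12.0, dapwl_m=30.0,
        specific_capacity_m3d_per_m=120.0,
        pump_capacity_m3d=2500.0, licence_m3d=2000.0,
        interference_allowance_m=2.0)
    ```

    In this hypothetical case the hydraulic limit (120 x 16 m of available drawdown) binds before either the pump or the licence does.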

  9. Connection Between Internal Structural Stresses of the Ist and the IInd kind and Operational Reliability of the Boiler Heating Surface

    NASA Astrophysics Data System (ADS)

    Lyubimova, Lyudmila; Tabakaev, Roman; Tashlykov, Alexander; Zavorin, Alexander; Zyubanov, Vadim

    2016-02-01

    This paper presents new approaches to forecasting the service life of boiler heating surfaces, based on an analysis of internal structural stresses of the first and second kind that can affect the intragranular and intergranular strength and the reliability of the pipeline in continuous operation, making damage-free operation possible by preventing the opening of zonal cracks.

  10. Highly-reliable operation of 638-nm broad stripe laser diode with high wall-plug efficiency for display applications

    NASA Astrophysics Data System (ADS)

    Yagi, Tetsuya; Shimada, Naoyuki; Nishida, Takehiro; Mitsuyama, Hiroshi; Miyashita, Motoharu

    2013-03-01

    Laser-based displays, from pico projectors to cinema projectors, have gathered much attention because of their wide gamut, low power consumption, and so on. Laser light sources for displays are operated mainly in CW mode, and heat management is one of the big issues; therefore, highly efficient operation is necessary. The light sources for displays are also required to be highly reliable. A 638 nm broad-stripe laser diode (LD) was newly developed for highly efficient and highly reliable operation. An AlGaInP/GaAs red LD suffers from low wall-plug efficiency (WPE) due to electron overflow from the active layer to the p-cladding layer. A large optical confinement factor (Γ) design with AlInP cladding layers is adopted to improve the WPE. This design has a disadvantage for reliable operation, because the large Γ causes high optical density and brings catastrophic optical degradation (COD) at the front facet. To overcome the disadvantage, a window-mirror structure is also adopted in the LD. The LD shows a WPE of 35% at 25°C, a world record, and highly stable operation at 35°C and 550 mW up to 8,000 hours without any catastrophic optical degradation.

  11. Cryosat: ESA'S Ice Explorer Mission, 6 years in operations: status and achievements

    NASA Astrophysics Data System (ADS)

    Parrinello, Tommaso; Maestroni, Elia; Krassenburg, Mike; Badessi, Stefano; Bouffard, Jerome; Frommknecht, Bjorn; Davidson, Malcolm; Fornari, Marco; Scagliola, Michele

    2016-04-01

    CryoSat-2 was launched on 8 April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice, originally over a 3-year period. CryoSat-2 carries an innovative radar altimeter called the Synthetic Aperture Interferometric Radar Altimeter (SIRAL), with two antennas and with extended capabilities to meet the measurement requirements for ice-sheet elevation and sea-ice freeboard. Initial results have shown that the data are of high quality, thanks to an altimeter that is behaving exceptionally well, within its design specifications. The CryoSat mission reached its 6th year of operational life in April 2016. Since its launch it has delivered high-quality products to the worldwide cryospheric and marine community, which is growing every year. The scope of this paper is to describe the current mission status and its main scientific achievements. Topics will also include programmatic highlights and information on the next scientific developments of the mission in its extended period of operations.

  12. Inter- and intra-operator reliability and repeatability of shear wave elastography in the liver: a study in healthy volunteers.

    PubMed

    Hudson, John M; Milot, Laurent; Parry, Craig; Williams, Ross; Burns, Peter N

    2013-06-01

    This study assessed the reproducibility of shear wave elastography (SWE) in the liver of healthy volunteers. Intra- and inter-operator reliability and repeatability were quantified in three different liver segments in a sample of 15 subjects, scanned during four independent sessions (two scans on day 1, two scans 1 wk later) by two operators; a total of 1440 measurements were made. Reproducibility was assessed using the intra-class correlation coefficient (ICC) and a repeated-measures analysis of variance. The shear wave speed was measured and used to estimate Young's modulus using the SuperSonic Imagine Aixplorer. The median Young's modulus measured through the inter-costal space was 5.55 ± 0.74 kPa. Intra-operator reliability for same-day evaluations (ICC = 0.91) was better than inter-operator reliability (ICC = 0.78). Intra-observer agreement decreased when scans were repeated on a different day. Inter-session repeatability was between 3.3% and 9.9% for intra-day repeated scans, compared with 6.5%-12% for inter-day repeated scans. No significant difference was observed in subjects with a body mass index greater or less than 25 kg/m².
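    The ICC values reported above can be computed from a subjects-by-operators table with a two-way random-effects, single-measures form (often written ICC(2,1)). The ratings below are made up to illustrate the calculation, not the study's measurements.

    ```python
    # Two-way random, single-measures intra-class correlation, ICC(2,1):
    # ICC = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
    # where rows are subjects (n) and columns are raters/operators (k).

    def icc_2_1(data):
        n = len(data)          # subjects
        k = len(data[0])       # raters
        grand = sum(sum(row) for row in data) / (n * k)
        row_means = [sum(row) / k for row in data]
        col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-subject
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-rater
        ss_total = sum((data[i][j] - grand) ** 2
                       for i in range(n) for j in range(k))
        ss_err = ss_total - ss_rows - ss_cols                    # residual
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical Young's modulus readings (kPa): 6 subjects x 2 operators.
    ratings = [[5.1, 5.3], [6.0, 5.8], [4.8, 5.0],
               [5.6, 5.9], [6.4, 6.1], [5.0, 5.2]]
    icc = icc_2_1(ratings)
    ```

    With operator differences small relative to the between-subject spread, the ICC comes out high, which is the pattern behind the study's 0.91 intra- versus 0.78 inter-operator values.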

  13. 75 FR 81157 - Version One Regional Reliability Standard for Transmission Operations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-27

    ... as soon as possible after identification of a transfer path exceeding its SOL/IROL'' in accordance... explains: Whereas, NERC Reliability Standard TOP-007-0--Reporting SOL and IROL Violations Requirement R2... based on previously conducted contingency studies. The ``WECC Philosophy of SOL and IROL...

  14. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    ERIC Educational Resources Information Center

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  15. Towards an Operational Definition of Effective Co-Teaching: Instrument Development, Validity, and Reliability

    ERIC Educational Resources Information Center

    La Monte, Michelle Evonne

    2012-01-01

    This study focused on developing a valid and reliable instrument that can not only identify successful co-teaching, but also the professional development needs of co-teachers and their administrators in public schools. Two general questions about the quality of co-teaching were addressed in this study: (a) How well did descriptors within each of…

  16. Emotional-volitional components of operator reliability. [sensorimotor function testing under stress]

    NASA Technical Reports Server (NTRS)

    Mileryan, Y. A.

    1975-01-01

    Sensorimotor function testing in a tracking task under stressful working conditions established a psychological characterization of the successful aviation pilot: motivation significantly increased the reliability and effectiveness of their work. Their activities were aimed at suppressing weariness and the feeling of fear caused by the stress factors; they showed patience, endurance, persistence, and a capacity for lengthy volitional efforts.

  17. The Role of Demand Resources In Regional Transmission Expansion Planning and Reliable Operations

    SciTech Connect

    Kirby, Brendan J

    2006-07-01

    Investigating the role of demand resources in regional transmission planning has provided mixed results. On one hand there are only a few projects where demand response has been used as an explicit alternative to transmission enhancement. On the other hand there is a fair amount of demand response in the form of energy efficiency, peak reduction, emergency load shedding, and (recently) demand providing ancillary services. All of this demand response reduces the need for transmission enhancements. Demand response capability is typically (but not always) factored into transmission planning as a reduction in the load which must be served. In that sense demand response is utilized as an alternative to transmission expansion. Much more demand response is used (involuntarily) as load shedding under extreme conditions to prevent cascading blackouts. The amount of additional transmission and generation that would be required to provide the current level of reliability if load shedding were not available is difficult to imagine and would be impractical to build. In a very real sense demand response solutions are equitably treated in every region - when proposed, demand response projects are evaluated against existing reliability and economic criteria. The regional councils, RTOs, and ISOs identify needs. Others propose transmission, generation, or responsive load based solutions. Few demand response projects get included in transmission enhancement plans because few are proposed. But this is only part of the story. Several factors are responsible for the current very low use of demand response as a transmission enhancement alternative. First, while the generation, transmission, and load business sectors each deal with essentially the same amount of electric power, generation and transmission companies are explicitly in the electric power business but electricity is not the primary business focus of most loads. This changes the institutional focus of each sector. Second

  18. Deep Space Network equipment performance, reliability, and operations management information system

    NASA Technical Reports Server (NTRS)

    Cooper, T.; Lin, J.; Chatillon, M.

    2002-01-01

    The Deep Space Mission System (DSMS) Operations Program Office and the Deep Space Network (DSN) facilities utilize the Discrepancy Reporting Management System (DRMS) to collect, process, communicate and manage data discrepancies, equipment resets, and physical equipment status, and to maintain an internal Station Log. A collaborative development effort between JPL and the Canberra Deep Space Communication Complex delivered a system to support DSN Operations.

  19. Construction and operation of the 4MWth twin-bed PFBC pilot plant for operation reliability test

    SciTech Connect

    Oki, Katsuya; Nishimura, Tsukasa; Yoshioka, Susumu; Yokoyama, Toshiaki

    1995-12-31

    Babcock-Hitachi (BHK), together with Hitachi, Ltd., has been involved in the development and planning of a large-scale coal-fired PFBC combined cycle power plant. Based on its past experience in operating a 150 kWth bench-scale PFBC, a 2 MWth PFBC with a 4 m bed height, and a 15 MWth PFBC, several technological ideas are being incorporated into the design of large-scale PFBC boilers. These ideas include: (1) a twin-bed concept in the steam generator for improved efficiency; (2) employment of SNCR and SCR to meet stringent NOx regulations. The former idea is to design a steam generator with two beds of the same dimensions, each contained in a separate pressure vessel. One bed contains the evaporator and superheater, while the other contains the reheater and superheater. In this setup, reheater steam temperature can be controlled by adjusting the bed height independently of the other bed, without the usual spray tempering strategy under steady-state conditions; thus the loss caused by spraying can be eliminated and the overall plant efficiency improved. Aiming at demonstrating the operation and control of a PFBC that uses these technologies and at improving unit operability, a 4 MWth (two 2 MWth beds) PFBC pilot plant with the necessary auxiliary equipment, such as a paste fuel system, an ash withdrawal system, and a gas cleaning system, was planned and constructed in 1994. This paper presents the design features of the 4 MWth PFBC pilot plant and the operating experience obtained during the initial test period. A brief review of a future large-scale PFBC boiler design concept is also included.

  20. Children's effortful control and academic achievement: do relational peer victimization and classroom participation operate as mediators?

    PubMed

    Valiente, Carlos; Swanson, Jodi; Lemery-Chalfant, Kathryn; Berger, Rebecca H

    2014-08-01

    Given that early academic achievement is related to numerous developmental outcomes, understanding processes that promote early success in school is important. This study was designed to clarify how students' (N=291; M age in fall of kindergarten=5.66 years, SD=0.39 year) effortful control, relational peer victimization, and classroom participation relate to achievement, as students progress from kindergarten to first grade. Effortful control and achievement were assessed in kindergarten, classroom participation and relational peer victimization were assessed in the fall of first grade, and achievement was reassessed in the spring of first grade. Classroom participation, but not relational peer victimization, mediated relations between effortful control and first grade standardized and teacher-rated achievement, controlling for kindergarten achievement. Findings suggest that aspects of classroom participation, such as the ability to work independently, may be useful targets of intervention for enhancing academic achievement in young children. PMID:25107413

  1. Improving Reliability of High Power Quasi-CW Laser Diode Arrays Operating in Long Pulse Mode

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Meadows, Byron L.; Barnes, Bruce W.; Lockard, George E.; Singh, Upendra N.; Kavaya, Michael J.; Baker, Nathaniel R.

    2006-01-01

    Operating high power laser diode arrays in long pulse regime of about 1 msec, which is required for pumping 2-micron thulium and holmium-based lasers, greatly limits their useful lifetime. This paper describes performance of laser diode arrays operating in long pulse mode and presents experimental data of the active region temperature and pulse-to-pulse thermal cycling that are the primary cause of their premature failure and rapid degradation. This paper will then offer a viable approach for determining the optimum design and operational parameters leading to the maximum attainable lifetime.

  2. RELIABILITY MODELS OF AGING PASSIVE COMPONENTS INFORMED BY MATERIALS DEGRADATION METRICS TO SUPPORT LONG-TERM REACTOR OPERATIONS

    SciTech Connect

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.

    2012-05-01

    This paper describes a methodology for the synthesis of nuclear power plant service data with expert-elicited materials degradation information to estimate the future failure rates of passive components. This method should be an important resource for long-term plant operations and reactor life extension. Conventional probabilistic risk assessments (PRAs) are not well suited to addressing long-term reactor operations. Since passive structures and components are among those for which replacement can be least practical, they might be expected to contribute increasingly to risk in an aging plant; yet, passives receive limited treatment in PRAs. Furthermore, PRAs produce only snapshots of risk based on the assumption of time-independent component failure rates. This assumption is unlikely to be valid in aging systems. The treatment of aging passive components in PRA presents challenges. Service data to quantify component reliability models are sparse, and this is exacerbated by the greater data demands of age-dependent reliability models. Another factor is that there can be numerous potential degradation mechanisms associated with the materials and operating environment of a given component. This deepens the data problem since risk-informed management of component aging will demand an understanding of the long-term risk significance of individual degradation mechanisms. In this paper we describe a Bayesian methodology that integrates metrics of materials degradation susceptibility with available plant service data to estimate age-dependent passive component reliabilities. Integration of these models into conventional PRA will provide a basis for materials degradation management informed by predicted long-term operational risk.
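    The core Bayesian building block for sparse service data is a conjugate update of a constant failure rate; a minimal sketch is below. The paper's full methodology additionally conditions the prior on materials-degradation susceptibility metrics, which is not reproduced here, and the numbers in the usage note are illustrative only.

    ```python
    def posterior_failure_rate(prior_shape, prior_rate, failures, exposure_years):
        """Conjugate gamma-Poisson update of a component failure rate.

        Prior: rate ~ Gamma(prior_shape, prior_rate); data: `failures` events
        observed over `exposure_years` of fleet service time. Returns the
        posterior mean failure rate (events per year).
        """
        post_shape = prior_shape + failures
        post_rate = prior_rate + exposure_years
        return post_shape / post_rate
    ```

    For example, a Gamma(1, 100) prior updated with 2 failures over 400 component-years gives a posterior mean of 3/500 = 0.006 failures per year.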

  3. Optimizing the efficiency and reliability of fluid system operations: An ongoing process

    SciTech Connect

    Casada, D.A.

    1996-05-01

    At most industrial facilities, motor loads associated with pumps and fans are the dominant electric energy users. As plant loads and consequent system functions change, the optimal operating conditions for these components change. In response, modifications to system operations are often made with only one consideration in mind - keeping the system on line. At the Y-12 plant in Oak Ridge, a fluid system energy efficiency improvement methodology is being developed to facilitate the systematic review and modification of system design and operations to increase operational efficiency. Since the bulk of the changes are associated with reducing the numbers and/or loads of motor-driven pumps or fans, there are direct benefits in reduced electrical generation and consequent waste heat production and air emissions. This paper will discuss the types of inefficiencies that tend to evolve as system functional requirements change and equipment ages, describe some of the fundamental parameters that are useful in identifying these inefficiencies, provide examples of design and operating changes being made, and detail the resultant savings in energy.

  4. The role of reliability graph models in assuring dependable operation of complex hardware/software systems

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Davis, Gloria J.; Pedar, A.

    1991-01-01

    The complexity of computer systems currently being designed for critical applications in the scientific, commercial, and military arenas requires the development of new techniques for utilizing models of system behavior in order to assure 'ultra-dependability'. The complexity of these systems, such as Space Station Freedom and the Air Traffic Control System, stems from their highly integrated designs containing both hardware and software as critical components. Reliability graph models, such as fault trees and digraphs, are used frequently to model hardware systems. Their applicability for software systems has also been demonstrated for software safety analysis and the analysis of software fault tolerance. This paper discusses further uses of graph models in the design and implementation of fault management systems for safety critical applications.

  5. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services and represent a critical national resource to operational and research communities affected by climate, weather and water. NOMADS is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, enabling collections of data in disparate places to be simultaneously accessed, with results processed on servers and clients to produce a needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks including
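    The "slicing and dicing" that keeps transfers small is expressed in OPeNDAP as a DAP2 constraint expression appended to the dataset URL. A small sketch of building such an expression (the variable name and index ranges are illustrative, not actual NOMADS dataset fields):

    ```python
    def opendap_constraint(variable, hyperslabs):
        """Build a DAP2 constraint expression for server-side sub-setting.

        Each hyperslab is (start, stride, stop), inclusive, one per dimension.
        Appended after '?' on a dataset URL, it asks the OPeNDAP/GDS server to
        return only the requested slice rather than the whole field.
        """
        dims = "".join(f"[{start}:{stride}:{stop}]"
                       for start, stride, stop in hyperslabs)
        return variable + dims
    ```

    For instance, `opendap_constraint("tmp2m", [(0, 1, 3), (100, 1, 150)])` yields `tmp2m[0:1:3][100:1:150]`, requesting four time steps over a sub-region.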

  6. An Initiative Toward Reliable Long-Duration Operation of Diode Lasers in Space

    NASA Technical Reports Server (NTRS)

    Tratt, David M.; Amzajerdian, Farzin; Stephen, Mark A.; Shapiro, Andrew A.

    2006-01-01

    This viewgraph presentation reviews the workings of the Laser Diode Array (LDA) working group. The group facilitates focused interaction between the LDA user and provider communities, and it will author a standards document for the specification and qualification of LDAs for operation in the space environment. The presentation also reviews the NASA test and evaluation facilities that are available to the community.

  7. Ergodynamics in the Reliability of Power Plant Operators and Prospective Hybrid Intelligence Systems.

    PubMed

    Venda; Chachko

    1996-01-01

    Based on ergodynamics and the hybrid intelligence theory, an analysis of the nuclear power plant operator's performance is given at the levels of strategies, tactics, and actions. Special attention is paid to the strategies used in the course of severe accidents at nuclear power plants. Data from Ukrainian and Russian power plants and training centres, and from accidents around the world were collected and processed. It is shown that in an emergency it is essential for the human operator to be flexible. This flexibility includes two main training and personal factors: a large set of strategies and tactics the operator manages to use, and quick transformations between the strategies (tactics). It was also found that some emergency tasks are too complicated: They require simultaneous use of different strategies, with time strictly limited by nuclear power plant dynamics. Those tasks cannot be successfully solved by any individual operator. Hybrid intelligence systems involving different specialists should be used in those cases in order to avoid failures in emergency problem solving and macroergonomic organizational design.

  8. Method and algorithm of ranking boiler plants at block electric power stations by the criterion of operation reliability and profitability

    NASA Astrophysics Data System (ADS)

    Farhadzadeh, E. M.; Muradaliyev, A. Z.; Farzaliyev, Y. Z.

    2015-10-01

    A method and an algorithm for ranking boiler installations based on their technical and economic indicators are proposed. One of the basic conditions for ranking is the independence of the technical and economic indicators. The assessment of their interrelation was carried out using the correlation coefficient. The analysis of calculation data has shown that the interrelation is stable in value and sign only for those indicators that have an evident relationship with each other. One of the calculation steps is the normalization of the quantitative estimates of the technical and economic indicators, which makes it possible to eliminate differences in dimensions and indicator units. The analysis of the known methods of normalization allows one to recommend the relative deviation from the average value as the normalized value, and to use the arithmetic mean of the normalized values of the independent indicators of each boiler installation as an integrated index of operational reliability and profitability. The fundamental differences from the existing approach to assessing the "weak components" of a boiler installation and the quality of monitoring of its operating regimes are that the given approach takes into account the reliability and profitability of the operation of all other analogous boiler installations of an electric power station; it also introduces competition in quality of control among the operating personnel of separate boiler installations and is aimed at encouraging increased quality of maintenance and repairs.
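    The normalization and aggregation steps described above can be sketched directly: each indicator becomes its relative deviation from the fleet mean, and the integrated index is the arithmetic mean of the normalized values. The sign convention (a higher raw value is better for every indicator) is an assumption of this sketch.

    ```python
    def integrated_index(indicators):
        """Integrated reliability-and-profitability index per boiler unit.

        `indicators` has one row per unit and one column per independent
        technical-economic indicator. Normalizing by the relative deviation
        from the fleet mean makes the columns dimensionless and comparable.
        """
        n, m = len(indicators), len(indicators[0])
        col_means = [sum(row[j] for row in indicators) / n for j in range(m)]
        index = []
        for row in indicators:
            normalized = [(row[j] - col_means[j]) / col_means[j] for j in range(m)]
            index.append(sum(normalized) / m)
        return index

    def rank_units(indicators):
        """Return unit positions ordered best-first by integrated index."""
        idx = integrated_index(indicators)
        return sorted(range(len(idx)), key=lambda i: -idx[i])
    ```

    With three units scoring [[100, 0.90], [110, 0.95], [90, 0.85]] on two indicators, unit 1 ranks first and unit 2 last.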

  9. Review and evaluation of Transamerica Delaval, Inc. , diesel engine reliability and operability: Grand Gulf Nuclear Station Unit 1

    SciTech Connect

    Not Available

    1984-07-01

    PNL and its consultants conclude that the TDI diesel engines at the GGNS have the needed operability and reliability to fulfill their intended (auxiliary) emergency power function for the first refueling cycle. This conclusion is reached with a number of understandings regarding limits to the engine requirements, NRC concurrence with MP and L findings/conclusions regarding items to be supplied to NRC, limitations on the engine Brake Mean Effective Pressure (BMEP), and MP and L's implementation of the modifications to their proposed surveillance and maintenance program.

  10. Highly reliable wind-rolling triboelectric nanogenerator operating in a wide wind speed range

    NASA Astrophysics Data System (ADS)

    Yong, Hyungseok; Chung, Jihoon; Choi, Dukhyun; Jung, Daewoong; Cho, Minhaeng; Lee, Sangmin

    2016-09-01

    Triboelectric nanogenerators are promising energy harvesting devices that generate electricity from the triboelectric effect and electrostatic induction. This study demonstrates the harvesting of wind energy by a wind-rolling triboelectric nanogenerator (WR-TENG). The WR-TENG generates electricity from wind as a lightweight dielectric sphere rotates along the vortex whistle substrate. Increasing the kinetic energy of a dielectric converted from the wind energy is a key factor in fabricating an efficient WR-TENG. Computational fluid dynamics (CFD) analysis is introduced to estimate the precise movements of wind flow and to create a vortex flow by adjusting the parameters of the vortex whistle shape to optimize the design parameters to increase the kinetic energy conversion rate. WR-TENG can be utilized as both a self-powered wind velocity sensor and a wind energy harvester. A single unit of WR-TENG produces open-circuit voltage of 11.2 V and closed-circuit current of 1.86 μA. Additionally, findings reveal that the electrical power is enhanced through multiple electrode patterns in a single device and by increasing the number of dielectric spheres inside WR-TENG. The wind-rolling TENG is a novel approach for a sustainable wind-driven TENG that is sensitive and reliable across wind flows to harvest wasted wind energy in the near future.

  11. Highly reliable wind-rolling triboelectric nanogenerator operating in a wide wind speed range

    PubMed Central

    Yong, Hyungseok; Chung, Jihoon; Choi, Dukhyun; Jung, Daewoong; Cho, Minhaeng; Lee, Sangmin

    2016-01-01

    Triboelectric nanogenerators are promising energy harvesting devices that generate electricity from the triboelectric effect and electrostatic induction. This study demonstrates the harvesting of wind energy by a wind-rolling triboelectric nanogenerator (WR-TENG). The WR-TENG generates electricity from wind as a lightweight dielectric sphere rotates along the vortex whistle substrate. Increasing the kinetic energy of a dielectric converted from the wind energy is a key factor in fabricating an efficient WR-TENG. Computational fluid dynamics (CFD) analysis is introduced to estimate the precise movements of wind flow and to create a vortex flow by adjusting the parameters of the vortex whistle shape to optimize the design parameters to increase the kinetic energy conversion rate. WR-TENG can be utilized as both a self-powered wind velocity sensor and a wind energy harvester. A single unit of WR-TENG produces open-circuit voltage of 11.2 V and closed-circuit current of 1.86 μA. Additionally, findings reveal that the electrical power is enhanced through multiple electrode patterns in a single device and by increasing the number of dielectric spheres inside WR-TENG. The wind-rolling TENG is a novel approach for a sustainable wind-driven TENG that is sensitive and reliable across wind flows to harvest wasted wind energy in the near future. PMID:27653976

  12. A reliable and efficient method for deleting operational sequences in PACs and BACs

    PubMed Central

    Nistala, Ravi; Sigmund, Curt D.

    2002-01-01

    P1-derived artificial chromosomes (PACs) and bacterial artificial chromosomes (BACs) have become very useful as tools to study gene expression and regulation in cells and in transgenic mice. They carry large fragments of genomic DNA (≥100 kb) and therefore may contain all of the cis-regulatory elements required for expression of a gene. Because of this, even when inserted randomly in the genome, they can emulate the native environment of a gene resulting in a tightly regulated pattern of expression. Because these large genomic clones often contain DNA sequences which can manipulate chromatin at the local level, they become immune to position effects which affect expression of smaller transgenes, and thus their expression is proportional to copy number. Transgenic mice containing large BACs and PACs have become excellent models to examine the regulation of gene expression. Their usefulness would certainly be increased if easy and efficient methods are developed to manipulate them. We describe herein a method to make deletion mutations reliably and efficiently using a novel modification of the Chi-stimulated homologous recombination method. Specifically, we generated and employed a Lox511 ‘floxed’ CAM resistance marker that first affords selection for homologous recombination in Escherichia coli, and then can be easily deleted leaving only a single Lox511 site as the footprint. PMID:12000846

  13. Summary of the Optics, IR, Injection, Operations, Reliability and Instrumentation Working Group

    SciTech Connect

    Wienands, U.; Funakoshi, Y.; /KEK, Tsukuba

    2012-04-20

    The facilities reported on are all in a fairly mature state of operation, as evidenced by the very detailed studies and correction schemes that all groups are working on. First- and higher-order aberrations are diagnosed and planned to be corrected. Very detailed beam measurements are done to get a global picture of the beam dynamics. More than other facilities, the high-luminosity colliders are struggling with experimental background issues, mitigation of which is a permanent challenge. The working group dealt with a very wide range of practical issues which limit performance of the machines and compared their techniques of operations and their performance. We anticipate this to be a first attempt. In a future workshop in this series, we propose to attempt more fundamental comparisons of each machine, including design parameters. For example, DAPHNE and KEKB employ a finite crossing angle. The minimum value of β*_y attainable at KEKB seems to relate to this scheme. Effectiveness of compensation solenoids and turn-by-turn BPMs etc. should be examined in more detail. In the near future, CESR-C and VEPP-2000 will start their operation. We expect to hear important new experiences from these machines; in particular VEPP-2000 will be the first machine to have adopted round beams. At SLAC and KEK, next generation B Factories are being considered. It will be worthwhile to discuss the design issues of these machines based on the experiences of the existing factory machines.

  14. Use of a single miniplate to achieve intra-operative intermaxillary fixation.

    PubMed

    Rai, Anshul; Arora, Aakash; Bhradwaj, Vikrant

    2015-06-01

    There are different treatment modalities mentioned in the literature for achieving intermaxillary fixation (IMF). Arch bars are time consuming, can damage the periodontium, and make maintenance of oral hygiene poor. Eyelets are not suitable for dentitions that carry extensive crown and bridge work. IMF screws can cause root damage. To avoid all these complications we recommend the use of a single miniplate for achieving IMF. PMID:26028877

  15. A highly reliable, high performance open avionics architecture for real time Nap-of-the-Earth operations

    NASA Technical Reports Server (NTRS)

    Harper, Richard E.; Elks, Carl

    1995-01-01

    An Army Fault Tolerant Architecture (AFTA) has been developed to meet real-time fault tolerant processing requirements of future Army applications. AFTA is the enabling technology that will allow the Army to configure existing processors and other hardware to provide high throughput and ultrahigh reliability necessary for TF/TA/NOE flight control and other advanced Army applications. A comprehensive conceptual study of AFTA has been completed that addresses a wide range of issues including requirements, architecture, hardware, software, testability, producibility, analytical models, validation and verification, common mode faults, VHDL, and a fault tolerant data bus. A Brassboard AFTA for demonstration and validation has been fabricated, and two operating systems and a flight-critical Army application have been ported to it. Detailed performance measurements have been made of fault tolerance and operating system overheads while AFTA was executing the flight application in the presence of faults.

  16. A Reliability Study for Robust Planar GaAs Varactors: Design and Operational Considerations

    NASA Technical Reports Server (NTRS)

    Maiwald, Frank; Schlecht, Erich; Ward, John; Lin, Robert; Leon, Rosa; Pearson, John; Mehdi, Imran

    2003-01-01

    Preliminary conclusions include: limits for reverse currents cannot be set; based on current data we want to avoid any reverse-bias current, and we know 1 μA is too high. Leakage current is suppressed when the device is operated at 120 K. Migration and verification: (a) the reverse-bias voltage will be limited; (b) health check with the I/V curve: (1) the minimal reverse voltage shall be 0.75x the calculated breakdown voltage Vbr; (2) degradation of the reverse-bias voltage at a given current will be used as an indication of ESD incidents or other damage (high RF power, heat); (3) diode parameters are calculated to verify the initial health check in the forward direction. RF output power starts to degrade only when the diode I/V curve is very strongly degraded, as experienced on the 400 GHz and 200 GHz doublers.

  17. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris

    PubMed Central

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may lack reproducibility, which introduces significant errors in the quantification of intracellular components. In this work, an approach consisting in the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important widening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling and cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative, non-complex approach developed to evaluate the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to reach not only reliable quantification but also recovery of intracellular components from cell factories of interest. PMID:26284241
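    The idea of combining three independent reporting indicators into one overall KPI can be sketched as follows. The aggregation scheme here (each indicator pre-scaled to [0, 1] against its maximum observed value, equal weights, arithmetic mean) is an assumption for illustration; the record above does not state the paper's exact formula.

    ```python
    def overall_kpi(absorbance_drop, protein_release, phosphatase_release):
        """Combine three cell-disruption indicators into one overall KPI.

        Each argument is assumed to be a fraction in [0, 1] of the maximum
        achievable value for that indicator; the result is their mean.
        """
        indicators = (absorbance_drop, protein_release, phosphatase_release)
        for v in indicators:
            if not 0.0 <= v <= 1.0:
                raise ValueError("indicators must be pre-scaled to [0, 1]")
        return sum(indicators) / len(indicators)
    ```

    Disruption settings would then be compared by this single scalar rather than by three indicators that may disagree.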

  18. Factors in reliable treatment plant operation for the production of safe water.

    PubMed

    Hendry, Bruce A

    2010-08-01

    This contribution to the International Congress on Production of Safe Water, Izmir, Turkey, 20-24 January, 2009, relates to general aspects of a water supply undertaking rather than to particular technologies or chemistries for water treatment. The paper offers a "creative problem solving" approach following Fogler and LeBlanc (Strategies for creative problem solving. Prentice Hall, NJ, 1995) as a model for generating sustainable solutions when water quality and safety problems arise. Such a structured approach presents a systematic methodology that can promote communication and goal-sharing across the inter-related, but often isolated and dispersed, functions of water scientists and researchers, engineers, operations managers, government departments and communities. A problem-solving strategy, or "heuristic", invokes five main steps (define; generate; decide; implement; evaluate). Associated with each step are various creative and enabling techniques, many of which are quite familiar to us in one form or another, but which we can use more effectively in combination and through our increased awareness and practice. For example, taking a fresh view of a problem can be promoted by a variety of "lateral thinking" tools. First-hand investigation of a problem can trigger new thinking about the real problem and its origins. A good strategy implementation will always address each and every step (though not necessarily every possible technique) and will use them at various stages in the search for and implementation of solutions. The creative nature of our experience with a problem-solving heuristic develops our facility to cope better with complex formal situations, as well as with less formal or everyday problem situations. A few anecdotes are presented that illustrate some of the author's experiences relating to factors involved in safe water supply. Here, the term "factors" may signify people and organisations as agents, as well as meaning those aspects of a problem

  19. Are surgical scrubbing and pre-operative disinfection of the skin in orthopaedic surgery reliable?

    PubMed

    Salvi, M; Chelo, C; Caputo, F; Conte, M; Fontana, C; Peddis, G; Velluti, C

    2006-01-01

    This study attempts to establish the actual effectiveness of pre-surgical disinfection of the patient's skin and the surgeon's hands. We evaluated bacterial density and composition on the skin of 15 patients undergoing knee arthroscopy and on the left hand of two surgeons after standard disinfection with povidone-iodine. Three samples were taken after the first 6-min scrub of the first surgical operation, from the periungual space of the 1st finger, from the interdigital space between the 2nd and 3rd fingers, and from the transverse palmar crease of the left hand of two surgeons for seven consecutive surgical sessions, for a total of 42 samples; two further samples were taken from the pre-patellar skin and from the popliteal skin of 15 patients undergoing knee arthroscopy, for a total of 30 samples. Pre-surgical handwashing and disinfection procedures were identical in each case. Pre-surgical disinfection of the patient's skin with povidone-iodine was shown to be completely effective, with 100% of samples negative. Samples taken from the interdigital space and the palmar crease (100% of samples negative) demonstrated the efficacy of disinfection of the surgeon's hands with povidone-iodine, while the periungual space was contaminated in 50% of the samples. The bacterial strains isolated belonged to the genus Staphylococcus in 100% of the cases, with pathogenic strains in 29.6% of the cases. Standard pre-surgical disinfection of skin in areas easily accessible to the disinfectant is sufficient in itself to guarantee thorough sanitization. Standard scrubbing of the surgeon's hands is insufficient to eliminate bacterial contamination, including pathogenic germs, from the periungual space, where it is probably difficult for the disinfectant to come into contact with the skin. PMID:16059708

  1. FEMP's O & M Best Practices Guide: A Guide to Achieving Operational Efficiency

    SciTech Connect

    Sullivan, Gregory P.; Melendez, Aldo P.; Pugh, Ray

    2002-10-01

    FEMP's O & M Best Practices Guide (O & M BPG) highlights O & M programs targeting energy efficiency that are estimated to save between 5% and 20% on energy bills without a significant capital investment. Depending on the Federal site, these savings can represent thousands to hundreds of thousands of dollars each year, and many can be achieved with minimal cash outlays. In addition to energy/resource savings, a well-run O & M program will (1) increase the safety of all staff, because properly maintained equipment is safer equipment; (2) ensure the comfort, health and safety of building occupants through properly functioning equipment providing a healthy indoor environment; (3) confirm that the design life expectancy of equipment is achieved; and (4) facilitate compliance with Federal legislation such as the Clean Air Act and the Clean Water Act. The focus of this guide is to provide the Federal O & M/Energy manager and practitioner with information and actions aimed at achieving these savings and benefits. The O & M BPG was developed under the direction of the Department of Energy's Federal Energy Management Program (FEMP).

  2. Reliability Analysis of Drilling Operation in Open Pit Mines / Analiza niezawodności urządzeń wiertniczych wykorzystywanych w kopalniach odkrywkowych

    NASA Astrophysics Data System (ADS)

    Rahimdel, M. J.; Ataei, M.; Kakaei, R.; Hoseinie, S. H.

    2013-06-01

    Considering the high investment and operation costs, reliability analysis of mining machinery is essential to achieve lean operation and to prevent unwanted stoppages. In open pit mining, drilling, as the initial stage of the exploitation operations, has a significant role in the subsequent stages. Failure of drilling machines causes total delay in the blasting operation. In this paper, the reliability of the drilling operation has been analyzed using the Markov method. The failure and operation data of four heavy rotary drilling machines in the Sarcheshme copper mine in Iran have been used as a case study. Failure rates and repair rates of all machines were calculated from the available data. Then, 16 possible operation states were defined, and the probability of the drilling fleet being in each state was calculated using Markov theory. The results showed a 77.2% probability that all machines in the fleet were in operational condition. This means that, assuming 360 working days per year, the drilling operation will be in a reliable condition for 277.92 days.
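
    The fleet-availability arithmetic above can be sketched with a minimal steady-state model: if each machine alternates between up and down states with constant failure rate λ and repair rate μ, its long-run availability is μ/(λ+μ), and for independent identical machines the probability that the whole fleet is up is that availability raised to the fleet size. The rates below are illustrative placeholders, not the case study's values.

```python
# Steady-state availability of a fleet of independent two-state (up/down)
# Markov machines. The rates below are illustrative, not from the case study.

def machine_availability(failure_rate, repair_rate):
    """Long-run probability that a single machine is operational."""
    return repair_rate / (failure_rate + repair_rate)

def fleet_all_up(failure_rate, repair_rate, n_machines):
    """Probability that all n independent, identical machines are up at once."""
    return machine_availability(failure_rate, repair_rate) ** n_machines

lam, mu = 0.01, 0.15            # per-hour failure and repair rates (hypothetical)
p_all_up = fleet_all_up(lam, mu, 4)
reliable_days = p_all_up * 360  # expected days per 360-day year with the fleet up

print(f"single-machine availability: {machine_availability(lam, mu):.4f}")
print(f"P(all 4 up): {p_all_up:.3f} -> {reliable_days:.1f} of 360 days")
```

    With the paper's reported all-up probability of 0.772, the same arithmetic gives 0.772 × 360 ≈ 277.9 days, matching the abstract.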

  3. The medial and lateral epicondyle as a reliable landmark for intra-operative joint line determination in revision knee arthroplasty

    PubMed Central

    Ozkurt, B.; Sen, T.; Cankaya, D.; Kendir, S.; Basarır, K.; Tabak, Y.

    2016-01-01

    Objectives The purpose of this study was to develop an accurate, reliable and easily applicable method for determining the anatomical location of the joint line during revision knee arthroplasty. Methods The transepicondylar width (TEW), the perpendicular distance between the medial and lateral epicondyles and the distal articular surfaces (DMAD, DLAD) and the distance between the medial and lateral epicondyles and the posterior articular surfaces (PMAD, PLAD) were measured in 40 knees from 20 formalin-fixed adult cadavers (11 male and nine female; mean age at death 56.9 years, sd 9.4; 34 to 69). The ratios of the DMAD, PMAD, DLAD and PLAD to TEW were calculated. Results The mean TEW, DMAD, PMAD, DLAD and PLAD were 82.76 mm (standard deviation (sd) 7.74), 28.95 mm (sd 3.3), 28.57 mm (sd 3), 23.97 mm (sd 3.27) and 24.42 mm (sd 3.14), respectively. The ratios between the TEW and the articular distances (DMAD/TEW, DLAD/TEW, PMAD/TEW and PLAD/TEW) were calculated and their means were 0.35 (sd 0.02), 0.34 (sd 0.02), 0.28 (sd 0.03) and 0.29 (sd 0.03), respectively. Conclusion This method provides a simple, reproducible and reliable technique enabling accurate anatomical joint line restoration during revision total knee arthroplasty. Cite this article: B. Ozkurt, T. Sen, D. Cankaya, S. Kendir, K. Basarır, Y. Tabak. The medial and lateral epicondyle as a reliable landmark for intra-operative joint line determination in revision knee arthroplasty. Bone Joint Res 2016;5:280–286. DOI: 10.1302/2046-3758.57.BJR-2016-0002.R1. PMID:27388715
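
    The practical use of these ratios is simple arithmetic: measure the transepicondylar width intra-operatively and multiply by the mean ratios to estimate where the distal and posterior articular surfaces should sit. A minimal sketch using the mean ratios reported in the abstract (the function name is ours):

```python
# Estimate joint line distances (mm) from a measured transepicondylar
# width (TEW), using the mean ratios reported in the abstract.

RATIOS = {"DMAD": 0.35, "DLAD": 0.34, "PMAD": 0.28, "PLAD": 0.29}

def joint_line_estimates(tew_mm):
    """Return estimated articular-surface distances (mm) for a given TEW."""
    return {name: round(ratio * tew_mm, 2) for name, ratio in RATIOS.items()}

# Using the study's mean TEW of 82.76 mm: the DMAD estimate,
# 0.35 * 82.76 = 28.97 mm, is close to the measured mean of 28.95 mm.
print(joint_line_estimates(82.76))
```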

  4. Advanced, Integrated Control for Building Operations to Achieve 40% Energy Saving

    SciTech Connect

    Lu, Yan; Song, Zhen; Loftness, Vivian; Ji, Kun; Zheng, Sam; Lasternas, Bertrand; Marion, Flore; Yuebin, Yu

    2012-10-15

    We developed and demonstrated a software-based integrated advanced building control platform called Smart Energy Box (SEB), which can coordinate building subsystem controls, integrate a variety of energy optimization algorithms, and provide proactive and collaborative energy management and control for building operations using weather and occupancy information. The integrated control system is a low-cost solution and also features: a scalable, component-based architecture that allows a solution to be built from the needed components for different building control system configurations; an open architecture with a central data repository for data exchange among runtime components; extensibility to accommodate a variety of communication protocols; optimal building control for central loads, distributed loads, and onsite energy resources; and a web server as a loosely coupled way to engage both building operators and building occupants in collaboration for energy conservation. Based on the open platform of SEB, we investigated and evaluated a variety of operation and energy-saving control strategies on the Carnegie Mellon University Intelligent Workplace, which is equipped with alternative cooling/heating/ventilation/lighting methods, including radiant mullions, radiant cooling/heating ceiling panels, cool waves, a dedicated ventilation unit, motorized windows and blinds, and external louvers. Based on the validation results of these control strategies, they were integrated into SEB in a collaborative and dynamic way. This advanced control system was programmed and computer-tested with a model of the Intelligent Workplace's northern section (IWn). The advanced control program was then installed in the IWn control system, and its performance was measured and compared with that of the state-of-the-art control system to verify overall energy savings greater than 40%. In addition, advanced human-machine interfaces (HMIs) were developed to communicate both with building occupants and

  5. O&M Best Practices - A Guide to Achieving Operational Efficiency (Release 2.0)

    SciTech Connect

    Sullivan, Gregory P.; Pugh, Ray; Melendez, Aldo P.; Hunt, W. D.

    2004-07-31

    This guide, sponsored by DOE's Federal Energy Management Program, highlights operations and maintenance (O&M) programs targeting energy efficiency that are estimated to save 5% to 20% on energy bills without a significant capital investment. The purpose of this guide is to provide the federal O&M energy manager and practitioner with useful information about O&M management, technologies, energy efficiency and cost-reduction approaches.

  6. The use of ECDIS equipment to achieve an optimum value for energy efficiency operation index

    NASA Astrophysics Data System (ADS)

    Acomi, N.; Acomi, O. C.; Stanca, C.

    2015-11-01

    To reduce air pollution produced by ships, the International Maritime Organization has developed a set of technical, operational and management measures. Our research addresses the operational measures for minimizing CO2 emissions and how the emission value can be influenced by external factors regardless of the ship-owner's will. This study aims to analyse the air emissions for a loaded voyage leg performed by an oil tanker. The formula for the predicted Energy Efficiency Operational Index involves estimating the distance and the fuel consumption, while the quantity of cargo is known. The electronic chart display and information system (ECDIS) Simulation Software is used for adjusting the passage plan in real time, given predicted severe environmental conditions. The distance is determined using ECDIS, while the prediction of fuel consumption draws on sea trial and vessel experience records. In this way, the estimated EEOI value for great circle navigation in adverse weather conditions can be compared with the estimated EEOI value for weather-routed navigation.
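
    The index referred to in the abstract has, in essence, the form of the IMO's Energy Efficiency Operational Indicator for a single voyage: CO2 emitted (fuel consumed times a fuel-specific carbon factor) divided by transport work (cargo mass times distance). A hedged sketch of that general form, with illustrative input numbers rather than the study's voyage data:

```python
# Energy Efficiency Operational Indicator (EEOI) for a single voyage,
# in the general form EEOI = sum(FC_j * C_Fj) / (m_cargo * D).
# All input values below are illustrative, not from the study.

def eeoi(fuel_tonnes_by_type, cargo_tonnes, distance_nm):
    """EEOI in tonnes of CO2 per (tonne of cargo * nautical mile)."""
    # Carbon factors (t CO2 per t fuel), per IMO EEOI guidelines.
    carbon_factors = {"HFO": 3.114, "MDO": 3.206}
    co2 = sum(carbon_factors[fuel] * tonnes
              for fuel, tonnes in fuel_tonnes_by_type.items())
    return co2 / (cargo_tonnes * distance_nm)

# Hypothetical loaded leg: 450 t of HFO burned, 80,000 t cargo, 4,500 nm.
value = eeoi({"HFO": 450.0}, cargo_tonnes=80_000, distance_nm=4_500)
print(f"EEOI = {value:.3e} t CO2 / (t * nm)")
```

    A longer route sailed in heavy weather raises both the distance and the fuel term, which is why the two routing options in the study can yield different EEOI values for the same cargo.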

  7. Reliability and Levels of Difficulty of Objective Test Items in a Mathematics Achievement Test: A Study of Ten Senior Secondary Schools in Five Local Government Areas of Akure, Ondo State

    ERIC Educational Resources Information Center

    Adebule, S. O.

    2009-01-01

    This study examined the reliability and difficulty indices of Multiple Choice (MC) and True or False (TF) types of objective test items in a Mathematics Achievement Test (MAT). The instruments used were two 50-item variants of a Mathematics achievement test based on the multiple-choice and true-or-false test formats. A total of five hundred (500)…

  8. Conceptual design study of advanced acoustic composite nacelle. [for achieving reductions in community noise and operating expense

    NASA Technical Reports Server (NTRS)

    Goodall, R. G.; Painter, G. W.

    1975-01-01

    Conceptual nacelle designs for wide-bodied and for advanced-technology transports were studied with the objective of achieving significant reductions in community noise with minimum penalties in airplane weight, cost, and in operating expense by the application of advanced composite materials to nacelle structure and sound suppression elements. Nacelle concepts using advanced liners, annular splitters, radial splitters, translating centerbody inlets, and mixed-flow nozzles were evaluated and a preferred concept selected. A preliminary design study of the selected concept, a mixed flow nacelle with extended inlet and no splitters, was conducted and the effects on noise, direct operating cost, and return on investment determined.

  9. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  10. Use of Virtual Mission Operations Center Technology to Achieve JPDO's Virtual Tower Vision

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.; Paulsen, Phillip E.

    2006-01-01

    The Joint Program Development Office has proposed that the Next Generation Air Transportation System (NGATS) consolidate control centers. NGATS would be managed from a few strategically located facilities with virtual towers and TRACONs. This consolidation is about combining the delivery locations for these services, not about decreasing service. By consolidating these locations, cost savings on the order of $500 million have been projected. Evolving to space-based communication, navigation, and surveillance offers the opportunity to reduce or eliminate much of the ground-based infrastructure cost. Dynamically adjusted airspace offers the opportunity to reduce the number of sectors and boundary inconsistencies; eliminate or reduce "handoffs;" and eliminate the distinction between Towers, TRACONs, and Enroute Centers. To realize a consolidation vision for air traffic management, there must be investment in networking. One technology that holds great potential is the use of Virtual Mission Operations Centers (VMOCs) to provide secure, automated, intelligent management of the NGATS. This paper provides a conceptual framework for incorporating VMOC into the NGATS.

  11. Integrated Operating Scenario to Achieve 100-Second, High Electron Temperature Discharge on EAST

    NASA Astrophysics Data System (ADS)

    Qian, Jinping; Gong, Xianzu; Wan, Baonian; Liu, Fukun; Wang, Mao; Xu, Handong; Hu, Chundong; Wang, Liang; Li, Erzhong; Zeng, Long; Ti, Ang; Shen, Biao; Lin, Shiyao; Shao, Linming; Zang, Qing; Liu, Haiqing; Zhang, Bin; Sun, Youwen; Xu, Guosheng; Liang, Yunfeng; Xiao, Bingjia; Hu, Liqun; Li, Jiangang; EAST Team

    2016-05-01

    Stationary long pulse plasma of high electron temperature was produced on EAST for the first time through an integrated control of plasma shape, divertor heat flux, particle exhaust, wall conditioning, impurity management, and the coupling of multiple heating and current drive power. A discharge with a lower single null divertor configuration was maintained for 103 s at a plasma current of 0.4 MA, q95 ≈7.0, a peak electron temperature of >4.5 keV, and a central density ne(0)˜2.5×1019 m-3. The plasma current was nearly non-inductive (Vloop <0.05 V, poloidal beta ˜ 0.9) driven by a combination of 0.6 MW lower hybrid wave at 2.45 GHz, 1.4 MW lower hybrid wave at 4.6 GHz, 0.5 MW electron cyclotron heating at 140 GHz, and 0.4 MW modulated neutral deuterium beam injected at 60 kV. This progress demonstrated strong synergy of electron cyclotron and lower hybrid electron heating, current drive, and energy confinement of stationary plasma on EAST. It further introduced an example of integrated “hybrid” operating scenario of interest to ITER and CFETR. supported by the National Magnetic Confinement Fusion Science Foundation of China (Nos. 2015GB102000 and 2014GB103000)

  12. Achieving operational two-way laser acquisition for OPALS payload on the International Space Station

    NASA Astrophysics Data System (ADS)

    Abrahamson, Matthew J.; Oaida, Bogdan V.; Sindiy, Oleg; Biswas, Abhijit

    2015-03-01

    The Optical PAyload for Lasercomm Science (OPALS) experiment was installed on the International Space Station (ISS) in April 2014. Developed as a technology demonstration, its objective was to experiment with space-to-ground optical communications transmissions from Low Earth Orbit. More than a dozen successful optical links were established between a Wrightwood, California-based ground telescope and the OPALS flight terminal from June 2014 to September 2014. Each transmission required precise bi-directional pointing to be maintained between the space-based transmitter and ground-based receiver. This was accomplished by acquiring and tracking a laser beacon signal transmitted from the ground telescope to the OPALS flight terminal on the ISS. OPALS demonstrated the ability to nominally acquire the beacon within three seconds at 25° elevation and maintain lock within 140 μrad (3σ) for the full 150-second transmission duration while slewing at rates up to 1°/sec. Additional acquisition attempts in low elevation and weather-challenged conditions provided valuable insight on the optical link robustness under off-nominal operational conditions.

  13. Compressed sensing embedded in an operational wireless sensor network to achieve energy efficiency in long-term monitoring applications

    NASA Astrophysics Data System (ADS)

    O'Connor, S. M.; Lynch, J. P.; Gilbert, A. C.

    2014-08-01

    Compressed sensing (CS) is a powerful new data acquisition paradigm that seeks to accurately reconstruct unknown sparse signals from very few (relative to the target signal dimension) random projections. The specific objective of this study is to save wireless sensor energy by using CS to simultaneously reduce data sampling rates, on-board storage requirements, and communication data payloads. For field-deployed low power wireless sensors that are often operated with limited energy sources, reduced communication translates directly into reduced power consumption and improved operational reliability. In this study, acceleration data from a multi-girder steel-concrete deck composite bridge are processed for the extraction of mode shapes. A wireless sensor node previously designed to perform traditional uniform, Nyquist rate sampling is modified to perform asynchronous, effectively sub-Nyquist rate sampling. The sub-Nyquist data are transmitted off-site to a computational server for reconstruction using the CoSaMP matching pursuit recovery algorithm and further processed for extraction of the structure’s mode shapes. The mode shape metric used for reconstruction quality is the modal assurance criterion (MAC), an indicator of the consistency between CS and traditional Nyquist acquired mode shapes. A comprehensive investigation of modal accuracy from a dense set of acceleration response data reveals that MAC values above 0.90 are obtained for the first four modes of a bridge structure when at least 20% of the original signal is sampled using the CS framework. Reduced data collection, storage and communication requirements are found to lead to substantial reductions in the energy requirements of wireless sensor networks at the expense of modal accuracy. Specifically, total energy reductions of 10-60% can be obtained for a sensor network with 10-100 sensor nodes, respectively. The reduced energy requirements of the CS sensor nodes are shown to directly result in
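
    The modal assurance criterion used above as the reconstruction-quality metric is a normalized squared inner product between two mode-shape vectors: 1 for identical shapes, 0 for orthogonal ones. A minimal sketch in pure Python, with hypothetical mode-shape vectors standing in for the Nyquist and CS-reconstructed shapes:

```python
# Modal assurance criterion (MAC) between two real mode-shape vectors:
# MAC = (phi1 . phi2)^2 / ((phi1 . phi1) * (phi2 . phi2))

def mac(phi1, phi2):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return dot(phi1, phi2) ** 2 / (dot(phi1, phi1) * dot(phi2, phi2))

nyquist_shape = [0.0, 0.31, 0.59, 0.81, 0.95, 1.0]   # hypothetical mode shape
cs_shape      = [0.0, 0.30, 0.60, 0.80, 0.96, 0.99]  # CS-reconstructed estimate

consistency = mac(nyquist_shape, cs_shape)
print(f"MAC = {consistency:.4f}")  # per the study, MAC > 0.90 indicates consistency
```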

  14. High Reliability and Excellence in Staffing.

    PubMed

    Mensik, Jennifer

    2015-01-01

    Nurse staffing is a complex issue, with many facets and no one right answer. High-reliability organizations (HROs) strive and succeed in achieving a high degree of safety or reliability despite operating in hazardous conditions. HROs have systems in place that make them extremely consistent in accomplishing their goals and avoiding potential errors. However, the inability to resolve quality issues may very well be related to the lack of adoption of high-reliability principles throughout our organizations.

  15. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    SciTech Connect

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

    2015-09-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included human reliability analysis (HRA), but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  16. Reliability assurance for regulation of advanced reactors

    SciTech Connect

    Fullwood, R.; Lofaro, R.; Samanta, P.

    1991-01-01

    Advanced nuclear power plants must achieve higher levels of safety than the first generation of plants. Demonstrating that this is indeed true presents new challenges to reliability and risk assessment methods in the analysis of designs employing passive and semi-passive protection. Reliability assurance of advanced reactor systems is important for determining the safety of the design and for determining the plant operability. Safety is the primary concern, but operability is considered indicative of good and safe operation. This paper discusses several concerns for reliability assurance of advanced designs, encompassing reliability determination, the level of detail required in advanced reactor submittals, data for reliability assurance, systems interactions and common cause effects, passive component reliability, PRA-based configuration control systems, and inspection, training, maintenance and test requirements. Suggested approaches are provided for addressing each of these topics.

  18. Validity, Reliability, and Performance Determinants of a New Job-Specific Anaerobic Work Capacity Test for the Norwegian Navy Special Operations Command.

    PubMed

    Angeltveit, Andreas; Paulsen, Gøran; Solberg, Paul A; Raastad, Truls

    2016-02-01

    Operators in Special Operation Forces (SOF) have a particularly demanding profession in which physical and psychological capacities can be challenged to the extremes. The diversity of physical capacities needed depends on the mission. Consequently, tests used to monitor SOF operators' physical fitness should cover a broad range of physical capacities. Whereas tests for strength and aerobic endurance are established, no test for specific anaerobic work capacity has been described in the literature. The purpose of this study was therefore to evaluate the reliability and validity, and to identify the performance determinants, of a new test developed for assessing specific anaerobic work capacity in SOF operators. Nineteen active young students were included in the concurrent-validity part of the study. The students performed the evacuation (EVAC) test three times, and the results were compared for reliability and against performance in the Wingate cycle test, 300-m sprint, and a maximal accumulated oxygen deficit (MAOD) test. In part II of the study, 21 Norwegian Navy Special Operations Command operators completed the EVAC test, anthropometric measurements, a dual x-ray absorptiometry scan, leg press, isokinetic knee extensions, a maximal oxygen uptake test, and a countermovement jump (CMJ) test. The EVAC test showed good reliability after 1 familiarization trial (intraclass correlation = 0.89; coefficient of variation = 3.7%). The EVAC test correlated well with the Wingate test (r = -0.68), 300-m sprint time (r = 0.51), and 300-m mean power (W) (r = -0.67). No significant correlation was found with the MAOD test. In part II of the study, height, body mass, lean body mass, isokinetic knee extension torque, maximal oxygen uptake, and maximal power in a CMJ were significantly correlated with performance in the EVAC test. The EVAC test is a reliable and valid test of anaerobic work capacity for SOF operators, and muscle mass, leg strength, and leg power seem to be the most important determinants

  19. [The reliability of reliability].

    PubMed

    Blancas Espejo, A

    1991-01-01

    The author critically analyzes an article by Rodolfo Corona Vazquez that questions the reliability of the preliminary results of the Eleventh Census of Population and Housing, conducted in Mexico in March 1990. The need to define what constitutes "reliability" for preliminary results is stressed. PMID:12317739

  20. Field-Induced Crystalline-to-Amorphous Phase Transformation on the Si Nano-Apex and the Achieving of Highly Reliable Si Nano-Cathodes

    PubMed Central

    Huang, Yifeng; Deng, Zexiang; Wang, Weiliang; Liang, Chaolun; She, Juncong; Deng, Shaozhi; Xu, Ningsheng

    2015-01-01

    Nano-scale vacuum channel transistors possess the merits of higher cutoff frequency and greater power gain compared with conventional solid-state transistors. Improving cathode reliability is one of the major challenges in obtaining high-performance vacuum channel transistors. We report experimental findings and physical insight into the field-induced crystalline-to-amorphous phase transformation on the surface of a Si nano-cathode. The crystalline Si tip apex deformed to an amorphous structure at a low macroscopic field (0.6~1.65 V/nm) with an ultra-low emission current (1~10 pA). First-principles calculation suggests that the strong electrostatic force exerted on the electrons in the surface lattices accounts for the field-induced atomic migration that results in amorphization. The arsenic dopant in the Si surface lattice increases the inner stress as well as the electron density, leading to a lower amorphization field. Highly reliable Si nano-cathodes were obtained by employing a diamond-like carbon coating to enhance the electron emission and thus decrease surface charge accumulation. The findings are crucial for developing highly reliable Si-based nano-scale vacuum channel transistors and are significant for future Si nano-electronic devices with narrow separations. PMID:25994377

  1. Spaceflight tracking and data network operational reliability computer output for MTBF and availability. Appendix V to CSC-1-395

    NASA Technical Reports Server (NTRS)

    Seneca, V. I.; Mlynarczyk, R. H.

    1974-01-01

    Tables of data are provided to show the availability of Skylab data to selected ground stations during the phases of Skylab preflight, Skylab unmanned condition, and Skylab manned condition. The mean time between failure (MTBF) of the same Skylab functions is tabulated for the selected ground stations. All reliability data are based on a 90 percent confidence interval.
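The MTBF and availability figures tabulated in reports like this one follow from two standard definitions. A minimal sketch in Python; the station hours, failure count, and repair time below are invented for illustration, not values from the Skylab report:

```python
def mtbf(total_operating_hours, failures):
    """Point estimate of mean time between failures."""
    if failures == 0:
        raise ValueError("no failures observed; point estimate undefined")
    return total_operating_hours / failures

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time the station is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical ground station: 4500 h of tracking support, 3 failures,
# 2 h mean time to repair
m = mtbf(4500, 3)         # 1500 h between failures
a = availability(m, 2.0)  # just under 99.9% availability
```

A confidence interval on the MTBF, as used in the report, would additionally require a chi-square bound on the failure count rather than this point estimate.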

  2. Inter-operator Reliability of Magnetic Resonance Image-Based Computational Fluid Dynamics Prediction of Cerebrospinal Fluid Motion in the Cervical Spine.

    PubMed

    Martin, Bryn A; Yiallourou, Theresia I; Pahlavian, Soroush Heidari; Thyagaraj, Suraj; Bunck, Alexander C; Loth, Francis; Sheffer, Daniel B; Kröger, Jan Robert; Stergiopulos, Nikolaos

    2016-05-01

For the first time, the inter-operator dependence of MRI-based computational fluid dynamics (CFD) modeling of cerebrospinal fluid (CSF) in the cervical spinal subarachnoid space (SSS) is evaluated. In vivo MRI flow measurements and anatomical MRI images were obtained at the cervico-medullary junction of a healthy subject and a Chiari I malformation patient. 3D anatomies of the SSS were reconstructed by manual segmentation by four independent operators for both cases. CFD results were compared at nine axial locations along the SSS in terms of hydrodynamic and geometric parameters. The intraclass correlation coefficient (ICC) assessed the inter-operator agreement for each parameter over the axial locations, and the coefficient of variation (CV) compared the percentage of variance for each parameter between the operators. Greater operator dependence was found for the patient (0.19 < ICC < 0.99) near the craniovertebral junction compared to the healthy subject (ICC > 0.78). For the healthy subject, hydraulic diameter and Womersley number had the least variance (CV = ~2%). For the patient, peak diastolic velocity and Reynolds number had the smallest variance (CV = ~3%). These results show a high degree of inter-operator reliability for MRI-based CFD simulations of CSF flow in the cervical spine for healthy subjects and a lower degree of reliability for patients with Type I Chiari malformation.
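The two agreement statistics used in this study can be reproduced in a few lines. A sketch assuming a one-way random-effects ICC and a per-location CV averaged across operators; the layout (rows = axial locations, columns = operators) matches the study design, but the numbers are illustrative, not the study's data:

```python
def icc_oneway(data):
    """One-way random-effects ICC(1,1); rows are locations, columns are operators."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # between locations
    msw = sum((x - m) ** 2
              for row, m in zip(data, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def mean_cv_percent(data):
    """Mean coefficient of variation across operators, as a percentage."""
    cvs = []
    for row in data:
        m = sum(row) / len(row)
        sd = (sum((x - m) ** 2 for x in row) / (len(row) - 1)) ** 0.5
        cvs.append(100 * sd / m)
    return sum(cvs) / len(cvs)

# Perfect inter-operator agreement gives ICC = 1 and CV = 0
agree = [[1.0, 1.0, 1.0], [2.0, 2.0, 2.0], [3.0, 3.0, 3.0]]
```

In practice the choice of ICC form (one-way vs. two-way, agreement vs. consistency) matters; the paper does not state which form was used, so the one-way form here is an assumption.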

  3. Reliable experimental setup to test the pressure modulation of Baerveldt Implant tubes for reducing post-operative hypotony

    NASA Astrophysics Data System (ADS)

    Ramani, Ajay

Glaucoma encompasses a group of conditions that result in damage to the optic nerve and can cause loss of vision and blindness. The nerve is damaged due to an increase in the eye's internal (intraocular) pressure (IOP) above the nominal range of 15 -- 20 mm Hg. There are many treatments available for this group of diseases depending on the complexity and stage of nerve degradation. In extreme cases where drugs or laser surgery do not create better conditions for the patient, ophthalmologists use glaucoma drainage devices to help alleviate the IOP. Many drainage implants have been developed over the years and are in use, but two popular implants are the Baerveldt Glaucoma Implant and the Ahmed Glaucoma Valve Implant. Baerveldt Implants are non-valved and provide low initial resistance to outflow of fluid, resulting in post-operative complications such as hypotony, where the IOP drops below 5 mm Hg. Ahmed Glaucoma Valve Implants are valved implants which initially restrict the amount of fluid flowing out of the eye. The long-term success rates of Baerveldt Implants surpass those of Ahmed Valve Implants because of post-surgical issues, but Baerveldt Implants' initial effectiveness is poor without proper flow restriction. This drives the need to develop new ways to improve the initial effectiveness of Baerveldt Implants. A possible solution proposed by our research team is to place an insert in the Baerveldt Implant tube of inner diameter 305 microns. The insert must be designed to provide flow resistance for the early time frame [e.g., the first 30 -- 60 post-operative days] until sufficient scar tissue has formed on the implant. After that initial stage with the insert, the scar tissue will provide the necessary flow resistance to maintain the IOP above 5 mm Hg. The main objective of this project was to develop and validate an experimental apparatus to measure pressure drop across a Baerveldt Implant tube, with and without inserts. This setup will be used in the

  4. 76 FR 66328 - Callaway Golf Ball Operations, Inc., Including On-Site Leased Workers From Reliable Temp Services...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... Federal Register on July 8, 2011 (76 FR 40401). At the request of the State agency, the Department... Employment and Training Administration Callaway Golf Ball Operations, Inc., Including On-Site Leased Workers..., 2011, applicable to workers of Callaway Golf Ball Operations, Inc., including on-site leased...

  5. 76 FR 71966 - TC Ravenswood, LLC v. New York Independent System Operator, Inc., New York State Reliability...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission TC Ravenswood, LLC v. New York Independent System Operator, Inc., New York... complaint against the New York Independent System Operator, Inc. (NYISO) and the New York State...

  6. The quadruple squeeze: defining the safe operating space for freshwater use to achieve a triply green revolution in the anthropocene.

    PubMed

    Rockström, Johan; Karlberg, Louise

    2010-05-01

Humanity has entered a new phase of sustainability challenges, the Anthropocene, in which human development has reached a scale where it affects vital planetary processes. Under the pressure of a quadruple squeeze (population and development pressures, the anthropogenic climate crisis, the anthropogenic ecosystem crisis, and the risk of deleterious tipping points in the Earth system), the degrees of freedom for sustainable human exploitation of planet Earth are severely restrained. It is in this reality that a new green revolution in world food production needs to occur, to attain food security and human development over the coming decades. Global freshwater resources are, and will increasingly be, a fundamental limiting factor in feeding the world. Current water vulnerabilities in the regions most in need of large agricultural productivity improvements are projected to increase under the pressure of global environmental change. The sustainability challenge for world agriculture has to be set within the new global sustainability context. We present newly proposed sustainability criteria for world agriculture, under which world food production systems are transformed in order to allow humanity to stay within the safe operating space of planetary boundaries. In order to secure global resilience, and thereby raise the chances of planet Earth remaining in the current desired state, conducive to human development in the long term, these planetary boundaries need to be respected. This calls for a triply green revolution, which not only more than doubles food production in many regions of the world, but is also environmentally sustainable and invests in the untapped opportunities to use green water in rainfed agriculture as a key source of future productivity enhancement. To achieve such a global transformation of agriculture, there is a need for more innovative options for water interventions at the landscape scale, accounting for both green and blue water, as well

  7. Study on the Interrater Reliability of an OSPE (Objective Structured Practical Examination) – Subject to the Evaluation Mode in the Phantom Course of Operative Dentistry

    PubMed Central

    Schmitt, Laura; Möltner, Andreas; Rüttermann, Stefan; Gerhardt-Szép, Susanne

    2016-01-01

Introduction: The aim of the study presented here was to evaluate the reliability of an OSPE end-of-semester exam in the phantom course for operative dentistry in Frankfurt am Main, taking into consideration different modes of evaluation (examiner's checklist versus instructor's manual) and number of examiners (three versus four). Methods: In a historic, monocentric, comparative study, two different methods of evaluation were examined in a real end-of-semester setting held in OSPE form (Group I: exclusive use of an examiner's checklist versus Group II: use of an examiner's checklist together with an instructor's manual). For the analysis of interrater reliability, generalisability theory was applied, which generalises the concept of internal consistency (Cronbach's alpha). Results: The results show that the exclusive use of the examiner's checklist led to higher interrater reliability values than the in-depth instructor's manual used in addition to the list. Conclusion: In summary, the examiner's checklists used in the present study, without the instructor's manual, resulted in the highest interrater reliability in combination with three evaluators within the context of the completed OSPE. PMID:27579361

  8. The challenge of achieving 1% operative mortality for coronary artery bypass grafting: A multi-institution Society of Thoracic Surgeons Database analysis

    PubMed Central

    LaPar, Damien J.; Filardo, Giovanni; Crosby, Ivan K.; Speir, Alan M.; Rich, Jeffrey B.; Kron, Irving L.; Ailawadi, Gorav

    2016-01-01

Objectives: Cardiothoracic surgical leadership recently challenged the surgical community to achieve an operative mortality rate of 1.0% for the performance of isolated coronary artery bypass grafting (CABG). The possibility of achieving this goal remains unknown due to the increasing number of high-risk patients being referred for CABG. The purpose of our study was to identify a patient population in which this operative mortality goal is achievable relative to the estimated operative risk. Methods: Patient records from a multi-institution (17 centers) Society of Thoracic Surgeons (STS) database for primary, isolated CABG operations (2001–2012) were analyzed. Multiple logistic regression modeling with spline functions for calculated STS predicted risk of mortality (PROM) was used to rigorously assess the relationship between estimated patient risk and operative mortality, adjusted for operative year and surgeon volume. Results: A total of 34,416 patients (average patient age, 63.9 ± 10.7 years; 27% [n = 9190] women) incurred an operative mortality rate of 1.87%. Median STS predicted risk of mortality was 1.06% (interquartile range, 0.60%–2.13%) and median surgeon CABG volume was 544 (interquartile range, 303–930) operations over the study period. After risk adjustment for the confounding influence of surgeon volume and operative year, the association between STS PROM and operative mortality was highly significant (P < .0001). More importantly, the adjusted spline function revealed that an STS PROM threshold value of 1.27% correlated with a 1.0% probability of death, accounting for 57.3% (n = 19,720) of the total study population. Further, the STS PROM demonstrated a limited predictive capacity for operative mortality for STS PROM > 25%, as observed-to-expected mortality began to diverge. Conclusions: Achieving the goal of 1.0% operative mortality for primary, isolated CABG is feasible in appropriately selected patients in the modern surgical era. However, this

  9. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations. Executive Summary

    SciTech Connect

    Jones, Lawrence E.

    2011-11-01

    This is the executive summary for a report that provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.

  10. Strategies and Decision Support Systems for Integrating Variable Energy Resources in Control Centers for Reliable Grid Operations

    SciTech Connect

    Jones, Lawrence E.

    2011-11-01

    This report provides findings from the field regarding the best ways in which to guide operational strategies, business processes and control room tools to support the integration of renewable energy into electrical grids.

  11. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  12. High-power operation of highly reliable narrow stripe pseudomorphic single quantum well lasers emitting at 980 nm

    NASA Technical Reports Server (NTRS)

    Larsson, A.; Forouhar, S.; Cody, J.; Lang, R. J.

    1990-01-01

    Ridge waveguide pseudomorphic InGaAs/GaAs/AlGaAs single-quantum-well lasers exhibiting record high quantum efficiencies and high output power densities (105 mW per facet from a 6 micron wide stripe) at a lasing wavelength of 980 nm are discussed that were fabricated from a graded index separate confinement heterostructure grown by molecular beam epitaxy. Life testing at an output power of 30 mW per uncoated facet reveals a slow gradual degradation during the initial 500 h of operation after which the operating characteristics of the lasers become stable. The emission wavelength, the high output power, and the fundamental lateral mode operation render these lasers suitable for pumping Er3+-doped fiber amplifiers.

  13. Reliability Prediction

    NASA Technical Reports Server (NTRS)

    1993-01-01

RELAV, a NASA-developed computer program, enables Systems Control Technology, Inc. (SCT) to predict the performance of aircraft subsystems. RELAV provides a system-level evaluation of a technology. A system, for example the mechanism of a landing gear, is first described as a set of components performing a specific function. RELAV analyzes the total system and the individual subsystem probabilities to predict success probability and reliability. This information is then translated into operational support and maintenance requirements. SCT provides research and development services in support of government contracts.
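RELAV's internals are not given here, but the core computation the record describes, combining component probabilities into a system-level success probability, reduces to series/parallel reliability algebra. A minimal sketch with invented component values (the actuator and controller figures are illustrative, not from RELAV):

```python
from math import prod

def series(reliabilities):
    """All components must work for the system to work."""
    return prod(reliabilities)

def parallel(reliabilities):
    """At least one redundant component must work."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# Hypothetical landing-gear function: two redundant actuators (0.95 each)
# in series with a single controller (0.999)
r_system = series([parallel([0.95, 0.95]), 0.999])
```

Real tools like RELAV add fault-tree constructs beyond pure series/parallel blocks, but every such model bottoms out in these two combinators.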

  14. Can EGFR mutation status be reliably determined in pre-operative needle biopsies from adenocarcinomas of the lung?

    PubMed

    Lindahl, Kim Hein; Sørensen, Flemming Brandt; Jonstrup, Søren Peter; Olsen, Karen Ege; Loeschke, Siegfried

    2015-04-01

    The identification of EGFR mutations in non-small-cell lung cancer is important for selecting patients, who may benefit from treatment with EGFR tyrosine kinase inhibitors. The analysis is usually performed on cytological aspirates and/or histological needle biopsies, representing a small fraction of the tumour volume. The aim of the present investigation was to evaluate the diagnostic performance of this molecular test. We retrospectively included 201 patients with primary adenocarcinoma of the lung. EGFR mutation status (exon 19 deletions and exon 21 L858R point mutation) was evaluated on both pre-operative biopsies (131 histological and 70 cytological) and on the surgical specimens, using PCR. Samples with low tumour cell fraction were assigned to laser micro-dissection (LMD). We found nine (4.5%) patients with EGFR mutation in the lung tumour resections, but failed to identify mutation in one of the corresponding pre-operative, cytological specimens. Several (18.4%) analyses of the pre-operative biopsies were inconclusive, especially in case of biopsies undergoing LMD and regarding exon 21 analysis. Discrepancy of mutation status in one patient may reflect intra-tumoural heterogeneity or technical issues. Moreover, several inconclusive results in the diagnostic biopsies reveal that attention must be paid on the suitability of pre-operative biopsies for EGFR mutation analysis.

  15. Application of gasket performance data for design and operation of low emissions, high-reliability gasketed joints

    SciTech Connect

    Waterland, A.F. III

    1996-07-01

The MTI project No. 47, Test Methods for Non-Asbestos Gasket Materials, opened everyone's eyes to the breadth of performance and use information for gasket materials. What the MTI started has resulted in a quiet revolution in gasketing, and just in time. Today's emissions and reliability mandates have created a situation whereby gasket materials can no longer be selected and designed into systems simply through practical experience and personal judgment. A defined engineering approach is required. Based on the work initiated by the MTI and furthered by groups such as ASME and PVRC, there now exists extensive performance data for all gasketing materials. This presentation addresses the existence and usage of the various MTI and PVRC-type performance data as a tool for initial material selection. With this background, a novel simplification to the future ASME code procedure is introduced which allows for a simple yet accurate means of applying this widely available data to an emissions-control program at the plant level.

  16. Operating experience feedback report: Reliability of safety-related steam turbine-driven standby pumps. Commercial power reactors, Volume 10

    SciTech Connect

    Boardman, J.R.

    1994-10-01

This report documents a detailed analysis of failure initiators, causes, and design features for steam turbine assemblies (turbines with their related components, such as governors and valves) which are used as drivers for standby pumps in the auxiliary feedwater systems of US commercial pressurized water reactor plants, and in the high pressure coolant injection and reactor core isolation cooling systems of US commercial boiling water reactor plants. These standby pumps provide a redundant source of water to remove reactor core heat as specified in individual plant safety analysis reports. The period of review for this report was from January 1974 through December 1990 for licensee event reports (LERs) and January 1985 through December 1990 for Nuclear Plant Reliability Data System (NPRDS) failure data. This study confirmed the continuing validity of conclusions of earlier studies by the US Nuclear Regulatory Commission and by the US nuclear industry that the most significant factors in failures of turbine-driven standby pumps have been failures of the turbine drivers and their controls. Inadequate maintenance and the use of inappropriate vendor technical information were identified as significant factors causing recurring failures.

  17. Family MAASAI (Maintaining African-American Survival Achievement Integrity) Rites of Passage After-School Prevention Program. Operational Manual.

    ERIC Educational Resources Information Center

    Ford, Jerome, Comp.; Jackson, Anthony, Comp.; James, D'Borah, Comp.; Smith, Bryce, Comp.; Robinson, Luke, Comp.; Cherry, Jennifer, Comp.; Trotter, Jennie, Comp.; Harris, Archie, Comp.; Lenior, Sheila, Comp.; Bellinger, Mary Anne, Comp.

    Family MAASAI is a multiservice substance abuse prevention and intervention program for African American at-risk urban youth. The program commemorates the Maasai people of Africa and uses MAASAI as an acronym that stands for Maintaining African American Survival, Achievement, and Integrity. Cultural awareness, pride, and respect for self, elders,…

  18. European tendencies and co-operation in the field of ITS systems - national achievements and challenges in Hungary

    NASA Astrophysics Data System (ADS)

    Lindenbach, Ágnes

    2016-06-01

The article presents the role of intelligent transport systems/services in the implementation of essential European and Hungarian transport policy objectives. The 'ITS Directive' provides a framework for the tasks to be performed in the forthcoming years within the priority areas of ITS. The European Commission published regulations/specifications for the priority actions in the form of delegated acts defining the tasks and responsibilities of Member States. Regional/European co-operation for Hungary started after the country's EU accession. Hungary was an active partner within the European CONNECT and EasyWay projects; currently Hungary is a member of the CROCODILE consortium.

  19. Comparative Study of Vibration Stability at Operating Light Source Facilities and Lessons Learned in Achieving NSLS II Stability Goals

    SciTech Connect

    Simos,N.; Fallier, M.; Amick, H.

    2008-06-23

In an effort to ensure that the stability goals of NSLS II will be met once the accelerator structure is set on the selected BNL site, a comprehensive evaluation of the ground vibration observed at existing light source facilities has been undertaken. The study has relied on measurement data collected and reported by the operating facilities as well as on new data collected in the course of this study. The primary goal of this comprehensive effort is to compare the green-field conditions that exist at the various sites, both in terms of amplitude and frequency content, and to quantify the effect of the interaction of these accelerator facilities with the green-field vibration. The latter represents the ultimate goal of this effort, whereby the anticipated motion of the NSLS II ring is estimated prior to its construction and compared with the required stability criteria.

  20. Power-gated 32 bit microprocessor with a power controller circuit activated by deep-sleep-mode instruction achieving ultra-low power operation

    NASA Astrophysics Data System (ADS)

    Koike, Hiroki; Ohsawa, Takashi; Miura, Sadahiko; Honjo, Hiroaki; Ikeda, Shoji; Hanyu, Takahiro; Ohno, Hideo; Endoh, Tetsuo

    2015-04-01

A spintronic-based power-gated micro-processing unit (MPU) is proposed. It includes a power control circuit activated by a newly supported power-off instruction for the deep-sleep mode, which enables the MPU's power-off procedure to be executed appropriately. A test chip was designed and fabricated using a 90 nm CMOS and an additional 100 nm MTJ process, and was successfully operated. A guideline for the energy reduction achievable with this MPU is presented, based on estimates from the measurement results of the test chip. The result shows that operation energy can be reduced to 1/28 when the operation duty is 10%, under the condition of a sufficient number of idle clock cycles.

  1. 76 FR 66055 - North American Electric Reliability Corporation; Order Approving Interpretation of Reliability...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    ... Operations (TOP) Reliability Standard TOP-002-2a (Normal Operations Planning). This Reliability Standard... the planning required to meet all System Operating Limits and Interconnection Reliability Operating... interpretation of Requirement R10 of Reliability Standard TOP-002-2a (Normal Operations Planning). The...

  2. Reliability assessment of GaAs- and InP-based diode lasers for high-energy single-pulse operation

    NASA Astrophysics Data System (ADS)

    Maiorov, M.; Damm, D.; Trofimov, I.; Zeidel, V.; Sellers, R.

    2009-08-01

    With the maturing of high-power diode laser technology, studies of laser-assisted ignition of a variety of substances are becoming an increasingly popular research topic. Its range of applications is wide - from fusing in the defense, construction and exploration industries to ignition in future combustion engines. Recent advances in InP-based technology have expanded the wavelength range that can be covered by multi-watt GaAs- and InP-based diode lasers to about 0.8 to 2 μm. With such a wide range, the wattage is no longer the sole defining factor for efficient ignition. Ignition-related studies should include the interaction of radiation of various wavelengths with matter and the reliability of devices based on different material systems. In this paper, we focus on the reliability of pulsed laser diodes for use in ignition applications. We discuss the existing data on the catastrophic optical damage (COD) of the mirrors of the GaAsbased laser diodes and come up with a non-destructive test method to predict the COD level of a particular device. This allows pre-characterization of the devices intended for fusing to eliminate failures during single-pulse operation in the field. We also tested InP-based devices and demonstrated that the maximum power is not limited by COD. Currently, devices with >10W output power are available from both GaAs- and InP-based devices, which dramatically expands the potential use of laser diodes in ignition systems.

  3. Estimating the Reliability of a Crewed Spacecraft

    NASA Astrophysics Data System (ADS)

    Lutomski, M. G.; Garza, J.

    2012-01-01

Now that the Space Shuttle Program has been retired, the Russian Soyuz launcher and Soyuz spacecraft are the only means of crew transportation to and from the International Space Station (ISS). Are the astronauts and cosmonauts safer on the Soyuz than on the Space Shuttle system? How do you estimate the reliability of such a crewed spacecraft? The recent loss of the Progress 44 resupply flight to the ISS has put these questions front and center. The Soyuz launcher has been in operation for over 40 years. There have been only two Loss of Crew (LOC) incidents and two Loss of Mission (LOM) incidents involving crew missions. Given that the most recent crewed Soyuz launcher incident took place in 1983, how do we determine the current reliability of such a system? How do all of the failures of unmanned Soyuz-family launchers such as the 44P impact the reliability of the currently operational crewed launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? In addition, NASA has begun development of the Orion, or Multi-Purpose Crew Vehicle, and has started an initiative to purchase Commercial Crew services from private firms. The reliability targets are currently several times higher than the last Shuttle reliability estimate. Can these targets be compared to the reliability of the Soyuz, arguably the most reliable crewed spacecraft and launcher in the world, to determine whether they are realistic and achievable? To help answer these questions, this paper will explore how to estimate the reliability of the Soyuz launcher/spacecraft system over its mission to give a benchmark for other human spaceflight vehicles and their missions. Specifically, this paper will look at estimating the Loss of Mission (LOM) and Loss of Crew (LOC) probability for an ISS crewed Soyuz launcher/spacecraft mission using historical data, reliability growth, and Probabilistic Risk Assessment (PRA) techniques.
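One common way to estimate a launcher's demonstrated reliability from sparse failure data, consistent with the PRA framing above, is a Bayesian beta-binomial update. A sketch under a Jeffreys prior; the flight and failure counts below are placeholders for illustration, not the actual Soyuz record:

```python
def posterior_failure_prob(failures, flights, a=0.5, b=0.5):
    """Posterior mean of the per-flight failure probability under a
    Beta(a, b) prior; the defaults give the Jeffreys prior."""
    return (failures + a) / (flights + a + b)

def posterior_success_prob(failures, flights):
    """Complementary per-flight success probability."""
    return 1.0 - posterior_failure_prob(failures, flights)

# Placeholder record: 2 loss-of-crew events in 140 crewed flights
p_loc = posterior_failure_prob(2, 140)
```

This simple model treats every flight as exchangeable; the reliability-growth methods the paper mentions would instead down-weight early failures, which matters for a vehicle whose last crewed incident was decades ago.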

  4. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  5. Cw operation of the FMIT RFQ accelerator

    SciTech Connect

    Cornelius, W.D.

    1985-01-01

    Recently, we have achieved reliable cw operation of the Fusion Materials Irradiation Test (FMIT) radio-frequency quadrupole (RFQ) accelerator. In addition to the operational experiences in achieving this status, some of the modifications of the vacuum system, cooling system, and rf structure are discussed. Preliminary beam-characterization results are presented. 10 refs., 8 figs.

  6. Unmanned Aerial Vehicle (UAV) Dynamic-Tracking Directional Wireless Antennas for Low Powered Applications that Require Reliable Extended Range Operations in Time Critical Scenarios

    SciTech Connect

    Scott G. Bauer; Matthew O. Anderson; James R. Hanneman

    2005-10-01

The proven value of DOD Unmanned Aerial Vehicles (UAVs) will ultimately transition to National and Homeland Security missions that require real-time aerial surveillance, situation awareness, force protection, and sensor placement. Public services first responders who routinely risk personal safety to assess and report a situation for emergency actions will likely be the first to benefit from these new unmanned technologies. 'Packable' or 'portable' small-class UAVs will be particularly useful to the first responder. They require the least amount of training, no fixed infrastructure, and are capable of being launched and recovered from the point of emergency. All UAVs require wireless communication technologies for real-time applications. Typically on a small UAV, a low-bandwidth telemetry link is required for command and control (C2) and systems health monitoring. If the UAV is equipped with a real-time Electro-Optical or Infrared (EO/IR) video camera payload, a dedicated high-bandwidth analog/digital link is usually required for reliable high-resolution imagery. In most cases, both the wireless telemetry and real-time video links will be integrated into the UAV with unity-gain omni-directional antennas. With limited on-board power and payload capacity, a small UAV will be limited in the amount of radio-frequency (RF) energy it transmits to the users. Therefore, 'packable' and 'portable' UAVs will have limited useful operational ranges for first responders. This paper will discuss the limitations of small UAV wireless communications. The discussion will present an approach of utilizing a dynamic ground-based real-time tracking high-gain directional antenna to provide extended-range stand-off operation, potential RF channel reuse, and assured telemetry and data communications from low-powered UAV-deployed wireless assets.
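The range benefit of a high-gain tracking antenna over an omni can be estimated with a free-space (Friis) link budget. A sketch with illustrative 2.4 GHz numbers; the transmit power, antenna gains, and receiver sensitivity are assumptions for illustration, not values from the paper:

```python
import math

def max_range_km(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                 freq_mhz, rx_sensitivity_dbm):
    """Line-of-sight range at which free-space path loss consumes the
    link budget.  FSPL(dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)."""
    budget_db = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - rx_sensitivity_dbm
    return 10 ** ((budget_db - 32.44 - 20 * math.log10(freq_mhz)) / 20)

# 100 mW (20 dBm) UAV transmitter with a 2 dBi omni on the aircraft
omni = max_range_km(20, 2, 2, 2400, -90)       # ground station also omni
tracking = max_range_km(20, 2, 24, 2400, -90)  # 24 dBi tracking dish on the ground
```

Swapping a 2 dBi ground omni for a 24 dBi tracking dish adds 22 dB to the budget, which in free space multiplies range by about 12.6x (10 dB per factor of ~3.16); this is the extension effect the paper pursues without raising the UAV's transmit power.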

  7. Operational research in primary health care planning: a theoretical model for estimating the coverage achieved by different distributions of staff and facilities

    PubMed Central

    Kemball-Cook, D.; Vaughan, J. P.

    1983-01-01

    This report outlines a basic operational research model for estimating the coverage achieved by different distributions of primary health care staff and facilities, using antenatal home visiting as an illustrative example. Coverage is estimated in terms of the average number of patient contacts achieved per annum. The model takes into account such features as number of facilities and health workers per 10 000 population, the radius of the health facility area, the overall population density in the region, the number of working days in the year, and the health worker's travelling time and work rate. A theoretical planning situation is also presented, showing the application of the model in defining various possible strategies, using certain planning norms for new levels of staff and facilities. This theoretical model is presented as an example of the use of operational research in primary health care, but it requires to be tested and validated in known situations before its usefulness can be assessed. Some indications are given of the ways in which the model could be adapted and improved for application to a real planning situation. PMID:6602666
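The kind of model the report describes can be sketched in a few lines: annual contacts fall out of worker time split between travel and consultation. The parameter values below are invented for illustration, and the mean-travel-distance assumption (2/3 of the facility radius for a uniformly distributed population in a disc) is mine, not necessarily the authors':

```python
def annual_contacts(workers, working_days, hours_per_day,
                    radius_km, travel_speed_kmh, contact_minutes):
    """Home-visit contacts per year achievable by one facility's staff."""
    # Mean one-way distance to a uniformly distributed home in a disc
    # of radius r is (2/3) * r; double it for the round trip.
    round_trip_h = 2 * (2.0 / 3.0) * radius_km / travel_speed_kmh
    contact_h = contact_minutes / 60.0
    visits_per_worker_day = hours_per_day / (round_trip_h + contact_h)
    return workers * working_days * visits_per_worker_day

# 2 workers, 250 working days, 8 h days, 5 km facility radius,
# 5 km/h travel on foot, 30 min per antenatal visit
contacts = annual_contacts(2, 250, 8, 5, 5, 30)
```

Dividing the result by the target population of the facility area (population density times disc area times the fraction needing the service) gives the coverage figure the model is after, and makes the trade-off between facility density and per-facility radius explicit.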

  8. Working Group summary reports from the Advanced Photon Source reliability workshop

    SciTech Connect

    Not Available

    1992-05-01

    A workshop was held at APS to address reliability goals for accelerator systems. Seventy-one individuals participated in the workshop, including 30 from other institutions. The goals of the workshop were to: (1) Give attendees an introduction to the basic concepts of reliability analysis. (2) Exchange information on operating experience at existing accelerator facilities and strategies for achieving reliability at facilities under design or in construction. (3) Discuss reliability goals for APS and the means of their achievement. This report contains the working group summary reports on the following APS systems: RF Systems; Power Supplies; Magnet Systems; Interlock and Diagnostics; and Vacuum Systems.

  10. Delivery times for caesarean section at Queen Elizabeth Central Hospital, Blantyre, Malawi: is a 30-minute 'informed to start of operative delivery time' achievable?

    PubMed

    O'Regan, M

    2003-08-01

    A timesheet questionnaire was used to assess the time it took from informing the anaesthetist about a case to the start of operative delivery in 78 consecutive patients undergoing caesarean section. Median (IQR [range]) times for grade-1 cases (immediate threat to the life of the mother or fetus) and grade-2 cases (fetal or maternal compromise without immediate threat to life) were 20 (17-35 [6-75]) min and 41 (27-60 [17-136]) min, respectively. Delays occurred in all the component time intervals examined. The primary avoidable delay was the patient's late arrival in theatre. Many significant delays were apparently not perceived by the anaesthetist. In nine (69%) grade-1 cases, the 30-min target decreed by the Association of Anaesthetists of Great Britain & Ireland and the Obstetric Anaesthetists' Association was achieved.

  11. Master/slave clock arrangement for providing reliable clock signal

    NASA Technical Reports Server (NTRS)

    Abbey, Duane L. (Inventor)

    1977-01-01

    The outputs of two like frequency oscillators are combined to form a single reliable clock signal, with one oscillator functioning as a slave under the control of the other to achieve phase coincidence when the master is operative and in a free-running mode when the master is inoperative so that failure of either oscillator produces no effect on the clock signal.

  12. Multiple-factor analysis of the first radioactive iodine therapy in post-operative patients with differentiated thyroid cancer for achieving a disease-free status

    PubMed Central

    Liu, Na; Meng, Zhaowei; Jia, Qiang; Tan, Jian; Zhang, Guizhi; Zheng, Wei; Wang, Renfei; Li, Xue; Hu, Tianpeng; Upadhyaya, Arun; Zhou, Pingping; Wang, Sen

    2016-01-01

    131I treatment is an important management method for patients with differentiated thyroid cancer (DTC). Unsuccessful 131I ablation drastically affects the prognosis of these patients. This study aimed to analyze potential predictive factors influencing the achievement of a disease-free status following the first 131I therapy. This retrospective review included 315 DTC patients, and multiple factors were analyzed. Tumor size, pathological tumor stage, lymph node (LN) metastasis, distant metastasis, American Thyroid Association recommended risk, pre-ablation thyroglobulin (Tg), and thyroid stimulating hormone (TSH) displayed significant differences between the unsuccessful and successful groups. Cutoff values of Tg and TSH to predict a successful outcome were 3.525 ng/mL and 99.700 μIU/mL by receiver operating characteristic curve analysis. Binary logistic regression analysis showed that tumor stage T3 or T4, LN metastasis to the N1b station, intermediate and high risks, pre-ablation Tg ≥ 3.525 ng/mL and TSH < 99.700 μIU/mL were significantly associated with unsuccessful outcomes. The logistic regression equation for achieving a disease-free status could be rendered as: y (successful treatment) = −0.270 − 0.503 X1 (LN metastasis) − 0.236 X2 (Tg) + 0.015 X3 (TSH). This study demonstrated that LN metastasis, pre-ablation Tg, and TSH were the most powerful predictors of achieving a disease-free status with the first 131I therapy. PMID:27721492
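
    The reported equation gives the logit of a successful (disease-free) outcome, so a probability follows from the standard logistic transform. A minimal sketch using the coefficients quoted in the abstract; the predictor coding (LN metastasis as a 0/1 indicator, Tg in ng/mL, TSH in μIU/mL) is an assumption here:

```python
import math

def p_disease_free(ln_metastasis, tg_ng_ml, tsh_uiu_ml):
    """Probability of a disease-free status after the first 131I therapy.

    Coefficients are quoted from the abstract; the coding of the
    predictors (0/1 indicator, raw laboratory values) is assumed.
    """
    y = -0.270 - 0.503 * ln_metastasis - 0.236 * tg_ng_ml + 0.015 * tsh_uiu_ml
    return 1.0 / (1.0 + math.exp(-y))  # logistic transform of the logit y
```

    Consistent with the reported associations, the computed probability falls with LN metastasis and higher pre-ablation Tg, and rises with higher TSH.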

  13. Human reliability in petrochemical industry: an action research.

    PubMed

    Silva, João Alexandre Pinheiro; Camarotto, João Alberto

    2012-01-01

    This paper aims to identify conflicts and gaps between the operators' strategies and actions and the organizational managerial approach to human reliability. To achieve these goals, the research approach adopted combines a literature review with action research methodology and Ergonomic Workplace Analysis in field research. The results suggest that the studied company has a classical, mechanistic point of view, focusing on error identification and on building barriers through procedures, checklists and other prescriptive alternatives to improve performance in the reliability area. However, the action research cycle made evident the fundamental role of the worker as an agent of maintenance and construction of system reliability.

  14. Increasing Available Capacity of Equipment Operating in Power Systems with Shortage of Energy Sources and Making It More Reliable and Economically Efficient

    NASA Astrophysics Data System (ADS)

    Zagretdinov, I. Sh.; Pauli, Z. K.; Petrenya, Yu. K.; Khomenok, L. A.; Kruglikov, P. A.; Moiseeva, L. N.

    2008-01-01

    Technical recommendations are given on retrofitting the thermal circuits and the power-generating equipment of steam power installations that allows their available capacity to be increased and their reliability and economic efficiency to be improved without the need to make considerable investments.

  15. Engineering measures to ensure reliable operation of the Tugur Tidal Power Station under the severe ice conditions of the Sea of Okhotsk

    SciTech Connect

    Karnovich, V.N.; Vasilevskii, A.G.; Tregub, G.A.; Shatalina, I.N.; Bernshtein, L.B.

    1994-06-01

    The complex energy situation in the Far East, a consequence of the impossibility of constructing reliable nuclear power stations at the present stage, makes the use of the tidal energy of Tugur Bay in the Sea of Okhotsk quite urgent.

  16. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  17. Grid reliability management tools

    SciTech Connect

    Eto, J.; Martinez, C.; Dyer, J.; Budhraja, V.

    2000-10-01

    To summarize, the Consortium for Electric Reliability Technology Solutions (CERTS) is engaged in a multi-year program of public-interest R&D to develop and prototype software tools that will enhance system reliability during the transition to competitive markets. The core philosophy embedded in the design of these tools is the recognition that in the future reliability will be provided through market operations, not the decisions of central planners. Embracing this philosophy calls for tools that: (1) Recognize that the game has moved from machine modeling and engineering analysis to simulating markets to understand the impacts on reliability (and vice versa); (2) Provide real-time data and support information transparency toward enhancing the ability of operators and market participants to quickly grasp, analyze, and act effectively on information; (3) Allow operators, in particular, to measure, monitor, assess, and predict both system performance and the performance of market participants; and (4) Allow rapid incorporation of the latest sensing, data communication, computing, visualization, and algorithmic techniques and technologies.

  18. Stirling machine operating experience

    NASA Technical Reports Server (NTRS)

    Ross, Brad; Dudenhoefer, James E.

    1991-01-01

    Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept, and were not expected to operate for any lengthy period of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered.

  19. Does achievement motivation mediate the semantic achievement priming effect?

    PubMed

    Engeser, Stefan; Baumann, Nicola

    2014-10-01

    The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies. PMID:24820250

  20. RICOR development of the next generation highly reliable rotary cryocooler

    NASA Astrophysics Data System (ADS)

    Regev, Itai; Nachman, Ilan; Livni, Dorit; Riabzev, Sergey; Filis, Avishai; Segal, Victor

    2016-05-01

    Early rotary cryocoolers were designed for lifetimes of a few thousand operating hours. The Ricor K506 model's life expectancy was only 5,000 hours; the next-generation K508 model was designed to achieve 10,000 operating hours in basic conditions, while the modern K508N was designed for 20,000 operating hours. Nowadays, the new challenges in the field of rotary cryocoolers require the development of a new-generation cooler that can compete with linear-cryocooler reliability, achieving a lifetime goal of 30,000 operating hours or more. Such an advanced cryocooler can be used to upgrade existing systems or to serve the new generation of high-temperature detectors currently under development, enabling the cryocooler to work more efficiently in the field. The improvement of rotary cryocooler reliability is based on a deep analysis and understanding of the root failure causes and on finding solutions to reduce bearing wear, using modern materials and lubricants. All of these were taken into consideration during the development of the new-generation rotary coolers. As part of the reliability challenges, a new digital controller was also developed, which enables new options such as discrete control of the operating frequency and can extend the cooler's operating hours through a new control technique. In addition, the digital controller will be able to collect data during cryocooler operation, aiming at end-of-life prediction.

  1. Component Reliability Testing of Long-Life Sorption Cryocoolers

    NASA Technical Reports Server (NTRS)

    Bard, S.; Wu, J.; Karlmann, P.; Mirate, C.; Wade, L.

    1994-01-01

    This paper summarizes ongoing experiments characterizing the ability of critical sorption cryocooler components to achieve highly reliable operation for long-life space missions. Test data obtained over the past several years at JPL are entirely consistent with achieving ten year life for sorption compressors, electrical heaters, container materials, valves, and various sorbent materials suitable for driving 8 to 180 K refrigeration stages. Test results for various compressor systems are reported. Planned future tests necessary to gain a detailed understanding of the sensitivity of cooler performance and component life to operating constraints, design configurations, and fabrication, assembly and handling techniques, are also discussed.

  2. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  3. High Reliability of Care in Orthopedic Surgery: Are We There Yet?

    PubMed

    Anoushiravani, Afshin A; Sayeed, Zain; El-Othmani, Mouhanad M; Wong, Peter K; Saleh, Khaled J

    2016-10-01

    As health care reimbursement models shift from volume-based to value-based models, orthopedic surgeons must provide patients with highly reliable care, while consciously minimizing cost, maintaining quality, and providing timely interventions. An established means of achieving these goals is by implementing a highly reliable care model; however, before such a model can be initiated, a safety culture, robust improvement strategies, and committed leadership are needed. This article discusses interdependent and critical changes required to implement a highly reliable care system. Specific operative protocols now mandated are discussed as they pertain to high reliability of orthopedic care and elimination of wrong-site procedures. PMID:27637655

  4. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

  5. Accelerator Availability and Reliability Issues

    SciTech Connect

    Steve Suhring

    2003-05-01

    Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.

  6. Defining Requirements for Improved Photovoltaic System Reliability

    SciTech Connect

    Maish, A.B.

    1998-12-21

    Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

  7. Crystalline-silicon reliability lessons for thin-film modules

    NASA Astrophysics Data System (ADS)

    Ross, R. G., Jr.

    1985-10-01

    The reliability of crystalline silicon modules has been brought to a high level with lifetimes approaching 20 years, and excellent industry credibility and user satisfaction. The transition from crystalline modules to thin film modules is comparable to the transition from discrete transistors to integrated circuits. New cell materials and monolithic structures will require new device processing techniques, but the package function and design will evolve to a lesser extent. Although there will be new encapsulants optimized to take advantage of the mechanical flexibility and low temperature processing features of thin films, the reliability and life degradation stresses and mechanisms will remain mostly unchanged. Key reliability technologies in common between crystalline and thin film modules include hot spot heating, galvanic and electrochemical corrosion, hail impact stresses, glass breakage, mechanical fatigue, photothermal degradation of encapsulants, operating temperature, moisture sorption, circuit design strategies, product safety issues, and the process required to achieve a reliable product from a laboratory prototype.

  8. Reliability model of a monopropellant auxiliary propulsion system

    NASA Technical Reports Server (NTRS)

    Greenberg, J. S.

    1971-01-01

    A mathematical model and an associated computer code have been developed that compute the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives that the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. That code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to, and utilized by, the reliability model, which establishes the probability of successfully accomplishing the orbit corrections.
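
    The structure the abstract describes, a mission reliability equal to the probability of completing every scheduled correction, can be sketched as follows. The exponential dormant-failure model and per-firing success probability below are assumptions made for illustration, not the paper's actual model:

```python
import math

def mission_reliability(correction_times_h, failure_rate_per_h, p_fire=0.999):
    """P(all orbit corrections succeed), as a toy sketch:
    the system must survive to the time of the last correction
    (constant-hazard assumption) and every firing must succeed
    independently with probability p_fire (a hypothetical value).
    """
    if not correction_times_h:
        return 1.0
    survival = math.exp(-failure_rate_per_h * max(correction_times_h))
    return survival * p_fire ** len(correction_times_h)
```

    Evaluating this after each successive correction time yields a reliability-versus-time history of the kind the model produces from the performance code's timing data.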

  9. Crystalline-silicon reliability lessons for thin-film modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1985-01-01

    The reliability of crystalline silicon modules has been brought to a high level with lifetimes approaching 20 years, and excellent industry credibility and user satisfaction. The transition from crystalline modules to thin film modules is comparable to the transition from discrete transistors to integrated circuits. New cell materials and monolithic structures will require new device processing techniques, but the package function and design will evolve to a lesser extent. Although there will be new encapsulants optimized to take advantage of the mechanical flexibility and low temperature processing features of thin films, the reliability and life degradation stresses and mechanisms will remain mostly unchanged. Key reliability technologies in common between crystalline and thin film modules include hot spot heating, galvanic and electrochemical corrosion, hail impact stresses, glass breakage, mechanical fatigue, photothermal degradation of encapsulants, operating temperature, moisture sorption, circuit design strategies, product safety issues, and the process required to achieve a reliable product from a laboratory prototype.

  10. Virtually simulating the next generation of clean energy technologies: NETL's AVESTAR Center is dedicated to the safe, reliable and efficient operation of advanced energy plants with carbon capture

    SciTech Connect

    Zitney, S.

    2012-01-01

    Imagine using a real-time virtual simulator to learn to fly a space shuttle or rebuild your car's transmission without touching a piece of equipment or getting your hands dirty. Now, apply this concept to learning how to operate and control a state-of-the-art, electricity-producing power plant capable of carbon dioxide (CO{sub 2}) capture. That's what the National Energy Technology Laboratory's (NETL) Advanced Virtual Energy Simulation Training and Research (AVESTAR) Center (www.netl.doe.gov/avestar) is designed to do. Established as part of the Department of Energy's (DOE) initiative to advance new clean energy technology for power generation, the AVESTAR Center focuses primarily on providing simulation-based training for process engineers and energy plant operators, starting with the deployment of a first-of-a-kind operator training simulator for an integrated gasification combined cycle (IGCC) power plant with CO{sub 2} capture. The IGCC dynamic simulator builds on, and reaches beyond, conventional power plant simulators to merge, for the first time, a 'gasification with CO{sub 2} capture' process simulator with a 'combined-cycle' power simulator. Based on Invensys Operations Management's SimSci-Esscor DYNSIM software, the high-fidelity dynamic simulator provides realistic training on IGCC plant operations, including normal and faulted operations, as well as plant start-up, shutdown and power demand load changes. The highly flexible simulator also allows for testing of different types of fuel sources, such as petcoke and biomass, as well as co-firing fuel mixtures. The IGCC dynamic simulator is available at AVESTAR's two locations, NETL (Figure 1) and West Virginia University's National Research Center for Coal and Energy (www.nrcce.wvu.edu), both in Morgantown, W.Va. By offering a comprehensive IGCC training program, AVESTAR aims to develop a workforce well prepared to operate, control and manage commercial-scale gasification-based power plants with CO{sub 2

  11. 14 CFR Appendix P to Part 121 - Requirements for ETOPS and Polar Operations

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... limitations in this appendix. Section I. ETOPS Approvals: Airplanes with Two engines. (a) Propulsion system... demonstrate the ability to achieve and maintain the level of propulsion system reliability, if any, that is...) Following ETOPS operational approval, the operator must monitor the propulsion system reliability for...

  12. 14 CFR Appendix P to Part 121 - Requirements for ETOPS and Polar Operations

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... limitations in this appendix. Section I. ETOPS Approvals: Airplanes with Two engines. (a) Propulsion system... demonstrate the ability to achieve and maintain the level of propulsion system reliability, if any, that is...) Following ETOPS operational approval, the operator must monitor the propulsion system reliability for...

  13. 5,000 h reliable operation of 785nm dual-wavelength DBR-RW diode lasers suitable for Raman spectroscopy and SERDS

    NASA Astrophysics Data System (ADS)

    Sumpf, Bernd; Müller, André; Maiwald, Martin

    2016-03-01

    Monolithic wavelength-stabilized diode lasers, e.g. distributed Bragg reflector (DBR) ridge-waveguide (RW) lasers, are well-suited light sources for compact and portable Raman spectroscopic systems. In the case of in situ and outdoor investigations, the weak Raman lines are often superimposed by daylight, artificial light sources or fluorescence signals from the samples under study. Among others, shifted excitation Raman difference spectroscopy (SERDS) has been demonstrated as a powerful and easy-to-use technique to separate the Raman lines from disturbing background signals. SERDS is based on sequential excitation of the sample with two slightly shifted wavelengths. The Raman lines follow the change in the excitation wavelength, whereas the non-Raman signals remain unchanged. For SERDS, dual-wavelength light sources, e.g. mini-arrays containing two DBR-RW lasers, are required. Moreover, for portable Raman instruments such as handheld devices, robust and reliable excitation light sources with lifetimes > 1,000 h are preferred. In this work, reliability investigations of dual-wavelength DBR-RW mini-arrays over a total test time of 5,000 h are presented. Wavelength stabilization and narrowing of the spectral emission are realized by 10th-order DBR surface gratings defined by i-line wafer-stepper technology. The DBR section has a length of 500 μm, and the devices a total length of 3 mm. The ridge waveguide has a stripe width of 2.2 μm. Maximum output powers up to 215 mW per emitter were measured. Over the whole power range, 95% of the emitted power is within a spectral width of 0.15 nm (2.5 cm-1), which is smaller than the spectral width needed to resolve most Raman lines of solid and liquid samples. In a step-stress test, the devices were tested at 50 mW, followed by 75 mW and finally 100 mW per emitter. Electro-optical and spectral measurements were performed before, during and after the test. All emitters under study did not show any deterioration of their

  14. High-Reliability Health Care: Getting There from Here

    PubMed Central

    Chassin, Mark R; Loeb, Jerod M

    2013-01-01

    Context Despite serious and widespread efforts to improve the quality of health care, many patients still suffer preventable harm every day. Hospitals find improvement difficult to sustain, and they suffer “project fatigue” because so many problems need attention. No hospitals or health systems have achieved consistent excellence throughout their institutions. High-reliability science is the study of organizations in industries like commercial aviation and nuclear power that operate under hazardous conditions while maintaining safety levels that are far better than those of health care. Adapting and applying the lessons of this science to health care offer the promise of enabling hospitals to reach levels of quality and safety that are comparable to those of the best high-reliability organizations. Methods We combined the Joint Commission's knowledge of health care organizations with knowledge from the published literature and from experts in high-reliability industries and leading safety scholars outside health care. We developed a conceptual and practical framework for assessing hospitals’ readiness for and progress toward high reliability. By iterative testing with hospital leaders, we refined the framework and, for each of its fourteen components, defined stages of maturity through which we believe hospitals must pass to reach high reliability. Findings We discovered that the ways that high-reliability organizations generate and maintain high levels of safety cannot be directly applied to today's hospitals. We defined a series of incremental changes that hospitals should undertake to progress toward high reliability. These changes involve the leadership's commitment to achieving zero patient harm, a fully functional culture of safety throughout the organization, and the widespread deployment of highly effective process improvement tools. Conclusions Hospitals can make substantial progress toward high reliability by undertaking several specific

  15. Improve filtration for optimum equipment reliability

    SciTech Connect

    Cervera, S.M.

    1996-01-01

    The introduction 20 years ago of the American Petroleum Institute Standard API-614 as a purchase specification for lubrication, shaft sealing and control oil systems, had a considerable impact and did much to improve system reliability at that time. Today, however, these recommendations regarding filter rating and flushing cleanliness are outdated. Much research in the tribology field correlates clearance size particulate contamination with accelerated component wear, fatigue and performance degradation. Some of these studies demonstrate that by decreasing the population of clearance size particulate in lubrication oils, component life increases exponentially. Knowing the dynamic clearances of a piece of machinery makes it possible, using the ISO 4406 Cleanliness Code, to determine what cleanliness level will minimize contamination-related component wear/fatigue and thus help optimize machinery performance and reliability. Data obtained by the author through random sampling of rotating equipment lube and seal oil systems indicate that the API-614 standard, as it pertains to filtration and flushing, is insufficient to ensure that particulate contamination is maintained to within the levels necessary to achieve optimum equipment reliability and safety, without increasing operating cost. Adopting and practicing the guidelines presented should result in the following benefits: (1) the frequency of bearing, oil pump, mechanical seal, fluid coupling, gearbox and hydraulic control valve failures would be minimized; (2) the mean time between planned maintenance (MTBPM) would be increased. The result will be a substantial increase in safety and cost savings to the operator.
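
    The ISO 4406 code mentioned above expresses particle counts per millilitre (at ≥4 µm, ≥6 µm and ≥14 µm) as scale numbers in which each step doubles the allowed count. A sketch of that mapping; note that the published standard tabulates slightly rounded band limits, so this doubling-rule approximation can differ by one step right at a band boundary:

```python
import math

def iso4406_scale(count_per_ml):
    """Approximate ISO 4406 scale number: band N spans roughly
    2**(N-1)/100 .. 2**N/100 particles per mL (doubling rule)."""
    return max(0, math.ceil(math.log2(100.0 * count_per_ml)))

def iso4406_code(c4um, c6um, c14um):
    """Three-part cleanliness code from counts/mL at >=4, >=6, >=14 um."""
    return "/".join(str(iso4406_scale(c)) for c in (c4um, c6um, c14um))
```

    For example, counts of 300/70/7 particles per mL map to a code of 15/13/10; halving the ≥14 µm count drops the last digit to 9, the kind of one-band reduction in clearance-size particulate that the wear studies cited above associate with markedly longer component life.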

  16. HEPEX - achievements and challenges!

    NASA Astrophysics Data System (ADS)

    Pappenberger, Florian; Ramos, Maria-Helena; Thielen, Jutta; Wood, Andy; Wang, Qj; Duan, Qingyun; Collischonn, Walter; Verkade, Jan; Voisin, Nathalie; Wetterhall, Fredrik; Vuillaume, Jean-Francois Emmanuel; Lucatero Villasenor, Diana; Cloke, Hannah L.; Schaake, John; van Andel, Schalk-Jan

    2014-05-01

HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end-users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as: "to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors." The applications of hydrological ensemble predictions span large spatio-temporal scales, ranging from short-term and localized predictions to global climate change and regional modeling. Within the HEPEX community, information is shared through its blog (www.hepex.org), meetings, testbeds and intercomparison experiments, as well as project reports. Key questions of HEPEX are: * What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? * How should existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? * What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? This year HEPEX celebrates its 10th anniversary, and this poster will present a review of the main operational and research achievements and challenges prepared by HEPEX contributors on data assimilation, post-processing of hydrologic predictions, forecast verification, communication and use of probabilistic forecasts in decision-making. Additionally, we will present the most recent activities implemented by HEPEX and illustrate how everyone can join the community and participate in the development of new approaches in hydrologic ensemble prediction.

  17. Broad Negative Thermal Expansion Operation-Temperature Window Achieved by Adjusting Fe-Fe Magnetic Exchange Coupling in La(Fe,Si)13 Compounds.

    PubMed

    Li, Shaopeng; Huang, Rongjin; Zhao, Yuqiang; Li, Wen; Wang, Wei; Huang, Chuanjun; Gong, Pifu; Lin, Zheshuai; Li, Laifeng

    2015-08-17

Cubic La(Fe,Si)13-based compounds have recently been developed as promising negative thermal expansion (NTE) materials, but their narrow NTE operation-temperature window (∼110 K) restricts actual applications. In this work, we demonstrate that the NTE operation-temperature window of LaFe(13-x)Si(x) can be significantly broadened by adjusting Fe-Fe magnetic exchange coupling as x ranges from 2.8 to 3.1. In particular, the NTE operation-temperature window of LaFe10.1Si2.9 is extended to 220 K. More attractively, the coefficients of thermal expansion of LaFe10.0Si3.0 and LaFe9.9Si3.1 are homogeneous over an NTE operation-temperature range of about 200 K, which is especially valuable for the stability of fabricated devices. Further experimental characterization combined with first-principles studies reveals that the tetragonal phase is gradually introduced into the cubic phase as the Si content increases, thereby modifying the Fe-Fe interatomic distance. The reduction of the overall Fe-Fe magnetic exchange interactions accounts for the broadening of the NTE operation-temperature window in LaFe(13-x)Si(x). PMID:26196377

  18. Recent advances in computational structural reliability analysis methods

    NASA Technical Reports Server (NTRS)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-01-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
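For the simplest case described above, a single failure mode with independent normally distributed resistance R and load S, the failure probability has a closed form through the reliability index beta, and a Monte Carlo estimate can be checked against it. A sketch with illustrative parameters, not taken from the paper:

```python
import math
import random

def failure_probability_mc(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    # Monte Carlo estimate of Pf = P(resistance R < load S),
    # with R and S independent normal random variables.
    rng = random.Random(seed)
    fails = sum(rng.gauss(mu_r, sd_r) < rng.gauss(mu_s, sd_s) for _ in range(n))
    return fails / n

def failure_probability_exact(mu_r, sd_r, mu_s, sd_s):
    # Closed form for the normal case: beta = (mu_R - mu_S)/sqrt(sd_R^2 + sd_S^2),
    # Pf = Phi(-beta), with Phi the standard normal CDF.
    beta = (mu_r - mu_s) / math.sqrt(sd_r**2 + sd_s**2)
    return 0.5 * math.erfc(beta / math.sqrt(2))
```

With mu_R=100, sd_R=10, mu_S=60, sd_S=15, beta is about 2.22 and Pf about 1.3%; the sampled estimate converges to the closed form as n grows. Real structural systems with multiple interacting failure modes, as the abstract notes, require far more sophisticated methods than this single-mode sketch.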

  19. Influence of shutdown phases on the microbial community composition and their effects on the operational reliability in a geothermal plant in the North German Basin

    NASA Astrophysics Data System (ADS)

    Westphal, Anke; Lerm, Stephanie; Miethling-Graff, Rona; Seibt, Andrea; Wolfgramm, Markus; Würdemann, Hilke

    2014-05-01

Microbial activity can influence the dissolution and/or precipitation of minerals, as well as corrosion phenomena that may lead to a lower efficiency of engineered systems. To enhance the understanding of these processes, the microbial biocenosis in fluids produced from the cold well of a deep geothermal heat store located in the North German Basin (NGB) was characterized during normal plant operation and immediately after plant downtime phases. The microbial community composition was dominated by three different genera of sulphate-reducing bacteria (SRB) and fermentative Halanaerobiaceae in the 46 °C fluids during regular operation, whereas after shutdown phases sequences of sulphur-oxidizing bacteria (SOB) were additionally detected. The detection of SOB is regarded as an indication of oxygen introduction into the well during the downtime phase. This corresponded to the higher redox potential of fluids taken directly after the restart of fluid production in the cold well. In addition to an extremely high particle loading rate after plant restart, a higher DNA content as well as an increase of specific gene copy numbers of SRB and SOB by factors of 10⁴ and 10⁵, respectively, were observed. Evidently, stagnant conditions favored the enrichment of biomass and particles in the well. This is supported by the determination of a higher sulphate and hydrogen sulphide content in the fluids taken initially after plant restart. With increasing fluid production during the restart, SRB-specific gene copy numbers decreased much more slowly than SOB-specific gene copy numbers, which leads to the assumption that SOB abundance is limited to the near-wellbore area. Besides the absence of particle removal by fluid flow and the deposition of particles by sedimentation during the shutdown phase, oxygen introduction and subsequent activity of SOB may also have favored microbially induced formation of precipitates in the well. It is quite likely that the interaction of SRB and SOB

  20. Load Control System Reliability

    SciTech Connect

    Trudnowski, Daniel

    2015-04-03

This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred in August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also included matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: (1) real-time power-system load control methodologies and (2) power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant directly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  1. Developing and establishing the validity and reliability of the perceptions toward Aviation Safety Action Program (ASAP) and Line Operations Safety Audit (LOSA) questionnaires

    NASA Astrophysics Data System (ADS)

    Steckel, Richard J.

Aviation Safety Action Program (ASAP) and Line Operations Safety Audit (LOSA) are voluntary safety reporting programs developed by the Federal Aviation Administration (FAA) to assist air carriers in discovering and fixing threats, errors and undesired aircraft states during normal flights that could result in a serious or fatal accident. These programs depend on the voluntary participation of and reporting by air carrier pilots to be successful. The purpose of the study was to develop and validate a measurement scale of U.S. air carrier pilots' perceived benefits of and barriers to participating in ASAP and LOSA programs. Data from these surveys could be used to make changes to, or correct pilot misperceptions of, these programs to improve participation and the flow of data. ASAP and LOSA a priori models were developed based on previous research in aviation and healthcare. ASAP and LOSA paper surveys were sent to 60,000 current U.S. air carrier pilots selected at random from an FAA database of pilot certificates. Two thousand usable ASAP and 1,970 usable LOSA surveys were returned and analyzed using confirmatory factor analysis. Analysis of the data using confirmatory factor analysis and model generation resulted in a five-factor ASAP model (Ease of Use, Value, Improve, Trust and Risk) and a five-factor LOSA model (Value, Improve, Program Trust, Risk and Management Trust). Because the ASAP and LOSA data were not normally distributed, bootstrapping was used. While both final models exhibited acceptable fit with approximate fit indices, the exact-fit hypothesis and the Bollen-Stine p value indicated possible model mis-specification for both the ASAP and LOSA models.

  2. Reliable timing systems for computer controlled accelerators

    NASA Astrophysics Data System (ADS)

    Knott, Jürgen; Nettleton, Robert

    1986-06-01

    Over the past decade the use of computers has set new standards for control systems of accelerators with ever increasing complexity coupled with stringent reliability criteria. In fact, with very slow cycling machines or storage rings any erratic operation or timing pulse will cause the loss of precious particles and waste hours of time and effort of preparation. Thus, for the CERN linac and LEAR (Low Energy Antiproton Ring) timing system reliability becomes a crucial factor in the sense that all components must operate practically without fault for very long periods compared to the effective machine cycle. This has been achieved by careful selection of components and design well below thermal and electrical limits, using error detection and correction where possible, as well as developing "safe" decoding techniques for serial data trains. Further, consistent structuring had to be applied in order to obtain simple and flexible modular configurations with very few components on critical paths and to minimize the exchange of information to synchronize accelerators. In addition, this structuring allows the development of efficient strategies for on-line and off-line fault diagnostics. As a result, the timing system for Linac 2 has, so far, been operating without fault for three years, the one for LEAR more than one year since its final debugging.
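The "safe" decoding of serial data trains with error detection and correction mentioned above can be illustrated with a Hamming(7,4) code, which corrects any single-bit error in a 7-bit word. This is a generic illustration; the abstract does not specify which coding scheme the CERN timing system actually used:

```python
def hamming74_encode(d):
    # d: list of 4 data bits [d1, d2, d3, d4].
    # Returns the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4],
    # where each parity bit covers the positions whose 1-based index
    # has the corresponding binary digit set.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recomputes the three parity checks; the syndrome, read as a binary
    # number, is the 1-based position of a single flipped bit (0 = clean).
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c[pos - 1] ^= 1  # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]
```

Any single corrupted bit in a transmitted timing word is thus both detected and repaired at the receiver, which is the kind of protection that lets erratic pulses be rejected rather than acted upon.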

  3. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
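Reliability growth under random testing and debugging, as discussed above, is commonly described by a nonhomogeneous Poisson process such as the Goel-Okumoto model. This is one standard choice for illustration, not necessarily the model used in the experiment:

```python
import math

def go_mean_failures(a, b, t):
    # Goel-Okumoto mean value function: expected cumulative failures by
    # time t, where a = total expected failures and b = detection rate.
    return a * (1.0 - math.exp(-b * t))

def go_reliability(a, b, t, x):
    # Probability of no failure in (t, t+x] after testing to time t:
    # R(x|t) = exp(-(m(t+x) - m(t))).
    return math.exp(-(go_mean_failures(a, b, t + x) - go_mean_failures(a, b, t)))
```

As testing time t grows, m(t) saturates toward a and the conditional reliability R(x|t) rises, which is exactly the reliability-growth behavior the simulated failure data are meant to quantify.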

  4. Blade Reliability Collaborative

    SciTech Connect

    Ashwill, Thomas D.; Ogilvie, Alistair B.; Paquette, Joshua A.

    2013-04-01

    The Blade Reliability Collaborative (BRC) was started by the Wind Energy Technologies Department of Sandia National Laboratories and DOE in 2010 with the goal of gaining insight into planned and unplanned O&M issues associated with wind turbine blades. A significant part of BRC is the Blade Defect, Damage and Repair Survey task, which will gather data from blade manufacturers, service companies, operators and prior studies to determine details about the largest sources of blade unreliability. This report summarizes the initial findings from this work.

  5. How to Conduct Multimethod Field Studies in the Operating Room: The iPad Combined With a Survey App as a Valid and Reliable Data Collection Tool

    PubMed Central

    Tscholl, David W; Weiss, Mona; Spahn, Donat R

    2016-01-01

    Background Tablet computers such as the Apple iPad are progressively replacing traditional paper-and-pencil-based data collection. We combined the iPad with the ready-to-use survey software, iSurvey (from Harvestyourdata), to create a straightforward tool for data collection during the Anesthesia Pre-Induction Checklist (APIC) study, a hospital-wide multimethod intervention study involving observation of team performance and team member surveys in the operating room (OR). Objective We aimed to provide an analysis of the factors that led to the use of the iPad- and iSurvey-based tool for data collection, illustrate our experiences with the use of this data collection tool, and report the results of an expert survey about user experience with this tool. Methods We used an iPad- and iSurvey-based tool to observe anesthesia inductions conducted by 205 teams (N=557 team members) in the OR. In Phase 1, expert raters used the iPad- and iSurvey-based tool to rate team performance during anesthesia inductions, and anesthesia team members were asked to indicate their perceptions after the inductions. In Phase 2, we surveyed the expert raters about their perceptions regarding the use of the iPad- and iSurvey-based tool to observe, rate, and survey teams in the ORs. Results The results of Phase 1 showed that training data collectors on the iPad- and iSurvey-based data collection tool was effortless and there were no serious problems during data collection, upload, download, and export. Interrater agreement of the combined data collection tool was found to be very high for the team observations (median Fleiss’ kappa=0.88, 95% CI 0.78-1.00). The results of the follow-up expert rater survey (Phase 2) showed that the raters did not prefer a paper-and-pencil-based data collection method they had used during other earlier studies over the iPad- and iSurvey-based tool (median response 1, IQR 1-1; 1=do not agree, 2=somewhat disagree, 3=neutral, 4=somewhat agree, 5=fully agree). 
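The interrater agreement reported above, Fleiss' kappa, can be computed directly from a subjects-by-categories count matrix. A self-contained sketch (the example data below is invented, not the study's):

```python
def fleiss_kappa(counts):
    # counts: one row per rated subject, one column per category; each cell
    # is how many of the n raters chose that category. Every row must sum
    # to the same number of raters n.
    N = len(counts)
    n = sum(counts[0])
    k = len(counts[0])
    # Mean observed per-subject agreement.
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts) / N
    # Chance agreement from the marginal category proportions.
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement across raters yields kappa = 1, while agreement no better than chance yields values at or below 0; the study's reported median of 0.88 indicates very high agreement.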

  6. Synaptic plasticity and memory functions achieved in a WO3-x-based nanoionics device by using the principle of atomic switch operation.

    PubMed

    Yang, Rui; Terabe, Kazuya; Yao, Yiping; Tsuruoka, Tohru; Hasegawa, Tsuyoshi; Gimzewski, James K; Aono, Masakazu

    2013-09-27

    A compact neuromorphic nanodevice with inherent learning and memory properties emulating those of biological synapses is the key to developing artificial neural networks rivaling their biological counterparts. Experimental results showed that memorization with a wide time scale from volatile to permanent can be achieved in a WO3-x-based nanoionics device and can be precisely and cumulatively controlled by adjusting the device's resistance state and input pulse parameters such as the amplitude, interval, and number. This control is analogous to biological synaptic plasticity including short-term plasticity, long-term potentiation, transition from short-term memory to long-term memory, forgetting processes for short- and long-term memory, learning speed, and learning history. A compact WO3-x-based nanoionics device with a simple stacked layer structure should thus be a promising candidate for use as an inorganic synapse in artificial neural networks due to its striking resemblance to the biological synapse. PMID:23999098

  7. Synaptic plasticity and memory functions achieved in a WO3-x-based nanoionics device by using the principle of atomic switch operation

    NASA Astrophysics Data System (ADS)

    Yang, Rui; Terabe, Kazuya; Yao, Yiping; Tsuruoka, Tohru; Hasegawa, Tsuyoshi; Gimzewski, James K.; Aono, Masakazu

    2013-09-01

    A compact neuromorphic nanodevice with inherent learning and memory properties emulating those of biological synapses is the key to developing artificial neural networks rivaling their biological counterparts. Experimental results showed that memorization with a wide time scale from volatile to permanent can be achieved in a WO3-x-based nanoionics device and can be precisely and cumulatively controlled by adjusting the device’s resistance state and input pulse parameters such as the amplitude, interval, and number. This control is analogous to biological synaptic plasticity including short-term plasticity, long-term potentiation, transition from short-term memory to long-term memory, forgetting processes for short- and long-term memory, learning speed, and learning history. A compact WO3-x-based nanoionics device with a simple stacked layer structure should thus be a promising candidate for use as an inorganic synapse in artificial neural networks due to its striking resemblance to the biological synapse.

  8. OP40POST-OPERATIVE T2 HYPERINTENSITY IN PERI RESECTION MARGIN FOLLOWING AWAKE MACROSCOPIC INTRAGYRAL TOTAL RESECTION OF LOW GRADE GLIOMA IS NOT A RELIABLE MARKER OF RESIDUAL TUMOUR

    PubMed Central

    Khor, Huai Hao; Bryne, Paul; Basu, Surajit

    2014-01-01

INTRODUCTION: Awake craniotomy for resection of tumours from eloquent brain areas is an established technique. We describe six-year outcome data for awake surgery on radiologically low-grade glial-series tumours resected using natural subpial and vascular intergyral planes. We describe immediate post-operative radiological findings and their correlation with long-term outcome. METHOD: This is a retrospective analysis of the clinical and radiological records of awake craniotomies undertaken between 2007-2014. Patients were identified from operative department records and radiological data were retrieved from the hospital's electronic image archive. A correlative analysis was done between immediate post-operative T2 changes and long-term tumour progression. RESULTS: 38 patients underwent awake craniotomy, with an average age of 41.1 years (range 21-79). 6 patients have died (average survival 2.69 years, range 1-84 months) due to tumour progression. 5 of these had an initial diagnosis of grade 3 tumour or above; 1 patient had malignant melanoma. 32 (85%) patients have survived the survey period (2.38 years, range 1-72 months). On MRI, most patients had post-operative T2 hyperintensity around the resection margins. The T2 hyperintensity persisted in 6 patients. This correlated with either a peri-operative decision to sub-totally resect or subsequent tumour progression. In the other 32 patients the T2 changes either reduced or remained static. Histology of these patients showed 4 grade 2, 22 grade 3, and 6 grade 4 tumours. CONCLUSION: T2 changes in peri-resection brain parenchyma following macroscopic complete resection of low-grade tumours using awake techniques are not a reliable marker of residual tumour or recurrence. 85% of such changes resolved.

  9. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the behavior of the fastener and joining-part materials, the structural geometry of the joining components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve reliability and verification testing.

  10. Compact, Reliable EEPROM Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2010-01-01

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault

  11. Novel test fixture for collecting microswitch reliability data

    NASA Astrophysics Data System (ADS)

    Edelmann, Thomas A.; Coutu, Ronald A., Jr.; Starman, LaVern A.

    2010-02-01

Microelectromechanical systems (MEMS) are an important enabling technology for reducing electronic component geometries and device power consumption. An example of MEMS technology, used in radio frequency (RF) circuits and systems, is the microswitch. Although the operation of microswitches is relatively simple, they are plagued by poor reliability: they must operate over 100 billion cycles. Improvements in the mechanical design of the microswitch have helped to increase reliability, but further improvements are necessary. To accomplish this, research needs to be conducted on the actual contact surfaces to investigate the mechanical, thermal and electrical phenomena that affect reliability. The focus of this paper is the development of a unique high-lifecycle test fixture capable of simultaneous measurement of contact resistance and contact force. By incorporating a high-resonance force sensor, cycle rates reaching 3 kHz will be achieved, enabling researchers to conduct a wide range of reliability studies. The fixture will be isolated from vibrations and housed in a dry-box enclosure to minimize contamination. The test fixture will be automated with control and data-acquisition instrumentation to optimize data collection and test repeatability. It is predicted that this new test fixture will enable significant work to improve the reliability of MEMS microswitches. Several tests were conducted using components of the new test fixture. Preliminary results indicate feasibility and support the need for continuing development of this new test fixture.

  12. Reliability and Maintainability (RAM) Training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more: it should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  13. Reliability analysis framework for computer-assisted medical decision systems

    SciTech Connect

    Habas, Piotr A.; Zurada, Jacek M.; Elmaghraby, Adel S.; Tourassi, Georgia D.

    2007-02-15

    We present a technique that enhances computer-assisted decision (CAD) systems with the ability to assess the reliability of each individual decision they make. Reliability assessment is achieved by measuring the accuracy of a CAD system with known cases similar to the one in question. The proposed technique analyzes the feature space neighborhood of the query case to dynamically select an input-dependent set of known cases relevant to the query. This set is used to assess the local (query-specific) accuracy of the CAD system. The estimated local accuracy is utilized as a reliability measure of the CAD response to the query case. The underlying hypothesis of the study is that CAD decisions with higher reliability are more accurate. The above hypothesis was tested using a mammographic database of 1337 regions of interest (ROIs) with biopsy-proven ground truth (681 with masses, 656 with normal parenchyma). Three types of decision models, (i) a back-propagation neural network (BPNN), (ii) a generalized regression neural network (GRNN), and (iii) a support vector machine (SVM), were developed to detect masses based on eight morphological features automatically extracted from each ROI. The performance of all decision models was evaluated using the Receiver Operating Characteristic (ROC) analysis. The study showed that the proposed reliability measure is a strong predictor of the CAD system's case-specific accuracy. Specifically, the ROC area index for CAD predictions with high reliability was significantly better than for those with low reliability values. This result was consistent across all decision models investigated in the study. The proposed case-specific reliability analysis technique could be used to alert the CAD user when an opinion that is unlikely to be reliable is offered. 
The technique can be easily deployed in the clinical environment because it is applicable with a wide range of classifiers regardless of their structure and it requires neither additional
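The neighborhood-based reliability idea described above can be sketched as follows: estimate the reliability of a CAD decision as the classifier's accuracy on the k known cases nearest the query in feature space. The Euclidean distance and the fixed k used here are illustrative assumptions, not details taken from the paper:

```python
def local_reliability(query, known_cases, k=5):
    """Query-specific reliability of a classifier's decision.

    known_cases: list of (feature_vector, predicted_label, true_label)
    tuples for cases with known ground truth. The returned value is the
    classifier's accuracy on the k cases nearest to the query.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors (an assumption;
        # the paper's actual neighborhood selection may differ).
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(known_cases, key=lambda c: dist(c[0], query))[:k]
    correct = sum(1 for _, pred, truth in nearest if pred == truth)
    return correct / len(nearest)
```

A low returned value flags a query that falls in a region of feature space where the classifier has historically been inaccurate, which is precisely the alert the abstract proposes showing to the CAD user.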

  14. Ultimately Reliable Pyrotechnic Systems

    NASA Technical Reports Server (NTRS)

    Scott, John H.; Hinkel, Todd

    2015-01-01

This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of human spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. 
Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  15. Photovoltaic power system reliability considerations

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.

    1980-01-01

    An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

  16. LCOGT network observatory operations

    NASA Astrophysics Data System (ADS)

    Pickles, Andrew; Hjelstrom, Annie; Boroson, Todd; Burleson, Ben; Conway, Patrick; De Vera, Jon; Elphick, Mark; Haworth, Brian; Rosing, Wayne; Saunders, Eric; Thomas, Doug; White, Gary; Willis, Mark; Walker, Zach

    2014-08-01

We describe the operational capabilities of the Las Cumbres Observatory Global Telescope Network. We summarize our hardware and software for maintaining and monitoring network health. We focus on methodologies that utilize the automated system to monitor the availability of sites, instruments and telescopes, monitor performance, permit automatic recovery, and provide automatic error reporting. The same jTCS control system is used on telescopes of apertures 0.4m, 0.8m, 1m and 2m, and for multiple instruments on each. We describe our network operational model, including workloads, and illustrate our current tools and operational performance indicators, including telemetry and metrics reporting from on-site reductions. The system was conceived and designed to establish effective, reliable autonomous operations, with automatic monitoring and recovery - minimizing human intervention while maintaining quality. We illustrate the extent to which we have achieved that goal.

  17. A Novel Hybridization of Applied Mathematical, Operations Research and Risk-based Methods to Achieve an Optimal Solution to a Challenging Subsurface Contamination Problem

    NASA Astrophysics Data System (ADS)

    Johnson, K. D.; Pinder, G. F.

    2013-12-01

The objective of the project is the creation of a new, computationally based approach to the collection, evaluation and use of data for the purpose of determining optimal strategies for investment in the remediation of contaminant source areas and similar environmental problems. The research focuses on the use of existing mathematical tools assembled in a unique fashion. The area of application of this new capability is optimal (least-cost) groundwater contamination source identification; we wish to identify the physical environments wherein it may be cost-prohibitive to identify a contaminant source, determine the optimal strategy to protect the environment from additional insult, and formulate strategies for cost-effective environmental restoration. The computational underpinnings of the proposed approach encompass the integration of several known applied-mathematical tools into a unique whole. The resulting tool integration achieves the following: 1) simulate groundwater flow and contaminant transport under uncertainty, that is, when physical parameters such as hydraulic conductivity are known to be described by a random field; 2) define such a random field from available field data, or provide insight into the sampling strategy needed to create such a field; 3) incorporate subjective information, such as the opinions of experts on the importance of factors such as locations of waste landfills; 4) optimize a search strategy for finding a potential source location, and optimally combine field information with model results to provide the best possible representation of the mean contaminant field and its geostatistics. Our approach combines in a symbiotic manner methodologies found in numerical simulation, random field analysis, Kalman filtering, fuzzy set theory and search theory. 
Testing the algorithm for this stage of the work, we will focus on fabricated field situations wherein we can a priori specify the degree of uncertainty associated with the
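Item 4 above, optimally combining field measurements with model results, is the classic Kalman update. The scalar sketch below is illustrative only; the function name, concentrations, and variances are invented and are not taken from the study:

```python
# Illustrative scalar Kalman update: fuse a model-predicted contaminant
# concentration (the prior) with a field measurement. All numbers invented.

def kalman_update(prior_mean, prior_var, measurement, meas_var):
    """Combine a model prediction with a field measurement at one location."""
    gain = prior_var / (prior_var + meas_var)          # weight on the data
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var                # uncertainty shrinks
    return post_mean, post_var

# Model says 12 mg/L with large uncertainty; a well sample reads 9 mg/L.
mean, var = kalman_update(12.0, 4.0, 9.0, 1.0)
print(mean, var)  # pulled strongly toward the measurement, roughly 9.6 and 0.8
```

The gain weights the measurement by the relative uncertainty of the model prior, which is how sparse well samples can correct a simulated mean contaminant field.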

  18. Long-Term (Six Years) Clinical Outcome Discrimination of Patients in the Vegetative State Could be Achieved Based on the Operational Architectonics EEG Analysis: A Pilot Feasibility Study.

    PubMed

    Fingelkurts, Andrew A; Fingelkurts, Alexander A; Bagnato, Sergio; Boccagni, Cristina; Galardi, Giuseppe

    2016-01-01

Electroencephalogram (EEG) recordings are increasingly used to evaluate patients with disorders of consciousness (DOC) or to assess their prognosis in the short-term perspective. However, there is a lack of information concerning the effectiveness of EEG in classifying long-term (many years) outcome in chronic DOC patients. Here we tested whether EEG operational architectonics parameters (geared towards detection of the consciousness phenomenon rather than neurophysiological processes) could be useful for distinguishing the very long-term (6 years) clinical outcome of DOC patients whose EEGs were registered within 3 months post-injury. The obtained results suggest that EEG recorded at the third month after sustaining brain damage may contain useful information on the long-term outcome of patients in the vegetative state: it could discriminate patients who remain in a persistent vegetative state from patients who reach a minimally conscious state or even recover full consciousness in the long-term (6 years) perspective post-injury. These findings, if confirmed in further studies, may be pivotal for long-term planning of clinical care, rehabilitative programs, medical-legal decisions concerning the patients, and policy makers. PMID:27347266

  19. Long-Term (Six Years) Clinical Outcome Discrimination of Patients in the Vegetative State Could be Achieved Based on the Operational Architectonics EEG Analysis: A Pilot Feasibility Study

    PubMed Central

    Fingelkurts, Andrew A.; Fingelkurts, Alexander A.; Bagnato, Sergio; Boccagni, Cristina; Galardi, Giuseppe

    2016-01-01

Electroencephalogram (EEG) recordings are increasingly used to evaluate patients with disorders of consciousness (DOC) or to assess their prognosis in the short-term perspective. However, there is a lack of information concerning the effectiveness of EEG in classifying long-term (many years) outcome in chronic DOC patients. Here we tested whether EEG operational architectonics parameters (geared towards detection of the consciousness phenomenon rather than neurophysiological processes) could be useful for distinguishing the very long-term (6 years) clinical outcome of DOC patients whose EEGs were registered within 3 months post-injury. The obtained results suggest that EEG recorded at the third month after sustaining brain damage may contain useful information on the long-term outcome of patients in the vegetative state: it could discriminate patients who remain in a persistent vegetative state from patients who reach a minimally conscious state or even recover full consciousness in the long-term (6 years) perspective post-injury. These findings, if confirmed in further studies, may be pivotal for long-term planning of clinical care, rehabilitative programs, medical-legal decisions concerning the patients, and policy makers. PMID:27347266

  20. Condensate polishers add operating reliability and flexibility

    SciTech Connect

    Layman, C.M.; Bennett, L.L.

    2008-08-15

Many of today's advanced steam generators favour either an all-volatile treatment or oxygenated treatment chemistry programme, both of which require strict maintenance of an ultra-pure boiler feedwater or condensate system. Those requirements are often at odds with the lower-quality water sources, such as greywater, available for plant makeup and cooling water. Adding a condensate polisher can be a simple, cost-effective solution. 4 figs.

  1. Leadership Issues: Raising Achievement.

    ERIC Educational Resources Information Center

    Horsfall, Chris, Ed.

    This document contains five papers examining the meaning and operation of leadership as a variable affecting student achievement in further education colleges in the United Kingdom. "Introduction" (Chris Horsfall) discusses school effectiveness studies' findings regarding the relationship between leadership and effective schools, distinguishes…

  2. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
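The aggregation idea can be pictured with a toy sketch, assuming a nested series/parallel architecture description; the abstract does not give RMG's actual input format, so the data structure and component names here are invented:

```python
# Hypothetical sketch of aggregating low-level reliability figures into a
# system-level reliability from an architecture description.

def block_reliability(block, component_r):
    """Recursively evaluate a nested series/parallel architecture.

    block is either a component name, or a tuple
    ("series" | "parallel", [sub-blocks...]).
    """
    if isinstance(block, str):                       # leaf: one component
        return component_r[block]
    kind, children = block
    rs = [block_reliability(c, component_r) for c in children]
    if kind == "series":                             # all must work
        prod = 1.0
        for r in rs:
            prod *= r
        return prod
    if kind == "parallel":                           # any one suffices
        q = 1.0
        for r in rs:
            q *= (1.0 - r)                           # probability all fail
        return 1.0 - q
    raise ValueError(kind)

# Example: a sensor in series with two redundant controllers.
component_r = {"sensor": 0.99, "ctrlA": 0.95, "ctrlB": 0.95}
arch = ("series", ["sensor", ("parallel", ["ctrlA", "ctrlB"])])
print(round(block_reliability(arch, component_r), 6))  # 0.99 * (1 - 0.05**2)
```

Redundancy lifts the controller pair well above either controller alone, which is exactly the kind of conclusion an automatically aggregated model makes cheap to explore.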

  3. Stirling machine operating experience

    SciTech Connect

    Ross, B.; Dudenhoefer, J.E.

    1994-09-01

Numerous Stirling machines have been built and operated, but the operating experience of these machines is not well known. It is important to examine this operating experience in detail, because it largely substantiates the claim that Stirling machines are capable of reliable and lengthy operating lives. The amount of data that exists is impressive, considering that many of the machines that have been built are developmental machines intended to show proof of concept and are not expected to operate for lengthy periods of time. Some Stirling machines (typically free-piston machines) achieve long life through non-contact bearings, while other Stirling machines (typically kinematic) have achieved long operating lives through regular seal and bearing replacements. In addition to engine and system testing, life testing of critical components is also considered. The record in this paper is not complete, due to the reluctance of some organizations to release operational data and because several organizations were not contacted. The authors intend to repeat this assessment in three years, hoping for even greater participation.

  4. Reliability in the design phase

    SciTech Connect

    Siahpush, A.S.; Hills, S.W.; Pham, H. ); Majumdar, D. )

    1991-12-01

A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  5. Reliability in the design phase

    SciTech Connect

    Siahpush, A.S.; Hills, S.W.; Pham, H.; Majumdar, D.

    1991-12-01

A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  6. Operating Experience of the Tritium Laboratory at CRL

    SciTech Connect

    Gallagher, C.L.; McCrimmon, K.D.

    2005-07-15

    The Chalk River Laboratories Tritium Laboratory has been operating safely and reliably for over 20 years. Safe operations are achieved through proper management, supervision, training and using approved operating procedures and techniques. Reliability is achieved through appropriate equipment selection, routine equipment surveillance testing and routine preventative maintenance. This paper summarizes the laboratory's standard operating protocols and formal compliance programs followed to ensure safe operations. The paper will also review the general set-up of the laboratory and will focus on the experience gained with the operation of various types of equipment such as tritium monitors, tritium analyzers, pumps, purification systems and other systems used in the laboratory during its 20 years of operation.

  7. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  8. Improve relief valve reliability

    SciTech Connect

    Nelson, W.E.

    1993-01-01

This paper reports on how careful evaluation of safety relief valves and their service conditions can improve reliability and permit more time between tests. Some factors that aid in achieving long-run results are: use of valves suitable for the service, attention to the design of the relieving system (including the use of block valves), and close attention to repair procedures. Use these procedures for each installation, applying good engineering practices. The Clean Air Act of 1990 and other legislation limiting allowable fugitive emissions in a hydrocarbon processing plant will greatly impact safety relief valve installations. The normal leakage rate from a relief valve will require that it be connected to a closed vent system tied to a recovery or control device. Tying the outlet of an existing valve into a header system can cause accelerated corrosion and operating difficulties. The reliability of many existing safety relief valves may be compromised when they are connected to an outlet header without following good engineering practices. The law has been enacted, but not all of the rules have been promulgated.

  9. Can There Be Reliability without "Reliability?"

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2004-01-01

    An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and…

  10. HELIOS Critical Design Review: Reliability

    NASA Technical Reports Server (NTRS)

    Benoehr, H. C.; Herholz, J.; Prem, H.; Mann, D.; Reichert, L.; Rupp, W.; Campbell, D.; Boettger, H.; Zerwes, G.; Kurvin, C.

    1972-01-01

This paper presents the Helios Critical Design Review on Reliability, held October 16-20, 1972. The topics include: 1) Reliability Requirement; 2) Reliability Apportionment; 3) Failure Rates; 4) Reliability Assessment; 5) Reliability Block Diagram; and 6) Reliability Information Sheet.

  11. Business of reliability

    NASA Astrophysics Data System (ADS)

    Engel, Pierre

    1999-12-01

The presentation is organized around three themes: (1) The decrease in reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for a return on investment conflicts with the traditionally low-volume use of data in most applications. Constant access to data sources presupposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  12. Epistemological Beliefs and Academic Achievement

    ERIC Educational Resources Information Center

    Arslantas, Halis Adnan

    2016-01-01

    This study aimed to identify the relationship between teacher candidates' epistemological beliefs and academic achievement. The participants of the study were 353 teacher candidates studying their fourth year at the Education Faculty. The Epistemological Belief Scale was used which adapted to Turkish through reliability and validity work by…

  13. Promoting health care safety through training high reliability teams.

    PubMed

    Wilson, K A; Burke, C S; Priest, H A; Salas, E

    2005-08-01

    Many organizations have been using teams as a means of achieving organizational outcomes (such as productivity and safety). Research has indicated that teams, especially those operating in complex environments, are not always effective. There is a subset of organizations in which teams operate that are able to balance effectiveness and safety despite the complexities of the environment (for example, aviation, nuclear power). These high reliability organizations (HROs) have begun to be examined as a model for those in other complex domains, such as health care, that strive to reach a status of high reliability. In this paper we analyse the components leading to the effectiveness of HROs by examining the teams that comprise them. We use a systems perspective to uncover the behavioral markers by which high reliability teams (HRTs) are able to uphold the values of their parent organizations, thereby promoting safety. Using these markers, we offer guidelines and developmental strategies that will help the healthcare community to shift more quickly to high reliability status by not focusing solely on the organizational level.

  14. Promoting health care safety through training high reliability teams

    PubMed Central

    Wilson, K; Burke, C; Priest, H; Salas, E

    2005-01-01

    

 Many organizations have been using teams as a means of achieving organizational outcomes (such as productivity and safety). Research has indicated that teams, especially those operating in complex environments, are not always effective. There is a subset of organizations in which teams operate that are able to balance effectiveness and safety despite the complexities of the environment (for example, aviation, nuclear power). These high reliability organizations (HROs) have begun to be examined as a model for those in other complex domains, such as health care, that strive to reach a status of high reliability. In this paper we analyse the components leading to the effectiveness of HROs by examining the teams that comprise them. We use a systems perspective to uncover the behavioral markers by which high reliability teams (HRTs) are able to uphold the values of their parent organizations, thereby promoting safety. Using these markers, we offer guidelines and developmental strategies that will help the healthcare community to shift more quickly to high reliability status by not focusing solely on the organizational level. PMID:16076797

  15. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.
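As a hedged illustration of the kind of estimate such a study produces, here is a point failure-rate estimate with a normal-approximation confidence interval. The failure count and operating hours are invented, and the actual report may use different interval methods (e.g. chi-square based):

```python
import math

# Sketch: constant failure-rate estimate from pooled operating hours,
# with an approximate 95% confidence interval. Numbers are illustrative.

def failure_rate_ci(failures, unit_hours, z=1.96):
    """Point estimate and normal-approximation CI for a constant failure
    rate (failures per hour); reasonable only for largish failure counts."""
    lam = failures / unit_hours
    half = z * math.sqrt(failures) / unit_hours    # +/- z * sqrt(n) / T
    return lam, max(lam - half, 0.0), lam + half

lam, lo, hi = failure_rate_ci(failures=24, unit_hours=1.2e6)
print(f"{lam:.2e} per hour (95% CI {lo:.2e} to {hi:.2e})")
```

The interval width scales as the square root of the failure count, which is why pooling 350 spacecraft gives usefully tight bounds that a single program could not.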

  16. High reliability megawatt transformer/rectifier

    NASA Technical Reports Server (NTRS)

    Zwass, Samuel; Ashe, Harry; Peters, John W.

    1991-01-01

The goal of the two-phase program is to develop the technology and to design and fabricate ultralightweight, high-reliability DC-to-DC converters for space power applications. The converters will operate from a 5000 V dc source and deliver 1 MW of power at 100 kV dc. The power weight density goal is 0.1 kg/kW. The cycle-to-cycle voltage stability goal was + or - 1 percent RMS. The converter is to operate at an ambient temperature of -40 C with 16-minute power pulses and one hour off time. The uniqueness of the Phase 1 design resided in the dc switching array, which operates the converter at 20 kHz using Hollotron plasma switches, along with a specially designed low-loss, low-leakage inductance and a lightweight high-voltage transformer. This approach considerably reduced the number of components in the converter, thereby increasing system reliability. To achieve an optimum transformer for this application, the design uses four 25 kV secondary windings to produce the 100 kV dc output, thus reducing the transformer leakage inductance and the ac voltage stresses. A specially designed insulation system improves the high-voltage dielectric withstanding ability and reduces the insulation path thickness, thereby reducing the component weight. Tradeoff studies and tests conducted on scaled-down model circuits, using representative coil insulation paths, have verified the calculated transformer wave shape parameters and the insulation system safety. In Phase 1 of the program, a converter design approach was developed and a preliminary transformer design was completed. A fault control circuit was designed, and a thermal profile of the converter was also developed.

  17. Reliability of the Fermilab Antiproton Source

    SciTech Connect

    Harms, E. Jr.

    1993-05-01

    This paper reports on the reliability of the Fermilab Antiproton source since it began operation in 1985. Reliability of the complex as a whole as well as subsystem performance is summarized. Also discussed is the trending done to determine causes of significant machine downtime and actions taken to reduce the incidence of failure. Finally, results of a study to detect previously unidentified reliability limitations are presented.

  18. Diverless remote operated flowline connections

    SciTech Connect

    Johnson, R.; Slider, M.; Galle, G.

    1997-07-01

    The diverless remote horizontal connection for a major project in the South China Sea was performed using the ABB Vetco Gray GSR connector in conjunction with a pull-in tool. New, innovative methods were developed whereby the hubs provide axial and angular misalignment capabilities and an ROV can make and break the connection and replace the innovative magnetic sealing assembly. The significance of this achievement is assessed with a focus on the implemented design philosophies, the principles of operation, the overall system reliability, the operational cost reduction, and the full-scale testing results. Additional comments are made concerning the applicability of this technology in various other subsea applications.

  19. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  20. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
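The fault-tree route the report describes, deriving system reliability from component reliability, can be sketched as follows, assuming independent basic events; the gate layout, component names, and probabilities are invented for illustration:

```python
# Minimal fault-tree evaluation for independent basic events: the top-event
# failure probability is computed from component failure probabilities.

def failure_prob(node, basic):
    """node is a basic-event name, or a tuple ("AND" | "OR", [children])."""
    if isinstance(node, str):
        return basic[node]
    gate, kids = node
    ps = [failure_prob(k, basic) for k in kids]
    if gate == "AND":                  # all children must fail
        p = 1.0
        for x in ps:
            p *= x
        return p
    if gate == "OR":                   # any child failing fails the gate
        q = 1.0
        for x in ps:
            q *= (1.0 - x)
        return 1.0 - q
    raise ValueError(gate)

# Hypothetical converter: single-point IGBT and gate-drive failures,
# plus a redundant cooling-fan pair that must both fail.
basic = {"igbt": 0.02, "gate_drive": 0.01, "fan1": 0.1, "fan2": 0.1}
tree = ("OR", ["igbt", "gate_drive", ("AND", ["fan1", "fan2"])])
print(round(failure_prob(tree, basic), 6))  # dominated by single-point failures
```

A baseline model like this is what optimization can then work against: re-evaluating the tree after proposed changes (say, adding a redundant gate drive) shows directly how much system failure probability drops per dollar spent.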

  1. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges of verifying a reliable design versus a trusted design?

  2. Reliability in aposematic signaling

    PubMed Central

    2010-01-01

    In light of recent work, we will expand on the role and variability of aposematic signals. The focus of this review will be the concepts of reliability and honesty in aposematic signaling. We claim that reliable signaling can solve the problem of aposematic evolution, and that variability in reliability can shed light on the complexity of aposematic systems. PMID:20539774

  3. Viking Lander reliability program

    NASA Technical Reports Server (NTRS)

    Pilny, M. J.

    1978-01-01

    The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.

  4. Reliability as Argument

    ERIC Educational Resources Information Center

    Parkes, Jay

    2007-01-01

    Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

  5. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  6. Test effectiveness study report: An analytical study of system test effectiveness and reliability growth of three commercial spacecraft programs

    NASA Technical Reports Server (NTRS)

    Feldstein, J. F.

    1977-01-01

    Failure data from 16 commercial spacecraft were analyzed to evaluate failure trends, reliability growth, and effectiveness of tests. It was shown that the test programs were highly effective in ensuring a high level of in-orbit reliability. There was only a single catastrophic problem in 44 years of in-orbit operation on 12 spacecraft. The results also indicate that in-orbit failure rates are highly correlated with unit and systems test failure rates. The data suggest that test effectiveness estimates can be used to guide the content of a test program to ensure that in-orbit reliability goals are achieved.

  7. 76 FR 71011 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ..., President and CEO, Consolidated Edison Inc., on behalf of Consolidated Edison and the Edison Electric..., North American Electric Reliability Corporation Thomas J. Galloway, President and Chief Executive... Monroe, Executive Vice President and Chief Operating Officer, Southwest Power Pool (SPP) Thomas...

  8. Reliability and extended-life potential of EBR-II

    SciTech Connect

    King, R W

    1985-01-01

    Although the longlife potential of liquid-metal-cooled reactors (LMRs) has been only partially demonstrated, many factors point to the potential for exceptionally long life. EBR-II has the opportunity to become the first LMR to achieve an operational lifetime of 30 years or more. In 1984 a study of the extended-life potential of EBR-II identified the factors that contribute to the continued successful operation of EBR-II as a power reactor and experimental facility. Also identified were factors that could cause disruptions in the useful life of the facility. Although no factors were found that would inherently limit the life of EBR-II, measures were identified that could help ensure continued plant availability. These measures include the implementation of more effective surveillance, diagnostic, and control systems to complement the inherent safety and reliability features of EBR-II. An operating lifetime of well beyond 30 years is certainly feasible.

  9. The Advanced Communications Technology Satellite - Performance, Reliability and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Krawczyk, Richard J.; Ignaczak, Louis R.

    2000-01-01

    The Advanced Communications Technology Satellite (ACTS) was conceived and developed in the mid-1980s as an experimental satellite to demonstrate unproven Ka-band technology and potential new commercial applications and services. Since launch into geostationary orbit in September 1993, ACTS has accumulated almost seven years of essentially trouble-free operation and met all program objectives. The unique technology, service experiments, and system-level demonstrations accomplished by ACTS have been reported in many forums over the past several years. As ACTS completes its final experiment activities, this paper relates the top-level program goals that have been achieved in the design, operation, and performance of the particular satellite subsystems. Pre-launch decisions to ensure satellite reliability and the subsequent operational experiences contribute to lessons learned that may be applicable to other comsat programs.

  10. Estimating the Reliability of Electronic Parts in High Radiation Fields

    NASA Technical Reports Server (NTRS)

    Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd

    2008-01-01

    Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission; therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design, reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs, but without the ability to predict their lifetime beyond the baseline mission. This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available statistics on radiation effects in electronic parts and an enhanced spacecraft system reliability methodology. The new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.

  11. High reliability linear drive device for artificial hearts

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Zhao, Wenxiang; Liu, Guohai; Shen, Yue; Wang, Fangqun

    2012-04-01

    In this paper, a new high-reliability linear drive device, termed the stator-permanent-magnet tubular oscillating actuator (SPM-TOA), is proposed for artificial hearts (AHs). The key is to incorporate the concept of two independent phases into this linear AH device, hence achieving high-reliability operation. Fault-tolerant teeth are employed to provide the desired decoupling of phases in the magnetic circuit. Also, as the magnets and the coils are located in the stator, the proposed SPM-TOA takes the definite advantages of a robust mover and direct-drive capability. Using the time-stepping finite element method, the electromagnetic characteristics of the proposed SPM-TOA are analyzed, including magnetic field distributions, flux linkages, back-electromotive forces (back-EMFs), self- and mutual inductances, and cogging and thrust forces. The results confirm that the proposed SPM-TOA meets the dimension, weight, and force requirements of the AH drive device.

  12. Reliability Value of Fast State Estimation on Power Systems

    SciTech Connect

    Elizondo, Marcelo A.; Chen, Yousu; Huang, Zhenyu

    2012-05-07

    Monitoring the state of a power system under stress is key to achieving reliable operation. State estimation and timely measurements become more important when applying and designing corrective control actions (manual and automatic) to arrest or mitigate cascading blackouts. The execution time of each process, including state estimation, should be as short as possible to allow for timely action. In this paper, we provide a methodology for estimating one of the components of value of faster and more frequent state estimation: the reliability value of state estimation to assist corrective control actions for arresting or mitigating cascading blackouts. We present a new algorithm for estimating the time between successive line trips in a cascading failure. The algorithm combines power flow calculations with characteristics of the protection system to estimate the time between successive equipment trips. Using this algorithm, we illustrate the value of fast state estimation by calculating the time remaining for automatic or manual corrective actions after state estimation is finalized.

  13. Simulation of an algorithm for determining the reliability of unmanned ground vehicle networks

    NASA Astrophysics Data System (ADS)

    Singh, Harpreet; Dixit, Arati M.; Saab, Kassem; Gerhart, Grant R.

    2009-09-01

    There is increasing interest in the use of small unmanned robots taking part in defense operations, and it is considered important to predict the reliability of a group of robots taking part in different operations. A group of robots exhibits both coordination and collaboration. The robot operations are modeled as a network graph whose system reliability can be determined with the help of different techniques. Once a specified reliability is achieved, the commander controlling the operation can take appropriate action. This paper presents a simulation that can determine the system reliability of robotic systems having collaboration and coordination. The procedure developed is based on binary decision diagrams to obtain a disjoint Boolean expression, and is applicable for any number of nodes and branches. For illustration purposes, the reliability of simple configurations such as series, parallel, series-parallel, and non-series-parallel networks is computed. It is hoped that further work in this area will lead to the development of algorithms that can ultimately be used in a real-time environment.
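As a small illustration of the two-terminal network reliability the abstract describes, the sketch below computes the exact reliability of a classic non-series-parallel "bridge" network. It uses brute-force enumeration of edge states rather than the paper's binary-decision-diagram construction, and the network and probabilities are invented for illustration:

```python
from itertools import product

# A five-edge "bridge" network: the classic non-series-parallel example.
# Nodes: source "s", sink "t", intermediates "a" and "b".
EDGES = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]

def connected(up_edges):
    """Is "t" reachable from "s" using only the working (up) edges?"""
    reach = {"s"}
    changed = True
    while changed:
        changed = False
        for u, v in up_edges:
            if u in reach and v not in reach:
                reach.add(v)
                changed = True
            elif v in reach and u not in reach:
                reach.add(u)
                changed = True
    return "t" in reach

def network_reliability(probs):
    """Exact two-terminal reliability by summing over all 2^n edge states.
    (The paper avoids this exponential enumeration via BDDs; brute force
    is shown here only because it is transparent for small n.)"""
    total = 0.0
    for state in product((0, 1), repeat=len(EDGES)):
        p = 1.0
        up = []
        for edge, bit, r in zip(EDGES, state, probs):
            if bit:
                p *= r
                up.append(edge)
            else:
                p *= 1.0 - r
        if connected(up):
            total += p
    return total

# All five links at reliability 0.9: bridge reliability ≈ 0.97848
r = network_reliability([0.9] * 5)
```

The same enumeration handles series, parallel, and series-parallel graphs as degenerate cases; only the edge list changes.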

  14. Progress in GaN devices performances and reliability

    NASA Astrophysics Data System (ADS)

    Saunier, P.; Lee, C.; Jimenez, J.; Balistreri, A.; Dumka, D.; Tserng, H. Q.; Kao, M. Y.; Chowdhury, U.; Chao, P. C.; Chu, K.; Souzis, A.; Eliashevich, I.; Guo, S.; del Alamo, J.; Joh, J.; Shur, M.

    2008-02-01

    With the DARPA Wide Bandgap Semiconductor Technology RF Thrust Contract, TriQuint Semiconductor and its partners, BAE Systems, Lockheed Martin, IQE-RF, II-VI, Nitronex, M.I.T., and R.P.I., are achieving great progress toward the overall goal of making Gallium Nitride a revolutionary RF technology ready to be inserted into defense and commercial applications. Performance and reliability are two critical components of success (along with cost and manufacturability), and in this paper we discuss these two aspects. Our emphasis is now on operation at a 40 V bias voltage (we had previously been working at 28 V). 1250 µm devices have power densities in the 6 to 9 W/mm range, with associated efficiencies in the low- to mid-60% range and associated gain of 12 to 12.5 dB at 10 GHz. We use a dual field-plate structure to optimize this performance. Very good performance has also been achieved at 18 GHz with 400 µm devices. Excellent progress has been made in reliability: our preliminary DC and RF reliability tests at 40 V indicate an MTTF of 1E6 hours with a 1.3 eV activation energy at a 150 °C channel temperature. Jesus del Alamo at MIT has greatly refined our initial findings, leading to a strain-related theory of degradation driven by electric fields: degradation can occur at the drain edge of the gate due to excessive strain produced by the inverse piezoelectric effect.

  15. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    SciTech Connect

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  16. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
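The growth computation mentioned above can be sketched numerically. The abstract does not name a specific model, so the sketch below assumes the power-law NHPP (Crow-AMSAA) model, a common choice for this kind of analysis, which has a closed-form maximum-likelihood fit; the failure times are invented:

```python
import math

def crow_amsaa_mle(times, T):
    """MLE for a power-law NHPP (Crow-AMSAA) on a time-truncated test [0, T].
    Expected cumulative failures: N(t) = lam * t**beta.
    beta < 1 means a decreasing failure intensity, i.e. reliability growth."""
    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)
    lam = n / T ** beta
    return lam, beta

# Hypothetical failure log: failures cluster early, then thin out
lam, beta = crow_amsaa_mle([5.0, 12.0, 30.0, 70.0, 150.0], 200.0)
# beta ≈ 0.51 < 1, so the fitted failure intensity decreases over time
```

A constant failure rate, as reported for the ISS, would show up here as a fitted beta near 1.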

  17. Designing magnetic systems for reliability

    SciTech Connect

    Heitzenroeder, P.J.

    1991-01-01

    Designing a magnetic system is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward (such as utility connection design and implementation) to the most sophisticated (such as advanced finite element analyses), receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that magnet failures tend not to occur in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks that have suffered loss of reliability due to water leaks. Similarly, the majority of losses of magnet reliability at PPPL have occurred not in the sophisticated areas of the design but in difficulties associated with coolant connections, bus connections, and external structural connections. Looking toward the future, major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase with fewer, but very costly, devices and the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues PPPL has faced over the years, the lessons learned from them, and the magnet design and fabrication practices that have been found to contribute to magnet reliability.

  18. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis, incorporating an introduction to probabilistic risk assessment for nuclear power generating stations, and treat the subject according to the framework established for general systems theory. The work draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach.

  19. Reliable, efficient systems for biomedical research facility

    SciTech Connect

    Basso, P.

    1997-05-01

    Medical Sciences Research Building III (MSRB III) is a 10-story, 207,000 ft² (19,230 m²) biomedical research facility on the campus of the University of Michigan. The design of MSRB III required a variety of technological solutions to complex design issues. The systems also had to accommodate future modifications, so closely integrated, modular systems with a high degree of flexibility were designed to respond to this requirement. Additionally, designs were kept as simple as possible for operation and maintenance personnel. Integrated electronic controls were used to provide vital data during troubleshooting and maintenance procedures, and equipment was specified that provides reliability and minimizes maintenance. Other features include 100% redundancy of all central equipment serving the animal housing area; redundant temperature controls for each individual animal housing room, for fail-safe operation to protect the animals against overheating; and accessibility to all items requiring maintenance through an above-ceiling coordination process. It is critical that the engineering systems for MSRB III provide a safe, comfortable, energy-efficient environment. The achievement of this design intent was noted by the University's Commissioning Review Committee, which stated: "The Commissioning Process performed during both the design phase and construction phase of MSRB III was a significant success, providing an efficiently functioning facility that has been built in accordance with its design intent."

  20. Reliability of fluid systems

    NASA Astrophysics Data System (ADS)

    Kopáček, Jaroslav; Fojtášek, Kamil; Dvořák, Lukáš

    2016-03-01

    This paper focuses on the importance of determining reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, along with their application in calculations for serial, parallel, and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
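The serial and parallel calculations the abstract refers to are standard reliability identities; a minimal sketch (the function names and component values are mine, not the paper's):

```python
def series_reliability(rs):
    """Series system: every element must work, so reliabilities multiply."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel_reliability(rs):
    """Parallel (redundant) system: it fails only if every element fails."""
    out = 1.0
    for r in rs:
        out *= 1.0 - r
    return 1.0 - out

# Three hypothetical pneumatic elements
r_series = series_reliability([0.9, 0.95, 0.99])    # ≈ 0.84645
r_par = parallel_reliability([0.9, 0.95, 0.99])     # ≈ 0.99995
```

Note how redundancy flips the effect of the weakest element: the series system is worse than its worst component, the parallel one far better than its best.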

  1. Reliability and availability studies in the RIA driver linac.

    SciTech Connect

    Lessner, E. S.; Ostroumov, P. N.; Physics

    2005-01-01

    The Rare Isotope Accelerator (RIA) facility will include various complex systems and must provide radioactive beams to many users simultaneously. The availability of radioactive beams for most experiments at the fully commissioned facility should be as high as possible within design cost limitations. To make a realistic estimate of the achievable reliability, a detailed analysis is required. The RIA driver linac is a complex machine containing a large number of superconducting (SC) resonators and capable of accelerating multiple-charge-state beams [1]. At the pre-CDR stage of the design it is essential to identify critical facility subsystem failures that can prevent the driver linac from operating. The reliability and availability of the driver linac were studied using expert information and data from operating machines such as ATLAS, APS, JLab, and LANL. Availability studies are performed with a Monte-Carlo simulation code previously applied to availability assessments of the NLC facility [2], and the results are used to identify subsystem failures that most affect the availability and reliability of the RIA driver and to guide design iterations and component specifications to address identified problems.

  2. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have shifted significantly from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top (undesired) event can occur. Additionally, the method can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system but also to optimize maintenance costs; the latter is achieved by informing operators about the status of system components. The approach's flexibility in monitoring applications can help ensure secure operation of the system. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate additional system maintenance plans and diagnostic strategies.
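The fault-tree-with-exponential-components calculation can be illustrated with a toy tree. The component names, failure rates, and tree shape below are invented for illustration and are not taken from the study:

```python
import math

def q_exp(lam, t):
    """Component failure probability at time t for an exponential lifetime."""
    return 1.0 - math.exp(-lam * t)

def gate_or(qs):
    """OR gate: the event occurs if any input occurs (independence assumed)."""
    p_none = 1.0
    for q in qs:
        p_none *= 1.0 - q
    return 1.0 - p_none

def gate_and(qs):
    """AND gate: the event occurs only if all inputs occur."""
    p = 1.0
    for q in qs:
        p *= q
    return p

# Toy PV fault tree: top event = inverter fails OR both redundant strings fail
t = 8760.0                      # one year of operation, in hours
q_inv = q_exp(5e-6, t)          # hypothetical inverter failure rate, per hour
q_str = q_exp(2e-5, t)          # hypothetical string failure rate, per hour
q_top = gate_or([q_inv, gate_and([q_str, q_str])])
```

Evaluating gate probabilities bottom-up like this is exact only when the basic events are independent and the tree contains no repeated events; real tools handle repeated events via minimal cut sets or BDDs.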

  3. Reliability of genetic networks is evolvable

    NASA Astrophysics Data System (ADS)

    Braunewell, Stefan; Bornholdt, Stefan

    2008-06-01

    Control of the living cell functions with remarkable reliability despite the stochastic nature of the underlying molecular networks—a property presumably optimized by biological evolution. We ask here to what extent the ability of a stochastic dynamical network to produce reliable dynamics is an evolvable trait. Using an evolutionary algorithm based on a deterministic selection criterion for the reliability of dynamical attractors, we evolve networks of noisy discrete threshold nodes. We find that, starting from any random network, reliability of the attractor landscape can often be achieved with only a few small changes to the network structure. Further, the evolvability of networks toward reliable dynamics while retaining their function is investigated and a high success rate is found.

  4. Combining Grades from Different Assessments: How Reliable Is the Result?

    ERIC Educational Resources Information Center

    Cresswell, M. J.

    1988-01-01

    The author suggests combining grades from component assessments to provide an overall student assessment. He explores the concept of reliability and concludes that the overall assessment will be reliable only if the number of grades used to report component achievements equals or exceeds the number used to report overall achievement. (Author/CH)

  5. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability, but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented, together with a comparison and contrast of the performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  6. Achieving Extreme Utilization of Excitons by an Efficient Sandwich-Type Emissive Layer Architecture for Reduced Efficiency Roll-Off and Improved Operational Stability in Organic Light-Emitting Diodes.

    PubMed

    Wu, Zhongbin; Sun, Ning; Zhu, Liping; Sun, Hengda; Wang, Jiaxiu; Yang, Dezhi; Qiao, Xianfeng; Chen, Jiangshan; Alshehri, Saad M; Ahamad, Tansir; Ma, Dongge

    2016-02-10

    It has been demonstrated that efficiency roll-off is generally caused by the accumulation of excitons or charge carriers, which is intimately related to the emissive layer (EML) architecture in organic light-emitting diodes (OLEDs). In this article, an efficient sandwich-type EML structure, with a mixed-host EML sandwiched between two single-host EMLs, was designed to eliminate this accumulation, simultaneously achieving high efficiency, low efficiency roll-off, and good operational stability in the resulting OLEDs. The devices show excellent electroluminescence performance, realizing a maximum external quantum efficiency (EQE) of 24.6% with a maximum power efficiency of 105.6 lm W(-1) and a maximum current efficiency of 93.5 cd A(-1). At the high brightness of 5,000 cd m(-2), these figures remain as high as 23.3%, 71.1 lm W(-1), and 88.3 cd A(-1), respectively. The device lifetime is up to 2000 h at an initial luminance of 1000 cd m(-2), significantly longer than that of comparison devices with conventional EML structures. The improvement mechanism is systematically studied via the dependence of the exciton distribution in the EML and the exciton quenching processes. The efficient sandwich-type EML broadens the recombination zone, greatly reducing exciton quenching and increasing the probability of exciton recombination. It is believed that this design concept provides a new avenue for achieving high-performance OLEDs.

  7. Reliability-based casing design

    SciTech Connect

    Maes, M.A.; Gulati, K.C.; Johnson, R.C.; McKenna, D.L.; Brand, P.R.; Lewis, D.B.

    1995-06-01

    The present paper describes the development of reliability-based design criteria for oil and/or gas well casing/tubing. The approach is based on the fundamental principles of limit state design. Limit states for tubulars are discussed and specific techniques for the stochastic modeling of loading and resistance variables are described. Zonation methods and calibration techniques are developed which are geared specifically to the characteristic tubular design for both hydrocarbon drilling and production applications. The application of quantitative risk analysis to the development of risk-consistent design criteria is shown to be a major and necessary step forward in achieving more economic tubular design.

  8. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
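One of the simplest Bayesian reliability calculations of the kind covered in such a course is the conjugate Gamma-Poisson update of a failure rate. A sketch, with prior parameters and observed data invented for illustration:

```python
def gamma_poisson_update(alpha, beta, n_failures, exposure):
    """Conjugate Bayes update: lambda ~ Gamma(alpha, beta) prior with a
    Poisson likelihood for counts over 'exposure' unit-hours.
    Posterior is Gamma(alpha + n_failures, beta + exposure)."""
    return alpha + n_failures, beta + exposure

# Vague prior, then 3 failures observed over 10,000 component-hours
a_post, b_post = gamma_poisson_update(0.5, 0.0, 3, 10_000.0)
rate_mean = a_post / b_post   # posterior mean failure rate, 3.5e-4 per hour
```

The same machinery extends to the pipeline leak-rate estimation mentioned in the abstract: the posterior concentrates as exposure grows, while the prior dominates when data are scarce.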

  9. Characterization of the flowing afterglows of an N2-O2 reduced-pressure discharge: setting the operating conditions to achieve a dominant late afterglow and correlating the NOβ UV intensity variation with the N and O atom densities

    NASA Astrophysics Data System (ADS)

    Boudam, M. K.; Saoudi, B.; Moisan, M.; Ricard, A.

    2007-03-01

    The flowing afterglow of an N2-O2 discharge in the 0.6-10 Torr range is examined in the perspective of achieving sterilization of medical devices (MDs) under conditions ensuring maximum UV intensity with minimum damage to polymer-based MDs. The early afterglow is shown to be responsible for creating strong erosion damage, requiring that the sterilizer be operated in a dominant late-afterglow mode. These two types of afterglow can be characterized by optical emission spectroscopy: the early afterglow is distinguished by an intense emission from the N_{2}^{+} 1st negative system (band head at 391.4 nm) while the late afterglow yields an overpopulation of the v' = 11 ro-vibrational level of the N2(B) state, indicating a reduced contribution from the early afterglow N2 metastable species. We have studied the influence of operating conditions (pressure, O2 content in the N2-O2 mixture, distance of the discharge from the entrance to the afterglow (sterilizer) chamber) in order to achieve a dominant late afterglow that also ensures maximum and almost uniform UV intensity in the sterilization chamber. As far as operating conditions are concerned, moving the plasma source sufficiently far from the chamber entrance is shown to be a practical means for significantly reducing the density of the characteristic species of the early afterglow. Using the NO titration method, we obtain the (absolute) densities of N and O atoms in the afterglow at the NO injection inlet, a few cm before the chamber entrance: the N atom density goes through a maximum at approximately 0.3-0.5% O2 and then decreases, while the O atom density increases regularly with the O2 percentage. The spatial variation of the N atom (relative) density in the chamber is obtained by recording the emission intensity from the 1st positive system at 580 nm: in the 2-5 Torr range, this density is quite uniform everywhere in the chamber. The (relative) densities of N and O atoms in the discharge are determined by using

  10. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and subsystem reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, and unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground-start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem, and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
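The interplay among three of the listed drivers, engine count, engine-out design, and catastrophic fraction, can be sketched with a toy model. This is my illustrative model, not the authors' reliability model, and all parameter values are invented:

```python
from math import comb  # Python 3.8+

def stage_reliability(n, r, engine_out=True, cat_frac=0.1):
    """Toy stage-success model: n independent engines, each of reliability r.
    A failing engine is catastrophic with probability cat_frac; otherwise it
    shuts down benignly. With engine-out capability, one benign shutdown is
    survivable; any catastrophic failure loses the stage."""
    f = 1.0 - r
    p_benign = f * (1.0 - cat_frac)
    if not engine_out:
        return r ** n
    # All engines nominal, or exactly one benign shutdown among the n
    return r ** n + comb(n, 1) * p_benign * r ** (n - 1)

# Engine-out capability recovers most of the multi-engine penalty:
no_out = stage_reliability(5, 0.99, engine_out=False)   # ≈ 0.951
with_out = stage_reliability(5, 0.99)                   # ≈ 0.994
```

Varying cat_frac in this sketch shows why the catastrophic fraction is singled out as a driver: engine-out protection only covers the benign portion of failures.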

  11. Graded Achievement, Tested Achievement, and Validity

    ERIC Educational Resources Information Center

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  12. Estimating the Reliability of a Soyuz Spacecraft Mission

    NASA Technical Reports Server (NTRS)

    Lutomski, Michael G.; Farnham, Steven J., II; Grant, Warren C.

    2010-01-01

    Once the US Space Shuttle retires in 2010, the Russian Soyuz Launcher and Soyuz Spacecraft will comprise the only means for crew transportation to and from the International Space Station (ISS). The U.S. Government and NASA have contracted for crew transportation services to the ISS with Russia. The resulting implications for the US space program, including issues such as astronaut safety, must be carefully considered. Are the astronauts and cosmonauts safer on the Soyuz than on the Space Shuttle system? Is the Soyuz launch system more robust than the Space Shuttle? Is it safer to continue to fly the 30-year-old Shuttle fleet for crew transportation and cargo resupply than the Soyuz? Should we extend the life of the Shuttle Program? How does the development of the Orion/Ares crew transportation system affect these decisions? The Soyuz launcher has been in operation for over 40 years. There have been only two loss-of-life incidents and two loss-of-mission incidents. Given that the most recent incident took place in 1983, how do we determine the current reliability of the system? Do failures of unmanned Soyuz rockets impact the reliability of the currently operational man-rated launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? NASA's next manned rocket and spacecraft development project is currently underway. Though the project's ultimate goal is to return to the Moon and then to Mars, the launch vehicle and spacecraft's first mission will be crew transportation to and from the ISS. The reliability targets are currently several times higher than the Shuttle's and possibly even the Soyuz's. Can these targets be compared to the reliability of the Soyuz to determine whether they are realistic and achievable? To help answer these questions this paper will explore how to estimate the reliability of the Soyuz Launcher/Spacecraft system, compare it to the Space Shuttle, and its
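One common way to frame the "current reliability from sparse failures" question the abstract raises is a conjugate Beta-binomial estimate of launch success probability. The counts below are placeholders for illustration, not the actual Soyuz flight record, and the abstract does not specify that the authors use this method:

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate Bayes update: success probability p ~ Beta(a, b) prior,
    binomial likelihood; posterior is Beta(a + successes, b + failures)."""
    return a + successes, b + failures

# Jeffreys prior Beta(0.5, 0.5), then 98 successes and 2 failures (illustrative)
a_post, b_post = beta_binomial_update(0.5, 0.5, 98, 2)
p_mean = a_post / (a_post + b_post)   # posterior mean success probability
```

A strength of this framing for the questions posed above is that the posterior also yields credible intervals, so a demonstrated record can be compared directly against a stated reliability target rather than against a point estimate alone.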

  13. Wear Mechanisms in a Reliability Methodology

    NASA Astrophysics Data System (ADS)

    Tanner, Danelle M.; Dugger, Michael T.

    2003-01-01

    The main thrust in any reliability work is identifying failure modes and mechanisms. This is especially true for the new technology of MicroElectroMechanical Systems (MEMS). The methods are sometimes just as important as the results achieved. This paper will review some of the methods developed specifically for MEMS. Our methodology uses statistical characterization and testing of complex MEMS devices to help us identify dominant failure modes. We strive to determine the root cause of each failure mode and to gain a fundamental understanding of that mechanism. Test structures designed to be sensitive to a particular failure mechanism are typically used to gain understanding. The development of predictive models follows from this basic understanding. This paper will focus on the failure mechanism of wear and how our methodology was exercised to provide a predictive model. The MEMS device stressed in these studies was a Sandia-developed microengine with orthogonal electrostatic linear actuators connected to a gear on a hub. The dominant failure mechanism was wear in the sliding/contacting regions. A sliding beam-on-post test structure was also used to measure friction coefficients and wear morphology for different surface coatings and environments. Results show that a predictive model of failure-time as a function of drive frequency based on wear fits the functional form of the reliability data quite well, and demonstrates the benefit of a fundamental understanding of wear. The results also show that while debris of similar chemistry and morphology was created in the two types of devices, the dependence of debris generation on the operating environment was entirely different. The differences are discussed in terms of wear maps for ceramics, and the mechanical and thermal contact conditions in each device.

  14. Reliability techniques in the petroleum industry

    NASA Technical Reports Server (NTRS)

    Williams, H. L.

    1971-01-01

    Quantitative reliability evaluation methods used in the Apollo Spacecraft Program are translated into petroleum industry requirements with emphasis on offsetting reliability demonstration costs and limited production runs. Described are the qualitative disciplines applicable, the definitions and criteria that accompany the disciplines, and the generic application of these disciplines to the chemical industry. The disciplines are then translated into proposed definitions and criteria for the industry, into a base-line reliability plan that includes these disciplines, and into application notes to aid in adapting the base-line plan to a specific operation.

  15. Reliability analysis of an ultra-reliable fault tolerant control system

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Vandervelde, W. E.; Frey, P. R.

    1984-01-01

    This report analyzes the reliability of NASA's Ultra-reliable Fault Tolerant Control System (UFTCS) architecture as it is currently envisioned for helicopter control. The analysis is extended to air transport and spacecraft control using the same computational and voter modules applied within the UFTCS architecture. The system reliability is calculated for several points in the helicopter, air transport, and space flight missions when there are initially 4, 5, and 6 operating channels. Sensitivity analyses are used to explore the effects of sensor failure rates and different system configurations at the 10 hour point of the helicopter mission. These analyses show that the primary limitation to system reliability is the number of flux windings on each flux summer (4 are assumed for the baseline case). Tables of system reliability at the 10 hour point are provided to allow designers to choose a configuration to meet specified reliability goals.

  16. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; finally, the performance of the sensor is reported under laboratory conditions in which it is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  17. Reliability on ISS Talk Outline

    NASA Technical Reports Server (NTRS)

    Misiora, Mike

    2015-01-01

    1. Overview of ISS 2. Space Environment and its effects a. Radiation b. Microgravity 3. How we ensure reliability a. Requirements b. Component Selection i. Note: I plan to stay away from talk about Rad Hardened components and talk about why we use older processors because they are less susceptible to SEUs. c. Testing d. Redundancy / Failure Tolerance e. Sparing strategies 4. Operational Examples a. Multiple MDM Failures on 6A due to hard drive failure. In general, my plan is to only talk about data that is currently available via normal internet sources to ensure that I stay away from any topics that would be Export Controlled, ITAR, or NDA-controlled. The operational example has been well-reported on in the media and those are the details that I plan to cover. Additionally, I am not planning on using any slides or showing any photos during the talk.

  18. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test Plan

    SciTech Connect

    Link, H.; Keller, J.; Guo, Y.; McNiff, B.

    2013-04-01

    Gearboxes in wind turbines have not been achieving their expected design life even though they commonly meet or exceed the design criteria specified in current design standards. One of the basic premises of the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is that low gearbox reliability results from the absence of critical elements in the design process or insufficient design tools. Key goals of the GRC are to improve design approaches and analysis tools and to recommend practices and test methods resulting in improved design standards for wind turbine gearboxes that lower the cost of energy (COE) through improved reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, along with a database of information from gearbox failures collected from overhauls and investigation of gearbox condition monitoring techniques, to improve wind turbine operations and maintenance practices. This test plan covers testing of Gearbox 2 (GB2) using the two-speed turbine controller that was used in prior testing. The test series will investigate non-torque loads, high-speed shaft misalignment, and reproduction of field conditions in the dynamometer. It will also include vibration testing using an eddy-current brake on the gearbox's high-speed shaft.

  19. A Year of Exceptional Achievements FY 2008

    SciTech Connect

    devore, L; Chrzanowski, P

    2008-11-06

    2008 highlights: (1) Stockpile Stewardship and Complex Transformation - LLNL achieved scientific breakthroughs that explain some of the key 'unknowns' in nuclear weapons performance and are critical to developing the predictive science needed to ensure the safety, reliability, and security of the U.S. nuclear deterrent without nuclear testing. In addition, the National Ignition Facility (NIF) passed 99 percent completion, an LLNL supercomputer simulation won the 2007 Gordon Bell Prize, and a significant fraction of our inventory of special nuclear material was shipped to other sites in support of complex transformation. (2) National and Global Security - Laboratory researchers delivered insights, technologies, and operational capabilities that are helping to ensure national security and global stability. Of particular note, they developed advanced detection instruments that provide increased speed, accuracy, specificity, and resolution for identifying and characterizing biological, chemical, nuclear, and high-explosive threats. (3) Exceptional Science and Technology - The Laboratory continued its tradition of scientific excellence and technical innovation. LLNL scientists made significant contributions to Nobel Prize-winning work on climate change. LLNL also received three R&D 100 awards and six Nanotech 50 awards, and dozens of Laboratory scientists and engineers were recognized with professional awards. These honors provide valuable confirmation that peers and outside experts recognize the quality of our staff and our work. (4) Enhanced Business and Operations - A major thrust under LLNS is to make the Laboratory more efficient and cost competitive. We achieved roughly $75 million in cost savings for support activities through organizational changes, consolidation of services, improved governance structures and work processes, technology upgrades, and systems shared with Los Alamos National Laboratory. We realized nonlabor cost savings of $23 million. Severe

  20. Flight control electronics reliability/maintenance study

    NASA Technical Reports Server (NTRS)

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.

    1977-01-01

    Collection and analysis of data are reported concerning the reliability and maintenance experience of flight control system electronics currently in use on passenger-carrying jet aircraft. Two airlines' B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.

  1. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  2. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

    The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration Orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
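
    A reliability block diagram of the kind used in this analysis reduces to two combinators: series (all blocks must work) and parallel (at least one redundant block must work). The sketch below is illustrative only; the structure and numbers are hypothetical, not the Orbiter's actual configuration:

```python
from functools import reduce

def series(*rs):
    # System works only if every block in the chain works.
    return reduce(lambda a, b: a * b, rs, 1.0)

def parallel(*rs):
    # System works if at least one redundant block works.
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), rs, 1.0)

# Hypothetical autoland chain: sensors, redundant computers, actuators.
r_autoland = series(0.999, parallel(0.99, 0.99), 0.998)
```

    Nesting these combinators reproduces any series-parallel block diagram, which is why the same hardware model can serve both the manual and automated landing modes.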

  3. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

    The papers and presentations compiled in this volume form the proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still needed to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  4. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  5. An approximation formula for a class of Markov reliability models

    NASA Technical Reports Server (NTRS)

    White, A. L.

    1984-01-01

    A method is presented for algebraically approximating the system reliability of a small but often-used class of reliability models. The models considered are appropriate for redundant, reconfigurable digital control systems that operate for a short period of time without maintenance; for such systems the method gives a formula in terms of component fault rates, system recovery rates, and system operating time.
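
    As a hedged illustration of the kind of closed-form output such a method produces (the paper's actual formula is not reproduced here), consider an n-channel reconfigurable system whose dominant failure path is a second fault arriving while the system is still recovering from a first; a common leading-order approximation is n(n-1) * lambda^2 * t / delta:

```python
def approx_failure_prob(n, fault_rate, recovery_rate, t):
    """Leading-order approximation (illustrative form only) for the
    probability that a near-coincident second fault defeats an n-channel
    reconfigurable system during an unmaintained mission of length t.
    Valid when fault_rate * t << 1 and recovery (rate delta) is much
    faster than fault arrival.
    """
    return n * (n - 1) * fault_rate**2 * t / recovery_rate
```

    For example, with n = 4 channels, a fault rate of 1e-4 per hour, a 10-hour mission, and a mean recovery time of one second (recovery rate 3600 per hour), the approximation gives about 3.3e-10.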

  6. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes that part of their failure can be attributed to the random nature of the debugging data given to these models as input, and it poses the correction of this defect as an area of future research.

  7. Reliability and availability requirements analysis for DEMO: fuel cycle system

    SciTech Connect

    Pinna, T.; Borgognoni, F.

    2015-03-15

    The Demonstration Power Plant (DEMO) will be a fusion reactor prototype designed to demonstrate the capability to produce electrical power in a commercially acceptable way. Two of the key elements of the engineering development of the DEMO reactor are the definitions of reliability and availability requirements (or targets). The availability target for a hypothesized Fuel Cycle has been analysed as a test case. The analysis has been done on the basis of the experience gained in operating existing tokamak fusion reactors and developing the ITER design. The Plant Breakdown Structure (PBS) and Functional Breakdown Structure (FBS) related to the DEMO Fuel Cycle, and the correlations between them, have been identified. At first, a set of availability targets was allocated to the various systems on the basis of their operating, protection, and safety functions. Availabilities of 75% and 85% were allocated to the operating functions of the fuelling system and the tritium plant, respectively, and an availability of 99% was allocated to all systems in executing their safety functions. The chances of the systems achieving the allocated targets were then investigated through a Failure Mode and Effect Analysis and a Reliability Block Diagram analysis. The following results have been obtained: 1) the target of 75% for the operations of the fuelling system looks reasonable, while the target of 85% for the operations of the whole tritium plant should be reduced to 80%, even though all the tritium plant systems can individually reach quite high availability targets, over 90% - 95%; 2) all the DEMO Fuel Cycle systems can reach the target of 99% in accomplishing their safety functions. (authors)
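
    The arithmetic behind the tritium plant finding (individually high subsystem availabilities combining into a lower plant-level figure) is the series availability product. A minimal sketch, with an illustrative five-subsystem count that is not taken from the paper:

```python
def series_availability(avails):
    """Steady-state availability of subsystems in series: the plant is up
    only when every subsystem is up (independence assumed)."""
    a = 1.0
    for x in avails:
        a *= x
    return a

# Five subsystems at 95% each combine to roughly 77% plant availability,
# already below an 85% plant-level target.
plant = series_availability([0.95] * 5)
```

    This is why each tritium plant system can individually exceed 90% - 95% while the plant as a whole struggles to meet 85%, motivating the relaxed 80% target.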

  8. Scheduling and Achievement. Research Brief

    ERIC Educational Resources Information Center

    Walker, Karen

    2006-01-01

    To use a block schedule or a traditional schedule? Which structure will produce the best and highest achievement rates for students? The research is mixed on this due to numerous variables such as: (1) socioeconomic levels; (2) academic levels; (3) length of time a given schedule has been in operation; (4) strategies being used in the classrooms;…

  9. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  10. Materials reliability issues in microelectronics

    SciTech Connect

    Lloyd, J.R.; Yost, F.G.; Ho, P.S.

    1991-01-01

    This book covers the proceedings of a MRS symposium on materials reliability in microelectronics. Topics include: electromigration; stress effects on reliability; stress and packaging; metallization; device, oxide and dielectric reliability; new investigative techniques; and corrosion.

  11. Development of brain systems for nonsymbolic numerosity and the relationship to formal math academic achievement.

    PubMed

    Haist, Frank; Wazny, Jarnet H; Toomarian, Elizabeth; Adamo, Maha

    2015-02-01

    A central question in cognitive and educational neuroscience is whether brain operations supporting nonlinguistic intuitive number sense (numerosity) predict individual acquisition and academic achievement for symbolic or "formal" math knowledge. Here, we conducted a developmental functional magnetic resonance imaging (fMRI) study of nonsymbolic numerosity task performance in 44 participants including 14 school-age children (6-12 years old), 14 adolescents (13-17 years old), and 16 adults and compared a brain activity measure of numerosity precision to scores from the Woodcock-Johnson III Broad Math index of math academic achievement. Accuracy and reaction time from the numerosity task did not reliably predict formal math achievement. We found a significant positive developmental trend for improved numerosity precision in the parietal cortex and intraparietal sulcus specifically. Controlling for age and overall cognitive ability, we found a reliable positive relationship between individual math achievement scores and parietal lobe activity only in children. In addition, children showed robust positive relationships between math achievement and numerosity precision within ventral stream processing areas bilaterally. The pattern of results suggests a dynamic developmental trajectory for visual discrimination strategies that predict the acquisition of formal math knowledge. In adults, the efficiency of visual discrimination marked by numerosity acuity in ventral occipital-temporal cortex and hippocampus differentiated individuals with better or worse formal math achievement, respectively. Overall, these results suggest that two different brain systems for nonsymbolic numerosity acuity may contribute to individual differences in math achievement and that the contribution of these systems differs across development.

  12. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability, but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of the shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for the shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  13. Quantifying Human Performance Reliability.

    ERIC Educational Resources Information Center

    Askren, William B.; Regulinski, Thaddeus L.

    Human performance reliability for tasks in the time-space continuous domain is defined and a general mathematical model presented. The human performance measurement terms time-to-error and time-to-error-correction are defined. The model and measurement terms are tested using laboratory vigilance and manual control tasks. Error and error-correction…

  14. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  15. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
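
    A sketch of the kind of simulation described, under an assumed model form (the Musa basic execution-time model, in which failure intensity decays linearly as faults are removed; parameter values are illustrative). Replicating the run with different seeds exposes the variance that the report attributes to the randomness of debugging data:

```python
import random

def simulate_basic_model(lam0, nu0, n_failures, seed):
    """Draw interfailure times with intensity lam0 * (nu0 - i) / nu0
    after i faults have already been found and removed."""
    rng = random.Random(seed)
    return [rng.expovariate(lam0 * (nu0 - i) / nu0)
            for i in range(n_failures)]
```

    Averaged over many seeds, later interfailure times lengthen as the remaining fault content drops, but any single replication is noisy, which is why predictions fitted to one debugging history vary so much from run to run.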

  16. Reliable solar cookers

    SciTech Connect

    Magney, G.K.

    1992-12-31

    The author describes the activities of SERVE, a Christian relief and development agency, to introduce solar ovens to the Afghan refugees in Pakistan. It has provided 5,000 solar cookers since 1984. The experience has demonstrated the potential of the technology and the need for a durable and reliable product. Common complaints about the cookers are discussed and the ideal cooker is described.

  17. Continuous Reliability Enhancement for Wind (CREW) database :

    SciTech Connect

    Hines, Valerie Ann-Peters; Ogilvie, Alistair B.; Bond, Cody R.

    2013-09-01

    To benchmark the current U.S. wind turbine fleet reliability performance and identify the major contributors to component-level failures and other downtime events, the Department of Energy funded the development of the Continuous Reliability Enhancement for Wind (CREW) database by Sandia National Laboratories. This report is the third annual Wind Plant Reliability Benchmark, which publicly reports on CREW findings for the wind industry. The CREW database uses both high-resolution Supervisory Control and Data Acquisition (SCADA) data from operating plants and Strategic Power Systems ORAPWind (Operational Reliability Analysis Program for Wind) data, which consist of downtime and reserve event records and daily summaries of various time categories for each turbine. Together, these data are used as inputs into CREW's reliability modeling. The results presented here include: the primary CREW Benchmark statistics (operational availability, utilization, capacity factor, mean time between events, and mean downtime); time accounting from an availability perspective; time accounting in terms of the combination of wind speed and generation levels; power curve analysis; and top system and component contributors to unavailability.

  18. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
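
    The fast-but-fallible / slow-but-reliable hierarchy generalizes to a simple control pattern. The sketch below captures the idea, not the authors' implementation:

```python
def hierarchical_estimate(routines, verify):
    """Try routines in order from fastest/least reliable to slowest/most
    reliable; each result is checked, and a failure falls through to the
    next level, mirroring the paper's multi-level error recovery."""
    for routine in routines:
        result = routine()
        if result is not None and verify(result):
            return result
    raise RuntimeError("all levels failed")
```

    In the grasping context, the fast routine might track features in a small region-of-interest window, while the slow fallback re-detects the strut in the full image; the verifier checks the assumptions the fast routine depends on.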

  20. Apply reliability centered maintenance to sealless pumps

    SciTech Connect

    Pradhan, S.

    1993-01-01

    This paper reports on reliability-centered maintenance (RCM), which is considered a crucial part of future reliability engineering. RCM determines the maintenance requirements of plants and equipment in their operating context. The RCM method has been applied to the management of critical sealless pumps in fire/toxic-risk services typical of the petrochemical industry. The method provides advantages from a detailed study of any critical engineering system. RCM is a team exercise and fosters team spirit in the plant environment. The maintenance strategy that evolves is based on team decisions and relies on maximizing the inherent reliability built into the equipment. RCM recommends design upgrades where this inherent reliability is in question. Sealless pumps of canned motor design are used as main reactor charge pumps in PVC plants. These pumps handle fresh vinyl chloride monomer (VCM), which is both carcinogenic and flammable.

  1. MOV reliability evaluation and periodic verification scheduling

    SciTech Connect

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long-term reliability of gate or globe motor-operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOV's design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature that affects safety-related MOVs.
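
    The core computation, comparing a nominal design margin to its uncertainty to estimate reliability, resembles a stress-strength calculation. A hedged sketch, assuming the margin is normally distributed (an assumption made here for illustration, not stated by the paper):

```python
import math

def margin_reliability(nominal_margin, margin_sigma):
    """P(margin > 0) for a normally distributed margin: the probability
    that the MOV's actual capability exceeds its requirement."""
    z = nominal_margin / margin_sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

    A nominal margin of about 1.645 standard deviations corresponds to roughly 95% reliability; the estimate can be re-run after each periodic test to trend performance and to schedule the next verification.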

  2. Reliability, Maintainability, and Availability: Consideration During the Design Phase in Ground Systems to Ensure Successful Launch Support

    NASA Technical Reports Server (NTRS)

    Gillespie, Amanda M.

    2012-01-01

    The future of Space Exploration includes missions to the moon, asteroids, Mars, and beyond. To get there, the mission concept is to launch multiple launch vehicles months, even years apart. In order to achieve this, launch vehicles, payloads (satellites and crew capsules), and ground systems must be highly reliable and/or available, to include maintenance concepts and procedures in the event of a launch scrub. In order to achieve this high probability of mission success, Ground Systems Development and Operations (GSDO) has allocated Reliability, Maintainability, and Availability (RMA) requirements to all hardware and software required for both launch operations and, in the event of a launch scrub, required to support a repair of the ground systems, launch vehicle, or payload. This is done concurrently with the design process (30/60/90 reviews).

  3. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

…early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and allocating reliability that maximizes profit by minimizing total cost has been developed. For a system consisting of blocks arranged in series, profit-maximizing reliability allocation is achieved by determining, for each block individually, the reliabilities of its components that minimize the sum of the capital costs, operation costs, and expected losses from failures. A Monte Carlo simulation-based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike those models, the proposed model can reveal the variation of the NPV due to the different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
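The advantage claimed for the simulation-based NPV model, capturing the variation in the number of failures per interval rather than only its expectation, can be sketched as follows (the rates, loss value, and discount rate are illustrative, not the paper's model):

```python
import random

def npv_of_losses(rate_per_year, loss_per_failure, discount, years,
                  trials=20000, seed=1):
    """Monte Carlo NPV of losses from failures. Failure times are drawn
    from exponential inter-arrival times, so each trial sees a different
    random number of failures per year, not just the expected number."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, npv = rng.expovariate(rate_per_year), 0.0
        while t < years:
            npv += loss_per_failure / (1.0 + discount) ** t  # discount each loss
            t += rng.expovariate(rate_per_year)
        total += npv
    return total / trials

# 0.5 failures/year, $100k loss each, 10% discount rate, 10-year horizon:
print(round(npv_of_losses(0.5, 100.0, 0.1, 10), 1))
```

Because each trial carries its own failure count, the same machinery can also report the spread of the NPV, not only its mean.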

  4. Science Operations on the Lunar Surface - Understanding the Past, Testing in the Present, Considering the Future

    NASA Technical Reports Server (NTRS)

    Eppler, Dean B.

    2013-01-01

    The scientific success of any future human lunar exploration mission will be strongly dependent on design of both the systems and operations practices that underpin crew operations on the lunar surface. Inept surface mission preparation and design will either ensure poor science return, or will make achieving quality science operation unacceptably difficult for the crew and the mission operations and science teams. In particular, ensuring a robust system for managing real-time science information flow during surface operations, and ensuring the crews receive extensive field training in geological sciences, are as critical to mission success as reliable spacecraft and a competent operations team.

  5. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  6. Commercial and operational impacts on design for the Hotol advanced launch vehicle

    NASA Astrophysics Data System (ADS)

    Salt, D. J.; Parkinson, R. C.

    1990-10-01

The development of future space exploration and exploitation will be paced by launch system capabilities. Current systems are high-cost and low-reliability, and they are unavailable and inflexible compared with other forms of transport. Advanced launch systems now being proposed (Hotol, Saenger, NASP) seek to reduce these drawbacks dramatically, particularly the cost of transport into low Earth orbit. This places more severe requirements on vehicle design and operation than hitherto. The high cost of vehicle losses requires system reliability and survivability, and survivability requires an extensive abort capability in all phases of flight. Achieving low operational costs places requirements on vehicle maintainability, turnaround, and integration, and on achieving a high flight rate without compromising system reliability or resiliency. The paper considers the way in which commercial and operational aspects have affected the physical design of the Hotol system.

  7. System reliability, performance and trust in adaptable automation.

    PubMed

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. Thirty-nine operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation, which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS), in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were decrements in diagnostic speed and prospective memory at lower reliability.

  8. Reliability Degradation Due to Stockpile Aging

    SciTech Connect

    Robinson, David G.

    1999-04-01

The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. The term 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses, both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations; only after the experiments have been performed and the data have been collected can critical performance problems be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e., uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e., it assumes the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism, since the relationship between the predicted life and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile…
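The report's criticism of constant failure rates can be illustrated by comparing a memoryless exponential model against a Weibull wear-out model matched to the same nominal life. The parameter values below are invented for illustration:

```python
from math import exp

def r_exponential(t, mttf):
    """Constant failure rate: the component does not age in storage."""
    return exp(-t / mttf)

def r_weibull(t, scale, shape):
    """Weibull reliability; shape > 1 gives an increasing (wear-out) hazard."""
    return exp(-((t / scale) ** shape))

# Both models agree at t = scale = MTTF = 10 years, then diverge sharply:
for t in (5.0, 10.0, 15.0):
    print(t, round(r_exponential(t, 10.0), 3), round(r_weibull(t, 10.0, 3.0), 3))
```

Early in life the wear-out model is more optimistic, and late in life far more pessimistic, which is exactly the behavior a constant-rate model cannot represent.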

  9. Reliability of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1986-01-01

    In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell cracking or the fatigue of cell-to-cell interconnects. Power degradation mechanisms considered include gradual power loss in cells, light-induced effects, and module optical degradation. Module-level failure mechanisms and life-limiting wear-out mechanisms are also explored.

  10. Reliability and durability problems

    NASA Astrophysics Data System (ADS)

    Bojtsov, B. V.; Kondrashov, V. Z.

    The papers presented in this volume focus on methods for determining the stress-strain state of structures and machines and evaluating their reliability and service life. Specific topics discussed include a method for estimating the service life of thin-sheet automotive structures, stressed state at the tip of small cracks in anisotropic plates under biaxial tension, evaluation of the elastic-dissipative characteristics of joints by vibrational diagnostics methods, and calculation of the reliability of ceramic structures for arbitrary long-term loading programs. Papers are also presented on the effect of prior plastic deformation on fatigue damage kinetics, axisymmetric and local deformation of cylindrical parts during finishing-hardening treatments, and adhesion of polymers to diffusion coatings on steels.

  11. Human Reliability Program Workshop

    SciTech Connect

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  12. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  13. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  14. ATLAS reliability analysis

    SciTech Connect

    Bartsch, R.R.

    1995-09-01

    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
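The combination of individual component failure probabilities into a per-shot system figure can be sketched as a series product of Weibull-like survival terms in accumulated shot count. The characteristic lives and shape factors below are invented for illustration, not the ATLAS values:

```python
from math import exp

def component_reliability(shots, char_life, shape):
    """Weibull-like survival in shot count; shape > 1 means the
    probability of failure grows as shots accumulate."""
    return exp(-((shots / char_life) ** shape))

def system_reliability(components, shots):
    """Series combination: every element must survive the shot."""
    r = 1.0
    for char_life, shape in components:
        r *= component_reliability(shots, char_life, shape)
    return r

# Invented bank elements: capacitors, railgaps, transmission-line insulation.
bank = [(2000.0, 2.0), (500.0, 1.5), (1500.0, 2.5)]
print(round(system_reliability(bank, 50), 4))
```

A maintenance interval then falls out naturally: it is the shot count at which the series product first drops below the 95% target.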

  15. Reliability study of 1060nm 25Gbps VCSEL in terms of high speed modulation

    NASA Astrophysics Data System (ADS)

    Suzuki, Toshihito; Imai, Suguru; Kamiya, Shinichi; Hiraiwa, Koji; Funabashi, Masaki; Kawakita, Yasumasa; Shimizu, Hitoshi; Ishikawa, Takuya; Kasukawa, Akihiko

    2012-03-01

Furukawa's 1060 nm VCSELs with a double intra-cavity structure and Al-free InGaAs/GaAs QWs realize low power consumption, high-speed operation, and high reliability simultaneously. The power dissipation was as low as 140 fJ/bit, and clear eye opening up to 20 Gbps was achieved. The random failure rate and wear-out lifetime were evaluated as 30 FIT/channel and 300 years, respectively. For higher-speed operation, the thickness of the oxidation layer was increased to lower the parasitic capacitance of the device, and preliminary reliability tests were performed on those devices. For high-speed operation faster than 10 Gbps, the conventional lifetime definition of a 2 dB drop in output power is not sufficient, because the margin in modulation characteristics is smaller. We suggest threshold current as an indicator of degradation in modulation characteristics. The threshold currents of our VCSELs shift only slightly during accelerated aging tests, and we observed no remarkable change in the 25 Gbps eye diagram after aging. The definition of lifetime for high-speed VCSELs is discussed in terms of the change in threshold current, in addition to the conventional power degradation during aging. It is experimentally verified that our VCSELs are a promising candidate for a highly reliable light source capable of long-term, stable, high-speed operation.

  16. Reliability of steam generator tubing

    SciTech Connect

    Kadokami, E.

    1997-02-01

The author presents results of studies on the reliability of steam generator (SG) tubing. The basis for this work is that in Japan the issue of defects in SG tubing is addressed by the approach that any detected defect should be repaired, either by plugging or by sleeving the tube. However, there is a detection limit in practice, which leaves open the question of what effect nondetectable cracks have on the performance of tubing. These studies were commissioned to examine the safety issues involved in degraded SG tubing. The program has looked at a number of different issues. First was an assessment of the penetration and opening behavior of tube flaws due to internal pressure in the tubing. The studies covered the penetration behavior of tube flaws, primary water leakage from through-wall flaws, and the opening behavior of through-wall flaws. In addition, the program examined the reliability of tubing with flaws during normal plant operation, as well as the consequences of tube rupture accidents for the integrity of neighboring tubes.

  17. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  18. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

Physics-of-failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  19. Construction of Valid and Reliable Test for Assessment of Students

    ERIC Educational Resources Information Center

    Osadebe, P. U.

    2015-01-01

The study was carried out to construct a valid and reliable test in Economics for secondary school students. Two research questions were drawn to guide the establishment of validity and reliability for the Economics Achievement Test (EAT). It is a multiple-choice objective test of 100 items, each with five options. A sample of 1000 students was randomly…

  20. Three brief assessments of math achievement.

    PubMed

    Steiner, Eric T; Ashcraft, Mark H

    2012-12-01

    Because of wide disparities in college students' math knowledge-that is, their math achievement-studies of cognitive processing in math tasks also need to assess their individual level of math achievement. For many research settings, however, using existing math achievement tests is either too costly or too time consuming. To solve this dilemma, we present three brief tests of math achievement here, two drawn from the Wide Range Achievement Test and one composed of noncopyrighted items. All three correlated substantially with the full achievement test and with math anxiety, our original focus, and all show acceptable to excellent reliability. When lengthy testing is not feasible, one of these brief tests can be substituted.

  1. Comparing Science Achievement Constructs: Targeted and Achieved

    ERIC Educational Resources Information Center

    Ferrara, Steve; Duncan, Teresa

    2011-01-01

    This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…

  2. Reliable, Economic, Efficient CO2 Heat Pump Water Heater for North America

    SciTech Connect

    Radcliff, Thomas D; Sienel, Tobias; Huff, Hans-Joachim; Thompson, Adrian; Sadegh, Payman; Olsommer, Benoit; Park, Young

    2006-12-31

Adoption of heat pump water heating technology for commercial hot water could save up to 0.4 quads of energy and 5 million metric tons of CO2 production annually in North America, but industry perception is that this technology does not offer adequate performance or reliability and comes at too high a cost. Development and demonstration of a CO2 heat pump water heater is proposed to reduce these barriers to adoption. Three major themes are addressed: market analysis to understand barriers to adoption, use of advanced reliability models to design optimum qualification test plans, and field testing of two phases of water heater prototypes. Market experts claim that beyond good performance, market adoption requires 'drop and forget' system reliability and a six-month payback of first costs. Performance, reliability, and cost targets are determined, and reliability models are developed to evaluate the minimum testing required to meet reliability targets. Three phase 1 prototypes are designed and installed in the field. Based on results from these trials, a product specification is developed and a second phase of five field trial units is built and installed. These eight units accumulate 11 unit-years of service, including 15,650 hours and 25,242 cycles of compressor operation. Performance targets can be met. An availability of 60% is achieved and the capability to achieve >90% is demonstrated, but overall reliability is below target, with an average of 3.6 failures/unit-year on the phase 2 demonstration. Most reliability issues are shown to be common to new HVAC products, giving high confidence in mature product reliability, but the need for further work to minimize leaks and ensure reliability of the electronic expansion valve is clear. First cost is projected to be above target, leading to an expectation of an 8-24 month payback when substituted for an electric water heater.
Despite not meeting all targets, arguments are made that an industry leader could sufficiently
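The availability figures quoted in this record follow from the standard steady-state ratio of time between failures to total cycle time. A minimal sketch with illustrative repair times (not the program's data):

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time the unit is up,
    MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: ~3.6 failures/unit-year, comparing two-week vs. two-day repairs.
mtbf = 8760.0 / 3.6  # hours between failures at the observed failure rate
print(round(availability(mtbf, 336.0), 2), round(availability(mtbf, 48.0), 2))
```

The sketch makes the report's point concrete: at a fixed failure rate, availability is driven almost entirely by how quickly failed units are repaired or swapped.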

  3. Mobility and Reading Achievement.

    ERIC Educational Resources Information Center

    Waters, Theresa Z.

    A study examined the effect of geographic mobility on elementary school students' achievement. Although such mobility, which requires students to make multiple moves among schools, can have a negative impact on academic achievement, the hypothesis for the study was that it was not a determining factor in reading achievement test scores. Subjects…

  4. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.
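For the common textbook case of normally distributed strength and stress, the first-order reliability index and the corresponding reliability take a closed form; a minimal sketch (the means and standard deviations below are illustrative, not the paper's cases):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def form_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """First-order reliability for independent normal strength R and
    stress S: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), R = Phi(beta)."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return beta, phi(beta)

beta, r = form_reliability(100.0, 8.0, 70.0, 6.0)
print(round(beta, 2), round(r, 5))
```

The reliability design factor the abstract mentions plays the role of the deterministic safety factor: it is the margin needed to push beta up to the value implied by the specified reliability.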

  5. On the reliability of seasonal climate forecasts.

    PubMed

    Weisheimer, A; Palmer, T N

    2014-07-01

Seasonal climate forecasts are being used increasingly across a range of application sectors. A recent UK governmental report asked: how good are seasonal forecasts on a scale of 1-5 (where 5 is very good), and how good can we expect them to be in 30 years' time? Seasonal forecasts are made from ensembles of integrations of numerical models of climate. We argue that 'goodness' should be assessed first and foremost in terms of the probabilistic reliability of these ensemble-based forecasts; reliable inputs are essential for any forecast-based decision-making. We propose that a '5' should be reserved for systems that are not only reliable overall, but where, in particular, small ensemble spread is a reliable indicator of low ensemble forecast error. We study the reliability of regional temperature and precipitation forecasts of the current operational seasonal forecast system of the European Centre for Medium-Range Weather Forecasts, universally regarded as one of the world-leading operational institutes producing seasonal climate forecasts. A wide range of 'goodness' rankings, depending on region and variable (with summer forecasts of rainfall over Northern Europe performing exceptionally poorly), is found. Finally, we discuss the prospects of reaching '5' across all regions and variables in 30 years' time.

  6. Reliability systems for implantable cardiac defibrillator batteries

    NASA Astrophysics Data System (ADS)

    Takeuchi, Esther S.

    The reliability of the power sources used in implantable cardiac defibrillators is critical due to the life-saving nature of the device. Achieving a high reliability power source depends on several systems functioning together. Appropriate cell design is the first step in assuring a reliable product. Qualification of critical components and of the cells using those components is done prior to their designation as implantable grade. Product consistency is assured by control of manufacturing practices and verified by sampling plans using both accelerated and real-time testing. Results to date show that lithium/silver vanadium oxide cells used for implantable cardiac defibrillators have a calculated maximum random failure rate of 0.005% per test month.

  7. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts, and the trend toward even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability has generally not been treated as a system parameter like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed, including the application of probabilistic models that utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing, combined with Bayesian statistical analysis.

  8. Photovoltaic-Reliability R&D Toward a Solar-Powered World (Presentation)

    SciTech Connect

    Kurtz, S.; Granata, J.

    2009-08-01

    Presentation about the importance of continued progress toward low-cost, high-reliability, and high-performance PV systems. High reliability is an essential element in achieving low-cost solar electricity.

  9. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    SciTech Connect

    Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp; Vazhkudai, Sudharshan S.; Cao, Qing

    2014-11-01

High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault-tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable, closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, and failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.

  10. Availability and End-to-end Reliability in Low Duty Cycle Multihop Wireless Sensor Networks

    PubMed Central

    Suhonen, Jukka; Hämäläinen, Timo D.; Hännikäinen, Marko

    2009-01-01

    A wireless sensor network (WSN) is an ad-hoc technology that may even consist of thousands of nodes, which necessitates autonomic, self-organizing and multihop operations. A typical WSN node is battery powered, which makes the network lifetime the primary concern. The highest energy efficiency is achieved with low duty cycle operation, however, this alone is not enough. WSNs are deployed for different uses, each requiring acceptable Quality of Service (QoS). Due to the unique characteristics of WSNs, such as dynamic wireless multihop routing and resource constraints, the legacy QoS metrics are not feasible as such. We give a new definition to measure and implement QoS in low duty cycle WSNs, namely availability and reliability. Then, we analyze the effect of duty cycling for reaching the availability and reliability. The results are obtained by simulations with ZigBee and proprietary TUTWSN protocols. Based on the results, we also propose a data forwarding algorithm suitable for resource constrained WSNs that guarantees end-to-end reliability while adding a small overhead that is relative to the packet error rate (PER). The forwarding algorithm guarantees reliability up to 30% PER. PMID:22574002
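The claim that a forwarding algorithm can hold end-to-end reliability up to roughly 30% PER is easiest to see through per-hop retransmissions; a sketch assuming independent links and a fixed retry budget (the parameters are illustrative, not TUTWSN's):

```python
def hop_success(per, retries):
    """Probability a packet crosses one link, given the packet error
    rate and up to `retries` retransmissions after the first attempt."""
    return 1.0 - per ** (retries + 1)

def end_to_end(per, retries, hops):
    """All hops must succeed; links are assumed independent."""
    return hop_success(per, retries) ** hops

# 30% PER, 3 retransmissions per hop, 5-hop path:
print(round(end_to_end(0.30, 3, 5), 4))
```

Even at 30% PER, a handful of retransmissions per hop keeps a 5-hop path above 95% delivery, at the cost of extra traffic roughly proportional to the PER, which matches the small-overhead claim in the abstract.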

  11. [Interrater reliability of the Braden scale].

    PubMed

    Kottner, Jan; Tannen, Antje; Dassen, Theo

    2008-04-01

    Pressure ulcer risk assessment scales can assist nurses in determining the individual pressure ulcer risk. Although the Braden scale is widely used throughout Germany, its psychometric properties are yet unknown. The aim of the study was to determine the interrater reliability of the Braden scale and to compare the results with those of published data. A literature review was conducted. 20 studies measuring the interrater reliability of the Braden scale were evaluated. Only three of those studies investigated the interrater reliability of single items. The Pearson product-moment correlation coefficient (0.80 to 1.00) was calculated in most studies for an evaluation of the Braden scale as a whole. However, the use of correlation coefficients is inappropriate for measuring the interrater reliability of the Braden scale. Measures of the intraclass correlation coefficient varied from 0.83 to 0.99. The investigation of the interrater reliability of the Braden scale's German version was conducted in a German nursing home in 2006. Nurses independently rated 18 and 32 residents twice. Nurses achieved the highest agreement when rating the items "friction and shear" and "activity" (overall proportion of agreement = 0.67 to 0.84, Cohen's Kappa = 0.57 to 0.73). The lowest agreement was achieved when the item "nutrition" (overall proportion of agreement = 0.47 to 0.51, Cohen's Kappa = 0.28 to 0.30) was rated. For 66% of the rated residents the difference in the obtained Braden scores was equal or less than one point. Intraclass correlation coefficients were 0.91 (95% confidence interval 0.82 to 0.96) and 0.88 (95% confidence interval 0.61 to 0.96). This indicates that the interrater reliability of the Braden scale was high in the examined setting.
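The agreement statistics used in the study, overall proportion of agreement and Cohen's kappa, can be computed directly from two raters' scores; a minimal sketch with invented ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement and Cohen's kappa for two raters scoring the
    same subjects: kappa = (p_o - p_e) / (1 - p_e), where p_e is the
    agreement expected by chance from each rater's marginal totals."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return p_o, (p_o - p_e) / (1.0 - p_e)

# Two nurses rating eight residents on a 1-3 Braden item (invented data):
a = [1, 2, 3, 1, 2, 3, 1, 2]
b = [1, 2, 3, 1, 2, 2, 1, 3]
p_o, kappa = cohens_kappa(a, b)
print(round(p_o, 2), round(kappa, 2))
```

Note how kappa sits well below the raw agreement: chance-corrected statistics are exactly why the study reports both measures per item.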

  12. Ultra Reliable Closed Loop Life Support for Long Space Missions

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.; Ewert, Michael K.

    2010-01-01

    Spacecraft human life support systems can achieve ultra reliability by providing sufficient spares to replace all failed components. The additional mass of spares for ultra reliability is approximately equal to the original system mass, provided that the original system reliability is not too low. Acceptable reliability can be achieved for the Space Shuttle and Space Station by preventive maintenance and by replacing failed units. However, on-demand maintenance and repair requires a logistics supply chain in place to provide the needed spares. In contrast, a Mars or other long space mission must take along all the needed spares, since resupply is not possible. Long missions must achieve ultra reliability, a very low failure rate per hour, since they will take years rather than weeks and cannot be cut short if a failure occurs. Also, distant missions have a much higher mass launch cost per kilogram than near-Earth missions. Achieving ultra reliable spacecraft life support systems with acceptable mass will require a well-planned and extensive development effort. Analysis must determine the reliability requirement and allocate it to subsystems and components. Ultra reliability requires reducing the intrinsic failure causes, providing spares to replace failed components and having "graceful" failure modes. Technologies, components, and materials must be selected and designed for high reliability. Long duration testing is needed to confirm very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The system must be designed, developed, integrated, and tested with system reliability in mind. Maintenance and reparability of failed units must not add to the probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass should start soon since it must be a long term effort.
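
    The spares argument can be sketched with a standard Poisson provisioning model (a generic illustration, not the paper's analysis): for a mission with no resupply, carry enough spares of each component that the probability of exhausting them stays below an allowed risk.

```python
import math

def spares_needed(failure_rate, hours, confidence):
    """Smallest spare count n with P(failures <= n) >= confidence,
    assuming failures arrive as a Poisson process."""
    mean = failure_rate * hours
    term = math.exp(-mean)   # P(0 failures)
    cum, n = term, 0
    while cum < confidence:
        n += 1
        term *= mean / n     # Poisson recurrence: P(n) = P(n-1) * mean / n
        cum += term
    return n

# Hypothetical component: 1 failure per 10,000 hours, over a 3-year
# (26,280 h) mission, provisioned to a 99.9% no-stockout probability.
print(spares_needed(1e-4, 26280, 0.999))
```

    The model makes the abstract's point concrete: lowering the intrinsic failure rate cuts the spares count (and mass) roughly in proportion, which is why ultra reliability and spares provisioning must be traded together.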

  13. Reliability Driven Space Logistics Demand Analysis

    NASA Technical Reports Server (NTRS)

    Knezevic, J.

    1995-01-01

    Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems which are the basis for the determination of the demands for logistics resources needed during the operational life or mission of space systems. The applicability of the method presented is demonstrated through several examples.

  14. Gearbox Reliability Collaborative Update: A Brief (Presentation)

    SciTech Connect

    Sheng, S.; Keller, J.; McDade, M.

    2012-01-01

    This presentation is an update on the Gearbox Reliability Collaborative (GRC) for the AWEA Wind Project Operations, Maintenance & Reliability Seminar. GRC accomplishments are: (1) Failure database software deployed - partners see business value for themselves and customers; (2) Designed, built, instrumented, and tested two gearboxes - (a) Generated unprecedented public domain test data from both field testing and dynamometer testing, (b) Different responses from 'identical' gearboxes, (c) Demonstrated importance of non-torque loading and modeling approach; and (3) Active collaborative, with wide industry support, leveraging DOE funding - Modeling round robin and Condition Monitoring round robin.

  15. DMD reliability: a MEMS success story

    NASA Astrophysics Data System (ADS)

    Douglass, Michael

    2003-01-01

    The Digital Micromirror Device (DMD) developed by Texas Instruments (TI) has made tremendous progress in both performance and reliability since it was first invented in 1987. From the first working concept of a bistable mirror, the DMD now provides high brightness, high contrast, and high reliability in over 1,500,000 projectors using Digital Light Processing technology. In early 2000, TI introduced the first DMD chip with a smaller mirror (14-micron pitch versus 17-micron pitch). This allowed a greater number of high-resolution DMD chips per wafer, providing increased output capacity as well as the flexibility to use existing package designs. By using existing package designs, subsequent DMDs cost less and met our customers' demand for faster time to market. In recent years, the DMD achieved the status of a commercially successful MEMS device. It reached this status through the efforts of hundreds of individuals working toward a common goal over many years. Neither textbooks nor design guidelines existed at the time, and there was little infrastructure in place to support such a large endeavor. The knowledge we gained through our characterization and testing was all we had available to us through the first few years of development. Reliability was only a goal in 1992 when production development activity started; a goal that many throughout the industry, and even within Texas Instruments, doubted the DMD could achieve. The results presented in this paper demonstrate that we succeeded by exceeding the reliability goals.

  16. Reliability and Maintainability Engineering - A Major Driver for Safety and Affordability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2011-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of an effort to design and build a safe and affordable heavy lift vehicle to go to the moon and beyond. To achieve that, NASA is seeking more innovative and efficient approaches to reduce cost while maintaining an acceptable level of safety and mission success. One area that has the potential to contribute significantly to achieving NASA safety and affordability goals is Reliability and Maintainability (R&M) engineering. Inadequate reliability or failure of critical safety items may directly jeopardize the safety of the user(s) and result in a loss of life. Inadequate reliability of equipment may directly jeopardize mission success. Systems designed to be more reliable (fewer failures) and maintainable (fewer resources needed) can lower the total life cycle cost. The Department of Defense (DOD) and industry experience has shown that optimized and adequate levels of R&M are critical for achieving a high level of safety and mission success, and low sustainment cost. Also, lessons learned from the Space Shuttle program clearly demonstrated the importance of R&M engineering in designing and operating safe and affordable launch systems. The Challenger and Columbia accidents are examples of the severe impact of design unreliability and process induced failures on system safety and mission success. These accidents demonstrated the criticality of reliability engineering in understanding component failure mechanisms and integrated system failures across the system elements interfaces. Experience from the shuttle program also shows that insufficient Reliability, Maintainability, and Supportability (RMS) engineering analyses upfront in the design phase can significantly increase the sustainment cost and, thereby, the total life cycle cost. Emphasis on RMS during the design phase is critical for identifying the design features and characteristics needed for time efficient processing

  17. Fault Tree Reliability Analysis and Design-for-reliability

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.

  18. On Component Reliability and System Reliability for Space Missions

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

    2012-01-01

    This paper is to address the basics, the limitations and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.
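
    The component-to-system relationship examined here has a textbook core: a series string multiplies component reliabilities down, while redundancy multiplies failure probabilities down. A minimal sketch of both (our illustration, not the paper's flight-computing analysis):

```python
def series(reliabilities):
    """A series system works only if every component works."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(reliabilities):
    """A parallel (redundant) system fails only if all components fail."""
    q = 1.0
    for x in reliabilities:
        q *= (1.0 - x)
    return 1.0 - q

# Three 0.99 components in series fall below any single component...
print(round(series([0.99] * 3), 6))                       # 0.970299
# ...while duplicating the whole string restores margin.
print(round(parallel([series([0.99] * 3)] * 2), 6))       # 0.999118
```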

  19. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1990-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.
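
    The acceptance check described in the patent can be sketched as follows; the resistance-per-unit-length figure, lengths, and 5% threshold below are hypothetical, chosen only to illustrate comparing inferred length against design length.

```python
def measured_length(r_measured, r_per_unit_length):
    """Infer the serpentine conductor's apparent length from its
    end-to-end resistance and the film's resistance per unit length."""
    return r_measured / r_per_unit_length

def passes(r_measured, r_per_unit_length, design_length, max_excess_pct):
    """Reject the die when step-induced high resistance inflates the
    apparent length beyond the allowed percentage over design length."""
    excess = measured_length(r_measured, r_per_unit_length) / design_length - 1.0
    return excess * 100.0 <= max_excess_pct

# Hypothetical numbers: 1 ohm/mm film, 100 mm serpentine, 5% allowance.
print(passes(103.0, 1.0, 100.0, 5.0))  # True  (3% excess)
print(passes(108.0, 1.0, 100.0, 5.0))  # False (8% excess)
```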

  20. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1988-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  1. Reliable Entanglement Verification

    NASA Astrophysics Data System (ADS)

    Arrazola, Juan; Gittsovich, Oleg; Donohue, John; Lavoie, Jonathan; Resch, Kevin; Lütkenhaus, Norbert

    2013-05-01

    Entanglement plays a central role in quantum protocols. It is therefore important to be able to verify the presence of entanglement in physical systems from experimental data. In the evaluation of these data, the proper treatment of statistical effects requires special attention, as one can never claim to have verified the presence of entanglement with certainty. Recently, increased attention has been paid to the development of proper frameworks to pose and to answer these types of questions. In this work, we apply recent results by Christandl and Renner on reliable quantum state tomography to construct a reliable entanglement verification procedure based on the concept of confidence regions. The statements made do not require the specification of a prior distribution nor the assumption of an independent and identically distributed (i.i.d.) source of states. Moreover, we develop efficient numerical tools that are necessary to employ this approach in practice, rendering the procedure ready to be employed in current experiments. We demonstrate this fact by analyzing the data of an experiment where entangled two-photon states were generated and whose entanglement is verified with the use of an accessible nonlinear witness.

  2. Time-Dependent Reliability Analysis

    1999-10-27

    FRANTIC-3 was developed to evaluate system unreliability using time-dependent techniques. The code provides two major options: to evaluate standby system unavailability or, in addition to the unavailability, to calculate the total system failure probability by including both the unavailability of the system on demand as well as the probability that it will operate for an arbitrary time period following the demand. The FRANTIC-3 time dependent reliability models provide a large selection of repair and testing policies applicable to standby or continuously operating systems consisting of periodically tested, monitored, and non-repairable (non-testable) components. Time-dependent and test frequency dependent failures, as well as demand stress related failures, test-caused degradation and wear-out, test associated human errors, test deficiencies, test override, unscheduled and scheduled maintenance, component renewal and replacement policies, and test strategies can be prescribed. The conditional system unavailabilities associated with the downtimes of the user-specified failed components are also evaluated. Optionally, the code can perform a sensitivity study for system unavailability or total failure probability with respect to the failure characteristics of the standby components.

  3. Time-Dependent Reliability Analysis

    SciTech Connect

    Sartori, Enrico

    1999-10-27

    FRANTIC-3 was developed to evaluate system unreliability using time-dependent techniques. The code provides two major options: to evaluate standby system unavailability or, in addition to the unavailability, to calculate the total system failure probability by including both the unavailability of the system on demand as well as the probability that it will operate for an arbitrary time period following the demand. The FRANTIC-3 time dependent reliability models provide a large selection of repair and testing policies applicable to standby or continuously operating systems consisting of periodically tested, monitored, and non-repairable (non-testable) components. Time-dependent and test frequency dependent failures, as well as demand stress related failures, test-caused degradation and wear-out, test associated human errors, test deficiencies, test override, unscheduled and scheduled maintenance, component renewal and replacement policies, and test strategies can be prescribed. The conditional system unavailabilities associated with the downtimes of the user-specified failed components are also evaluated. Optionally, the code can perform a sensitivity study for system unavailability or total failure probability with respect to the failure characteristics of the standby components.
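
    The simplest case generalized by codes like FRANTIC-3 is a periodically tested standby component with constant failure rate λ: over a test interval T, its time-average unavailability is 1 − (1 − e^(−λT))/(λT), which is approximately λT/2 for small λT. A toy sketch (not the FRANTIC-3 code, and ignoring test downtime and repair contributions):

```python
import math

def mean_standby_unavailability(lam, test_interval):
    """Time-average unavailability of a periodically tested standby
    component with constant failure rate lam (exact integral of
    1 - exp(-lam*t) over one test interval, not the lam*T/2 rule)."""
    x = lam * test_interval
    return 1.0 - (1.0 - math.exp(-x)) / x

# Hypothetical: 1e-5 per-hour failure rate, monthly (720 h) tests.
q = mean_standby_unavailability(1e-5, 720.0)
print(q)  # close to, and slightly below, lam*T/2 = 0.0036
```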

  4. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants utilize only about 30 percent of the primary energy that they consume; the rest is wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emissions achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to minimize operational costs must also be simple enough to implement in practice, keeping the computational requirements of the installed hardware low. This dissertation focuses on the following aspects pertaining to the design of such an algorithm: (a) a real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit (PGU) operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; and (d) an easy-to-implement, effective, and reliable hourly building load prediction algorithm.

  5. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.

  6. [Study of the relationship between human quality and reliability].

    PubMed

    Long, S; Wang, C; Wang, Li; Yuan, J; Liu, H; Jiao, X

    1997-02-01

    To clarify the relationship between human quality and reliability, 1925 experiments in 20 subjects were carried out to study the relationship between disposition character, digital memory, graphic memory, multi-reaction time, and education level and simulated aircraft operation. Meanwhile, the effects of task difficulty and environmental factors on human reliability were also studied. The results showed that human quality can be predicted and evaluated through experimental methods: the better the human quality, the higher the human reliability. PMID:11539889

  7. 1992 annual report of the North American Electric Reliability Council

    SciTech Connect

    Not Available

    1992-01-01

    This report describes the operation and concerns of the North American Electric Reliability Council and contains contributions from each of its member regional councils concerning reliability of electric power in their representative areas. The topics of the reports include current and future electric system reliability, regional monitoring, planning, telecommunications, the generating availability data system, technical services available, installed capacity changes, equipment changes, unusual events, hurricane Andrew, and ice storms.

  8. Effect of oxygen, moisture and illumination on the stability and reliability of dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (DNTT) OTFTs during operation and storage.

    PubMed

    Ding, Ziqian; Abbas, Gamal; Assender, Hazel E; Morrison, John J; Yeates, Stephen G; Patchett, Eifion R; Taylor, D Martin

    2014-09-10

    We report a systematic study of the stability of organic thin film transistors (OTFTs) both in storage and under operation. Apart from a thin polystyrene buffer layer spin-coated onto the gate dielectric, the constituent parts of the OTFTs were all prepared by vacuum evaporation. The OTFTs are based on the semiconducting small molecule dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (DNTT) deposited onto the surface of a polystyrene-buffered in situ polymerized diacrylate gate insulator. Over a period of 9 months, no degradation of the hole mobility occurred in devices stored either in the dark in dry air or in uncontrolled air under normal laboratory fluorescent lighting. In the latter case, rather than decreasing, the mobility actually increased almost 2-fold to 1.5 cm(2)/(V · s). The devices also showed good stability during repeated on/off cycles in the dark in dry air. Exposure to oxygen and light during the on/off cycles led to a positive shift of the transfer curves, due to electron trapping when the DNTT was biased into depletion by the application of a positive gate voltage. When operated in accumulation (negative gate voltage) under the same conditions, the transfer curves were stable. When the voltage was cycled in moist air in the dark, the transfer curves shifted to negative voltages, thought to be due to the generation of hole traps either in the semiconductor or at its interface with the dielectric layer. When subjected to gate bias stress in dry air in the dark for at least 144 h, the device characteristics remained stable. PMID:25116597

  9. Study of Optimal Perimetric Testing in Children (OPTIC): Feasibility, Reliability and Repeatability of Perimetry in Children

    PubMed Central

    Patel, Dipesh E.; Cumberland, Phillippa M.; Walters, Bronwen C.; Russell-Eggitt, Isabelle; Rahi, Jugnoo S.

    2015-01-01

    Purpose To investigate feasibility, reliability and repeatability of perimetry in children. Methods A prospective, observational study recruiting 154 children aged 5–15 years, without an ophthalmic condition that affects the visual field (controls), identified consecutively between May 2012 and November 2013 from hospital eye clinics. Perimetry was undertaken in a single sitting, with standardised protocols, in a randomised order using the Humphrey static (SITA 24–2 FAST), Goldmann and Octopus kinetic perimeters. Data collected included test duration, subjective experience and test quality (incorporating examiner ratings on comprehension of instructions, fatigue, response to visual and auditory stimuli, concentration and co-operation) to assess feasibility and reliability. Testing was repeated within 6 months to assess repeatability. Results Overall feasibility was very high (Goldmann=96.1%, Octopus=89% and Humphrey=100% completed the tests). Examiner rated reliability was ‘good’ in 125 (81.2%) children for Goldmann, 100 (64.9%) for Octopus and 98 (63.6%) for Humphrey perimetry. Goldmann perimetry was the most reliable method in children under 9 years of age. Reliability improved with increasing age (multinomial logistic regression (Goldmann, Octopus and Humphrey), p<0.001). No significant differences were found for any of the three test strategies when examining initial and follow-up data outputs (Bland-Altman plots, n=43), suggesting good test repeatability, although the sample size may preclude detection of a small learning effect. Conclusions Feasibility and reliability of formal perimetry in children improves with age. By the age of 9 years, all the strategies used here were highly feasible and reliable. Clinical assessment of the visual field is achievable in children as young as 5 years, and should be considered where visual field loss is suspected. Since Goldmann perimetry is the most effective strategy in children aged 5–8 years and this

  10. Development of a Pattern Recognition Methodology for Determining Operationally Optimal Heat Balance Instrumentation Calibration Schedules

    SciTech Connect

    Kurt Beran; John Christenson; Dragos Nica; Kenny Gross

    2002-12-15

    The goal of the project is to enable plant operators to detect, with high sensitivity and reliability, the onset of decalibration drifts in all of the instrumentation used as input to the reactor heat balance calculations. To achieve this objective, the collaborators developed and implemented at DBNPS an extension of the Multivariate State Estimation Technique (MSET) pattern recognition methodology pioneered by ANL. The extension was implemented during the second phase of the project and fully achieved the project goal.

  11. Methodology for making environmental as low as reasonably achievable (ALARA) determinations

    SciTech Connect

    Brown, R.C.; Speer, D.R.

    1982-01-01

    An overall evaluation concept for use in making differential cost-benefit analyses in environmental as low as reasonably achievable (ALARA) determinations is being implemented by Rockwell Hanford Operations. This evaluation includes consideration of seven categories: (1) capital costs; (2) operating costs; (3) state of the art; (4) safety; (5) accident or upset consequences; (6) reliability, operability, and maintainability; and (7) decommissionability. Appropriate weighting factors for each of these categories are under development so that ALARA determinations can be made by comparing scores of alternative proposals for facility design, operations, and upgrade. This method of evaluation circumvents the traditional basis of a stated monetary sum per person-rem of dose commitment. This alternative was generated by advice from legal counsel who advised against formally pursuing this avenue of approach to ALARA for environmental and occupational dose commitments.

  12. 14 CFR 91.1415 - CAMP: Mechanical reliability reports.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false CAMP: Mechanical reliability reports. 91.1415 Section 91.1415 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Ownership Operations Program Management § 91.1415 CAMP: Mechanical reliability reports. (a) Each...

  13. 14 CFR 91.1415 - CAMP: Mechanical reliability reports.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 2 2014-01-01 2014-01-01 false CAMP: Mechanical reliability reports. 91.1415 Section 91.1415 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Ownership Operations Program Management § 91.1415 CAMP: Mechanical reliability reports. (a) Each...

  14. 14 CFR 91.1415 - CAMP: Mechanical reliability reports.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false CAMP: Mechanical reliability reports. 91.1415 Section 91.1415 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Ownership Operations Program Management § 91.1415 CAMP: Mechanical reliability reports. (a) Each...

  15. 14 CFR 91.1415 - CAMP: Mechanical reliability reports.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false CAMP: Mechanical reliability reports. 91.1415 Section 91.1415 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Ownership Operations Program Management § 91.1415 CAMP: Mechanical reliability reports. (a) Each...

  16. Testing for PV Reliability (Presentation)

    SciTech Connect

    Kurtz, S.; Bansal, S.

    2014-09-01

    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  17. Making Reliability Arguments in Classrooms

    ERIC Educational Resources Information Center

    Parkes, Jay; Giron, Tilia

    2006-01-01

    Reliability methodology needs to evolve as validity has done into an argument supported by theory and empirical evidence. Nowhere is the inadequacy of current methods more visible than in classroom assessment. Reliability arguments would also permit additional methodologies for evidencing reliability in classrooms. It would liberalize methodology…

  18. Factor reliability into load management

    SciTech Connect

    Feight, G.R.

    1983-07-01

    Hardware reliability is a major factor to consider when selecting a direct-load-control system. The author outlines a method of estimating present-value costs associated with system reliability. He points out that small differences in receiver reliability make a significant difference in owning cost. 4 figures.
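
    The present-value comparison the author outlines can be illustrated with a generic sketch; every figure below (fleet size, failure rates, repair cost, discount rate, horizon) is hypothetical, chosen only to show how a small reliability difference compounds into owning cost.

```python
def pv_failure_cost(units, annual_fail_rate, repair_cost, discount, years):
    """Present value of expected receiver repair costs over the horizon,
    discounting each year's expected repairs back to today."""
    pv = 0.0
    for t in range(1, years + 1):
        pv += units * annual_fail_rate * repair_cost / (1 + discount) ** t
    return pv

# Hypothetical fleet: 10,000 receivers, 2% vs 4% annual failure rate,
# $50 repair cost, 10% discount rate, 10-year life.
low = pv_failure_cost(10_000, 0.02, 50.0, 0.10, 10)
high = pv_failure_cost(10_000, 0.04, 50.0, 0.10, 10)
print(round(high - low))  # the less reliable receiver costs ~$61k more
```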

  19. Optimum Reliability of Gain Scores.

    ERIC Educational Resources Information Center

    Sharma, K. K.; Gupta, J. K.

    1986-01-01

    This paper gives a mathematical treatment to findings of Zimmerman and Williams and establishes a minimum reliability for gain scores when the pretest and posttest have equal reliabilities and equal standard deviations. It discusses the behavior of the reliability of gain scores in terms of variations in other test parameters. (Author/LMO)
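
    The result discussed rests on the classical formula for the reliability of a difference (gain) score; with equal standard deviations and equal test reliabilities it reduces to (rho - rxy)/(1 - rxy). A direct transcription of the general formula (our sketch of the standard result, not the paper's derivation):

```python
def gain_score_reliability(sx, sy, rxx, ryy, rxy):
    """Classical reliability of the difference score D = Y - X, given
    pretest/posttest SDs (sx, sy), reliabilities (rxx, ryy), and the
    pretest-posttest correlation rxy."""
    num = sx**2 * rxx + sy**2 * ryy - 2 * sx * sy * rxy
    den = sx**2 + sy**2 - 2 * sx * sy * rxy
    return num / den

# Equal SDs and equal reliabilities reduce to (rho - rxy)/(1 - rxy):
print(gain_score_reliability(10, 10, 0.8, 0.8, 0.5))  # 0.6
```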

  20. Schoolbook Texts: Behavioral Achievement Priming in Math and Language

    PubMed Central

    Engeser, Stefan; Baumann, Nicola; Baum, Ingrid

    2016-01-01

    Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications. PMID:26938446

  1. Schoolbook Texts: Behavioral Achievement Priming in Math and Language.

    PubMed

    Engeser, Stefan; Baumann, Nicola; Baum, Ingrid

    2016-01-01

    Prior research found reliable and considerably strong effects of semantic achievement primes on subsequent performance. In order to simulate a more natural priming condition to better understand the practical relevance of semantic achievement priming effects, running texts of schoolbook excerpts with and without achievement primes were used as priming stimuli. Additionally, we manipulated the achievement context; some subjects received no feedback about their achievement and others received feedback according to a social or individual reference norm. As expected, we found a reliable (albeit small) positive behavioral priming effect of semantic achievement primes on achievement in math (Experiment 1) and language tasks (Experiment 2). Feedback moderated the behavioral priming effect less consistently than we expected. The implication that achievement primes in schoolbooks can foster performance is discussed along with general theoretical implications.

  2. Natural Circulation in Water Cooled Nuclear Power Plants Phenomena, models, and methodology for system reliability assessments

    SciTech Connect

    Jose Reyes

    2005-02-14

    In recent years it has been recognized that the application of passive safety systems (i.e., those whose operation takes advantage of natural forces such as convection and gravity) can contribute to simplification and potentially to improved economics of new nuclear power plant designs. In 1991 the IAEA Conference on "The Safety of Nuclear Power: Strategy for the Future" noted that for new plants "the use of passive safety features is a desirable method of achieving simplification and increasing the reliability of the performance of essential safety functions, and should be used wherever appropriate".

  3. General Achievement Trends: Oklahoma

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  4. General Achievement Trends: Georgia

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  5. General Achievement Trends: Nebraska

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  6. General Achievement Trends: Arkansas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  7. General Achievement Trends: Maryland

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  8. General Achievement Trends: Maine

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  9. General Achievement Trends: Iowa

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  10. General Achievement Trends: Texas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  11. General Achievement Trends: Hawaii

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  12. General Achievement Trends: Kansas

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  13. General Achievement Trends: Florida

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  14. General Achievement Trends: Massachusetts

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  15. General Achievement Trends: Tennessee

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  16. General Achievement Trends: Alabama

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  17. General Achievement Trends: Virginia

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  18. General Achievement Trends: Michigan

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  19. General Achievement Trends: Colorado

    ERIC Educational Resources Information Center

    Center on Education Policy, 2009

    2009-01-01

    This general achievement trends profile includes information that the Center on Education Policy (CEP) and the Human Resources Research Organization (HumRRO) obtained from states from fall 2008 through April 2009. Included herein are: (1) Bullet points summarizing key findings about achievement trends in that state at three performance…

  20. Inverting the Achievement Pyramid

    ERIC Educational Resources Information Center

    White-Hood, Marian; Shindel, Melissa

    2006-01-01

    Attempting to invert the pyramid to improve student achievement and increase all students' chances for success is not a new endeavor. For decades, educators have strategized, formed think tanks, and developed school improvement teams to find better ways to improve the achievement of all students. Currently, the No Child Left Behind Act (NCLB) is…

  1. Achievement Test Program.

    ERIC Educational Resources Information Center

    Ohio State Dept. of Education, Columbus. Trade and Industrial Education Service.

    The Ohio Trade and Industrial Education Achievement Test battery is comprised of seven basic achievement tests: Machine Trades, Automotive Mechanics, Basic Electricity, Basic Electronics, Mechanical Drafting, Printing, and Sheet Metal. The tests were developed by subject matter committees and specialists in testing and research. The Ohio Trade and…

  2. School Effects on Achievement.

    ERIC Educational Resources Information Center

    Nichols, Robert C.

    The New York State Education Department conducts a Pupil Evaluation Program (PEP) in which each year all third, sixth, and ninth grade students in the state are given a series of achievement tests in reading and mathematics. The data accumulated by the department includes achievement test scores, teacher characteristics, building and curriculum…

  3. Heritability of Creative Achievement

    ERIC Educational Resources Information Center

    Piffer, Davide; Hur, Yoon-Mi

    2014-01-01

    Although creative achievement is a subject of much attention to lay people, the origin of individual differences in creative accomplishments remains poorly understood. This study examined genetic and environmental influences on creative achievement in an adult sample of 338 twins (mean age = 26.3 years; SD = 6.6 years). Twins completed the Creative…

  4. Confronting the Achievement Gap

    ERIC Educational Resources Information Center

    Gardner, David

    2007-01-01

    This article talks about the large achievement gap between children of color and their white peers. The reasons for the achievement gap are varied. First, many urban minorities come from a background of poverty. One of the detrimental effects of growing up in poverty is receiving inadequate nourishment at a time when bodies and brains are rapidly…

  5. Achieving Public Schools

    ERIC Educational Resources Information Center

    Abowitz, Kathleen Knight

    2011-01-01

    Public schools are functionally provided through structural arrangements such as government funding, but public schools are achieved in substance, in part, through local governance. In this essay, Kathleen Knight Abowitz explains the bifocal nature of achieving public schools; that is, that schools are both subject to the unitary Public compact of…

  6. Real Time Grid Reliability Management 2005

    SciTech Connect

    Eto, Joe; Eto, Joe; Lesieutre, Bernard; Lewis, Nancy Jo; Parashar, Manu

    2008-07-07

    The increased need to manage California's electricity grid in real time is a result of the ongoing transition from a system operated by vertically-integrated utilities serving native loads to one operated by an independent system operator supporting competitive energy markets. During this transition period, the traditional approach to reliability management -- construction of new transmission lines -- has not been pursued due to unresolved issues related to the financing and recovery of transmission project costs. In the absence of investments in new transmission infrastructure, the best strategy for managing reliability is to equip system operators with better real-time information about actual operating margins so that they can better understand and manage the risk of operating closer to the edge. A companion strategy is to address known deficiencies in offline modeling tools that are needed to ground the use of improved real-time tools. This project: (1) developed and conducted first-ever demonstrations of two prototype real-time software tools for voltage security assessment and phasor monitoring; and (2) prepared a scoping study on improving load and generator response models. Additional funding through two separate subsequent work authorizations has already been provided to build upon the work initiated in this project.

  7. Allocating SMART Reliability and Maintainability Goals to NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Gillespie, Amanda; Monaghan, Mark

    2013-01-01

    This paper will describe the methodology used to allocate Reliability and Maintainability (R&M) goals to Ground Systems Development and Operations (GSDO) subsystems currently being designed or upgraded.

  8. Experiences with Two Reliability Data Collection Efforts (Presentation)

    SciTech Connect

    Sheng, S.; Lantz, E.

    2013-08-01

    This presentation, given by NREL at the Wind Reliability Experts Meeting in Albuquerque, New Mexico, outlines the causes of wind plant operational expenditures and gearbox failures and describes NREL's efforts to create a gearbox failure database.

  9. Reliability of Radioisotope Stirling Convertor Linear Alternator

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Korovaichuk, Igor; Geng, Steven M.; Schreiber, Jeffrey G.

    2006-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions would require reliable design lifetimes of up to 14 years. Critical components and materials of Stirling convertors have been undergoing extensive testing and evaluation in support of reliable performance over the specified life span. Of significant importance to the successful development of the Stirling convertor is the design of a lightweight and highly efficient linear alternator. Alternator performance could vary due to small deviations in the permanent magnet properties, operating temperature, and component geometries. Durability prediction and reliability of the alternator may be affected by these deviations from nominal design conditions. Therefore, it is important to evaluate the effect of these uncertainties in predicting the reliability of the linear alternator performance. This paper presents a study in which a reliability-based methodology is used to assess alternator performance. The response surface characterizing the induced open-circuit voltage performance is constructed using 3-D finite element magnetic analysis. The fast probability integration method is used to determine the probability of the desired performance and its sensitivity to the alternator design parameters.
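
    The kind of uncertainty propagation described above can be illustrated with a brief Monte Carlo sketch. The response surface, parameter spreads, and voltage specification below are invented stand-ins for the study's finite element model and actual design data:

    ```python
    import random

    random.seed(42)

    def open_circuit_voltage(magnet, temperature, gap):
        # Hypothetical response surface standing in for the 3-D finite
        # element magnetic model: voltage rises with magnet strength and
        # falls with temperature and air gap.
        return 100.0 * magnet * (1 - 0.002 * (temperature - 300.0)) / gap

    # Assumed nominal values and small normal deviations (illustrative only).
    N = 100_000
    spec_min = 95.0            # hypothetical minimum acceptable voltage
    failures = 0
    for _ in range(N):
        m = random.gauss(1.00, 0.02)   # permanent-magnet property
        t = random.gauss(300.0, 5.0)   # operating temperature
        g = random.gauss(1.00, 0.01)   # component geometry (air gap)
        if open_circuit_voltage(m, t, g) < spec_min:
            failures += 1

    reliability = 1 - failures / N
    print(f"Estimated probability of meeting spec: {reliability:.3f}")
    ```

    Fast probability integration methods arrive at the same kind of estimate far more cheaply than brute-force sampling, which matters when each response evaluation is a finite element run.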

  10. Reliability analysis of wastewater treatment plants.

    PubMed

    Oliveira, Sílvia C; Von Sperling, Marcos

    2008-02-01

    This article presents a reliability analysis of 166 full-scale wastewater treatment plants operating in Brazil. Six different processes have been investigated, comprising septic tank+anaerobic filter, facultative pond, anaerobic pond+facultative pond, activated sludge, upflow anaerobic sludge blanket (UASB) reactors alone and UASB reactors followed by post-treatment. A methodology developed by Niku et al. [1979. Performance of activated sludge process and reliability-based design. J. Water Pollut. Control Assoc., 51(12), 2841-2857] is used for determining the coefficients of reliability (COR), in terms of the compliance of effluent biochemical oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), total nitrogen (TN), total phosphorus (TP) and fecal or thermotolerant coliforms (FC) with discharge standards. The design concentrations necessary to meet the prevailing discharge standards and the expected compliance percentages have been calculated from the COR obtained. The results showed that few plants, under the observed operating conditions, would be able to present reliable performance with respect to compliance with the analyzed standards. The article also discusses the importance of understanding the lognormal behavior of the data in setting up discharge standards, in interpreting monitoring results, and in assessing compliance with the legislation. PMID:17897694
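
    The coefficient-of-reliability calculation from Niku et al. can be sketched in a few lines. This is a generic illustration of the lognormal COR formula, COR = sqrt(1 + CV^2) * exp(-Z * sqrt(ln(1 + CV^2))); the coefficient of variation, standard, and compliance target below are hypothetical, not values from the study:

    ```python
    from math import exp, log, sqrt
    from statistics import NormalDist

    def coefficient_of_reliability(cv, reliability):
        """COR for a lognormally distributed effluent constituent with
        coefficient of variation `cv`, such that the discharge standard
        is met with probability `reliability`."""
        z = NormalDist().inv_cdf(reliability)   # standard-normal quantile
        return sqrt(1 + cv**2) * exp(-z * sqrt(log(1 + cv**2)))

    # Hypothetical example: BOD standard of 30 mg/L, CV = 0.5,
    # 95% expected compliance.
    cor = coefficient_of_reliability(cv=0.5, reliability=0.95)
    design_mean = cor * 30.0    # mean design concentration that complies
    print(f"COR = {cor:.3f}, design mean = {design_mean:.1f} mg/L")
    ```

    The design mean sits well below the standard itself, which is the practical point of the COR analysis: a plant must be designed to perform better than the limit on average in order to comply most of the time.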

  12. Self Regulated Learning of High Achievers

    ERIC Educational Resources Information Center

    Rathod, Ami

    2010-01-01

    The study was conducted on high achievers of Senior Secondary school. Main objectives were to identify the self regulated learners among the high achievers, to find out dominant components and characteristics operative in self regulated learners and to compare self regulated learning of learners with respect to their subject (science and non…

  13. Operation Poorman

    SciTech Connect

    Pruvost, N.; Tsitouras, J.

    1981-03-18

    The objectives of Operation Poorman were to design and build a portable seismic system and to set up and use this system in a cold-weather environment. The equipment design uses current technology to achieve a low-power, lightweight system that is configured into three modules. The system was deployed in Alaska during wintertime, and the results provide a basis for specifying a mission-ready seismic verification system.

  14. Trust sensor interface for improving reliability of EMG-based user intent recognition.

    PubMed

    Liu, Yuhong; Zhang, Fan; Sun, Yan Lindsay; Huang, He

    2011-01-01

    To achieve natural and smooth control of prostheses, Electromyographic (EMG) signals have been investigated for decoding user intent. However, EMG signals can be easily contaminated by diverse disturbances, leading to errors in user intent recognition and threatening the safety of prosthesis users. To address this problem, we propose a trust sensor interface (TSI) that contains 2 modules: (1) an abnormality detector that detects diverse disturbances with high accuracy and low latency and (2) trust evaluation that dynamically evaluates the reliability of EMG sensors. Based on the output of the TSI, the user intent recognition (UIR) algorithm is able to dynamically adjust its operations or decisions. Our experiments on an able-bodied subject have demonstrated that the proposed TSI can effectively detect two types of disturbances (i.e., motion artifacts and baseline shifts) and improve the reliability of the UIR.
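
    The trust-evaluation module can be pictured with a minimal sketch: a per-channel trust score that decays quickly when the abnormality detector flags a disturbance and recovers slowly when the signal looks clean. The update rule, rates, and threshold here are invented for illustration and are not the paper's actual TSI algorithm:

    ```python
    def update_trust(trust, abnormal, decay=0.5, recovery=0.1):
        """One trust-update step for an EMG channel (illustrative rule:
        fast multiplicative decay on a detected disturbance, slow
        additive recovery otherwise)."""
        if abnormal:
            return trust * (1 - decay)
        return min(1.0, trust + recovery)

    # Simulated abnormality flags: a motion artifact on samples 3-5.
    flags = [False, False, False, True, True, True, False, False]
    trust = 1.0
    history = []
    for f in flags:
        trust = update_trust(trust, f)
        history.append(trust)

    # A downstream intent-recognition step might ignore the channel
    # whenever its trust falls below a threshold, e.g. 0.5.
    usable = [t >= 0.5 for t in history]
    ```

    The asymmetry (fast decay, slow recovery) makes the interface conservative: one detected disturbance is enough to sideline a channel, while several clean samples are needed before it is trusted again.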

  15. Safeguarding patients: complexity science, high reliability organizations, and implications for team training in healthcare.

    PubMed

    McKeon, Leslie M; Oswaks, Jill D; Cunningham, Patricia D

    2006-01-01

    Serious events within healthcare occur daily, exposing the failure of the system to safeguard patients and providers. The complex nature of healthcare contributes to myriad ambiguities affecting quality nursing care and patient outcomes. Leaders in healthcare organizations are looking outside the industry for ways to improve care because of slow rates of improvement in patient safety and insufficient application of evidence-based research in practice. Military and aviation industry strategies are recognized by clinicians in high-risk care settings such as the operating room, emergency departments, and intensive care units as having great potential to create safe and effective systems of care. Complexity science forms the basis for high reliability teams to recognize even the most minor variances in expected outcomes and take strong action to prevent serious error from occurring. Cultural and system barriers to achieving high reliability performance within healthcare and implications for team training are discussed.

  16. Multi-core fiber technology for highly reliable optical network in access areas

    NASA Astrophysics Data System (ADS)

    Tanaka, Ken-ichi; Lee, Yong; Nomoto, Etsuko; Arimoto, Hideo; Sugawara, Toshiki

    2015-03-01

    A failure recovery system utilizing a multi-core fiber (MCF) link with field-programmable gate array-based optical switch units was developed to achieve high-capacity, highly reliable optical networks in access areas. We describe the novel MCF link based on a multi-ring structure and a protection scheme to prevent link failures. Fan-in/fan-out devices and connectors are also presented to demonstrate the development status of the MCF connection technology for the link. We demonstrated path recovery by switching operation within the sufficiently short time required by ITU-T. The selection of a protection path for a failed working path was also optimized to traverse the minimum number of units for low-loss transmission. The results we obtained indicate that our proposed link has potential for the design of highly reliable network topologies in access areas such as data centers, systems in business areas, and fiber-to-the-home systems in residential areas.

  17. On Improving the Reliability of Distribution Networks Based on Investment Scenarios Using Reference Networks

    NASA Astrophysics Data System (ADS)

    Kawahara, Koji

    Distribution systems are inherent monopolies and have therefore generally been regulated in order to protect customers and to ensure cost-effective operation. In the UK this is one of the functions of OFGEM (Office of Gas and Electricity Markets). Initially the regulation was based on the value of assets, but there is a trend nowadays towards performance-based regulation. In order to achieve this, a methodology is needed that enables the reliability performance associated with alternative investment strategies to be compared with the investment cost of these strategies. At present there is no accepted approach for such assessments. Building on the concept of reference networks proposed in Refs. (1) and (2), this paper describes how these networks can be used to assess the impact that performance-driven investment strategies will have on the improvement of reliability indices. The method has been tested using the underground and overhead parts of a real system.
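
    Reliability performance of a distribution network is conventionally summarized by indices such as SAIFI and SAIDI, which is what an investment comparison of this kind ultimately trades off against cost. A minimal sketch with made-up outage data (not from the paper):

    ```python
    # Each outage event: (customers_interrupted, duration_hours).
    # The figures below are hypothetical, for illustration only.
    outages = [(120, 1.5), (40, 0.5), (300, 2.0)]
    customers_served = 1000

    saifi = sum(n for n, _ in outages) / customers_served      # interruptions per customer-year
    saidi = sum(n * d for n, d in outages) / customers_served  # outage hours per customer-year
    caidi = saidi / saifi                                      # average hours per interruption

    print(f"SAIFI={saifi:.2f}, SAIDI={saidi:.2f} h, CAIDI={caidi:.2f} h")
    ```

    An investment strategy is then judged by how much it moves these indices per unit of capital, which is exactly the comparison the reference-network approach is meant to support.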

  18. NASA Applications and Lessons Learned in Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Fuller, Raymond P.

    2011-01-01

    Since the Shuttle Challenger accident in 1986, communities across NASA have been developing and extensively using quantitative reliability and risk assessment methods in their decision-making process. This paper discusses several reliability engineering applications that NASA has used over the years to support the design, development, and operation of critical space flight hardware. Specifically, the paper discusses several reliability engineering applications used by NASA in areas such as risk management, inspection policies, component upgrades, reliability growth, integrated failure analysis, and physics-based probabilistic engineering analysis. In each of these areas, the paper provides a brief discussion of a case study to demonstrate the value added and the criticality of reliability engineering in supporting NASA project and program decisions to fly safely. Examples of these case studies are reliability-based life-limit extension of Space Shuttle Main Engine (SSME) hardware, reliability-based inspection policies for the Auxiliary Power Unit (APU) turbine disc, probabilistic structural engineering analysis for reliability prediction of the SSME alternate turbopump development, the impact of ET foam reliability on the Space Shuttle system risk, and reliability-based Space Shuttle upgrades for safety. Special attention is given in this paper to the physics-based probabilistic engineering analysis applications and their critical role in evaluating the reliability of NASA development hardware, including their potential use in a research and technology development environment.

  19. Utility transformer reliability not always related to loading practices

    SciTech Connect

    Not Available

    1985-08-01

    Transformer reliability is not a function of loading in a majority of utility situations. Transformer loading problems are generally the result of either the designer not understanding the load conditions and environment or the operator not operating the transformer within the proper design constraints. A corporate engineering advisor describes transformer reliability in terms of common folklore, then makes computer simulation comparisons of five transformers made between 1942 and 1982 to illustrate various methods of determining optimum loading. 1 table.

  20. Effect of amplifier component maintenance on laser system availability and reliability for the US National Ignition Facility

    SciTech Connect

    Erlandson, A.C.; Lambert, H.; Zapata, L.E.

    1996-12-01

    We have analyzed the availability and reliability of the flashlamp-pumped Nd:glass amplifiers that will be used in the National Ignition Facility (NIF) as part of a laser now being designed for future experiments in inertial confinement fusion (ICF). Clearly, in order for large ICF systems such as the NIF to operate effectively as a whole, all components must meet demanding availability and reliability requirements. Accordingly, the NIF amplifiers can achieve high reliability and availability by using reliable parts, and by using a cassette-based maintenance design that allows most key amplifier parts to be replaced within a few hours. In this way, parts that degrade slowly, as the laser slabs, silver reflectors, and blastshields can be expected to do based on previous experience, can be replaced either between shots or during scheduled maintenance periods, with no effect on availability or reliability. In contrast, parts that fail rapidly, such as the flashlamps, can and do cause unavailability or unreliability. Our analysis demonstrates that the amplifiers for the NIF will meet availability and reliability goals of 99.8% and 99.4%, respectively, provided that the 7680 flashlamps in NIF have failure rates less than or equal to those experienced on Nova, a 5000-lamp laser at Lawrence Livermore National Laboratory (LLNL).
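
    The dependence of the reliability goal on lamp failure rates can be sketched as a simple series-reliability budget. The lamp count and the 99.4% goal come from the abstract; the assumption that a shot fails if any single lamp fails, with independent per-shot lamp failures, is a simplification for illustration:

    ```python
    n_lamps = 7680   # flashlamps in NIF (from the abstract)
    goal = 0.994     # system reliability goal

    # If a shot succeeds only when every lamp works and lamps fail
    # independently with per-shot probability p, then goal = (1 - p)**n,
    # so the largest tolerable p is:
    p_max = 1 - goal ** (1 / n_lamps)
    print(f"Max tolerable per-lamp failure probability: {p_max:.2e} per shot")
    ```

    With thousands of lamps in series, the tolerable per-lamp failure probability comes out below one in a million per shot, which is why the flashlamps dominate the reliability analysis.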

  1. Improving Search Engine Reliability

    NASA Astrophysics Data System (ADS)

    Pruthi, Jyoti; Kumar, Ela

    2010-11-01

    Search engines on the Internet are used daily to access and find information. While these services provide an easy way to find information globally, they also suffer from artificially created false results. This paper describes two techniques used to manipulate search engines: spam pages (used to achieve higher rankings on the results page) and cloaking (used to feed falsified data into search engines). It also describes two proposed countermeasures: algorithms for both of the aforementioned forms of spamdexing.

  2. Reliable aerial thermography for energy conservation

    NASA Technical Reports Server (NTRS)

    Jack, J. R.; Bowman, R. L.

    1981-01-01

    A method for energy conservation, the aerial thermography survey, is discussed. It locates sources of energy losses and wasteful energy management practices. An operational map is presented for clear-sky conditions. The map outlines the key environmental conditions conducive to obtaining reliable aerial thermography. The map is developed from defined visual and heat-loss discrimination criteria, which are quantified on the basis of flat-roof heat-transfer calculations.

  3. History of Robotic and Remotely Operated Telescopes

    NASA Astrophysics Data System (ADS)

    Genet, Russell M.

    2011-03-01

    While automated instrument sequencers were employed on solar eclipse expeditions in the late 1800s, it wasn't until the 1960s that Art Code and associates at Wisconsin used a PDP minicomputer to automate an 8-inch photometric telescope. Although this pioneering project experienced frequent equipment failures and was shut down after a couple of years, it paved the way for the first space telescopes. Reliable microcomputers initiated the modern era of robotic telescopes. Louis Boyd and I applied single board microcomputers with 64K of RAM and floppy disk drives to telescope automation at the Fairborn Observatory, achieving reliable, fully robotic operation in 1983 that has continued uninterrupted for 28 years. In 1985 the Smithsonian Institution provided us with a superb operating location on Mt. Hopkins in southern Arizona, while the National Science Foundation funded additional telescopes. Remote access to our multiple robotic telescopes at the Fairborn Observatory began in the late 1980s. The Fairborn Observatory, with its 14 fully robotic telescopes and staff of two (one full-time and one part-time), illustrates the potential for low operating and maintenance costs. As the information capacity of the Internet has expanded, observational modes beyond simple differential photometry have opened up, bringing us to the current era of real-time remote access to remote observatories and global observatory networks. Although initially confined to smaller telescopes, robotic operation and remote access are spreading to larger telescopes as observing from afar becomes the normal mode of operation.

  4. A New Tissue Resonator Indenter Device and Reliability Study

    PubMed Central

    Jia, Ming; Zu, Jean W.; Hariri, Alireza

    2011-01-01

    Knowledge of tissue mechanical properties is widely required by medical applications such as disease diagnostics, surgical operation, simulation, planning, and training. A new portable device, called the Tissue Resonator Indenter Device (TRID), has been developed for measurement of regional viscoelastic properties of soft tissues at the Bio-instrument and Biomechanics Lab of the University of Toronto. As a device for in-vivo measurement of soft tissue properties, the reliability of TRID is crucial. This paper presents TRID’s working principle and an experimental study of TRID’s reliability with respect to inter-reliability, intra-reliability, and the effect of indenter misalignment. PMID:22346623

  5. Reliability impact of solar electric generation upon electric utility systems

    NASA Astrophysics Data System (ADS)

    Day, J. T.; Hobbs, W. J.

    1982-08-01

    The introduction of solar electric systems into an electric utility grid brings new considerations in the assessment of the utility's power supply reliability. This paper summarizes a methodology for estimating the reliability impact of solar electric technologies upon electric utilities for value assessment and planning purposes. Utility expansion and operating impacts are considered. Sample results from photovoltaic analysis show that solar electric plants can increase the reliable load-carrying capability of a utility system. However, the load-carrying capability of the incremental power tends to decrease, particularly at significant capacity penetration levels. Other factors influencing reliability impact are identified.
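
    Load-carrying capability is conventionally assessed through the loss-of-load probability (LOLP). The sketch below enumerates outage states of a hypothetical three-unit system and shows how a block of firm capacity lowers LOLP at a given load; the unit data are invented, and real solar capacity would additionally need an hourly availability model rather than a firm block:

    ```python
    from itertools import product

    # Hypothetical generating units: (capacity_MW, forced_outage_rate).
    units = [(100, 0.05), (100, 0.05), (50, 0.10)]

    def lolp(load_mw, extra_firm_mw=0.0):
        """Loss-of-load probability: the chance that available capacity
        falls short of the load, enumerating all unit outage states."""
        total = 0.0
        for states in product([0, 1], repeat=len(units)):  # 1 = forced outage
            prob, cap = 1.0, extra_firm_mw
            for (c, fo), out in zip(units, states):
                prob *= fo if out else 1 - fo
                cap += 0 if out else c
            if cap < load_mw:
                total += prob
        return total

    print(f"LOLP at 210 MW load:          {lolp(210):.5f}")
    print(f"... with 20 MW firm capacity: {lolp(210, extra_firm_mw=20):.5f}")
    ```

    The load-carrying capability of the addition is then the extra load that can be served before LOLP climbs back to its original value, which is the quantity that decreases at high solar penetration levels.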

  6. Student Achievement and Motivation

    ERIC Educational Resources Information Center

    Flammer, Gordon H.; Mecham, Robert C.

    1974-01-01

    Compares the lecture and self-paced methods of instruction on the basis of student motivation and achievement, comparing motivating and demotivating factors in each, and their potential for motivation and achievement. (Authors/JR)

  7. Electrical network reliability and system blackout development simulations

    NASA Astrophysics Data System (ADS)

    Nepomnyashchiy, V. A.

    2015-12-01

    The main provisions of the author's model of electrical network reliability and system blackout development are stated. The model allows one to analytically determine the main technical and economic indicators of reliability of electrical network operation, taking into account the distribution of generating capacity and electric loads, operating conditions, and the dynamic and static stability of operation, while simultaneously calculating short-circuit currents. The model also considers open-phase modes at single-phase short circuits and allows one to choose the most efficient operating conditions. The calculations conclude with an estimate of the annual average undersupply of energy and the economic losses incurred by customers due to power supply interruptions.

  8. Reliability and quality assurance on the MOD 2 wind system

    NASA Technical Reports Server (NTRS)

    Mason, W. E. B.; Jones, B. G.

    1981-01-01

    The Safety, Reliability, and Quality Assurance (R&QA) approach developed for the largest wind turbine generator, the Mod 2, is described. The R&QA approach assures that the machine is not hazardous to the public or to operating personnel, can be operated unattended on a utility grid, demonstrates reliable operation, and helps establish the quality assurance and maintainability requirements for future wind turbine projects. The key elements were a failure modes and effects analysis (FMEA) during the design phase, hardware inspections during parts fabrication, and three simple documents to control activities during machine construction and operation.

  9. SLAC modulator system improvements and reliability results

    SciTech Connect

    Donaldson, A.R.

    1998-06-01

    In 1995, an improvement project was completed on the 244 klystron modulators in the linear accelerator. The modulator system has been described previously. This article offers project details and their resulting effect on modulator and component reliability. Prior to the project, the authors had collected four operating cycles (1991 through 1995) of MTTF data. In this discussion, the '91 data are excluded because the modulators then operated at 60 Hz. The five periods following the '91 run were reviewed because they share a common repetition rate of 120 Hz.
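
    MTTF comparisons of the kind discussed here reduce to a simple point estimate: pooled operating hours divided by observed failures. A minimal sketch with hypothetical fleet numbers (the actual SLAC hour and failure counts are not given in this summary):

```python
def mttf(total_operating_hours, failure_count):
    """Point estimate of mean time to failure from pooled fleet data."""
    if failure_count == 0:
        raise ValueError("no failures observed; report a confidence bound instead")
    return total_operating_hours / failure_count

# Hypothetical: 244 modulators, 4,000 hours each in one operating cycle,
# with 61 failures recorded across the fleet.
print(mttf(244 * 4000, 61))  # 16000.0 hours
```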

  10. Developing Ultra Reliable Life Support for the Moon and Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    Recycling life support systems can achieve ultra reliability by using spares to replace failed components. The added mass for spares is approximately equal to the original system mass, provided the original system reliability is not very low. Acceptable reliability can be achieved for the space shuttle and space station by preventive maintenance and by replacing failed units. However, this maintenance and repair depends on a logistics supply chain that provides the needed spares. The Mars mission must take all the needed spares at launch. The Mars mission also must achieve ultra reliability, a very low failure rate per hour, since it requires years rather than weeks and cannot be cut short if a failure occurs. Also, the Mars mission has a much higher mass launch cost per kilogram than shuttle or station. Achieving ultra reliable space life support with acceptable mass will require a well-planned and extensive development effort. Analysis must define the reliability requirement and allocate it to subsystems and components. Technologies, components, and materials must be designed and selected for high reliability. Extensive testing is needed to ascertain very low failure rates. Systems design should segregate the failure causes in the smallest, most easily replaceable parts. The systems must be designed, produced, integrated, and tested without impairing system reliability. Maintenance and failed unit replacement should not introduce any additional probability of failure. The overall system must be tested sufficiently to identify any design errors. A program to develop ultra reliable space life support systems with acceptable mass must start soon if it is to produce timely results for the moon and Mars.
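
    The spares argument can be made concrete with a constant-failure-rate (Poisson) model: the system survives if the number of component failures during the mission does not exceed the spares carried. The failure rate and mission length below are illustrative assumptions, not values from the paper:

```python
import math

def survival_with_spares(failure_rate_per_hour, mission_hours, spares):
    """P(component failures during the mission do not exceed the spares carried),
    assuming failures arrive as a Poisson process (constant failure rate)."""
    lam = failure_rate_per_hour * mission_hours   # expected failures per mission
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(spares + 1))

# Hypothetical: a 1e-5 failures/hour component on a ~2.5-year (22,000 h) mission.
for k in range(4):
    print(k, round(survival_with_spares(1e-5, 22_000, k), 4))
```

With these numbers the expected failures per mission are λt = 0.22, so two spares raise survival probability to about 0.998 and three to better than 0.9999, which illustrates why carrying spares can roughly double system mass when very high reliability is required.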

  11. Identifying a reliable boredom induction.

    PubMed

    Markey, Amanda; Chin, Alycia; Vanepps, Eric M; Loewenstein, George

    2014-08-01

    None of the tasks used to induce boredom have undergone rigorous psychometric validation, which creates potential problems for operational equivalence, comparisons across studies, statistical power, and confounding results. This methodological concern was addressed by testing and comparing the effectiveness of six 5-min. computerized boredom inductions (peg turning, audio, video, signature matching, one-back, and an air traffic control task). The tasks were evaluated using standard criteria for emotion inductions: intensity and discreteness. Intensity, the amount of boredom elicited, was measured using a subset of the Multidimensional State Boredom Scale. Discreteness, the extent to which the task elicited boredom and did not elicit other emotions, was measured using a modification of the Differential Emotion Scale. In both a laboratory setting (Study 1; N = 241) and an online setting with Amazon Mechanical Turk workers (Study 2; N = 416), participants were randomly assigned to one of seven tasks (six boredom tasks or a comparison task, a clip from Planet Earth) before rating their boredom using the MSBS and other emotions using the modified DES. In both studies, each task had significantly higher intensity and discreteness than the comparison task, with moderate to large effect sizes. The peg-turning task outperformed the other tasks in both intensity and discreteness, making it the recommended induction. Identification of reliable and valid boredom inductions and systematic comparison of their relative results should help advance state boredom research.

  12. Reliability prediction of corroding pipelines

    SciTech Connect

    Strutt, J.E.; Allsopp, K.; Newman, D.; Trille, C.

    1996-12-01

    Recent data collection studies relating to loss of containment of pipeline and risers indicate that corrosion is now the dominant failure mode for steel pipelines in the North Sea area. As the North Sea pipeline infrastructure ages, it is expected that the proportion of pipelines failing by corrosion will increase further and this raises the question of the relationship between probability of pipeline corrosion failure and the reliability of the corrosion control and monitoring systems used by operators to prevent corrosion failures. This paper describes a methodology for predicting the probability of corrosion failure of a specific submarine pipeline or riser system. The paper illustrates how the model can be used to predict the safe life of a pipeline, given knowledge of the underlying corrosion behavior and corrosion control system and how the time to failure can be updated in the light of inspection and monitoring results enabling inspection policy to be evaluated for its impact on risk. The paper also shows how different assumptions concerning the underlying cause of failure influences the estimation of the probability of failure.

  14. Wind turbine reliability : a database and analysis approach.

    SciTech Connect

    Linsday, James; Briand, Daniel; Hill, Roger Ray; Stinebaugh, Jennifer A.; Benjamin, Allan S.

    2008-02-01

    The US wind industry has experienced remarkable growth since the turn of the century. At the same time, the physical size and electrical generation capabilities of wind turbines have also grown remarkably. As the market continues to expand, and as wind generation gains a significant share of the generation portfolio, the reliability of wind turbine technology becomes increasingly important. This report addresses how operations and maintenance costs are related to unreliability, that is, to the failures experienced by systems and components. Reliability tools are demonstrated, the data needed to understand and catalog failure events are described, and practical wind turbine reliability models are illustrated, including preliminary results. This report also presents a continuing process for managing industry requirements, needs, and expectations related to Reliability, Availability, Maintainability, and Safety. A simply stated goal of this process is to better understand and improve the operational reliability of wind turbine installations.

  15. Are specialist certification examinations a reliable measure of physician competence?

    PubMed

    Burch, V C; Norman, G R; Schmidt, H G; van der Vleuten, C P M

    2008-11-01

    High stakes postgraduate specialist certification examinations have considerable implications for the future careers of examinees. Medical colleges and professional boards have a social and professional responsibility to ensure their fitness for purpose. To date there is a paucity of published data about the reliability of specialist certification examinations and objective methods for improvement. Such data are needed to improve current assessment practices and sustain the international credibility of specialist certification processes. To determine the component and composite reliability of the Fellowship examination of the College of Physicians of South Africa, and identify strategies for further improvement, generalizability and multivariate generalizability theory were used to estimate the reliability of examination subcomponents and the overall reliability of the composite examination. Decision studies were used to identify strategies for improving the composition of the examination. Reliability coefficients of the component subtests ranged from 0.58 to 0.64. The composite reliability of the examination was 0.72. This could be increased to 0.8 by weighting all test components equally or increasing the number of patient encounters in the clinical component of the examination. Correlations between examination components were high, suggesting that similar parameters of competence were being assessed. This composite certification examination, if equally weighted, achieved an overall reliability sufficient for high stakes examination purposes. Increasing the weighting of the clinical component decreased the reliability. This could be rectified by increasing the number of patient encounters in the examination. Practical ways of achieving this are suggested.
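
    The finding that adding patient encounters raises reliability is an instance of the classical Spearman-Brown relationship between test length and reliability. A minimal sketch (the 0.58 input echoes the lowest subtest reliability reported; the doubling factor is an illustrative assumption):

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability when a test is lengthened by `length_factor`
    (e.g. doubling the number of patient encounters -> factor 2)."""
    r, k = reliability, length_factor
    return k * r / (1 + (k - 1) * r)

# Hypothetical: doubling the encounters behind a 0.58-reliability component.
print(round(spearman_brown(0.58, 2), 3))  # 0.734
```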

  16. Improving reliability of live/dead cell counting through automated image mosaicing.

    PubMed

    Piccinini, Filippo; Tesei, Anna; Paganelli, Giulia; Zoli, Wainer; Bevilacqua, Alessandro

    2014-12-01

    Cell counting is one of the basic needs of most biological experiments. Numerous methods and systems have been studied to improve the reliability of counting. However, at present, manual cell counting performed with a hemocytometer still represents the gold standard, despite several problems that limit the reproducibility and repeatability of the counts and, ultimately, jeopardize their reliability. We present our own approach, based on image processing techniques, to improve counting reliability. It works in two stages: first building a high-resolution image of the hemocytometer's grid, then counting the live and dead cells by tagging the image with flags of different colours. In particular, we introduce GridMos (http://sourceforge.net/p/gridmos), a fully-automated mosaicing method that builds a mosaic representing the whole hemocytometer's grid. In addition to offering more significant statistics, the mosaic "freezes" the culture status, thus permitting analysis by more than one operator. Finally, the resulting mosaic can be tagged using an image editor, markedly improving counting reliability. The experiments performed confirm the improvements brought about by the proposed counting approach in terms of both reproducibility and repeatability, suggesting a mosaic of the entire hemocytometer's grid, labelled through an image editor, as the best candidate for the new gold standard method in cell counting.

  17. Nuclear weapon reliability evaluation methodology

    SciTech Connect

    Wright, D.L.

    1993-06-01

    This document provides an overview of the activities normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon's stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  18. Lithium battery safety and reliability

    NASA Astrophysics Data System (ADS)

    Levy, Samuel C.

    Lithium batteries have been used in a variety of applications for a number of years. As their use continues to grow, particularly in the consumer market, a greater emphasis needs to be placed on safety and reliability. There is a useful technique which can help to design cells and batteries having a greater degree of safety and higher reliability. This technique, known as fault tree analysis, can also be useful in determining the cause of unsafe behavior and poor reliability in existing designs.

  19. Probabilistic Methods for Structural Design and Reliability

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)

    2002-01-01

    This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or in deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
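
    The propagation step described, from scatter in primitive variables to a structural reliability figure, can be sketched as a stress-strength Monte Carlo simulation. The normal distributions and their parameters below are illustrative assumptions, not values from the report:

```python
import random

random.seed(42)

def failure_probability(n=100_000):
    """Monte Carlo propagation of primitive-variable scatter to a structural
    reliability estimate: a trial fails when sampled stress exceeds sampled
    strength. All distributions here are illustrative, not from the report."""
    failures = 0
    for _ in range(n):
        strength = random.gauss(mu=900.0, sigma=50.0)   # material strength, MPa
        stress = random.gauss(mu=650.0, sigma=60.0)     # applied stress, MPa
        if stress >= strength:
            failures += 1
    return failures / n

p = failure_probability()
print(p)
```

For these inputs the analytic answer is Φ(-250/√(50² + 60²)) ≈ 7 × 10⁻⁴, which the simulation should approximate to within sampling noise.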

  20. Optimal Implementations for Reliable Circadian Clocks

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko; Arita, Masanori

    2014-09-01

    Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
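
    The regularity/entrainability trade-off can be illustrated with a toy discrete phase map: each day the clock advances by T/τ cycles plus a phase-response correction at light onset, and a dead zone during subjective day leaves midday light without effect. Everything below (PRC shape, amplitude, periods) is an illustrative assumption, not the authors' model:

```python
import math

TAU, T = 24.5, 24.0              # intrinsic period (h), day length (h)
AMP, DEAD = 0.05, (0.25, 0.75)   # PRC amplitude, dead-zone interval (cycle units)

def prc(theta):
    """Toy phase-response curve (cycle units): delays early, advances late,
    with a dead zone during subjective day, as observed in mammals."""
    if DEAD[0] <= theta < DEAD[1]:
        return 0.0
    return -AMP * math.sin(2 * math.pi * theta)

def entrain(theta=0.0, days=200):
    """Iterate the daily phase map; returns the entrained phase at light onset."""
    for _ in range(days):
        theta = (theta + T / TAU + prc(theta)) % 1.0
    return theta

print(round(entrain(), 3))  # 0.933
```

With an intrinsic period of 24.5 h the clock must gain about 0.02 cycle per day, so the map settles at the late-night phase where the PRC supplies exactly that advance.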

  1. A fourth generation reliability predictor

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Martensen, Anna L.

    1988-01-01

    A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

  2. Reliability centered maintenance in astronomical infrastructure facilities

    NASA Astrophysics Data System (ADS)

    Ansorge, W. R.

    2006-06-01

    Hundreds of mirror segments, thousands of high-precision actuators, highly complex mechanical, hydraulic, electrical and other technology subsystems, and highly sophisticated control systems: an ELT system consists of millions of individual parts and components, each of which may fail and lead to a partial or complete system breakdown. Traditional maintenance concepts, characterized by predefined preventive maintenance activities and rigid schedules, are not suitable for handling this large number of potential failures and malfunctions or the extreme maintenance workload. New maintenance strategies must be found that increase reliability while reducing the cost of needless maintenance services. The Reliability Centred Maintenance (RCM) methodology is already used extensively by airlines, in industrial and marine facilities, and even by scientific institutions such as NASA. Its application increases operational reliability while reducing the cost of unnecessary maintenance activities, and it is certainly also a solution for current and future ELT facilities. RCM is a concept of developing a maintenance scheme based on the reliability of the various components of a system by using "feedback loops between instrument / system performance monitoring and preventive/corrective maintenance cycles." Ideally, RCM is designed into a system and should begin in the requirement definition, preliminary design, and final design phases of new equipment and complicated systems. However, under certain conditions, RCM can also be implemented in the maintenance management strategy of existing astronomical infrastructure facilities. This presentation outlines the principles of the RCM methodology, explains its advantages, and highlights necessary changes in observatory development, operation, and maintenance philosophies. Presently it is the right time to implement RCM into current and future ELT projects and to save up to 50% of maintenance costs.

  3. Confirmatory Factor Analysis of Achieving the Beginning Teacher Standards Inventory

    ERIC Educational Resources Information Center

    Chen, Weiyun

    2009-01-01

    The purpose of this study was to examine the factorial validity and reliability of the "Achieving the NASPE Standards Inventory (ANSI)" that assesses pre-service physical education teachers' perceptions of achieving the National Association for Sport and Physical Education (NASPE) beginning teacher standards (2003). Four hundred fifty-two…

  4. Reliability analysis of a wastewater treatment plant using fault tree analysis and Monte Carlo simulation.

    PubMed

    Taheriyoun, Masoud; Moradinejad, Saber

    2015-01-01

    The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation in the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied at the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated from available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. Mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
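
    The combination of minimal cut sets and Monte Carlo simulation used in the study can be sketched as follows; the basic events, probabilities, and cut sets here are illustrative assumptions, not the study's actual fault tree:

```python
import random

random.seed(0)

# Hypothetical basic events and annual occurrence probabilities.
basic_events = {
    "operator_error": 0.10,
    "aeration_failure": 0.03,
    "clarifier_damage": 0.02,
    "design_flaw": 0.01,
}

# Minimal cut sets: the top event (effluent BOD violation) occurs if every
# basic event in any one cut set occurs.
cut_sets = [
    {"operator_error"},
    {"aeration_failure", "clarifier_damage"},
    {"design_flaw"},
]

def top_event_probability(n=200_000):
    """Monte Carlo estimate of the top-event probability."""
    hits = 0
    for _ in range(n):
        occurred = {e for e, p in basic_events.items() if random.random() < p}
        if any(cs <= occurred for cs in cut_sets):
            hits += 1
    return hits / n

t = top_event_probability()
print(round(t, 3))
```

The same structure also yields the rare-event approximation analytically: summing cut-set probabilities gives 0.1 + 0.03 × 0.02 + 0.01 ≈ 0.11, which the simulation should reproduce.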

  5. Mass and Reliability System (MaRS)

    NASA Technical Reports Server (NTRS)

    Barnes, Sarah

    2016-01-01

    The Safety and Mission Assurance (S&MA) Directorate is responsible for mitigating risk, providing system safety, and lowering risk for space programs from ground to space. S&MA is divided into four divisions: the Space Exploration Division (NC), the International Space Station Division (NE), the Safety & Test Operations Division (NS), and the Quality and Flight Equipment Division (NT). The interns, myself and Arun Aruljothi, will be working with the Risk & Reliability Analysis Branch under the NC Division. The mission of this division is to identify, characterize, diminish, and communicate risk by implementing an efficient and effective assurance model. The team utilizes Reliability and Maintainability (R&M) and Probabilistic Risk Assessment (PRA) to ensure that decisions concerning risks are informed, vehicles are safe and reliable, and program/project requirements are realistic and realized. This project pertains to the Orion mission, so it is geared toward long-duration Human Space Flight programs. For space missions, payload is a critical concept; balancing what hardware can be replaced as components versus as Orbital Replacement Units (ORUs) or subassemblies is key. For this effort a database was created that combines mass and reliability data, called the Mass and Reliability System, or MaRS. International Space Station (ISS) components are used as reference parts in the MaRS database. Using ISS components as a platform is beneficial because of their historical context and the environmental similarities to a space flight mission. MaRS combines several systems: the International Space Station PART for failure data, the Vehicle Master Database (VMDB) for ORUs and components, the Maintenance & Analysis Data Set (MADS) for operating hours and other pertinent data, and the Hardware History Retrieval System (HHRS) for unit weights. MaRS is populated using a Visual Basic application. Once populated, the Excel spreadsheet comprises information on ISS components including

  6. Design-for-reliability (DfR) of aerospace electronics: Attributes and challenges

    NASA Astrophysics Data System (ADS)

    Bensoussan, A.; Suhir, E.

    The next generation of multi-beam satellite systems, which will be able to provide effective interactive communication services, will have to operate within a highly flexible architecture. One option for achieving such flexibility is to employ microwave and/or optoelectronic components and to make them reliable. The use of optoelectronic devices, equipment, and systems will indeed yield a significant improvement in the state of the art only if the new designs offer a novel and effective architecture combining good functional performance, satisfactory mechanical (structural) reliability, and high cost effectiveness. The obvious challenge is the ability to design and fabricate equipment based on EEE components that can successfully withstand harsh space environments for the entire duration of the mission. It is imperative that the major players in the space industry, such as manufacturers, industrial users, and space agencies, understand the importance and the limits of the achievable quality and reliability of optoelectronic devices operated in harsh environments. It is equally imperative that the physics of possible failures is well understood and, where necessary, mitigated, and that adequate quality standards are developed and employed. The space community has to identify and develop a strategic approach for validating optoelectronic products, with consideration of the numerous intrinsic and extrinsic requirements for system performance. When considering a particular next-generation optoelectronic space system, the space community needs to address three major issues: proof of concept, proof of reliability, and proof of performance, taking into account the specifics of the anticipated application. 
High operational reliability cannot be left to the prognostics and health monitoring/management (PHM) effort and stage, no matter how important and

  7. Space reliability technology - A historical perspective

    NASA Technical Reports Server (NTRS)

    Cohen, H.

    1984-01-01

    The progressive improvements in the reliability of launch vehicles are traced from the Vanguard rocket to the STS. The Vanguard, built with minimal redundancy and a high mass ratio, was used as an operational vehicle midway through its test program in an attempt to meet the perceived challenge represented by Sputnik. The fourth Vanguard failed due to inadequate contamination prevention and lack of inspection ports. Automatic firing sequences were adopted for the Titan rockets, which were an order of magnitude larger than the Vanguard and therefore had room for interior inspections. Qualification testing and reporting were introduced for components, along with X-ray inspection of fuel tank welds. Dual systems were added for flight-critical components when the Titan became man-rated for the Gemini program. Designs incorporated full failure mode effects and criticality analyses for the Apollo program, which exposed the limits of applicability of numerical reliability models. Fault tree analyses and program milestone reviews were initiated. The worth of man-in-the-loop in space activities for reliability was demonstrated with the rescue of Skylab after solar panel and meteoroid shield failures. It is now the reliability of the payload, rather than the vehicle, that is questioned for Shuttle launches.

  8. Development of brain systems for nonsymbolic numerosity and the relationship to formal math academic achievement

    PubMed Central

    Haist, Frank; Wazny, Jarnet H.; Toomarian, Elizabeth; Adamo, Maha

    2015-01-01

    A central question in cognitive and educational neuroscience is whether brain operations supporting non-linguistic intuitive number sense (numerosity) predict individual acquisition and academic achievement for symbolic or “formal” math knowledge. Here, we conducted a developmental functional MRI study of nonsymbolic numerosity task performance in 44 participants including 14 school age children (6–12 years-old), 14 adolescents (13–17 years-old), and 16 adults and compared a brain activity measure of numerosity precision to scores from the Woodcock-Johnson III Broad Math index of math academic achievement. Accuracy and reaction time from the numerosity task did not reliably predict formal math achievement. We found a significant positive developmental trend for improved numerosity precision in the parietal cortex and intraparietal sulcus (IPS) specifically. Controlling for age and overall cognitive ability, we found a reliable positive relationship between individual math achievement scores and parietal lobe activity only in children. In addition, children showed robust positive relationships between math achievement and numerosity precision within ventral stream processing areas bilaterally. The pattern of results suggests a dynamic developmental trajectory for visual discrimination strategies that predict the acquisition of formal math knowledge. In adults, the efficiency of visual discrimination marked by numerosity acuity in ventral occipital-temporal cortex and hippocampus differentiated individuals with better or worse formal math achievement, respectively. Overall, these results suggest that two different brain systems for nonsymbolic numerosity acuity may contribute to individual differences in math achievement and that the contribution of these systems differs across development. PMID:25327879

  9. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise.
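The N+1 parity scheme highlighted above can be sketched in a few lines: the parity block is the bytewise XOR of the N data blocks, so any single lost block can be rebuilt from the survivors. Function names and block contents here are our own illustration, not taken from the tutorial.

```python
# Sketch of N+1 parity as used in redundant disk arrays: the parity
# block is the bytewise XOR of the N data blocks, so any one failed
# block is recoverable from the remaining blocks plus parity.

def compute_parity(blocks):
    """XOR all blocks together, byte by byte, to form the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_blocks, parity):
    """Recover the single missing data block from survivors plus parity."""
    return compute_parity(list(surviving_blocks) + [parity])

data = [b"disk0data", b"disk1data", b"disk2data"]
parity = compute_parity(data)
# Simulate losing disk 1 and rebuilding its contents:
recovered = reconstruct([data[0], data[2]], parity)
assert recovered == data[1]
```

Because XOR is its own inverse, XOR-ing the surviving blocks with the parity block yields exactly the missing block, which is why a single redundant disk suffices for single-failure tolerance.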

  10. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  11. Survey of Software Assurance Techniques for Highly Reliable Systems

    NASA Technical Reports Server (NTRS)

    Nelson, Stacy

    2004-01-01

    This document provides a survey of software assurance techniques for highly reliable systems including a discussion of relevant safety standards for various industries in the United States and Europe, as well as examples of methods used during software development projects. It contains one section for each industry surveyed: Aerospace, Defense, Nuclear Power, Medical Devices and Transportation. Each section provides an overview of applicable standards and examples of a mission or software development project, software assurance techniques used and reliability achieved.

  12. Reliability and coverage analysis of non-repairable fault-tolerant memory systems

    NASA Technical Reports Server (NTRS)

    Cox, G. W.; Carroll, B. D.

    1976-01-01

    A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
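The comparison the abstract draws between modularized sparing and massive replication can be illustrated with standard reliability formulas. This is our own sketch; the paper's actual equation sets are more detailed and also model coverage.

```python
# Illustrative reliability formulas for the fault-tolerance schemes
# compared above; r is the reliability of one subunit over the mission.

from math import comb

def series(r, n):
    """All n subunits must survive (no redundancy)."""
    return r ** n

def massive_replication(r, n):
    """n full replicas; the system survives if any one replica does."""
    return 1.0 - (1.0 - r) ** n

def modular_sparing(r, n, s):
    """n active modules plus s spares: at least n of n+s must survive."""
    total = n + s
    return sum(comb(total, k) * r**k * (1 - r)**(total - k)
               for k in range(n, total + 1))

r = 0.99
print(series(r, 10))              # ~0.904: ten subunits in series
print(modular_sparing(r, 10, 2))  # ~0.9998: two spares recover most of the loss
```

Adding two spare modules costs far less than triplicating the whole system, which mirrors the paper's finding that combinations of sparing and coding can exceed the gain from massive replication at lower cost.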

  13. Reliability mechanisms in distributed data base systems

    SciTech Connect

    Son, S.H.

    1986-01-01

Distributed data base systems operate in computer networking environments where component failures are inevitable during normal operation. Failures not only threaten normal operation of the system, but they may also destroy the correctness of the data base by direct damage to the storage subsystem. In order to cope with these failures, distributed data base systems must provide reliability mechanisms that maintain system consistency. There are two major parts in this dissertation. In the first part, mechanisms are presented for recovery management in distributed data base systems. The recovery management of a distributed data base system consists of two parts: the preparation for recovery by saving necessary information during normal operation of the data base system, and the coordination of the actual recovery in order to avoid possible inconsistency after the recovery. The preparation for recovery is done through checkpointing and logging. A new scheme is proposed for reconstruction of the data base in distributed environments. In the second part, a token-based resiliency control scheme for replicated distributed data base systems is presented. The proposed control scheme increases the reliability as well as the degree of concurrency while maintaining the consistency of the system.

  14. Iowa Women of Achievement.

    ERIC Educational Resources Information Center

    Ohrn, Deborah Gore, Ed.

    1993-01-01

    This issue of the Goldfinch highlights some of Iowa's 20th century women of achievement. These women have devoted their lives to working for human rights, education, equality, and individual rights. They come from the worlds of politics, art, music, education, sports, business, entertainment, and social work. They represent Native Americans,…

  15. Achieving Peace through Education.

    ERIC Educational Resources Information Center

    Clarken, Rodney H.

    While it is generally agreed that peace is desirable, there are barriers to achieving a peaceful world. These barriers are classified into three major areas: (1) an erroneous view of human nature; (2) injustice; and (3) fear of world unity. In a discussion of these barriers, it is noted that although the consciousness and conscience of the world…

  16. Increasing Male Academic Achievement

    ERIC Educational Resources Information Center

    Jackson, Barbara Talbert

    2008-01-01

The No Child Left Behind legislation has brought greater attention to the academic performance of American youth. Its emphasis on student achievement requires a closer analysis of assessment data by school districts. To address the findings, educators must seek strategies to remedy failing results. In a mid-Atlantic district of the United States,…

  17. Achievements or Disasters?

    ERIC Educational Resources Information Center

    Goodwin, MacArthur

    2000-01-01

    Focuses on policy issues that have affected arts education in the twentieth century, such as: interest in discipline-based arts education, influence of national arts associations, and national standards and coordinated assessment. States that whether the policy decisions are viewed as achievements or disasters are for future determination. (CMK)

  18. Achieving True Consensus.

    ERIC Educational Resources Information Center

    Napier, Rod; Sanaghan, Patrick

    2002-01-01

    Uses the example of Vermont's Middlebury College to explore the challenges and possibilities of achieving consensus about institutional change. Discusses why, unlike in this example, consensus usually fails, and presents four demands of an effective consensus process. Includes a list of "test" questions on successful collaboration. (EV)

  19. School Students' Science Achievement

    ERIC Educational Resources Information Center

    Shymansky, James; Wang, Tzu-Ling; Annetta, Leonard; Everett, Susan; Yore, Larry D.

    2013-01-01

    This paper is a report of the impact of an externally funded, multiyear systemic reform project on students' science achievement on a modified version of the Third International Mathematics and Science Study (TIMSS) test in 33 small, rural school districts in two Midwest states. The systemic reform effort utilized a cascading leadership strategy…

  20. Essays on Educational Achievement

    ERIC Educational Resources Information Center

    Ampaabeng, Samuel Kofi

    2013-01-01

    This dissertation examines the determinants of student outcomes--achievement, attainment, occupational choices and earnings--in three different contexts. The first two chapters focus on Ghana while the final chapter focuses on the US state of Massachusetts. In the first chapter, I exploit the incidence of famine and malnutrition that resulted to…

  1. Assessing Handwriting Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Teachers in the school setting need to emphasize quality handwriting across the curriculum. Quality handwriting means that the written content is easy to read in either manuscript or cursive form. Handwriting achievement can be assessed, but not compared to the precision of assessing basic addition, subtraction, multiplication, and division facts.…

  2. Intelligence and Educational Achievement

    ERIC Educational Resources Information Center

    Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres

    2007-01-01

    This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g"from CAT2E) and a latent trait of educational…

  3. Explorations in achievement motivation

    NASA Technical Reports Server (NTRS)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  4. NCLB: Achievement Robin Hood?

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    2008-01-01

In his "Wall Street Journal" op-ed on the 25th anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the potential "Robin Hood…

  5. Achieving All Our Ambitions

    ERIC Educational Resources Information Center

    Hartley, Tricia

    2009-01-01

    National learning and skills policy aims both to build economic prosperity and to achieve social justice. Participation in higher education (HE) has the potential to contribute substantially to both aims. That is why the Campaign for Learning has supported the ambition to increase the proportion of the working-age population with a Level 4…

  6. INTELLIGENCE, PERSONALITY AND ACHIEVEMENT.

    ERIC Educational Resources Information Center

    MUIR, R.C.; AND OTHERS

    A LONGITUDINAL DEVELOPMENTAL STUDY OF A GROUP OF MIDDLE CLASS CHILDREN IS DESCRIBED, WITH EMPHASIS ON A SEGMENT OF THE RESEARCH INVESTIGATING THE RELATIONSHIP OF ACHIEVEMENT, INTELLIGENCE, AND EMOTIONAL DISTURBANCE. THE SUBJECTS WERE 105 CHILDREN AGED FIVE TO 6.3 ATTENDING TWO SCHOOLS IN MONTREAL. EACH CHILD WAS ASSESSED IN THE AREAS OF…

  7. SALT and Spelling Achievement.

    ERIC Educational Resources Information Center

    Nelson, Joan

    A study investigated the effects of suggestopedic accelerative learning and teaching (SALT) on the spelling achievement, attitudes toward school, and memory skills of fourth-grade students. Subjects were 20 male and 28 female students from two self-contained classrooms at Kennedy Elementary School in Rexburg, Idaho. The control classroom and the…

  8. Appraising Reading Achievement.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    To determine quality sequence in pupil progress, evaluation approaches need to be used which guide the teacher to assist learners to attain optimally. Teachers must use a variety of procedures to appraise student achievement in reading, because no one approach is adequate. Appraisal approaches might include: (1) observation and subsequent…

  9. Reliability history of the Apollo guidance computer

    NASA Technical Reports Server (NTRS)

    Hall, E. C.

    1972-01-01

    The Apollo guidance computer was designed to provide the computation necessary for guidance, navigation and control of the command module and the lunar landing module of the Apollo spacecraft. The computer was designed using the technology of the early 1960's and the production was completed by 1969. During the development, production, and operational phase of the program, the computer has accumulated a very interesting history which is valuable for evaluating the technology, production methods, system integration, and the reliability of the hardware. The operational experience in the Apollo guidance systems includes 17 computers which flew missions and another 26 flight type computers which are still in various phases of prelaunch activity including storage, system checkout, prelaunch spacecraft checkout, etc. These computers were manufactured and maintained under very strict quality control procedures with requirements for reporting and analyzing all indications of failure. Probably no other computer or electronic equipment with equivalent complexity has been as well documented and monitored. Since it has demonstrated a unique reliability history, it is important to evaluate the techniques and methods which have contributed to the high reliability of this computer.

  10. 75 FR 80391 - Electric Reliability Organization Interpretations of Interconnection Reliability Operations and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

..., NOPR, Docket No. RM10-15-000, 75 FR 71613 (Nov. 24, 2010), 133 FERC ¶ 61,151, at P 65 (2010... Electrical and Electronics Engineers, Inc. (IEEE) definition of degraded as "the inability of an item to... request for interpretation at 4-5 (citing full IEEE definitions of degraded: "A failure that is...

  11. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

Lange, Kevin E.; Anderson, Molly S.

    2012-01-01

    Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
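The redundancy-versus-mass tradeoff described above can be sketched with a toy model: each spare raises the reliability of a life support function but also raises equivalent system mass (ESM). The numbers and the linear ESM model are illustrative assumptions, not the study's ISS-based data.

```python
# Illustrative tradeoff between reliability and equivalent system mass
# (ESM) when spares are added for a single life support component.

def reliability_with_spares(r, spares):
    """One active unit plus `spares` identical spares; the function
    survives if at least one of the 1 + spares units works."""
    return 1.0 - (1.0 - r) ** (1 + spares)

def esm(unit_mass, spares):
    """ESM grows with each spare carried (simple linear model)."""
    return unit_mass * (1 + spares)

r_unit, mass = 0.95, 120.0  # assumed one-year reliability and mass (kg)
for s in range(4):
    print(s, round(reliability_with_spares(r_unit, s), 6), esm(mass, s))
```

Each added spare multiplies the residual failure probability by another factor of (1 - r) while adding a full unit mass, so reliability gains saturate as ESM keeps climbing, which is the tension the abstract describes for deep-space missions.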

  12. Issues in Modeling System Reliability

    NASA Astrophysics Data System (ADS)

    Cruse, Thomas A.; Annis, Chuck; Booker, Jane; Robinson, David; Sues, Rob

    2002-10-01

    This paper discusses various issues in modeling system reliability. The topics include: 1) Statistical formalisms versus pragmatic numerics; 2) Language; 3) Statistical methods versus reliability-based design methods; 4) Professional bias; and 5) Real issues that need to be identified and resolved prior to certifying designs. This paper is in viewgraph form.

  13. Avionics design for reliability bibliography

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A bibliography with abstracts was presented in support of AGARD lecture series No. 81. The following areas were covered: (1) program management, (2) design for high reliability, (3) selection of components and parts, (4) environment consideration, (5) reliable packaging, (6) life cycle cost, and (7) case histories.

  14. Reliable Wireless Data Acquisition and Control Techniques within Nuclear Hot Cell Facilities

    SciTech Connect

    Kurtz, J.L.; Tulenko, J.

    2000-09-20

On this NEER project the University of Florida has investigated and applied advanced communications techniques to address data acquisition and control problems within the Fuel Conditioning Facility (FCF) of Argonne National Laboratory-West (ANL-W) in Idaho Falls. The goals of this project have been to investigate and apply wireless communications techniques to solve the problem of communicating with and controlling equipment and systems within a nuclear hot cell facility with its attendant high radiation levels. Different wireless techniques, including radio frequency, infrared, and power line communications, were reviewed. For each technique, the challenges of radiation-hardened implementation were addressed. In addition, it has been a project goal to achieve the highest level of system reliability to ensure safe nuclear operations. Achievement of these goals would allow the eventual elimination of through-the-wall, hardwired cabling that is currently employed in the hot cell, along with all of the attendant problems that limit measurement mobility and flexibility.

  15. Photovoltaic performance and reliability workshop

    SciTech Connect

    Mrig, L.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  16. CERTS: Consortium for Electric Reliability Technology Solutions - Research Highlights

    SciTech Connect

    Eto, Joseph

    2003-07-30

    Historically, the U.S. electric power industry was vertically integrated, and utilities were responsible for system planning, operations, and reliability management. As the nation moves to a competitive market structure, these functions have been disaggregated, and no single entity is responsible for reliability management. As a result, new tools, technologies, systems, and management processes are needed to manage the reliability of the electricity grid. However, a number of simultaneous trends prevent electricity market participants from pursuing development of these reliability tools: utilities are preoccupied with restructuring their businesses, research funding has declined, and the formation of Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs) to operate the grid means that control of transmission assets is separate from ownership of these assets; at the same time, business uncertainty, and changing regulatory policies have created a climate in which needed investment for transmission infrastructure and tools for reliability management has dried up. To address the resulting emerging gaps in reliability R&D, CERTS has undertaken much-needed public interest research on reliability technologies for the electricity grid. CERTS' vision is to: (1) Transform the electricity grid into an intelligent network that can sense and respond automatically to changing flows of power and emerging problems; (2) Enhance reliability management through market mechanisms, including transparency of real-time information on the status of the grid; (3) Empower customers to manage their energy use and reliability needs in response to real-time market price signals; and (4) Seamlessly integrate distributed technologies--including those for generation, storage, controls, and communications--to support the reliability needs of both the grid and individual customers.

  17. A Vision for Spaceflight Reliability: NASA's Objectives Based Strategy

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Evans, John; Hall, Tony

    2015-01-01

In defining the direction for a new Reliability and Maintainability standard, OSMA has extracted the essential objectives that our programs need to undertake a reliable mission. These objectives have been structured to lead mission planning through construction of an objective hierarchy, which defines the critical approaches for achieving high reliability and maintainability (R&M). Creating a hierarchy, as a basis for assurance implementation, is a proven approach; yet, it holds the opportunity to enable new directions, as NASA moves forward in tackling the challenges of space exploration.

  18. A forward view on reliable computers for flight control

    NASA Technical Reports Server (NTRS)

    Goldberg, J.; Wensley, J. H.

    1976-01-01

    The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.

  19. SEASAT economic assessment. Volume 10: The SATIL 2 program (a program for the evaluation of the costs of an operational SEASAT system as a function of operational requirements and reliability). [computer programs for economic analysis and systems analysis of SEASAT satellite systems]

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The SATIL 2 computer program was developed to assist with the programmatic evaluation of alternative approaches to establishing and maintaining a specified mix of operational sensors on spacecraft in an operational SEASAT system. The program computes the probability distributions of events (i.e., number of launch attempts, number of spacecraft purchased, etc.), annual recurring cost, and present value of recurring cost. This is accomplished for the specific task of placing a desired mix of sensors in orbit in an optimal fashion in order to satisfy a specified sensor demand function. Flow charts are shown, and printouts of the programs are given.

  20. High Reliability R-10 Windows Using Vacuum Insulating Glass Units

    SciTech Connect

    Stark, David

    2012-08-16

The objective of this effort was for EverSealed Windows (“EverSealed” or “ESW”) to design, assemble, thermally and environmentally test and demonstrate a Vacuum Insulating Glass Unit (“VIGU” or “VIG”) that would enable a whole window to meet or exceed an R-10 insulating value (U-factor ≤ 0.1). To produce a VIGU that could withstand any North American environment, ESW believed it needed to design, produce and use a flexible edge seal system. This is because a rigid edge seal, used by all other known VIG producers and developers, limits the size and/or thermal environment of the VIG to the point where the unit is not practical for typical IG sizes and cannot withstand severe outdoor environments. The rigid-sealed VIG’s use would be limited to mild climates where it would not have a reasonable economic payback when compared to traditional double-pane or triple-pane IGs. ESW’s goals, in addition to achieving a sufficiently high R-value to enable a whole window to achieve R-10, included creating a VIG design that could be produced for a cost equal to or lower than a traditional triple-pane IG (low-e, argon filled). ESW achieved these goals. EverSealed produced, tested and demonstrated a flexible edge-seal VIG that had an R-13 insulating value and the edge-seal system durability to operate reliably for at least 40 years in the harshest climates of North America.

  1. Achieving Energy Efficiency Through Real-Time Feedback

    SciTech Connect

    Nesse, Ronald J.

    2011-09-01

Through the careful implementation of simple behavior change measures, opportunities exist to achieve strategic gains, including greater operational efficiencies, energy cost savings, greater tenant health and ensuing productivity, and improved brand value through sustainability messaging and achievement.

  2. Reliability Estimation for Double Containment Piping

    SciTech Connect

    L. Cadwallader; T. Pinna

    2012-08-01

    Double walled or double containment piping is considered for use in the ITER international project and other next-generation fusion device designs to provide an extra barrier for tritium gas and other radioactive materials. The extra barrier improves confinement of these materials and enhances safety of the facility. This paper describes some of the design challenges in designing double containment piping systems. There is also a brief review of a few operating experiences of double walled piping used with hazardous chemicals in different industries. This paper recommends approaches for the reliability analyst to use to quantify leakage from a double containment piping system in conceptual and more advanced designs. The paper also cites quantitative data that can be used to support such reliability analyses.

  3. Reliability Analysis of Brittle, Thin Walled Structures

    SciTech Connect

    Jonathan A Salem and Lynn Powers

    2007-02-09

One emerging application for ceramics is diesel particulate filters being used in order to meet EPA regulations going into effect in 2008. Diesel particulates are known to be carcinogenic and thus need to be minimized. Current systems use filters made from ceramics such as mullite and cordierite. The filters are brittle and must operate at very high temperatures during a burn-out cycle used to remove the soot buildup. Thus the filters are subjected to thermal shock stresses, and lifetime reliability analysis is required. NASA GRC has developed reliability-based design methods and test methods for such applications, such as CARES/Life and American Society for Testing and Materials (ASTM) C1499 “Standard Test Method for Equibiaxial Strength of Ceramics.”

  4. Understanding biological computation: reliable learning and recognition.

    PubMed Central

    Hogg, T; Huberman, B A

    1984-01-01

    We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation. PMID:6593731
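The attractor mechanism described above can be illustrated with a minimal Hopfield-style network (our own sketch, not the paper's experimental arrays): stored patterns become attractive fixed points, so a corrupted input relaxes to the nearest stored memory, giving the fuzzy recognition and self-repair behavior the abstract describes.

```python
# Minimal Hopfield-style attractor network: stored +/-1 patterns are
# fixed points of the update dynamics, so noisy inputs relax to them.

import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

# Hebbian weights; zero the diagonal so units do not self-excite.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Synchronously update until the state stops changing (a fixed point)."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s.astype(int)

noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # pattern 0 with one flipped bit
print(recall(noisy))  # relaxes back to the first stored pattern
```

The corrupted bit is repaired by the collective dynamics rather than by redundant hardware, which is the contrast with conventional reliable computation that the abstract draws.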

  5. Practical Issues in Implementing Software Reliability Measurement

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.

    1999-01-01

    Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.

  6. The determination of measures of software reliability

    NASA Technical Reports Server (NTRS)

    Maxwell, F. D.; Corn, B. C.

    1978-01-01

    Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. The failure ratio and failure rate were found to be consistent measures. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the individual run submission rather than with the code proper. Possible application of these findings for line management, project managers, functional management, and regulatory agencies is discussed. Steps for simplifying the measurement process and for use of these data in predicting operational software reliability are outlined.
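Assuming the conventional definitions (the abstract does not give its formulas), the two measures it calls consistent can be computed per test interval and charted as the trend lines it mentions. The interval data below is invented for illustration.

```python
# Illustrative computation of failure ratio (failures per run) and
# failure rate (failures per unit of test time) over successive
# test intervals; a falling trend indicates a maturing system.

def failure_ratio(failures, runs):
    return failures / runs

def failure_rate(failures, cpu_hours):
    return failures / cpu_hours

# (failures, runs, cpu_hours) per interval -- invented example data.
history = [(12, 40, 8.0), (7, 45, 9.5), (3, 50, 10.0)]
for fails, runs, hours in history:
    print(round(failure_ratio(fails, runs), 3),
          round(failure_rate(fails, hours), 3))
```

Plotting these per module as well as for the job as a whole gives the visualization of progress the abstract describes, and extrapolating the trend supports the prediction of operational reliability.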

  7. Project ACHIEVE final report

    SciTech Connect

    1997-06-13

    Project ACHIEVE was a math/science academic enhancement program aimed at first year high school Hispanic American students. Four high schools -- two in El Paso, Texas and two in Bakersfield, California -- participated in this Department of Energy-funded program during the spring and summer of 1996. Over 50 students, many of whom felt they were facing a nightmare future, were given the opportunity to work closely with personal computers and software, sophisticated calculators, and computer-based laboratories -- an experience which their regular academic curriculum did not provide. Math and science projects, exercises, and experiments were completed that emphasized independent and creative applications of scientific and mathematical theories to real world problems. The most important outcome was the exposure Project ACHIEVE provided to students concerning the college and technical-field career possibilities available to them.

  8. Learning reliable manipulation strategies without initial physical models

    NASA Technical Reports Server (NTRS)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.
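The core idea of identifying reliable actions and preferring them can be sketched with a simple empirical success model. The action names, trial outcomes, and Laplace-smoothed estimator below are illustrative assumptions standing in for the paper's learned action model.

```python
# Sketch: learning per-action reliability from experience and preferring
# the most reliable action. Action names and outcomes are hypothetical.

from collections import defaultdict

class ActionModel:
    def __init__(self):
        self.successes = defaultdict(int)
        self.trials = defaultdict(int)

    def record(self, action, succeeded):
        self.trials[action] += 1
        if succeeded:
            self.successes[action] += 1

    def reliability(self, action):
        # Laplace smoothing: untried actions start at 0.5, neither
        # trusted nor ruled out.
        return (self.successes[action] + 1) / (self.trials[action] + 2)

    def best_action(self, candidates):
        return max(candidates, key=self.reliability)

model = ActionModel()
for outcome in [True, True, True, False]:
    model.record("push-from-edge", outcome)
for outcome in [True, False, False]:
    model.record("tilt-and-slide", outcome)

print(model.best_action(["push-from-edge", "tilt-and-slide"]))  # push-from-edge
```

Selecting training actions carefully, as the abstract notes, would correspond here to trying actions whose reliability estimates are still uncertain.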

  9. Calculating system reliability with SRFYDO

    SciTech Connect

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
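The series-system roll-up at the heart of such an analysis is simple to state: the system works only if every component works. The constant-hazard aging model and rates below are an illustrative stand-in, not SRFYDO's actual Bayesian model.

```python
# Sketch: series-system reliability as a function of age, in the spirit of
# SRFYDO's component-to-system roll-up. The exponential aging model and
# the failure rates are illustrative assumptions.

import math

def component_reliability(rate_per_year, age_years):
    """Survival probability under a constant-hazard (exponential) model."""
    return math.exp(-rate_per_year * age_years)

def series_reliability(rates, age_years):
    """A series system works only if every component works."""
    r = 1.0
    for rate in rates:
        r *= component_reliability(rate, age_years)
    return r

rates = [0.01, 0.02, 0.005]   # hypothetical per-component failure rates
print(f"{series_reliability(rates, age_years=5):.3f}")
```

A Bayesian treatment like SRFYDO's would replace the point-valued rates with posterior distributions, giving the uncertainty estimates the abstract mentions.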

  10. Reliability improvements in tunable Pb1-xSnxSe diode lasers

    NASA Technical Reports Server (NTRS)

    Linden, K. J.; Butler, J. F.; Nill, K. W.; Reeder, R. E.

    1980-01-01

    Recent developments in the technology of Pb-salt diode lasers which have led to significant improvements in reliability and lifetime, and to improved operation at very long wavelengths, are described. A combination of packaging and contacting-metallurgy improvements has led to diode lasers that are stable both in terms of temperature cycling and shelf-storage time. Lasers cycled over 500 times between 77 K and 300 K have exhibited no measurable changes in either electrical contact resistance or threshold current. Utilizing a metallurgical contacting process, both lasers and experimental n-type and p-type bulk materials are shown to have electrical contact resistance values that are stable for shelf storage periods well in excess of one year. Problems and experiments which have led to devices with improved performance stability are discussed. Stable device configurations achieved for material compositions yielding lasers which operate continuously at wavelengths as long as 30.3 micrometers are described.

  11. High beam current operation of a PETtrace™ cyclotron for 18F- production.

    PubMed

    Eberl, S; Eriksson, T; Svedberg, O; Norling, J; Henderson, D; Lam, P; Fulham, M

    2012-06-01

    Upgrades and optimisation achieved 160 μA total target current operation of a GE PETtrace cyclotron in dual target mode for the routine production of [(18)F]FDG for >2 years. Approximately 900 GBq of (18)F(-) and >500 GBq of [(18)F]FDG can be produced routinely in a single production run, meeting the routine [(18)F]FDG requirements of our customer base and achieving economies of scale. Production of >1 TBq of (18)F(-) in a single run was achieved. Reliability, saturation yields, and synthesis yields were not adversely affected.
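The production figures follow the standard saturation relation A = I · Y_sat · (1 − e^(−λt)). The saturation yield used below is a hypothetical value for illustration, not the PETtrace figure from the paper; only the 160 μA current is taken from the abstract.

```python
# Sketch: end-of-bombardment activity from the saturation-yield relation
# A = I * Y_sat * (1 - exp(-lambda * t)). The saturation yield here is a
# hypothetical value, not the paper's measured figure.

import math

F18_HALF_LIFE_MIN = 109.77  # fluorine-18 half-life in minutes

def end_of_bombardment_activity(current_uA, sat_yield_GBq_per_uA, minutes):
    lam = math.log(2) / F18_HALF_LIFE_MIN
    return current_uA * sat_yield_GBq_per_uA * (1 - math.exp(-lam * minutes))

# 160 uA total current (as in the abstract), assumed saturation yield:
print(f"{end_of_bombardment_activity(160, 8.0, 120):.0f} GBq")
```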

  12. Aerospace reliability applied to biomedicine.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  13. Reliability analysis of interdependent lattices

    NASA Astrophysics Data System (ADS)

    Limiao, Zhang; Daqing, Li; Pengju, Qin; Bowen, Fu; Yinan, Jiang; Zio, Enrico; Rui, Kang

    2016-06-01

    Network reliability analysis has drawn much attention recently due to the risks of catastrophic damage in networked infrastructures. These infrastructures are dependent on each other as a result of various interactions. However, most of the reliability analyses of these interdependent networks do not consider spatial constraints, which are found important for robustness of infrastructures including power grid and transport systems. Here we study the reliability properties of interdependent lattices with different ranges of spatial constraints. Our study shows that interdependent lattices with strong spatial constraints are more resilient than interdependent Erdős-Rényi networks. There exists an intermediate range of spatial constraints, at which the interdependent lattices have minimal resilience.
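A minimal single-lattice robustness measure underlies such studies: the fraction of surviving nodes in the largest connected cluster after random failures. The full paper couples two such lattices through interdependency links; the single-lattice version below is an illustrative simplification.

```python
# Sketch: largest-cluster fraction of a square lattice after random node
# failures -- a basic robustness measure. The interdependent (coupled)
# case studied in the paper is not modeled here.

import random

def largest_cluster_fraction(n, fail_prob, seed=0):
    rng = random.Random(seed)
    alive = {(i, j) for i in range(n) for j in range(n)
             if rng.random() > fail_prob}
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        # Depth-first flood fill over the 4-neighbor lattice.
        stack, size = [start], 0
        seen.add(start)
        while stack:
            i, j = stack.pop()
            size += 1
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best / (n * n)

print(f"{largest_cluster_fraction(50, fail_prob=0.2):.2f}")
```

Sweeping `fail_prob` and watching this fraction collapse is the usual way to locate the percolation threshold that such resilience results are stated in terms of.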

  14. Integrating reliability analysis and design

    SciTech Connect

    Rasmuson, D. M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentation of systems, analysis of fault trees, and evaluation of solutions to them. Results must be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in design analysis. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems.

  15. (Centralized Reliability Data Organization (CREDO))

    SciTech Connect

    Haire, M J

    1987-04-21

    One of the primary goals of the Centralized Reliability Data Organization (CREDO) is to be an international focal point for the collection, analysis, and dissemination of liquid metal reactor (LMR) component reliability, availability, and maintainability (RAM) data. During FY-1985, the Department of Energy (DOE) entered into a Specific Memorandum of Agreement (SMA) with Japan's Power Reactor and Nuclear Fuel Development Corporation (PNC) regarding cooperative data exchange efforts. This agreement was CREDO's first step toward internationalization and represented an initial realization of the previously mentioned goal. DOE's interest in further internationalization of the CREDO system was the primary motivation for the traveler's attendance at the Reliability '87 conference.

  16. Magnetic tape recorder for long operating life in space.

    NASA Technical Reports Server (NTRS)

    Bahm, E. J.; Hoffman, J. K.

    1971-01-01

    Magnetic tape recorders have long been used on satellites and spacecraft for onboard storage of large quantities of data. As satellites enter into commercial service, long operating life at high reliability becomes important. Also, the presently planned long-duration space flights to the outer planets require long-life tape recorders. Past satellite tape recorders have achieved a less than satisfactory performance record and the operating life of other spacecraft tape recorders has been relatively short and unpredictable. Most failures have resulted from malfunctions of the mechanical tape transport. Recent advances in electric motors and static memories have allowed the development of a new tape recorder which uses a very simple tape transport with few possible failure modes. It consists only of two brushless dc motors, two tape guides, and the recording heads. Relatively low tape tension, wide torque capability, and precise speed control facilitate design for mechanical reliability to match that of tape-recorder electronics.

  17. Individual Differences in Human Reliability Analysis

    SciTech Connect

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when operators are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  18. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Troy; Toon, Jamie; Conner, Angelo C.; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology and value of modifying allocations to reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program’s subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. This iterative process provided an opportunity for the reliability engineering team to reevaluate allocations as systems moved beyond their conceptual and preliminary design phases. These new allocations are based on updated designs and maintainability characteristics of the components. It was found that trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper discusses the results of reliability and maintainability reallocations made for the GSDO subsystems as the program nears the end of its design phase.

  19. Evolving Reliability and Maintainability Allocations for NASA Ground Systems

    NASA Technical Reports Server (NTRS)

    Munoz, Gisela; Toon, Jamie; Toon, Troy; Adams, Timothy C.; Miranda, David J.

    2016-01-01

    This paper describes the methodology that was developed to allocate reliability and maintainability requirements for the NASA Ground Systems Development and Operations (GSDO) program's subsystems. As systems progressed through their design life cycle and hardware data became available, it became necessary to reexamine the previously derived allocations. Allocating is an iterative process; as systems moved beyond their conceptual and preliminary design phases this provided an opportunity for the reliability engineering team to reevaluate allocations based on updated designs and maintainability characteristics of the components. Trade-offs in reliability and maintainability were essential to ensuring the integrity of the reliability and maintainability analysis. This paper will discuss the value of modifying reliability and maintainability allocations made for the GSDO subsystems as the program nears the end of its design phase.
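The allocation step described in both of these records can be sketched with ARINC-style apportionment for a series system: split the system reliability target among subsystems in proportion to their current estimated failure rates, so that historically less reliable subsystems receive looser targets. The targets and rates below are hypothetical, not GSDO values.

```python
# Sketch: ARINC-style reliability apportionment for a series system.
# The system target and per-subsystem failure rates are hypothetical.

import math

def allocate(system_target, current_rates):
    """Split -ln(R_sys) among subsystems in proportion to current rates."""
    total = sum(current_rates.values())
    budget = -math.log(system_target)   # total allowable hazard budget
    # Higher-rate subsystems absorb more of the budget, i.e. get a
    # looser (lower) reliability target.
    return {name: math.exp(-budget * rate / total)
            for name, rate in current_rates.items()}

rates = {"cryo": 4e-5, "power": 1e-5, "comms": 5e-6}  # failures/hour (assumed)
targets = allocate(system_target=0.99, current_rates=rates)
for name, r in targets.items():
    print(f"{name}: R >= {r:.4f}")
```

Reallocating as designs mature, as the abstracts describe, amounts to rerunning this with updated rate estimates; the product of the allocated targets always recovers the system target.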

  20. Joint Architecture Standard (JAS) Reliable Data Delivery Protocol (RDDP) specification.

    SciTech Connect

    Enderle, Justin Wayne; Daniels, James W.; Gardner, Michael T.; Eldridge, John M.; Hunt, Richard D.; Gallegos, Daniel E.

    2011-05-01

    The Joint Architecture Standard (JAS) program at Sandia National Laboratories requires the use of a reliable data delivery protocol over SpaceWire. The National Aeronautics and Space Administration at the Goddard Spaceflight Center in Greenbelt, Maryland, developed and specified a reliable protocol for its Geostationary Operational Environment Satellite known as GOES-R Reliable Data Delivery Protocol (GRDDP). The JAS program implemented and tested GRDDP and then suggested a number of modifications to the original specification to meet its program specific requirements. This document details the full RDDP specification as modified for JAS. The JAS Reliable Data Delivery Protocol uses the lower-level SpaceWire data link layer to provide reliable packet delivery services to one or more higher-level host application processes. This document specifies the functional requirements for JRDDP but does not specify the interfaces to the lower- or higher-level processes, which may be implementation-dependent.
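The basic mechanism any such protocol layers over an unreliable link is acknowledge-and-retransmit. The stop-and-wait sketch below is a generic illustration of that mechanism, not the actual GRDDP/JRDDP packet format or state machine.

```python
# Sketch: generic stop-and-wait reliable delivery over a lossy link.
# This is an illustration of the ack/retransmit idea only, not the
# GRDDP/JRDDP specification.

def send_reliably(packet, link_send, wait_for_ack, max_retries=3):
    """Retransmit until the receiver acknowledges or retries are exhausted."""
    for attempt in range(1 + max_retries):
        link_send(packet)
        if wait_for_ack(packet["seq"]):
            return attempt + 1        # number of transmissions used
    raise TimeoutError(f"packet {packet['seq']} unacknowledged")

# Simulated lossy link: the first two transmissions go unacknowledged.
sent = []
def link_send(pkt): sent.append(pkt)
def wait_for_ack(seq): return len(sent) >= 3

print(send_reliably({"seq": 7, "data": b"telemetry"}, link_send, wait_for_ack))  # 3
```

A real implementation such as GRDDP uses sliding windows and sequence-number management rather than stop-and-wait, but the recovery logic reduces to the same retransmit-until-acknowledged loop.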