Sample records for cost high reliability

  1. Methods and Costs to Achieve Ultra Reliable Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2012-01-01

    A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, so that each individual system need have only fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, by adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
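The redundancy-versus-common-cause trade-off described here can be sketched with a beta-factor model, a standard reliability-engineering device (the model choice and all numbers below are illustrative assumptions, not taken from the paper):

```python
# Beta-factor sketch of how common-cause failure erodes redundancy.
# All numbers are illustrative, not from the paper.
r = 0.99      # assumed reliability of one life support system over the mission
beta = 0.05   # assumed fraction of failure probability that is common-cause

# Independent triple redundancy: fails only if all three copies fail.
r_triple_indep = 1 - (1 - r) ** 3

# Beta-factor model: a common-cause event disables all three copies at once;
# the remaining failure probability acts independently on each copy.
p_ccf = beta * (1 - r)
p_indep = (1 - beta) * (1 - r)
r_triple_ccf = (1 - p_ccf) * (1 - p_indep ** 3)

print(f"single system:             {r:.6f}")
print(f"triple, independent:       {r_triple_indep:.6f}")
print(f"triple, with common cause: {r_triple_ccf:.6f}")
```

Even a small common-cause fraction caps achievable reliability near 1 - beta·(1 - r), which is why the abstract notes that spares "may be defeated by common cause failures" and why technically diverse redundancy is attractive despite doubling the life cycle cost.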

  2. Power Electronics Packaging Reliability | Transportation Research | NREL

    Science.gov Websites

    High-temperature bonded interface materials are a key enabling technology for compact, lightweight, low-cost, and reliable power electronics packaging.

  3. A cost assessment of reliability requirements for shuttle-recoverable experiments

    NASA Technical Reports Server (NTRS)

    Campbell, J. W.

    1975-01-01

    The relaunching of unsuccessful experiments or satellites will become a real option with the advent of the space shuttle. An examination was made of the cost effectiveness of relaxing reliability requirements for experiment hardware by allowing more than one flight of an experiment in the event of its failure. Any desired overall reliability or probability of mission success can be acquired by launching an experiment with less reliability two or more times if necessary. Although this procedure leads to uncertainty in total cost projections, because the number of flights is not known in advance, a considerable cost reduction can sometimes be achieved. In cases where reflight costs are low relative to the experiment's cost, three flights with overall reliability 0.9 can be made for less than half the cost of one flight with a reliability of 0.9. An example typical of shuttle payload cost projections is cited where three low reliability flights would cost less than $50 million and a single high reliability flight would cost over $100 million. The ratio of reflight cost to experiment cost is varied and its effect on the range in total cost is observed. An optimum design reliability selection criterion to minimize expected cost is proposed, and a simple graphical method of determining this reliability is demonstrated.
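A quick sketch of the reflight arithmetic (my own illustration of the overall-reliability formula, not the paper's cost model):

```python
# Per-flight reliability r needed so that up to n flights achieve an overall
# mission success probability of 0.9 (illustrative, not the paper's model).
target = 0.9
n = 3
r = 1 - (1 - target) ** (1 / n)   # solves 1 - (1 - r)**n = target
print(f"required per-flight reliability: {r:.3f}")   # about 0.536

# Expected number of flights flown, stopping after the first success.
expected_flights = sum((1 - r) ** k for k in range(n))
print(f"expected flights: {expected_flights:.2f}")   # about 1.68
```

Each flight can thus use hardware only about half as reliable, and on average fewer than two of the three allowed flights are actually flown, which is where the cost saving comes from when reflight costs are low.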

  4. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    NASA Astrophysics Data System (ADS)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, both price and reliability must be considered as decision criteria, since price determines the direct (acquisition) cost while the reliability of the bearing determines indirect costs such as maintenance. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered in bearing procurement analysis. This paper presents a bearing evaluation method that uses total cost of ownership analysis to treat price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from its dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, whereas for short-term planning the cheaper bearing with lower reliability is preferable. This context dependence can give rise to conflict between stakeholders, so the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
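Reliability prediction from the dynamic load rating, as mentioned above, is conventionally based on the basic rating life formula for rolling bearings, L10 = (C/P)^p with p = 3 for ball bearings. A minimal sketch with hypothetical prices, ratings, and load:

```python
# Basic rating life: L10 = (C / P) ** 3 for ball bearings, in millions of
# revolutions at 90% reliability. Prices, ratings and load are hypothetical.
def l10_life_mrev(c_rating_n: float, load_n: float, exponent: float = 3.0) -> float:
    """Basic rating life in millions of revolutions."""
    return (c_rating_n / load_n) ** exponent

cheap = {"price": 40.0, "C": 12_000.0}     # lower dynamic load rating (N)
premium = {"price": 65.0, "C": 16_000.0}   # higher dynamic load rating (N)
load = 4_000.0                             # assumed equivalent dynamic load, N

for name, b in (("cheap", cheap), ("premium", premium)):
    print(f"{name}: ${b['price']:.0f}, L10 = {l10_life_mrev(b['C'], load):.0f} Mrev")
```

Here the premium bearing costs about 60% more but lasts well over twice as long (64 vs 27 million revolutions), so over a long planning horizon it needs fewer replacements; over a short horizon the cheap bearing may never fail at all, which is exactly the stakeholder conflict the abstract describes.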

  5. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  6. Reliability and cost: A sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    How a design engineer or manager should choose, in the design phase of a system, between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved; the subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs should be minimized, i.e., the cost of the subsystem plus the expected cost due to subsystem failure.

  7. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
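The spare-robot versus spare-parts comparison can be illustrated with a toy k-out-of-n model (the structure and all numbers below are my assumptions, not the paper's data):

```python
from math import comb

# Toy model: a robot is two components in series, each with mission
# reliability 0.9; the mission needs at least 2 working robots.
r_comp = 0.9
r_robot = r_comp ** 2   # series system: 0.81

def k_of_n(k: int, n: int, r: float) -> float:
    """P(at least k of n independent units with reliability r survive)."""
    return sum(comb(n, j) * r**j * (1 - r) ** (n - j) for j in range(k, n + 1))

# Option A: redundancy -- carry a whole spare robot (3 robots, need 2).
p_spare_robot = k_of_n(2, 3, r_robot)

# Option B: repairability -- 2 robots, one spare per component, so each
# component position fails only if both copies fail.
r_comp_spared = 1 - (1 - r_comp) ** 2   # 0.99
p_spare_parts = k_of_n(2, 2, r_comp_spared ** 2)

print(f"spare robot: {p_spare_robot:.4f}")
print(f"spare parts: {p_spare_parts:.4f}")
```

In this toy setup the spare-parts team wins (about 0.961 vs 0.905 mission reliability) while carrying less mass than a whole third robot, consistent with the abstract's conclusion that cheaper components plus spares can beat a minimal number of highly robust robots.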

  8. Balancing low cost with reliable operation in the rotordynamic design of the ALS Liquid Hydrogen Fuel Turbopump

    NASA Technical Reports Server (NTRS)

    Greenhill, L. M.

    1990-01-01

    The Air Force/NASA Advanced Launch System (ALS) Liquid Hydrogen Fuel Turbopump (FTP) has primary design goals of low cost and high reliability, with performance and weight having less importance. This approach is atypical compared with other rocket engine turbopump design efforts, such as on the Space Shuttle Main Engine (SSME), which emphasized high performance and low weight. Similar to the SSME turbopumps, the ALS FTP operates supercritically, which implies that stability and bearing loads strongly influence the design. In addition, the use of low cost/high reliability features in the ALS FTP such as hydrostatic bearings, relaxed seal clearances, and unshrouded turbine blades also have a negative influence on rotordynamics. This paper discusses the analysis conducted to achieve a balance between low cost and acceptable rotordynamic behavior, to ensure that the ALS FTP will operate reliably without subsynchronous instabilities or excessive bearing loads.

  9. Reliability and cost analysis methods

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.

    1991-01-01

    In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
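The decision rule in this abstract — minimize subsystem cost plus expected cost of failure — takes only a few lines to apply; the dollar figures below are invented to show that either subsystem can win depending on the cost of a failure:

```python
# Total expected cost = subsystem price + P(failure) * cost of failure.
# Prices and failure costs are invented for illustration.
options = {".990 subsystem": (100_000.0, 0.990),
           ".995 subsystem": (160_000.0, 0.995)}

for failure_cost in (10e6, 20e6):
    print(f"failure cost ${failure_cost:,.0f}:")
    for name, (price, reliability) in options.items():
        total = price + (1 - reliability) * failure_cost
        print(f"  {name}: total expected cost ${total:,.0f}")
```

With a $10M failure cost the cheaper subsystem wins ($200k vs $210k total); at $20M the decision flips ($300k vs $260k). The higher-reliability subsystem is justified only when the expected failure cost it avoids exceeds its price premium.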

  10. Thermal Management and Reliability of Automotive Power Electronics and Electric Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E

    Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence development of components that meet aggressive performance and cost targets through development and characterization of cooling technologies, and thermal characterization and improvements of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost- and time-effective design.

  11. The influence of various test plans on mission reliability [for Shuttle Spacelab payloads]

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Methods have been developed for the evaluation of cost effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan along with calculations of optimum test levels and expected costs. The tests have been ranked according to both minimizing expected project costs and vibroacoustic reliability. It was found that optimum costs may vary up to $6 million with the lowest plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.

  12. Can real time location system technology (RTLS) provide useful estimates of time use by nursing personnel?

    PubMed

    Jones, Terry L; Schlegel, Cara

    2014-02-01

    Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real-time location systems (RTLS), a pilot study was conducted to assess the efficacy (method agreement) of RTLS time-use estimates, their inter-rater reliability, and the associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted.

  13. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  14. Taguchi Approach to Design Optimization for Quality and Cost: An Overview

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Dean, Edwin B.

    1990-01-01

    Estimates based on the existing cost of doing business in space indicate that establishing a human presence on the Moon and Mars under the Space Exploration Initiative (SEI) will require resources felt by many to be more than the national budget can afford. For SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. New philosophy and technology must therefore be employed to design and produce reliable, high-quality space systems at low cost. Recognizing the need to reduce cost and improve quality and productivity, the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), are among the most important statistical tools of TQM for designing high-quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. They have been used successfully in Japan and the United States to design reliable, high-quality products at low cost in areas such as automobiles and consumer electronics, but they are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications, and discuss their role in identifying cost-sensitive design parameters.
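At the core of the Taguchi approach are signal-to-noise ratios computed from designed experiments; a minimal sketch using the standard S/N formulas, with made-up response data:

```python
import math

# Taguchi signal-to-noise ratios; a higher S/N means a more robust setting.
def sn_larger_is_better(values):
    """S/N = -10 log10((1/n) * sum(1/y^2)) for responses to be maximized."""
    return -10 * math.log10(sum(1 / y**2 for y in values) / len(values))

def sn_smaller_is_better(values):
    """S/N = -10 log10((1/n) * sum(y^2)) for responses to be minimized."""
    return -10 * math.log10(sum(y**2 for y in values) / len(values))

# Two hypothetical design settings with the same mean response: the more
# consistent one gets the higher S/N, so it is preferred as more robust.
setting_a = [9.8, 10.1, 10.0]
setting_b = [8.0, 12.0, 10.0]
print(sn_larger_is_better(setting_a), sn_larger_is_better(setting_b))
```

In a Taguchi DOE, each row of an orthogonal array is scored with such an S/N ratio, and the factor levels that maximize S/N are selected — quality improvement through parameter choice rather than added cost.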

  15. High-reliability gas-turbine combined-cycle development program: Phase II. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hecht, K.G.; Sanderson, R.A.; Smith, M.J.

    This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program, whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. This volume presents information on the reliability, availability, and maintainability (RAM) analysis of a representative plant and the preliminary design of the gas turbine, the gas turbine ancillaries, and the balance of plant including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean times between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification, component redundancy, and selection of components with inherently high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failures of nearly twice the 3000-hour EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.

  16. A pragmatic decision model for inventory management with heterogeneous suppliers

    NASA Astrophysics Data System (ADS)

    Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa

    2018-05-01

    For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable supplier extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of the inventory system.

  17. Real-time reliable determination of binding kinetics of DNA hybridization using a multi-channel graphene biosensor

    NASA Astrophysics Data System (ADS)

    Xu, Shicai; Zhan, Jian; Man, Baoyuan; Jiang, Shouzhen; Yue, Weiwei; Gao, Shoubao; Guo, Chengang; Liu, Hanping; Li, Zhenhua; Wang, Jihua; Zhou, Yaoqi

    2017-03-01

    Reliable determination of the binding kinetics and affinity of DNA hybridization and single-base mismatches plays an essential role in systems biology and personalized and precision medicine. The standard tools are optical-based sensors that are difficult to operate at low cost and to miniaturize for high-throughput measurement. Biosensors based on nanowire field-effect transistors have been developed, but reliable and cost-effective fabrication remains a challenge. Here, we demonstrate that a graphene single-crystal domain patterned into multiple channels can measure time- and concentration-dependent DNA hybridization kinetics and affinity reliably and sensitively, with a detection limit of 10 pM for DNA. It can distinguish single-base mutations quantitatively in real time. An analytical model is developed to estimate probe density, efficiency of hybridization and the maximum sensor response. The results suggest a promising future for cost-effective, high-throughput screening of drug candidates, genetic variations and disease biomarkers by using an integrated, miniaturized, all-electrical multiplexed, graphene-based DNA array.
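Time- and concentration-dependent hybridization curves like those described here are typically interpreted with standard Langmuir binding kinetics; the sketch below uses that textbook model with hypothetical rate constants, not the paper's fitted values:

```python
import math

# Langmuir kinetics: fraction of probes bound at time t and target
# concentration C, with association/dissociation rates k_on and k_off:
#   theta(t) = C/(C + Kd) * (1 - exp(-(k_on*C + k_off)*t)),  Kd = k_off/k_on
def fraction_bound(t_s: float, conc_m: float, k_on: float, k_off: float) -> float:
    kd = k_off / k_on
    k_obs = k_on * conc_m + k_off
    return conc_m / (conc_m + kd) * (1 - math.exp(-k_obs * t_s))

k_on, k_off = 1e6, 1e-4   # hypothetical: 1/(M*s) and 1/s, so Kd = 100 pM
for conc in (10e-12, 100e-12, 1e-9):   # 10 pM is the reported detection limit
    print(f"C = {conc:.0e} M: fraction bound after 1 h = "
          f"{fraction_bound(3600, conc, k_on, k_off):.3f}")
```

Fitting measured sensor responses against such curves at several concentrations yields k_on, k_off, and the equilibrium affinity; a single-base mismatch shows up as a markedly larger k_off.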

  18. Requirements and approach for a space tourism launch system

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.; Lindley, Charles A.

    2003-01-01

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240/pound ($529/kg), or $72,000 per passenger round trip, goals should be about $50/pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made, and a route to developing such a capability is discussed. The vehicle's ability to satisfy the traditional spacelift market is also shown.

  19. Cutting costs of multiple mini-interviews – changes in reliability and efficiency of the Hamburg medical school admission test between two applications

    PubMed Central

    2014-01-01

    Background Multiple mini-interviews (MMIs) are a valuable tool in medical school selection due to their broad acceptance and promising psychometric properties. With respect to the high expenses associated with this procedure, the discussion of its feasibility should be extended to cost-effectiveness issues. Methods Following a pilot test of MMIs for medical school admission at Hamburg University in 2009 (HAM-Int), we took several actions to improve reliability and to reduce the costs of the subsequent procedure in 2010. For both years, we assessed overall and inter-rater reliabilities based on multilevel analyses. Moreover, we provide a detailed specification of costs, as well as an extrapolation of the interrelation of costs, reliability, and the setup of the procedure. Results The overall reliability of the initial 2009 HAM-Int procedure with twelve stations and an average of 2.33 raters per station was ICC=0.75. Following the improvement actions, in 2010 the ICC remained stable at 0.76, despite the reduction of the process to nine stations and 2.17 raters per station. Moreover, costs were cut from $915 to $495 per candidate. With the 2010 modalities, we could have reached an ICC of 0.80 with 16 single-rater stations ($570 per candidate). Conclusions With respect to reliability and cost-efficiency, it is generally worthwhile to invest in scoring, rater training and scenario development. Moreover, it is more beneficial to increase the number of stations than the number of raters within stations. However, beyond roughly 80% reliability, each minor improvement comes at skyrocketing cost. PMID:24645665
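The stations-versus-reliability trade-off can be sketched with the Spearman-Brown prophecy formula. This one-facet sketch ignores the rater facet the paper's multilevel model includes, so it will not reproduce the paper's exact numbers, but it shows why pushing past ICC 0.80 gets expensive:

```python
# Spearman-Brown: reliability of the average over k parallel stations.
def sb_reliability(k: float, rho: float) -> float:
    return k * rho / (1 + (k - 1) * rho)

def stations_needed(target: float, rho: float) -> float:
    """Invert Spearman-Brown for the number of stations."""
    return target * (1 - rho) / (rho * (1 - target))

# Back out a single-station reliability from ICC = 0.76 with 9 stations.
rho = 0.76 / (9 - 8 * 0.76)   # solves sb_reliability(9, rho) == 0.76
print(f"single-station reliability: {rho:.2f}")
print(f"stations for ICC 0.80: {stations_needed(0.80, rho):.1f}")
print(f"stations for ICC 0.90: {stations_needed(0.90, rho):.1f}")
```

Going from ICC 0.80 to 0.90 more than doubles the required stations in this sketch (about 11 to about 26), matching the "skyrocketing costs" conclusion.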

  20. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risk, or major consequences, should consider the effects of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available for performing human reliability assessments, ranging from identifying the most likely areas of concern to detailed assessments with calculated human error failure probabilities. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). A human reliability assessment should be the first step toward reducing, mitigating or eliminating costly mistakes or catastrophic failures. Using human reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  1. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-07-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling, time-proportional sampling and passive sampling using flow-proportional samplers. Assuming time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates with passive samplers is high despite costs similar to those of time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.
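The load the three methods estimate is essentially load = Σ C_i·Q_i·Δt over the monitoring period. A synthetic sketch (entirely made-up concentration and discharge series) of why sparse grab samples can still miss part of a flow-weighted load:

```python
import math

# Synthetic four-week hourly series with correlated concentration and flow;
# load = sum(C_i * Q_i * dt). All values are made up for illustration.
hours = range(24 * 28)
conc = [2.0 + 1.5 * math.sin(2 * math.pi * t / (24 * 7)) for t in hours]  # mg/L
flow = [1.0 + 0.5 * math.sin(2 * math.pi * t / (24 * 7)) for t in hours]  # m3/s
dt = 3600  # seconds per hourly sample

# Time-proportional load (taken as the best estimate of the "true" load).
true_load_kg = sum(c * q * dt for c, q in zip(conc, flow)) / 1000

# Grab sampling: two fortnightly samples, scaled by total discharge volume.
mean_grab_conc = (conc[0] + conc[24 * 14]) / 2
grab_load_kg = mean_grab_conc * sum(flow) * dt / 1000

print(f"time-proportional: {true_load_kg:,.0f} kg")
print(f"grab estimate:     {grab_load_kg:,.0f} kg")
```

Because concentration and flow peak together in this series, the grab estimate undershoots the flow-weighted load by roughly 16% even though both grabs happen to hit the mean concentration; denser or flow-weighted sampling removes this bias at higher cost.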

  2. Comparison of sampling methodologies for nutrient monitoring in streams: uncertainties, costs and implications for mitigation

    NASA Astrophysics Data System (ADS)

    Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.

    2014-11-01

    Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling; time-proportional sampling; and passive sampling using flow-proportional samplers. Assuming hourly time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates with passive samplers is high despite costs similar to those of time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.

  3. High-reliability gas-turbine combined-cycle development program: Phase II, Volume 3. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hecht, K.G.; Sanderson, R.A.; Smith, M.J.

    This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program, whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. The power plant was addressed in three areas: (1) the gas turbine, (2) the gas turbine ancillaries, and (3) the balance of plant including the steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean times between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification, component redundancy, and selection of components with inherently high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failures of nearly twice the 3000-h EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.

  4. System engineering of complex optical systems for mission assurance and affordability

    NASA Astrophysics Data System (ADS)

    Ahmad, Anees

    2017-08-01

    Affordability and reliability are equally important as the performance and development time for many optical systems for military, space and commercial applications. These characteristics are even more important for the systems meant for space and military applications where total lifecycle costs must be affordable. Most customers are looking for high performance optical systems that are not only affordable but are designed with "no doubt" mission assurance, reliability and maintainability in mind. Both US military and commercial customers are now demanding an optimum balance between performance, reliability and affordability. Therefore, it is important to employ a disciplined systems design approach for meeting the performance, cost and schedule targets while keeping affordability and reliability in mind. The US Missile Defense Agency (MDA) now requires all of their systems to be engineered, tested and produced according to the Mission Assurance Provisions (MAP). These provisions or requirements are meant to ensure complex and expensive military systems are designed, integrated, tested and produced with the reliability and total lifecycle costs in mind. This paper describes a system design approach based on the MAP document for developing sophisticated optical systems that are not only cost-effective but also deliver superior and reliable performance during their intended missions.

  5. Validity and reliability of a low-cost digital dynamometer for measuring isometric strength of lower limb.

    PubMed

    Romero-Franco, Natalia; Jiménez-Reyes, Pedro; Montaño-Munuera, Juan A

    2017-11-01

    Lower limb isometric strength is a key parameter to monitor the training process or recognise muscle weakness and injury risk. However, valid and reliable methods to evaluate it often require high-cost tools. The aim of this study was to analyse the concurrent validity and reliability of a low-cost digital dynamometer for measuring isometric strength in lower limb. Eleven physically active and healthy participants performed maximal isometric strength for: flexion and extension of ankle, flexion and extension of knee, flexion, extension, adduction, abduction, internal and external rotation of hip. Data obtained by the digital dynamometer were compared with the isokinetic dynamometer to examine its concurrent validity. Data obtained by the digital dynamometer from 2 different evaluators and 2 different sessions were compared to examine its inter-rater and intra-rater reliability. Intra-class correlation (ICC) for validity was excellent in every movement (ICC > 0.9). Intra and inter-tester reliability was excellent for all the movements assessed (ICC > 0.75). The low-cost digital dynamometer demonstrated strong concurrent validity and excellent intra and inter-tester reliability for assessing isometric strength in the main lower limb movements.
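
    The agreement statistic behind this record can be sketched in a few lines. Below is a minimal, illustrative ICC(2,1) (two-way random, absolute agreement) computation in pure Python; the paired strength readings are invented for the example, not taken from the study.

```python
from statistics import mean

def icc_2_1(ratings):
    """Two-way random, absolute-agreement ICC(2,1).
    ratings: one row per subject; columns are raters/devices."""
    n = len(ratings)            # subjects
    k = len(ratings[0])         # raters
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]
    col_means = [mean(col) for col in zip(*ratings)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired readings (N) from two dynamometers on six movements
low_cost   = [102, 145, 98, 180, 160, 120]
isokinetic = [100, 150, 95, 185, 158, 118]
print(round(icc_2_1(list(zip(low_cost, isokinetic))), 3))
```

    Closely matched paired readings drive the ICC toward 1, which is the pattern an "excellent" validity finding (ICC > 0.9) reflects.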

  6. Application of a truncated normal failure distribution in reliability testing

    NASA Technical Reports Server (NTRS)

    Groves, C., Jr.

    1968-01-01

    Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
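
    A short sketch of the idea in this record: a normal time-to-failure distribution truncated at zero yields an age-dependent (increasing) hazard rate, unlike the constant-rate exponential model. The parameters below are assumed purely for illustration.

```python
from statistics import NormalDist

def reliability(t, mu, sigma):
    """Survival function of a normal time-to-failure truncated at t = 0."""
    z = NormalDist(mu, sigma)
    return (1.0 - z.cdf(t)) / (1.0 - z.cdf(0.0))

def hazard(t, mu, sigma, dt=1e-4):
    """Instantaneous failure rate; rises with age for the truncated normal."""
    r = reliability(t, mu, sigma)
    return (r - reliability(t + dt, mu, sigma)) / (dt * r)

mu, sigma = 1000.0, 200.0   # assumed mean life and spread, in hours
print(reliability(800, mu, sigma))                       # most units survive to 800 h
print(hazard(800, mu, sigma) < hazard(1200, mu, sigma))  # hazard grows with age
```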

  7. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology for obtaining statistical reliability information on memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.

  8. On the Path to SunShot. The Role of Advancements in Solar Photovoltaic Efficiency, Reliability, and Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodhouse, Michael; Jones-Albertus, Rebecca; Feldman, David

    2016-05-01

    This report examines the remaining challenges to achieving the competitive photovoltaic (PV) costs and large-scale deployment envisioned under the U.S. Department of Energy's SunShot Initiative. Solar-energy cost reductions can be realized through lower PV module and balance-of-system (BOS) costs as well as improved system efficiency and reliability. Numerous combinations of PV improvements could help achieve the levelized cost of electricity (LCOE) goals because of the tradeoffs among key metrics like module price, efficiency, and degradation rate as well as system price and lifetime. Using LCOE modeling based on bottom-up cost analysis, two specific pathways are mapped to exemplify the many possible approaches to module cost reductions of 29%-38% between 2015 and 2020. BOS hardware and soft cost reductions, ranging from 54%-77% of total cost reductions, are also modeled. The residential sector's high supply-chain costs, labor requirements, and customer-acquisition costs give it the greatest BOS cost-reduction opportunities, followed by the commercial sector, although opportunities are available to the utility-scale sector as well. Finally, a future scenario is considered in which very high PV penetration requires additional costs to facilitate grid integration and increased power-system flexibility, which might necessitate even lower solar LCOEs. The analysis of a pathway to 3-5 cents/kWh PV systems underscores the importance of combining robust improvements in PV module and BOS costs as well as PV system efficiency and reliability if such aggressive long-term targets are to be achieved.
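
    The tradeoffs the report describes (module price, degradation rate, system price, lifetime) all flow through the LCOE. A simplified, illustrative LCOE calculation is sketched below; the function and its inputs are assumptions for the example, not the report's bottom-up model.

```python
def lcoe(capex_per_kw, opex_per_kw_yr, cf, degradation, discount, years):
    """Simplified real-dollar LCOE ($/kWh): discounted costs over discounted energy."""
    kwh_yr0 = cf * 8760.0  # first-year kWh per kW of capacity
    costs = capex_per_kw + sum(opex_per_kw_yr / (1 + discount) ** t
                               for t in range(1, years + 1))
    energy = sum(kwh_yr0 * (1 - degradation) ** (t - 1) / (1 + discount) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Illustrative inputs only (not the report's): $1.5/W system, 25% capacity
# factor, 0.5%/yr degradation, 7% discount rate, 25-year life
print(round(lcoe(1500, 20, 0.25, 0.005, 0.07, 25), 3))
```

    With these assumed inputs the result lands in the several-cents-per-kWh range, which is why lower degradation and longer lifetimes trade off against module and BOS price in reaching aggressive targets.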

  9. Total systems design analysis of high performance structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1993-01-01

    Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integration tasks are reliability and recurring structural costs. Significant interface designer-control parameters were noted as shapes, dimensions, probability range factors, and cost. The structural failure concept is presented, and first-order reliability and deterministic methods, their benefits, and limitations are discussed. A deterministic reliability technique combining the benefits of both is proposed for static structures, which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique field environments.

  10. Critical issues in assuring long lifetime and fail-safe operation of optical communications network

    NASA Astrophysics Data System (ADS)

    Paul, Dilip K.

    1993-09-01

    Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force to set the goals as well as priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in space environment is presented.
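
    The availability arithmetic behind hardware duplication and diversity routing can be sketched directly: elements in series multiply availabilities, while redundant alternatives multiply unavailabilities. The figures below are hypothetical, chosen only to illustrate the effect.

```python
def series(*avail):
    """Availability of elements that must all work (series path)."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """Availability with redundant alternatives (duplication / diverse routing)."""
    u = 1.0
    for x in avail:
        u *= (1.0 - x)  # all alternatives must fail for the service to fail
    return 1.0 - u

# Hypothetical figures: a single fiber path vs. the same path plus a diverse route
single_path = series(0.999, 0.995, 0.999)  # terminal, fiber span, terminal
with_diverse_route = parallel(single_path, series(0.999, 0.99, 0.999))
print(single_path, with_diverse_route)
```

    Even a less-available diverse route cuts unavailability by roughly two orders of magnitude, which is the economic argument for protecting the network rather than perfecting every component.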

  11. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    NASA Astrophysics Data System (ADS)

    Harney, Kieran P.

    2005-01-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  12. Standard semiconductor packaging for high-reliability low-cost MEMS applications

    NASA Astrophysics Data System (ADS)

    Harney, Kieran P.

    2004-12-01

    Microelectronic packaging technology has evolved over the years in response to the needs of IC technology. The fundamental purpose of the package is to provide protection for the silicon chip and to provide electrical connection to the circuit board. Major change has been witnessed in packaging and today wafer level packaging technology has further revolutionized the industry. MEMS (Micro Electro Mechanical Systems) technology has created new challenges for packaging that do not exist in standard ICs. However, the fundamental objective of MEMS packaging is the same as traditional ICs, the low cost and reliable presentation of the MEMS chip to the next level interconnect. Inertial MEMS is one of the best examples of the successful commercialization of MEMS technology. The adoption of MEMS accelerometers for automotive airbag applications has created a high volume market that demands the highest reliability at low cost. The suppliers to these markets have responded by exploiting standard semiconductor packaging infrastructures. However, there are special packaging needs for MEMS that cannot be ignored. New applications for inertial MEMS devices are emerging in the consumer space that adds the imperative of small size to the need for reliability and low cost. These trends are not unique to MEMS accelerometers. For any MEMS technology to be successful the packaging must provide the basic reliability and interconnection functions, adding the least possible cost to the product. This paper will discuss the evolution of MEMS packaging in the accelerometer industry and identify the main issues that needed to be addressed to enable the successful commercialization of the technology in the automotive and consumer markets.

  13. Design of preventive maintenance system using the reliability engineering and maintenance value stream mapping methods in PT. XYZ

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Panjaitan, N.; Angelita, S.

    2018-02-01

    PT. XYZ is a non-governmental company engaged in processing rubber into crumb rubber. Production is supported by a number of machines and interacting pieces of equipment to achieve optimal productivity. The machines used in the production process are the Conveyor Breaker, Breaker, Rolling Pin, Hammer Mill, Mill Roll, Conveyor, Shredder Crumb, and Dryer. The maintenance system at PT. XYZ is corrective maintenance, i.e. repairing or replacing machine components only after a breakdown occurs. Replacing components under corrective maintenance stops the machine while the production process is in progress, so production time is lost while the operator replaces the damaged components. This lost production time means production targets are missed and leads to high loss costs. The cost for all components is Rp. 4.088.514.505, a very high figure just for maintaining a Mill Roll machine. PT. XYZ therefore needs preventive maintenance, i.e. scheduled replacement of machine components and improved maintenance efficiency. The methods used are Reliability Engineering and Maintenance Value Stream Mapping (MVSM). The data needed in this research are the time intervals between damage to machine components, opportunity cost, labor cost, component cost, corrective repair time, preventive repair time, Mean Time To Opportunity (MTTO), Mean Time To Repair (MTTR), and Mean Time To Yield (MTTY). In this research, the critical components of the Mill Roll machine are the Spier, Bushing, Bearing, Coupling, and Roll. The damage distribution, reliability, MTTF, cost of failure, cost of preventive maintenance, current state map, and future state map are determined so that the replacement time for each critical component with the lowest maintenance cost can be found and a Standard Operating Procedure (SOP) developed.
For the critical components identified, the Spier replacement interval is 228 days with a reliability value of 0.503171, the Bushing 240 days with a reliability value of 0.36861, the Bearing 202 days with a reliability value of 0.503058, the Coupling 247 days with a reliability value of 0.50108, and the Roll 301 days with a reliability value of 0.373525. The results show that cost decreases from Rp 300,688,114 to Rp 244,384,371 in moving from corrective to preventive maintenance, while maintenance efficiency increases with the application of preventive maintenance: for the Spier component from 54.0540541% to 74.07407%, the Bushing from 52.3809524% to 68.75%, the Bearing from 40% to 52.63158%, the Coupling from 60.9756098% to 71.42857%, and the Roll from 64.516129% to 74.7663551%.
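
    The replacement-interval logic used in studies like this can be sketched as an age-replacement optimization over a Weibull wear-out model: choose the interval T that minimizes expected cost per unit time. The Weibull parameters and cost ratio below are assumptions for illustration, not PT. XYZ's data.

```python
import math

def weibull_R(t, beta, eta):
    """Weibull reliability (survival) function."""
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, beta, eta, c_prev, c_fail, dt=0.5):
    """Expected cost per day of an age-replacement policy at interval T."""
    r_T = weibull_R(T, beta, eta)
    expected_cost = c_prev * r_T + c_fail * (1 - r_T)
    # Expected cycle length = integral of R(t) from 0 to T (trapezoidal rule)
    steps = int(T / dt)
    cycle = sum(dt * 0.5 * (weibull_R(i * dt, beta, eta)
                            + weibull_R((i + 1) * dt, beta, eta))
                for i in range(steps))
    return expected_cost / cycle

# Assumed wear-out parameters and costs for one hypothetical critical component
beta, eta = 2.5, 300.0     # shape > 1 means wear-out; scale in days
c_prev, c_fail = 1.0, 5.0  # a failure replacement costs 5x a planned one
best_T = min(range(50, 601, 10), key=lambda T: cost_rate(T, beta, eta, c_prev, c_fail))
print(best_T)
```

    Because failure replacement is costlier than planned replacement and the hazard rises with age, the optimum interval falls well short of the mean life, mirroring the interval-plus-reliability pairs reported above.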

  14. Using Ensemble Decisions and Active Selection to Improve Low-Cost Labeling for Multi-View Data

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Wagstaff, Kiri L.

    2011-01-01

    This paper seeks to improve low-cost labeling in terms of training set reliability (the fraction of correctly labeled training items) and test set performance for multi-view learning methods. Co-training is a popular multiview learning method that combines high-confidence example selection with low-cost (self) labeling. However, co-training with certain base learning algorithms significantly reduces training set reliability, causing an associated drop in prediction accuracy. We propose the use of ensemble labeling to improve reliability in such cases. We also discuss and show promising results on combining low-cost ensemble labeling with active (low-confidence) example selection. We unify these example selection and labeling strategies under collaborative learning, a family of techniques for multi-view learning that we are developing for distributed, sensor-network environments.

  15. Networking via wireless bridge produces greater speed and flexibility, lowers cost.

    PubMed

    1998-10-01

    Wireless computer networking. Computer connectivity is essential in today's high-tech health care industry. But telephone lines aren't fast enough, and high-speed connections like T-1 lines are costly. Read about an Ohio community hospital that installed a wireless network "bridge" to connect buildings that are miles apart, creating a reliable high-speed link that costs one-tenth of a T-1 line.

  16. A low-cost, high-field-strength magnetic resonance imaging-compatible actuator.

    PubMed

    Secoli, Riccardo; Robinson, Matthew; Brugnoli, Michele; Rodriguez y Baena, Ferdinando

    2015-03-01

    To perform minimally invasive surgical interventions with the aid of robotic systems within a magnetic resonance imaging scanner offers significant advantages compared to conventional surgery. However, despite the numerous exciting potential applications of this technology, the introduction of magnetic resonance imaging-compatible robotics has been hampered by safety, reliability and cost concerns: the robots should not be attracted by the strong magnetic field of the scanner and should operate reliably in the field without causing distortion to the scan data. Development of non-conventional sensors and/or actuators is thus required to meet these strict operational and safety requirements. These demands commonly result in expensive actuators, which mean that cost effectiveness remains a major challenge for such robotic systems. This work presents a low-cost, high-field-strength magnetic resonance imaging-compatible actuator: a pneumatic stepper motor which is controllable in open loop or closed loop, along with a rotary encoder, both fully manufactured in plastic, which are shown to perform reliably via a set of in vitro trials while generating negligible artifacts when imaged within a standard clinical scanner. © IMechE 2015.

  17. Radiation Challenges for Electronics in the Vision for Space Exploration

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2006-01-01

    The slides present a brief snapshot of electronics and exploration-related challenges. Radiation effects have been the prime target; however, electronic parts reliability issues must also be considered. Modern electronics are designed with a 3-5 year lifetime. Upscreening does not improve reliability; it merely determines inherent levels. Testing costs are driven by device complexity, which increases tester complexity, beam requirements, and facility choices. Commercial devices may improve performance, but they are not cost panaceas. There is a need for more cost-effective access to high-energy heavy-ion facilities such as NSCL and NSRL. Costs for capable test equipment can run to more than $1M for full testing.

  18. Modeling and Simulation Reliable Spacecraft On-Board Computing

    NASA Technical Reports Server (NTRS)

    Park, Nohpill

    1999-01-01

    The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for spacecraft on-board computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration/reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize these capabilities, both the size and cost of the ground-station terminals have to be reduced by using a reliable, high-throughput, fast and cost-effective on-board computing system, which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before fault tolerance is employed. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module, and a module for fault tolerance, all of which interact through a central graphical user interface.

  19. Diverse Redundant Systems for Reliable Space Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2015-01-01

    Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
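
    The redundancy arithmetic in this abstract, and the way common cause failures defeat it, can be made concrete with a beta-factor sketch (beta is the assumed fraction of unit failures that are common-cause and disable every identical unit at once):

```python
def redundant_failure_prob(p_unit, n, beta=0.0):
    """Mission failure probability for n identical redundant units.
    beta: assumed fraction of unit failures that are common-cause.
    beta = 0 gives the ideal independent-redundancy result."""
    p_cc = beta * p_unit               # common cause defeats all n units
    p_indep = (1 - beta) * p_unit      # independent portion of each unit
    return p_cc + (1 - p_cc) * p_indep ** n

p = 0.1  # each unit's failure probability over the mission, per the abstract
print(redundant_failure_prob(p, 3))        # ideal triple redundancy, ~0.001
print(redundant_failure_prob(p, 3, 0.05))  # a small common-cause fraction dominates
```

    With even 5% of failures being common-cause, the third redundant unit buys almost nothing, which is why the abstract argues for diverse rather than identical redundancy.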

  20. Creating Highly Reliable Accountable Care Organizations.

    PubMed

    Vogus, Timothy J; Singer, Sara J

    2016-12-01

    Accountable Care Organizations' (ACOs) pursuit of the triple aim of higher quality, lower cost, and improved population health has met with mixed results. To improve the design and implementation of ACOs we look to organizations that manage similarly complex, dynamic, and tightly coupled conditions while sustaining exceptional performance known as high-reliability organizations. We describe the key processes through which organizations achieve reliability, the leadership and organizational practices that enable it, and the role that professionals can play when charged with enacting it. Specifically, we present concrete practices and processes from health care organizations pursuing high-reliability and from early ACOs to illustrate how the triple aim may be met by cultivating mindful organizing, practicing reliability-enhancing leadership, and identifying and supporting reliability professionals. We conclude by proposing a set of research questions to advance the study of ACOs and high-reliability research. © The Author(s) 2016.

  1. Space tourism optimized reusable spaceplane design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penn, J.P.; Lindley, C.A.

    Market surveys suggest that a viable space tourism industry will require flight rates about two orders of magnitude higher than those required for conventional spacelift. Although enabling round-trip cost goals for a viable space tourism business are about $240 per pound ($529/kg), or $72,000 per passenger round-trip, goals should be about $50 per pound ($110/kg), or approximately $15,000 for a typical passenger and baggage. The lower price will probably open space tourism to the general population. Vehicle reliabilities must approach those of commercial aircraft as closely as possible. This paper addresses the development of spaceplanes optimized for the ultra-high flight rate and high reliability demands of the space tourism mission. It addresses the fundamental operability, reliability, and cost drivers needed to satisfy this mission need. Figures of merit similar to those used to evaluate the economic viability of conventional commercial aircraft are developed, including items such as payload/vehicle dry weight, turnaround time, propellant cost per passenger, and insurance and depreciation costs, which show that infrastructure can be developed for a viable space tourism industry. A reference spaceplane design optimized for space tourism is described. Subsystem allocations for reliability, operability, and costs are made and a route to developing such a capability is discussed. The vehicle's ability to also satisfy the traditional spacelift market is shown. © 1997 American Institute of Physics.
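
    The ticket-price arithmetic in the abstract follows directly from the per-pound cost goals, assuming roughly a 300 lb allowance for a passenger plus baggage (the allowance is inferred from the quoted figures, not stated explicitly):

```python
def round_trip_price(cost_per_lb, passenger_and_baggage_lb=300):
    """Ticket price implied by a per-pound launch cost (assumed 300 lb allowance)."""
    return cost_per_lb * passenger_and_baggage_lb

print(round_trip_price(240))  # the $72,000 viability threshold in the abstract
print(round_trip_price(50))   # the ~$15,000 mass-market goal
```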

  2. Applying Organization Theory to Understanding the Adoption and Implementation of Accountable Care Organizations: Commentary.

    PubMed

    Shortell, Stephen M

    2016-12-01

    This commentary highlights the key arguments and contributions of institutional theory, transaction cost economics (TCE) theory, high reliability theory, and organizational learning theory to understanding the development and evolution of Accountable Care Organizations (ACOs). Institutional theory and TCE theory primarily emphasize the external influences shaping ACOs, while high reliability theory and organizational learning theory underscore the internal factors influencing ACO performance. A framework based on Implementation Science is proposed to consider the multiple perspectives on ACOs and, in particular, their ability to innovate to achieve desired cost, quality, and population health goals. © The Author(s) 2016.

  3. High-power direct-diode laser successes

    NASA Astrophysics Data System (ADS)

    Haake, John M.; Zediker, Mark S.

    2004-06-01

    Direct diode lasers will become much more prevalent in the manufacturing world due to their high efficiency, small portable size, unique beam profiles, and low ownership costs. Many novel applications have been described for high-power direct diode laser (HPDDL) systems, but few have been implemented in extreme production environments due to diode and diode-system reliability. We discuss several novel applications in which HPDDLs have been implemented and proven reliable and cost-effective in production environments: laser hardening/surface modification, laser wire-feed welding, and laser paint stripping. Each of these applications uniquely tests the capabilities of direct diode laser systems and confirms their reliability in production environments. A comparison of the advantages of direct diode lasers versus traditional industrial lasers such as CO2 and Nd:YAG, and non-laser technologies such as RF induction and MIG welders, is presented for each of these production applications.

  4. Electric service reliability cost/worth assessment in a developing country

    NASA Astrophysics Data System (ADS)

    Pandey, Mohan Kumar

    Considerable work has been done in developed countries to optimize the reliability of electric power systems on the basis of reliability cost versus reliability worth. This has yet to be considered in most developing countries, where development plans are still based on traditional deterministic measures. The difficulty with these criteria is that they cannot be used to evaluate the economic impacts of changing reliability levels on the utility and the customers, and therefore cannot lead to an optimum expansion plan for the system. The critical issue today faced by most developing countries is that the demand for electric power is high and growth in supply is constrained by technical, environmental, and most importantly by financial impediments. Many power projects are being canceled or postponed due to a lack of resources. The investment burden associated with the electric power sector has already led some developing countries into serious debt problems. This thesis focuses on power sector issues faced by developing countries and illustrates how a basic reliability cost/worth approach can be used in a developing country to determine appropriate planning criteria and justify future power projects by application to the Nepal Integrated Electric Power System (NPS). A reliability cost/worth based system evaluation framework is proposed in this thesis. Customer surveys conducted throughout Nepal using in-person interviews with approximately 2000 sample customers are presented. The survey results indicate that the interruption cost is dependent on both customer and interruption characteristics, and it varies from one location or region to another. Assessments at both the generation and composite system levels have been performed using the customer cost data and the developed NPS reliability database. 
The results clearly indicate the implications of service reliability to the electricity consumers of Nepal, and show that the reliability cost/worth evaluation is both possible and practical in a developing country. The average customer interruption costs of Rs 35/kWh at Hierarchical Level I and Rs 26/kWh at Hierarchical Level II evaluated in this research work led to an optimum reserve margin of 7.5%, which is considerably lower than the traditional reserve margin of 15% used in the NPS. A similar conclusion may result in other developing countries facing difficulties in power system expansion planning using the traditional approach. A new framework for system planning is therefore recommended for developing countries which would permit an objective review of the traditional system planning approach, and the evaluation of future power projects using a new approach based on fundamental principles of power system reliability and economics.
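
    The cost/worth optimum described here balances rising utility supply cost against falling customer interruption cost as the reserve margin grows. The toy model below is only a sketch of that tradeoff; the functional forms and all parameters except the Rs 26/kWh interruption cost are invented for illustration.

```python
import math

def total_cost(reserve_margin, supply_cost_slope, eens_at_zero, decay, ic_per_kwh):
    """Toy reliability cost/worth curve: utility cost rises with reserve margin,
    expected energy not served (EENS) falls roughly exponentially with it."""
    utility = supply_cost_slope * reserve_margin
    worth = ic_per_kwh * eens_at_zero * math.exp(-decay * reserve_margin)
    return utility + worth

# Hypothetical parameters; the optimum sits where the marginal cost of added
# reserve equals the marginal worth of the interruptions it avoids
margins = [m / 2 for m in range(0, 61)]  # 0% to 30% in 0.5% steps
best = min(margins, key=lambda m: total_cost(m, 10.0, 40.0, 0.3, 26.0))
print(best)
```

    Under these made-up inputs the optimum lands near 11-12%, below a traditional 15% rule-of-thumb margin, the same qualitative conclusion the thesis reaches for the NPS.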

  5. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization

    PubMed Central

    Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan

    2017-01-01

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325

  6. RGCA: A Reliable GPU Cluster Architecture for Large-Scale Internet of Things Computing Based on Effective Performance-Energy Optimization.

    PubMed

    Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan

    2017-08-04

    This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.

  7. Reliability and Cost Impacts for Attritable Systems

    DTIC Science & Technology

    2017-03-23

    and cost risk metrics to convey the value of reliability and reparability trades. Investigation of the benefit of trading system reparability ... illustrates the benefit that reliability engineering can have on total cost. 2.3.1 Contexts of System Reliability: Hogge (2012) identifies two distinct ... reliability and reparability trades. Investigation of the benefit of trading system reparability shows a marked increase in cost risk. Yet, trades in ...

  8. Limitations of Reliability for Long-Endurance Human Spaceflight

    NASA Technical Reports Server (NTRS)

    Owens, Andrew C.; de Weck, Olivier L.

    2016-01-01

    Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther away from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
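    The diminishing returns the authors describe can be seen in a standard spares-sizing calculation (a textbook sketch, not the paper's model): if failures arrive as a Poisson process, the spare count needed for a given confidence falls sublinearly as MTBF doubles.

```python
import math

def spares_needed(mission_hours, mtbf, confidence=0.99):
    """Smallest spare count s with P(failures <= s) >= confidence,
    assuming failures follow a Poisson process with expected count
    mission_hours / mtbf. Illustrative only."""
    lam = mission_hours / mtbf
    s, term = 0, math.exp(-lam)  # P(0 failures)
    cdf = term
    # Accumulate Poisson terms until the confidence target is met.
    while cdf < confidence:
        s += 1
        term *= lam / s
        cdf += term
    return s

# Doubling MTBF repeatedly saves fewer and fewer spares each time:
for mult in (1, 2, 4, 8):
    print(mult, spares_needed(10_000, 1_000 * mult))
```

    Each doubling of MTBF removes fewer spares than the last, which is the sublinear logistics-mass benefit the abstract refers to.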

  9. MEMS reliability: coming of age

    NASA Astrophysics Data System (ADS)

    Douglass, Michael R.

    2008-02-01

    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  10. NREL to Lead New Consortium to Improve Reliability and Performance of Solar

    Science.gov Websites

    ... for photovoltaics (PV) and lower the cost of electricity generated by solar power. The Durable Module ... the cost of electricity from photovoltaics." ... The Energy Department's Office of Energy Efficiency ... DuraMat will address the substantial opportunities that exist for durable, high-performance, low-cost ...

  11. An Overview of Advanced Data Acquisition System (ADAS)

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Steinrock, T. (Technical Monitor)

    2001-01-01

    The paper discusses the following: 1. Historical background. 2. What is ADAS? 3. R and D status. 4. Reliability/cost examples (1, 2, and 3). 5. What's new? 6. Technical advantages. 7. NASA relevance. 8. NASA plans/options. 9. Remaining R and D. 10. Applications. 11. Product benefits. 12. Commercial advantages. 13. Intellectual property. The aerospace industry requires highly reliable data acquisition systems. Traditional acquisition systems employ end-to-end hardware and software redundancy. Typically, redundancy adds weight, cost, power consumption, and complexity.

  12. Present status and future prospects of heavy ion beams as drivers for ICF

    NASA Astrophysics Data System (ADS)

    Godlove, Terry F.

    1986-01-01

    A candidate driver for a practical inertial fusion reactor system must, among other characteristics, be cost effective and reliable for the parameters required by the fusion target and the remainder of the system. Although the history of large particle accelerators provides abundant evidence of their reliability at high repetition rates, their capital cost for the fusion application has been open to question. Attempts to design cost effective systems began with accelerators based on currently available technology such as RF linacs and storage rings. The West German HIBALL and the Japanese HIBLIC are examples of this initial effort. These designs are sufficiently credible that a strong argument can be made for the heavy ion method in general, but to reduce the cost per unit power it was found necessary to design for large scale, hence high capital cost. Emphasis in the U.S. shifted to newer technologies which offer hope of significant improvement in cost. In this paper the status of various heavy ion driver designs is compared with currently perceived requirements in order to illustrate their potential and assess their development needs.

  13. A standard for test reliability in group research.

    PubMed

    Ellis, Jules L

    2013-03-01

    Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7% of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
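    The numerical example in the abstract implies a simple linear relationship: efficient reliability = 1 - (test-administration share of total cost). Treat this formula as an inference from that single example rather than a restatement of Ellis's full derivation.

```python
def efficient_reliability(admin_cost_fraction):
    """Efficient reliability implied by the abstract's example:
    1 minus the share of total experimental cost spent on test
    administration. Inferred from the single 7% -> .93 example;
    Ellis's derivation may include additional terms."""
    return 1.0 - admin_cost_fraction

# Reproduces the abstract's example: 7% admin cost -> .93
print(round(efficient_reliability(0.07), 2))
```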

  14. Reliability and cost evaluation of small isolated power systems containing photovoltaic and wind energy

    NASA Astrophysics Data System (ADS)

    Karki, Rajesh

    Renewable energy application in electric power systems is growing rapidly worldwide due to enhanced public concerns for adverse environmental impacts and escalation in energy costs associated with the use of conventional energy sources. Photovoltaics and wind energy sources are being increasingly recognized as cost effective generation sources. A comprehensive evaluation of reliability and cost is required to analyze the actual benefits of utilizing these energy sources. The reliability aspects of utilizing renewable energy sources have largely been ignored in the past due to the relatively insignificant contribution of these sources in major power systems, and consequently due to the lack of appropriate techniques. Renewable energy sources have the potential to play a significant role in the electrical energy requirements of small isolated power systems which are primarily supplied by costly diesel fuel. A relatively high renewable energy penetration can significantly reduce the system fuel costs but can also have considerable impact on the system reliability. Small isolated systems routinely plan their generating facilities using deterministic adequacy methods that cannot incorporate the highly erratic behavior of renewable energy sources. The utilization of a single probabilistic risk index has not been generally accepted in small isolated system evaluation despite its utilization in most large power utilities. Deterministic and probabilistic techniques are combined in this thesis using a system well-being approach to provide useful adequacy indices for small isolated systems that include renewable energy. This thesis presents an evaluation model for small isolated systems containing renewable energy sources by integrating simulation models that generate appropriate atmospheric data, evaluate chronological renewable power outputs and combine total available energy and load to provide useful system indices.
A software tool SIPSREL+ has been developed which generates risk, well-being and energy based indices to provide realistic cost/reliability measures of utilizing renewable energy. The concepts presented and the examples illustrated in this thesis will help system planners to decide on appropriate installation sites, the types and mix of different energy generating sources, the optimum operating policies, and the optimum generation expansion plans required to meet increasing load demands in small isolated power systems containing photovoltaic and wind energy sources.

  15. Towards cost-effective reliability through visualization of the reliability option space

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2004-01-01

    In planning a complex system's development there can be many options to improve its reliability. Typically their sum total cost exceeds the budget available, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
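    Selecting a subset of reliability options under a budget is a knapsack-style problem. The sketch below brute-forces it over hypothetical options (names, costs, and benefit numbers are invented for illustration; the reliability models the abstract mentions would supply the real benefit figures):

```python
from itertools import combinations

# Hypothetical reliability-improvement options (all numbers invented):
# (name, cost in $K, failure-probability reduction)
options = [("redundant sensor",    40, 0.12),
           ("extra burn-in tests", 25, 0.08),
           ("radiation shielding", 60, 0.16),
           ("on-board spares",     30, 0.07)]
budget = 100  # $K

# Brute-force search over all subsets that fit the budget,
# picking the one with the largest total reliability benefit.
feasible = (s for r in range(len(options) + 1)
              for s in combinations(options, r)
              if sum(cost for _, cost, _ in s) <= budget)
best = max(feasible, key=lambda s: sum(gain for _, _, gain in s))
print(sorted(name for name, _, _ in best))
```

    Brute force is fine for a handful of options; real option spaces of the size the abstract describes would call for integer programming or heuristic search.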

  16. Regulation of transmission line capacity and reliability in electric networks

    NASA Astrophysics Data System (ADS)

    Celebi, Metin

    This thesis is composed of two essays that analyze the incentives and optimal regulation of a monopolist line owner in providing capacity and reliability. Similar analyses in the economic literature resulted in under-investment by an unregulated line owner when line reliability was treated as an exogenous variable. However, reliability should be chosen on the basis of economic principles as well, taking into account not only engineering principles but also the preferences of electricity users. When reliability is treated as a choice variable, both over- and under-investment by the line owner becomes possible. The result depends on the cross-cost elasticity of line construction and on the interval in which the optimal choices of capacity take place. We present some sufficient conditions that lead to definite results about the incentives of the line owner. We also characterize the optimal regulation of the line owner under incomplete information. Our analysis shows that the existence of a line is justified for the social planner when the reliability of other lines on the network is not too high, or when the marginal cost of generation at the expensive generating plant is high. The expectation of higher demand in the future makes the regulator less likely to build the line if it will be congested and reliability of other lines is high enough. It is always optimal to have a congested line under complete information, but not necessarily under incomplete information.

  17. Monolithic ceramic capacitors for high reliability applications

    NASA Technical Reports Server (NTRS)

    Thornley, E. B.

    1981-01-01

    Monolithic multi-layer ceramic dielectric capacitors are widely used in high reliability applications in spacecraft, launch vehicles, and military equipment. Their relatively low cost, wide range of values, and package styles are attractive features that result in high usage in electronic circuitry in these applications. Design and construction of monolithic ceramic dielectric capacitors, defects that can lead to failure, and methods for defect detection that are being incorporated in military specifications are discussed.

  18. Choosing a reliability inspection plan for interval censored data

    DOE PAGES

    Lu, Lu; Anderson-Cook, Christine Michaela

    2017-04-19

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.
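    To make the "equally spaced probabilities" strategy concrete, the sketch below inverts an assumed Weibull lifetime CDF at evenly spaced probability levels (the distribution and its parameters are illustrative choices, not taken from the paper). With shape < 1, i.e. high infant mortality, the early inspections cluster close together:

```python
import math

def inspection_times_equal_prob(n, shape, scale):
    """Inspection times at equally spaced CDF probabilities for an
    assumed Weibull(shape, scale) lifetime model. Inverts
    F(t) = 1 - exp(-(t/scale)**shape) at p_i = i/(n+1), i = 1..n."""
    return [scale * (-math.log(1 - i / (n + 1))) ** (1 / shape)
            for i in range(1, n + 1)]

# shape < 1 models high infant mortality; note the widening gaps:
times = inspection_times_equal_prob(4, shape=0.8, scale=1000.0)
print([round(t) for t in times])
```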

  19. Choosing a reliability inspection plan for interval censored data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Lu; Anderson-Cook, Christine Michaela

    Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Results show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.

  20. Signal modulation as a mechanism for handicap disposal

    PubMed Central

    Gavassa, Sat; Silva, Ana C.; Gonzalez, Emmanuel; Stoddard, Philip K.

    2012-01-01

    Signal honesty may be compromised when heightened competition provides incentive for signal exaggeration. Some degree of honesty might be maintained by intrinsic handicap costs on signalling or through imposition of extrinsic costs, such as social punishment of low quality cheaters. Thus, theory predicts a delicate balance between signal enhancement and signal reliability that varies with degree of social competition, handicap cost, and social cost. We investigated whether male sexual signals of the electric fish Brachyhypopomus gauderio would become less reliable predictors of body length when competition provides incentives for males to boost electric signal amplitude. As expected, social competition under natural field conditions and in controlled lab experiments drove males to enhance their signals. However, signal enhancement improved the reliability of the information conveyed by the signal, as revealed in the tightening of the relationship between signal amplitude and body length. Signal augmentation in male B. gauderio was independent of body length, and thus appeared not to be curtailed through punishment of low quality (small) individuals. Rather, all individuals boosted their signals under high competition, but those whose signals were farthest from the predicted value under low competition boosted signal amplitude the most. By elimination, intrinsic handicap cost of signal production, rather than extrinsic social cost, appears to be the basis for the unexpected reinforcement of electric signal honesty under social competition. Signal modulation may provide its greatest advantage to the signaller as a mechanism for handicap disposal under low competition rather than as a mechanism for exaggeration of quality under high competition. PMID:22665940

  1. Developing Portfolios of Water Supply Transfers

    NASA Astrophysics Data System (ADS)

    Characklis, G. W.; Kirsch, B. R.; Ramsey, J.; Dillard, K. E.; Kelley, C. T.

    2005-12-01

    Most cities rely on firm water supply capacity to meet demand, but increasing scarcity and supply costs are encouraging greater use of temporary transfers (e.g., spot leases, options). This raises questions regarding how best to coordinate the use of these transfers in meeting cost and reliability objectives. This work combines a hydrologic-water market simulation with an optimization approach to identify portfolios of permanent rights, options and leases that minimize expected costs of meeting a city's annual demand with a specified reliability. Spot market prices are linked to hydrologic conditions and described by monthly lease price distributions which are used to price options via a risk neutral approach. Monthly choices regarding when and how much water to acquire through temporary transfers are made on the basis of anticipatory decision rules related to the ratio of expected supply-to-expected demand. The simulation is linked with an algorithm that uses an implicit filtering search method designed for solution surfaces that exhibit high frequency, low amplitude noise. This simulation-optimization approach is applied to a region that currently supports an active water market, with results suggesting that the use of temporary transfers can reduce expected water supply costs substantially, while still maintaining high reliability levels. Also evaluated are tradeoffs between expected costs and cost variability that occur with variation in a portfolio's distribution of rights, options and leases. While this work represents firm supply capacity as permanent water rights, a similar approach could be used to develop portfolios integrating options and/or leases with hard supply infrastructure.

  2. Repeatability of measurements of removal of mite-infested brood to assess Varroa Sensitive Hygiene

    USDA-ARS?s Scientific Manuscript database

    Varroa Sensitive Hygiene is a useful resistance trait that bee breeders could increase in different populations with cost-effective and reliable tests. We investigated the reliability of a one-week test estimating the changes in infestation of brood introduced into highly selected and unselected co...

  3. Modeling Electricity Sector Vulnerabilities and Costs Associated with Water Temperatures Under Scenarios of Climate Change

    NASA Astrophysics Data System (ADS)

    Macknick, J.; Miara, A.; Brinkman, G.; Ibanez, E.; Newmark, R. L.

    2014-12-01

    The reliability of the power sector is highly vulnerable to variability in the availability and temperature of water resources, including those that might result from potential climatic changes or from competition from other users. In the past decade, power plants throughout the United States have had to shut down or curtail generation due to a lack of available water or from elevated water temperatures. These disruptions in power plant performance can have negative impacts on energy security and can be costly to address. Analysis of water-related vulnerabilities requires modeling capabilities with high spatial and temporal resolution. This research provides an innovative approach to energy-water modeling by evaluating the costs and reliability of a power sector region under policy and climate change scenarios that affect water resource availability and temperatures. This work utilizes results from a spatially distributed river water temperature model coupled with a thermoelectric power plant model to provide inputs into an electricity production cost model that operates on a high spatial and temporal resolution. The regional transmission organization ISO-New England, which includes six New England states and over 32 Gigawatts of power capacity, is utilized as a case study. Hydrological data and power plant operations are analyzed over an eleven year period from 2000-2010 under four scenarios that include climate impacts on water resources and air temperatures as well as strict interpretations of regulations that can affect power plant operations due to elevated water temperatures. Results of these model linkages show how the power sector's reliability and economic performance can be affected by changes in water temperatures and water availability. The effective reliability and capacity value of thermal electric generators are quantified and discussed in the context of current as well as potential future water resource characteristics.

  4. Development of ultracapacitor modules for 42-V automotive electrical systems

    NASA Astrophysics Data System (ADS)

    Jung, Do Yang; Kim, Young Ho; Kim, Sun Wook; Lee, Suck-Hyun

    Two types of ultracapacitor modules have been developed for use as energy-storage devices for 42-V systems in automobiles. The modules show high performance and good reliability in terms of discharge and recharge capability, long-term endurance, and high energy and power. During a 42-V system simulation test of 6-kW power boosting/regenerative braking, the modules demonstrate very good performance. In high-power applications such as 42-V and hybrid vehicle systems, ultracapacitors have many merits compared with batteries, especially with respect to specific power at high rate, thermal stability, charge-discharge efficiency, and cycle-life. Ultracapacitors are also very safe, reliable and environmentally friendly. The cost of ultracapacitors is still high compared with batteries because of the low production scale, but is decreasing very rapidly. It is estimated that the cost of ultracapacitors will decrease to US$ 300 per 42-V module in the near future. Also, the maintenance cost of the ultracapacitor is nearly zero because of its high cycle-life. Therefore, the combined cost of the capacitor and maintenance will be lower than that of batteries in the near future. Overall, comparing performance, price and other parameters of ultracapacitors with batteries, ultracapacitors are the most likely candidate for energy-storage in 42-V systems.

  5. Reliable contact fabrication on nanostructured Bi2Te3-based thermoelectric materials.

    PubMed

    Feng, Shien-Ping; Chang, Ya-Huei; Yang, Jian; Poudel, Bed; Yu, Bo; Ren, Zhifeng; Chen, Gang

    2013-05-14

    A cost-effective and reliable Ni-Au contact on nanostructured Bi2Te3-based alloys for a solar thermoelectric generator (STEG) is reported. The use of MPS SAMs creates a strong covalent binding and more nucleation sites with even distribution for electroplating contact electrodes on nanostructured thermoelectric materials. A reliable high-performance flat-panel STEG can be obtained by using this new method.

  6. Advanced energy system program

    NASA Astrophysics Data System (ADS)

    Trester, K.

    1989-02-01

    The objectives of the program are to design, develop and demonstrate a natural-gas-fueled, highly recuperated, 50 kW Brayton-cycle cogeneration system for commercial, institutional, and multifamily residential applications. Marketing studies have shown that this Advanced Energy System (AES), with its many unique and cost-effective features, has the potential to offer significant reductions in annual electrical and thermal energy costs to the consumer. Specific advantages of the system that result in low cost of ownership are high electrical efficiency (30 percent, HHV), low maintenance, high reliability and long life (20 years).

  7. Organize to manage reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricketts, R.

    An analysis of maintenance costs in hydrocarbon processing industry (HPI) plants has revealed that attitudes and practices of personnel are the major single bottom line factor. In reaching this conclusion, Solomon Associates examined comparative analysis of plant records over the past decade. The authors learned that there was a wide range of performance independent of refinery age, capacity, processing complexity, and location. Facilities of all extremes in these attributes are included in both high-cost and low-cost categories. Those in the lowest quartile of performance posted twice the resource consumption as the best quartile. Furthermore, there was almost no similarity between refineries within a single company. The paper discusses cost versus availability, maintenance spending, two organizational approaches used (repair focused and reliability focused), and organizational style and structure.

  8. Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Maness, Michael; Dykes, Katherine

    Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation and maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator.
    Preliminary analysis of levelized costs of energy indicates that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents for building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet variability in magnet prices and solutions to address reliability issues associated with gearing and brushes can change this outlook. This suggests the need to potentially pursue fundamentally new innovations in generator designs that help avoid high capital costs but still have significant reliability related to performance.

  9. Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sethuraman, Latha; Maness, Michael; Dykes, Katherine

    Compared to land-based applications, offshore wind imposes challenges for the development of next-generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more. The resulting designs are excessively large and/or massive, which is a major impediment to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lighter generators. However, reliability issues associated with the mechanical components in a geared system create significant operation and maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts in this market, and there is a need to review the costs and opportunities of building machines with different types of generators and to examine their competitiveness at the sizes required for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems-engineering generator sizing tool, to estimate the mass, efficiency, and costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use these criteria to optimize a direct-drive, radial-flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator.
Preliminary analysis of levelized cost of energy indicates that for large turbines, the cost of permanent magnets and the reliability issues associated with brushes in electrically excited machines are the biggest deterrents to building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet variability in magnet prices and solutions to the reliability issues associated with gearing and brushes could change this outlook. This suggests a need to pursue fundamentally new innovations in generator designs that avoid high capital costs while retaining high reliability and performance.

  10. Do photovoltaics have a future?

    NASA Technical Reports Server (NTRS)

    Williams, B. F.

    1979-01-01

    There is major concern about the economic practicality of widespread terrestrial use because of the high cost of the photovoltaic arrays themselves. Given their high efficiency, photovoltaic collectors should be among the cheapest forms of energy generation known. Present photovoltaic panels violate the usual trend of lower cost with increasing efficiency because they rely on expensive materials. A medium-technology solution should provide electricity competitive with existing medium- to high-technology energy generators such as oil, coal, gas, and nuclear fission thermal plants. Programs to reduce the cost of silicon and to develop reliable thin-film materials have a realistic chance of producing cost-effective photovoltaic panels.

  11. Novel Low Cost, High Reliability Wind Turbine Drivetrain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chobot, Anthony; Das, Debarshi; Mayer, Tyler

    2012-09-13

    Clipper Windpower, in collaboration with United Technologies Research Center, the National Renewable Energy Laboratory, and Hamilton Sundstrand Corporation, developed a low-cost, deflection-compliant, reliable, and serviceable chain-drive speed increaser. This chain-and-sprocket drivetrain design offers significant breakthroughs in cost and serviceability and addresses the key challenges of current geared and direct-drive systems. The use of gearboxes has proven challenging; the large torques and bending loads associated with use in large multi-MW wind applications have generally limited demonstrated lifetimes to 8-10 years [1]. The high cost of gearbox replacement and the required use of large, expensive cranes can result in gearbox replacement costs on the order of $1M, a significant impact on overall cost of energy (COE). Direct-drive machines eliminate the gearbox, thereby targeting increased reliability and reduced life-cycle cost. However, their slow rotational speeds require very large and costly generators, which also typically depend on expensive rare-earth magnet materials and carry large structural penalties for precise air-gap control. The cost of rare-earth materials has increased 20X in the last 8 years, a key risk to ever realizing the promised cost-of-energy reductions from direct-drive generators. A challenge common to both geared and direct-drive architectures is a limited ability to manage input shaft deflections. The proposed Clipper drivetrain is deflection-compliant, insulating later drivetrain stages and generators from off-axis loads. The system is modular, allowing all key parts to be removed and replaced without the use of a high-capacity crane. Finally, the technology's modularity allows for scalability and many possible drivetrain topologies.
These benefits enable reductions in drivetrain capital cost by 10.0%, levelized replacement and O&M costs by 26.7%, and overall cost of energy by 10.2%. This design was achieved by: (1) performing an extensive optimization study that determined the preliminary cost of all practical chain-drive topologies to ensure the most competitive configuration; (2) conducting detailed analysis of chain dynamics, contact stresses, and wear and efficiency characteristics over the chain's life to ensure accurate physics-based predictions of chain performance; and (3) developing a final product design, including reliability analysis, chain replacement procedures, and bearing and sprocket analysis. This final product configuration was used to develop refined cost-of-energy estimates. Finally, key system risks for the chain drive were defined and a comprehensive risk-reduction plan was created for execution in Phase 2.

  12. Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment

    NASA Technical Reports Server (NTRS)

    Davis, M. R.; Kamins, M.; Mooz, W. E.

    1978-01-01

    A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general aviation avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and shortcomings of different approaches for modeling cost and reliability estimates are discussed, together with the special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing CRM implications in the absence of reliable generalized predictive models.

  13. Reliable, Low-Cost, Low-Weight, Non-Hermetic Coating for MCM Applications

    NASA Technical Reports Server (NTRS)

    Jones, Eric W.; Licari, James J.

    2000-01-01

    Through an Air Force Research Laboratory sponsored STM program, reliable, low-cost, low-weight, non-hermetic coatings for multi-chip module (MCM) applications were developed. Using a combination of Sandia Laboratory ATC-01 test chips, AvanTeco's moisture sensor chips (MSCs), and silicon slices, we have shown that organic and organic/inorganic overcoatings are reliable and practical non-hermetic moisture and oxidation barriers. The use of the MSC and unpassivated ATC-01 test chips provided rapid test results and comparison of the moisture-barrier quality of the overcoatings. The organic coatings studied were Parylene and Cyclotene; the inorganic coatings were Al2O3 and SiO2. The choice of coating(s) depends on the environment to which the device(s) will be exposed. We have defined four (4) classes of environments: Class I (moderate temperature/moderate humidity), Class II (high temperature/moderate humidity), Class III (moderate temperature/high humidity), and Class IV (high temperature/high humidity). By subjecting the components to adhesion, FTIR, temperature-humidity (TH), pressure cooker (PCT), and electrical tests, we have determined that failures can be reduced 50-70% for organic/inorganic-coated components compared to organic-coated components. All materials and equipment used are readily available commercially or are standard in most semiconductor fabrication lines. It is estimated that production cost for the developed technology would range from $1-10/module, compared to $20-200 for hermetically sealed packages.

  14. Achieving High Reliability with People, Processes, and Technology.

    PubMed

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  15. The welfare effects of integrating renewable energy into electricity markets

    NASA Astrophysics Data System (ADS)

    Lamadrid, Alberto J.

    The challenges of deploying more renewable energy sources on an electric grid are caused largely by their inherent variability. In this context, energy storage can help make the electric delivery system more reliable by mitigating this variability. This thesis analyzes a series of models for procuring electricity and ancillary services, for both individuals and social planners, with high penetrations of stochastic wind energy. The results obtained for an individual decision maker using stochastic optimization are ambiguous, with closed-form solutions dependent on technological parameters and no consideration of system reliability. The social planner models correctly reflect the effect of system reliability and, in the case of a Stochastic, Security-Constrained Optimal Power Flow (S-SC-OPF or SuperOPF), determine reserve capacity endogenously so that system reliability is maintained. A single-period SuperOPF shows that including ramping costs in the objective function leads to more wind spilling and increased capacity requirements for reliability. However, this model does not reflect the intertemporal tradeoffs of using Energy Storage Systems (ESS) to improve reliability and mitigate wind variability. The results with the multiperiod SuperOPF determine the optimum use of storage for a typical day and compare the effects of collocating ESS at wind sites with the same amount of storage (deferrable demand) located at demand centers. The collocated ESS has slightly lower operating costs and spills less wind generation compared to deferrable demand, but the total amount of conventional generating capacity needed for system adequacy is higher. In terms of total system costs, which include the capital cost of conventional generating capacity, the cost with deferrable demand is substantially lower because the daily demand profile is flattened and less conventional generation capacity is then needed for reliability purposes.
The analysis also demonstrates that the optimum daily pattern of dispatch and reserves is seriously distorted if the stochastic characteristics of wind generation are ignored.

  16. Developing an Internet-based Communication System for Residency Training Programs

    PubMed Central

    Fortin, Auguste H; Luzzi, Kristina; Galaty, Leslie; Wong, Jeffrey G; Huot, Stephen J

    2002-01-01

    Administrative communication is increasingly challenging for residency programs as the number of training sites expands. The Internet provides a cost-effective opportunity to address these needs. Using the World Wide Web, we developed a single, reliable, accurate, and accessible source of administrative information for residents, faculty, and staff in a multisite internal medicine residency at reduced costs. Evaluation of the effectiveness of the website was determined by tracking website use, materials and personnel costs, and resident, staff, and faculty satisfaction. Office supply and personnel costs were reduced by 89% and personnel effort by 85%. All users were highly satisfied with the web communication tool and all reported increased knowledge of program information and a greater sense of “connectedness.” We conclude that an internet-based communication system that provides a single, reliable, accurate, and accessible source of information for residents, faculty, and staff can be developed with minimum resources and reduced costs. PMID:11972724

  17. Technology developments toward 30-year-life of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1984-01-01

    As part of the United States National Photovoltaics Program, the Jet Propulsion Laboratory's Flat-Plate Solar Array Project (FSA) has maintained a comprehensive reliability and engineering sciences activity aimed at understanding the reliability attributes of terrestrial flat-plate photovoltaic arrays and at deriving the analysis and design tools necessary to achieve module designs with a 30-year useful life. The considerable progress to date stemming from the ongoing reliability research is discussed, and the major areas requiring continued research are highlighted. The result is an overview of the total array reliability problem and of the available means of achieving high reliability at minimum cost.

  18. DPSSL and FL pumps based on 980-nm telecom pump laser technology: changing the industry

    NASA Astrophysics Data System (ADS)

    Lichtenstein, Norbert; Schmidt, Berthold E.; Fily, Arnaud; Weiss, Stefan; Arlt, Sebastian; Pawlik, Susanne; Sverdlov, Boris; Muller, Jurgen; Harder, Christoph S.

    2004-06-01

    Diode-pumped solid-state lasers (DPSSL) and fiber lasers (FL) are expected to become the dominant very-high-power laser systems in industrial environments. Today, ranging from 100 W to 5-10 kW in light output power, their applications spread from biomedical and sensing to material processing. The key driver for the wide adoption of such systems is a competitive ratio of cost, performance, and reliability. High-power, highly reliable broad-area laser diodes and laser diode bars with excellent performance at the relevant wavelengths can further improve this ratio. In this communication we show that this can be achieved by leveraging the tremendous improvements in reliability and performance, together with the high-volume, low-cost manufacturing capacity, established during the "telecom bubble." Today's generations of 980-nm narrow-stripe laser diodes deliver up to 1.8 W of CW output power while fulfilling stringent telecom reliability requirements at operating conditions. Single-emitter broad-area lasers deliver in excess of 11 W CW, while similar 940-nm laser bars yield more than 160 W output power (CW) at 200 A. In addition, introducing telecom-grade AuSn solder mounting technology on expansion-matched subassemblies enables excellent reliability. Degradation rates of less than 1% over 1000 h at 60 A are observed for both 808-nm and 940-nm laser bars, even under harsh intermittent operation conditions.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderwiel, Scott A; Wilson, Alyson G; Graves, Todd L

    Both the U.S. Department of Defense (DoD) and Department of Energy (DOE) maintain weapons stockpiles: items like bullets, missiles, and bombs that have already been produced and are being stored until needed. Ideally, these stockpiles maintain high reliability over time. To assess reliability, a surveillance program is implemented, in which units are periodically removed from the stockpile and tested. The most definitive tests typically destroy the weapons, so a given unit is tested only once. Surveillance managers need to decide how many units should be tested, how often they should be tested, what tests should be done, and how the resulting data are used to estimate the stockpile's current and future reliability. These issues are particularly critical from a planning perspective: given what has already been observed and our understanding of the mechanisms of stockpile aging, what is an appropriate and cost-effective surveillance program? Surveillance programs are costly, broad, and deep, especially in the DOE, where the US nuclear weapons surveillance program must 'ensure, through various tests, that the reliability of nuclear weapons is maintained' in the absence of full-system testing (General Accounting Office, 1996). The DOE program consists primarily of three types of tests: nonnuclear flight tests, which involve the actual dropping or launching of a weapon from which the nuclear components have been removed; and nonnuclear and nuclear systems laboratory tests, which detect defects due to aging, manufacturing, and design of the nonnuclear and nuclear portions of the weapons. Fully integrated analysis of the suite of nuclear weapons surveillance data is an ongoing area of research (Wilson et al., 2007). This paper introduces a simple model that captures high-level features of stockpile reliability over time and can be used to answer broad policy questions about surveillance programs.
Our intention is to provide a framework that generates tractable answers, integrating expert knowledge and high-level summaries of surveillance data to support decisions about appropriate trade-offs between the cost of data and the precision of stockpile reliability estimates.

  20. Reliability of adherence and competence assessment in cognitive behavioral therapy: influence of clinical experience.

    PubMed

    Weck, Florian; Hilling, Christine; Schermelleh-Engel, Karin; Rudari, Visar; Stangier, Ulrich

    2011-04-01

    The use of highly experienced expert judges has been suggested for the assessment of therapists' adherence and competence. However, such an approach implies high costs, and it is questionable whether only experts can evaluate therapists' adherence and competence reliably. To test this, 4 judges evaluated therapist adherence and competence in 30 randomly selected videotapes of cognitive therapy sessions for depression. Two judges had extensive clinical experience (experts), whereas the other 2 did not (novices). Novices evaluated an aggregated adherence and competence measure with high reliability; however, several individual adherence and competence aspects were not assessed with satisfactory reliability by novices. Although the adherence ratings of experts and novices showed high concordance, the concordance of competence ratings was only moderate. The results reveal that therapists' adherence, with some restrictions, can be evaluated satisfactorily by trained novices, but not their competence.

  1. Thick resist for MEMS processing

    NASA Astrophysics Data System (ADS)

    Brown, Joe; Hamel, Clifford

    2001-11-01

    The need for technical innovation is always present in today's economy. Microfabrication methods have evolved in support of the demand for smaller and faster integrated circuits, with price-performance improvements always in the scope of the manufacturing design engineer. The dispersion of processing technology spans well beyond IC fabrication today, with batch fabrication and wafer-scale processing lending advantages to MEMS applications from biotechnology to consumer electronics, from oil exploration to aerospace. Today there is clear demand for innovative, enabling processing techniques that only a few years ago appeared too costly or unreliable. In high-volume applications, where yield and cost improvements are measured in fractions of a percent, it is imperative to have process technologies that produce consistent results. Only a few years ago, thick resist coatings were limited to thicknesses of less than 20 microns; factors such as uniformity, edge bead, and the need for multiple coatings made high-volume production impossible. New developments in photoresist formulation, combined with advanced coating equipment that closely controls process parameters, have enabled thick photoresist coatings of 70 microns with acceptable uniformity and edge bead in one pass. Packaging of microelectronic and micromechanical devices is often a significant cost factor and a reliability issue for high-volume, low-cost production. Technologies such as flip-chip assembly provide cost and reliability improvements over wire-bond techniques; their processing demands dimensional control and would offer significant cost savings if compatible with mainstream technologies. Thick photoresist layers with good sidewall control would allow wafer-bumping technologies to penetrate the barriers to yield and production where technology cost is the overriding issue.
Single-pass processing is paramount to the manufacturability of packaging technology; uniformity and edge-bead control define the success of process implementation. Today, advanced packaging solutions are created with thick photoresist coatings. The techniques and results will be presented.

  2. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple and reliable and that exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability, in terms of autonomy, at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data, and is more reliable when compared with a conventional design using monthly average daily load and insolation.
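
    The core of such a sizing procedure can be sketched with generic first-order formulas. The sketch below is illustrative only: the function name, the specific relations, and all parameter values are assumptions, not the paper's empirical formulae.

```python
# Illustrative first-order sizing of a standalone PV system: array area
# from daily energy demand, battery capacity from days of autonomy.
# All formulas and default values are generic assumptions.

def size_sapv(daily_load_wh, insolation_kwh_m2, panel_eff=0.17,
              derate=0.75, autonomy_days=3, dod=0.6, batt_v=48):
    """Return (array_area_m2, battery_capacity_ah)."""
    # Array area: daily demand / (irradiation * efficiency * system derating)
    area = daily_load_wh / (insolation_kwh_m2 * 1000 * panel_eff * derate)
    # Battery sized to carry the load through `autonomy_days` sunless days,
    # limited by the allowable depth of discharge (dod).
    capacity_ah = daily_load_wh * autonomy_days / (dod * batt_v)
    return area, capacity_ah

area, cap = size_sapv(daily_load_wh=2400, insolation_kwh_m2=5.0)
print(f"array ~ {area:.1f} m2, battery ~ {cap:.0f} Ah")
```

    A design loop would then sweep the array area (or ALR) and pick the cheapest configuration meeting the target LOLP.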

  3. Balancing reliability and cost to choose the best power subsystem

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.
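
    The trade-off this model captures can be illustrated with a minimal sketch; all cost and reliability figures below are invented for illustration, not taken from the paper.

```python
# Total expected cost = basic subsystem cost + P(failure) * cost of failure.
# Redundant units are assumed to fail independently (parallel redundancy).
# All numbers are invented for illustration.

def expected_total_cost(unit_cost, unit_rel, n_redundant, failure_cost):
    subsystem_cost = n_redundant * unit_cost
    # System fails only if every redundant unit fails.
    system_rel = 1 - (1 - unit_rel) ** n_redundant
    return subsystem_cost + (1 - system_rel) * failure_cost

# Compare 1..3 redundant power units at $5M each with 0.9 reliability,
# against a $100M cost of mission loss.
for n in (1, 2, 3):
    print(n, expected_total_cost(5.0, 0.9, n, 100.0))
```

    With these numbers the expected total cost is lowest at two redundant units, illustrating the paper's point that minimum cost and maximum reliability/redundancy are not generally equivalent.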

  4. Medical image digital archive: a comparison of storage technologies

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy; Hutchings, Matt

    1998-07-01

    A cost-effective, high-capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever-increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. New technologies will be discussed, such as DVD and high-performance tape. Price and performance comparisons will be made at different archive capacities, and the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high-performance RAID disk storage devices to high-capacity Nearline storage devices will be introduced as a viable way to minimize overall storage costs for an archive.

  5. Inhibition in task switching: The reliability of the n - 2 repetition cost.

    PubMed

    Kowalczyk, Agnieszka W; Grange, James A

    2017-12-01

    The n - 2 repetition cost seen in task switching is the effect of slower response times when performing a recently completed task (e.g. an ABA sequence) compared to performing a task that was not recently completed (e.g. a CBA sequence). This cost is thought to reflect cognitive inhibition of task representations, and as such the n - 2 repetition cost has begun to be used as an assessment of individual differences in inhibitory control; however, the reliability of this measure has not been investigated in a systematic manner. The current study addressed this important issue. Seventy-two participants performed three task switching paradigms; participants were also assessed on rumination traits and processing speed, measures of individual differences potentially modulating the n - 2 repetition cost. We found significant n - 2 repetition costs for each paradigm. However, split-half reliability tests revealed that this cost was not reliable at the individual-difference level. Neither rumination tendencies nor processing speed predicted this cost. We conclude that the n - 2 repetition cost is not reliable as a measure of individual differences in inhibitory control.
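
    The cost itself is straightforward to compute from a trial sequence: mean response time on lag-2 repetition (ABA) trials minus mean response time on non-repetition (CBA) trials. The sketch below uses fabricated task labels and response times; the skipping of immediate repeats is an assumption about the paradigm, not a detail from the abstract.

```python
# n-2 repetition cost: mean RT on ABA trials minus mean RT on CBA trials.
# Task labels and RTs below are fabricated example data.

def n2_repetition_cost(tasks, rts):
    aba, cba = [], []
    for i in range(2, len(tasks)):
        if tasks[i] == tasks[i - 1]:
            continue  # assumed: immediate repeats excluded from scoring
        (aba if tasks[i] == tasks[i - 2] else cba).append(rts[i])
    return sum(aba) / len(aba) - sum(cba) / len(cba)

tasks = ["A", "B", "A", "C", "B", "A", "B", "A"]
rts   = [650,  700, 720, 640, 690, 660, 700, 730]
print(n2_repetition_cost(tasks, rts))
```

    Split-half reliability is then assessed by computing this cost separately on two halves of each participant's trials and correlating the two sets of costs across participants.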

  6. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects, entailing long-term construction, high risks, etc., implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. To develop the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors assembled the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving the quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping, improving cost-benefit project analysis given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in the implementation of high-rise projects.
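
    The quantitative core of such an analysis, discounting project cash flows at the weighted average cost of capital (WACC) and checking NPV sensitivity to a critical variable, can be sketched as follows. All figures are invented; this is a generic illustration of the named methods, not the article's algorithm.

```python
# WACC-discounted cost-benefit core: compute the discount rate from the
# capital structure, then the project NPV, then test sensitivity to the
# rate. All numbers are invented for illustration.

def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    total = equity + debt
    return (equity / total) * cost_equity + \
           (debt / total) * cost_debt * (1 - tax_rate)

def npv(rate, cash_flows):
    # cash_flows[0] is the (negative) initial investment at t = 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

r = wacc(equity=40.0, debt=60.0, cost_equity=0.15, cost_debt=0.08, tax_rate=0.2)
flows = [-100.0] + [18.0] * 10  # 10 years of net inflows
print(round(r, 4), round(npv(r, flows), 2))
print(round(npv(0.15, flows), 2))  # sensitivity: higher discount rate
```

    Sensitivity analysis of a critical ratio then amounts to re-evaluating NPV while varying one input (here the discount rate) and observing where the project crosses from viable to unviable.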

  7. High efficiency low cost monolithic module for SARSAT distress beacons

    NASA Technical Reports Server (NTRS)

    Petersen, Wendell C.; Siu, Daniel P.

    1992-01-01

    The program objectives were to develop a highly efficient, low cost RF module for SARSAT beacons; achieve significantly lower battery current drain, amount of heat generated, and size of battery required; utilize MMIC technology to improve efficiency, reliability, packaging, and cost; and provide a technology database for GaAs based UHF RF circuit architectures. Presented in viewgraph form are functional block diagrams of the SARSAT distress beacon and beacon RF module as well as performance goals, schematic diagrams, predicted performances, and measured performances for the phase modulator and power amplifier.

  8. Regenerating the Natural Longleaf Pine Forest

    Treesearch

    William D. Boyer

    1979-01-01

    Natural regeneration by the shelterwood system is a reliable, low-cost alternative for existing longleaf pine (Pinus palustris Mill.) forests. The system is well suited to the natural attributes and requirements of the species. It may be attractive to landowners wishing to retain a natural forest and avoid high costs of site preparation and...

  9. On the Path to SunShot - The Role of Advancements in Solar Photovoltaic Efficiency, Reliability, and Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodhouse, Michael; Jones-Albertus, Rebecca; Feldman, David

    2016-05-01

    Although tremendous progress has been made in reducing the cost of PV systems, additional LCOE reductions of 40%–50% between 2015 and 2020 will be required to reach the SunShot Initiative’s targets (see Woodhouse et al. 2016). Understanding the tradeoffs between installed prices and other PV system characteristics—such as module efficiency, module degradation rate, and system lifetime—is vital. For example, with 29%-efficient modules and high reliability (a 50-year lifetime and a 0.2%/year module degradation rate), a residential PV system could achieve the SunShot LCOE goal with modules priced at almost $1.20/W. But change the lifetime to 10 years and the degradation rate to 2%/year, and the system would need those very high-efficiency modules at zero cost to achieve the same LCOE. Although these examples are extreme, they serve to illustrate the wide range of technological combinations that could help drive PV toward the LCOE goals. SunShot’s PV roadmaps illustrate specific potential pathways to the target cost reductions.
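
    The lifetime and degradation-rate trade-off can be made concrete with a back-of-the-envelope LCOE calculation. The simplified formula and every parameter value below are assumptions for illustration; this is not SunShot's cost model.

```python
# Simplified LCOE for a 1 kW system: discounted lifetime costs divided by
# discounted lifetime energy, with annual output derated by module
# degradation. All parameter values are invented for illustration.

def lcoe(capex_per_w, opex_per_kw_yr, cf, degradation, lifetime, discount):
    """$ per kWh for a 1 kW system."""
    cost = capex_per_w * 1000.0  # upfront capital cost
    energy = 0.0
    for year in range(1, lifetime + 1):
        d = (1 + discount) ** year
        cost += opex_per_kw_yr / d
        # kWh from 1 kW at capacity factor cf, derated by degradation
        energy += 8760 * cf * (1 - degradation) ** (year - 1) / d
    return cost / energy

# Long-lived, slowly degrading modules vs short-lived, fast-degrading ones
print(lcoe(1.20, 20.0, 0.20, 0.002, 50, 0.06))
print(lcoe(1.20, 20.0, 0.20, 0.02, 10, 0.06))
```

    Even at identical module prices, the durable system produces a markedly lower LCOE, mirroring the abstract's 50-year vs 10-year comparison.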

  10. Reliability model for ductile hybrid FRP rebar using randomly dispersed chopped fibers

    NASA Astrophysics Data System (ADS)

    Behnam, Bashar Ramzi

    Fiber-reinforced polymer composites, or simply FRP composites, have become more attractive to civil engineers in the last two decades due to their unique mechanical properties. However, obstacles such as low elastic modulus, non-ductile behavior, the high cost of the fibers, high manufacturing costs, and the absence of rigorous characterization of the uncertainties in the mechanical properties restrict the use of these composites. When FRP composites are used to develop reinforcing rebars that replace conventional steel in concrete structural members, a large benefit can be achieved, since FRP materials do not corrode. Two FRP rebar models are proposed that use multiple types of fibers to achieve ductility; chopped fibers are used to reduce manufacturing costs. To reach the optimum fractional volume of each type of fiber, to minimize the cost of the proposed rebars, and to achieve a safe design that accounts for uncertainties in the materials and section geometry, appropriate material resistance factors have been developed and a Reliability-Based Design Optimization (RBDO) has been conducted for the proposed schemes.

  11. High day-to-day reliability in lower leg volume measured by water displacement.

    PubMed

    Pasley, Jeffrey D; O'Connor, Patrick J

    2008-07-01

    The day-to-day reliability of lower leg volume is poorly documented. This investigation determined the day-to-day reliability of lower leg volume (soleus and gastrocnemius) measured using water displacement. Thirty young adults (15 men and 15 women) had their right lower leg volume measured by water displacement on five separate occasions. The participants performed normal activities of daily living and were measured at the same time of day after being seated for 30 min. The results revealed high day-to-day reliability for lower leg volume. The mean percentage change in lower leg volume across days compared to day 1 ranged between 0 and 0.37%. The mean within-subjects coefficient of variation in lower leg volume was 0.72%, and the coefficient of variation for the entire sample across days ranged from 5.66 to 6.32%. A two-way mixed-model intraclass correlation (30 subjects × 5 days) showed that the lower leg volume measurement was highly reliable (ICC = 0.972). Foot and total lower leg volumes showed similarly high reliability. Water displacement offers a cost-effective and reliable solution for the measurement of lower leg edema across days.
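The reliability statistic reported here, a two-way mixed-model single-measure ICC, is computed from a subjects × days table of measurements. A minimal sketch of the consistency form often labeled ICC(3,1), run on synthetic data (the volume values below are invented for illustration, not the study's data):

```python
import random

def icc_consistency(data):
    """ICC(3,1): two-way mixed model, single measure, consistency.
    data[i][j] = measurement for subject i on occasion j."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    day_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_day = n * sum((m - grand) ** 2 for m in day_means)
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_tot - ss_subj - ss_day          # residual sum of squares
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

random.seed(1)
# 30 subjects x 5 days: stable per-subject volumes with small day-to-day noise
data = [[random.gauss(3000 + 200 * i, 20) for _ in range(5)] for i in range(30)]
icc = icc_consistency(data)   # close to 1 when between-subject spread >> noise
```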

  12. Development and Testing of an Inflatable, Rigidizable Space Structure Experiment

    DTIC Science & Technology

    2006-03-01

    successful, including physical dimension, weight, and cost. Inflatable structures have the potential to achieve greater efficiency in all of these...potential for low cost, high mechanical packaging efficiency, deployment reliability and low weight (13). The term inflatable structure indicates that a...back-up inflation gas a necessity for long term success. This addition can be very costly in terms of volume, weight, and expense due to added or

  13. An Analysis of Failure Handling in Chameleon, A Framework for Supporting Cost-Effective Fault Tolerant Services

    NASA Technical Reports Server (NTRS)

    Haakensen, Erik Edward

    1998-01-01

    The desire for low-cost reliable computing is increasing. Most current fault-tolerant computing solutions are not very flexible, i.e., they cannot adapt to the reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework for supporting cost-effective, adaptable, networked fault-tolerant services. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information was gained about fault scenarios from which Chameleon cannot recover. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple-failure scenarios.
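The abstract does not show the DEPEND simulation itself; as a rough illustration of how fault-injection experiments estimate detection and recovery overhead, here is a toy sketch with entirely hypothetical fault rates and timing parameters (not Chameleon's):

```python
import random

def simulate(n_tasks=1000, fault_rate=0.05, task_s=10.0,
             detect_s=0.5, recover_s=2.0, seed=42):
    """Toy fault-injection run: each task may suffer one fault; lost work
    is detected after detect_s, recovered after recover_s, then redone.
    Returns fractional overhead versus a fault-free run."""
    rng = random.Random(seed)
    base = n_tasks * task_s               # fault-free execution time
    total = 0.0
    for _ in range(n_tasks):
        if rng.random() < fault_rate:
            lost = rng.uniform(0.0, task_s)   # work lost at the fault point
            total += lost + detect_s + recover_s + task_s  # redo whole task
        else:
            total += task_s
    return (total - base) / base

overhead = simulate()   # modest overhead for a 5% fault rate
```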

  14. Reliability of two social cognition tests: The combined stories test and the social knowledge test.

    PubMed

    Thibaudeau, Élisabeth; Cellard, Caroline; Legendre, Maxime; Villeneuve, Karèle; Achim, Amélie M

    2018-04-01

    Deficits in social cognition are common in psychiatric disorders. Validated social cognition measures with good psychometric properties are necessary to assess and target social cognitive deficits. Two recent social cognition tests, the Combined Stories Test (COST) and the Social Knowledge Test (SKT), respectively assess theory of mind and social knowledge. Previous studies have shown good psychometric properties for these tests, but their test-retest reliability has never been documented. The aim of this study was to evaluate the test-retest reliability and the inter-rater reliability of the COST and the SKT. The COST and the SKT were administered twice to a group of forty-two healthy adults, with a delay of approximately four weeks between the assessments. Excellent test-retest reliability was observed for the COST, and good test-retest reliability was observed for the SKT. There was no evidence of a practice effect. Furthermore, excellent inter-rater reliability was observed for both tests. This study shows good reliability of the COST and the SKT that adds to the good validity previously reported for these two tests. These good psychometric properties thus support the COST and the SKT as adequate measures for the assessment of social cognition. Copyright © 2018. Published by Elsevier B.V.

  15. Sintered tantalum carbide coatings on graphite substrates: Highly reliable protective coatings for bulk and epitaxial growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, Daisuke; Suzumura, Akitoshi; Shigetoh, Keisuke

    2015-02-23

    Highly reliable, low-cost protective coatings have been sought for use in crucibles and susceptors for bulk and epitaxial film growth processes involving wide-bandgap materials. Here, we propose a production technique for ultra-thick (50–200 μm) tantalum carbide (TaC) protective coatings on graphite substrates, which consists of TaC slurry application and subsequent sintering processes, i.e., a wet ceramic process. Structural analysis of the sintered TaC layers indicated that they have a dense granular structure containing coarse grains with sizes of 10–50 μm. Furthermore, no cracks or pinholes penetrated through the layers, i.e., the TaC layers are highly reliable protective coatings. The analysis also indicated that no plastic deformation occurred during the production process, and that the non-textured crystalline orientation of the TaC layers is the origin of their high reliability and durability. The TaC-coated graphite crucibles were tested in an aluminum nitride (AlN) sublimation growth process, which involves extremely corrosive conditions, and demonstrated practical reliability and durability in the AlN growth process. The application of TaC-coated graphite materials to crucibles and susceptors for use in bulk AlN single crystal growth, bulk silicon carbide (SiC) single crystal growth, chemical vapor deposition of epitaxial SiC films, and metal-organic vapor phase epitaxy of group-III nitrides will lead to further improvements in crystal quality and reduced processing costs.

  16. Three phase power conversion system for utility interconnected PV applications

    NASA Astrophysics Data System (ADS)

    Porter, David G.

    1999-03-01

    Omnion Power Engineering Corporation has developed a new three phase inverter that improves the cost, reliability, and performance of three phase utility-interconnected photovoltaic inverters. The inverter uses a new, high-manufacturing-volume IGBT bridge that has better thermal performance than previous designs. A custom, easily manufactured enclosure was designed. Controls were simplified to increase reliability while maintaining important user features.

  17. Application of exercise ECG stress test in the current high cost modern-era healthcare system.

    PubMed

    Vaidya, Gaurang Nandkishor

    The exercise electrocardiogram (ECG) stress test is widely available, less resource-intensive, and lower in cost than imaging alternatives, and it involves no radiation. In the presence of a normal baseline ECG, an exercise ECG test can generate a reliable and reproducible result almost comparable to Technetium-99m sestamibi perfusion imaging. Exercise ECG changes, when combined with other clinical parameters obtained during the test, have the potential to allow effective redistribution of scarce resources by excluding low-risk patients with significant accuracy. As we look toward a future of rising healthcare costs, increased prevalence of cardiovascular disease, and the need for proper allocation of limited resources, the exercise ECG test offers low-cost, vital, and reliable disease interpretation. This article highlights the physiology of the exercise ECG test, patient selection, and effective interpretation, describes previously reported scores, and discusses their clinical application in today's clinical practice. Copyright © 2017. Published by Elsevier B.V.
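One widely cited score of the kind this article reviews is the Duke Treadmill Score; the abstract does not say which scores it covers, so its use here is illustrative. A minimal sketch using the published formula and the conventional risk cutoffs:

```python
def duke_treadmill_score(exercise_min, st_deviation_mm, angina_index):
    """Duke Treadmill Score = exercise time (min, Bruce protocol)
    - 5 x max ST deviation (mm) - 4 x angina index (0 none, 1 non-limiting,
    2 test-limiting)."""
    return exercise_min - 5.0 * st_deviation_mm - 4.0 * angina_index

def risk_category(score):
    """Conventional DTS strata: >= +5 low, -10..+4 moderate, <= -11 high."""
    if score >= 5:
        return "low"
    if score >= -10:
        return "moderate"
    return "high"

# Example: 9 min of exercise, 1 mm ST depression, no angina
category = risk_category(duke_treadmill_score(9.0, 1.0, 0))  # "moderate"
```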

  18. Deterministic Ethernet for Space Applications

    NASA Astrophysics Data System (ADS)

    Fidi, C.; Wolff, B.

    2015-09-01

    Typical spacecraft systems are distributed in order to achieve the required reliability and availability targets of the mission. However, the requirements on these systems differ for launchers, satellites, human space flight, and exploration missions. Launchers typically require high reliability over very short mission times, whereas satellites or space exploration missions require very high availability over very long mission times. Comparing the distributed systems of launchers with those of satellites shows very fast reaction times in launchers versus much slower ones in satellite applications. Human space flight missions are perhaps the most challenging with respect to reliability and availability, since human lives are involved and mission times can be very long, e.g., on the ISS. The reaction times of these vehicles can also become challenging during mission scenarios such as landing or re-entry, leading to very fast control loops. In these different applications, more and more autonomous functions are required to fulfil the needs of current and future missions. This autonomy leads to new requirements for increased performance, determinism, reliability, and availability. At the same time, the pressure to reduce the cost of electronic components in space applications is increasing, leading to the use of more and more COTS components, especially for launchers and LEO satellites. This requires a technology that can provide a cost-competitive solution both for the highly reliable and available deep-space market and for the low-cost “new space” market. Future spacecraft communication standards therefore have to be much more flexible, scalable, and modular to deal with these upcoming challenges. These requirements can only be fulfilled by open standards used across industries, which reduce lifecycle costs and increase performance.
The use of a communication network that fulfills these requirements will be essential for such spacecraft to allow use in launcher, satellite, human space flight, and exploration missions. Using one technology and the related infrastructure for these different applications will lead to a significant reduction of complexity and to significant savings in size, weight, and power while increasing the performance of the overall system. The paper focuses on the use of TTEthernet technology for launchers, satellites, and human spaceflight, and demonstrates the scalability of the technology for the different applications. The data used are derived from the ESA TRP 7594 on “Reliable High-Speed Data Bus/Network for Safety-Oriented Missions”.
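Determinism in time-triggered networks such as TTEthernet comes from offline schedules in which periodic frame transmissions must never overlap on a shared link. A minimal sketch of such an overlap check (the frame tuples and hyperperiod below are hypothetical, not drawn from the TTEthernet specification):

```python
def frames_conflict(a, b, cycle):
    """True if two periodic time-triggered frames overlap on a shared link.
    Each frame is (offset, duration, period), all in the same time unit;
    periods are assumed to divide the hyperperiod `cycle`."""
    for start_a in range(a[0], cycle, a[2]):
        for start_b in range(b[0], cycle, b[2]):
            # standard interval-overlap test for the two transmission windows
            if start_a < start_b + b[1] and start_b < start_a + a[1]:
                return True
    return False

# Two 2-unit frames with period 10 over a 20-unit hyperperiod:
ok_pair = frames_conflict((0, 2, 10), (5, 2, 10), 20)   # disjoint slots
bad_pair = frames_conflict((0, 2, 10), (1, 2, 10), 20)  # overlapping slots
```

A schedule builder would run this check over every frame pair on every link before accepting a schedule.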

  19. Starship Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2009-01-01

    The design and mass cost of a starship and its life support system are investigated. The mission plan for a multigenerational interstellar voyage to colonize a new planet is used to describe the starship design, including the crew habitat, accommodations, and life support. Only current technology is assumed. Highly reliable life support systems can be provided with reasonably small additional mass, suggesting that they can support long-duration missions. Bioregenerative life support, growing crop plants that provide food, water, and oxygen, has been thought to need less mass than providing stored food for long-duration missions. The large initial mass of hydroponics systems is paid for over time by saving the mass of stored food. However, the yearly logistics mass required to support a bioregenerative system exceeds the mass of food solids it produces, so that supplying stored dehydrated food always requires less mass than bioregenerative food production. A mixed system that grows about half the food and supplies the other half dehydrated has advantages that allow it to break even with stored dehydrated food in about 66 years. However, moderate increases in the hydroponics system mass to achieve high reliability, such as adding spares that double the system mass and replacing the initial system every 100 years, increase the mass cost of bioregenerative life support. In this case, the high-reliability half-growing, half-supplying system does not break even for 389 years. An even higher-reliability half-and-half system, with three times the original system mass and replacement of the system every 50 years, never breaks even. Growing food for starship life support requires more mass than providing dehydrated food, even for multigenerational voyages of hundreds of years. The benefits of growing some food may justify the added mass cost. Much more efficient recycling food production is wanted but may not be possible. 
A single multigenerational interstellar voyage to colonize a new planet would have a cost similar to that of the Apollo program. Cost is reduced if a small crew travels slowly and lands with minimal equipment. We can go to the stars!
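The break-even argument reduces to simple arithmetic: the initial hydroponics mass is repaid only if the mass of stored food displaced each year exceeds the yearly logistics mass the system consumes. A sketch with illustrative numbers chosen to reproduce the 66-year figure (the paper's actual mass data are not given in this abstract):

```python
def breakeven_years(initial_mass, yearly_logistics, yearly_food_mass):
    """Years until a food-growing system's launch mass is repaid by the
    stored food it displaces; None if logistics outweigh the food grown."""
    savings = yearly_food_mass - yearly_logistics  # net mass saved per year
    if savings <= 0:
        return None                                # never breaks even
    return initial_mass / savings

# Hypothetical half-grown/half-stored system (illustrative numbers only, kg)
years = breakeven_years(3300.0, 450.0, 500.0)      # 66.0 years

# If yearly logistics exceed the food mass produced, break-even never occurs,
# matching the abstract's conclusion for full bioregenerative systems
never = breakeven_years(3300.0, 600.0, 500.0)      # None
```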

  20. Proposed Reliability/Cost Model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1982-01-01

    New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.
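A cost-estimating relationship of the kind such a model relies on is typically a fitted curve in which cost rises steeply as the failure probability is driven down. The power-law form and exponent below are hypothetical placeholders for illustration, not the CERs developed in this work:

```python
def reliability_improvement_cost(base_cost, r_current, r_target, b=0.5):
    """Hypothetical CER: cost scales with a power of the reduction in
    failure probability, cost = base * ((1 - r_current)/(1 - r_target))**b."""
    return base_cost * ((1.0 - r_current) / (1.0 - r_target)) ** b

# Pushing reliability from 0.90 to 0.99 costs more than pushing it to 0.95
c_95 = reliability_improvement_cost(1.0, 0.90, 0.95)
c_99 = reliability_improvement_cost(1.0, 0.90, 0.99)
```

The steepening cost as reliability approaches 1 is what makes subsystem-level tradeoffs, rather than uniform reliability targets, cost-effective.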

  1. Advanced Launch System advanced development oxidizer turbopump program: Technical implementation plan

    NASA Technical Reports Server (NTRS)

    Ferlita, F.

    1989-01-01

    The Advanced Launch Systems (ALS) Advanced Development Oxidizer Turbopump Program has designed, fabricated and demonstrated a low cost, highly reliable oxidizer turbopump for the Space Transportation Engine that minimizes the recurring cost for the ALS engines. Pratt and Whitney's (P and W's) plan for integrating the analyses, testing, fabrication, and other program efforts is addressed. This plan offers a comprehensive description of the total effort required to design, fabricate, and test the ALS oxidizer turbopump. The proposed ALS oxidizer turbopump reduces turbopump costs over current designs by taking advantage of design simplicity and state-of-the-art materials and producibility features without compromising system reliability. This is accomplished by selecting turbopump operating conditions that are within known successful operating regions and by using proven manufacturing techniques.

  2. The roles of vibration analysis and infrared thermography in monitoring air-handling equipment

    NASA Astrophysics Data System (ADS)

    Wurzbach, Richard N.

    2003-04-01

    Industrial and commercial building equipment maintenance has not historically been targeted for implementation of PdM programs. The focus instead has been on the manufacturing, aerospace, and energy industries, where production interruption has significant cost implications. As cost-effectiveness becomes more pervasive in corporate culture, even office space and labor activities housed in large facilities are being scrutinized for cost-cutting measures. When the maintenance costs for these facilities are reviewed, PdM can be considered for improving the reliability of building temperature regulation and reducing maintenance repair costs. An optimized program that directs maintenance resources toward cost-effective, proactive management of the facility can result in reduced operating budgets and greater occupant satisfaction. The large majority of the significant rotating machines in a large building are belt-driven air-handling units. These machines are often poorly designed or utilized within the facility. As a result, the maintenance staff typically find themselves scrambling to replace belts and bearings, going from one failure to another. Instead of reactive-mode maintenance, some progressive and critical institutions are adopting the predictive and proactive technologies of infrared thermography and vibration analysis. Together, these technologies can be used to identify design and installation problems that, when corrected, significantly reduce maintenance and increase reliability. For critical building uses, such as laboratories, research facilities, and other high-value non-industrial settings, the cost benefits of more reliable machinery can contribute significantly to operational success.

  3. Enabling technologies for fiber optic sensing

    NASA Astrophysics Data System (ADS)

    Ibrahim, Selwan K.; Farnan, Martin; Karabacak, Devrez M.; Singer, Johannes M.

    2016-04-01

    For fiber optic sensors to compete with electrical sensors, several critical parameters must be addressed, such as performance, cost, size, and reliability. Relying on technologies developed in other industrial sectors helps achieve this goal in a more efficient and cost-effective way. FAZ Technology has developed a tunable-laser-based optical interrogator built on technologies from the telecommunication sector, and optical transducers/sensors based on components sourced from the automotive market. By combining these with Fiber Bragg Grating (FBG) sensing technology, high-speed, high-precision, reliable quasi-distributed optical sensing systems for temperature, pressure, acoustics, acceleration, etc. have been developed. Careful design is needed to filter out sources of measurement drift and error due to effects such as polarization and birefringence, coating imperfections, and sensor packaging. To achieve high-speed, high-performance optical sensing systems, combining and synchronizing multiple optical interrogators, much as computer processors are combined to deliver supercomputing power, is an attractive solution. This path can be pursued using photonic integrated circuit (PIC) technology, which opens the door to scaling up and delivering powerful optical sensing systems in an efficient and cost-effective way.
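FBG sensing rests on the first-order Bragg condition λ_B = 2·n_eff·Λ, with strain and temperature shifting the reflected wavelength. A small sketch using typical published coefficients for silica fiber (the exact values vary by fiber and are assumptions here, not FAZ Technology's parameters):

```python
def bragg_wavelength_nm(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * grating period."""
    return 2.0 * n_eff * period_nm

def shifted_wavelength_nm(lam_nm, strain, d_temp_c,
                          k_strain=0.78, k_temp_per_c=6.7e-6):
    """Approximate FBG response: d_lambda/lambda ~ k_strain*eps + k_T*dT.
    k_strain ~ 0.78 (1 minus the photoelastic coefficient) and
    k_temp_per_c ~ 6.7e-6/degC are typical figures for silica fiber."""
    return lam_nm * (1.0 + k_strain * strain + k_temp_per_c * d_temp_c)

# Grating with n_eff ~ 1.447 and 535.6 nm pitch reflects near 1550 nm
lam = bragg_wavelength_nm(1.447, 535.6)
# 1000 microstrain shifts the peak by roughly 1.2 nm at 1550 nm
shift = shifted_wavelength_nm(1550.0, 1e-3, 0.0) - 1550.0
```

An interrogator recovers strain or temperature by tracking these peak shifts for each grating along the fiber.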

  4. Estimating Teacher Turnover Costs: A Case Study

    ERIC Educational Resources Information Center

    Levy, Abigail Jurist; Joy, Lois; Ellis, Pamela; Jablonski, Erica; Karelitz, Tzur M.

    2012-01-01

    High teacher turnover in large U.S. cities is a critical issue for schools and districts, and the students they serve; but surprisingly little work has been done to develop methodologies and standards that districts and schools can use to make reliable estimates of turnover costs. Even less is known about how to detect variations in turnover costs…

  5. A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning.

    PubMed

    Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui

    2016-05-25

    In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles.
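The GRNN module used to compensate position errors during GPS outages is, at its core, Gaussian-kernel regression: a prediction is the similarity-weighted average of stored training outputs. A one-dimensional sketch (the paper's actual inputs and training data are not given in this abstract; the toy pairs below are assumptions):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN / Nadaraya-Watson regression: Gaussian-similarity-weighted
    mean of the training outputs."""
    w = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(wi * yi for wi, yi in zip(w, train_y)) / sum(w)

# Toy training set standing in for (navigation state -> position error) pairs
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x for x in xs]
pred = grnn_predict(2.0, xs, ys)   # interpolates the linear trend
```

The smoothing parameter sigma controls how local the averaging is; good approximation capability comes from the kernel average rather than iterative training.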

  6. A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning

    PubMed Central

    Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui

    2016-01-01

    In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles. PMID:27231917

  7. Maximally reliable Markov chains under energy constraints.

    PubMed

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
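The paper's central result, that an irreversible linear chain with equal rates maximizes reliability and that adding states reduces variability, can be checked by simulation: the transit time through n equal-rate irreversible states is a sum of n exponentials, so its coefficient of variation falls as 1/√n. A sketch:

```python
import random
import statistics

def chain_time_cv(n_states, rate=1.0, trials=20000, seed=0):
    """Coefficient of variation of the total transit time through an
    irreversible n-state chain with equal exponential transition rates."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(rate) for _ in range(n_states))
             for _ in range(trials)]
    return statistics.stdev(times) / statistics.mean(times)

cv_1 = chain_time_cv(1)   # single state: CV near 1 (pure exponential)
cv_9 = chain_time_cv(9)   # nine states: CV near 1/3
```

More states mean a more stereotyped completion time, which is exactly the reliability gain the energy cost of irreversible transitions must be weighed against.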

  8. Displaying contextual information reduces the costs of imperfect decision automation in rapid retasking of ISR assets.

    PubMed

    Rovira, Ericka; Cross, Austin; Leitch, Evan; Bonaceto, Craig

    2014-09-01

    The impact of a decision support tool designed to embed contextual mission factors was investigated. Contextual information may enable operators to infer the appropriateness of the data underlying the automation's algorithm. Research has shown that the costs of imperfect automation are more detrimental than those of perfectly reliable automation when operators are provided with decision support tools. Operators may trust and rely on the automation more appropriately if they understand the automation's algorithm. The need to develop decision support tools that are understandable to the operator provides the rationale for the current experiment. A total of 17 participants performed a simulated rapid retasking of intelligence, surveillance, and reconnaissance (ISR) assets task with manual, decision automation, or contextual decision automation support under two levels of task demand: low or high. Automation reliability was set at 80%, resulting in participants experiencing a mixture of reliable and automation-failure trials. Dependent variables included ISR coverage and response time for replanning routes. Reliable automation significantly improved ISR coverage when compared with manual performance. Although performance suffered under imperfect automation, contextual decision automation helped to reduce some of the decrements in performance. Contextual information helps overcome the costs of imperfect decision automation. Designers may mitigate some of the performance decrements experienced with imperfect automation by providing operators with interfaces that display contextual information, that is, the state of factors that affect the reliability of the automation's recommendation.

  9. The B-747 flight control system maintenance and reliability data base for cost effectiveness tradeoff studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Primary and automatic flight controls are combined for a total flight control reliability and maintenance cost data base using information from two previous reports and additional cost data gathered from a major airline. A comparison of the current B-747 flight control system effects on reliability and operating cost with that of a B-747 designed for an active control wing load alleviation system is provided.

  10. Signal verification can promote reliable signalling.

    PubMed

    Broom, Mark; Ruxton, Graeme D; Schaefer, H Martin

    2013-11-22

    The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer-resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism.

  11. Different Approaches for Ensuring Performance/Reliability of Plastic Encapsulated Microcircuits (PEMs) in Space Applications

    NASA Technical Reports Server (NTRS)

    Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.

    2000-01-01

    Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecraft due to their lower cost, lower weight, and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable but state-of-the-art devices has become a significant portion of the job for the parts engineer. Assembling a reliable, high-performance electronic system that includes COTS components requires the end user to assume a risk. To minimize the risk involved, companies have developed methodologies in which they use accelerated stress testing to assess the product and reduce the risk to the total system. Currently, there are no industry-standard procedures for accomplishing this risk mitigation. This paper presents the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent laboratories. The JPL procedure primarily involves tailored screening with an accelerated stress philosophy, while the APL procedure is primarily a lot-qualification procedure. Both laboratories have successfully reduced the risk of using the particular devices for their respective systems and mission requirements.

  12. Puncture Self-Healing Polymers for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gordon, Keith L.; Penner, Ronald K.; Bogert, Phil B.; Yost, W. T.; Siochi, Emilie J.

    2011-01-01

    Space exploration launch costs on the order of $10K per pound provide ample incentive to seek innovative, cost-effective ways to reduce structural mass without sacrificing safety and reliability. Damage-tolerant structural systems can provide a route to avoiding weight penalties while enhancing vehicle safety and reliability. Self-healing polymers capable of spontaneous puncture repair show great promise to mitigate potentially catastrophic damage from events such as micrometeoroid penetration. Effective self-repair requires these materials to heal instantaneously following projectile penetration while retaining structural integrity. Poly(ethylene-co-methacrylic acid) (EMMA), also known as Surlyn, is an ionomer-based copolymer that undergoes puncture reversal (self-healing) following high-velocity impact puncture. However, EMMA is not a structural engineering polymer and will not meet the demands of aerospace applications requiring self-healing engineering materials. Current efforts to identify candidate self-healing polymer materials for structural engineering systems are reported. Rheology, high-speed thermography, and high-speed video results for self-healing semi-crystalline and amorphous polymers will be reported.

  13. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for highend computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language that is, a programming language that is closer to a human s way of thinking than to a machine s. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequen tial/singlecore code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify- Produce (CSP) and Normalize-Trans - pose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. 
In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the parallelized software developer faces.
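    The race-freedom property claimed above can be illustrated outside SequenceL as well. The sketch below is not SequenceL code; it is a minimal Python analogue in which a pure function is mapped over independent rows, so the parallel map cannot race by construction. The `normalize` function and the data are hypothetical examples.

```python
from concurrent.futures import ThreadPoolExecutor

def normalize(row):
    """Scale one row so its entries sum to 1 (pure: no shared state)."""
    s = sum(row)
    return [x / s for x in row]

def parallel_map(fn, data, workers=4):
    # Because fn is pure and each row is independent, this map has no
    # possible data race -- the guarantee the abstract says SequenceL's
    # CSP-NT semantics provide automatically, with no annotations.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(fn, data))
```

    For example, `parallel_map(normalize, [[1, 1, 2], [2, 2, 4]])` returns `[[0.25, 0.25, 0.5], [0.25, 0.25, 0.5]]`; per the abstract, SequenceL derives such a parallel map from the plain element-wise definition with no extra programmer effort.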

  14. Pediatric laryngeal simulator using 3D printed models: A novel technique.

    PubMed

    Kavanagh, Katherine R; Cote, Valerie; Tsui, Yvonne; Kudernatsch, Simon; Peterson, Donald R; Valdez, Tulio A

    2017-04-01

    Simulation to acquire and test technical skills is an essential component of medical education and residency training in both surgical and nonsurgical specialties. High-quality simulation education relies on the availability, accessibility, and reliability of models. The objective of this work was to describe a practical pediatric laryngeal model for use in otolaryngology residency training. Ideally, this model would be low-cost, have tactile properties resembling human tissue, and be reliably reproducible. Pediatric laryngeal models were developed using two manufacturing methods: direct three-dimensional (3D) printing of anatomical models and casted anatomical models using 3D-printed molds. Polylactic acid, acrylonitrile butadiene styrene, and high-impact polystyrene (HIPS) were used for the directly printed models, whereas a silicone elastomer (SE) was used for the casted models. The models were evaluated for anatomic quality, ease of manipulation, hardness, and cost of production. A tissue likeness scale was created to validate the simulation model. Fleiss' Kappa rating was performed to evaluate interrater agreement, and analysis of variance was performed to evaluate differences among the materials. The SE provided the most anatomically accurate models, with the tactile properties allowing for surgical manipulation of the larynx. Direct 3D printing was more cost-effective than the SE casting method but did not possess the material properties and tissue likeness necessary for surgical simulation. The SE models of the pediatric larynx created from a casting method demonstrated high quality anatomy, tactile properties comparable to human tissue, and easy manipulation with standard surgical instruments. Their use in a reliable, low-cost, accessible, modular simulation system provides a valuable training resource for otolaryngology residents. N/A. Laryngoscope, 127:E132-E137, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
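    The interrater analysis above uses Fleiss' kappa, which can be computed directly from a subjects-by-categories table of rating counts. The sketch below is a minimal, self-contained implementation of the standard statistic; the rating tables in the usage line are hypothetical and are not the study's data.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for ratings[subject][category] = number of raters
    assigning that category; every subject rated by the same number of raters."""
    N = len(ratings)                     # number of subjects
    n = sum(ratings[0])                  # raters per subject
    k = len(ratings[0])                  # number of categories
    # Overall proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-subject pairwise agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N                   # mean observed agreement
    Pe_bar = sum(pj * pj for pj in p)    # expected chance agreement
    return (P_bar - Pe_bar) / (1 - Pe_bar)
```

    Perfect agreement, e.g. `fleiss_kappa([[3, 0], [0, 3]])`, yields 1.0; values near 0 indicate agreement no better than chance.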

  15. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    NASA Astrophysics Data System (ADS)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at the INFN-Napoli ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of two main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration, and automated restart in case of hypervisor failures.

  16. A pilot study to explore the feasibility of using the Clinical Care Classification System for developing a reliable costing method for nursing services.

    PubMed

    Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K

    2013-01-01

    While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches are explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.
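    As a rough illustration of the cost-to-time idea, nursing cost can be derived by weighting each documented CCC-coded intervention by its average duration and a per-minute wage. The intervention names, durations, and hourly rate below are hypothetical placeholders, not figures from the study.

```python
HOURLY_RATE = 48.00  # assumed fully loaded nursing wage, $/hour (hypothetical)

# Hypothetical CCC-coded interventions with average minutes per occurrence
MINUTES_PER_INTERVENTION = {
    "Medication administration": 10,
    "Wound care": 25,
}

def cost_to_time(counts):
    """Total nursing cost = sum over interventions of
    occurrences * average minutes * (hourly rate / 60)."""
    per_minute = HOURLY_RATE / 60.0
    return sum(counts[name] * MINUTES_PER_INTERVENTION[name] * per_minute
               for name in counts)
```

    For example, three medication administrations and one wound care episode cost (3 x 10 + 25) minutes x $0.80/min = $44.00 under these assumed figures; the transparency of this derivation is what the abstract credits over the RVU method.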

  17. Reliable 6 PEP LTPS device for AMOLED's

    NASA Astrophysics Data System (ADS)

    Chou, Cheng-Wei; Wang, Pei-Yun; Hu, Chin-Wei; Chang, York; Chuang, Ching-Sang; Lin, Yusin

    2013-09-01

    This study presents a TFT structure that requires fewer photo processes and offers higher cost competitiveness in AMOLED display markets. A novel LTPS-based 6-mask TFT structure for bottom-emission AMOLED displays is demonstrated in this paper. High field-effect mobility (PMOS < 80 cm²/Vs) and high reliability (PBTS ΔVth < 0.02 V at 50°C, VG = 15 V, 10 ks) were accomplished without high-temperature or rapid thermal annealing (RTA) activation processes. Furthermore, a 14-inch AMOLED TV was achieved on the proposed 6-PEP TFT backplane using the Gen. 3.5 mass production factory.

  18. Shuttle payload vibroacoustic test plan evaluation

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    Statistical decision theory is used to evaluate seven alternate vibro-acoustic test plans for Space Shuttle payloads; test plans include component, subassembly and payload testing and combinations of component and assembly testing. The optimum test levels and the expected cost are determined for each test plan. By including all of the direct cost associated with each test plan and the probabilistic costs due to ground test and flight failures, the test plans which minimize project cost are determined. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level.

  19. Improving the Defense Acquisition System and Reducing System Costs

    DTIC Science & Technology

    1981-03-30

    The need for this specific commitment results from the competition among the conflicting objectives of high performance, lower cost, shorter... conflict with initiatives to improve reliability and support. Whereas the fastest acquisition approach involves initiating production prior to...their individual thrusts result in confusion on the part of OASD, who tries to implement conflicting programs, and of defense contractors performing

  20. The Automated Array Assembly Task of the Low-cost Silicon Solar Array Project, Phase 2

    NASA Technical Reports Server (NTRS)

    Coleman, M. G.; Grenon, L.; Pastirik, E. M.; Pryor, R. A.; Sparks, T. G.

    1978-01-01

    An advanced process sequence for manufacturing high efficiency solar cells and modules in a cost-effective manner is discussed. Emphasis is on process simplicity and minimizing consumed materials. The process sequence incorporates texture etching, plasma processes for damage removal and patterning, ion implantation, low pressure silicon nitride deposition, and plated metal. A reliable module design is presented. Specific process step developments are given. A detailed cost analysis was performed to indicate future areas of fruitful cost reduction effort. Recommendations for advanced investigations are included.

  1. Second Generation Novel High Temperature Commercial Receiver & Low Cost High Performance Mirror Collector for Parabolic Solar Trough

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stettenheim, Joel

    Norwich Technologies (NT) is developing a disruptively superior solar field for trough concentrating solar power (CSP). Troughs are the leading CSP technology (85% of installed capacity), being highly deployable and similar to photovoltaic (PV) systems for siting. NT has developed the SunTrap receiver, a disruptive alternative to vacuum-tube concentrating solar power (CSP) receivers, a market currently dominated by the Schott PTR-70. The SunTrap receiver will (1) operate at higher temperature (T) by using an insulated, recessed radiation-collection system to overcome the energy losses that plague vacuum-tube receivers at high T, (2) decrease acquisition costs via simpler structure, and (3) dramatically increase reliability by eliminating vacuum. It offers comparable optical efficiency with thermal loss reduction from ≥ 26% (at presently standard T) to ≥ 55% (at high T), lower acquisition costs, and near-zero O&M costs.

  2. Feasibility of groundwater recharge dam projects in arid environments

    NASA Astrophysics Data System (ADS)

    Jaafar, H. H.

    2014-05-01

    A new method for determining feasibility and prioritizing investments for agricultural and domestic recharge dams in arid regions is developed and presented. The method is based on identifying the factors affecting the decision making process and evaluating these factors, followed by determining the indices in a GIS-aided environment. Evaluated parameters include results from field surveys and site visits, land cover and soils data, precipitation data, runoff data and modeling, number of beneficiaries, domestic irrigation demand, reservoir objectives, demography, reservoirs yield and reliability, dam structures, construction costs, and operation and maintenance costs. Results of a case study on more than eighty proposed dams indicate that assessment of reliability, annualized cost/demand satisfied and yield is crucial prior to investment decision making in arid areas. Irrigation demand is the major influencing parameter on yield and reliability of recharge dams, even when only 3 months of the demand were included. Reliability of the proposed reservoirs as related to their standardized size and net inflow was found to increase with increasing yield. High priority dams were less than 4% of the total, and less priority dams amounted to 23%, with the remaining found to be not feasible. The results of this methodology and its application has proved effective in guiding stakeholders for defining most favorable sites for preliminary and detailed design studies and commissioning.
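    Reservoir reliability of the kind evaluated above is commonly estimated with a monthly storage mass balance: reliability is the fraction of months in which demand is fully met. The sketch below is a generic, simplified version (single demand, spill above capacity, no evaporation or seepage losses); it is not the paper's model, and the inflow series in the usage line is hypothetical.

```python
def monthly_reliability(inflows, demand, capacity, initial=0.0):
    """Fraction of months in which the reservoir fully meets demand,
    from a simple storage mass balance (spill above capacity, no losses)."""
    storage, months_met = initial, 0
    for inflow in inflows:
        storage = min(storage + inflow, capacity)  # fill, then spill
        if storage >= demand:
            storage -= demand                      # demand fully met
            months_met += 1
        else:
            storage = 0.0                          # deliver what remains; deficit month
    return months_met / len(inflows)
```

    For example, `monthly_reliability([5, 0, 0, 5], demand=2, capacity=10)` gives 0.75: one dry month in four produces a deficit. The abstract's finding that irrigation demand dominates yield and reliability follows directly from such a balance, since demand appears in every month's accounting.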

  3. Surviving the Lead Reliability Engineer Role in High Unit Value Projects

    NASA Technical Reports Server (NTRS)

    Perez, Reinaldo J.

    2011-01-01

    A project with a very high unit value within a company is defined as a project where a) the project constitutes a one-of-a-kind (or two-of-a-kind) national asset, b) the cost is very large, and c) a mission failure would be a very public event that would hurt the company's image. The Lead Reliability Engineer in a high-visibility project is by default involved in all phases of the project, from conceptual design to manufacture and testing. This paper explores a series of lessons learned over a period of ten years of practical industrial experience by a Lead Reliability Engineer. We expand on the concepts outlined by these lessons learned via examples. The lessons learned are applicable to all industries.

  4. Space Transportation Main Engine

    NASA Technical Reports Server (NTRS)

    Monk, Jan C.

    1992-01-01

    The topics are presented in viewgraph form and include the following: Space Transportation Main Engine (STME) definition, design philosophy, robust design, maximum design condition, casting vs. machined and welded forgings, operability considerations, high reliability design philosophy, engine reliability enhancement, low cost design philosophy, engine systems requirements, STME schematic, fuel turbopump, liquid oxygen turbopump, main injector, and gas generator. The major engine components of the STME and the Space Shuttle Main Engine are compared.

  5. Vulcain engine tests prove reliability

    NASA Astrophysics Data System (ADS)

    Covault, Craig

    1994-04-01

    The development of the oxygen/hydrogen Vulcain first-stage engine for the Ariane 5 involves more than 30 European companies and $1.19 billion. These companies are using existing technology to produce a low-cost system with high thrust and reliability. This article describes ground tests of this engine and provides a comparison of the Vulcain's capabilities with those of other systems. A list of key Vulcain team members is also given.

  6. High Energy Density Capacitors for Pulsed Power Applications

    DTIC Science & Technology

    2009-07-01

    As a result of this effort, the US Military has access to capacitors that are about a third the size and half the cost of the capacitors that were... resistor in terms of shock and vibration, mounting requirements, total volume, system reliability, and cost. All of these parameters were improved... energy density of 10,000-shot high-efficiency pulse power capacitors.

  7. Reliability and concurrent validity of a peripheral pulse oximeter and health-app system for the quantification of heart rate in healthy adults.

    PubMed

    Losa-Iglesias, Marta Elena; Becerro-de-Bengoa-Vallejo, Ricardo; Becerro-de-Bengoa-Losa, Klark Ricardo

    2016-06-01

    There are downloadable applications (Apps) for cell phones that can measure heart rate in a simple and painless manner. The aim of this study was to assess the reliability of this type of App for a Smartphone using an Android system, compared to the radial pulse and a portable pulse oximeter. We performed a pilot observational study of diagnostic accuracy, randomized in 46 healthy volunteers. The patients' demographic data and cardiac pulse were collected. Heart rate was measured in three ways: by palpation of the radial artery with three fingers at the wrist over the radius; with a low-cost, portable, liquid crystal display finger pulse oximeter; and with the Heart Rate Plus App on a Samsung Galaxy Note®. This study demonstrated high reliability and consistency among the three systems with respect to the heart rate of healthy adults. For all parameters, ICC was > 0.93, indicating excellent reliability. Moreover, CVME values for all parameters were between 1.66% and 4.06%. We found significant correlation coefficients, no systematic differences between radial pulse palpation and the pulse oximeter, and high precision. Low-cost pulse oximeter and App systems can serve as valid instruments for the assessment of heart rate in healthy adults. © The Author(s) 2014.

  8. Alternative Fuels Data Center: Minnesota School District Finds Cost

    Science.gov Websites

    Savings, Cold-Weather Reliability with Propane Buses

  9. Reliability of hospital cost profiles in inpatient surgery.

    PubMed

    Grenda, Tyler R; Krell, Robert W; Dimick, Justin B

    2016-02-01

    With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
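    The reliability estimated above is, in hierarchical-model terms, a signal-to-noise ratio: the share of observed between-hospital payment variance that is true hospital signal rather than sampling noise, which shrinks as caseload grows. A minimal sketch, assuming the two variance components have already been estimated (the numbers in the usage line are hypothetical, not the study's):

```python
def payment_reliability(between_var, within_var, caseload):
    """Reliability of a hospital's mean episode payment:
    signal variance / (signal variance + sampling noise of the mean),
    where noise falls in proportion to the hospital's caseload."""
    return between_var / (between_var + within_var / caseload)
```

    For example, with between-hospital variance 1.0 and within-hospital variance 100.0, a hospital with 100 cases has reliability 0.5 while one with 400 cases reaches 0.8, mirroring the abstract's finding that high-volume procedures such as colectomy profile more reliably than low-volume ones such as pancreatectomy.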

  10. Field Programmable Gate Array Reliability Analysis Guidelines for Launch Vehicle Reliability Block Diagrams

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.

    2017-01-01

    Field programmable gate array (FPGA) integrated circuits (ICs) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering (NRE) costs and short design cycles. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high-level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation-induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high-level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.

  11. All about Listening.

    ERIC Educational Resources Information Center

    Grunkemeyer, Florence B.

    1992-01-01

    Discusses the importance of effective listening and problems in the listening process. Presents a matrix evaluating 18 listening inventories on 8 criteria: cost effectiveness, educational use, business use, reliability, validity, adult audience, high school audience, and potential barriers. (JOW)

  12. Adaptation of the low-cost and low-power tactical split Stirling cryogenic cooler for aerospace applications

    NASA Astrophysics Data System (ADS)

    Veprik, A.; Zechtzer, S.; Pundak, N.; Kirkconnell, C.; Freeman, J.; Riabzev, S.

    2011-06-01

    Cryogenic coolers are often used in modern spacecraft in conjunction with sensitive electronics and sensors of military, commercial and scientific instrumentation. The typical space requirements are: power efficiency, low vibration export, proven reliability, ability to survive launch vibration/shock and long-term exposure to space radiation. A long-standing paradigm of exclusively using "space heritage" equipment has become the standard practice for delivering high reliability components. Unfortunately, this conservative "space heritage" practice can result in using outdated, oversized, overweight and overpriced cryogenic coolers and is becoming increasingly unacceptable for space agencies now operating within tough monetary and time constraints. The recent trend in developing mini and micro satellites for relatively inexpensive missions has prompted attempts to adapt leading-edge tactical cryogenic coolers for suitability in the space environment. The primary emphasis has been on reducing cost, weight and size. The authors are disclosing theoretical and practical aspects of a collaborative effort to develop a space qualified cryogenic refrigerator system based on the tactical cooler model Ricor K527 and the Iris Technology radiation hardened Low Cost Cryocooler Electronics (LCCE). The K527/LCCE solution is ideal for applications where cost, size, weight, power consumption, vibration export, reliability and time to spacecraft integration are of concern.

  13. Spacecraft expected cost analysis with k-out-of-n:G subsystems

    NASA Technical Reports Server (NTRS)

    Patterson, Richard; Suich, Ron

    1991-01-01

    In designing a subsystem for a spacecraft, the design engineer is often faced with a number of options ranging from planning an inexpensive subsystem with low reliability to selecting a highly reliable system that would cost much more. We minimize the total of the cost of the subsystem and the costs that would occur if the subsystem fails, and choose the subsystem with the lowest total. A k-out-of-n:G subsystem has n modules, of which k are required to be good for the subsystem to be good. We examine two models to illustrate the principles of k-out-of-n:G subsystem designs. For the first model, the following assumptions are necessary: the probability of failure of any module in the system is not affected by the failure of any other module, and each of the modules has the same probability of success. For the second model we are also free to choose k in our subsystem.
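    Under the first model's assumptions (independent, identical modules), subsystem reliability is the binomial probability that at least k of the n modules are good, and the design criterion is the minimum of subsystem cost plus expected failure cost. A minimal sketch; the module costs, failure cost, and probabilities in the example are hypothetical:

```python
from math import comb

def k_of_n_reliability(n, k, p):
    """P(at least k of n i.i.d. modules succeed), each with success prob p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def expected_total_cost(n, k, p, module_cost, failure_cost):
    """Subsystem cost plus the expected penalty if fewer than k modules survive."""
    return n * module_cost + failure_cost * (1 - k_of_n_reliability(n, k, p))
```

    For example, with n = 3, k = 2, module reliability 0.9, module cost 10, and failure cost 1000, subsystem reliability is 0.972 and the expected total cost is 58; sweeping n (and, in the second model, k) and keeping the minimum implements the selection rule described above.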

  14. Financial challenges of immunization: a look at GAVI.

    PubMed Central

    Kaddar, Miloud; Lydon, Patrick; Levine, Ruth

    2004-01-01

    Securing reliable and adequate public funding for prevention services, even those that are considered highly cost effective, often presents a challenge. This has certainly been the case with childhood immunizations in developing countries. Although the traditional childhood vaccines cost relatively little, funding in poor countries is often at risk and subject to the political whims of donors and national governments. With the introduction of newer and more costly vaccines made possible under the Global Alliance for Vaccines and Immunization (GAVI), the future financial challenges have become even greater. Experience so far suggests that choosing to introduce new combination vaccines can significantly increase the costs of national immunization programmes. With this experience comes a growing concern about their affordability in the medium term and long term and a realization that, for many countries, shared financial responsibility between national governments and international donors may initially be required. This article focuses on how GAVI is addressing the challenge of sustaining adequate and reliable funding for immunizations in the poorest countries. PMID:15628208

  15. Launch vehicle systems design analysis

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Verderaime, V.

    1993-01-01

    Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, the quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least cost can be realized through competent concurrent engineering teams and the brilliance of their technical leadership.

  16. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.

  17. Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes.

    PubMed

    Jacobson, Mark Z; Delucchi, Mark A; Cameron, Mary A; Frew, Bethany A

    2015-12-08

    This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050-2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide.

  18. Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes

    PubMed Central

    Jacobson, Mark Z.; Delucchi, Mark A.; Cameron, Mary A.; Frew, Bethany A.

    2015-01-01

    This study addresses the greatest concern facing the large-scale integration of wind, water, and solar (WWS) into a power grid: the high cost of avoiding load loss caused by WWS variability and uncertainty. It uses a new grid integration model and finds low-cost, no-load-loss, nonunique solutions to this problem on electrification of all US energy sectors (electricity, transportation, heating/cooling, and industry) while accounting for wind and solar time series data from a 3D global weather model that simulates extreme events and competition among wind turbines for available kinetic energy. Solutions are obtained by prioritizing storage for heat (in soil and water); cold (in ice and water); and electricity (in phase-change materials, pumped hydro, hydropower, and hydrogen), and using demand response. No natural gas, biofuels, nuclear power, or stationary batteries are needed. The resulting 2050–2055 US electricity social cost for a full system is much less than for fossil fuels. These results hold for many conditions, suggesting that low-cost, reliable 100% WWS systems should work many places worldwide. PMID:26598655

  19. Affordable Launch Services using the Sport Orbit Transfer System

    NASA Astrophysics Data System (ADS)

    Goldstein, D. J.

    2002-01-01

    Despite many advances in small satellite technology, a low-cost, reliable method is needed to place spacecraft in their desired orbits. AeroAstro has developed the Small Payload ORbit Transfer (SPORT™) system to provide a flexible low-cost orbit transfer capability, enabling small payloads to use low-cost secondary launch opportunities and still reach their desired final orbits. This capability allows small payloads to effectively use a wider variety of launch opportunities, including numerous under-utilized GTO slots. Its use, in conjunction with growing opportunities for secondary launches, enables increased access to space using proven technologies and highly reliable launch vehicles such as the Ariane family and the Starsem launcher. SPORT uses a suite of innovative technologies that are packaged in a simple, reliable, modular system. The command, control and data handling of SPORT is provided by the AeroAstro Bitsy™ core electronics module. The Bitsy module also provides power regulation for the batteries and optional solar arrays. The primary orbital maneuvering capability is provided by a nitrous oxide monopropellant propulsion system. This system exploits the unique features of nitrous oxide, which include self-pressurization, good performance, and safe handling, to provide a light-weight, low-cost and reliable propulsion capability. When transferring from a higher energy orbit to a lower energy orbit (i.e. GTO to LEO), SPORT uses aerobraking technology. After using the propulsion system to lower the orbit perigee, the aerobrake gradually slows SPORT via atmospheric drag. After the orbit apogee is reduced to the target level, an apogee burn raises the perigee and ends the aerobraking. At the conclusion of the orbit transfer maneuver, either the aerobrake or SPORT can be shed, as desired by the payload. SPORT uses a simple design for high reliability and a modular architecture for maximum mission flexibility.
This paper will discuss the launch system and its application to small satellite launch without increasing risk. It will also discuss relevant issues such as aerobraking operations and radiation issues, as well as existing partnerships and patents for the system.

  20. The impact of municipal refuse utilization on energy and our environment

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The incinerator/boiler configuration is stressed as the most reliable method of waste utilization. It is also pointed out that the high cost of refuse disposal and the ever-increasing cost of energy have made this method attractive. A plan is outlined for operating a waste utilization plant. Communities are encouraged to investigate the feasibility of refuse-to-energy facilities in their areas.

  1. Low cost split stirling cryogenic cooler for aerospace applications

    NASA Astrophysics Data System (ADS)

    Veprik, Alexander; Zechtzer, Semeon; Pundak, Nachman; Riabzev, Sergey; Kirckconnel, C.; Freeman, Jeremy

    2012-06-01

    Cryogenic coolers are used in association with sensitive electronics and sensors for military, commercial, or scientific space payloads. The general requirements are high reliability and power efficiency, low vibration export, and the ability to survive launch vibration extremes and long-term exposure to space radiation. A long-standing paradigm of exclusively using space-heritage derivatives of the legendary "Oxford" cryocoolers, featuring linear actuators, flexural bearings, contactless seals, and active vibration cancellation, is so far the best-known practice for delivering high-reliability components for critical and usually expensive space missions. The recent tendency of developing mini and micro satellites for budget-constrained missions has spurred attempts to adapt leading-edge tactical cryogenic coolers to meet space requirements. The authors disclose theoretical and practical aspects of a collaborative effort on developing a space-qualified cryogenic refrigerator based on the Ricor model K527 tactical cooler and Iris Technology radiation-hardened, low-cost cryocooler electronics. The initially targeted applications are cost-sensitive flight experiments, but should the results show promise, some long-life "traditional" cryocooler missions may well be satisfied by this approach.

  2. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements demand understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  3. Reliability and Maintainability Engineering - A Major Driver for Safety and Affordability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2011-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of an effort to design and build a safe and affordable heavy lift vehicle to go to the moon and beyond. To achieve that, NASA is seeking more innovative and efficient approaches to reduce cost while maintaining an acceptable level of safety and mission success. One area that has the potential to contribute significantly to achieving NASA safety and affordability goals is Reliability and Maintainability (R&M) engineering. Inadequate reliability or failure of critical safety items may directly jeopardize the safety of the user(s) and result in a loss of life. Inadequate reliability of equipment may directly jeopardize mission success. Systems designed to be more reliable (fewer failures) and maintainable (fewer resources needed) can lower the total life cycle cost. The Department of Defense (DOD) and industry experience has shown that optimized and adequate levels of R&M are critical for achieving a high level of safety and mission success, and low sustainment cost. Also, lessons learned from the Space Shuttle program clearly demonstrated the importance of R&M engineering in designing and operating safe and affordable launch systems. The Challenger and Columbia accidents are examples of the severe impact of design unreliability and process induced failures on system safety and mission success. These accidents demonstrated the criticality of reliability engineering in understanding component failure mechanisms and integrated system failures across the system elements interfaces. Experience from the shuttle program also shows that insufficient Reliability, Maintainability, and Supportability (RMS) engineering analyses upfront in the design phase can significantly increase the sustainment cost and, thereby, the total life cycle cost. 
Emphasis on RMS during the design phase is critical for identifying the design features and characteristics needed for time efficient processing, improved operational availability, and optimized maintenance and logistic support infrastructure. This paper discusses the role of R&M in a program acquisition phase and the potential impact of R&M on safety, mission success, operational availability, and affordability. This includes discussion of the R&M elements that need to be addressed and the R&M analyses that need to be performed in order to support a safe and affordable system design. The paper also provides some lessons learned from the Space Shuttle program on the impact of R&M on safety and affordability.

  4. Analysis of the Seismic Performance of Isolated Buildings according to Life-Cycle Cost

    PubMed Central

    Dang, Yu; Han, Jian-ping; Li, Yong-tao

    2015-01-01

    This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment. PMID:25653677

  5. Analysis of the seismic performance of isolated buildings according to life-cycle cost.

    PubMed

    Dang, Yu; Han, Jian-Ping; Li, Yong-Tao

    2015-01-01

    This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment.
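
    The indicator defined above is a ratio of lifetime damage loss to life-cycle cost. A minimal sketch follows, with purely illustrative cost figures; the paper's full life-cycle cost model includes additional terms beyond initial cost and expected loss:

```python
# Hedged sketch of the paper's seismic performance indicator:
#   indicator = expected lifetime damage loss / life-cycle cost
# Here life-cycle cost is simplified to initial cost + expected loss
# (an illustrative assumption, not the paper's complete model).

def seismic_indicator(initial_cost, expected_loss):
    life_cycle_cost = initial_cost + expected_loss
    return expected_loss / life_cycle_cost

# Illustrative numbers only: the isolated design's expected loss is
# taken as 37% of the fixed-base design's, as reported for the
# hospital case study; initial costs are hypothetical.
fixed_loss = 100.0
isolated_loss = 0.37 * fixed_loss

fixed = seismic_indicator(initial_cost=500.0, expected_loss=fixed_loss)
isolated = seismic_indicator(initial_cost=550.0, expected_loss=isolated_loss)
print(isolated < fixed)  # lower indicator -> better seismic performance
```

    A lower indicator value signals that less of the building's lifetime investment is expected to be lost to earthquakes, consistent with the abstract's conclusion that isolated buildings are the safer investment.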

  6. Tutorial: Performance and reliability in redundant disk arrays

    NASA Technical Reports Server (NTRS)

    Gibson, Garth A.

    1993-01-01

    A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high-performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. 
In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
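
The N+1 parity encoding highlighted above can be sketched in a few lines: the parity disk stores the bytewise XOR of the N data disks, so any one lost disk is rebuilt by XOR-ing the survivors. Striping and block layout are omitted in this sketch:

```python
# Minimal sketch of N+1 parity: the (N+1)-th disk holds the bytewise
# XOR of the N data disks, so any single failed disk can be rebuilt
# from the surviving disks plus parity.

from functools import reduce

def parity(blocks):
    # XOR corresponding bytes across all blocks.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # three toy "data disks"
p = parity(data)  # parity disk contents

# Simulate losing disk 1 and recover it from the survivors plus parity:
recovered = parity([data[0], data[2], p])
print(recovered == data[1])  # True
```

Because XOR is its own inverse, recovery uses the same operation as encoding; this is why N+1 parity is the least expensive redundancy scheme the tutorial considers.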

  7. eLaunch Hypersonics: An Advanced Launch System

    NASA Technical Reports Server (NTRS)

    Starr, Stanley

    2010-01-01

    This presentation describes a new space launch system that NASA can and should develop. This approach can significantly reduce ground processing and launch costs, improve reliability, and broaden the scope of what we do in near-Earth orbit. The concept (not new) is to launch a reusable air-breathing hypersonic vehicle from a ground-based electric track. This vehicle launches a final rocket stage at high altitude/velocity for the final leg to orbit. The proposal here differs from past studies in that we will launch above Mach 1.5 (above the transonic pinch point), which further improves the efficiency of air-breathing, horizontal take-off launch systems. The approach described here significantly reduces cost per kilogram to orbit, increases safety and reliability of the boost systems, and reduces ground costs due to horizontal processing. Finally, this approach provides significant technology transfer benefits for our national infrastructure.

  8. [Reliability and validity of the Chinese version on Comprehensive Scores for Financial Toxicity based on the patient-reported outcome measures].

    PubMed

    Yu, H H; Bi, X; Liu, Y Y

    2017-08-10

    Objective: To evaluate the reliability and validity of the Chinese version of the comprehensive scores for financial toxicity (COST), based on patient-reported outcome measures. Methods: A total of 118 cancer patients were interviewed face-to-face by well-trained investigators. Cronbach's α and Pearson correlation coefficients were used to evaluate reliability. The content validity index (CVI) and exploratory factor analysis (EFA) were used to evaluate content validity and construct validity, respectively. Results: The Cronbach's α coefficient was 0.889 for the whole questionnaire, with test-retest coefficients between 0.77 and 0.98. The scale-content validity index (S-CVI) was 0.82, with item-content validity indices (I-CVI) between 0.83 and 1.00. Two components were extracted by the exploratory factor analysis, with a cumulative rate of 68.04% and a loading >0.60 on every item. Conclusion: The Chinese version of the COST scale showed high reliability and good validity, and thus can be applied to assess the financial situation of cancer patients.
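
    The internal-consistency statistic reported above, Cronbach's α, is computed as α = k/(k−1) · (1 − Σ item variances / variance of totals) for k items. A minimal sketch on toy responses (not the study's actual COST data):

```python
# Hedged sketch of Cronbach's alpha on hypothetical questionnaire data.
# scores: one row per respondent, one column per item.

def cronbach_alpha(scores):
    k = len(scores[0])  # number of items

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Four respondents answering three items (toy data only):
data = [[4, 4, 5], [2, 3, 2], [5, 4, 4], [1, 2, 2]]
print(round(cronbach_alpha(data), 3))
```

    Values above roughly 0.8, like the 0.889 reported for the 118-patient sample, are conventionally read as high internal consistency.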

  9. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
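
    The symmetric rank-one (SR1) update mentioned above avoids recomputing the full Hessian at each iteration: B_new = B + (r rᵀ)/(rᵀ s), with r = y − B s, where s is the step and y the change in gradient. A pure-Python toy sketch (a real RBDO code would use a linear algebra library):

```python
# Hedged sketch of the symmetric rank-one (SR1) Hessian update on a
# toy 2x2 quadratic; not the paper's full RBDO algorithm.

def matvec(B, v):
    return [sum(B[i][j] * v[j] for j in range(len(v))) for i in range(len(B))]

def sr1_update(B, s, y, eps=1e-12):
    Bs = matvec(B, s)
    r = [y[i] - Bs[i] for i in range(len(y))]          # r = y - B s
    denom = sum(r[i] * s[i] for i in range(len(s)))    # r^T s
    if abs(denom) < eps:
        return B  # skip numerically unsafe updates
    n = len(s)
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

# For a quadratic with Hessian A, y = A s exactly; one update makes the
# approximation reproduce the true curvature along the sampled direction:
A = [[3.0, 1.0], [1.0, 2.0]]
s = [1.0, 0.0]
y = matvec(A, s)
B = sr1_update([[1.0, 0.0], [0.0, 1.0]], s, y)
print(matvec(B, s) == y)  # True
```

    Unlike BFGS, the SR1 update does not force the approximation to stay positive definite, which is one reason it suits curvature estimation for second order reliability evaluation.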

  10. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
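
    The three-part expected cost described above (test cost, cost of structures destroyed by proof tests, cost of in-service failure) can be sketched with entirely hypothetical probability models; a real application would substitute fatigue-based models for p_destroy and p_fail:

```python
# Hedged sketch of minimizing total expected cost over proof load level
# and number of periodic proof tests, subject to a reliability constraint.
# All probability models and cost figures below are toy assumptions.

def expected_cost(level, n, c_test=1.0, c_struct=50.0, c_failure=1000.0):
    p_destroy = 0.002 * level          # toy: harsher proofs destroy more articles
    p_fail = 0.05 / (1.0 + level * n)  # toy: proofs screen out weak articles
    return n * c_test + n * p_destroy * c_struct + p_fail * c_failure

# Grid search over (level, n), keeping only pairs that satisfy the
# reliability constraint p_fail <= 0.01:
candidates = [(lvl, n) for lvl in (1, 2, 3, 4) for n in (1, 2, 3, 4)
              if 0.05 / (1.0 + lvl * n) <= 0.01]
best = min(candidates, key=lambda ln: expected_cost(*ln))
print(best)
```

    The structure mirrors the abstract's trade-off: more or harsher proof tests raise testing and breakage costs but lower the expected cost of in-service failure.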

  11. Individual styles of professional operator's performance for the needs of interplanetary mission.

    NASA Astrophysics Data System (ADS)

    Boritko, Yaroslav; Gushin, Vadim; Zavalko, Irina; Smoleevskiy, Alexandr; Dudukin, Alexandr

    Maintaining the reliability of the cosmonaut's professional performance is one of the priorities of long-term space flight safety. Cosmonaut performance during long-term space flight decreases due to the combination of microgravity effects and the inevitable degradation of skills during prolonged breaks in training. The elaboration of countermeasures against skill decrement is therefore highly relevant. During the prolonged-isolation experiment "Mars-500" at IMBP, two virtual models of professional operator activity were used to investigate the influence of extended isolation, monotony, and confinement on professional skill degradation. One is the well-known "PILOT-1" (docking to the space station); the other is "VIRTU" (manned operations of planet exploration). Individual resistance to an artificial sensory conflict was estimated using a computerized version of the "Mirror koordinograf" with GSR registration. Two different individual performance styles, corresponding to different types of response to stress, were identified. The style called "conservative control" manifested in permanent control of the parameters, conditions, and results of the operator's activity. Operators with this performance style demonstrate high reliability in performing tasks. The drawback of the style is intensive resource expenditure, both by the operator (physiological "cost") and by the technical system operated (fuel, time). This style is more efficient when executing tasks that require prolonged work at high reliability according to a detailed protocol, such as orbital flight. The style called "exploratory" manifested in the search for new ways of task fulfillment. This style is accompanied by a partial, periodic lack of control of the conditions and results of the operator's activity due to a flexible approach to task implementation. 
Operators with this style spent fewer resources (fuel, time, lower physiological "cost") due to high self-regulation in tasks not requiring high reliability. The "exploratory" style is more effective when working in non-regulated and off-nominal situations, such as an interplanetary mission, due to the possibility of using nonstandard, innovative solutions, saving physiological resources, and rapidly mobilizing to demonstrate high reliability at key moments.

  12. Autonomous navigation system based on GPS and magnetometer data

    NASA Technical Reports Server (NTRS)

    Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)

    2004-01-01

    This invention is drawn to an autonomous navigation system using the Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves the GPS initialization problem, decreasing the convergence time for the navigation estimate and improving the overall accuracy. Ultimately, the magnetometer-GPS configuration enables the system to avoid costly and inherently less reliable gyros for rate estimation. Being autonomous, this invention would provide for black-box spacecraft navigation, producing attitude, orbit, and rate estimates with high accuracy and reliability without any ground input.

  13. High-Frequency ac Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Mildice, James

    1987-01-01

    Loads managed automatically under cycle-by-cycle control. 440-V rms, 20-kHz ac power system developed. System flexible, versatile, and "transparent" to user equipment, while maintaining high efficiency and low weight. Electrical source, from dc to 2,200-Hz ac converted to 440-V rms, 20-kHz, single-phase ac. Power distributed through low-inductance cables. Output power either dc or variable ac. Energy transferred per cycle reduced by factor of 50. Number of parts reduced by factor of about 5 and power loss reduced by two-thirds. Factors result in increased reliability and reduced costs. Used in any power-distribution system requiring high efficiency, high reliability, low weight, and flexibility to handle variety of sources and loads.

  14. Climate and Water Vulnerability of the US Electricity Grid Under High Penetrations of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Macknick, J.; Miara, A.; O'Connell, M.; Vorosmarty, C. J.; Newmark, R. L.

    2017-12-01

    The US power sector is highly dependent upon water resources for reliable operations, primarily for thermoelectric cooling and hydropower technologies. Changes in the availability and temperature of water resources can limit electricity generation and cause outages at power plants, which substantially affect grid-level operational decisions. While the effects of water variability and climate changes on individual power plants are well documented, prior studies have not identified the significance of these impacts at the regional systems level at which the grid operates, including whether there are risks of large-scale blackouts, brownouts, or increases in production costs. Adequately assessing electric grid system-level impacts requires detailed power sector modeling tools that can incorporate electric transmission infrastructure, capacity reserves, and other grid characteristics. Here we present, for the first time, a study of how climate and water variability affect operations of the power sector, considering different electricity sector configurations (low vs. high renewable) and environmental regulations. We use a case study of the US Eastern Interconnection, building off the Eastern Renewable Generation Integration Study (ERGIS) that explored operational challenges of high penetrations of renewable energy on the grid. We evaluate climate-water constraints on individual power plants, using the Thermoelectric Power and Thermal Pollution (TP2M) model coupled with the PLEXOS electricity production cost model, in the context of broader electricity grid operations. Using a five-minute time step for future years, we analyze scenarios of 10% to 30% renewable energy penetration along with considerations of river temperature regulations to compare the cost, performance, and reliability tradeoffs of water-dependent thermoelectric generation and variable renewable energy technologies under climate stresses. 
This work provides novel insights into the resilience and reliability of different configurations of the US electric grid subject to changing climate conditions.

  15. Evaluation on Cost Overrun Risks of Long-distance Water Diversion Project Based on SPA-IAHP Method

    NASA Astrophysics Data System (ADS)

    Yuanyue, Yang; Huimin, Li

    2018-02-01

    Large investment, long routes, and many change orders are the main causes of cost overruns in long-distance water diversion projects. Building on existing research, this paper develops a full-process cost overrun risk evaluation index system for water diversion projects, applies the SPA-IAHP method to set up a cost overrun risk evaluation model, and calculates and ranks the weight of every risk evaluation index. Finally, the cost overrun risks are comprehensively evaluated by calculating the linkage measure, and a comprehensive risk level is acquired. The SPA-IAHP method can evaluate risks accurately, and its reliability is high. Case calculation and verification show that it can provide valid cost overrun decision-making information to construction companies.

  16. Shuttle payload minimum cost vibroacoustic tests

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    This paper is directed toward the development of the methodology needed to evaluate cost-effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.

  17. 7 CFR 1788.2 - General insurance requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consistent with cost-effectiveness, reliability, safety, and expedition. It is recognized that Prudent... accomplish the desired result at the lowest reasonable cost consistent with cost-effectiveness, reliability... which is used or useful in the borrower's business and which shall be covered by insurance, unless each...

  18. Process-based costing.

    PubMed

    Lee, Robert H; Bott, Marjorie J; Forbes, Sarah; Redford, Linda; Swagerty, Daniel L; Taunton, Roma Lee

    2003-01-01

    Understanding how quality improvement affects costs is important. Unfortunately, low-cost, reliable ways of measuring direct costs are scarce. This article builds on the principles of process improvement to develop a costing strategy that meets both criteria. Process-based costing has 4 steps: developing a flowchart, estimating resource use, valuing resources, and calculating direct costs. To illustrate the technique, this article uses it to cost the care planning process in 3 long-term care facilities. We conclude that process-based costing is easy to implement; generates reliable, valid data; and allows nursing managers to assess the costs of new or modified processes.
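
    The four-step strategy above reduces, in its simplest form, to valuing each flowcharted step's resource use and summing. A minimal sketch with hypothetical figures for a care planning process like the one costed in the article:

```python
# Hedged sketch of process-based costing: flowchart the process,
# estimate resource use per step, value the resources, sum direct costs.
# Steps, hours, and rates below are illustrative, not the study's data.

care_planning = [
    # (process step, hours of staff time, hourly rate in $)
    ("assessment",         1.5, 30.0),
    ("care conference",    1.0, 45.0),
    ("plan documentation", 0.5, 30.0),
]

direct_cost = sum(hours * rate for _, hours, rate in care_planning)
print(direct_cost)  # 1.5*30 + 1.0*45 + 0.5*30 = 105.0
```

    Because each line item maps to a flowchart step, a modified process can be re-costed by editing only the affected rows, which is what makes the technique useful for assessing quality-improvement changes.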

  19. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.

    PubMed

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-02-01

    The Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has traditionally been used for carrying out traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid.

  20. Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid

    PubMed Central

    Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul

    2017-01-01

    The Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of cumulative power cost and service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has traditionally been used for carrying out traffic engineering in computer networks, to derive bounds on both power supply and user demand that achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide sustainable service reliability to users in the power grid. PMID:29354654

  1. Electrochemistry-based Approaches to Low Cost, High Sensitivity, Automated, Multiplexed Protein Immunoassays for Cancer Diagnostics

    PubMed Central

    Dixit, Chandra K.; Kadimisetty, Karteek; Otieno, Brunah A.; Tang, Chi; Malla, Spundana; Krause, Colleen E.; Rusling, James F.

    2015-01-01

    Early detection and reliable diagnostics are keys to effectively design cancer therapies with better prognoses. Simultaneous detection of panels of biomarker proteins holds great promise as a general tool for reliable cancer diagnostics. A major challenge in designing such a panel is to decide upon a coherent group of biomarkers which have higher specificity for a given type of cancer. The second big challenge is to develop test devices to measure these biomarkers quantitatively with high sensitivity and specificity, such that there are no interferences from the complex serum or tissue matrices. Lastly, integrating all these tests into a technology that doesn’t require exclusive training to operate, and can be used at point-of-care (POC) is another potential bottleneck in futuristic cancer diagnostics. In this article, we review electrochemistry-based tools and technologies developed and/or used in our laboratories to construct low-cost microfluidic protein arrays for highly sensitive detection of the panel of cancer-specific biomarkers with high specificity and at the same time have the potential to be translated into a POC. PMID:26525998

  2. Electrochemistry-based approaches to low cost, high sensitivity, automated, multiplexed protein immunoassays for cancer diagnostics.

    PubMed

    Dixit, Chandra K; Kadimisetty, Karteek; Otieno, Brunah A; Tang, Chi; Malla, Spundana; Krause, Colleen E; Rusling, James F

    2016-01-21

    Early detection and reliable diagnostics are key to effectively designing cancer therapies with better prognoses. The simultaneous detection of panels of biomarker proteins holds great promise as a general tool for reliable cancer diagnostics. A major challenge in designing such a panel is deciding upon a coherent group of biomarkers with higher specificity for a given type of cancer. The second big challenge is developing test devices that measure these biomarkers quantitatively with high sensitivity and specificity, such that there is no interference from complex serum or tissue matrices. Lastly, integrating all these tests into a technology that does not require exclusive training to operate, and can be used at the point of care (POC), is another potential bottleneck in futuristic cancer diagnostics. In this article, we review electrochemistry-based tools and technologies developed and/or used in our laboratories to construct low-cost microfluidic protein arrays for the highly sensitive detection of a panel of cancer-specific biomarkers with high specificity, which at the same time have the potential to be translated into POC applications.

  3. A model for studying the energetics of sustained high frequency firing

    PubMed Central

    Morris, Catherine E.

    2018-01-01

    Regulating membrane potential and synaptic function contributes significantly to the energetic costs of brain signaling, but the relative costs of action potentials (APs) and synaptic transmission during high-frequency firing are unknown. The continuous high-frequency (200–600 Hz) electric organ discharge (EOD) of Eigenmannia, a weakly electric fish, underlies its electrosensing and communication. EODs reflect APs fired by the muscle-derived electrocytes of the electric organ (EO). Cholinergic synapses at the excitable posterior membranes of the elongated electrocytes control AP frequency. Based on whole-fish O2 consumption, ATP demand per EOD-linked AP increases exponentially with AP frequency. Continual EOD-AP generation implies, first, that ion homeostatic processes reliably counteract any dissipation of the posterior membrane's ENa and EK and, second, that high-frequency synaptic activation is reliably supported. Both of these processes require energy. To facilitate an exploration of the expected energy demands of each, we modify a previous excitability model and include synaptic currents able to drive APs at frequencies as high as 600 Hz. Synaptic stimuli are modeled as pulsatile cation conductance changes, with or without a small (sustained) background conductance. Over the full species range of EOD frequencies (200–600 Hz) we calculate frequency-dependent “Na+-entry budgets” for an electrocyte AP as a surrogate for the required 3Na+/2K+-ATPase activity. We find that the cost per AP of maintaining constant-amplitude APs increases nonlinearly with frequency, whereas the cost per AP for synaptic input current is essentially constant. This predicts that Na+ channel density should correlate positively with EOD frequency, whereas AChR density should be the same across fish. Importantly, the calculated costs (inferred from Na+ entry through Nav and ACh channels) for electrocyte APs as frequencies rise are much less than expected from published whole-fish EOD-linked O2 consumption. For APs at increasingly high frequencies, we suggest that EOD-related costs external to electrocytes (including the packaging of synaptic transmitter) substantially exceed the direct cost of electrocyte ion homeostasis. PMID:29708986
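The conversion from a Na+-entry budget to pump ATP demand follows from the 3Na+/2K+-ATPase stoichiometry: one ATP extrudes three Na+ ions. A back-of-envelope sketch (the per-AP charge value below is an invented placeholder, not the paper's measurement):

```python
# Illustrative only: convert per-AP Na+ entry into 3Na+/2K+-ATPase ATP
# demand. The 10 nC figure is an assumed example, not a measured value.
E_CHARGE = 1.602e-19  # coulombs per elementary charge

def atp_per_ap(na_charge_coulombs):
    """ATP needed to pump out the Na+ that entered during one AP."""
    na_ions = na_charge_coulombs / E_CHARGE
    return na_ions / 3.0  # pump stoichiometry: 3 Na+ extruded per ATP

def atp_per_second(na_charge_coulombs, firing_hz):
    """Homeostatic ATP demand rate at a given firing frequency."""
    return atp_per_ap(na_charge_coulombs) * firing_hz

print(atp_per_second(10e-9, 200))  # 200 Hz firing
print(atp_per_second(10e-9, 600))  # 600 Hz firing
```

With a fixed per-AP budget the pump cost scales linearly with frequency; the abstract's point is that the measured per-AP budget itself grows with frequency, making total cost rise nonlinearly.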

  4. Evaluation of the ASCO Value Framework for Anticancer Drugs at an Academic Medical Center.

    PubMed

    Wilson, Leslie; Lin, Tracy; Wang, Ling; Patel, Tanuja; Tran, Denise; Kim, Sarah; Dacey, Katie; Yuen, Courtney; Kroon, Lisa; Brodowy, Bret; Rodondi, Kevin

    2017-02-01

    Anticancer drug prices have increased by an average of 12% each year from 1996 to 2014. A major concern is that the increasing cost and responsibility of evaluating treatment options are being shifted to patients. This research compared 2 value-based pricing models that were being considered for use at the University of California, San Francisco (UCSF) Medical Center to address the growing burden of high-cost cancer drugs while improving patient-centered care. The Medication Outcomes Center (MOC) in the Department of Clinical Pharmacy, UCSF School of Pharmacy, focuses on assessing the value of medication-related health care interventions and disseminating findings to the UCSF Medical Center. The High Cost Oncology Drug Initiative at the MOC aims to assess and adopt tools for the critical assessment and amelioration of high-cost cancer drugs. The American Society of Clinical Oncology (ASCO) Value Framework (2016 update) and a cost-effectiveness analysis (CEA) framework were identified as potential tools for adoption. To assess 1 prominent value framework, the study investigators (a) asked 8 clinicians to complete the ASCO Value Framework for 11 anticancer medications selected by the MOC; (b) reviewed CEAs assessing the drugs; (c) generated descriptive statistics; and (d) analyzed inter-rater reliability, convergent validity, and ranking consistency. On a scale of -20 to 180, the mean ASCO net health benefit (NHB) total score across the 11 drugs ranged from 7.6 (SD = 7.8) to 53 (SD = 9.8). The kappa coefficient (κ) for NHB scores across raters was 0.11, which is categorized as "slightly reliable." The combined κ score was 0.22, which is interpreted as low to fair inter-rater reliability. Convergent validity analysis indicated that the correlation between NHB scores and CEA-based incremental cost-effectiveness ratios (ICERs) was low (-0.215). Ranking of ICERs, ASCO scores, and wholesale acquisition costs produced different results across frameworks. The ASCO Value Framework requires further specificity before use in a clinical setting, since it currently results in low inter-rater reliability and validity. Furthermore, ASCO scores were unable to discriminate between the drugs providing the most and least value. The evaluation identifies specific areas of weakness that can be addressed in future updates of the ASCO framework to improve usability. Meanwhile, the UCSF Medical Center should rely on CEAs, which are highly accessible for the highlighted cancer drugs. The MOC's role can include summarizing and disseminating available CEA studies for interpretation by clinicians and financial counselors around drug value. Funding for this research was contributed by the University of California, San Francisco, Medical Center Campus Strategic Initiative Program. The authors have no conflicts of interest to disclose. Study concept and design were contributed primarily by Wilson, along with Wang and Patel. Kim, Dacey, and Yuen collected the data, and data interpretation was performed by Wilson and Lin. The manuscript was written by Wilson, Lin, Wang, and Tran and revised by Lin, Rodondi, Brodowy, and Kroon.
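The "slight" and "fair" reliability labels above come from Cohen's kappa, the standard chance-corrected agreement statistic. A minimal two-rater sketch (the example ratings are invented, not the study's data):

```python
# Cohen's kappa: observed agreement corrected for the agreement two raters
# would reach by chance. Example ratings below are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # proportion of items where the two raters agree
    p_observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # chance agreement if each rater assigned labels independently
    p_expected = sum(counts_a[k] * counts_b[k]
                     for k in set(counts_a) | set(counts_b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

a = ["high", "high", "low", "low", "high", "low"]
b = ["high", "low",  "low", "low", "high", "high"]
print(round(cohens_kappa(a, b), 3))  # -> 0.333
```

By the usual Landis-Koch style benchmarks, values near 0.11 and 0.22 (as reported above) sit at the "slight" to "fair" end of the scale, which is why the authors judge the framework's inter-rater reliability insufficient for clinical use.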

  5. Development of 1-D Shake Table Testing Facility for Liquefaction Studies

    NASA Astrophysics Data System (ADS)

    Unni, Kartha G.; Beena, K. S.; Mahesh, C.

    2018-04-01

    One of the major challenges researchers face in the field of earthquake geotechnical engineering in India is the high cost of laboratory infrastructure. Developing a reliable and low-cost experimental setup is attempted in this research. The paper details the design and development of a uniaxial shake table and a data acquisition system with accelerometers and pore-water-pressure sensors that can be used for liquefaction studies.

  6. Quality assurance and reliability in the Japanese electronics industry

    NASA Astrophysics Data System (ADS)

    Pecht, Michael; Boulton, William R.

    1995-02-01

    Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.

  7. Quality assurance and reliability in the Japanese electronics industry

    NASA Technical Reports Server (NTRS)

    Pecht, Michael; Boulton, William R.

    1995-01-01

    Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.

  8. A modular and cost-effective superconducting generator design for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Keysan, Ozan; Mueller, Markus

    2015-03-01

    Superconducting generators have the potential to reduce the tower head mass for large (∼10 MW) offshore wind turbines. However, a high temperature superconductor generator should be as reliable as conventional generators for successful entry into the market. Most of the proposed designs use the superconducting synchronous generator concept, which has a higher cost than conventional generators and suffers from reliability issues. In this paper, a novel claw pole type of superconducting machine is presented. The design has a stationary superconducting field winding, which simplifies the design and increases the reliability. The machine can be operated in independent modules; thus even if one of the sections fails, the rest can operate until the next planned maintenance. Another advantage of the design is the very low superconducting wire requirement; a 10 MW, 10 rpm design is presented which uses 13 km of MgB2 wire at 30 K. The outer diameter of the machine is 6.63 m and it weighs 184 tonnes including the structural mass. The design is thought to be a good candidate for entering the renewable energy market, with its low cost and robust structure.
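The claim that independent modules let the machine keep operating after a single failure can be quantified with a simple k-of-n availability calculation. This is an illustrative sketch with assumed module counts and availabilities, not figures from the paper:

```python
# Illustrative k-of-n availability for a modular generator: surviving
# modules keep producing after one fails. Module count and per-module
# availability below are assumptions for the example.
from math import comb

def prob_at_least(n_modules, k, availability):
    """P(at least k of n independent modules are available) - binomial tail."""
    return sum(comb(n_modules, m) * availability**m
               * (1 - availability)**(n_modules - m)
               for m in range(k, n_modules + 1))

# e.g. 4 independent modules, each 97% available:
print(prob_at_least(4, 4, 0.97))  # full power
print(prob_at_least(4, 3, 0.97))  # at least 75% power until maintenance
```

The gap between the two probabilities is the design's selling point: tolerating one failed module until the next planned maintenance raises effective availability well above the all-modules-working case.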

  9. Body postures and patterns as amplifiers of physical condition.

    PubMed Central

    Taylor, P W; Hasson, O; Clark, D L

    2000-01-01

    The question of why receivers accept a selfish signaller's message as reliable or 'honest' has fuelled ample controversy in discussions of communication. The handicap mechanism is now widely accepted as a potent constraint on cheating. Handicap signals are deemed reliable by their costs: signallers must choose between investing in the signal or in other aspects of fitness. Accordingly, resources allocated to the signal come to reflect the signaller's fitness budget and, on average, cheating is uneconomic. However, that signals may also be deemed reliable by their design, regardless of costs, is not widely appreciated. Here we briefly describe indices and amplifiers, reliable signals that may be essentially cost free. Indices are reliable because they bear a direct association with the signalled quality rather than costs. Amplifiers do not directly provide information about signaller quality, but they facilitate assessment by increasing the apparency of pre-existing cues and signals that are associated with quality. We present results of experiments involving a jumping spider (Plexippus paykulli) to illustrate how amplifiers can facilitate assessment of cues associated with physical condition without invoking the costs required for handicap signalling. PMID:10853735

  10. The CRAC cohort model: A computerized low cost registry of interventional cardiology with daily update and long-term follow-up.

    PubMed

    Rangé, G; Chassaing, S; Marcollet, P; Saint-Étienne, C; Dequenne, P; Goralski, M; Bardiére, P; Beverilli, F; Godillon, L; Sabine, B; Laure, C; Gautier, S; Hakim, R; Albert, F; Angoulvant, D; Grammatico-Guillon, L

    2018-05-01

    To assess the reliability and low cost of a computerized interventional cardiology (IC) registry designed to prospectively and systematically collect high-quality data for all consecutive coronary patients referred for coronary angiogram and/or coronary angioplasty. Rigorous clinical practice assessment is a key factor in improving prognosis in IC. A prospective and permanent registry could achieve this goal but, presumably, at high cost and with a low level of data quality. A multicentric IC registry (the CRAC registry), fully integrated into the usual coronary activity reporting software, was started in the Centre-Val de Loire (CVL) region of France in 2014. Quality assessment of the CRAC registry was conducted in five IC cath labs of the CVL region, from January 1st to December 31st, 2014. Quality of the collected data was evaluated by measuring procedure exhaustivity (compared with data from the hospital information system), data completeness (quality controls) and data consistency (checking complete medical charts as the gold standard). Cost per procedure (global registry operating cost/number of collected procedures) was also estimated. The CRAC model provided a high quality level, with 98.2% procedure exhaustivity, 99.6% data completeness and 89% data consistency. The operating cost per procedure was €14.70 ($16.51) for data collection and quality control, including ST-segment elevation myocardial infarction (STEMI) preadmission information and one-year follow-up after angioplasty. This integrated computerized IC registry led to the construction of an exhaustive, reliable and low-cost database including all coronary patients entering the participating IC centers in the CVL region. The solution will be deployed in other French regions, setting up a national IC database for coronary patients in 2020: France PCI. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
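The registry's headline metrics reduce to two simple ratios: cost per procedure (global operating cost divided by collected procedures, the €14.70 figure above) and exhaustivity (collected procedures over those recorded in the hospital information system). A sketch with hypothetical counts, since the abstract does not report the raw numbers:

```python
# The two ratio metrics from the abstract; the counts are placeholders,
# only the formulas (cost/procedures, collected/expected) come from the text.
def cost_per_procedure(global_operating_cost_eur, n_procedures):
    return global_operating_cost_eur / n_procedures

def exhaustivity_pct(collected, expected_from_hospital_system):
    return 100.0 * collected / expected_from_hospital_system

n = 10000  # hypothetical annual procedure count
print(cost_per_procedure(147000.0, n))  # -> 14.7 (euros per procedure)
print(exhaustivity_pct(9820, n))        # -> 98.2 (percent)
```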

  11. Development of SiC Large Tapered Crystal Growth

    NASA Technical Reports Server (NTRS)

    Neudeck, Phil

    2010-01-01

    The majority of the very large potential benefits of wide-band-gap semiconductor power electronics have NOT been realized, due in large part to the high cost and high defect density of commercial wafers. Despite 20 years of development, the present SiC wafer growth approach has yet to deliver the majority of SiC's inherent performance and cost benefits to power systems. Commercial SiC power devices are significantly de-rated in order to function reliably, due to the adverse effects of SiC crystal dislocation defects (thousands per sq cm) in the SiC wafer.

  12. An improved classification tree analysis of high cost modules based upon an axiomatic definition of complexity

    NASA Technical Reports Server (NTRS)

    Tian, Jianhui; Porter, Adam; Zelkowitz, Marvin V.

    1992-01-01

    Identification of high cost modules has been viewed as one mechanism to improve overall system reliability, since such modules tend to produce more than their share of problems. A decision tree model was used to identify such modules. In this paper, a previously developed axiomatic model of program complexity is merged with the previously developed decision tree process to improve the ability to identify such modules. This improvement was tested using data from the NASA Software Engineering Laboratory.

  13. Engine for the next-generation launcher

    NASA Astrophysics Data System (ADS)

    Beichel, Rudi; Grey, Jerry

    1995-05-01

    The proposed dual-fuel/dual-expansion engine for the Reusable Launch Vehicle (RLV) could satisfy the vehicle's need for a high-performance, lightweight, low-cost, maintainable engine. The features that make the dual-fuel/dual-expansion engine a prime candidate for the RLV include oxygen-rich combustion, a high-pressure staged-combustion cycle and dual-fuel operation. Cost-reducing, reliability-enhancing innovations, such as the elimination of regenerative cooling, the elimination of gimbaling and the replacement of kerosene-based hydrocarbon fuel with subcooled propane, have also made this type of engine an attractive option.

  14. The 747 primary flight control systems reliability and maintenance study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analyses for the separate control functions are presented. The analysis makes use of a NASA computer program which calculates the reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and costs provide a baseline for use in trade studies of future flight control system designs.
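The kind of redundant-system arithmetic such a program performs can be sketched with the two textbook building blocks: a series chain fails if any part fails, and a redundant (parallel) group fails only if every channel fails. This is a generic illustration with invented part reliabilities, not NASA's actual program:

```python
# Generic series/parallel reliability building blocks (invented numbers).
def series_reliability(part_reliabilities):
    """A chain works only if every part works: product of reliabilities."""
    r = 1.0
    for p in part_reliabilities:
        r *= p
    return r

def parallel_reliability(channel_reliabilities):
    """A redundant group works if any channel works: 1 - prod(failures)."""
    fail = 1.0
    for r in channel_reliabilities:
        fail *= (1.0 - r)
    return 1.0 - fail

# e.g. one control channel of three parts, then that channel triplicated
channel = series_reliability([0.99, 0.995, 0.98])
system = parallel_reliability([channel] * 3)
print(channel, system)
```

Even a modest single-channel reliability, cubed through triplication of the failure probability, yields a far higher system figure, which is why redundancy analysis dominates flight-control trade studies.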

  15. Assuring Electronics Reliability: What Could and Should Be Done Differently

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    The following “ten commandments” for the predicted and quantified reliability of aerospace electronic and photonic products are addressed and discussed: 1) The best product is the best compromise between the needs for reliability, cost effectiveness and time-to-market; 2) Reliability cannot be low, need not be higher than necessary, but has to be adequate for a particular product; 3) When reliability is imperative, the ability to quantify it is a must, especially if optimization is considered; 4) One cannot design a product with quantified, optimized and assured reliability by limiting the effort to highly accelerated life testing (HALT), which does not quantify reliability; 5) Reliability is conceived at the design stage and should be taken care of, first of all, at this stage, when a “genetically healthy” product should be created; reliability evaluations and assurances cannot be delayed until the product is fabricated and shipped to the customer, i.e., cannot be left to the prognostics-and-health-monitoring/managing (PHM) stage; it is too late at this stage to change the design or the materials for improved reliability; that is why, when reliability is imperative, users re-qualify parts to assess their lifetime and use redundancy to build a highly reliable system out of insufficiently reliable components; 6) Design, fabrication, qualification and PHM efforts should consider and be specific for particular products and their most likely actual, or at least anticipated, application(s); 7) Probabilistic design for reliability (PDfR) is an effective means for improving the state of the art in the field: nothing is perfect, and the difference between an unreliable product and a robust one is “merely” the probability of failure (PoF); 8) Highly cost-effective and highly focused failure-oriented accelerated testing (FOAT), geared to a particular pre-determined reliability model and aimed at understanding the physics of failure anticipated by this model, is an important constituent part of the PDfR effort; 9) Predictive modeling (PM) is another important constituent of the PDfR approach; in combination with FOAT, it is a powerful means to carry out sensitivity analyses (SA) and to quantify and nearly eliminate failures (the “principle of practical confidence”); 10) Consistent, comprehensive and physically meaningful PDfR can effectively contribute to the most feasible and effective qualification test (QT) methodologies, practices and specifications. The general concepts addressed in the paper are illustrated by numerical examples. It is concluded that although the suggested concept is promising and fruitful, further research, refinement and validation are needed before this concept becomes widely accepted by the engineering community and implemented in practice. It is important that this novel approach be introduced gradually, whenever feasible and appropriate, in addition to, and in some situations even instead of, the currently employed types and modifications of the forty-year-old HALT.
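The probability-of-failure idea in commandment 7 is often illustrated with the classic stress-strength interference model: if demand ("stress") and capacity ("strength") are normally distributed, PoF is the probability that strength falls below stress. A hedged sketch, with invented values, not the author's numerical examples:

```python
# Stress-strength interference sketch (values are assumptions): with normal
# stress and strength, PoF = P(strength < stress) = Phi(-margin / sigma).
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def probability_of_failure(mu_strength, sd_strength, mu_stress, sd_stress):
    margin_mean = mu_strength - mu_stress           # mean safety margin
    margin_sd = sqrt(sd_strength**2 + sd_stress**2) # sd of the margin
    return normal_cdf(-margin_mean / margin_sd)

# e.g. strength 100 +/- 10 units against stress 60 +/- 10 units
print(probability_of_failure(100.0, 10.0, 60.0, 10.0))
```

The point of PDfR in this framing is that the designer chooses the margin (and hence the PoF) deliberately rather than discovering it after fielding the product.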

  16. COTS-Based Fault Tolerance in Deep Space: Qualitative and Quantitative Analyses of a Bus Network Architecture

    NASA Technical Reports Server (NTRS)

    Tai, Ann T.; Chau, Savio N.; Alkalai, Leon

    2000-01-01

    Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to the long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of the system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not be originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.

  17. Ultrastrong Polyoxyzole Nanofiber Membranes for Dendrite-Proof and Heat-Resistant Battery Separators.

    PubMed

    Hao, Xiaoming; Zhu, Jian; Jiang, Xiong; Wu, Haitao; Qiao, Jinshuo; Sun, Wang; Wang, Zhenhua; Sun, Kening

    2016-05-11

    Polymeric nanomaterials emerge as key building blocks for engineering materials in a variety of applications. In particular, the high modulus polymeric nanofibers are suitable to prepare flexible yet strong membrane separators to prevent the growth and penetration of lithium dendrites for safe and reliable high energy lithium metal-based batteries. High ionic conductance, scalability, and low cost are other required attributes of the separator important for practical implementations. Available materials so far are difficult to comply with such stringent criteria. Here, we demonstrate a high-yield exfoliation of ultrastrong poly(p-phenylene benzobisoxazole) nanofibers from the Zylon microfibers. A highly scalable blade casting process is used to assemble these nanofibers into nanoporous membranes. These membranes possess ultimate strengths of 525 MPa, Young's moduli of 20 GPa, thermal stability up to 600 °C, and impressively low ionic resistance, enabling their use as dendrite-suppressing membrane separators in electrochemical cells. With such high-performance separators, reliable lithium-metal based batteries operated at 150 °C are also demonstrated. Those polyoxyzole nanofibers would enrich the existing library of strong nanomaterials and serve as a promising material for large-scale and cost-effective safe energy storage.

  18. High Reliability Prototype Quadrupole for the Next Linear Collider

    NASA Astrophysics Data System (ADS)

    Spencer, C. M.

    2001-01-01

    The Next Linear Collider (NLC) will require over 5600 magnets, each of which must be highly reliable and/or quickly repairable in order that the NLC reach its 85% overall availability goal. A multidiscipline engineering team was assembled at SLAC to develop a more reliable electromagnet design than had historically been achieved at SLAC. This team carried out a Failure Mode and Effects Analysis (FMEA) on a standard SLAC quadrupole magnet system. They overcame a number of longstanding design prejudices, producing 10 major design changes. This paper describes how a prototype magnet was constructed and the extensive testing carried out on it to prove full functionality with an improvement in reliability. The magnet's fabrication cost is compared to the cost of a magnet with the same requirements made in the historic SLAC way. The NLC will use over 1600 of these 12.7 mm bore quadrupoles, with a range of integrated strengths from 0.6 to 132 Tesla, a maximum gradient of 135 Tesla per meter, an adjustment range of 0 to -20%, and core lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. A magnetic measurement set-up has been developed that can measure sub-micron shifts of a magnetic center. The prototype satisfied the center-shift requirement over the full range of integrated strengths.

  19. The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods

    ERIC Educational Resources Information Center

    Yeh, Stuart S.

    2012-01-01

    This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…

  20. Current medical staff governance and physician sensemaking: a formula for resistance to high reliability.

    PubMed

    Flitter, Marc A; Riesenmy, Kelly Rouse; van Stralen, Daved

    2012-01-01

    To offer a theoretical explanation for observed physician resistance to, and rejection of, high reliability patient safety initiatives. A grounded-theory qualitative approach, utilizing the organizational theory of sensemaking, provided the foundation for the inductive and deductive reasoning employed to analyze medical staff rejection of two successfully performing high reliability programs at separate hospitals. Physician behaviors resistant to patient-centric high reliability processes were traced to provider-centric physician sensemaking. Research with the advantages that prospective studies have over this retrospective investigation is needed to evaluate the potential for overcoming physician resistance to innovation implementation, employing strategies based upon these findings and sensemaking theory in general. If hospitals are to emulate high reliability industries that successfully manage environments of extreme hazard, physicians must be fully integrated into the complex teams required to accomplish this goal. Reforming health care through high reliability organizing, with its attendant continuous focus on patient-centric processes, offers a distinct alternative to efforts directed primarily at reforming health care insurance. It is by changing how health care is provided that true cost efficiencies can be achieved. Technology and the insights of organizational science present the opportunity to replace the current emphasis on privileged information with collective tools capable of providing quality and safety in health care. The fictions that have sustained a provider-centric health care system have been challenged. The benefits of patient-centric care should be obtainable.

  1. In Space Nuclear Power as an Enabling Technology for Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Sackheim, Robert L.; Houts, Michael

    2000-01-01

    Deep space exploration missions, both scientific and Human Exploration and Development of Space (HEDS), appear to be as weight-limited today as they would have been 35 years ago. Right behind the weight constraint is the nearly equally important mission limitation of cost. Launch vehicles, upper stages and in-space propulsion systems also cost about the same today, with the same efficiency, as they have for many years (excluding the impact of inflation). These dual mission constraints combine to force either very expensive mega-system missions or very lightweight but high-risk/low-margin planetary spacecraft designs, such as the recent unsuccessful attempts at an extremely low cost mission to Mars during the 1998-99 opportunity (i.e., the Mars Climate Orbiter and the Mars Polar Lander). When one considers spacecraft missions to the outer heliopause or even the outer planets, the enormous weight and cost constraints impose even more daunting concerns for mission cost, risk and the ability to establish adequate mission margins for success. This paper discusses the benefits of using a safe in-space nuclear reactor as the basis for providing both sufficient electric power and high-performance space propulsion, which will greatly reduce mission risk and significantly increase weight (IMLEO) and cost margins. Weight and cost margins are increased by enabling much higher payload fractions and redundant design features for a given launch vehicle (a higher payload fraction of IMLEO). The paper also discusses and summarizes recent advances in nuclear reactor technology and the safety of modern reactor designs, operating practice and experience, as well as advances in reactor-coupled power generation and high-performance nuclear thermal and electric propulsion technologies. It is shown that these nuclear power and propulsion technologies are major enabling capabilities for higher reliability, higher margin and lower cost deep space mission designs that can reliably reach the outer planets for scientific exploration.

  2. Evaluating alternative service contracts for medical equipment.

    PubMed

    De Vivo, L; Derrico, P; Tomaiuolo, D; Capussotto, C; Reali, A

    2004-01-01

    Managing medical equipment is a formidable task that has to be pursued while maximizing benefits within a highly regulated and cost-constrained environment. Clinical engineers are uniquely equipped to determine which policies are the most efficacious and cost-effective for a health care institution to ensure that medical devices meet appropriate standards of safety, quality and performance. Part of this support is a strategy for preventive and corrective maintenance. This paper describes an alternative scheme of OEM (Original Equipment Manufacturer) service contract for medical equipment that combines manufacturers' technical support and in-house maintenance. An efficient and efficacious organization can reduce the high cost of medical equipment maintenance while raising reliability and quality. The methodology and results are discussed.

  3. Space Transportation Booster Engine Configuration Study. Volume 3: Program Cost estimates and work breakdown structure and WBS dictionary

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The objective of the Space Transportation Booster Engine (STBE) Configuration Study is to contribute to the ALS development effort by providing highly reliable, low-cost booster engine concepts for both expendable and reusable rocket engines. Its specific objectives were: (1) to identify engine development configurations which enhance vehicle performance and provide operational flexibility at low cost; and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.

  4. Plastic packaged microcircuits: Quality, reliability, and cost issues

    NASA Astrophysics Data System (ADS)

    Pecht, Michael G.; Agarwal, Rakesh; Quearry, Dan

    1993-12-01

    Plastic encapsulated microcircuits (PEMs) find their main application in commercial and telecommunication electronics. The advantages of PEMs in cost, size, weight, performance, and market lead-time, have attracted 97% of the market share of worldwide microcircuit sales. However, PEMs have always been resisted in US Government and military applications due to the perception that PEM reliability is low. This paper surveys plastic packaging with respect to the issues of reliability, market lead-time, performance, cost, and weight as a means to guide part-selection and system-design.

  5. Marginal Cost Pricing in a World without Perfect Competition: Implications for Electricity Markets with High Shares of Low Marginal Cost Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.; Clark, Kara; Bloom, Aaron P.

    A common approach to regulating electricity is through auction-based competitive wholesale markets. The goal of this approach is to provide a reliable supply of power at the lowest reasonable cost to the consumer. This necessitates market structures and operating rules that ensure revenue sufficiency for all generators needed for resource adequacy purposes. Wholesale electricity markets employ marginal-cost pricing to provide cost-effective dispatch such that resources are compensated for their operational costs. However, marginal-cost pricing alone cannot guarantee cost recovery outside of perfect competition, and electricity markets have at least six attributes that preclude them from functioning as perfectly competitive markets. These attributes include market power, externalities, public-good attributes, lack of storage, wholesale price caps, and an ineffective demand curve. Until (and unless) these failures are ameliorated, some form of corrective action will be necessary to improve market efficiency so that prices can correctly reflect the needed level of system reliability. Many of these options necessarily involve some form of administrative or out-of-market action, such as scarcity pricing, capacity payments, bilateral or other out-of-market contracts, or some hybrid combination. A key focus of these options is to create a connection between the electricity market and long-term reliability/loss-of-load-expectation targets, which are inherently disconnected in the native markets because of the aforementioned market failures. The addition of variable generation resources can exacerbate the revenue sufficiency and resource adequacy concerns caused by these underlying market failures. Because variable generation resources have near-zero marginal costs, they effectively suppress energy prices and reduce the capacity factors of conventional generators through the merit-order effect in the simplest case of a convex market; non-convexities can also suppress prices.
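    The merit-order effect described above can be sketched in a few lines of Python. This is an illustrative toy, not the study's model: generator costs, capacities, and the load level are hypothetical. Dispatching in ascending marginal-cost order, the last unit needed sets the clearing price; adding a zero-marginal-cost resource pushes costlier units out of the money.

```python
# Toy merit-order dispatch (illustrative; all numbers are hypothetical).

def clearing_price(generators, load):
    """Dispatch in ascending marginal-cost order; return the marginal
    cost of the last unit needed to serve the load."""
    remaining = load
    for mc, capacity in sorted(generators):
        remaining -= capacity
        if remaining <= 0:
            return mc
    raise ValueError("insufficient capacity")

# (marginal cost $/MWh, capacity MW)
thermal = [(20, 400), (35, 300), (60, 200)]
load = 600

p_before = clearing_price(thermal, load)              # the $35 unit is marginal
p_after = clearing_price([(0, 250)] + thermal, load)  # wind displaces it

print(p_before, p_after)  # price falls once the zero-cost resource enters
```

With the price suppressed, the remaining thermal units earn smaller margins over fewer dispatched hours, which is exactly the revenue-sufficiency concern the abstract raises.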

  6. Enhancing ultra-high CPV passive cooling using least-material finned heat sinks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micheli, Leonardo, E-mail: lm409@exeter.ac.uk; Mallick, Tapas K., E-mail: T.K.Mallick@exeter.ac.uk; Fernandez, Eduardo F., E-mail: E.Fernandez-Fernandez2@exeter.ac.uk

    2015-09-28

    Ultra-high concentrating photovoltaic (CPV) systems aim to increase the cost-competitiveness of CPV by raising concentrations above 2000 suns. In this work, the design of a heat sink for ultra-high CPV applications is presented. For the first time, the least-material approach, widely used in electronics to maximize thermal dissipation while minimizing the weight of the heat sink, has been applied to CPV. This method has the potential to further decrease the cost of the technology while keeping the multijunction cell within its operating temperature range. The design procedure is described in the paper, and the results of a thermal simulation are shown to prove the reliability of the solution. A cost prediction is also reported: a cost of $0.151/W_p is expected for a passive least-material heat sink developed for 4000x applications.

  7. A GIS-based assessment of the suitability of SCIAMACHY satellite sensor measurements for estimating reliable CO concentrations in a low-latitude climate.

    PubMed

    Fagbeja, Mofoluso A; Hill, Jennifer L; Chatterton, Tim J; Longhurst, James W S

    2015-02-01

    An assessment was conducted of the reliability of Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite sensor measurements for interpolating tropospheric concentrations of carbon monoxide in the low-latitude climate of the Niger Delta region of Nigeria. Monthly SCIAMACHY carbon monoxide (CO) column measurements from January 2003 to December 2005 were interpolated using the ordinary kriging technique. The spatio-temporal variations observed in the reliability were attributed to proximity to the Atlantic Ocean, seasonal variations in the intensity of rainfall and relative humidity, the presence of dust particles from the Sahara desert, industrialization in Southwest Nigeria, and biomass burning during the dry season in Northern Nigeria. Spatial reliabilities of 74 and 42 % were observed for the inland and coastal areas, respectively. Temporally, average reliabilities of 61 and 55 % occurred during the dry and wet seasons, respectively. Reliability in the inland and coastal areas was 72 and 38 % during the wet season, and 75 and 46 % during the dry season, respectively. Based on these results, the WFM-DOAS SCIAMACHY CO data product used for this study is relevant for assessing CO concentrations in low-latitude developing countries that cannot afford monitoring infrastructure because of its high cost. Although the SCIAMACHY sensor is no longer available, it provided cost-effective, reliable, and accessible data that could support air quality assessment in developing countries.
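    For readers unfamiliar with the interpolation step, ordinary kriging reduces to solving a small linear system for weights that sum to one. The sketch below is a minimal NumPy illustration: the exponential variogram and its sill/range parameters are assumptions for the example, not values fitted to the SCIAMACHY CO data.

```python
import numpy as np

# Minimal ordinary-kriging sketch (variogram model and parameters assumed).

def variogram(h, sill=1.0, rng=2.0):
    """Exponential semivariogram, an assumed model for this sketch."""
    return sill * (1.0 - np.exp(-h / rng))

def ordinary_krige(xy, z, x0):
    """Estimate the field at x0 from observations z at locations xy."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))      # kriging system with unbiasedness row
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)[:n]    # weights; constrained to sum to 1
    return float(w @ z)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([100.0, 120.0, 110.0, 130.0])   # CO columns, arbitrary units
est = ordinary_krige(xy, z, np.array([0.5, 0.5]))
```

Because the weights sum to one, the estimate is an unbiased weighted average of the observations; at the center of this symmetric square it is simply their mean.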

  8. Note: High temperature pulsed solenoid valve.

    PubMed

    Shen, Wei; Sulkes, Mark

    2010-01-01

    We have developed a high temperature pulsed solenoid valve with reliable long-term operation to at least 400 degrees C. As in earlier published designs, a needle extension sealing a heated orifice is lifted via solenoid actuation; the solenoid is thermally isolated from the heated orifice region. In this new implementation, superior sealing and reliability were attained by choosing a solenoid that produces considerably larger lifting forces on the magnetically actuated plunger. This property makes sealing and reliability easy to attain, albeit with some tradeoff in attainable gas pulse durations. The cost of the solenoid valve employed is quite low and the necessary machining quite simple. Our ultimate level of sealing was attained by a simple modification to the polished seal at the needle tip. The same sealing-tip modification could easily be applied to one of the earlier high temperature valve designs, which could improve the attainability and tightness of sealing in those implementations.

  9. The replacement of dry heat in generic reliability assurance requirements for passive optical components

    NASA Astrophysics Data System (ADS)

    Ren, Xusheng; Qian, Longsheng; Zhang, Guiyan

    2005-12-01

    According to Generic Reliability Assurance Requirements for Passive Optical Components GR-1221-CORE (Issue 2, January 1999), reliability determination tests were conducted on several kinds of passive optical components used in uncontrolled environments. The test conditions of the High Temperature Storage (Dry Heat) Test and the Damp Heat Test are given in the table below; except for humidity, all conditions are the same. To save test time and cost, after a series of comparative tests, replacing the dry heat test is discussed. Considering the failure mechanisms of passive optical components under dry heat and damp heat, comparative dry heat and damp heat tests were conducted on passive optical components (including DWDM, CWDM, coupler, isolator, and mini isolator), and the test results for the isolator are listed. Telcordia testing tests not only the reliability of passive optical components but also the patience of the experimenter: its cost in money, manpower, and materials, and especially in time, is a heavy burden for a company. After a series of tests, we found that damp heat can effectively test the reliability of passive optical components, and an equipment manufacturer, in agreement with the component manufacturer, could omit the dry heat test if a damp heat test is performed first and passed.

  10. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H²-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective, degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes demonstrate the reliability of this method compared with one based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm because of the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
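    The kind of quadratic cost being evaluated can be illustrated with a small example. The sketch below computes an H2-like cost J = trace(BᵀXB), where X solves the Lyapunov equation AᵀX + XA + Q = 0, for an assumed stable closed loop. A dense Kronecker-product solve stands in for the report's Padé-series algorithm, and the system matrices are illustrative only.

```python
import numpy as np

# Evaluate J = trace(B^T X B) with A^T X + X A + Q = 0 (small, dense sketch).

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # assumed stable closed-loop dynamics
B = np.array([[0.0], [1.0]])              # disturbance input matrix
Q = np.eye(2)                             # state weighting

n = A.shape[0]
I = np.eye(n)
# With row-major flattening: (A^T X).flatten() = kron(A^T, I) @ x
# and (X A).flatten() = kron(I, A^T) @ x, so the Lyapunov equation
# becomes the linear system L x = -vec(Q).
L = np.kron(A.T, I) + np.kron(I, A.T)
X = np.linalg.solve(L, -Q.flatten()).reshape(n, n)
J = float(np.trace(B.T @ X @ B))
print(J)  # 0.25 for these matrices
```

For large or near-defective synthesis models this Kronecker approach is impractical, which is why the report develops a robust algorithm based on the Padé approximation of the matrix exponential instead.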

  11. Stretchable and high-performance supercapacitors with crumpled graphene papers.

    PubMed

    Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe

    2014-10-01

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g(-1)), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance.

  12. Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers

    NASA Astrophysics Data System (ADS)

    Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe

    2014-10-01

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g-1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance.

  13. Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers

    PubMed Central

    Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe

    2014-01-01

    Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g−1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance. PMID:25270673

  14. Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Hoffman, F. E.

    1984-01-01

    This study had two objectives: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of the payload classes for the costs of reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability-enhancing activities.

  15. Research on the optimal structure configuration of dither RLG used in skewed redundant INS

    NASA Astrophysics Data System (ADS)

    Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu

    2016-05-01

    The actual combat effectiveness of weapon systems is constrained by the performance of the Inertial Navigation System (INS), especially where high reliability is required, as in fighters, satellites, and submarines. Through the use of skewed sensor geometries, redundancy techniques have been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of a Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscopes (RLGs) are analyzed. Because of dither coupling effects, the system measurement errors can be amplified if the individual gyro dither frequencies are near one another or if the structure of the SRSINS is poorly chosen. Based on the characteristics of the RLG, research on the coupled vibration of dithered RLGs in an SRSINS is carried out. On the principles of optimal navigation performance, optimal reliability, and optimal cost-effectiveness, a comprehensive evaluation scheme for the inertial sensor configuration of the SRSINS is given.

  16. Inspection planning development: An evolutionary approach using reliability engineering as a tool

    NASA Technical Reports Server (NTRS)

    Graf, David A.; Huang, Zhaofeng

    1994-01-01

    This paper proposes an evolutionary approach to inspection planning that introduces various reliability engineering tools into the process and assesses system trade-offs among reliability, engineering requirements, manufacturing capability, and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, the reliability and engineering impacts of manufacturing process capability and inspection uncertainty are clearly understood; the most cost-effective and efficient inspection plan can be established with the associated risks well controlled; some inspection reductions and relaxations are well justified; and design feedback and changes may be initiated from the analysis conclusions to further enhance reliability and reduce cost. The approach is particularly promising as global competition and customer expectations for quality improvement are rapidly increasing.

  17. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

    The two major issues for cloud storage systems are data reliability and storage cost. For data reliability protection, the multi-replica replication strategy used in most current clouds incurs huge storage consumption, leading to a large storage cost for applications within the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the reliability of large-scale cloud data with minimum replication, and can also serve as a cost-effective benchmark for replication. The evaluation shows that, compared with the conventional three-replica approach, PRCR can reduce cloud storage consumption from one-third of the storage to only a small fraction of it, hence significantly minimizing the cloud storage cost.
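    The storage trade-off at stake is easy to quantify. A back-of-envelope sketch follows (Python); the data volume and unit price are hypothetical, and a single stored copy stands in for PRCR-style minimum replication:

```python
# Back-of-envelope storage cost comparison (hypothetical numbers).

def monthly_storage_cost(data_tb, replica_count, usd_per_tb_month=20.0):
    """Total stored volume times an assumed unit price."""
    return data_tb * replica_count * usd_per_tb_month

data_tb = 100.0
cost_three = monthly_storage_cost(data_tb, 3)  # conventional three-replica
cost_min = monthly_storage_cost(data_tb, 1)    # minimum-replication stand-in
reduction = 1 - cost_min / cost_three
print(cost_three, cost_min, reduction)  # storage spend falls by two-thirds here
```

Whatever the exact replica count PRCR settles on, the cost scales linearly with copies stored, so each replica avoided translates directly into saved storage spend, at the price of the proactive checking overhead the mechanism introduces.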

  18. Design of low-cost general purpose microcontroller based neuromuscular stimulator.

    PubMed

    Koçer, S; Rahmi Canal, M; Güler, I

    2000-04-01

    In this study, a general purpose, low-cost, programmable, portable and high performance stimulator is designed and implemented. For this purpose, a microcontroller is used in the design of the stimulator. The duty cycle and amplitude of the designed system can be controlled using a keyboard. The performance test of the system has shown that the results are reliable. The overall system can be used as the neuromuscular stimulator under safe conditions.

  19. Family System of Advanced Charring Ablators for Planetary Exploration Missions

    NASA Technical Reports Server (NTRS)

    Congdon, William M.; Curry, Donald M.

    2005-01-01

    Advanced Ablators Program Objectives: 1) Flight-ready(TRL-6) ablative heat shields for deep-space missions; 2) Diversity of selection from family-system approach; 3) Minimum weight systems with high reliability; 4) Optimized formulations and processing; 5) Fully characterized properties; and 6) Low-cost manufacturing. Definition and integration of candidate lightweight structures. Test and analysis database to support flight-vehicle engineering. Results from production scale-up studies and production-cost analyses.

  20. Cryocoolers for the new high-temperature superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, G.; Ellison, W.; Zylstra, S.

    1988-06-01

    Compact, reliable, low-cost cryocoolers operated simply by closing a switch are an essential requirement for the coming age of superconductivity and cold electronic systems. The advent of high-temperature superconductors has substantially eased the task of those seeking to fill this need. This article reviews some recent developments in cryocooler systems and examines some prospects for the future.

  1. The common engine concept for ALS application - A cost reduction approach

    NASA Technical Reports Server (NTRS)

    Bair, E. K.; Schindler, C. M.

    1989-01-01

    Future launch systems require propulsion systems designed and developed to meet mission model needs while providing high degrees of reliability and cost effectiveness. Vehicle configurations that use different propellant combinations for the booster and core stages can benefit from a common engine approach, in which a single engine design can be configured to operate on either set of propellants and thus serve as either a booster or a core engine. Engine design concepts and mission applications for a vehicle employing a common engine are discussed. Engine program cost estimates were made, and the cost savings relative to designing and developing two unique engines were estimated.

  2. Developing a real-time incident decision support system (IDSS) for the freight industry.

    DOT National Transportation Integrated Search

    2015-01-01

    Our nation's economy is highly dependent on reliable and cost-effective truck-freight transportation. Delays to truck movement are of particular concern to the nation. Building upon our previous effort, we developed an Incident Decision Support System (IDSS)...

  3. An alternative to the balance error scoring system: using a low-cost balance board to improve the validity/reliability of sports-related concussion balance testing.

    PubMed

    Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J

    2014-05-01

    Objective: Recent guidelines advocate that sports medicine professionals use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Design: Criterion validity testing relative to a gold standard, and 7-day test-retest reliability. Setting: University biomechanics laboratory. Participants: Thirty healthy young adults. Methods: Balance ability was assessed on 2 days separated by 1 week using (1) a gold-standard measure (ie, a scientific-grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center-of-pressure path length and BESS scores was determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Results: Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. Conclusions: These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
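    The two quantities at the heart of this study, the center-of-pressure (CoP) path length and the criterion-validity correlation, can be sketched as follows (NumPy; the trajectory and subject data below are synthetic, not the study's measurements):

```python
import numpy as np

# CoP path length and a criterion-validity correlation on synthetic data.

def path_length(cop_xy):
    """Total distance travelled by a CoP trace given as an (N, 2) array."""
    return float(np.sum(np.linalg.norm(np.diff(cop_xy, axis=0), axis=1)))

# sanity check: a unit square traced once has path length 4
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)

# validity sketch: per-subject path lengths from two devices
rng = np.random.default_rng(1)
amplitude = np.linspace(0.5, 2.0, 10)          # 10 simulated subjects
plate_pl = amplitude * 40.0                    # force-plate path lengths
board_pl = plate_pl + rng.normal(0, 2.0, 10)   # board tracks it with noise
r = float(np.corrcoef(plate_pl, board_pl)[0, 1])  # Pearson criterion validity
```

A device whose path lengths track the force plate across subjects yields r near 1, which is the sense in which the WBB's r = 0.99 indicates excellent criterion validity.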

  4. A Near-Term, High-Confidence Heavy Lift Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Rothschild, William J.; Talay, Theodore A.

    2009-01-01

    The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.

  5. Use of CYPRES™ cutters with a Kevlar clamp band for hold-down and release of the Icarus De-Orbit Sail payload on TechDemoSat-1

    NASA Astrophysics Data System (ADS)

    Kingston, J.; Hobbs, S.; Roberts, P.; Juanes-Vallejo, C.; Robinson, F.; Sewell, R.; Snapir, B.; Llop, J. Virgili; Patel, M.

    2014-07-01

    TechDemoSat-1 is a UK-funded technology demonstration satellite, carrying 8 payloads provided by UK organisations, which is due to be launched in the first quarter of 2014. Cranfield University has supplied a De-Orbit Sail (DOS) payload to allow the mission to comply with end-of-life debris mitigation guidelines. The payload provides a passive, simple, and low-cost means of mitigating debris proliferation in Low Earth Orbit, by enhancing spacecraft aerodynamic drag at end-of-life and reducing time to natural orbital decay and re-entry. This paper describes the use of small commercial electro-explosive devices (EEDs), produced for use as parachute tether-cutters in reserve chute deployment systems, as low-cost but high-reliability release mechanisms for space applications. A testing campaign, including thermal vacuum and mechanical vibration, is described, which demonstrates the suitability of these CYPRES™ cutters, with a flexible Kevlar clamp band, for use as a hold-down and release mechanism (HDRM) for a deployable de-orbit sail. The HDRM is designed to be three-failure-tolerant, highly reliable, yet simple and low-cost.

  6. Digital Avionics Information System (DAIS): Life Cycle Cost Impact Modeling System Reliability, Maintainability, and Cost Model (RMCM)--Description. Users Guide. Final Report.

    ERIC Educational Resources Information Center

    Goclowski, John C.; And Others

    The Reliability, Maintainability, and Cost Model (RMCM) described in this report is an interactive mathematical model with a built-in sensitivity analysis capability. It is a major component of the Life Cycle Cost Impact Model (LCCIM), which was developed as part of the DAIS advanced development program to be used to assess the potential impacts…

  7. Assessing the Cost of Large-Scale Power Outages to Residential Customers.

    PubMed

    Baik, Sunhee; Davis, Alexander L; Morgan, M Granger

    2018-02-01

    Residents in developed economies depend heavily on electric services. While distributed resources and a variety of new smart technologies can increase the reliability of that service, adopting them involves costs, necessitating tradeoffs between cost and reliability. An important input to making such tradeoffs is an estimate of the value customers place on reliable electric services. We develop an elicitation framework that helps individuals think systematically about the value they attach to reliable electric service. Our approach employs a detailed and realistic blackout scenario, full or partial (20 A) backup service, questions about willingness to pay (WTP) using a multiple bounded discrete choice method, information regarding inconveniences and economic losses, and checks for bias and consistency. We applied this method to a convenience sample of residents in Allegheny County, Pennsylvania, finding that respondents valued a kWh for backup services they assessed to be high priority more than services that were seen as low priority ($0.75/kWh vs. $0.51/kWh). As more information about the consequences of a blackout was provided, this difference increased ($1.2/kWh vs. $0.35/kWh), and respondents' uncertainty about the backup services decreased (Full: $11 to $9.0, Partial: $13 to $11). There was no evidence that the respondents were anchored by their previous WTP statements, but they demonstrated only weak scope sensitivity. In sum, the consumer surplus associated with providing a partial electric backup service during a blackout may justify the costs of such service, but measurement of that surplus depends on the public having accurate information about blackouts and their consequences. © 2017 Society for Risk Analysis.

  8. Developing Reliable Life Support for Mars

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, directly supply water or oxygen, or, if necessary, bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed-system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty of achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher-reliability systems. The limitations of budget, schedule, and technology may suggest accepting a lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.

  9. Temperature and Humidity Calibration of a Low-Cost Wireless Dust Sensor for Real-Time Monitoring.

    PubMed

    Hojaiji, Hannaneh; Kalantarian, Haik; Bui, Alex A T; King, Christine E; Sarrafzadeh, Majid

    2017-03-01

    This paper introduces the design, calibration, and validation of a low-cost portable sensor for the real-time measurement of dust particles in the environment. The proposed design combines low hardware cost with calibration based on temperature and humidity sensing to achieve accurate processing of airborne dust density. Using commercial particulate matter sensors, a highly accurate air quality monitoring sensor was designed and calibrated against real-world variations in humidity and temperature for indoor and outdoor applications. Furthermore, to provide a low-cost, secure solution for real-time data transfer and monitoring, an onboard Bluetooth module with an AES data encryption protocol was implemented. The wireless sensor was tested for accuracy against a Dylos DC1100 Pro Air Quality Monitor, as well as an Alphasense OPC-N2 optical air quality monitoring sensor. The sensor was also tested for reliability by comparing it to an exact copy of itself under indoor and outdoor conditions. Accurate measurements were found to be achievable with the proposed sensor under real-world, dynamically varying humidity and temperature conditions when compared to the commercially available sensors. In addition to accurate and reliable sensing, the sensor was designed to be wearable and to perform real-time data collection and transmission, making it easy to collect and analyze data for air quality monitoring and real-time feedback in remote health monitoring applications. Thus, the proposed device achieves high-quality measurements at lower cost than commercially available wireless air quality sensors.

  10. Long life, low cost, rechargeable AgZn battery for non-military applications

    NASA Astrophysics Data System (ADS)

    Brown, Curtis C.

    1996-03-01

    Of the rechargeable (secondary) battery systems with mature technology, the silver oxide-zinc (AgZn) system safely offers the highest power and energy (watts and watt hours) per unit of volume and mass. As a result, AgZn batteries have long been used for aerospace and defense applications, where they have also proven their high reliability. In the past, the expense associated with the cost of silver and the resulting low production volume have limited their commercial application. However, the relatively low cost of silver now makes this system feasible in many applications where high energy and reliability are required. One area of commercial potential is power for a new generation of sophisticated, portable medical equipment. AgZn batteries have recently proven to be an ``enabling technology'' for power-critical, advanced medical devices. By extending the cycle and calendar life of the system, which offers both improved performance and lower operating cost, a combination is achieved that may enable a wide range of future electrical devices. Other areas where AgZn batteries have been used in nonmilitary applications to provide power to aid in the development of commercial equipment include: (a) electrically powered vehicles; (b) remote sensing in nuclear facilities; (c) special effects equipment for movies; (d) remote sensing in petroleum pipelines; (e) portable computers; (f) fly-by-wire systems for commercial aircraft; and (g) robotics. However, none of these applications has progressed to a production volume that would significantly lower cost.

  11. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination, but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible, but anything short of a supercomputer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions, whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system with multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique that can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
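    The pair-wise comparative progression algorithm itself is not given in the abstract, but the allocation problem it accelerates can be stated as a small brute-force sketch (hypothetical component reliabilities, unit costs, and budget; a series system with parallel redundancy is assumed):

```python
from itertools import product

def best_allocation(rel, cost, budget, max_units=4):
    # Exhaustively try n_i parallel copies of each component and keep the
    # feasible combination with the highest series-system reliability.
    best = (0.0, None)
    for alloc in product(range(1, max_units + 1), repeat=len(rel)):
        if sum(n * c for n, c in zip(alloc, cost)) > budget:
            continue  # violates the cost constraint
        r = 1.0
        for n, ri in zip(alloc, rel):
            r *= 1.0 - (1.0 - ri) ** n  # parallel block of n copies
        if r > best[0]:
            best = (r, alloc)
    return best
```

    For two components with reliabilities 0.9 and 0.8, unit cost 1, and a budget of 4, the optimum is two copies of each. The number of combinations grows as max_units**len(rel), which is exactly why a selective technique such as the paper's matters for larger systems.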

  12. Preliminary design of the redundant software experiment

    NASA Technical Reports Server (NTRS)

    Campbell, Roy; Deimel, Lionel; Eckhardt, Dave, Jr.; Kelly, John; Knight, John; Lauterbach, Linda; Lee, Larry; Mcallister, Dave; Mchugh, John

    1985-01-01

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates, constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain which can be achieved with fault tolerance. Data on the reliability gains realized and the cost of the fault tolerant configurations can be used to design a companion experiment to determine the cost effectiveness of the fault tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.

  13. SERI Solar Energy Storage Program: FY 1984

    NASA Astrophysics Data System (ADS)

    Luft, W.; Bohn, M.; Copeland, R. J.; Kreith, F.; Nix, R. G.

    1985-02-01

    The activities of the Solar Energy Research Institute's (SERI) Solar Energy Storage Program during its sixth year are summarized. During FY 1984 a study was conducted to identify the most promising high-temperature containment concepts, considering corrosion resistance, material strength at high temperature, reliability of performance, and cost. Of the two generic types of high-temperature thermal storage concepts, a single-tank system using a two-medium approach to thermocline maintenance was selected. This concept promises low costs, but further research is required. A conceptual design for a sand-to-air direct-contact heat exchanger was developed, using dual-lock hoppers to introduce the sand into the fluidized-bed exchanger and cyclones to remove sand particles from the output air stream. Preliminary cost estimates indicate heat exchanger subsystem annual levelized costs of about $4/GJ, with compressor costs of an additional $0.75/GJ. An economic analysis comparing combined sensible and latent heat storage for nitrate and carbonate salts with solely sensible heat storage showed 3%-21% cost savings with combined sensible and latent heat storage.

  14. System Risk Assessment and Allocation in Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)

    2003-01-01

    As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    GLASS, S. JILL; LOEHMAN, RONALD E.; HOSKING, F. MICHAEL

    The main objective of this project was to develop reliable, low-cost techniques for joining silicon nitride (Si{sub 3}N{sub 4}) to itself and to metals. For Si{sub 3}N{sub 4} to be widely used in advanced turbomachinery applications, joining techniques must be developed that are reliable, cost-effective, and manufacturable. This project addressed those needs by developing and testing two Si{sub 3}N{sub 4} joining systems: oxynitride glass joining materials and high temperature braze alloys. Extensive measurements were also made of the mechanical properties and oxidation resistance of the braze materials. Finite element models were used to predict the magnitudes and positions of the stresses in the ceramic regions of ceramic-to-metal sleeve and butt joints, similar to the geometries used for stator assemblies.

  16. Oil-free centrifugal hydrogen compression technology demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heshmat, Hooshang

    2014-05-31

    One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil lubricated positive displacement (PD) machines. PD compression technology is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings), and contaminates hydrogen with lubricating fluid. Even so-called “oil-free” machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi’s solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for a flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination-free operation, long life and reliability. This design meets the DOE’s performance targets, achieves an extremely aggressive specific power metric of 0.48 kW-hr/kg, and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination.
The multi-stage compressor system concept has been validated through full scale performance testing of a single stage with helium similitude gas at full speed in accordance with ASME PTC-10. The experimental results indicated that aerodynamic performance, with respect to compressor discharge pressure, flow, power and efficiency, exceeded theoretical prediction. Dynamic testing of a simulated multistage centrifugal compressor was also completed under a parallel program to validate the integrity and viability of the system concept. The results give strong confidence in the feasibility of the multi-stage design for use in hydrogen gas transportation and delivery from production locations to point of use.

  17. Laser System Reliability

    DTIC Science & Technology

    1977-03-01

    system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade-offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade-offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at

  18. Scoping study on trends in the economic value of electricity reliability to the U.S. economy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph; Koomey, Jonathan; Lehman, Bryan

    During the past three years, working with more than 150 organizations representing public and private stakeholders, EPRI has developed the Electricity Technology Roadmap. The Roadmap identifies several major strategic challenges that must be successfully addressed to ensure a sustainable future in which electricity continues to play an important role in economic growth. Articulation of these anticipated trends and challenges requires a detailed understanding of the role and importance of reliable electricity in different sectors of the economy. This report is intended to contribute to that understanding by analyzing key aspects of trends in the economic value of electricity reliability in the U.S. economy. We first present a review of recent literature on electricity reliability costs. Next, we describe three distinct end-use approaches for tracking trends in reliability needs: (1) an analysis of the electricity-use requirements of office equipment in different commercial sectors; (2) an examination of the use of aggregate statistical indicators of industrial electricity use and economic activity to identify high reliability-requirement customer market segments; and (3) a case study of cleanrooms, which is a cross-cutting market segment known to have high reliability requirements. Finally, we present insurance industry perspectives on electricity reliability as an example of a financial tool for addressing customers' reliability needs.

  19. Preliminary study, analysis and design for a power switch for digital engine actuators

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Zickwolf, H. C., Jr.

    1979-01-01

    Innovative control configurations using high temperature switches to operate actuator driving solenoids were studied. The impact on engine control system life cycle costs and reliability of electronic control unit (ECU) heat dissipation due to power conditioning and interface drivers was addressed. Various power supply and actuation schemes were investigated, including optical signal transmission with electronics located on the actuator, on an engine driven alternator, and inside the ECU. The use of a switching shunt power conditioner results in the most significant decrease in heat dissipation within the ECU. No overall control system reliability improvement is projected from the use of remote high temperature switches for solenoid drivers.

  20. Sharp-Tip Silver Nanowires Mounted on Cantilevers for High-Aspect-Ratio High-Resolution Imaging.

    PubMed

    Ma, Xuezhi; Zhu, Yangzhi; Kim, Sanggon; Liu, Qiushi; Byrley, Peter; Wei, Yang; Zhang, Jin; Jiang, Kaili; Fan, Shoushan; Yan, Ruoxue; Liu, Ming

    2016-11-09

    Despite many efforts to fabricate high-aspect-ratio atomic force microscopy (HAR-AFM) probes for high-fidelity, high-resolution topographical imaging of three-dimensional (3D) nanostructured surfaces, current HAR probes still suffer from unsatisfactory performance, low wear resistance, and high prices. The primary objective of this work is to demonstrate a novel design of a high-resolution (HR) HAR AFM probe, fabricated through a reliable, cost-efficient benchtop process that precisely implants a single ultrasharp metallic nanowire on a standard AFM cantilever probe. The force-displacement curve indicated that the HAR-HR probe is robust against buckling and bending up to 150 nN. The probes were tested on polymer trenches, showing much better image fidelity than standard silicon tips. The lateral resolution, when scanning a rough metal thin film and single-walled carbon nanotubes (SW-CNTs), was found to be better than 8 nm. Finally, stable imaging quality in tapping mode was demonstrated for at least 15 continuous scans, indicating high resistance to wear. These results demonstrate a reliable benchtop fabrication technique for metallic HAR-HR AFM probes with performance matching or exceeding that of commercial HAR probes, yet at a fraction of their cost.

  1. The Impact Of Multimode Fiber Chromatic Dispersion On Data Communications

    NASA Astrophysics Data System (ADS)

    Hackert, Michael J.

    1990-01-01

    Maximum capability at the lowest cost is the goal of contemporary communications managers. With all of the competitive pressures that modern businesses are experiencing, communications needs must be met with the most information-carrying capacity for the lowest cost. Optical fiber communication systems meet these requirements while providing reliability, system integrity, and potential future upgradability. Consequently, optical fiber is finding numerous applications in addition to its traditional telephony plant. Fiber based systems are meeting these requirements in building networks and computer interconnects at a lower cost than copper based systems. The fiber type being chosen by industry to meet these needs in standard systems such as FDDI is multimode fiber. Multimode fiber systems offer cost advantages over single-mode fiber through lower fiber connection costs. Also, system designers can gain savings by using low cost, high reliability, wide spectral width sources such as LEDs instead of lasers and by operating at higher bit rates than used for multimode systems in the past. However, in order to maximize the cost savings while ensuring the system will operate as intended, the chromatic dispersion of the fiber must be taken into account. This paper explains how to do that and shows how to calculate multimode chromatic dispersion for each of the standard fiber sizes (50 μm, 62.5 μm, 85 μm, and 100 μm core diameter).
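    The chromatic dispersion calculation the paper refers to is commonly expressed with the Sellmeier-based fit used in fiber specifications (a hedged sketch: the zero-dispersion wavelength and slope below are illustrative placeholders, not values from the paper):

```python
def chromatic_dispersion(wl_nm, zero_disp_nm=1343.0, slope_s0=0.097):
    # D(lambda) = (S0/4) * (lambda - lambda0**4 / lambda**3), in ps/(nm*km),
    # with S0 in ps/(nm**2*km) and zero-dispersion wavelength lambda0 in nm.
    return (slope_s0 / 4.0) * (wl_nm - zero_disp_nm**4 / wl_nm**3)

def pulse_spread_ps(wl_nm, source_width_nm, length_km, **fiber):
    # Chromatic pulse broadening for a wide-spectral-width source such as an LED.
    return abs(chromatic_dispersion(wl_nm, **fiber)) * source_width_nm * length_km
```

    Dispersion vanishes at the zero-dispersion wavelength and grows on either side of it, so an LED's wide spectral width translates directly into pulse broadening, and hence a bit-rate limit, away from that wavelength.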

  2. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    NASA Astrophysics Data System (ADS)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the use of a Structural Equation Modelling (SEM) approach to analyse the effects of Building Information Modelling (BIM) technology adoption on the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimate reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis, employing two types of SEM models: the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259, indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships among the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database, and coordinated data in developing more reliable cost estimates. They also expect BIM adoption to accelerate their cost estimating tasks.

  3. Do aggressive signals evolve towards higher reliability or lower costs of assessment?

    PubMed

    Ręk, P

    2014-12-01

    It has been suggested that the evolution of signals must be a wasteful process for the signaller, aimed at the maximization of signal honesty. However, the reliability of communication depends not only on the costs paid by signallers but also on the costs paid by receivers during assessment, and less attention has been given to the interaction between these two types of costs during the evolution of signalling systems. A signaller and receiver may accept some level of signal dishonesty by choosing signals that are cheaper in terms of assessment but that are stabilized with less reliable mechanisms. I studied the potential trade-off between signal reliability and the costs of signal assessment in the corncrake (Crex crex). I found that the birds prefer signals that are less costly to assess rather than more reliable. Despite the fact that the fundamental frequency of calls was a strong predictor of male size, it was ignored by receivers unless they could directly compare signal variants. My data revealed a response advantage of costly signals when comparison between calls differing in fundamental frequency is fast and straightforward, whereas cheap signalling is preferred in natural conditions. These data might improve our understanding of the influence of receivers on signal design because they support the hypothesis that fully honest signalling systems may be prone to dishonesty based on the effects of receiver costs and be replaced by signals that are cheaper in production and reception but more susceptible to cheating. © 2014 European Society For Evolutionary Biology.

  4. RS-600 programmable controller: Solar heating and cooling

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Three identical microprocessor control subsystems were developed which can be used in heating, heating and cooling, and/or hot water systems for single family, multifamily, or commercial applications. The controller incorporates a low cost, highly reliable (all solid state) microprocessor which can be easily reprogrammed.

  5. Body of Knowledge (BOK) for Leadless Quad Flat No-Lead/bottom Termination Components (QFN/BTC) Package Trends and Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    2014-01-01

    Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judicially selecting and narrowing the follow-on packages for evaluation and testing, as well as for the low risk insertion in high-reliability applications.

  6. Optimizing the Reliability and Performance of Service Composition Applications with Fault Tolerance in Wireless Sensor Networks

    PubMed Central

    Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang

    2015-01-01

    The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
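    The abstract does not give the UGF model or GA details; a minimal GA for the underlying redundancy-allocation problem might look like the following sketch (the component reliabilities, costs, budget, and GA parameters are illustrative assumptions, and a simple series-parallel fitness stands in for the paper's multi-state UGF model):

```python
import random

def ga_redundancy(rel, cost, budget, pop_size=30, gens=60, max_units=3, seed=1):
    # Maximize series-system reliability with n_i parallel copies of each
    # component, subject to a total cost budget.
    rng = random.Random(seed)

    def fitness(ind):
        if sum(n * c for n, c in zip(ind, cost)) > budget:
            return 0.0  # infeasible allocations score zero
        r = 1.0
        for n, ri in zip(ind, rel):
            r *= 1.0 - (1.0 - ri) ** n
        return r

    popn = [[rng.randint(1, max_units) for _ in rel] for _ in range(pop_size)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        survivors = popn[: pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randint(1, len(rel) - 1) if len(rel) > 1 else 0
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation
                child[rng.randrange(len(rel))] = rng.randint(1, max_units)
            children.append(child)
        popn = survivors + children
    best = max(popn, key=fitness)
    return fitness(best), best
```

    Because the best individuals survive each generation, the search converges toward the reliability-cost optimum without enumerating every structure, which is the appeal of a GA for larger SCAs.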

  8. Research and design of smart grid monitoring control via terminal based on iOS system

    NASA Astrophysics Data System (ADS)

    Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji

    2017-06-01

    Aiming at a series of problems with current smart grid monitoring control terminals, such as high cost, poor portability, simplistic monitoring, poor software extensibility, low reliability when transmitting information, a limited man-machine interface, and poor security, a smart grid remote monitoring system based on the iOS system has been designed. The system interacts with the smart grid server to acquire grid data through WiFi/3G/4G networks and to monitor the running status of each grid line as well as power plant equipment operating conditions. When an exception occurs in the power plant, incident information is sent to the user's iOS terminal in a timely manner, providing troubleshooting information that helps grid staff make the right decisions and avoid further accidents. Field tests have shown that the system realizes integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it has practical value.

  9. Investigation of low cost, high reliability sealing techniques for hybrid microcircuits, phase 1

    NASA Technical Reports Server (NTRS)

    Perkins, K. L.; Licari, J. J.

    1976-01-01

    A preliminary investigation was made to determine the feasibility of using adhesive package sealing for hybrid microcircuits. The major effort consisted of: (1) surveying representative hybrid manufacturers to assess the current use of adhesives for package sealing; (2) making a cost comparison of metallurgical versus adhesive package sealing; (3) determining the seal integrity of gold plated flatpack type packages sealed with selected adhesives after exposure to thermal shock, temperature cycling, mechanical shock, and constant acceleration test environments; and (4) defining a more comprehensive study to continue the evaluation of adhesives for package sealing. Results showed that 1.27 cm square gold plated flatpack type packages sealed with the film adhesives and the paste adhesive retained their seal integrity after all tests, and that similarly prepared 2.54 cm square packages retained their seal integrity after all tests except the 10,000 g constant acceleration test. It is concluded that these results are encouraging, but by no means sufficient to establish the suitability of adhesives for sealing high reliability hybrid microcircuits.

  10. 2D scanning Rotman lens structure for smart collision avoidance sensors

    NASA Astrophysics Data System (ADS)

    Hall, Leonard T.; Hansen, Hedley J.; Abbott, Derek

    2004-03-01

    Although electronically scanned antenna arrays can provide effective mm-wave search radar sensors, their high cost and complexity are leading to the consideration of alternative beam-forming arrangements. Rotman lenses offer a compact, rugged, reliable, alternative solution. This paper considers the design of a microstrip based Rotman lens for high-resolution, frequency-controlled scanning applications. Its implementation in microstrip is attractive because this technology is low-cost, conformal, and lightweight. A sensor designed for operation at 77 GHz is presented and an ~80° azimuthal scan over a 30 GHz bandwidth is demonstrated.

  11. Design of Water Temperature Control System Based on Single Chip Microcomputer

    NASA Astrophysics Data System (ADS)

    Tan, Hanhong; Yan, Qiyan

    2017-12-01

    This paper introduces a multi-function water temperature controller designed around a 51-series single-chip microcomputer. The controller provides automatic and manual water filling, water temperature setting, real-time display of water level and temperature, and an alarm function, with a simple structure, high reliability, and low cost. Water temperature controllers currently on the market mostly use bimetallic temperature control, which offers low temperature control accuracy, poor reliability, and only a single function. With the development of microelectronics technology, single-chip microcomputers have grown in capability while falling in price and are widely used in many fields. Applying a single-chip microcomputer to the water temperature controller gives the advantages of a simple design, high reliability, and easy functional expansion. Against this background, this paper focuses on the intelligent control aspects of the temperature controller.

  12. Glass for low-cost photovoltaic solar arrays

    NASA Technical Reports Server (NTRS)

    Bouquet, F. L.

    1980-01-01

    Various aspects of glass encapsulation that are important for the designer of photovoltaic systems are discussed. Candidate glasses and available information defining the state of the art of glass encapsulation materials and processes for automated, high volume production of terrestrial photovoltaic devices and related applications are presented. The criteria for consideration of the glass encapsulation systems were based on the low-cost solar array project goals for arrays: (1) a low degradation rate, (2) high reliability, (3) an efficiency greater than 10 percent, (4) a total array price less than $500/kW, and (5) a production capacity of 500,000 kW/yr. The glass design areas discussed include the types of glass, sources and costs, physical properties, and glass modifications, such as antireflection coatings.

  13. The measurement of maintenance function efficiency through financial KPIs

    NASA Astrophysics Data System (ADS)

    Galar, D.; Parida, A.; Kumar, U.; Baglee, D.; Morant, A.

    2012-05-01

    The measurement of performance in the maintenance function has produced large sets of indicators that, owing to their nature and disparity in criteria and objectives, have lately been grouped into different subsets, with emphasis on the set of financial indicators. Generating these indicators demands highly reliable data collection, which is only made possible by a cost model adapted to the special casuistry of the maintenance function, characterized by the hidden nature of these costs.

  14. Coupling long and short term decisions in the design of urban water supply infrastructure for added reliability and flexibility

    NASA Astrophysics Data System (ADS)

    Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.

    2016-12-01

    The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty, and water scarcity requires a strategic combination of supply sources for reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integrating long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale, and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion.
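
    The long/short-term coupling described above can be illustrated with a toy two-source dispatch model (a minimal sketch; the cost coefficients, demand figure, and expansion rule are hypothetical, not taken from the study). The short-term step allocates supply at minimum quadratic cost, and the Lagrange multiplier on the binding capacity constraint is exactly the marginal-opportunity-cost signal handed to the long-term expansion decision:

```python
def short_term_dispatch(demand, cap1, a1, a2):
    """Minimize 0.5*a1*x1^2 + 0.5*a2*x2^2  s.t.  x1 + x2 = demand, x1 <= cap1.
    Solved in closed form via the KKT conditions; returns (x1, x2, shadow price
    of the capacity constraint on source 1)."""
    # Unconstrained optimum equalizes marginal costs: a1*x1 = a2*x2
    x1 = demand * a2 / (a1 + a2)
    if x1 <= cap1:                    # capacity not binding, shadow price is zero
        return x1, demand - x1, 0.0
    x1, x2 = cap1, demand - cap1      # capacity binds
    # Multiplier = gap between the two marginal costs at the constrained point
    return x1, x2, a2 * x2 - a1 * x1

def expand(shadow_price, annualized_unit_cost):
    """Long-term rule: expand the capped source if the short-term shadow price
    (marginal saving per unit of extra capacity) exceeds its annualized cost."""
    return shadow_price > annualized_unit_cost

x1, x2, mu = short_term_dispatch(demand=9.0, cap1=2.0, a1=1.0, a2=2.0)
print(x1, x2, mu)        # 2.0 7.0 12.0
print(expand(mu, 5.0))   # True: expansion pays off at these made-up costs
```

    In the iterative scheme the abstract describes, the long-term model would adjust capacities, re-run the dispatch under the simulated hydrology, and repeat until shadow prices and annualized expansion costs balance.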

  15. FLUIDIC: Metal Air Recharged

    ScienceCinema

    Friesen, Cody

    2018-02-14

    Fluidic, with the help of ARPA-E funding, has developed and deployed the world's first proven high cycle life metal air battery. Metal air technology, often used in smaller scale devices like hearing aids, has the lowest cost per electron of any rechargeable battery storage in existence. Deploying these batteries for grid reliability is competitive with pumped hydro installations while having the advantages of a small footprint. Fluidic's battery technology allows utilities and other end users to store intermittent energy generated from solar and wind, as well as maintain reliable electrical delivery during power outages. The batteries are manufactured in the US and currently deployed to customers in emerging markets for cell tower reliability. As they continue to add customers, they've gained experience and real world data that will soon be leveraged for US grid reliability.

  16. New Opportunities for Small Satellite Programs Provided by the Falcon Family of Launch Vehicles

    NASA Astrophysics Data System (ADS)

    Dinardi, A.; Bjelde, B.; Insprucker, J.

    2008-08-01

    The Falcon family of launch vehicles, developed by Space Exploration Technologies Corporation (SpaceX), is designed to provide the world's lowest-cost access to orbit. Highly reliable, low-cost launch services offer considerable opportunities for risk reduction throughout the life cycle of satellite programs. The significantly lower costs of Falcon 1 and Falcon 9 compared with other similar-class launch vehicles result in a number of new business case opportunities, which in turn present the possibility of a paradigm shift in how the satellite industry thinks about launch services.

  17. Performance of High-Reliability Space-Qualified Processors Implementing Software Defined Radios

    DTIC Science & Technology

    2014-03-01

    Naval Postgraduate School, Department of Electrical and Computer Engineering, 833 Dyer Road, Monterey, CA 93943-5121. Radiation in space poses a considerable threat to modern microelectronic devices, in particular to the high-performance, low-cost computing

  18. Development of Ultra-Efficient Electric Motors Final Technical Report Covering work from April 2002 through September 2007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich Schiferl

    2008-05-30

    High-temperature superconducting (HTS) motors offer the potential for dramatic volume and loss reduction compared to conventional, high-horsepower industrial motors. This report is the final report on the results of eight research tasks that address some of the issues related to HTS motor development that affect motor efficiency, cost, and reliability.

  19. Archival storage solutions for PACS

    NASA Astrophysics Data System (ADS)

    Chunn, Timothy

    1997-05-01

    While there are many inhibitors to the widespread diffusion of PACS systems, one of them has been the lack of robust, cost-effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scalability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive media today. Price and performance comparisons are made at different archive capacities, and the effect of file size on storage system throughput is analyzed. The concept of automated migration of images from high-performance, high-cost storage devices to high-capacity, low-cost storage devices is introduced as a viable way to minimize overall storage costs for an archive. The concept of access density is also introduced and applied to the selection of the most cost-effective archive solution.
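
    The access-density idea, retrieval throughput demanded per unit of stored capacity, can be sketched with a toy cost model (all prices, capacities, and drive throughputs below are hypothetical, chosen only to show the crossover): low access density favors cheap, slow media such as tape, while high access density shifts the optimum toward faster devices.

```python
import math

def tier_cost(capacity_gb, access_density, media_cost_per_gb,
              drive_cost, drive_mb_per_s):
    """Total cost of one archive tier. access_density is MB/s of retrieval
    demanded per TB stored; it sizes the number of drives needed."""
    throughput_needed = access_density * capacity_gb / 1000.0   # MB/s
    drives = max(1, math.ceil(throughput_needed / drive_mb_per_s))
    return capacity_gb * media_cost_per_gb + drives * drive_cost

# 100 TB archive; hypothetical tape vs disk price points
tape_lo = tier_cost(100_000, 0.05, 0.01, 15_000, 40)    # rarely accessed
disk_lo = tier_cost(100_000, 0.05, 0.50, 2_000, 200)
tape_hi = tier_cost(100_000, 20.0, 0.01, 15_000, 40)    # heavily accessed
disk_hi = tier_cost(100_000, 20.0, 0.50, 2_000, 200)

print(tape_lo, disk_lo)   # tape wins at low access density
print(tape_hi, disk_hi)   # disk wins when access density is high
```

    The same comparison, run per image class, is what drives the automated-migration policy: move an image to the cheap tier once its access density falls below the crossover.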

  20. 2nd & 3rd Generation Vehicle Subsystems

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This paper contains a viewgraph presentation on the "2nd & 3rd Generation Vehicle Subsystems" project. The objective of this project is to design, develop, and test advanced avionics, power systems, and power control and distribution components and subsystems for insertion into a highly reliable and low-cost system for Reusable Launch Vehicles (RLVs). The project is divided into two sections: 3rd Generation Vehicle Subsystems and 2nd Generation Vehicle Subsystems. The following topics are discussed under the first section, 3rd Generation Vehicle Subsystems: supporting the NASA RLV program; high-performance guidance & control adaptation for future RLVs; Evolvable Hardware (EHW) for 3rd generation avionics description; Scaleable, Fault-tolerant Intelligent Network or X(trans)ducers (SFINIX); advanced electric actuation devices and subsystem technology; hybrid power sources and regeneration technology for electric actuators; and intelligent internal thermal control. Topics discussed in the 2nd Generation Vehicle Subsystems program include: design, development, and test of robust, low-maintenance avionics with no active cooling requirements and autonomous rendezvous and docking systems; design and development of low-maintenance, high-reliability intelligent power systems (fuel cells and battery); and design of low-cost, low-maintenance, high-horsepower actuation systems (actuators).

  1. Advanced energy system program

    NASA Astrophysics Data System (ADS)

    Trester, K.

    1987-06-01

    The objectives are to design, develop, and demonstrate a natural-gas-fueled, highly recuperated, 50 kW Brayton-cycle cogeneration system for commercial, institutional, and multifamily residential applications. Recent marketing studies have shown that the Advanced Energy System (AES), with its many cost-effective features, has the potential to offer significant reductions in annual electrical and thermal energy costs to the consumer. Specific advantages of the system that result in a low cost of ownership are high electrical efficiency (34 percent, LHV), low maintenance, high reliability, and long life (20 years). Significant technical features include: an integral turbogenerator with shaft-speed permanent magnet generator; a rotating assembly supported by compliant foil air bearings; a formed-tubesheet plate/fin recuperator with 91 percent effectiveness; and a bi-directional power conditioner to utilize the generator for system startup. The planned introduction of catalytic combustion will further enhance the economic and ecological attractiveness.

  2. Approach to developing reliable space reactor power systems

    NASA Technical Reports Server (NTRS)

    Mondt, Jack F.; Shinbrot, Charles H.

    1991-01-01

    During Phase II, the Engineering Development Phase, the SP-100 Project has defined and is pursuing a new approach to developing reliable power systems. The approach to developing such a system during the early technology phase is described along with some preliminary examples to help explain the approach. Developing reliable components to meet space reactor power system requirements is based on a top-down systems approach which includes a point design based on a detailed technical specification of a 100-kW power system. The SP-100 system requirements implicitly recognize the challenge of achieving a high system reliability for a ten-year lifetime, while at the same time using technologies that require very significant development efforts. A low-cost method for assessing reliability, based on an understanding of fundamental failure mechanisms and design margins for specific failure mechanisms, is being developed as part of the SP-100 Program.

  3. Health management and controls for Earth-to-orbit propulsion systems

    NASA Astrophysics Data System (ADS)

    Bickford, R. L.

    1995-03-01

    Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.

  4. Development of highly efficient laser bars emitting at around 1060 nm for medical applications

    NASA Astrophysics Data System (ADS)

    Pietrzak, Agnieszka; Zorn, Martin; Meusel, Jens; Huelsewede, Ralf; Sebastian, Juergen

    2018-02-01

    An overview is presented of the recent progress in the development of high-power laser bars at wavelengths around 1060 nm. The development is focused on highly efficient and reliable laser performance under pulsed operation for medical applications. The epitaxial structure and lateral layout of the laser bars were tailored to meet the application requirements. Reliable operation at peak powers of 350 W and 500 W is demonstrated from laser bars with fill factor FF = 75% and resonator lengths of 1.5 mm and 2.0 mm, respectively. Moreover, 60 W at a current of 65 A with lifetimes exceeding 10,000 h is presented. The power scaling with fill factor enables a cost reduction ($/W) of up to 35%.

  5. Implementation of Integrated System Fault Management Capability

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark

    2008-01-01

    Fault management supports the rocket engine test mission with highly reliable and accurate measurements while improving availability and lifecycle costs. Core elements: an architecture, taxonomy, and ontology (ATO) for DIaK management; intelligent sensor processes; intelligent element processes; intelligent controllers; intelligent subsystem processes; intelligent system processes; and intelligent component processes.

  6. CICS Region Virtualization for Cost Effective Application Development

    ERIC Educational Resources Information Center

    Khan, Kamal Waris

    2012-01-01

    Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…

  7. The concurrent validity and reliability of a low-cost, high-speed camera-based method for measuring the flight time of vertical jumps.

    PubMed

    Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; del Campo-Vecino, Juan; Bavaresco, Nicolás

    2014-02-01

    Flight time is the most accurate and frequently used variable when assessing the height of vertical jumps. The purpose of this study was to analyze the validity and reliability of an alternative method (i.e., the HSC-Kinovea method) for measuring the flight time and height of vertical jumps using a low-cost, high-speed Casio Exilim FH-25 camera (HSC). To this end, 25 subjects performed a total of 125 vertical jumps on an infrared (IR) platform while simultaneously being recorded with an HSC at 240 fps. Subsequently, 2 observers with no experience in video analysis analyzed the 125 videos independently using the open-license Kinovea 0.8.15 software. The flight times obtained were then converted into vertical jump heights, and the intraclass correlation coefficient (ICC), Bland-Altman plot, and Pearson correlation coefficient were calculated for those variables. The results showed perfect agreement (ICC = 1, p < 0.0001) between both observers' measurements of flight time and jump height and a highly reliable agreement (ICC = 0.997, p < 0.0001) between the observers' measurements of flight time and jump height using the HSC-Kinovea method and those obtained using the IR system, thus explaining 99.5% (p < 0.0001) of the shared variance obtained using the IR platform. As a result, besides requiring no previous experience in the use of this technology, the HSC-Kinovea method can be considered to provide similarly valid and reliable measurements of flight time and vertical jump height as more expensive equipment (i.e., IR). As such, coaches from many sports could use the HSC-Kinovea method to measure the flight time and height of their athletes' vertical jumps.
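
    The conversion from flight time to jump height used in such studies follows from uniform gravitational acceleration: if takeoff and landing posture match, the ascent lasts half the flight time, so h = g·t²/8. A minimal sketch (the 240 fps frame rate is the only study-specific detail; the helper names are ours):

```python
G = 9.81  # gravitational acceleration, m/s^2

def flight_time_from_frames(n_frames, fps=240):
    """Flight time from the number of video frames the jumper is airborne,
    e.g. frames counted in video-analysis software on 240 fps footage."""
    return n_frames / fps

def jump_height(flight_time_s):
    """Jump height from flight time, assuming takeoff and landing occur at the
    same body position: ascent lasts t/2, so h = 0.5*G*(t/2)^2 = G*t^2/8."""
    return G * flight_time_s ** 2 / 8

t = flight_time_from_frames(120)     # 120 airborne frames at 240 fps
print(t, round(jump_height(t), 3))   # 0.5 0.307
```

    At 240 fps, one miscounted frame changes the flight time by about 4 ms, which is why the higher frame rate matters for agreement with the IR platform.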

  8. Benefits and costs of low thrust propulsion systems

    NASA Technical Reports Server (NTRS)

    Robertson, R. I.; Rose, L. J.; Maloy, J. E.

    1983-01-01

    The results of cost/benefit analyses of three chemical propulsion systems that are candidates for transferring high-density, low-volume STS payloads from LEO to GEO are reported. Separate algorithms were developed for the benefits and costs of primary propulsion systems (PPS) as functions of the required thrust levels. The life cycle costs of each system were computed based on the developmental, production, and deployment costs. A weighted-criteria rating approach was taken for the benefits, with each benefit assigned a value commensurate with its relative worth to the overall system. Support costs were included in the cost modeling. Reference missions from NASA, commercial, and DoD catalog payloads were examined. The program was concluded to be reliable and flexible for evaluating the benefits and costs of launch and orbit transfer for any catalog mission, with the most beneficial PPS being a dedicated low-thrust configuration using the RL-10 system.
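
    The weighted-criteria rating approach can be sketched as a normalized weighted sum (the criteria names, weights, and ratings below are invented for illustration; the study's actual criteria are not listed in the abstract):

```python
def weighted_score(ratings, weights):
    """Weighted-criteria rating: each benefit gets a rating (0-10) and a weight
    reflecting its relative worth; the score is the weighted sum normalized by
    the total weight, so alternatives are comparable on a 0-10 scale."""
    total_w = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_w

weights = {"payload delivered": 5, "schedule flexibility": 3, "reusability": 2}
dedicated_low_thrust = {"payload delivered": 9, "schedule flexibility": 7, "reusability": 6}
shared_stage         = {"payload delivered": 6, "schedule flexibility": 5, "reusability": 8}

print(weighted_score(dedicated_low_thrust, weights))  # 7.8
print(weighted_score(shared_stage, weights))          # 6.1
```

    The benefit score from such a table would then be set against each system's life cycle cost to rank the candidate propulsion systems.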

  9. Study of turboprop systems reliability and maintenance costs

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The overall reliability and maintenance costs (R&MCs) of past and current turboprop systems were examined. Maintenance cost drivers were found to be scheduled overhaul (40%), lack of modularity, particularly in the propeller and reduction gearbox, and lack of inherent durability (reliability) of some parts. Comparisons were made between the 501-D13/54H60 turboprop system and the widely used JT8D turbofan. It was found that the total maintenance cost per flight hour of the turboprop was 75% higher than that of the JT8D turbofan. Part of this difference was due to propeller and gearbox costs being higher than those of the fan and reverser, but most of the difference was in the engine core, where the older-technology turboprop core maintenance costs were nearly 70 percent higher than for the turbofan. The estimated maintenance costs of both the advanced turboprop and the advanced turbofan were less than those of the JT8D. The conclusion was that an advanced turboprop and an advanced turbofan, using similar cores, will have very competitive maintenance costs per flight hour.

  10. Ethernet for Aerospace Applications - Ethernet Heads for the Skies

    NASA Technical Reports Server (NTRS)

    Grams, Paul R.

    2015-01-01

    One of the goals of aerospace applications is to reduce the cost and complexity of avionic systems. Ethernet is a highly scalable, flexible, and popular protocol. The aerospace market is large, with a forecasted production of over 50,000 turbine-powered aircraft valued at $1.7 trillion between 2012 and 2022. Boeing estimates demand for commercial aircraft by 2033 to total over 36,000, with a value of over $5 trillion. In 2014, US airlines served over 750 million passengers, a figure growing at over 2% yearly. Electronic fly-by-wire is now used for all airliners and high-performance aircraft. Although Ethernet has been widely used for four decades, its use in aerospace applications is just beginning to become common. Ethernet is the universal solution in commercial networks because of its high bandwidth, lower cost, openness, reliability, maintainability, flexibility, and interoperability. However, when Ethernet was designed, applications with time-critical, safety-relevant, and deterministic requirements were not given much consideration. Many aerospace applications use a variety of communication architectures that add cost and complexity, among them SpaceWire, MIL-STD-1553, Avionics Full Duplex Switched Ethernet (AFDX), and Time-Triggered Ethernet (TTE). Aerospace network designers want to decrease the number of networks to reduce cost and effort while improving scalability, flexibility, openness, maintainability, and reliability. AFDX and TTE are being considered for critical aerospace systems because they provide redundancy, failover protection, guaranteed timing, and frame priority, and are based on Ethernet IEEE 802.3. This paper explores the use of AFDX and TTE for aerospace applications.

  11. Microgrid Analysis Tools Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jimenez, Antonio; Haase, Scott G; Mathur, Shivani

    2018-03-05

    The over-arching goal of the Alaska Microgrid Partnership is to reduce the use of total imported fuel into communities to secure all energy services by at least 50% in Alaska's remote microgrids without increasing system life cycle costs while also improving overall system reliability, security, and resilience. One goal of the Alaska Microgrid Partnership is to investigate whether a combination of energy efficiency and high-contribution (renewable energy) power systems can reduce total imported energy usage by 50% while reducing life cycle costs and improving reliability and resiliency. This presentation provides an overview of the following four renewable energy optimization tools (information is from the respective tool websites, tool developers, and author experience): the Distributed Energy Resources Customer Adoption Model (DER-CAM), the Microgrid Design Toolkit (MDT), the Renewable Energy Optimization (REopt) Tool, and the Hybrid Optimization Model for Electric Renewables (HOMER).

  12. Space transportation booster engine configuration study. Volume 1: Executive Summary

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The objective of the Space Transportation Booster Engine (STBE) Configuration Study was to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low-cost booster engine concepts for both expendable and reusable rocket engines. Specific objectives were to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and to explore innovative approaches to the follow-on full-scale development (FSD) phase for the STBE.

  13. Air Force Manufacturing Technology Electronics Program, FY72-FY85.

    DTIC Science & Technology

    1985-04-01

    magnetic films of the composition Y1.52Eu0.30Tm0.30Ca0.88Fe4.12O12 on 1.5-inch and 2.0-inch gadolinium gallium garnet substrates. Ten films were...volume manufacturing of hybrid MICs. A systematic, integrated, cost-effective approach to testing, trimming/matching, fabrication, and assembly is...ESTABLISH MANUFACTURING METHODS FOR LOW COST, HIGH RELIABILITY FABRICATION AND ACTIVATION OF OXIDE CATHODES FOR USE IN SPACE TRAVELING WAVE TUBES

  14. The European ALMA production antennas: new drive applications for better performances and low cost management

    NASA Astrophysics Data System (ADS)

    Giacomel, L.; Manfrin, C.; Marchiori, G.

    2008-07-01

    Since its first application on the VLT telescopes, the linear motor has represented the best solution in terms of quality/cost for technological applications in the astronomical field. Its application in the radio-astronomy sector with the ALMA project likewise combines forefront technology, high reliability, and minimal maintenance. The adoption of embedded electronics on each motor sector makes the system modular and redundant, with mitigation of EMC problems.

  15. DEVELOPMENT OF A SCALABLE, LOW-COST, ULTRANANOCRYSTALLINE DIAMOND ELECTROCHEMICAL PROCESS FOR THE DESTRUCTION OF CONTAMINANTS OF EMERGING CONCERN (CECS) - PHASE II

    EPA Science Inventory

    This Small Business Innovation Research (SBIR) Phase II project will employ the large-scale, highly reliable boron-doped ultrananocrystalline diamond (BD-UNCD®) electrodes developed during the Phase I project to build and test an Electrochemical Anodic Oxidation process (EAOP)...

  16. Beliefs about the Consequences of Maternal Employment for Children.

    ERIC Educational Resources Information Center

    Greenberger, Ellen; And Others

    1988-01-01

    Developed a 24-item scale to measure Beliefs about the Consequences of Maternal Employment for Children, including beliefs about both benefits and costs. Demonstrated reliability and high convergent, divergent, and concurrent validity. Assessed sex-role traditionalism, women's employment status, work hours, age of child at which employment is…

  17. PHOBOS Exploration using Two Small Solar Electric Propulsion (SEP) Spacecraft

    NASA Technical Reports Server (NTRS)

    Lang, J. J.; Baker, J. D.; McElrath, T. P.; Piacentine, J. S.; Snyder, J. S.

    2012-01-01

    The Phobos Surveyor mission concept provides an innovative, low-cost, highly reliable approach to exploring the inner solar system: a dual-manifest launch; use of only flight-proven, well-characterized commercial off-the-shelf components; and a flexible mission architecture that allows for a slew of unique measurements.

  18. Reliability, Availability, and Maintainability of the Heat Recovery Incinerator at Naval Station Mayport.

    DTIC Science & Technology

    1984-10-01

    appears to have cost $6.54 to produce 1,000,000 Btu of heat. This equation took into account the cost of repair and replacement parts, consumable...waste incineration rate, thermal efficiency, and steam cost. Actual results for incinerating waste to produce steam were: reliability 58% (75% of design...87% of goal); incineration rate 1.75 tons/hr (105% of goal); and cost of steam $6.05/MBtu. The HRI was expected to save $26,600/yr from landfill

  19. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and facility replacement is causing aging and lower reliability in power facilities, and the aging will be followed by a rapid increase in repair and replacement when many facilities reach the end of their lifetimes. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness using a replacement-priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable, leveled future cash outflow can maintain reliability by lowering the percentage of replacements caused by fatal failures.
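
    The renewal-theory ingredient can be sketched with the discrete renewal equation, which projects the expected number of replacements per year for one facility from a lifetime distribution, counting replacements of replacements as well (the two-point lifetime distribution below is hypothetical, not from the paper):

```python
def renewal_density(lifetime_pmf, horizon):
    """Discrete renewal equation: m[n] is the expected number of replacements
    in year n for one facility installed in year 0, when each unit's lifetime
    follows lifetime_pmf (year -> probability):
        m[n] = f(n) + sum_{k=1}^{n-1} m[k] * f(n - k)."""
    m = [0.0] * (horizon + 1)
    for n in range(1, horizon + 1):
        m[n] = lifetime_pmf.get(n, 0.0)
        m[n] += sum(m[k] * lifetime_pmf.get(n - k, 0.0) for k in range(1, n))
    return m

# Hypothetical lifetimes: a unit fails after 30 yr (60%) or 40 yr (40%)
pmf = {30: 0.6, 40: 0.4}
m = renewal_density(pmf, 70)
print(m[30], m[40], round(m[60], 2))   # 0.6 0.4 0.36
```

    Summing m[year − install_year] weighted by the number of facilities installed each year gives the fleet-wide replacement forecast that the investment-leveling step would work from.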

  20. Reliability and Maintainability Analysis of a High Air Pressure Compressor Facility

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.; Ring, Robert W.; Cole, Stuart K.

    2013-01-01

    This paper discusses a Reliability, Availability, and Maintainability (RAM) independent assessment conducted to support the refurbishment of the Compressor Station at the NASA Langley Research Center (LaRC). The paper discusses the methodologies used by the assessment team to derive the repair-by-replacement (RR) strategies to improve the reliability and availability of the Compressor Station (Ref. 1). This includes a RAPTOR simulation model that was used to generate the statistical data analysis needed to derive a 15-year investment plan to support the refurbishment of the facility. To summarize, study results clearly indicate that the air compressors are well past their design life. The major failures of the compressors indicate that significant latent failure causes are present. Given the occurrence of these high-cost failures following compressor overhauls, future major failures should be anticipated if compressors are not replaced. Given the results from the RR analysis, the study team recommended a compressor replacement strategy. Based on the data analysis, the RR strategy will lead to sustainable operations through significant improvements in reliability, availability, and the probability of meeting the air demand, with acceptable investment cost that should translate, in the long run, into major cost savings. For example, the probability of meeting air demand improved from 79.7 percent for the base case to 97.3 percent. Expressed as a reduction in the probability of failing to meet demand (1 in 5 days to 1 in 37 days), the improvement is about 700 percent. Similarly, compressor replacement improved the operational availability of the facility from 97.5 percent to 99.8 percent. Expressed as a reduction in system unavailability (1 in 40 to 1 in 500), the improvement is better than 1000 percent (an order of magnitude improvement).
    It is worth noting that the methodologies, tools, and techniques used in the LaRC study can be used to evaluate similar high-value equipment and facilities. Also, lessons learned in data collection and maintenance practices derived from the observations, findings, and recommendations of the study are extremely important in the evaluation and sustainment of new compressor facilities.
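
    The percentage improvements quoted above are ratio arithmetic on the complement of availability; a short sketch reproducing them:

```python
def improvement_factor(avail_before, avail_after):
    """Factor by which the failure (or unavailability) rate shrinks when
    availability rises from avail_before to avail_after."""
    return (1 - avail_before) / (1 - avail_after)

# Probability of meeting air demand: 79.7% -> 97.3%
print(round(1 / (1 - 0.797)), round(1 / (1 - 0.973)))  # 5 37 (days per missed demand)
print(round(improvement_factor(0.797, 0.973), 1))      # 7.5, i.e. roughly 700% improvement

# Operational availability: 97.5% -> 99.8%
print(round(improvement_factor(0.975, 0.998), 1))      # 12.5, an order of magnitude
```

    Working with the complement (unavailability) rather than availability itself is what makes a 2.3-point availability gain register as an order-of-magnitude improvement.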

  1. Using Length of Stay to Control for Unobserved Heterogeneity When Estimating Treatment Effect on Hospital Costs with Observational Data: Issues of Reliability, Robustness, and Usefulness.

    PubMed

    May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles

    2016-10-01

    To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity when estimating treatment effect on cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity score weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to methods that control for LOS, complicating interpretation. Both the magnitude and significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS-controls. Treatment effect estimates using LOS-controls are not only suboptimal in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline) but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). 
To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms on baseline factors only. Incorporating intervention timing may deliver results that are more reliable, more robust, and more useful than those derived using LOS-controls. © Health Research and Educational Trust.
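The core design the abstract describes, comparing cost between propensity-weighted treatment arms, can be illustrated with a toy simulation. This is a minimal sketch with invented parameters, not the study's actual model or data: a single baseline covariate drives both treatment assignment and cost, and inverse-probability weighting recovers the multiplicative treatment effect that a naive arm comparison misses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                        # baseline severity (hypothetical)
p = 1.0 / (1.0 + np.exp(-0.8 * x))            # propensity of receiving the consult
t = rng.binomial(1, p)                        # treatment indicator
# log-scale cost model: severity raises cost, treatment multiplies it by exp(-0.2)
cost = np.exp(8.0 + 0.5 * x - 0.2 * t + rng.normal(0.0, 0.3, n))

# inverse-probability-of-treatment weights (propensity known here by construction)
w = np.where(t == 1, 1.0 / p, 1.0 / (1.0 - p))

naive = cost[t == 1].mean() / cost[t == 0].mean()
weighted = (np.average(cost[t == 1], weights=w[t == 1]) /
            np.average(cost[t == 0], weights=w[t == 0]))
```

Because sicker patients are both costlier and likelier to be treated, the naive cost ratio is biased upward, while the weighted ratio sits near the true exp(-0.2).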

  2. Integrated Design Methodology for Highly Reliable Liquid Rocket Engine

    NASA Astrophysics Data System (ADS)

    Kuratani, Naoshi; Aoki, Hiroshi; Yasui, Masaaki; Kure, Hirotaka; Masuya, Goro

An integrated design methodology is strongly required at the conceptual design phase to achieve highly reliable space transportation systems, especially propulsion systems, not only in Japan but worldwide. In the past, catastrophic failures have caused losses of mission and vehicle (LOM/LOV) in the operational phase and have severely affected schedules and costs in later development phases. A design methodology for highly reliable liquid rocket engines is preliminarily established and investigated in this study. A sensitivity analysis is performed systematically to demonstrate the effectiveness of the methodology and, in particular, to clarify the correlations among the combustion chamber, turbopump, and main valve as the main components. This study describes the essential issues in understanding these correlations, the need to apply the methodology to the remaining critical failure modes in the whole engine system, and the perspective on future engine development.

  3. Development of a Whole Slide Imaging System on Smartphones and Evaluation With Frozen Section Samples

    PubMed Central

    Jiang, Liren

    2017-01-01

Background The aim was to develop scalable Whole Slide Imaging (sWSI), a WSI system based on mainstream smartphones coupled with regular optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, supporting both oil and dry objective lenses of different magnifications, and reasonably high throughput. These performance metrics should be evaluated by expert pathologists and match those of high-end scanners. Objective The aim was to develop scalable Whole Slide Imaging (sWSI), a whole slide imaging system based on smartphones coupled with optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, supporting both oil and dry objective lenses of different magnifications. All performance metrics should be evaluated by expert pathologists and match those of high-end scanners. Methods In the sWSI design, the digitization process is split asynchronously between lightweight clients on smartphones and powerful cloud servers. The client apps automatically capture FoVs at up to 12-megapixel resolution and process them in real time to track the user's operation, giving instant guidance feedback. The servers first stitch each pair of FoVs, then automatically correct, on the fly, the unknown nonlinear distortion introduced by the smartphone lens based on pair-wise stitching, before finally combining all FoVs into one gigapixel VS for each scan. These VSs can be viewed using Internet browsers anywhere. In the evaluation experiment, 100 frozen section slides from patients randomly selected among in-patients of the participating hospital were scanned by both a high-end Leica scanner and sWSI. All VSs were examined by senior pathologists, whose diagnoses were compared against those made using optical microscopy as ground truth to evaluate the image quality. 
Results The sWSI system is developed for both Android and iPhone smartphones and is currently being offered to the public. The image quality is reliable and throughput is approximately 1 FoV per second, yielding a 15-by-15 mm slide under a 20X objective lens in approximately 30-35 minutes, with little training required for the operator. The expected setup cost is approximately US $100 and scanning each slide costs between US $1 and $10, making sWSI highly cost-effective for infrequent or low-throughput usage. In the clinical evaluation of sample-wise diagnostic reliability, average accuracy scores achieved by sWSI-scan-based diagnoses were as follows: 0.78 for breast, 0.88 for uterine corpus, 0.68 for thyroid, and 0.50 for lung samples. The respective low-sensitivity rates were 0.05, 0.05, 0.13, and 0.25, while the respective low-specificity rates were 0.18, 0.08, 0.20, and 0.25. The participating pathologists agreed that the overall quality of sWSI was generally on par with that produced by high-end scanners and did not affect diagnosis in most cases. Pathologists confirmed that sWSI is reliable enough for standard diagnoses of most tissue categories, while it can be used for quick screening of difficult cases. Conclusions As an ultra-low-cost alternative to whole slide scanners, the sWSI solution achieves diagnosis-ready VS quality and robustness for commercial usage. Operated on mainstream smartphones mounted on ordinary optical microscopes, sWSI readily offers affordable and reliable WSI to resource-limited or infrequent clinical users. PMID:28916508

  4. Electronics for a focal plane crystal spectrometer

    NASA Technical Reports Server (NTRS)

    Goeke, R. F.

    1978-01-01

The HEAO-B program imposed the usual constraints on the spacecraft experiment electronics: high reliability, low power consumption, and tight packaging at reasonable cost. The programmable high-voltage power supplies were unique in both application and simplicity of manufacture. The hybridized measurement chain is a modification of that used on the SAS-C program; the charge amplifier design in particular shows definite improvement in performance over previous work.

  5. Monolithically interconnected GaAs solar cells: A new interconnection technology for high voltage solar cell output

    NASA Astrophysics Data System (ADS)

    Dinetta, L. C.; Hannon, M. H.

    1995-10-01

Photovoltaic linear concentrator arrays can benefit from high performance solar cell technologies being developed at AstroPower. Specifically, these are the integration of thin GaAs solar cell and epitaxial lateral overgrowth technologies with the application of monolithically interconnected solar cell (MISC) techniques. This MISC array has several advantages which make it ideal for space concentrator systems. These are high system voltage; reliable, low-cost, monolithically formed interconnections; design flexibility; costs that are independent of array voltage; and low power loss from shorts, opens, and impact damage. This concentrator solar cell will incorporate the benefits of light trapping by growing the device active layers over a low-cost, simple, PECVD-deposited silicon/silicon dioxide Bragg reflector. The high-voltage, low-current output results in minimal I²R losses, while proper device design minimizes shading and resistance losses. It is possible to obtain open circuit voltages as high as 67 volts/cm of solar cell length with existing technology. The projected power density for the high performance device is 5 kW/m² for an AM0 efficiency of 26% at 15X. Concentrator solar cell arrays are necessary to meet the power requirements of specific mission platforms and can supply high voltage power for electric propulsion systems. It is anticipated that the high efficiency GaAs monolithically interconnected linear concentrator solar cell array will enjoy widespread application for space-based solar power needs. Additional applications include remote man-portable or ultra-light unmanned air vehicle (UAV) power supplies where high power per area, high radiation hardness, and a high bus voltage or low bus current are important. The monolithic approach has a number of inherent advantages, including reduced cost per interconnect and increased reliability of array connections. There is also high potential for a large number of consumer products. 
Dual-use applications can include battery chargers and remote power supplies for consumer electronics products such as portable telephones/beepers, portable radios, CD players, dashboard radar detectors, remote walkway lighting, etc.
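The claim that a high-voltage, low-current output minimizes resistive (I²R) losses follows directly from P = I²R: at fixed delivered power, the current, and hence the interconnect dissipation, falls with the square of the bus voltage. A quick illustration with made-up numbers (the resistance and power values below are not from the record):

```python
# resistive interconnect loss at fixed delivered power, for two bus voltages
P = 5.0      # delivered power, W (illustrative)
R = 0.1      # series interconnect resistance, ohms (illustrative)
for V in (1.0, 67.0):
    I = P / V                # current needed to deliver P at bus voltage V
    loss = I ** 2 * R        # I^2 R dissipation in the interconnect
    print(f"{V:5.1f} V bus -> {loss:.6f} W lost")
```

Raising the bus from 1 V to 67 V cuts the resistive loss by a factor of 67² = 4489.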

  6. Monolithically interconnected GaAs solar cells: A new interconnection technology for high voltage solar cell output

    NASA Technical Reports Server (NTRS)

    Dinetta, L. C.; Hannon, M. H.

    1995-01-01

Photovoltaic linear concentrator arrays can benefit from high performance solar cell technologies being developed at AstroPower. Specifically, these are the integration of thin GaAs solar cell and epitaxial lateral overgrowth technologies with the application of monolithically interconnected solar cell (MISC) techniques. This MISC array has several advantages which make it ideal for space concentrator systems. These are high system voltage; reliable, low-cost, monolithically formed interconnections; design flexibility; costs that are independent of array voltage; and low power loss from shorts, opens, and impact damage. This concentrator solar cell will incorporate the benefits of light trapping by growing the device active layers over a low-cost, simple, PECVD-deposited silicon/silicon dioxide Bragg reflector. The high-voltage, low-current output results in minimal I²R losses, while proper device design minimizes shading and resistance losses. It is possible to obtain open circuit voltages as high as 67 volts/cm of solar cell length with existing technology. The projected power density for the high performance device is 5 kW/m² for an AM0 efficiency of 26% at 15X. Concentrator solar cell arrays are necessary to meet the power requirements of specific mission platforms and can supply high voltage power for electric propulsion systems. It is anticipated that the high efficiency GaAs monolithically interconnected linear concentrator solar cell array will enjoy widespread application for space-based solar power needs. Additional applications include remote man-portable or ultra-light unmanned air vehicle (UAV) power supplies where high power per area, high radiation hardness, and a high bus voltage or low bus current are important. The monolithic approach has a number of inherent advantages, including reduced cost per interconnect and increased reliability of array connections. There is also high potential for a large number of consumer products. 
Dual-use applications can include battery chargers and remote power supplies for consumer electronics products such as portable telephones/beepers, portable radios, CD players, dashboard radar detectors, remote walkway lighting, etc.

  7. Breaking Barriers to Low-Cost Modular Inverter Production & Use

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdan Borowy; Leo Casey; Jerry Foshage

    2005-05-31

The goal of this cost-share contract is to advance key technologies to reduce size, weight, and cost while enhancing performance and reliability of the Modular Inverter Product for Distributed Energy Resources (DER). Efforts address technology development to meet the technical needs of the DER market: protection, isolation, reliability, and quality. Program activities build on SatCon Technology Corporation inverter experience (e.g., AIPM, Starsine, PowerGate) in photovoltaic, fuel cell, and energy storage applications. Efforts focused on four technical areas: capacitors, cooling, voltage sensing, and control of parallel inverters. Capacitor efforts developed a hybrid capacitor approach for conditioning SatCon's AIPM unit supply voltages by incorporating several types and sizes to store energy and filter at high, medium, and low frequencies while minimizing parasitics (ESR and ESL). Cooling efforts converted the liquid-cooled AIPM module to an air-cooled unit using augmented-fin, impingement-flow cooling. Voltage sensing efforts successfully modified the existing AIPM sensor board to allow several application-dependent configurations and to enable voltage sensor galvanic isolation. Parallel inverter control efforts realized a reliable technique to control individual inverters connected in a parallel configuration without a communication link. Individual inverter currents, AC and DC, were balanced in the paralleled modules by introducing a delay to the individual PWM gate pulses. The load current sharing is robust and independent of load type (i.e., linear and nonlinear, resistive and/or inductive). This simple yet powerful method for paralleling individual inverters dramatically improves the reliability and fault tolerance of parallel inverter power systems. A patent application has been made based on this control technology.

  8. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components.

    PubMed

    Rahman, Mohd Nizam Ab; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A; Mahmood, Wan Mohd Faizal Wan

    2014-12-02

The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers' high-quality expectations. This paper considers the problem most electronics manufacturers face in reducing costly materials by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for mounting fine-pitch surface mount components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through a dynamic characteristic test, a thermal shock test, and the Taguchi method after the printing process. After the dynamic characteristic and thermal shock tests on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The results show that the most significant parameter for solder volume is squeegee pressure (2.0 mm), and for solder height, the table speed of the SP machine (2.5 mm/s).
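The Taguchi step described above, ranking control-parameter levels by a signal-to-noise ratio computed from replicated responses, can be sketched as follows. The responses and levels here are invented placeholders, not the study's measurements; the formula is the standard larger-is-better S/N ratio, -10·log10(mean(1/y²)).

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-is-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# hypothetical solder-volume responses (%) at two squeegee-pressure levels,
# each replicated across noise conditions
runs = {
    "pressure=1.5mm": [88.0, 90.5, 87.2, 89.1],
    "pressure=2.0mm": [95.3, 94.8, 96.1, 95.0],
}
sn = {level: sn_larger_is_better(y) for level, y in runs.items()}
best = max(sn, key=sn.get)   # the level with the highest S/N is preferred
```

Higher S/N means a response that is both large and consistent across noise conditions, which is why the method selects robust settings rather than merely high-mean ones.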

  9. Reliability Study of Solder Paste Alloy for the Improvement of Solder Joint at Surface Mount Fine-Pitch Components

    PubMed Central

    Rahman, Mohd Nizam Ab.; Zubir, Noor Suhana Mohd; Leuveano, Raden Achmad Chairdino; Ghani, Jaharah A.; Mahmood, Wan Mohd Faizal Wan

    2014-01-01

The significant increase in metal costs has forced the electronics industry to provide new materials and methods to reduce costs, while maintaining customers’ high-quality expectations. This paper considers the problem most electronics manufacturers face in reducing costly materials by introducing a solder paste with an alloy composition of tin 98.3%, silver 0.3%, and copper 0.7%, used for mounting fine-pitch surface mount components on a Printed Wiring Board (PWB). The reliability of the solder joint between electronic components and the PWB is evaluated through a dynamic characteristic test, a thermal shock test, and the Taguchi method after the printing process. After the dynamic characteristic and thermal shock tests on 20 boards, the solder paste was still able to provide a high-quality solder joint. In particular, the Taguchi method is used to determine the optimal control parameters and noise factors of the Solder Printer (SP) machine that affect solder volume and solder height. The control parameters include table separation distance, squeegee speed, squeegee pressure, and table speed of the SP machine. The results show that the most significant parameter for solder volume is squeegee pressure (2.0 mm), and for solder height, the table speed of the SP machine (2.5 mm/s). PMID:28788270

  10. Design optimization for cost and quality: The robust design approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1990-01-01

Designing reliable, low-cost, and operable space systems has become the key to future space operations. Designing high-quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach to design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The Robust Design methodology uses a mathematical tool called an orthogonal array, from design-of-experiments theory, to study a large number of decision variables with a relatively small number of experiments. Robust Design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose here is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by example, and suggest its use as an integral part of the space system design process.
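The orthogonal-array idea can be made concrete with the smallest standard array: an L4 array screens three two-level factors in four runs instead of the 2³ = 8 a full factorial would need, and main effects are read off as differences of level means. The responses below are invented for illustration only.

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors, 4 balanced runs
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# hypothetical performance responses for the 4 runs (higher is better)
y = np.array([12.0, 14.5, 16.0, 13.0])

# main effect of each factor: mean response at level 1 minus mean at level 0
effects = [y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean() for j in range(3)]
```

Each column is balanced (every level appears equally often against every level of the other factors), which is what lets four runs yield unconfounded main-effect estimates.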

  11. Low-cost optical interconnect module for parallel optical data links

    NASA Astrophysics Data System (ADS)

    Noddings, Chad; Hirsch, Tom J.; Olla, M.; Spooner, C.; Yu, Jason J.

    1995-04-01

We have designed, fabricated, and tested a prototype parallel ten-channel unidirectional optical data link. When scaled to production, we project that this technology will satisfy the following market penetration requirements: (1) up to 70 meters transmission distance, (2) at least 1 gigabyte/second data rate, and (3) a volume selling price of $0.35 to $0.50 per MByte/second. These goals can be achieved by means of the assembly innovations described in this paper: a novel alignment method that is integrated with low-cost, few-chip module packaging techniques, yielding high coupling efficiency and reducing the component count. Furthermore, high coupling efficiency increases projected reliability and reduces the driver's power requirements.

  12. Breast volumetric analysis for aesthetic planning in breast reconstruction: a literature review of techniques

    PubMed Central

    Rozen, Warren Matthew; Spychal, Robert T.; Hunter-Smith, David J.

    2016-01-01

    Background Accurate volumetric analysis is an essential component of preoperative planning in both reconstructive and aesthetic breast procedures towards achieving symmetrization and patient-satisfactory outcome. Numerous comparative studies and reviews of individual techniques have been reported. However, a unifying review of all techniques comparing their accuracy, reliability, and practicality has been lacking. Methods A review of the published English literature dating from 1950 to 2015 using databases, such as PubMed, Medline, Web of Science, and EMBASE, was undertaken. Results Since Bouman’s first description of water displacement method, a range of volumetric assessment techniques have been described: thermoplastic casting, direct anthropomorphic measurement, two-dimensional (2D) imaging, and computed tomography (CT)/magnetic resonance imaging (MRI) scans. However, most have been unreliable, difficult to execute and demonstrate limited practicability. Introduction of 3D surface imaging has revolutionized the field due to its ease of use, fast speed, accuracy, and reliability. However, its widespread use has been limited by its high cost and lack of high level of evidence. Recent developments have unveiled the first web-based 3D surface imaging program, 4D imaging, and 3D printing. Conclusions Despite its importance, an accurate, reliable, and simple breast volumetric analysis tool has been elusive until the introduction of 3D surface imaging technology. However, its high cost has limited its wide usage. Novel adjunct technologies, such as web-based 3D surface imaging program, 4D imaging, and 3D printing, appear promising. PMID:27047788

  13. Explosion Clad for Upstream Oil and Gas Equipment

    NASA Astrophysics Data System (ADS)

    Banker, John G.; Massarello, Jack; Pauly, Stephane

    2011-01-01

Today's upstream oil and gas facilities frequently involve the combination of high pressures, high temperatures, and highly corrosive environments, requiring equipment that is thick-walled, corrosion resistant, and cost effective. When significant concentrations of CO2 and/or H2S and/or chlorides are present, corrosion resistant alloys (CRAs) can become the material of choice for separator equipment, piping, related components, and line pipe. They can provide reliable resistance to both corrosion and hydrogen embrittlement. For these applications, the more commonly used CRAs are 316L, 317L and duplex stainless steels, alloy 825, and alloy 625, depending on the application and the severity of the environment. Titanium is also an exceptional choice from the technical perspective but is less commonly used except for heat exchangers. Explosion clad offers significant savings by providing a relatively thin corrosion resistant alloy on the surface, metallurgically bonded to a thick, lower cost steel substrate for pressure containment. Developed and industrialized in the 1960s, explosion cladding can be used for cladding the more commonly used nickel-based and stainless steel CRAs as well as titanium. It has many years of proven experience as a reliable and highly robust clad manufacturing process. The unique cold-welding characteristics of explosion cladding reduce problems of alloy sensitization and dissimilar-metal incompatibility. Explosion clad materials have been used extensively in both upstream and downstream oil, gas, and petrochemical facilities for well over 40 years. Explosion clad equipment has demonstrated excellent resistance to corrosion, embrittlement, and disbonding. Factors critical to ensuring reliable clad manufacture and equipment design and fabrication are addressed.

  14. Development of tungsten armor and bonding to copper for plasma-interactive components

    NASA Astrophysics Data System (ADS)

    Smid, I.; Akiba, M.; Vieider, G.; Plöchl, L.

    1998-10-01

Because it offers the highest sputtering threshold of all candidate materials, tungsten is the most likely armor material for highly loaded plasma-interactive components of commercially relevant fusion reactors. The development of new materials, as well as joining and coating techniques, is needed to find the best balance of plasma compatibility, lifetime, reliability, neutron irradiation resistance, and safety. Further important selection issues are availability, costs of machining and production, etc. Tungsten doped with lanthanum oxide is a commercially available W grade for electrodes, designed for low electron work function, higher recrystallization temperature, reduced secondary grain growth, and machinability at relatively low cost. W-Re and related tungsten-base alloys are preferred for application at high temperatures, when high strength and high thermal shock and recrystallization resistance are required. Due to the high cost and limited global availability of Re, however, the amount of such alloys in a commercial reactor should be kept low. Newly measured material properties up to high temperatures are presented for lanthanated tungsten and W-Re alloys, and the impact on fusion application is discussed. Recently developed coatings of chemical vapor deposited tungsten (CVD-W) on copper substrates have proven to be resistant to repeated thermal and shock loading. Layers of more than 5 mm, as required for the International Thermonuclear Experimental Reactor (ITER), became available. Vacuum plasma sprayed tungsten (VPS-W) in particular is attractive for its lower cost and the potential for in situ repair. However, the advantage of sacrificial plasma-interactive tungsten coatings in long-term fusion devices has yet to be demonstrated. A durable and reliable joining of bulk tungsten to copper is needed to achieve an acceptable component lifetime in a fusion environment. 
The material properties of the copper alloys proposed for ITER, and their impact on the quality of bonding to tungsten is discussed. Future materials R&D should concern issues such as plasma compatibility, and above all neutron irradiation damage of promising tungsten-copper joints.

  15. Survey points to practices that reduce refinery maintenance spending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricketts, R.

During the past decade, Solomon Associates Inc., Dallas, has conducted several comparative analyses of maintenance costs in the refining industry. These investigations have brought to light the maintenance practices and reliability improvement activities that are responsible for the wide range of maintenance costs recorded by refineries. Some of the practices are organizational in nature and thus are of interest to managers reviewing their operations. The paper discusses maintenance costs; profitability; cost trends; equipment availability; funds application; two basic organizational approaches to maintenance (the repair-focused organization and the reliability-focused organization); low-cost practices; and organizational style.

  16. Warranty optimisation based on the prediction of costs to the manufacturer using neural network model and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Stamenkovic, Dragan D.; Popovic, Vladimir M.

    2015-02-01

Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of a warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper, a combination free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower costs and increase profit.
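The simulation step, drawing Weibull times to failure and pricing them under a combination free-replacement/pro-rata policy, can be sketched as below. All parameters (Weibull shape and scale, the warranty split, the unit price) are invented for illustration, and only the first failure per unit is priced, a common simplification; the paper instead derives its reliability parameters from a neural network model.

```python
import numpy as np

rng = np.random.default_rng(42)
beta, eta = 1.8, 1500.0      # Weibull shape / scale for time to failure, hours
W1, W = 500.0, 1000.0        # free-replacement window, total warranty period
c = 4.0                      # unit price of a bulb

n = 200_000
ttf = eta * rng.weibull(beta, n)   # Monte Carlo times to failure

# per-unit warranty payout under the combined policy:
#   t < W1       -> free replacement, full cost c
#   W1 <= t < W  -> pro-rata rebate, falling linearly to 0 at W
#   t >= W       -> no cost
payout = np.where(ttf < W1, c,
         np.where(ttf < W, c * (W - ttf) / (W - W1), 0.0))
expected_cost = payout.mean()     # expected warranty cost per unit sold
```

Repeating this for candidate (W1, W) pairs lets the manufacturer trade warranty attractiveness against expected cost per unit.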

  17. Prognostics-based qualification of high-power white LEDs using Lévy process approach

    NASA Astrophysics Data System (ADS)

    Yung, Kam-Chuen; Sun, Bo; Jiang, Xiaopeng

    2017-01-01

    Due to their versatility in a variety of applications and the growing market demand, high-power white light-emitting diodes (LEDs) have attracted considerable attention. Reliability qualification testing is an essential part of the product development process to ensure the reliability of a new LED product before its release. However, the widely used IES-TM-21 method does not provide comprehensive reliability information. For more accurate and effective qualification, this paper presents a novel method based on prognostics techniques. Prognostics is an engineering technology predicting the future reliability or determining the remaining useful lifetime (RUL) of a product by assessing the extent of deviation or degradation from its expected normal operating conditions. A Lévy subordinator of a mixed Gamma and compound Poisson process is used to describe the actual degradation process of LEDs characterized by random sporadic small jumps of degradation degree, and the reliability function is derived for qualification with different distribution forms of jump sizes. The IES LM-80 test results reported by different LED vendors are used to develop and validate the qualification methodology. This study will be helpful for LED manufacturers to reduce the total test time and cost required to qualify the reliability of an LED product.
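The degradation model described above, a subordinator mixing a gamma process (gradual lumen depreciation) with a compound Poisson component (sporadic jumps), can be simulated directly; reliability at time t is then the fraction of simulated paths still below the lumen-maintenance failure threshold. Every parameter here is an illustrative placeholder, not a fitted LM-80 value.

```python
import numpy as np

rng = np.random.default_rng(7)
n_paths, n_steps, dt = 5000, 600, 10.0       # 6000-hour horizon in 10 h steps

# gamma process: i.i.d. Gamma(a*dt, 1/b) increments -> mean drift a*dt/b per step
a, b = 0.004, 1.0
gamma_inc = rng.gamma(a * dt, 1.0 / b, size=(n_paths, n_steps))

# compound Poisson: rare jumps of exponential size; lam*dt is tiny, so at most
# about one jump per step, and a single size draw per step suffices
lam, jump_mean = 5e-4, 1.0
jumps = rng.poisson(lam * dt, size=(n_paths, n_steps))
jump_inc = np.where(jumps > 0,
                    rng.exponential(jump_mean, (n_paths, n_steps)), 0.0)

deg = np.cumsum(gamma_inc + jump_inc, axis=1)    # % lumen depreciation paths
threshold = 30.0                                  # fail at 30% depreciation (L70)
reliability = (deg < threshold).mean(axis=0)      # empirical R(t) over the horizon
```

Because the increments are nonnegative, each path is monotone, so the estimated reliability curve is nonincreasing by construction, matching the qualitative behavior the Lévy-subordinator model is meant to capture.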

  18. Flight control electronics reliability/maintenance study

    NASA Technical Reports Server (NTRS)

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.

    1977-01-01

Collection and analysis of data are reported concerning the reliability and maintenance experience of flight control system electronics currently in use on passenger-carrying jet aircraft. The B-747 fleets of two airlines were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and maintenance costs associated with the flight control electronics.

  19. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    PubMed

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
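The trade-off the authors optimize, scheduled preventive actions against Weibull-distributed failures, has a classic formulation in the age-replacement model: minimize the expected cost per unit time c(T) = [Cp·R(T) + Cf·(1 - R(T))] / ∫₀ᵀ R(t) dt over the preventive interval T. The sketch below uses invented parameters, not the light-rail braking data.

```python
import numpy as np

def cost_rate(T, beta, eta, cp, cf, n=4000):
    """Expected cost per unit time for age-based preventive replacement at T."""
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / eta) ** beta)          # Weibull reliability R(t)
    mean_cycle = np.sum(R) * (t[1] - t[0])  # E[min(life, T)] = integral of R
    RT = R[-1]
    return (cp * RT + cf * (1.0 - RT)) / mean_cycle

beta, eta = 2.5, 1000.0    # wear-out regime (beta > 1), characteristic life
cp, cf = 1.0, 10.0         # preventive vs corrective action cost
Ts = np.linspace(50.0, 3000.0, 500)
rates = np.array([cost_rate(T, beta, eta, cp, cf) for T in Ts])
T_opt = Ts[np.argmin(rates)]
```

An interior minimum exists only when beta > 1 (increasing hazard) and cp < cf; replacing too early wastes preventive cost, too late incurs the expensive corrective failures.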

  20. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system

    PubMed Central

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-01-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245

  1. Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Chamis, Christos C.

    1994-01-01

    The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a 0.9 mean actuation material volume ratio, the minimum cost was obtained.

  2. Super Ball Bot - Structures for Planetary Landing and Exploration, NIAC Phase 2 Final Report

    NASA Technical Reports Server (NTRS)

    SunSpiral, Vytas; Agogino, Adrian; Atkinson, David

    2015-01-01

    Small, lightweight, and low-cost missions will become increasingly important to NASA's exploration goals. Ideally, teams of small, collapsible, lightweight robots would be conveniently packed during launch and would reliably separate and unpack at their destination. Such robots will allow rapid, reliable in-situ exploration of hazardous destinations such as Titan, where imprecise terrain knowledge and unstable precipitation cycles make single-robot exploration problematic. Unfortunately, landing lightweight conventional robots is difficult with current technology. Current robot designs are delicate, requiring a complex combination of devices such as parachutes, retrorockets, and impact balloons to minimize impact forces and to place a robot in a proper orientation. Instead, we are developing a radically different robot based on a "tensegrity" structure, built purely from tensile and compression elements. Such robots can serve as both a landing and a mobility platform, allowing for dramatically simpler mission profiles and reduced costs. These multi-purpose robots can be lightweight, compactly stored and deployed, absorb strong impacts, are redundant against single-point failures, can recover from different landing orientations, and can provide surface mobility. These properties allow for unique mission profiles that can be carried out with low cost and high reliability and that minimize the inefficient dependence on "use once and discard" mass associated with traditional landing systems. We believe tensegrity robot technology can play a critical role in future planetary exploration.

  3. Special Issue on a Fault Tolerant Network on Chip Architecture

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan

    2010-06-01

    In this paper, a fast and efficient spare-switch selection algorithm is presented for FERNA, a reliable NoC architecture based on a specific application mapped onto a mesh topology. Exploiting the ring concept used in FERNA, the algorithm achieves results equivalent to an exhaustive search in much less run time while improving two parameters: system response time and extra communication cost. The inputs to the FERNA algorithm for these two objectives are derived from high-level transaction simulation in SystemC TLM and from a mathematical formulation, respectively. The results demonstrate that improving these parameters raises overall system reliability, which is calculated analytically. The mapping algorithm is also investigated as a factor affecting the extra bandwidth requirement and system reliability.

  4. Propulsion controls

    NASA Technical Reports Server (NTRS)

    Harkney, R. D.

    1980-01-01

    Increased system requirements and functional integration with the aircraft have placed an increased demand on control system capability and reliability. To provide these at an affordable cost and weight and because of the rapid advances in electronic technology, hydromechanical systems are being phased out in favor of digital electronic systems. The transition is expected to be orderly from electronic trimming of hydromechanical controls to full authority digital electronic control. Future propulsion system controls will be highly reliable full authority digital electronic with selected component and circuit redundancy to provide the required safety and reliability. Redundancy may include a complete backup control of a different technology for single engine applications. The propulsion control will be required to communicate rapidly with the various flight and fire control avionics as part of an integrated control concept.

  5. Relative Economic Merits of Storage and Combustion Turbines for Meeting Peak Capacity Requirements under Increased Penetration of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul; Diakov, Victor; Margolis, Robert

    Batteries with several hours of capacity provide an alternative to combustion turbines for meeting peak capacity requirements. Even when compared to state-of-the-art highly flexible combustion turbines, batteries can provide a greater operational value, which is reflected in a lower system-wide production cost. By shifting load and providing operating reserves, batteries can reduce the cost of operating the power system to a traditional electric utility. This added value means that, depending on battery life, batteries can have a higher cost than a combustion turbine of equal capacity and still produce a system with equal or lower overall life-cycle cost. For a utility considering investing in new capacity, the cost premium for batteries is highly sensitive to a variety of factors, including lifetime, natural gas costs, PV penetration, and grid generation mix. In addition, as PV penetration increases, the net electricity demand profile changes, which may reduce the amount of battery energy capacity needed to reliably meet peak demand.
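The life-cycle comparison above comes down to annualizing capital through a capital recovery factor and adding operating costs. A minimal sketch with purely hypothetical $/kW figures (none of these numbers are from the report):

```python
def levelized_annual_cost(capital, annual_op_cost, life_years, discount_rate):
    """Annualized cost of capacity: capital recovery factor x capital,
    plus the annual operating cost (all in $/kW terms)."""
    crf = (discount_rate * (1 + discount_rate) ** life_years) / \
          ((1 + discount_rate) ** life_years - 1)
    return capital * crf + annual_op_cost

# Hypothetical inputs: a long-lived combustion turbine vs a shorter-lived
# battery with a capital premium but lower fixed operating cost.
ct = levelized_annual_cost(capital=700, annual_op_cost=40,
                           life_years=30, discount_rate=0.07)
batt = levelized_annual_cost(capital=1100, annual_op_cost=15,
                             life_years=15, discount_rate=0.07)
print(round(ct, 1), round(batt, 1))
```

Under these made-up inputs the battery carries a higher annualized capacity cost; the report's point is that the battery's operational value (load shifting, reserves) can close or reverse that gap, and that the tolerable premium is sensitive to lifetime and fuel costs.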

  6. Implementing eco friendly highly reliable upload feature using multi 3G service

    NASA Astrophysics Data System (ADS)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

    Eco-friendly Internet access is the current trend; in this research, eco-friendly is understood as minimum power consumption. The selected devices have low operational power consumption and draw essentially no power while hibernating in the idle state. To provide reliability, a router with an internal load-balancing feature improves on our previous work on multi-3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web servers residing in the public cloud. The demand is not only for speed but for high reliability of access as well. High reliability mitigates both the direct and indirect costs of repeated attempts to upload and download large files; nomadic and mobile computer users need a viable solution. A solution for downloading information was previously proposed and tested, with promising results. That result is now extended to providing a reliable access line, by means of redundancy and automatic reconfiguration, for uploading and downloading large information files to a Web server in the cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup. A router that can balance load across several WAN lines is chosen, and the WAN lines are constructed from multiple 3G lines. The router supports Internet access over more than one 3G line, which increases the reliability and availability of the Internet access, as the second line immediately takes over if the first line is disturbed.

  7. Measuring financial toxicity as a clinically relevant patient-reported outcome: The validation of the COmprehensive Score for financial Toxicity (COST).

    PubMed

    de Souza, Jonas A; Yap, Bonnie J; Wroblewski, Kristen; Blinder, Victoria; Araújo, Fabiana S; Hlubocky, Fay J; Nicholas, Lauren H; O'Connor, Jeremy M; Brockstein, Bruce; Ratain, Mark J; Daugherty, Christopher K; Cella, David

    2017-02-01

    Cancer and its treatment lead to increased financial distress for patients. To the authors' knowledge, to date, no standardized patient-reported outcome measure has been validated to assess this distress. Patients with AJCC Stage IV solid tumors receiving chemotherapy for at least 2 months were recruited. Financial toxicity was measured by the COmprehensive Score for financial Toxicity (COST) measure. The authors collected data regarding patient characteristics, clinical trial participation, health care use, willingness to discuss costs, psychological distress (Brief Profile of Mood States [POMS]), and health-related quality of life (HRQOL) as measured by the Functional Assessment of Cancer Therapy: General (FACT-G) and the European Organization for Research and Treatment of Cancer (EORTC) QOL questionnaires. Test-retest reliability, internal consistency, and validity of the COST measure were assessed using standard-scale construction techniques. Associations between the resulting factors and other variables were assessed using multivariable analyses. A total of 375 patients with advanced cancer were approached, 233 of whom (62.1%) agreed to participate. The COST measure demonstrated high internal consistency and test-retest reliability. Factor analyses revealed a coherent, single, latent variable (financial toxicity). COST values were found to be correlated with income (correlation coefficient [r] = 0.28; P<.001), psychosocial distress (r = -0.26; P<.001), and HRQOL, as measured by the FACT-G (r = 0.42; P<.001) and by the EORTC QOL instruments (r = 0.33; P<.001). Independent factors found to be associated with financial toxicity were race (P = .04), employment status (P<.001), income (P = .003), number of inpatient admissions (P = .01), and psychological distress (P = .003). Willingness to discuss costs was not found to be associated with the degree of financial distress (P = .49). 
The COST measure demonstrated reliability and validity in measuring financial toxicity. Its correlation with HRQOL indicates that financial toxicity is a clinically relevant patient-centered outcome. Cancer 2017;123:476-484. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
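Internal consistency of a multi-item measure such as COST is conventionally summarized by Cronbach's alpha. The following is a minimal sketch of that computation with toy data; the item scores and 3-item layout are illustrative, not from the study:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents, same order).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy data: 3 items scored by 5 respondents, deliberately consistent.
items = [[1, 2, 3, 4, 5],
         [2, 2, 3, 5, 5],
         [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(items), 3))
```

High alpha (near 1) indicates the items track a single latent construct, which is what the factor analysis in the study also found for financial toxicity.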

  8. The reliability of running economy expressed as oxygen cost and energy cost in trained distance runners.

    PubMed

    Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P

    2013-12-01

    This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
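The comparison of typical error against the smallest worthwhile change can be sketched as follows, using the log-transform convention for the typical error (expressed as a CV%) and 0.2 x the between-subject CV for the SWC. The test-retest values below are fabricated for illustration; only the method mirrors the abstract:

```python
import math
import statistics

def typical_error_pct(test1, test2):
    """Within-subject typical error as a CV%: SD of pairwise differences of
    log-transformed scores divided by sqrt(2), back-transformed to percent."""
    diffs = [math.log(b) - math.log(a) for a, b in zip(test1, test2)]
    sd = statistics.stdev(diffs) / math.sqrt(2)
    return (math.exp(sd) - 1) * 100

def smallest_worthwhile_change_pct(values):
    """SWC taken as 0.2 x the between-subject coefficient of variation."""
    return 0.2 * (statistics.stdev(values) / statistics.mean(values)) * 100

# Hypothetical oxygen cost values (ml/kg/km) for 5 runners, test vs retest.
oc_test = [201.0, 195.5, 210.2, 188.7, 205.1]
oc_retest = [204.3, 193.0, 214.8, 191.2, 202.0]
te = typical_error_pct(oc_test, oc_retest)
swc = smallest_worthwhile_change_pct(oc_test)
print(round(te, 2), round(swc, 2))
```

When the typical error exceeds the SWC, as in the study, the measure cannot confidently detect small but meaningful changes.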

  9. Electrocatalysts by atomic layer deposition for fuel cell applications

    DOE PAGES

    Cheng, Niancai; Shao, Yuyan; Liu, Jun; ...

    2016-01-22

    Here, fuel cells are a promising technology solution for reliable and clean energy because they offer high energy conversion efficiency and low emission of pollutants. However, high cost and insufficient durability are considerable challenges for widespread adoption of polymer electrolyte membrane fuel cells (PEMFCs) in practical applications. Current PEMFC catalysts have been identified as major contributors to both the high cost and limited durability. Atomic layer deposition (ALD) is emerging as a powerful technique for solving these problems due to its exclusive advantages over other methods. In this review, we summarize recent developments of ALD in PEMFCs with a focus on design of materials for improved catalyst activity and durability. New research directions and future trends are also discussed.

  10. Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing

    NASA Technical Reports Server (NTRS)

    Dobbs, Carl, Sr.

    2012-01-01

    A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, of implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software.
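The idea of voting on compact per-processor sumchecks rather than full outputs can be sketched in software. The checksum choice (a Fletcher-style running sum over memory writes) and the data are illustrative assumptions, not the actual hardware design:

```python
from collections import Counter

def fletcher16(data):
    """Simple running checksum ('sumcheck') over a stream of byte writes."""
    s1 = s2 = 0
    for byte in data:
        s1 = (s1 + byte) % 255
        s2 = (s2 + s1) % 255
    return (s2 << 8) | s1

def nmr_vote(checksums):
    """Majority vote over N redundant processors' checksums; returns the
    winning checksum and the indices of disagreeing (suspect) processors."""
    winner, _ = Counter(checksums).most_common(1)[0]
    suspects = [i for i, c in enumerate(checksums) if c != winner]
    return winner, suspects

healthy = [10, 20, 30, 40]
faulty = [10, 20, 31, 40]   # one corrupted memory transaction
sums = [fletcher16(healthy), fletcher16(healthy), fletcher16(faulty)]
winner, suspects = nmr_vote(sums)
print(suspects)   # → [2]
```

Comparing a few checksum words instead of every memory transaction is what lets the voter stay cheap in performance and power.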

  11. Development of an Expanded, High Reliability Cost and Performance Database for In Situ Remediation Technologies

    DTIC Science & Technology

    2016-03-01

    Fragmentary indexed text: the dataset covers in situ remediation technologies, including chemical oxidation with potassium permanganate (e.g., the Tinker DRA-3 site), and a section on the potential advantages and limitations of the dataset. Cited: Thomson, N.R., E.D. Hood, and G.J. Farquhar, 2007, "Permanganate Treatment of an Emplaced DNAPL Source," Ground Water Monitoring…

  12. Stirling engines for automobiles

    NASA Technical Reports Server (NTRS)

    Beremand, D. G.

    1979-01-01

    The results of recent and ongoing automobile Stirling engine development efforts are reviewed and technology status and requirements are identified. Key technology needs include those for low cost, high temperature (1300 - 1500 F) metal alloys for heater heads, and reliable long-life, low-leakage shaft seals. Various fuel economy projections for Stirling powered automobiles are reviewed and assessed.

  13. Hidden Savings in your Bus Budget

    ERIC Educational Resources Information Center

    Newby, Ruth

    2005-01-01

    School transportation industry statistics show the annual average costs for operating and maintaining a single school bus range from $34,000 to $38,000. Operating a school bus fleet at high efficiency has a real impact on the dollars saved for a school district and the reliability of transportation service to students. In this article, the author…

  14. Electromechanical actuation for thrust vector control applications

    NASA Technical Reports Server (NTRS)

    Roth, Mary Ellen

    1990-01-01

    The advanced launch system (ALS), is a launch vehicle that is designed to be cost-effective, highly reliable, and operationally efficient with a goal of reducing the cost per pound to orbit. An electromechanical actuation (EMA) system is being developed as an attractive alternative to the hydraulic systems. The controller will integrate 20 kHz resonant link power management and distribution (PMAD) technology and pulse population modulation (PPM) techniques to implement field-oriented vector control (FOVC) of a new advanced induction motor. The driver and the FOVC will be microprocessor controlled. For increased system reliability, a built-in test (BITE) capability will be included. This involves introducing testability into the design of a system such that testing is calibrated and exercised during the design, manufacturing, maintenance, and prelaunch activities. An actuator will be integrated with the motor controller for performance testing of the EMA thrust vector control (TVC) system. The EMA system and work proposed for the future are discussed.

  15. Small space station electrical power system design concepts

    NASA Technical Reports Server (NTRS)

    Jones, G. M.; Mercer, L. N.

    1976-01-01

    A small manned facility, i.e., a small space station, placed in earth orbit by the Shuttle transportation system would be a viable, cost effective addition to the basic Shuttle system to provide many opportunities for R&D programs, particularly in the area of earth applications. The small space station would have many similarities with Skylab. This paper presents design concepts for an electrical power system (EPS) for the small space station based on Skylab experience, in-house work at Marshall Space Flight Center, SEPS (Solar Electric Propulsion Stage) solar array development studies, and other studies sponsored by MSFC. The proposed EPS would be a solar array/secondary battery system. Design concepts expressed are based on maximizing system efficiency and five year operational reliability. Cost, weight, volume, and complexity considerations are inherent in the concepts presented. A small space station EPS based on these concepts would be highly efficient, reliable, and relatively inexpensive.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Billy D.; Akhil, Abbas Ali

    This is the final report on a field evaluation by the Department of the Navy of twenty 5-kW PEM fuel cells carried out during 2004 and 2005 at five Navy sites located in New York, California, and Hawaii. The key objective of the effort was to obtain an engineering assessment of their military applications. Particular issues of interest were fuel cell cost, performance, reliability, and the readiness of commercial fuel cells for use as a standalone (grid-independent) power option. Two corollary objectives of the demonstration were to promote technological advances and to improve fuel performance and reliability. From a cost perspective, the capital cost of PEM fuel cells at this stage of their development is high compared to other power generation technologies. Sandia National Laboratories' technical recommendation to the Navy is to remain involved in evaluating successive generations of this technology, particularly in locations with greater environmental extremes, and it encourages their increased use by the Navy.

  17. Back-bombardment compensation in microwave thermionic electron guns

    NASA Astrophysics Data System (ADS)

    Kowalczyk, Jeremy M. D.; Madey, John M. J.

    2014-12-01

    The development of capable, reliable, and cost-effective compact electron beam sources remains a long-standing objective of the efforts to develop the accelerator systems needed for on-site research and industrial applications ranging from electron beam welding to high performance x-ray and gamma ray light sources for element-resolved microanalysis and national security. The need in these applications for simplicity, reliability, and low cost has emphasized solutions compatible with the use of the long established and commercially available pulsed microwave rf sources and L-, S- or X-band linear accelerators. Thermionic microwave electron guns have proven to be one successful approach to the development of the electron sources for these systems providing high macropulse average current beams with picosecond pulse lengths and good emittance out to macropulse lengths of 4-5 microseconds. But longer macropulse lengths are now needed for use in inverse-Compton x-ray sources and other emerging applications. We describe in this paper our approach to extending the usable macropulse current and pulse length of these guns through the use of thermal diffusion to compensate for the increase in cathode surface temperature due to back-bombardment.

  18. Advanced composites characterization with x-ray technologies

    NASA Astrophysics Data System (ADS)

    Baaklini, George Y.

    1993-12-01

    Recognizing the critical need to advance new composites for the aeronautics and aerospace industries, we are focussing on advanced test methods that are vital to successful modeling and manufacturing of future generations of high temperature and durable composite materials. These newly developed composites are necessary to reduce propulsion cost and weight, to improve performance and reliability, and to address longer-term national strategic thrusts for sustaining global preeminence in high speed air transport and in high performance military aircraft.

  19. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on differences between classes but great homogeneity within each of them. Land cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for this task. However, in some developing countries, and particularly in the Casacoima municipality in Venezuela, geographic information systems are lacking because of outdated information and the high cost of software licenses. This research proposes a low-cost methodology to develop thematic mapping of local land use and coverage types in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised per-pixel and per-region classification methods were applied using different algorithms, which were compared among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed an overall reliability of 73.36% and a kappa index of 0.69, while Euclidean distance obtained an overall reliability of 67.17% and a kappa index of 0.61. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land-use plans. Hence, open source tools are an economically viable alternative not only for forestry organizations but for the general public, allowing projects in economically depressed and/or environmentally threatened areas.
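The overall reliability (accuracy) and kappa index reported above are both computed from a classification confusion matrix. A minimal sketch with a hypothetical 3-class matrix (the counts are invented for illustration):

```python
def overall_accuracy_and_kappa(confusion):
    """confusion[i][j] = samples of true class i assigned to class j.
    Returns (overall accuracy, Cohen's kappa); kappa is unitless in [-1, 1]."""
    size = len(confusion)
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(size))
    p_o = diag / n                                   # observed agreement
    row_tot = [sum(row) for row in confusion]
    col_tot = [sum(confusion[i][j] for i in range(size)) for j in range(size)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / (n * n)  # chance
    return p_o, (p_o - p_e) / (1 - p_e)

# Hypothetical 3-class land-cover confusion matrix (e.g. forest/crop/urban).
cm = [[50, 3, 2],
      [4, 40, 6],
      [1, 5, 39]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))
```

Note that kappa is a unitless agreement coefficient, which is why it is reported as a plain number rather than a percentage.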

  20. Shuttle-Derived Launch Vehicles' Capabilities: An Overview

    NASA Technical Reports Server (NTRS)

    Rothschild, William J.; Bailey, Debra A.; Henderson, Edward M.; Crumbly, Chris

    2005-01-01

    Shuttle-Derived Launch Vehicle (SDLV) concepts have been developed by a collaborative team comprising the Johnson Space Center, Marshall Space Flight Center, Kennedy Space Center, ATK-Thiokol, Lockheed Martin Space Systems Company, The Boeing Company, and United Space Alliance. The purpose of this study was to provide timely information on a full spectrum of low-risk, cost-effective options for STS-Derived Launch Vehicle concepts to support the definition of crew and cargo launch requirements for the Space Exploration Vision. Since the SDLV options use high-reliability hardware, existing facilities, and proven processes, they can provide relatively low-risk capabilities to launch extremely large payloads to low Earth orbit. This capability to reliably lift very large, high-dollar-value payloads could reduce mission operational risks by minimizing the number of complex on-orbit operations compared to architectures based on multiple smaller launchers. The SDLV options also offer several logical spiral development paths for larger exploration payloads. All of these development paths make practical and cost-effective use of existing Space Shuttle Program (SSP) hardware, infrastructure, and launch and flight operations systems. By utilizing these existing assets, the SDLV project could support the safe and orderly transition of the current SSP through the planned end of life in 2010. The SDLV concept definition work during 2004 focused on three main configuration alternatives: a side-mount heavy lifter (approximately 77 MT payload), an in-line medium lifter (approximately 22 MT Crew Exploration Vehicle payload), and an in-line heavy lifter (greater than 100 MT payload). This paper provides an overview of the configuration, performance capabilities, reliability estimates, concept of operations, and development plans for each of the various SDLV alternatives. 
While development, production, and operations costs have been estimated for each of the SDLV configuration alternatives, these proprietary data have not been included in this paper.

  1. Fast Multiscale Algorithms for Wave Propagation in Heterogeneous Environments

    DTIC Science & Technology

    2016-01-07

    Fragmentary indexed text, including a talk citation ("Nonlinear solvers for high-intensity focused ultrasound with application to cancer treatment," AIMS, Palo Alto, 2012), a figure caption ("Density µ(t) at mode 0 for scattering of a plane Gaussian pulse from a sphere"), and the statement that two crucial components of the highly efficient, general-purpose wave simulator we envision are reliable, low-cost methods for truncating…

  2. Ion plated electronic tube device

    DOEpatents

    Meek, T.T.

    1983-10-18

    An electronic tube and associated circuitry produced by ion-plating techniques. The process is automated, so both active and passive devices are produced at very low cost. The circuitry is extremely reliable and is capable of functioning in both high-radiation and high-temperature environments. The electronic tubes produced are more than an order of magnitude smaller than conventional electronic tubes.

  3. Reliability Assessment for Low-cost Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Freeman, Paul Michael

    Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. 
This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
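A failure modes and effects analysis like the one described ranks failure modes by qualitative severity, occurrence, and detection ratings; one common scheme multiplies them into a risk priority number. The failure modes and ratings below are hypothetical illustrations, not the dissertation's assessments:

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA metric: RPN = S x O x D, each rated 1 (best) to 10 (worst)."""
    for v in (severity, occurrence, detection):
        assert 1 <= v <= 10
    return severity * occurrence * detection

# Hypothetical ratings for UAV control-effector failure modes.
failure_modes = {
    "elevator servo jam":        (9, 3, 4),
    "aileron linkage slop":      (5, 5, 6),
    "rudder pushrod disconnect": (8, 2, 3),
}
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]), reverse=True)
for name, (s, o, d) in ranked:
    print(name, risk_priority_number(s, o, d))
```

Ranking by RPN is what drives design changes such as splitting control surfaces: a redesign that lowers severity or occurrence pushes the mode down the list.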

  4. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull and Beta distributions are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on production cost simulation with probability discretization and linearized power flow, an optimal power flow problem minimizing the cost of conventional generation is solved, so that the reliability of the distribution grid can be assessed quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices; a simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the method computes these indices much faster than the Monte Carlo method while preserving accuracy.
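The LOLP and EENS indices can be estimated with the Monte Carlo approach the paper benchmarks against. A minimal sketch with Weibull-distributed wind speed and entirely hypothetical feeder parameters (firm conventional capacity, a simple piecewise-linear turbine power curve):

```python
import random

def wind_power(v, rated, v_ci=3.0, v_r=12.0, v_co=25.0):
    """Piecewise-linear turbine power curve (cut-in, rated, cut-out speeds)."""
    if v < v_ci or v >= v_co:
        return 0.0
    if v >= v_r:
        return rated
    return rated * (v - v_ci) / (v_r - v_ci)

def simulate_lolp_eens(load_mw, conv_mw, wind_rated_mw, k, c,
                       hours=8760, trials=50, seed=1):
    """Monte Carlo estimate of LOLP (fraction of hours with unserved load)
    and EENS (expected MWh not supplied per year). Wind speed ~ Weibull
    (shape k, scale c); conventional capacity is treated as firm."""
    rng = random.Random(seed)
    short_hours, unserved = 0, 0.0
    n = hours * trials
    for _ in range(n):
        v = rng.weibullvariate(c, k)   # note: scale argument first, then shape
        supply = conv_mw + wind_power(v, wind_rated_mw)
        if supply < load_mw:
            short_hours += 1
            unserved += load_mw - supply
    return short_hours / n, unserved / trials

# Hypothetical feeder: 40 MW load, 30 MW firm generation, 20 MW wind.
lolp, eens = simulate_lolp_eens(40.0, 30.0, 20.0, k=2.0, c=8.0)
print(round(lolp, 3), round(eens, 1))
```

The many sampled hours needed for converged indices are exactly why the paper's analytical discretization is attractive as a faster alternative.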

  5. Electrical service reliability: the customer perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samsa, M.E.; Hub, K.A.; Krohm, G.C.

    1978-09-01

    Electric-utility-system reliability criteria have traditionally been established as a matter of utility policy or through long-term engineering practice, generally with no supportive customer cost/benefit analysis as justification. This report presents results of an initial study of the customer perspective toward electric-utility-system reliability, based on a critical review of over 20 previous and ongoing efforts to quantify the customer's value of reliable electric service. A possible structure of customer classifications is suggested as a reasonable level of disaggregation for further investigation of customer value, and these groups are characterized in terms of their electricity use patterns. The values that customers assign to reliability are discussed in terms of internal and external cost components. A list of options for effecting changes in customer service reliability is set forth, and some of the many policy issues that could alter customer-service reliability are identified.

  6. Museum genomics: low-cost and high-accuracy genetic data from historical specimens.

    PubMed

    Rowe, Kevin C; Singhal, Sonal; Macmanes, Matthew D; Ayroles, Julien F; Morelli, Toni Lyn; Rubidge, Emily M; Bi, Ke; Moritz, Craig C

    2011-11-01

    Natural history collections are unparalleled repositories of geographical and temporal variation in faunal conditions. Molecular studies offer an opportunity to uncover much of this variation; however, genetic studies of historical museum specimens typically rely on extracting highly degraded and chemically modified DNA samples from skins, skulls or other dried samples. Despite this limitation, obtaining short fragments of DNA sequences using traditional PCR amplification of DNA has been the primary method for genetic study of historical specimens. Few laboratories have succeeded in obtaining genome-scale sequences from historical specimens and then only with considerable effort and cost. Here, we describe a low-cost approach using high-throughput next-generation sequencing to obtain reliable genome-scale sequence data from a traditionally preserved mammal skin and skull using a simple extraction protocol. We show that single-nucleotide polymorphisms (SNPs) from the genome sequences obtained independently from the skin and from the skull are highly repeatable compared to a reference genome. © 2011 Blackwell Publishing Ltd.

  7. Comparison of Quantity Versus Quality Using Performance, Reliability, and Life Cycle Cost Data. A Case Study of the F-15, F-16, and A-10 Aircraft.

    DTIC Science & Technology

    1985-09-01

    Comparison of Quantity Versus Quality Using Performance, Reliability, and Life Cycle Cost Data: A Case Study of the F-15, F-16, and A-10 Aircraft. Thesis, Air Force Institute of Technology, Air University, Wright-Patterson Air Force Base, Ohio.

  8. Design of high-reliability low-cost amorphous silicon modules for high energy yield

    NASA Astrophysics Data System (ADS)

    Jansen, Kai W.; Varvar, Anthony; Twesme, Edward; Berens, Troy; Dhere, Neelkanth G.

    2008-08-01

    For PV modules to fulfill their intended purpose, they must generate sufficient economic return over their lifetime to justify their initial cost. Not only must modules be manufactured at a low cost/Wp with a high energy yield (kWh/kWp), they must also be designed to withstand the significant environmental stresses experienced throughout their 25+ year lifetime. Based on field experience, the most common factors affecting the lifetime energy yield of glass-based amorphous silicon (a-Si) modules have been identified; these include: 1) light-induced degradation; 2) moisture ingress and thin film corrosion; 3) transparent conductive oxide (TCO) delamination; and 4) glass breakage. The current approaches to mitigating the effect of these degradation mechanisms are discussed and the accelerated tests designed to simulate some of the field failures are described. In some cases, novel accelerated tests have been created to facilitate the development of improved manufacturing processes, including a unique test to screen for TCO delamination. Modules using the most reliable designs are tested in high voltage arrays at customer and internal test sites, as well as at independent laboratories. Data from tests at the Florida Solar Energy Center have shown that a-Si tandem modules can demonstrate an energy yield exceeding 1200 kWh/kWp/yr in a subtropical climate. In the same study, the test arrays demonstrated low long-term power loss over two years of data collection, after initial stabilization. The absolute power produced by the test arrays varied seasonally by approximately +/-7%, as expected.

  9. Self-audit of lockout/tagout in manufacturing workplaces: A pilot study.

    PubMed

    Yamin, Samuel C; Parker, David L; Xi, Min; Stanley, Rodney

    2017-05-01

    Occupational health and safety (OHS) self-auditing is a common practice in industrial workplaces. However, few audit instruments have been tested for inter-rater reliability and accuracy. A lockout/tagout (LOTO) self-audit checklist was developed for use in manufacturing enterprises. It was tested for inter-rater reliability and accuracy using responses of business self-auditors and external auditors. Inter-rater reliability at ten businesses was excellent (κ = 0.84). Business self-auditors had high accuracy in identifying elements of LOTO practice that were present (100%) as well as those that were absent (81%). Reliability and accuracy increased further when problematic checklist questions were removed from the analysis. Results indicate that the LOTO self-audit checklist would be useful in manufacturing firms' efforts to assess and improve their LOTO programs. In addition, a reliable self-audit instrument removes the need for external auditors to visit worksites, thereby expanding capacity for outreach and intervention while minimizing costs. © 2017 Wiley Periodicals, Inc.
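
    The inter-rater agreement figure reported above (κ = 0.84) is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of the computation; the paired yes/no audit responses below are invented for illustration, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same checklist items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[k] / n * freq_b[k] / n for k in freq_a)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes(1)/no(0) responses for 8 checklist items.
self_audit =     [1, 1, 1, 1, 0, 0, 0, 0]
external_audit = [1, 1, 1, 0, 0, 0, 0, 1]
print(round(cohens_kappa(self_audit, external_audit), 2))  # → 0.5
```

    Here the raters agree on 6 of 8 items (75%), but half that agreement is expected by chance, so κ is only 0.5; values above roughly 0.8, as in the study, are conventionally read as excellent agreement.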

  10. PV O&M Cost Model and Cost Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Andy

    This is a presentation on PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering estimating PV O&M costs, polynomial expansion, and implementation of Net Present Value (NPV) and reserve account in cost models.
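
    The presentation covers implementing Net Present Value in an O&M cost model. A minimal sketch of discounting an annual O&M cost stream to present value; the discount rate and cost figures are illustrative, not taken from the presentation:

```python
def npv(rate, cash_flows):
    """Net Present Value of a series of end-of-year cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical annual O&M costs ($/yr) for a small PV system over 3 years.
annual_om_costs = [100.0, 100.0, 100.0]
print(round(npv(0.10, annual_om_costs), 2))  # → 248.69
```

    The same discounting applies to sizing a reserve account: the account must hold at least the present value of the future interventions it is meant to fund.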

  11. The environmental monitoring of Cultural Heritage through Low Cost strategies: The frescoes of the crypt of St. Francesco d'Assisi's, Irsina (Basilicata, Southern Italy)

    NASA Astrophysics Data System (ADS)

    Sileo, Maria; Gizzi, Fabrizio; Masini, Nicola

    2015-04-01

    Environmental monitoring is one of the main assessment and diagnostic tools used to define appropriate strategies for the preservation of cultural heritage. However, it entails high costs: for the purchase and maintenance of instruments, for the management of the installations, and for the processing of results. These costs mean that environmental monitoring technologies are not widespread; their use is limited to the study of very famous monuments or sites. To extend the use and dissemination of such technologies to a greater number of monuments, research aimed at testing low-cost technologies has been performed within the project Pro_Cult (Advanced Methodological Approaches and Technologies for Protection and Security of Cultural Heritage). The aim of the research is to develop low-cost monitoring systems and to assess their effectiveness by comparison with high-cost commercial ones. To this end, an environmental monitoring system based on Arduino, a flexible and user-friendly open-source hardware and software prototyping platform, was designed and developed. The system is connected to sensors for the detection of environmental parameters that have a low purchase cost but an accuracy comparable to that of mid-range commercial detection sensors. This low-cost system was tested in the framework of a microclimate monitoring project in the crypt of St. Francis of Assisi in Irsina (Southern Italy), which is enriched by a precious cycle of medieval frescoes. The aim was to compare two monitoring systems, the first a low-cost Arduino-based system and the second a standard commercial product, over a full yearly cycle, and to assess the reliability of the results obtained by the two systems. This paper presents the results of the comparative analysis of an entire yearly monitoring cycle in relation to the degradation problems affecting the paintings of the medieval crypt [1].
The obtained results proved the capability and reliability of the designed low-cost monitoring system for investigating the indoor microclimate in relation to decay pathologies. Acknowledgements: The authors thank the Basilicata Region for supporting this activity in the framework of the project "PRO_CULT" (Advanced Methodological Approaches and Technologies for Protection and Security of Cultural Heritage), financed by the Regional Operational Programme ERDF 2007/2013. [1] M. Sileo, M. Biscione, F.T. Gizzi, N. Masini & M.I. Martinez-Garrido, 2014 - Low cost strategies for the environmental monitoring of Cultural Heritage: Preliminary data from the crypt of St. Francesco d'Assisi, Irsina (Basilicata, Southern Italy). Science, Technology and Cultural Heritage, edited by Miguel Angel Rogerio-Candelera, 27-34. ISBN: 978-1-138-02744-2.
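
    The comparative assessment described above, low-cost readings against a commercial reference logger, comes down to computing agreement statistics between the two time series. A hedged sketch with invented temperature readings (the data are illustrative, not from the crypt study):

```python
import math

def mean_absolute_error(xs, ys):
    """Average absolute disagreement between paired readings."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

def pearson_r(xs, ys):
    """Linear correlation between the two sensor series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hourly temperatures (deg C): reference logger vs low-cost node.
reference = [12.0, 12.4, 13.1, 13.8, 13.5, 12.9]
low_cost  = [12.2, 12.5, 13.0, 14.0, 13.4, 13.1]
print(round(mean_absolute_error(reference, low_cost), 3))  # → 0.15
print(round(pearson_r(reference, low_cost), 3))            # → 0.976
```

    A small mean absolute error and a correlation near 1 over a full yearly cycle is the kind of evidence that supports substituting the low-cost system for the commercial one.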

  12. Customer Dissatisfaction Index and its Improvement Costs

    NASA Astrophysics Data System (ADS)

    Lvovs, Aleksandrs; Mutule, Anna

    2010-01-01

    The paper describes the customer dissatisfaction index (CDI), a factor that can be used to characterize the reliability level of power supply. The factor is directly tied to customers' satisfaction with their power supply and can be used to control the reliability level of supply to residential customers. Relations between CDI and other reliability indices are shown. The paper also gives a brief overview of Latvia's power-industry legislation, which forms the basis for introducing CDI. Calculations of CDI improvement costs are performed as well.
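
    The "other reliability indices" such an index would be related to are typically the standard customer interruption indices SAIFI and SAIDI (IEEE Std 1366). A minimal sketch of their computation; the outage records below are invented for illustration:

```python
# Each outage event: (customers_interrupted, duration_minutes).
outages = [(500, 90), (1200, 30), (300, 240)]
customers_served = 10_000

# SAIFI: average number of interruptions per customer served.
saifi = sum(n for n, _ in outages) / customers_served
# SAIDI: average interruption duration (minutes) per customer served.
saidi = sum(n * d for n, d in outages) / customers_served

print(saifi)  # → 0.2
print(saidi)  # → 15.3
```

    A dissatisfaction-style index would weight these raw interruption statistics by how strongly residential customers perceive them.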

  13. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
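
    The classical safety index the paper builds on is the first-order reliability index for normally distributed strength R and stress S, β = (μ_R − μ_S) / √(σ_R² + σ_S²), with failure probability Φ(−β). A minimal numeric sketch; the strength and stress statistics are illustrative assumptions, not values from the paper:

```python
import math

def safety_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order safety index for normal strength R and stress S."""
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

def failure_probability(beta):
    """P(failure) = Phi(-beta), via the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

beta = safety_index(mu_r=50.0, sigma_r=5.0, mu_s=30.0, sigma_s=5.0)
print(round(beta, 3))  # → 2.828
print(f"failure probability ≈ {failure_probability(beta):.2e}")
```

    Solving this relation in reverse, finding the margin factor that achieves a specified β, is what lets such a factor stand in for the conventional safety factor in stress analyses.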

  14. Added Value of Reliability to a Microgrid: Simulations of Three California Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Lai, Judy; Stadler, Michael

    The Distributed Energy Resources Customer Adoption Model is used to estimate the value an Oakland nursing home, a Riverside high school, and a Sunnyvale data center would need to place on higher electricity service reliability for them to adopt a Consortium for Electric Reliability Technology Solutions Microgrid (CM) based on economics alone. A fraction of each building's load is deemed critical based on its mission, and the added cost of CM capability to serve that critical load is added to the cost of on-site generation options. The three sites are analyzed with various resources available as microgrid components. Results show that the value placed on higher reliability often does not have to be significant for CM to appear attractive, about $25/kW·a and up, but the carbon footprint consequences are mixed because storage is often used to shift cheaper off-peak electricity to use during afternoon hours in competition with the solar sources.

  15. Reliability models: the influence of model specification in generation expansion planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stremel, J.P.

    1982-10-01

    This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.
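
    The "economic value of reliability (such as ... energy not served)" can be attached to an expansion plan through a capacity-outage table: enumerate generator availability states, weight the unserved load in each state by its probability, and price it at the value of lost load. A toy two-unit sketch; all numbers are illustrative:

```python
from itertools import product

# Two identical 100 MW units, each with a forced outage rate of 0.10.
units = [(100.0, 0.10), (100.0, 0.10)]
load_mw = 150.0
voll = 5000.0   # value of lost load, $/MWh (illustrative)
hours = 8760

lolp = 0.0            # loss-of-load probability
eens_per_hour = 0.0   # expected energy not served, MWh per hour
for state in product([0, 1], repeat=len(units)):   # 0 = on outage, 1 = up
    prob, capacity = 1.0, 0.0
    for up, (cap, forced_outage_rate) in zip(state, units):
        prob *= (1 - forced_outage_rate) if up else forced_outage_rate
        capacity += cap * up
    shortfall = max(0.0, load_mw - capacity)
    if shortfall > 0:
        lolp += prob
    eens_per_hour += prob * shortfall

annual_cost = eens_per_hour * hours * voll
print(round(lolp, 4))           # → 0.19
print(round(eens_per_hour, 2))  # → 10.5
```

    Adding `annual_cost` to each candidate plan's objective is the "full economic optimization" the paper argues for, instead of enforcing a fixed reliability target.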

  16. Design for Reliability and Safety Approach for the NASA New Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal, M.; Weldon, Danny M.

    2007-01-01

    The United States National Aeronautics and Space Administration (NASA) is in the midst of a space exploration program intended for sending crew and cargo to the International Space Station (ISS), to the moon, and beyond. This program is called Constellation. As part of the Constellation program, NASA is developing new launch vehicles aimed at significantly increasing safety and reliability, reducing the cost of accessing space, and providing a growth path for manned space exploration. Achieving these goals requires a rigorous process that addresses reliability, safety, and cost upfront and throughout all the phases of the life cycle of the program. This paper discusses the "Design for Reliability and Safety" approach for the NASA new crew launch vehicle called ARES I. The ARES I is being developed by NASA Marshall Space Flight Center (MSFC) in support of the Constellation program. The ARES I consists of three major Elements: a solid First Stage (FS), an Upper Stage (US), and a liquid Upper Stage Engine (USE). Stacked on top of the ARES I is the Crew Exploration Vehicle (CEV). The CEV consists of a Launch Abort System (LAS), Crew Module (CM), Service Module (SM), and a Spacecraft Adapter (SA). The CEV development is being led by NASA Johnson Space Center (JSC). Designing for high reliability and safety requires a good integrated working environment and a sound technical design approach. The "Design for Reliability and Safety" approach addressed in this paper discusses both the environment and the technical process put in place to support the ARES I design. To address the integrated working environment, the ARES I project office has established a risk-based design group called the "Operability Design and Analysis" (OD&A) group. This group is intended to bring the engineering, design, and safety organizations together to optimize the system design for safety, reliability, and cost.
On the technical side, the ARES I project has, through the OD&A environment, implemented a probabilistic approach to analyze and evaluate design uncertainties and understand their impact on safety, reliability, and cost. This paper focuses on the various probabilistic approaches that have been pursued by the ARES I project. Specifically, the paper discusses an integrated functional probabilistic analysis approach that addresses upfront some key areas to support the ARES I Design Analysis Cycle (DAC) pre-Preliminary Design (PD) phase. This functional approach is a probabilistic physics-based approach that combines failure probabilities with system dynamics and engineering failure impact models to identify key system risk drivers and potential system design requirements. The paper also discusses other probabilistic risk assessment approaches planned by the ARES I project to support the PD phase and beyond.

  17. Study of fail-safe abort system for an actively cooled hypersonic aircraft, volume 2

    NASA Technical Reports Server (NTRS)

    Peeples, M. E.; Herring, R. L.

    1976-01-01

    Conceptual designs of a fail-safe abort system for hydrogen fueled actively cooled high speed aircraft are examined. The fail-safe concept depends on basically three factors: (1) a reliable method of detecting a failure or malfunction in the active cooling system, (2) the optimization of abort trajectories which minimize the descent heat load to the aircraft, and (3) fail-safe thermostructural concepts to minimize both the weight and the maximum temperature the structure will reach during descent. These factors are examined and promising approaches are evaluated based on weight, reliability, ease of manufacture and cost.

  18. Monitoring Hurricane Rita Inland Storm Surge: Chapter 7J in Science and the storms-the USGS response to the hurricanes of 2005

    USGS Publications Warehouse

    McGee, Benton D.; Tollett, Roland W.; Goree, Burl B.

    2007-01-01

    Pressure transducers (sensors) are accurate, reliable, and cost-effective tools to measure and record the magnitude, extent, and timing of hurricane storm surge. Sensors record storm-surge peaks more accurately and reliably than do high-water marks. Data collected by sensors may be used in storm-surge models to estimate when, where, and to what degree storm-surge flooding will occur during future storm-surge events, and to calibrate and verify storm-surge models, resulting in a better understanding of the dynamics of storm surge.

  19. Gear systems for advanced turboprops

    NASA Technical Reports Server (NTRS)

    Wagner, Douglas A.

    1987-01-01

    A new generation of transport aircraft will be powered by efficient, advanced turboprop propulsion systems. Systems that develop 5,000 to 15,000 horsepower have been studied. Reduction gearing for these advanced propulsion systems is discussed. Allison Gas Turbine Division's experience with the 5,000 horsepower reduction gearing for the T56 engine is reviewed and the impact of that experience on advanced gear systems is considered. The reliability needs for component design and development are also considered. Allison's experience and their research serve as a basis on which to characterize future gear systems that emphasize low cost and high reliability.

  20. Advanced propulsion engine assessment based on a cermet reactor

    NASA Technical Reports Server (NTRS)

    Parsley, Randy C.

    1993-01-01

    A preferred Pratt & Whitney conceptual Nuclear Thermal Rocket Engine (NTRE) has been designed based on the fundamental NASA priorities of safety, reliability, cost, and performance. The basic philosophy underlying the design of the XNR2000 is the utilization of the most reliable form of ultrahigh temperature nuclear fuel and development of a core configuration which is optimized for uniform power distribution, operational flexibility, power maneuverability, weight, and robustness. The P&W NTRE system employs a fast spectrum, cermet fueled reactor configured in an expander cycle to ensure maximum operational safety. The cermet fuel form provides retention of fuel and fission products as well as high strength. A high level of confidence is provided by benchmark analysis and independent evaluations.

  1. Nuclear electric propulsion mission engineering study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Results of a mission engineering analysis of nuclear-thermionic electric propulsion spacecraft for unmanned interplanetary and geocentric missions are summarized. Critical technologies associated with the development of nuclear electric propulsion (NEP) are assessed. Outer planet and comet rendezvous mission analysis, NEP stage design for geocentric and interplanetary missions, NEP system development cost and unit costs, and technology requirements for NEP stage development are studied. The NEP stage design provides both inherent reliability and high payload mass capability. The NEP stage and payload integration was found to be compatible with the space shuttle.

  2. Monolithic Microwave Integrated Circuits Based on GaAs Mesfet Technology

    NASA Astrophysics Data System (ADS)

    Bahl, Inder J.

    Advanced military microwave systems are demanding increased integration, reliability, radiation hardness, compact size and lower cost when produced in large volume, whereas the microwave commercial market, including wireless communications, mandates low cost circuits. Monolithic Microwave Integrated Circuit (MMIC) technology provides an economically viable approach to meeting these needs. In this paper the design considerations for several types of MMICs and their performance status are presented. Multifunction integrated circuits that advance the MMIC technology are described, including integrated microwave/digital functions and a highly integrated transceiver at C-band.

  3. Space transportation booster engine configuration study. Volume 2: Design definition document and environmental analysis

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The objective of the Space Transportation Booster Engine (STBE) Configuration Study was to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low-cost booster engine concepts for both expendable and reusable rocket engines. Specifically, the study aimed (1) to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.

  4. Cyberspace security system

    DOEpatents

    Abercrombie, Robert K; Sheldon, Frederick T; Ferragut, Erik M

    2014-06-24

    A system evaluates reliability, performance and/or safety by automatically assessing the targeted system's requirements. A cost metric quantifies the impact of failures as a function of failure cost per unit of time. The metrics or measurements may render real-time (or near real-time) outcomes by initiating active response against one or more high ranked threats. The system may support or may be executed in many domains including physical domains, cyber security domains, cyber-physical domains, infrastructure domains, etc. or any other domains that are subject to a threat or a loss.
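
    The cost metric described, the impact of failures expressed as failure cost per unit of time, can be read as a sum over threats of each threat's rate of occurrence times the cost it inflicts, with threats ranked by their contribution. This is an illustrative sketch of that reading, not the patented implementation; all threat names and figures are invented:

```python
# Hypothetical threats: name -> (events per year, cost per event in $).
threats = {
    "phishing":       (12.0,   4_000.0),
    "ransomware":     ( 0.5, 250_000.0),
    "insider_misuse": ( 2.0,  30_000.0),
}

def failure_cost_per_year(threats):
    """Expected failure cost per unit time: sum of rate * impact."""
    return sum(rate * cost for rate, cost in threats.values())

total = failure_cost_per_year(threats)
# Rank threats by expected cost contribution, highest first, so that
# active response can be initiated against the top-ranked threat.
ranked = sorted(threats, key=lambda t: threats[t][0] * threats[t][1],
                reverse=True)
print(total)      # → 233000.0
print(ranked[0])  # → ransomware
```

    Note how the ranking differs from a frequency-only view: the rarest threat dominates because its per-event cost is highest.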

  5. The NASA Lewis Research Center program in space solar cell research and technology. [efficient silicon solar cell development program

    NASA Technical Reports Server (NTRS)

    Brandhorst, H. W., Jr.

    1979-01-01

    Progress in space solar cell research and technology is reported. An 18 percent-AMO-efficient silicon solar cell, reduction in the radiation damage suffered by silicon solar cells in space, and high efficiency wrap-around contact and thin (50 micrometer) coplanar back contact silicon cells are among the topics discussed. Reduction in the cost of silicon cells for space use, cost effective GaAs solar cells, the feasibility of 30 percent AMO solar energy conversion, and reliable encapsulants for space blankets are also considered.

  6. Time-dependent Reliability of Dynamic Systems using Subset Simulation with Splitting over a Series of Correlated Time Intervals

    DTIC Science & Technology

    2013-08-01

    cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/STP method is presented in Section 4, including accuracy bounds and computational effort

  7. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at the parts and components level, such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), to assess risks, perform design tradeoffs, and thereby ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems, based on failure rate estimates published by United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages, when the system design is still in development and hard failure data is not yet available, or when manufacturers are not contractually obliged by their customers to publish reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.
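
    Standards-based prediction of the kind described (e.g., a MIL-HDBK-217-style parts count) sums handbook part failure rates, assumed constant, into a system rate λ, from which R(t) = e^(−λt) and MTBF = 1/λ follow. A minimal sketch with invented failure rates, not values from any handbook:

```python
import math

# Hypothetical part failure rates in failures per million hours (FPMH).
part_rates_fpmh = {
    "microcontroller": 0.050,
    "dc_dc_converter": 0.120,
    "connector":       0.010,
    "relay":           0.300,
}

# Parts-count method: the series-system rate is the sum of part rates.
lambda_system = sum(part_rates_fpmh.values()) / 1e6   # failures per hour
mtbf_hours = 1.0 / lambda_system

def reliability(t_hours):
    """Probability of surviving t hours under a constant failure rate."""
    return math.exp(-lambda_system * t_hours)

print(round(mtbf_hours))  # → 2083333
print(round(reliability(8760), 4))   # one year of continuous operation
```

    The summation assumes every part is reliability-critical (a series system); FMEA and FTA refine this by identifying which part failures actually propagate to system failure.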

  8. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  9. Revenue Sufficiency and Reliability in a Zero Marginal Cost Future: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frew, Bethany A.; Milligan, Michael; Brinkman, Greg

    Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about the suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.

  10. Photovoltaic power systems for rural areas of developing countries

    NASA Technical Reports Server (NTRS)

    Rosenblum, L.; Bifano, W. J.; Hein, G. F.; Ratajczak, A. F.

    1979-01-01

    Systems technology, reliability, and present and projected costs of photovoltaic systems are discussed using data derived from NASA, Lewis Research Center experience with photovoltaic systems deployed with a variety of users. Operating systems in two villages, one in Upper Volta and the other in southwestern Arizona are described. Energy cost comparisons are presented for photovoltaic systems versus alternative energy sources. Based on present system technology, reliability, and costs, photovoltaics provides a realistic energy option for developing nations.

  11. Wholesale electricity market design with increasing levels of renewable generation: Revenue sufficiency and long-term reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligan, Michael; Frew, Bethany A.; Bloom, Aaron

    This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. We explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.

  12. System principles, mathematical models and methods to ensure high reliability of safety systems

    NASA Astrophysics Data System (ADS)

    Zaslavskyi, V.

    2017-04-01

    Modern safety and security systems are composed of a large number of components designed for detection, localization, tracking, collection, and processing of information from monitoring, telemetry, and control systems. They are required to be highly reliable with a view to correctly performing data aggregation, processing, and analysis for subsequent decision-making support. In the design and construction phases of manufacturing such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure highly reliable signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the available component types and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and eliminates common cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
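
    The type-variety principle can be made concrete: for a redundant detector group that fails only if every member fails, diversity also shrinks the common-cause term that identical redundancy leaves exposed. A toy sketch comparing identical vs. diverse redundancy; the reliability figures and the single-parameter common-cause model are illustrative assumptions, not the paper's models:

```python
def group_reliability(member_reliabilities, common_cause_failure_prob=0.0):
    """1-out-of-n redundant group: it fails if all members fail
    independently, or if one common-cause event takes out the group."""
    p_all_independent_fail = 1.0
    for r in member_reliabilities:
        p_all_independent_fail *= (1.0 - r)
    return (1.0 - common_cause_failure_prob) * (1.0 - p_all_independent_fail)

# Two identical detectors share a common-cause vulnerability...
identical = group_reliability([0.99, 0.99], common_cause_failure_prob=0.005)
# ...while two diverse detector types largely eliminate it,
# even though the second type is individually less reliable.
diverse = group_reliability([0.99, 0.95], common_cause_failure_prob=0.0005)

print(round(identical, 5))  # → 0.9949
print(round(diverse, 5))    # → 0.999
```

    Despite the weaker second component, the diverse pair wins because the common-cause term, not the independent-failure term, dominates at these reliability levels.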

  13. Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers

    NASA Technical Reports Server (NTRS)

    Kenny, Sean (Technical Monitor); Wertz, Julie

    2002-01-01

    As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.
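
    The central question above, whether to spend on redundancy or on improving a component's reliability, can be posed quantitatively. A toy sketch comparing the two options for the weak component of a three-component series system; all reliabilities and the assumed equal-budget tradeoff are invented for illustration:

```python
def series_reliability(rs):
    """A series system works only if every component works."""
    r = 1.0
    for ri in rs:
        r *= ri
    return r

def parallel_pair(r):
    """Two redundant copies; the pair fails only if both fail."""
    return 1.0 - (1.0 - r) ** 2

baseline = [0.99, 0.90, 0.99]   # component 2 is the weak link

# Option A: spend the budget duplicating the weak component.
option_redundancy = [0.99, parallel_pair(0.90), 0.99]
# Option B: spend the same budget improving it from 0.90 to 0.97 (assumed).
option_improve = [0.99, 0.97, 0.99]

print(round(series_reliability(option_redundancy), 4))  # → 0.9703
print(round(series_reliability(option_improve), 4))     # → 0.9507
```

    In this instance redundancy wins, but with a different cost-to-improvement curve the answer flips, which is why the research frames the problem as an optimization over mixed allocations rather than a fixed rule.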

  14. Integration of RAMS in LCC analysis for linear transport infrastructures. A case study for railways.

    NASA Astrophysics Data System (ADS)

    Calle-Cordón, Álvaro; Jiménez-Redondo, Noemi; Morales-Gámiz, F. J.; García-Villena, F. A.; Garmabaki, Amir H. S.; Odelius, Johan

    2017-09-01

    Life-cycle cost (LCC) analysis is an economic technique used to assess the total costs associated with the lifetime of a system in order to support decision making in long term strategic planning. For complex systems, such as railway and road infrastructures, the cost of maintenance plays an important role in the LCC analysis. Costs associated with maintenance interventions can be more reliably estimated by integrating the probabilistic nature of the failures associated to these interventions in the LCC models. Reliability, Maintainability, Availability and Safety (RAMS) parameters describe the maintenance needs of an asset in a quantitative way by using probabilistic information extracted from registered maintenance activities. Therefore, the integration of RAMS in the LCC analysis allows obtaining reliable predictions of system maintenance costs and the dependencies of these costs with specific cost drivers through sensitivity analyses. This paper presents an innovative approach for a combined RAMS & LCC methodology for railway and road transport infrastructures being developed under the on-going H2020 project INFRALERT. Such RAMS & LCC analysis provides relevant probabilistic information to be used for condition and risk-based planning of maintenance activities as well as for decision support in long term strategic investment planning.
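As a toy illustration of the RAMS-to-LCC coupling the abstract describes (all parameter values below are assumptions, not INFRALERT data), a constant failure rate estimated from maintenance records can drive the expected number of corrective interventions, whose cost is then discounted over the asset life:

```python
# Hedged sketch: a RAMS reliability parameter (constant failure rate,
# e.g. estimated from registered maintenance activities) feeding an LCC
# estimate of corrective maintenance. All values are illustrative.
FAILURES_PER_YEAR = 0.2    # failure rate from maintenance records (assumed)
REPAIR_COST = 5000.0       # mean cost per corrective intervention (assumed)
HORIZON_YEARS = 30         # asset service life (assumed)
DISCOUNT_RATE = 0.04       # real discount rate (assumed)

corrective_lcc = sum(
    FAILURES_PER_YEAR * REPAIR_COST / (1 + DISCOUNT_RATE) ** t
    for t in range(1, HORIZON_YEARS + 1)
)
print(f"expected discounted corrective-maintenance cost: {corrective_lcc:,.0f}")
```

A sensitivity analysis in this framing is just a sweep over the failure rate or discount rate, showing how strongly each cost driver moves the total.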

  15. The Iridium (tm) system: Personal communications anytime, anyplace

    NASA Technical Reports Server (NTRS)

    Hatlelid, John E.; Casey, Larry

    1993-01-01

    The Iridium system is designed to provide handheld personal communications between diverse locations around the world at any time and without prior knowledge of the location of the personal units. This paper provides an overview of the system, the services it provides, its operation, and an overview of the commercial practices and relatively high-volume satellite production techniques which will make the system cost effective. A constellation of 66 satellites will provide an orbiting, spherical-shell infrastructure for this global calling capability. The satellites act as tall cellular towers and allow convenient operation for portable handheld telephones. The system will provide a full range of services including voice, paging, data, geolocation, and fax capabilities. Motorola is a world leader in the production of high-volume, high-quality, reliable telecommunications hardware. One of Iridium's goals is to apply these production techniques to high-reliability space hardware. Concurrent engineering, high-performance work teams, advanced manufacturing technologies, and improved assembly and test methods are some of the techniques that will keep the Iridium system cost effective. Mobile, global, flexible personal communications are coming that will allow anyone to place or receive a call anyplace at any time. The Iridium system will provide communications where none exist today. This connectivity will allow increased information transfer, open new markets for various business endeavors, and in general increase productivity and development.

  16. The Iridium (tm) system: Personal communications anytime, anyplace

    NASA Astrophysics Data System (ADS)

    Hatlelid, John E.; Casey, Larry

    The Iridium system is designed to provide handheld personal communications between diverse locations around the world at any time and without prior knowledge of the location of the personal units. This paper provides an overview of the system, the services it provides, its operation, and an overview of the commercial practices and relatively high-volume satellite production techniques which will make the system cost effective. A constellation of 66 satellites will provide an orbiting, spherical-shell infrastructure for this global calling capability. The satellites act as tall cellular towers and allow convenient operation for portable handheld telephones. The system will provide a full range of services including voice, paging, data, geolocation, and fax capabilities. Motorola is a world leader in the production of high-volume, high-quality, reliable telecommunications hardware. One of Iridium's goals is to apply these production techniques to high-reliability space hardware. Concurrent engineering, high-performance work teams, advanced manufacturing technologies, and improved assembly and test methods are some of the techniques that will keep the Iridium system cost effective. Mobile, global, flexible personal communications are coming that will allow anyone to place or receive a call anyplace at any time. The Iridium system will provide communications where none exist today. This connectivity will allow increased information transfer, open new markets for various business endeavors, and in general increase productivity and development.

  17. An economic analysis of a commercial approach to the design and fabrication of a space power system

    NASA Technical Reports Server (NTRS)

    Putney, Z.; Been, J. F.

    1979-01-01

    A commercial approach to the design and fabrication of an economical space power system is presented. Cost reductions are projected through the conceptual design of a 2 kW space power system built with the capability for having serviceability. The approach to system costing that is used takes into account both the constraints of operation in space and commercial production engineering approaches. The cost of this power system reflects a variety of cost/benefit tradeoffs that would reduce system cost as a function of system reliability requirements, complexity, and the impact of rigid specifications. A breakdown of the system design, documentation, fabrication, and reliability and quality assurance cost estimates are detailed.

  18. Business Cases for Microgrids: Modeling Interactions of Technology Choice, Reliability, Cost, and Benefit

    NASA Astrophysics Data System (ADS)

    Hanna, Ryan

    Distributed energy resources (DERs), and increasingly microgrids, are becoming an integral part of modern distribution systems. Interest in microgrids--which are insular and autonomous power networks embedded within the bulk grid--stems largely from the vast array of flexibilities and benefits they can offer stakeholders. Managed well, they can improve grid reliability and resiliency, increase end-use energy efficiency by coupling electric and thermal loads, reduce transmission losses by generating power locally, and may reduce system-wide emissions, among many others. Whether these public benefits are realized, however, depends on whether private firms see a "business case", or private value, in investing. To this end, firms need models that evaluate costs, benefits, risks, and assumptions that underlie decisions to invest. The objectives of this dissertation are to assess the business case for microgrids that provide what industry analysts forecast as two primary drivers of market growth--that of providing energy services (similar to an electric utility) as well as reliability service to customers within. Prototypical first adopters are modeled--using an existing model to analyze energy services and a new model that couples that analysis with one of reliability--to explore interactions between technology choice, reliability, costs, and benefits. The new model has a bi-level hierarchy; it uses heuristic optimization to select and size DERs and analytical optimization to schedule them. It further embeds Monte Carlo simulation to evaluate reliability as well as regression models for customer damage functions to monetize reliability. It provides least-cost microgrid configurations for utility customers who seek to reduce interruption and operating costs. Lastly, the model is used to explore the impact of such adoption on system-wide greenhouse gas emissions in California. Results indicate that there are, at present, co-benefits for emissions reductions when customers adopt and operate microgrids for private benefit, though future analysis is needed as the bulk grid continues to transition toward a less carbon intensive system.
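The reliability-monetization step can be sketched with a toy Monte Carlo simulation (every parameter below is an illustrative assumption, not a value from the dissertation): sample bulk-grid interruptions, let the microgrid island with some success probability, and price unserved hours with a customer damage function.

```python
import random

# Toy Monte Carlo estimate of expected annual interruption cost for a
# microgrid-backed load. All parameters are illustrative assumptions.
random.seed(1)

OUTAGES_PER_YEAR = 1.6     # bulk-grid interruption frequency (assumed)
MEAN_DURATION_H = 2.5      # mean interruption duration in hours (assumed)
ISLANDING_SUCCESS = 0.92   # probability the microgrid picks up the load (assumed)
COST_PER_HOUR = 800.0      # customer damage function, $ per unserved hour (assumed)

def one_year_cost():
    # approximate a Poisson arrival process with 100 Bernoulli trials
    outages = sum(1 for _ in range(100) if random.random() < OUTAGES_PER_YEAR / 100)
    cost = 0.0
    for _ in range(outages):
        duration = random.expovariate(1.0 / MEAN_DURATION_H)
        if random.random() > ISLANDING_SUCCESS:   # microgrid failed to island
            cost += duration * COST_PER_HOUR
    return cost

YEARS = 20000
expected_cost = sum(one_year_cost() for _ in range(YEARS)) / YEARS
print(f"expected annual interruption cost: ${expected_cost:,.0f}")
```

Comparing this expected cost with and without the microgrid (and against its capital cost) is the essence of the reliability side of the business case.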

  19. 2017 NREL Photovoltaic Reliability Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.

  20. Improving quality of laser scanning data acquisition through calibrated amplitude and pulse deviation measurement

    NASA Astrophysics Data System (ADS)

    Pfennigbauer, Martin; Ullrich, Andreas

    2010-04-01

    The newest developments in laser scanner technologies put surveyors in a position to comply with the ever-increasing demand for high-speed, high-accuracy, and highly reliable data acquisition from terrestrial, mobile, and airborne platforms. Echo digitization in pulsed time-of-flight laser ranging has demonstrated its superior performance in the fields of bathymetry and airborne laser scanning for more than a decade, however at the cost of somewhat time-consuming offline post-processing. State-of-the-art online waveform processing as implemented in RIEGL's V-Line not only saves users post-processing time in obtaining true 3D point clouds; it also adds the assets of calibrated amplitude and reflectance measurement for data classification and pulse deviation determination for effective and reliable data validation. We present results from data acquisitions in different complex target situations.

  1. Low-grade geothermal energy conversion by organic Rankine cycle turbine generator

    NASA Astrophysics Data System (ADS)

    Zarling, J. P.; Aspnes, J. D.

    Results of a demonstration project which helped determine the feasibility of converting low-grade thermal energy in 49 C water into electrical energy via an organic Rankine cycle 2500 watt (electrical) turbine-generator are presented. The geothermal source which supplied the water is located in a rural Alaskan village. The reasons an organic Rankine cycle turbine-generator was investigated as a possible source of electric power in rural Alaska are: (1) high cost of operating diesel-electric units and their poor long-term reliability when high-quality maintenance is unavailable and (2) the extremely high level of long-term reliability reportedly attained by commercially available organic Rankine cycle turbines. Data is provided on the thermal and electrical operating characteristics of an experimental organic Rankine cycle turbine-generator operating at a uniquely low vaporizer temperature.

  2. The case against one-shot testing for initial dental licensure.

    PubMed

    Chambers, David W; Dugoni, Arthur A; Paisley, Ian

    2004-03-01

    High-stakes tests are expected to meet standards for cost-effectiveness, fairness, transparency, high reliability, and high validity. It is questionable whether initial licensure examinations in dentistry meet such standards. Decades of piecemeal adjustments in the system have resulted in limited improvement. The essential flaw in the system is reliance on a one-shot sample of a small segment of the skills, understanding, and supporting values needed for today's professional practice of dentistry. The "snapshot" approach to testing produces inherently substandard levels of reliability and validity. A three-step alternative is proposed: boards should (1) define the competencies required of beginning practitioners, (2) establish the psychometric standards needed to make defensible judgments about candidates, and (3) base licensure decisions only on portfolios of evidence that test for defined competencies at established levels of quality.

  3. Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Redd, L.

    1985-01-01

    Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to the main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions per overhaul, failures per mission, and EVA and IVA cost. In conclusion, two or three engines are recommended because they offer the highest reliability, minimum life-cycle cost, and fail-operational/fail-safe capability.

  4. Repurposing a Benchtop Centrifuge for High-Throughput Single-Molecule Force Spectroscopy.

    PubMed

    Yang, Darren; Wong, Wesley P

    2018-01-01

    We present high-throughput single-molecule manipulation using a benchtop centrifuge, overcoming limitations common in other single-molecule approaches such as high cost, low throughput, technical difficulty, and strict infrastructure requirements. An inexpensive and compact Centrifuge Force Microscope (CFM) adapted to a commercial centrifuge enables use by nonspecialists, and integration with DNA nanoswitches facilitates both reliable measurements and repeated molecular interrogation. Here, we provide detailed protocols for constructing the CFM, creating DNA nanoswitch samples, and carrying out single-molecule force measurements.

  5. State-of-the-Art for Small Satellite Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Parker, Khary I.

    2016-01-01

    SmallSats provide low-cost access to space, with an increasing need for propulsion systems. NASA and other organizations will be using SmallSats that require propulsion systems to: a) conduct high-quality near- and far-reaching on-orbit research and b) perform technology demonstrations. There is an increasing call for high-reliability, high-performing SmallSat components. Many SmallSat propulsion technologies are currently under development: a) systems at various levels of maturity and b) a wide variety of systems for many mission applications.

  6. NREL to Host Ninth Annual PV Reliability Workshop | News | NREL

    Science.gov Websites

    NREL will host the Ninth Annual PV Reliability Workshop, where attendees share research leading to more durable and reliable PV modules, thus reducing the cost of solar. Participants presented their results during a poster session at the 2017 PV Reliability Workshop.

  7. Communicate or pay the price of silence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derry, F.E.

    The electric utility industry's efforts to communicate with its customers through advertising, while highly criticized by consumer interest and other groups, are an important link in providing information that is in the public interest and which the industry has the right and obligation to provide. Advertising represents an efficient and economical way to share information and increase public understanding of the factors affecting utility reliability and cost. Surveys of utility customers show that they want an accounting of what the utility does with its money and consider advertising an appropriate vehicle. By pinpointing cost-related issues, advertising also helps to market programs that will reduce utility costs, such as off-peak energy use.

  8. Chapter 15: Reliability of Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Shuangwen; O'Connor, Ryan

    The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals for wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.

  9. Integrated Model-Based Controls and PHM for Improving Turbine Engine Performance, Reliability, and Cost

    DTIC Science & Technology

    2009-09-01

    capable of surviving the high-temperature, high-vibration environment of a jet engine. Active control spans active surge/stall control and three...other closely related areas, viz., active combustion control (references 21-22), active noise control, and active vibration control. All of these are...self-powered sensors that harvest energy from engine heat or vibrations replace sensors that require power. The long-term vision is one of a

  10. dc-plasma-sprayed electronic-tube device

    DOEpatents

    Meek, T.T.

    1982-01-29

    An electronic tube and associated circuitry produced by dc plasma arc spraying techniques is described. The process is carried out in a single automated step whereby both active and passive devices are produced at very low cost. The circuitry is extremely reliable and is capable of functioning in both high-radiation and high-temperature environments. The electronic tubes produced are more than an order of magnitude smaller than conventional electronic tubes.

  11. Instructions for Plastic Encapsulated Microcircuit(PEM) Selection, Screening and Qualification.

    NASA Technical Reports Server (NTRS)

    King, Terry; Teverovsky, Alexander; Leidecker, Henning

    2002-01-01

    The use of Plastic Encapsulated Microcircuits (PEMs) is permitted on NASA Goddard Space Flight Center (GSFC) spaceflight applications, provided each use is thoroughly evaluated for thermal, mechanical, and radiation implications of the specific application and found to meet mission requirements. PEMs shall be selected for their functional advantage and availability, not for cost saving; the steps necessary to ensure reliability usually negate any initial apparent cost advantage. A PEM shall not be substituted for a form, fit and functional equivalent, high reliability, hermetic device in spaceflight applications. Due to the rapid change in wafer-level designs typical of commercial parts and the unknown traceability between packaging lots and wafer lots, lot specific testing is required for PEMs, unless specifically excepted by the Mission Assurance Requirements (MAR) for the project. Lot specific qualification, screening, radiation hardness assurance analysis and/or testing, shall be consistent with the required reliability level as defined in the MAR. Developers proposing to use PEMs shall address the following items in their Performance Assurance Implementation Plan: source selection (manufacturers and distributors), storage conditions for all stages of use, packing, shipping and handling, electrostatic discharge (ESD), screening and qualification testing, derating, radiation hardness assurance, test house selection and control, data collection and retention.

  12. Screening the High-Risk Newborn for Hearing Loss: The Crib-O-Gram v the Auditory Brainstem Response.

    ERIC Educational Resources Information Center

    Cox, L. Clarke

    1988-01-01

    Presented are a rationale for identifying hearing loss in infancy and a history of screening procedures. The Crib-O-Gram and auditory brainstem response (ABR) tests are evaluated for reliability, validity, and cost-effectiveness. The ABR is recommended, and fully automated ABR instrumentation, which lowers expenses for trained personnel and…

  13. Framework for a National Testing and Evaluation Program Based Upon the National Stormwater Testing and Evaluation for Products and Practices (STEPP) Initiative (WERF Report INFR2R14)

    EPA Science Inventory

    Abstract: The National STEPP Program seeks to improve water quality by accelerating the effective implementation and adoption of innovative stormwater management technologies. It will attempt to accomplish this by establishing practices through highly reliable and cost-effective S...

  14. Monitoring visitor use in backcountry and wilderness: a review of methods

    Treesearch

    Steven J. Hollenhorst; Steven A. Whisman; Alan W. Ewert

    1992-01-01

    Obtaining accurate and usable visitor counts in backcountry and wilderness settings continues to be problematic for resource managers because use of these areas is dispersed and costs can be prohibitively high. An overview of the available methods for obtaining reliable data on recreation use levels is provided. Monitoring methods were compared and selection criteria...

  15. Motivation for Knowledge Sharing by Expert Participants in Company-Hosted Online User Communities

    ERIC Educational Resources Information Center

    Cheng, Jingli

    2014-01-01

    Company-hosted online user communities are increasingly popular as firms continue to search for ways to provide their customers with high quality and reliable support in a low cost and scalable way. Yet, empirical understanding of motivations for knowledge sharing in this type of online communities is lacking, especially with regard to an…

  16. Reducing maintenance costs in agreement with CNC machine tools reliability

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.; Butunoi, P. A.

    2016-08-01

    Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.

  17. Loss of Load Probability Calculation for West Java Power System with Nuclear Power Plant Scenario

    NASA Astrophysics Data System (ADS)

    Azizah, I. D.; Abdullah, A. G.; Purnama, W.; Nandiyanto, A. B. D.; Shafii, M. A.

    2017-03-01

    The Loss of Load Probability (LOLP) index indicates the quality and performance of an electrical system. The LOLP value is affected by load growth, the load duration curve, the forced outage rate of the plants, and the number and capacity of generating units. This reliability index calculation begins with load forecasting to 2018 using a multiple regression method. Scenario 1, with a composition of conventional plants, produces the largest LOLP, 71.609 days/year in 2017, while the best reliability index, 6.941 days/year in 2015, is generated in scenario 2 with the NPP. Improving system reliability with nuclear power is more efficient than with conventional plants because nuclear power also offers advantages such as emission-free operation, inexpensive fuel costs, and a high level of plant availability.
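A minimal version of the LOLP calculation the abstract refers to can be sketched by enumerating generator outage states and accumulating the probability that available capacity falls short of each day's peak. The unit data and load profile below are invented for illustration, not the West Java system:

```python
from itertools import product

# Toy loss-of-load-probability (LOLP) calculation. Unit capacities,
# forced outage rates, and daily peaks are illustrative assumptions.
units = [(200, 0.05), (200, 0.05), (300, 0.08)]      # (MW, forced outage rate)
daily_peaks = [450] * 200 + [550] * 100 + [620] * 65  # MW peak for each of 365 days

lolp_days = 0.0
for peak in daily_peaks:
    p_shortfall = 0.0
    for state in product([0, 1], repeat=len(units)):  # 1 = unit on forced outage
        p = 1.0
        cap = 0.0
        for (mw, fo), out in zip(units, state):
            p *= fo if out else (1 - fo)
            cap += 0 if out else mw
        if cap < peak:                                # capacity cannot cover the peak
            p_shortfall += p
    lolp_days += p_shortfall

print(f"{lolp_days:.2f} expected loss-of-load days per year")
```

Real studies replace the brute-force enumeration with a capacity outage probability table, but the accounting is the same: LOLP in days/year is the load-weighted probability of a capacity shortfall.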

  18. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
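The parity code mentioned above is easy to demonstrate: the parity block is the bytewise XOR of the data blocks, so any single, self-identified failed block can be rebuilt from the survivors. A small sketch (not the dissertation's code):

```python
# XOR parity as used in RAID: parity = XOR of all data blocks, so one
# missing block equals the XOR of the remaining blocks plus parity.
def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"disk0dat", b"disk1dat", b"disk2dat"]   # illustrative 8-byte "disks"
parity = xor_blocks(data)

# disk 1 fails (self-identifying); rebuild it from the other disks + parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, the XOR of all data blocks together with the parity block is always zero, which is what makes single-failure reconstruction work.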

  19. State and location dependence of action potential metabolic cost in cortical pyramidal neurons.

    PubMed

    Hallermann, Stefan; de Kock, Christiaan P J; Stuart, Greg J; Kole, Maarten H P

    2012-06-03

    Action potential generation and conduction requires large quantities of energy to restore Na(+) and K(+) ion gradients. We investigated the subcellular location and voltage dependence of this metabolic cost in rat neocortical pyramidal neurons. Using Na(+)/K(+) charge overlap as a measure of action potential energy efficiency, we found that action potential initiation in the axon initial segment (AIS) and forward propagation into the axon were energetically inefficient, depending on the resting membrane potential. In contrast, action potential backpropagation into dendrites was efficient. Computer simulations predicted that, although the AIS and nodes of Ranvier had the highest metabolic cost per membrane area, action potential backpropagation into the dendrites and forward propagation into axon collaterals dominated energy consumption in cortical pyramidal neurons. Finally, we found that the high metabolic cost of action potential initiation and propagation down the axon is a trade-off between energy minimization and maximization of the conduction reliability of high-frequency action potentials.

  20. Photovoltaic Module Reliability Workshop 2011: February 16-17, 2011

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  1. Photovoltaic Module Reliability Workshop 2014: February 25-26, 2014

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2014-02-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  2. Photovoltaic Module Reliability Workshop 2013: February 26-27, 2013

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-10-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  3. Photovoltaic Module Reliability Workshop 2010: February 18-19, 2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, J.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  4. 2016 NREL Photovoltaic Module Reliability Workshop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology - both critical goals for moving PV technologies deeper into the electricity marketplace.

  5. 2015 NREL Photovoltaic Module Reliability Workshops

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, Sarah

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  6. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most essential systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the authors' experience in developing and installing a compact, low-cost solution based on open-source technologies at the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface that provides easy access to a large amount of imaging data. The system has been in active use for almost 2 years and has proven to be a scalable, cost-effective solution for practices ranging from small to very large, where different hardware combinations allow scaling to different deployments, and the use of paravirtualization allows increased security and easy migrations and upgrades.

  7. Effectiveness of back-to-back testing

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.

    1987-01-01

    Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. Implications of this finding are that multiversion software development is a feasible and cost-effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
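The back-to-back procedure itself is simple to sketch: run functionally equivalent versions on the same inputs and flag any disagreement as a failure signal. The three "versions" below are hypothetical stand-ins, one with a deliberately seeded fault:

```python
# Back-to-back testing sketch with three hypothetical equivalent versions.
def v1(x):
    return x * x

def v2(x):
    return x ** 2

def v3(x):
    # seeded fault: wrong sign for negative inputs
    return x * x if x >= 0 else -(x * x)

def back_to_back(versions, inputs):
    """Run every version on every input; any disagreement is a failure signal."""
    discrepancies = []
    for x in inputs:
        outputs = [v(x) for v in versions]
        if len(set(outputs)) > 1:
            discrepancies.append((x, outputs))
    return discrepancies

found = back_to_back([v1, v2, v3], range(-3, 4))
print(found)
```

As the abstract notes, this only detects faults whose failures are not shared by all versions; correlated faults produce identical wrong outputs and slip through, which is why early elimination of correlated faults matters.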

  8. High-Temperature High-Power Packaging Techniques for HEV Traction Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlow, F.D.; Elshabini, A.

    A key issue associated with the wider adoption of hybrid-electric vehicles (HEV) and plug-in hybrid-electric vehicles (PHEV) is the implementation of the power electronic systems that are required in these products [1]. To date, many consumers find the adoption of these technologies problematic based on a financial analysis of the initial cost versus the savings available from reduced fuel consumption. Therefore, one of the primary industry goals is the reduction in the price of these vehicles relative to the cost of traditional gasoline-powered vehicles. Part of this cost reduction must come through optimization of the power electronics required by these vehicles. In addition, the efficiency of these systems must be optimized in order to provide the greatest range possible. For some drivers, any reduction in the range of a potential HEV or PHEV in comparison to a gasoline-powered vehicle represents a significant barrier to adoption, and the efficiency of the power electronics plays an important role in this range. High efficiency is also important because lost power further complicates the thermal management of these systems. Reliability is an equally important concern, since most drivers have a high level of comfort with gasoline-powered vehicles and are somewhat reluctant to switch to a less proven technology. Reliability problems in the power electronics or associated components could not only cause a high warranty cost to the manufacturer, but may also taint these technologies in the consumer's eyes. A larger vehicle offering in HEVs is another important consideration from a power electronics point of view: a larger vehicle will need more horsepower, and hence a larger-rated drive, which in some ways will be more difficult to implement from a cost and size point of view. Both the packaging of these modules and the thermal management of these systems at competitive price points create significant challenges.
One way in which significant cost reduction of these systems could be achieved is through the use of a single coolant loop for both the power electronics and the internal combustion engine (ICE) [2]. This change would reduce the cooling system from the two loops it currently relies on to a single loop [3]. However, the current nominal coolant temperature entering these inverters is 65 °C [3], whereas a normal ICE coolant temperature is much higher, at approximately 100 °C. This change in coolant temperature significantly increases the junction temperatures of the devices and creates a number of challenges for both device fabrication and the assembly of these devices into inverters and converters for HEV and PHEV applications. With this change in mind, significant progress has been made on the use of SiC devices for inverters that can withstand much higher junction temperatures than traditional Si-based inverters [4,5,6]. However, a key problem with the single coolant loop and high-temperature devices is the effective packaging of these devices and related components into a high-temperature inverter. The elevated junction temperatures that exist in these modules are not compatible with reliable inverters based on existing packaging technology. This report provides a literature survey of high-temperature packaging and highlights the issues related to the implementation of high-temperature power electronic modules for HEV and PHEV applications. For purposes of discussion, it is assumed in this report that 200 °C is the targeted maximum junction temperature.

  9. The Role of Demand Response in Reducing Water-Related Power Plant Vulnerabilities

    NASA Astrophysics Data System (ADS)

    Macknick, J.; Brinkman, G.; Zhou, E.; O'Connell, M.; Newmark, R. L.; Miara, A.; Cohen, S. M.

    2015-12-01

    The electric sector depends on readily available water supplies for reliable and efficient operation. Elevated water temperatures or low water levels can trigger regulatory or plant-level decisions to curtail power generation, which can affect system cost and reliability. In the past decade, dozens of power plants in the U.S. have curtailed generation due to water temperatures and water shortages. Curtailments occur during the summer, when temperatures are highest and there is greatest demand for electricity. Climate change could alter the availability and temperature of water resources, exacerbating these issues. Constructing alternative cooling systems to address vulnerabilities can be capital intensive and can also affect power plant efficiencies. Demand response programs are being implemented by electric system planners and operators to reduce and shift electricity demands from peak usage periods to other times of the day. Demand response programs can also play a role in reducing water-related power sector vulnerabilities during summer months. Traditionally, production cost modeling and demand response analyses do not include water resources. In this effort, we integrate an electricity production cost modeling framework with water-related impacts on power plants in a test system to evaluate the impacts of demand response measures on power system costs and reliability. Specifically, we i) quantify the cost and reliability implications of incorporating water resources into production cost modeling, ii) evaluate the impacts of demand response measures on reducing system costs and vulnerabilities, and iii) consider sensitivity analyses with cooling systems to highlight a range of potential benefits of demand response measures. Impacts from climate change on power plant performance and water resources are discussed. Results provide key insights to policymakers and practitioners for reducing water-related power plant vulnerabilities via lower cost methods.

  10. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing of the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  11. The cost-effectiveness of mandatory 20 mph zones for the prevention of injuries.

    PubMed

    Peters, Jaime L; Anderson, Rob

    2013-03-01

    Traffic calming and speed limits are major public health strategies for further reducing road injuries, especially among vulnerable pedestrians such as children and the elderly. We conducted a cost-benefit analysis (CBA, favoured by transport economists) alongside a cost-utility analysis (CUA, favoured by health economists) of mandatory 20 mph zones, providing a unique opportunity to compare assumptions and results. The CUA was performed from the public sector perspective and the CBA from a broader societal perspective; one-way, threshold and probabilistic sensitivity analyses were undertaken. In low casualty areas the intervention was not cost-effective regardless of approach (CUA: cost per quality-adjusted life year (QALY) = £429 800; CBA: net present value = -£25 500). In high casualty areas, the intervention was cost-effective from the CBA (a saving of £90 600), but not from the CUA (cost per QALY = £86 500, assuming the National Institute for Health and Clinical Excellence's benchmark for approving health technologies). Mandatory 20 mph zones may therefore be cost-effective in high casualty areas when a CBA from a societal perspective is considered. Although CBA may appear, in principle, more appropriate, the quality, age or absence of reliable data for many parameters means that there is a great deal of uncertainty, and the results should be interpreted with caution.
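    The two summary metrics the study compares reduce to simple arithmetic. The sketch below uses illustrative figures chosen only to reproduce the abstract's headline cost-per-QALY, not the study's underlying data; the £20 000-£30 000 per QALY range is NICE's commonly cited approval benchmark:

    ```python
    def cost_per_qaly(incremental_cost, qalys_gained):
        """Cost-utility analysis (CUA) summary: incremental cost per QALY gained."""
        return incremental_cost / qalys_gained

    def net_present_value(discounted_benefits, discounted_costs):
        """Cost-benefit analysis (CBA) summary: a positive NPV favours the intervention."""
        return discounted_benefits - discounted_costs

    # Illustrative: an intervention costing an extra 1.73M pounds that gains 20 QALYs.
    icer = cost_per_qaly(1_730_000, 20)        # 86500 pounds per QALY
    NICE_UPPER_THRESHOLD = 30_000              # upper end of the commonly cited range
    print(icer, icer <= NICE_UPPER_THRESHOLD)  # far above threshold: not cost-effective by CUA
    ```

    The divergence in the abstract arises because the CBA counts broader societal benefits (e.g. avoided casualty costs) that the narrower public-sector CUA excludes.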

  12. 2nd Generation Reusable Launch Vehicle (2G RLV). Revised

    NASA Technical Reports Server (NTRS)

    Matlock, Steve; Sides, Steve; Kmiec, Tom; Arbogast, Tim; Mayers, Tom; Doehnert, Bill

    2001-01-01

    This revised final report addresses all of the work performed on this program. Specifically, it covers the vehicle architecture background, the definition of six baseline engine cycles, the reliability baseline (Space Shuttle Main Engine QRAS), component-level reliability/performance/cost for the six baseline cycles, and the selection of three cycles for further study. The report further addresses technology improvement selection and component-level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans and recommendations for future studies.

  13. Low-cost Photoacoustic-based Measurement System for Carbon Dioxide Fluxes with the Potential for large-scale Monitoring

    NASA Astrophysics Data System (ADS)

    Scholz, L. T.; Bierer, B.; Ortiz Perez, A.; Woellenstein, J.; Sachs, T.; Palzer, S.

    2016-12-01

    The determination of carbon dioxide (CO2) fluxes between ecosystems and the atmosphere is crucial for understanding ecological processes on regional and global scales. High-quality data sets with full uncertainty estimates are needed to evaluate model simulations. However, current flux monitoring techniques cannot provide reliable data over a large area at both a detailed level and an appropriate resolution, ideally in combination with a high sampling rate. Currently used sensing technologies, such as non-dispersive infrared (NDIR) gas analyzers, cannot be deployed in large numbers to provide high spatial resolution because of their cost and complex maintenance requirements. Here, we propose a novel CO2 measurement system whose gas sensing unit is made up solely of low-cost, low-power components, such as an IR-LED and a photoacoustic detector. The sensor offers a resolution of < 50 ppm in the concentration range of interest (up to 5000 ppm) and a nearly linear, fast response of just a few seconds. Since the sensor can be applied in situ without special precautions, it allows for non-invasive environmental monitoring. Its low energy consumption enables long-term measurements, and the low overall cost favors manufacturing in large quantities. This allows multiple sensors to be operated at a reasonable price, providing concentration measurements at any desired spatial coverage and high temporal resolution. With an appropriate 3D configuration of the units, vertical and horizontal fluxes can be determined. By deploying a closely meshed wireless sensor network, inhomogeneities as well as CO2 sources and sinks in the lower atmosphere can be monitored. In combination with sensors for temperature, pressure and humidity, our sensor paves the way towards reliable and extensive monitoring of ecosystem-atmosphere exchange rates. The technique can also be easily adapted to other relevant greenhouse gases.

  14. The evolution of index signals to avoid the cost of dishonesty.

    PubMed

    Biernaskie, Jay M; Grafen, Alan; Perry, Jennifer C

    2014-09-07

    Animals often convey useful information, despite a conflict of interest between the signaller and receiver. There are two major explanations for such 'honest' signalling, particularly when the size or intensity of signals reliably indicates the underlying quality of the signaller. Costly signalling theory (including the handicap principle) predicts that dishonest signals are too costly to fake, whereas the index hypothesis predicts that dishonest signals cannot be faked. Recent evidence of a highly conserved causal link between individual quality and signal growth appears to bolster the index hypothesis. However, it is not clear that this also diminishes costly signalling theory, as is often suggested. Here, by incorporating a mechanism of signal growth into costly signalling theory, we show that index signals can actually be favoured owing to the cost of dishonesty. We conclude that costly signalling theory provides the ultimate, adaptive rationale for honest signalling, whereas the index hypothesis describes one proximate (and potentially very general) mechanism for achieving honesty. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  16. Business of reliability

    NASA Astrophysics Data System (ADS)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease in reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of data and value-added products. In particular, the high volume of data sales required for a return on investment conflicts with the traditionally low-volume data use of most applications. Constant access to data sources presupposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible once the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  17. Potential for deserts to supply reliable renewable electric power

    NASA Astrophysics Data System (ADS)

    Labordena, Mercè; Lilliestam, Johan

    2015-04-01

    To avoid dangerous climate change, electricity systems must be decarbonized by mid-century. The world has sufficient renewable electricity resources for complete power sector decarbonization, but an expansion of renewables poses several challenges for electricity systems. First, wind and solar PV power are intermittent and supply-controlled, making it difficult to securely integrate this fluctuating generation into the power systems. Consequently, power sources that are both renewable and dispatchable, such as biomass, hydro and concentrating solar power (CSP), are particularly important. Second, renewable power has a low power density and needs vast areas of land, which is problematic both for cost reasons and because of land-use conflicts, in particular with agriculture. Renewable and dispatchable technologies that can be built in sparsely inhabited regions or on land with low competition with agriculture would therefore be especially valuable; this land-use competition greatly limits the potential for hydro and biomass electricity. Deserts, however, are precisely such low-competition land, and are at the same time the places best suited for CSP generation, but this option would necessitate long transmission lines from remote desert sites to demand centers such as big cities. We therefore study the potential for fleets of CSP plants in the large deserts of the world to produce reliable and reasonable-cost renewable electricity for regions with high and/or rapidly increasing electricity demand and with a desert within or close to their borders. The regions in focus here are the European Union, North Africa and the Middle East, China and Australia. We conduct the analysis in three steps. First, we identify the best solar generation areas in the selected deserts using geographic information systems (GIS), applying restrictions to minimize impact on biodiversity, soils, human health, and land-use and land-cover change.
Second, we identify transmission corridors from the generation areas to the demand centers in the target regions, using a GIS-based transmission algorithm that minimizes economic, social and environmental costs. Third, we use the multi-scale energy system model Calliope to specify the optimal configuration and operation of the CSP fleet to reliably follow the demand every hour of the year in the target regions, and to calculate the levelized cost of doing so, including both generation and transmission costs. The final output will show whether and how much reliable renewable electricity can be supplied from CSP fleets in deserts to demand centers in adjacent regions, at which costs this is possible, as well as a detailed description of the routes of HVDC transmission links. We expect to find that the potential for deserts to supply reliable CSP to the regions in focus is very large in all cases, despite the long distances.

  18. Major design issues of molten carbonate fuel cell power generation unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T.P.

    1996-04-01

    In addition to the stack, a fuel cell power generation unit requires fuel desulfurization and reforming, fuel and oxidant preheating, process heat removal, waste heat recovery, steam generation, oxidant supply, power conditioning, water supply and treatment, purge gas supply, instrument air supply, and system control. These support facilities add considerable cost and system complexity. Bechtel, as a system integrator of M-C Power's molten carbonate fuel cell development team, has spent substantial effort to simplify and minimize these supporting facilities to meet cost and reliability goals for commercialization. Similar to other fuel cells, the MCFC faces the design challenge of complying with codes and standards and achieving high efficiency and part-load performance while minimizing utility requirements, weight, plot area, and cost. However, the MCFC has several unique design issues due to its high operating temperature, use of molten electrolyte, and the requirement of CO2 recycle.

  19. Reliable sagittal plane kinematic gait assessments are feasible using low-cost webcam technology.

    PubMed

    Saner, Robert J; Washabaugh, Edward P; Krishnan, Chandramouli

    2017-07-01

    Three-dimensional (3-D) motion capture systems are commonly used for gait analysis because they provide reliable and accurate measurements. However, this approach is expensive and requires technical expertise, making it less feasible in the clinic. To address this limitation, we recently developed and validated (using a high-precision walking robot) a low-cost, two-dimensional (2-D) real-time motion tracking approach using a simple webcam and LabVIEW Vision Assistant. The purpose of this study was to establish the repeatability and minimal detectable change values of hip and knee sagittal plane gait kinematics recorded using this system. Twenty-one healthy subjects underwent two kinematic assessments while walking on a treadmill at a range of gait velocities. Intraclass correlation coefficients (ICC) and minimal detectable change (MDC) values were calculated for commonly used hip and knee kinematic parameters to demonstrate the reliability of the system. Additionally, Bland-Altman plots were generated to examine the agreement between the measurements recorded on two different days. The system demonstrated good to excellent reliability (ICC > 0.75) for all the gait parameters tested in this study. The MDC values were typically low (< 5°) for most of the parameters. The Bland-Altman plots indicated that there was no systematic error or bias in the kinematic measurements and showed good agreement between measurements obtained on two different days. These results indicate that kinematic gait assessments using webcam technology can be reliably used for clinical and research purposes. Copyright © 2017 Elsevier B.V. All rights reserved.
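    The reliability statistics reported here follow standard definitions. The sketch below, using hypothetical peak knee-flexion angles rather than the study's data, computes a two-way random-effects, absolute-agreement ICC(2,1) for two sessions and the corresponding minimal detectable change, MDC95 = 1.96 · √2 · SEM, where SEM = SD · √(1 − ICC):

    ```python
    import math
    from statistics import mean, stdev

    def icc_2_1(day1, day2):
        """Two-way random-effects, absolute-agreement ICC(2,1) for k = 2 sessions."""
        n, k = len(day1), 2
        rows = list(zip(day1, day2))
        grand = mean(day1 + day2)
        ss_rows = k * sum((mean(r) - grand) ** 2 for r in rows)          # between subjects
        ss_cols = n * sum((m - grand) ** 2 for m in (mean(day1), mean(day2)))  # between sessions
        ss_total = sum((x - grand) ** 2 for r in rows for x in r)
        ss_err = ss_total - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    def mdc95(day1, day2, icc):
        """Minimal detectable change at 95% confidence: 1.96 * sqrt(2) * SEM."""
        sem = stdev(day1 + day2) * math.sqrt(1 - icc)
        return 1.96 * math.sqrt(2) * sem

    # Hypothetical peak knee-flexion angles (degrees) from two test days:
    d1 = [60.1, 58.4, 62.0, 59.3, 61.2, 57.8, 63.1, 60.5]
    d2 = [59.8, 58.9, 61.5, 59.0, 61.8, 58.2, 62.6, 60.9]
    icc = icc_2_1(d1, d2)
    print(f"ICC = {icc:.3f}, MDC95 = {mdc95(d1, d2, icc):.2f} deg")
    ```

    Any day-to-day change smaller than the MDC95 cannot be distinguished from measurement noise, which is why the study's low (< 5°) MDC values support clinical use.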

  20. Large scale distribution monitoring of FRP-OF based on BOTDR technique for infrastructures

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi; He, Jianping; Yan, Kai; Ou, Jinping

    2007-04-01

    The BOTDA(R) sensing technique is considered one of the most practical monitoring solutions for large structures. However, a major obstacle to applying BOTDA(R) over large areas remains: the high cost and limited reliability of the sensing head, which depend on sensor installation and survival. In this paper, we report a novel low-cost and highly reliable BOTDA(R) sensing head using FRP (Fiber Reinforced Polymer)-bare optical fiber rebar, named BOTDA(R)-FRP-OF. We investigated the surface bonding and its mechanical strength by SEM and intensity experiments. Because the strain difference between the OF and the host matrix may result in measurement error, the strain transfer from host to OF has been theoretically studied. Furthermore, the strain and temperature sensing properties of GFRP-OFs at different gauge lengths were tested under different spatial and readout resolutions using a commercial BOTDA instrument. A dual FRP-OF temperature compensation method has also been proposed and analyzed. Finally, BOTDA(R)-OFs have been applied in the Tiyu West Road civil structure in Guangzhou and on the Daqing Highway. This novel FRP-OF rebar shows both high strength and good sensing properties, and can be used in long-term SHM for civil infrastructures.

  1. High Temperature Irradiation-Resistant Thermocouple Performance Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshua Daw; Joy Rempe; Darrell Knudson

    2009-04-01

    Traditional methods for measuring temperature in-pile degrade at temperatures above 1100 °C. To address this instrumentation need, the Idaho National Laboratory (INL) developed and evaluated the performance of a high temperature irradiation-resistant thermocouple (HTIR-TC) containing alloys of molybdenum and niobium. Data from high temperature (up to 1500 °C), long duration (up to 4000 hours) tests and ongoing irradiations at INL's Advanced Test Reactor demonstrate the superiority of these sensors over commercially available thermocouples. However, several options have been identified that could further enhance their reliability, reduce their production costs, and allow their use in a wider range of operating conditions. This paper presents results from ongoing INL/University of Idaho (UI) efforts to improve HTIR-TC ductility, reliability, and resolution by investigating specially formulated alloys of molybdenum and niobium and alternate-diameter thermoelements (wires). In addition, ongoing efforts to evaluate alternate fabrication approaches, such as drawn and loose-assembly techniques, will be discussed. Efforts to reduce HTIR-TC fabrication costs, such as the use of less expensive extension cable, will also be presented. Finally, customized HTIR-TC designs developed for specific customer needs will be summarized to emphasize the varied conditions under which these sensors may be used.

  2. Navigating Financial and Supply Reliability Tradeoffs in Regional Drought Portfolios

    NASA Astrophysics Data System (ADS)

    Zeff, H. B.; Herman, J. D.; Characklis, G. W.; Reed, P. M.

    2013-12-01

    Rising development costs and growing concerns over environmental impacts have led many communities to explore more diversified regional portfolio-type approaches to managing their water supplies. These strategies coordinate existing supply infrastructure with other 'assets' such as conservation measures or water transfers, reducing the capacity and costs required to meet demand by providing greater adaptability to changing hydrologic conditions. For many water utilities, however, this additional flexibility can also cause unexpected reductions in revenue (i.e. conservation) or increased costs (i.e. transfers), fluctuations that can be very difficult for a regulated entity to manage. Thus, despite the advantages, concerns over the resulting financial disruptions provide a disincentive for utilities to develop more adaptive methods, potentially limiting the role of some very effective tools. This study seeks to design portfolio strategies that employ financial instruments (e.g. contingency funds, index insurance) to reduce fluctuations in revenues and costs and therefore do not sacrifice financial stability for improved performance (e.g. lower expected costs, high reliability). This work describes the development of regional water supply portfolios in the 'Research Triangle' region of North Carolina, an area comprising four rapidly growing municipalities supplied by nine surface water reservoirs in two separate river basins. Disparities in growth rates and the respective individual storage capacities of the reservoirs provide the region with the opportunity to increase the efficiency of the regional supply infrastructure through inter-utility water transfers, even as each utility engages in its own conservation activities. The interdependence of multiple utilities navigating shared conveyance and treatment infrastructure to engage in transfers forces water managers to consider regional objectives, as the actions of any one utility can affect the others.
Results indicate the inclusion of inter-utility water transfers allows the water utilities to improve on regional operational objectives (i.e. higher reliability and lower restriction frequencies) at a lower expected cost, while financial mitigation tools introduce a tradeoff between expected costs and cost variability. Financial mitigation schemes, including both third-party financial insurance contracts and contingency funds (i.e. self-insurance), were able to reduce cost variability at a lower expected cost than mitigation schemes which use self-insurance alone. The dynamics of the Research Triangle scenario (e.g. rapid population growth, constrained supply, and sensitivity to cost/revenue swings) suggest that this work may have the potential to more generally inform utilities on the effects of coordinated regional water supply planning and the resulting financial implications of more flexible, portfolio-type management techniques.

  3. The Joint Confidence Level Paradox: A History of Denial

    NASA Technical Reports Server (NTRS)

    Butts, Glenn; Linton, Kent

    2009-01-01

    This paper is intended to provide a reliable methodology for those tasked with generating price tags for construction of facilities (CoF) and research and development (R&D) activities in the NASA performance world. The document consists of a collection of cost-related engineering detail and project fulfillment information from the agency's early days to the present. Accurate historical detail is the first place to start when determining improved methodologies for future cost and schedule estimating. The paper proposes a beneficial cost estimating method for arriving at more reliable numbers in future submissions. Comparing current cost and schedule methods with earlier approaches makes it apparent that NASA's organizational performance paradigm has morphed: mission fulfillment has slowed and cost-calculating factors have increased in 21st-century space exploration.

  4. On the Path to SunShot. Advancing Concentrating Solar Power Technology, Performance, and Dispatchability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehos, Mark; Turchi, Craig; Jorgenson, Jennie

    2016-05-01

    This report examines the remaining challenges to achieving the competitive concentrating solar power (CSP) costs and large-scale deployment envisioned under the U.S. Department of Energy's SunShot Initiative. Although CSP costs continue to decline toward SunShot targets, CSP acceptance and deployment have been hindered by inexpensive photovoltaics (PV). However, a recent analysis found that thermal energy storage (TES) could increase CSP's value--based on combined operational and capacity benefits--by up to 6 cents/kWh compared to variable-generation PV, under a 40% renewable portfolio standard in California. Thus, the high grid value of CSP-TES must be considered when evaluating renewable energy options. An assessment of net system cost accounts for the difference between the costs of adding new generation and the avoided cost from displacing other resources providing the same level of energy and reliability. The net system costs of several CSP configurations are compared with the net system costs of conventional natural-gas-fired combustion-turbine (CT) and combined-cycle plants. At today's low natural gas prices and carbon emission costs, the economics suggest a peaking configuration for CSP. However, with high natural gas prices and emission costs, each of the CSP configurations compares favorably against the conventional alternatives, and systems with intermediate to high capacity factors become the preferred alternatives. Another analysis compares net system costs for three configurations of CSP versus PV with batteries and PV with CTs. Under current technology costs, the least-expensive option is a combination of PV and CTs. However, under future cost assumptions, the optimal configuration of CSP becomes the most cost-effective option.

  5. Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jing; Botterud, Audun; Mills, Andrew

    2015-06-01

    A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high-solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource.

  6. Wholesale electricity market design with increasing levels of renewable generation: Revenue sufficiency and long-term reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milligan, Michael; Frew, Bethany A.; Bloom, Aaron

    This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. Furthermore, we explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.

  7. Scheduling structural health monitoring activities for optimizing life-cycle costs and reliability of wind turbines

    NASA Astrophysics Data System (ADS)

    Hanish Nithin, Anu; Omenzetter, Piotr

    2017-04-01

    Optimization of the life-cycle costs and reliability of offshore wind turbines (OWTs) is an area of immense interest due to the widespread increase in wind power generation across the world. Most of the existing studies have used structural reliability and the Bayesian pre-posterior analysis for optimization. This paper proposes an extension to the previous approaches in a framework for probabilistic optimization of the total life-cycle costs and reliability of OWTs by combining the elements of structural reliability/risk analysis (SRA) and the Bayesian pre-posterior analysis with optimization through a genetic algorithm (GA). The SRA techniques are adopted to compute the probabilities of damage occurrence and failure associated with the deterioration model. The probabilities are used in the decision tree and are updated using the Bayesian analysis. The output of this framework determines the optimal structural health monitoring and maintenance schedules to be implemented during the life span of OWTs while maintaining a trade-off between life-cycle costs and the risk of structural failure. Numerical illustrations are given for a generic deterioration model with one monitoring exercise in the life cycle of a system. Two case scenarios, namely whether to build an initially expensive but robust structure or a cheaper but more quickly deteriorating one, and whether to adopt an expensive monitoring system, are presented to aid the decision-making process.

  8. Wholesale electricity market design with increasing levels of renewable generation: Revenue sufficiency and long-term reliability

    DOE PAGES

    Milligan, Michael; Frew, Bethany A.; Bloom, Aaron; ...

    2016-03-22

    This paper discusses challenges that relate to assessing and properly incentivizing the resources necessary to ensure a reliable electricity system with growing penetrations of variable generation (VG). The output of VG (primarily wind and solar generation) varies over time and cannot be predicted precisely. Therefore, the energy from VG is not always guaranteed to be available at times when it is most needed. This means that its contribution towards resource adequacy can be significantly less than the contribution from traditional resources. Variable renewable resources also have near-zero variable costs, and with production-based subsidies they may even have negative offer costs. Because variable costs drive the spot price of energy, this can lead to reduced prices, sales, and therefore revenue for all resources within the energy market. The characteristics of VG can also result in increased price volatility as well as the need for more flexibility in the resource fleet in order to maintain system reliability. Furthermore, we explore both traditional and evolving electricity market designs in the United States that aim to ensure resource adequacy and sufficient revenues to recover costs when those resources are needed for long-term reliability. We also investigate how reliability needs may be evolving and discuss how VG may affect future electricity market designs.

  9. Product reliability and thin-film photovoltaics

    NASA Astrophysics Data System (ADS)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWh of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid-supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focusing instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.

  10. Space Shuttle Software Development and Certification

    NASA Technical Reports Server (NTRS)

    Orr, James K.; Henderson, Johnnie A

    2000-01-01

    Man-rated software, "software which is in control of systems and environments upon which human life is critically dependent," must be highly reliable. The Space Shuttle Primary Avionics Software System is an excellent example of such a software system. Lessons learned from more than 20 years of effort have identified basic elements that must be present to achieve this high degree of reliability. The elements include rigorous application of appropriate software development processes, use of trusted tools to support those processes, quantitative process management, and defect elimination and prevention. This presentation highlights methods used within the Space Shuttle project and raises questions that must be addressed to provide similar success in a cost-effective manner on future long-term projects where key application development tools are COTS rather than internally developed custom application development tools.

  11. Anti-aliasing filter design on spaceborne digital receiver

    NASA Astrophysics Data System (ADS)

    Yu, Danru; Zhao, Chonghui

    2009-12-01

    In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies have been adopted in spaceborne systems. A spaceborne precipitation radar depends heavily on high-performance digital processing to collect meaningful rain echo data, which increases the complexity of the spaceborne system and demands a high-performance, reliable digital receiver. This paper analyzes the frequency aliasing in intermediate-frequency signal sampling during digital down conversion (DDC) in spaceborne radar and presents an effective digital filter. By analysis and calculation, we choose reasonable parameters for the half-band filters to suppress frequency aliasing in the DDC. Compared with a traditional filter, the FPGA resource cost in our system is reduced by over 50%. This effectively reduces the complexity of the spaceborne digital receiver and improves system reliability.
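
    The resource saving comes from the half-band structure: every other tap of a half-band FIR filter is zero, so roughly half the multipliers can be omitted in hardware. A minimal windowed-sinc sketch of that property (my illustration, not the paper's design):

```python
import math

def halfband_fir(n_taps=11):
    """Windowed-sinc half-band low-pass FIR (cutoff at fs/4).
    With n_taps = 4k + 3, every other coefficient except the
    centre tap is zero, which is what halves the multiplier
    count in an FPGA implementation."""
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    c = (n_taps - 1) // 2
    h = []
    for i in range(n_taps):
        m = i - c                                   # tap index about centre
        w = 0.54 - 0.46 * math.cos(2 * math.pi * i / (n_taps - 1))  # Hamming
        h.append(sinc(m / 2.0) * w)                 # ideal half-band * window
    s = sum(h)
    return [v / s for v in h]                       # unity DC gain

h = halfband_fir()
# Taps 1, 3, 7, 9 are (numerically) zero for an 11-tap design.
print(all(abs(h[i]) < 1e-12 for i in (1, 3, 7, 9)))
```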

  12. A study of electric transmission lines for use on the lunar surface

    NASA Technical Reports Server (NTRS)

    Gaustad, Krista L.; Gordon, Lloyd B.; Weber, Jennifer R.

    1994-01-01

    The sources for electrical power on a lunar base are said to include solar/chemical, nuclear (static conversion), and nuclear (dynamic conversion). The transmission of power via transmission lines is more practical than power beaming or superconducting because of its low cost and reliable, proven technology. Transmission lines must have minimum mass, maximum efficiency, and the ability to operate reliably in the lunar environment. The transmission line design includes conductor material, insulator material, conductor geometry, conductor configuration, line location, waveform, phase selection, and frequency. This presentation outlines the design. Liquid and gaseous dielectrics are undesirable for long-term use in the lunar vacuum due to a high probability of loss. Thus, insulation for a high voltage transmission line will most likely be solid dielectric or vacuum insulation.

  13. Photovoltaic Module Reliability Workshop 2012: February 28 - March 1, 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurtz, S.

    2013-11-01

    NREL's Photovoltaic (PV) Module Reliability Workshop (PVMRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology--both critical goals for moving PV technologies deeper into the electricity marketplace.

  14. High efficiency digital cooler electronics for aerospace applications

    NASA Astrophysics Data System (ADS)

    Kirkconnell, C. S.; Luong, T. T.; Shaw, L. S.; Murphy, J. B.; Moody, E. A.; Lisiecki, A. L.; Ellis, M. J.

    2014-06-01

    Closed-cycle cryogenic refrigerators, or cryocoolers, are an enabling technology for a wide range of aerospace applications, mostly related to infrared (IR) sensors. While the industry focus has tended to be on the mechanical cryocooler thermo mechanical unit (TMU) alone, implementation on a platform necessarily consists of the combination of the TMU and a mating set of command and control electronics. For some applications the cryocooler electronics (CCE) are technologically simple and low cost relative to the TMU, but this is not always the case. The relative cost and complexity of the CCE for a space-borne application can easily exceed that of the TMU, primarily due to the technical constraints and cost impacts introduced by the typical space radiation hardness and reliability requirements. High end tactical IR sensor applications also challenge the state of the art in cryocooler electronics, such as those for which temperature setpoint and frequency must be adjustable, or those where an informative telemetry set must be supported, etc. Generally speaking for both space and tactical applications, it is often the CCE that limits the rated lifetime and reliability of the cryocooler system. A family of high end digital cryocooler electronics has been developed to address these needs. These electronics are readily scalable from 10W to 500W output capacity; experimental performance data for nominally 25W and 100W variants are presented. The combination of a FPGA-based controller and dual H-bridge motor drive architectures yields high efficiency (>92% typical) and precision temperature control (+/- 30 mK typical) for a wide range of Stirling-class mechanical cryocooler types and vendors. This paper focuses on recent testing with the AIM INFRAROT-MODULE GmbH (AIM) SX030 and AIM SF100 cryocoolers.

  15. Sideband Algorithm for Automatic Wind Turbine Gearbox Fault Detection and Diagnosis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zappala, D.; Tavner, P.; Crabtree, C.

    2013-01-01

    Improving the availability of wind turbines (WT) is critical to minimize the cost of wind energy, especially for offshore installations. As gearbox downtime has a significant impact on WT availabilities, the development of reliable and cost-effective gearbox condition monitoring systems (CMS) is of great concern to the wind industry. Timely detection and diagnosis of developing gear defects within a gearbox is an essential part of minimizing unplanned downtime of wind turbines. Monitoring signals from WT gearboxes are highly non-stationary as turbine load and speed vary continuously with time. Time-consuming and costly manual handling of large amounts of monitoring data represent one of the main limitations of most current CMSs, so automated algorithms are required. This paper presents a fault detection algorithm for incorporation into a commercial CMS for automatic gear fault detection and diagnosis. The algorithm allowed the assessment of gear fault severity by tracking progressive tooth gear damage during variable speed and load operating conditions of the test rig. Results show that the proposed technique proves efficient and reliable for detecting gear damage. Once implemented into WT CMSs, this algorithm can automate data interpretation, reducing the quantity of information that WT operators must handle.

  16. Optimal PGU operation strategy in CHP systems

    NASA Astrophysics Data System (ADS)

    Yun, Kyungtae

    Traditional power plants only utilize about 30 percent of the primary energy that they consume, and the rest of the energy is usually wasted in the process of generating or transmitting electricity. On-site and near-site power generation has been considered by business, labor, and environmental groups to improve the efficiency and the reliability of power generation. Combined heat and power (CHP) systems are a promising alternative to traditional power plants because of the high efficiency and low CO2 emission achieved by recovering waste thermal energy produced during power generation. A CHP operational algorithm designed to optimize operational costs must be simple enough to implement in practice so as to minimize the computational requirements of the hardware to be installed. This dissertation focuses on the following aspects pertaining to the design of such an algorithm: (a) real-time CHP operational strategy using a hierarchical optimization algorithm; (b) analytic solutions for cost-optimal power generation unit operation in CHP systems; (c) modeling of reciprocating internal combustion engines for power generation and heat recovery; (d) an easy to implement, effective, and reliable hourly building load prediction algorithm.

  17. Enhancing malaria diagnosis through microfluidic cell enrichment and magnetic resonance relaxometry detection

    NASA Astrophysics Data System (ADS)

    Fook Kong, Tian; Ye, Weijian; Peng, Weng Kung; Wei Hou, Han; Marcos; Preiser, Peter Rainer; Nguyen, Nam-Trung; Han, Jongyoon

    2015-06-01

    Despite significant advancements over the years, there remains an urgent need for low cost diagnostic approaches that allow for rapid, reliable and sensitive detection of malaria parasites in clinical samples. Our previous work has shown that magnetic resonance relaxometry (MRR) is a potentially highly sensitive tool for malaria diagnosis. A key challenge for making MRR based malaria diagnostics suitable for clinical testing is the fact that MRR baseline fluctuation exists between individuals, making it difficult to detect low level parasitemia. To overcome this problem, it is important to establish the MRR baseline of each individual while having the ability to reliably determine any changes that are caused by the infection of malaria parasite. Here we show that an approach that combines the use of microfluidic cell enrichment with a saponin lysis before MRR detection can overcome these challenges and provide the basis for a highly sensitive and reliable diagnostic approach of malaria parasites. Importantly, as little as 0.0005% of ring stage parasites can be detected reliably, making this ideally suited for the detection of malaria parasites in peripheral blood obtained from patients. The approaches used here are envisaged to provide a new malaria diagnosis solution in the near future.

  18. Gearbox Reliability Collaborative Phase 3 Gearbox 2 Test Plan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Link, H.; Keller, J.; Guo, Y.

    2013-04-01

    Gearboxes in wind turbines have not been achieving their expected design life even though they commonly meet or exceed the design criteria specified in current design standards. One of the basic premises of the National Renewable Energy Laboratory (NREL) Gearbox Reliability Collaborative (GRC) is that the low gearbox reliability results from the absence of critical elements in the design process or insufficient design tools. Key goals of the GRC are to improve design approaches and analysis tools and to recommend practices and test methods resulting in improved design standards for wind turbine gearboxes that lower the cost of energy (COE) through improved reliability. The GRC uses a combined gearbox testing, modeling, and analysis approach, along with a database of information from gearbox failures collected from overhauls and investigation of gearbox condition monitoring techniques, to improve wind turbine operations and maintenance practices. This plan covers testing of Gearbox 2 (GB2) using the two-speed turbine controller that has been used in prior testing. The test series will investigate non-torque loads, high-speed shaft misalignment, and reproduction of field conditions in the dynamometer. It will also include vibration testing using an eddy-current brake on the gearbox's high-speed shaft.

  19. Using Facility Condition Assessments to Identify Actions Related to Infrastructure

    NASA Technical Reports Server (NTRS)

    Rubert, Kennedy F.

    2010-01-01

    To support cost effective, quality research it is essential that laboratory and testing facilities are maintained in a continuous and reliable state of availability at all times. NASA Langley Research Center (LaRC) and its maintenance contractor, Jacobs Technology, Inc. Research Operations, Maintenance, and Engineering (ROME) group, are in the process of implementing a combined Facility Condition Assessment (FCA) and Reliability Centered Maintenance (RCM) program to improve asset management and overall reliability of testing equipment in facilities such as wind tunnels. Specific areas are being identified for improvement, the deferred maintenance cost is being estimated, and priority is being assigned against facilities where conditions have been allowed to deteriorate. This assessment serves to assist in determining where to commit available funds on the Center. RCM methodologies are being reviewed and enhanced to assure that appropriate preventive, predictive, and facilities/equipment acceptance techniques are incorporated to prolong lifecycle availability and assure reliability at minimum cost. The results from the program have been favorable, better enabling LaRC to manage assets prudently.

  20. Study of a fail-safe abort system for an actively cooled hypersonic aircraft. Volume 1: Technical summary

    NASA Technical Reports Server (NTRS)

    Pirello, C. J.; Herring, R. L.

    1976-01-01

    Conceptual designs of a fail-safe abort system for hydrogen fueled actively cooled high speed aircraft are examined. The fail-safe concept depends on basically three factors: (1) a reliable method of detecting a failure or malfunction in the active cooling system, (2) the optimization of abort trajectories which minimize the descent heat load to the aircraft, and (3) fail-safe thermostructural concepts to minimize both the weight and the maximum temperature the structure will reach during descent. These factors are examined and promising approaches are evaluated based on weight, reliability, ease of manufacture and cost.

  1. Remote Energy Monitoring System via Cellular Network

    NASA Astrophysics Data System (ADS)

    Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi

    Recently, improving power savings and cost efficiency by monitoring the operation status of various facilities over a network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability, due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and the instantaneous available bandwidth to realize a highly reliable remote monitoring system over a cellular network. We have developed the proposed monitoring system, evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to 1/10 of that of best-effort transmission.
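
    A priority- and bandwidth-aware transmission rule of the kind the abstract describes can be sketched as follows. The message fields and sizes are assumptions for illustration, not the authors' protocol:

```python
# Sketch: each cycle, transmit the highest-priority pending messages that
# fit within the instantaneously available bandwidth; lower-priority data
# waits when the cellular link degrades.

def schedule(messages, available_bytes):
    """messages: list of (priority, size_bytes, payload); a lower priority
    number means more urgent. Returns payloads to send this cycle."""
    sent = []
    # Stable sort keeps arrival order among equal priorities.
    for prio, size, payload in sorted(messages, key=lambda m: m[0]):
        if size <= available_bytes:
            sent.append(payload)
            available_bytes -= size
    return sent

pending = [(2, 400, "trend"), (1, 100, "alarm"),
           (3, 800, "log"), (1, 200, "status")]
# With only 700 bytes of headroom, the bulky low-priority log is deferred.
print(schedule(pending, 700))
```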

  2. Robot-Powered Reliability Testing at NREL's ESIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Kevin

    With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested, and currently costly, component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle, all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.

  3. Robot-Powered Reliability Testing at NREL's ESIF

    ScienceCinema

    Harrison, Kevin

    2018-02-14

    With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested, and currently costly, component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle, all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.

  4. Cost of care of haemophilia with inhibitors.

    PubMed

    Di Minno, M N D; Di Minno, G; Di Capua, M; Cerbone, A M; Coppola, A

    2010-01-01

    In Western countries, the treatment of patients with inhibitors is presently the most challenging and serious issue in haemophilia management, with direct costs of clotting factor concentrates accounting for >98% of the highest economic burden absorbed for the healthcare of patients in this setting. Being designed to address questions of resource allocation and effectiveness, decision models are the gold standard for reliably assessing the overall economic implications of haemophilia with inhibitors in terms of mortality, bleeding-related morbidity, and severity of arthropathy. However, presently, most data analyses stem from retrospective short-term evaluations that only allow for the analysis of direct health costs. In the setting of chronic diseases, the cost-utility analysis, which takes into account the beneficial effects of a given treatment/healthcare intervention in terms of health-related quality of life, is likely to be the most appropriate approach. To calculate net benefits, the quality-adjusted life year, which reflects such health gains, has to be compared with the specific economic impacts. Differences in data sources, in medical practice, and/or in healthcare systems and costs imply that most current pharmacoeconomic analyses are confined to a narrow healthcare payer perspective. Long-term/lifetime prospective or observational studies, devoted to a careful definition of when to start a treatment, of regimens (dose and type of product) to employ, and of inhibitor population (children/adults, low-responding/high-responding inhibitors) to study, are thus urgently needed to allow for newer insights, based on reliable data sources, into resource allocation, effectiveness, and cost-utility analysis in the treatment of haemophiliacs with inhibitors.
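
    The cost-utility comparison the abstract calls for reduces to an incremental cost-effectiveness ratio (ICER): extra cost divided by extra quality-adjusted life years (QALYs) gained. A minimal sketch with made-up figures, not data from the paper:

```python
# ICER = (cost_new - cost_old) / (QALYs_new - QALYs_old), i.e. the extra
# spend per additional quality-adjusted life year of the new intervention.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost per QALY gained; costs in any fixed currency."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative lifetime figures only (currency units and QALYs invented):
print(icer(cost_new=900_000, qaly_new=14.0, cost_old=600_000, qaly_old=11.0))
```

    The resulting ratio is what gets compared against a payer's willingness-to-pay threshold when deciding on resource allocation.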

  5. Design, performance, and economics of 50-kW and 500-kW vertical axis wind turbines

    NASA Astrophysics Data System (ADS)

    Schienbein, L. A.; Malcolm, D. J.

    1983-11-01

    A review of the development and performance of the DAF Indal 50-kW vertical axis Darrieus wind turbine shows that a high level of technical development and reliability has been achieved. Features of the drive train, braking and control systems are discussed and performance details are presented. Details are also presented of a 500-kW VAWT that is currently in production. A discussion of the economics of both the 50-kW and 500-kW VAWTs is included, showing the effects of charge rate, installed cost, operating cost, performance, and efficiency.

  6. Real-time science and outreach from the UNOLS fleet via HiSeasNet

    NASA Astrophysics Data System (ADS)

    Foley, S.; Berger, J.; Orcutt, J. A.; Brice, D.; Coleman, D. F.; Grabowski, E. M.

    2010-12-01

    The HiSeasNet satellite communications network has been providing cost-effective, reliable, continuous Internet connectivity to the UNOLS oceanographic research fleet for nearly nine years. During that time, HiSeasNet has supported science and outreach programs with a variety of real-time interactions back to shore, including videoconferencing, webcasting, shared whiteboards, and streaming high-definition video feeds. Solutions have varied in scale, cost, and capability. As real-time science and outreach become more common, experience with a variety of technologies continues to build, and more opportunities remain to be explored.

  7. Incident learning in pursuit of high reliability: implementing a comprehensive, low-threshold reporting program in a large, multisite radiation oncology department.

    PubMed

    Gabriel, Peter E; Volz, Edna; Bergendahl, Howard W; Burke, Sean V; Solberg, Timothy D; Maity, Amit; Hahn, Stephen M

    2015-04-01

    Incident learning programs have been recognized as cornerstones of safety and quality assurance in so-called high reliability organizations in industries such as aviation and nuclear power. High reliability organizations are distinguished by their drive to continuously identify and proactively address a broad spectrum of latent safety issues. Many radiation oncology institutions have reported on their experience in tracking and analyzing adverse events and near misses but few have incorporated the principles of high reliability into their programs. Most programs have focused on the reporting and retrospective analysis of a relatively small number of significant adverse events and near misses. To advance a large, multisite radiation oncology department toward high reliability, a comprehensive, cost-effective, electronic condition reporting program was launched to enable the identification of a broad spectrum of latent system failures, which would then be addressed through a continuous quality improvement process. A comprehensive program, including policies, work flows, and information system, was designed and implemented, with use of a low reporting threshold to focus on precursors to adverse events. In a 46-month period from March 2011 through December 2014, a total of 8,504 conditions (average, 185 per month, 1 per patient treated, 3.9 per 100 fractions [individual treatments]) were reported. Some 77.9% of clinical staff members reported at least 1 condition. Ninety-eight percent of conditions were classified in the lowest two of four severity levels, providing the opportunity to address conditions before they contribute to adverse events. Results after approximately four years show excellent employee engagement, a sustained rate of reporting, and a focus on low-level issues leading to proactive quality improvement interventions.

  8. Demonstration of a diode-laser-based high spectral resolution lidar (HSRL) for quantitative profiling of clouds and aerosols.

    PubMed

    Hayman, Matthew; Spuler, Scott

    2017-11-27

    We present a demonstration of a diode-laser-based high spectral resolution lidar. It is capable of performing calibrated retrievals of aerosol and cloud optical properties at a 150 m range resolution with less than 1 minute integration time over an approximate range of 12 km during day and night. This instrument operates at 780 nm, a wavelength that is well established for reliable semiconductor lasers and detectors, and was chosen because it corresponds to the D2 rubidium absorption line. A heated vapor reference cell of isotopic rubidium 87 is used as an effective and reliable aerosol signal blocking filter in the instrument. In principle, the diode-laser-based high spectral resolution lidar can be made cost competitive with elastic backscatter lidar systems, yet delivers a significant improvement in data quality through direct retrieval of quantitative optical properties of clouds and aerosols.

  9. A Low-Cost, Reliable, High-Throughput System for Rodent Behavioral Phenotyping in a Home Cage Environment

    PubMed Central

    Parkison, Steven A.; Carlson, Jay D.; Chaudoin, Tammy R.; Hoke, Traci A.; Schenk, A. Katrin; Goulding, Evan H.; Pérez, Lance C.; Bonasera, Stephen J.

    2016-01-01

    Inexpensive, high-throughput, low-maintenance systems for precise temporal and spatial measurement of mouse home cage behavior (including movement, feeding, and drinking) are required to evaluate products from large-scale pharmaceutical design and genetic lesion programs. These measurements are also required to interpret results from more focused behavioral assays. We describe the design and validation of a highly scalable, reliable mouse home cage behavioral monitoring system modeled on a previously described, one-of-a-kind system [1]. Mouse position was determined by solving the static equilibrium equations describing the forces and torques acting on the system's strain gauges; feeding events were detected by a photobeam across the food hopper, and drinking events were detected by a capacitive lick sensor. Validation studies show excellent agreement between mouse position and drinking events measured by the system and those obtained by video-based observation – a gold standard in neuroscience. PMID:23366406
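
    The static-equilibrium idea in the abstract can be sketched in a few lines: weight balance gives the animal's mass, and torque balance about each floor axis gives its center of mass. The gauge layout and readings below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: recovering animal position from four corner load cells
# via static equilibrium (force and torque balance). Numbers are invented.

def position_from_gauges(forces, coords, tare):
    """Weight balance gives the added load; torque balance gives its
    center-of-mass coordinates on the cage floor."""
    net = [f - t for f, t in zip(forces, tare)]   # subtract empty-cage tare
    w = sum(net)                                   # total animal weight
    if w <= 0:
        raise ValueError("no load detected")
    x = sum(f * c[0] for f, c in zip(net, coords)) / w
    y = sum(f * c[1] for f, c in zip(net, coords)) / w
    return x, y, w

# Four strain gauges at the corners of a 30 cm x 30 cm cage floor
coords = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0), (30.0, 30.0)]
tare = [50.0, 50.0, 50.0, 50.0]        # grams-force carried by the empty cage
forces = [55.0, 65.0, 60.0, 70.0]      # readings with a 50 g mouse present
x, y, w = position_from_gauges(forces, coords, tare)
```

    With these readings the net loads are (5, 15, 10, 20) g, so the solver reports a 50 g animal at roughly (21, 18) cm.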

  10. Commercialized VCSEL components fabricated at TrueLight Corporation

    NASA Astrophysics Data System (ADS)

    Pan, Jin-Shan; Lin, Yung-Sen; Li, Chao-Fang A.; Chang, C. H.; Wu, Jack; Lee, Bor-Lin; Chuang, Y. H.; Tu, S. L.; Wu, Calvin; Huang, Kai-Feng

    2001-05-01

    TrueLight Corporation was founded in 1997 and is a pioneering VCSEL component supplier in Taiwan. We specialize in the production and distribution of VCSELs (Vertical Cavity Surface Emitting Lasers) and other high-speed PIN-detector devices and components. Our core technology was developed to meet the booming demand for fiber optic transmission. Our intention is to diversify device applications into the data communication, telecommunication, and industrial markets. One mission is to provide high-performance, highly reliable, and low-cost VCSEL components for data communication and sensing applications. For the past three years, TrueLight Corporation has entered successfully into the Gigabit Ethernet and Fibre Channel data communication areas. In this paper, we focus on the fabrication of VCSEL components. We present the evolution of the implanted and oxide-confined VCSEL processes, device characterization, performance in Gigabit data communication, and, most importantly, reliability.

  11. DRS: Derivational Reasoning System

    NASA Technical Reports Server (NTRS)

    Bose, Bhaskar

    1995-01-01

    The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high-level functional specifications. DRS incorporates an executable specification language, a set of correctness-preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving, and high-level CAD synthesis in order to provide enhanced reliability in designs with reduced time and cost.

  12. POF-IMU sensor system: A fusion between inertial measurement units and POF sensors for low-cost and highly reliable systems

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Vargas-Valencia, Laura; dos Santos, Wilian M.; Schneider, Felipe B. A.; Siqueira, Adriano A. G.; Pontes, Maria José; Frizera, Anselmo

    2018-07-01

    This paper presents a low-cost and highly reliable system for angle measurement based on sensor fusion between inertial and fiber optic sensors. The system fuses, through a Kalman filter, two inertial measurement units (IMUs) and an intensity-variation-based polymer optical fiber (POF) curvature sensor. In addition, the IMU was applied as a reference for a compensation technique for the POF curvature sensor's hysteresis. The proposed system was applied to knee angle measurement of a lower limb exoskeleton in flexion/extension cycles and in gait analysis. Results show the accuracy of the system: the Root Mean Square Error (RMSE) between the POF-IMU sensor system and the encoder was below 4° in the worst case and about 1° in the best case. The POF-IMU sensor system was then evaluated as a wearable sensor for knee joint angle assessment without the exoskeleton, demonstrating its suitability for this purpose. The results obtained in this paper pave the way for future applications of sensor fusion between electronic and fiber optic sensors in movement analysis.
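
    The fusion scheme can be illustrated with a minimal one-dimensional Kalman filter, not the authors' implementation: the IMU gyro rate drives the prediction step and the POF angle drives the correction step. The noise variances q and r and the sensor readings are illustrative assumptions.

```python
# Minimal 1-D Kalman filter sketch of IMU/POF fusion. All noise parameters
# and readings below are assumed for illustration only.

def fuse_knee_angle(gyro_rates, pof_angles, dt=0.01, q=0.01, r=4.0):
    theta, p = pof_angles[0], 1.0          # initial state and its variance
    fused = []
    for rate, z in zip(gyro_rates, pof_angles):
        theta += rate * dt                  # predict: integrate gyro rate
        p += q                              # process noise inflates variance
        k = p / (p + r)                     # Kalman gain
        theta += k * (z - theta)            # correct with POF angle
        p *= (1.0 - k)
        fused.append(theta)
    return fused

# Static knee held near 30 degrees: zero angular rate, noisy POF readings
rates = [0.0] * 6
pof = [30.0, 31.0, 29.0, 30.5, 29.5, 30.0]
estimates = fuse_knee_angle(rates, pof)
```

    Because each update is a convex combination of the prediction and the measurement, the fused estimate stays within the spread of the POF readings while smoothing their noise.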

  13. Pristine carbon nanotubes based resistive temperature sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alam, Md Bayazeed, E-mail: bayazeed786@gmail.com; Jamia Millia Islamia; Saini, Sudhir Kumar, E-mail: sudhirsaini1310@gmail.com

    A good sensor must be highly sensitive, fast in response, low cost and easily producible, and highly reliable. Incorporating nano-dimensional particles/wires makes conventional sensors more effective in meeting these requirements. For example, Carbon Nanotubes (CNTs) are a promising sensing element because of their large aspect ratio and unique electronic and thermal properties. In addition to their widely reported use in chemical sensing, they have also been explored for temperature sensing. This paper presents the fabrication of a CNT-based temperature sensor, prepared on a silicon substrate using a low-cost spray coating method, which is a reliable and reproducible way to prepare uniform CNT thin films on any substrate. In addition, a simple and inexpensive method of preparing a dispersion of single-walled CNTs (SWNTs) in 1,2-dichlorobenzene, using a probe-type ultrasonicator to debundle the CNTs and improve sensor response, was used. Electrical contacts over the dispersed SWNTs were made with silver paste electrodes. The fabricated sensors clearly show an immediate change in resistance in response to a change in temperature of the SWNTs. The measured sensitivity (change in resistance with temperature) of the sensor was found to be ∼0.29%/°C in the 25°C to 60°C temperature range.
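
    The reported figure of merit is just the fractional resistance change per degree. As a back-of-the-envelope sketch, the resistance values below are invented to show what a magnitude of ~0.29 %/°C over 25-60 °C corresponds to; they are not the paper's data.

```python
# Sensitivity of a resistive temperature sensor: fractional resistance change
# per degree Celsius, expressed as a percentage. Values are hypothetical.

def sensitivity_pct_per_degC(r0, r1, t0, t1):
    return (r1 - r0) / r0 / (t1 - t0) * 100.0

# A film dropping from 1000 ohm at 25 degC to 898.5 ohm at 60 degC
s = sensitivity_pct_per_degC(1000.0, 898.5, 25.0, 60.0)
```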

  14. Centralized vs decentralized lunar power system study

    NASA Astrophysics Data System (ADS)

    Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.

    1991-09-01

    Three power-system options are considered for a lunar base: the fully centralized option, the fully decentralized option, and a hybrid combining features of the first two. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high- and low-voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission at voltages of 2000-7000 V, with buried high-voltage lines and suspended low-voltage lines. Assessments of the total cost associated with the installations are required to determine the most suitable power system.

  15. PM2.5 monitoring system based on ZigBee wireless sensor network

    NASA Astrophysics Data System (ADS)

    Lin, Lukai; Li, Xiangshun; Gu, Weiying

    2017-06-01

    In view of the haze problem, and aiming to address the deficiencies of traditional PM2.5 monitoring methods, such as insufficient real-time monitoring, limited transmission distance, high cost, and difficult maintenance, an atmospheric PM2.5 monitoring system based on ZigBee technology is designed. The system combines the advantages of ZigBee's low cost, low power consumption, and high reliability with GPRS/Internet's capability for remote data transmission. Furthermore, it adopts TI's Z-Stack protocol stack and selects the CC2530 chip and TI's MSP430 microcontroller as the core, establishing an air pollution monitoring network that is helpful for the early prediction of major air pollution disasters.

  16. Improved Performance and Safety for High Energy Batteries Through Use of Hazard Anticipation and Capacity Prediction

    NASA Technical Reports Server (NTRS)

    Atwater, Terrill

    1993-01-01

    Predicting the capacity remaining in used high-rate, high-energy batteries provides important information to the user. Knowledge of the capacity remaining in used batteries results in better utilization. This translates into improved readiness and cost savings due to complete, efficient use. High-rate batteries, due to their chemical nature, are highly sensitive to misuse (i.e., over-discharge or very high rate discharge). Battery failure due to misuse or manufacturing defects could be disastrous. Since high-rate, high-energy batteries are expensive and energetic, a reliable method of predicting both failures and remaining energy has been actively sought. Due to concerns over safety, the behavior of lithium/sulphur dioxide cells at different temperatures and current drains was examined. The main thrust of this effort was to determine failure conditions for incorporation in hazard anticipation circuitry. In addition, capacity prediction formulas have been developed from test data. A process that performs continuous, real-time hazard anticipation and capacity prediction was developed. The introduction of this process into microchip technology will enable the production of reliable, safe, and efficient high energy batteries.

  17. Large-area high-power VCSEL pump arrays optimized for high-energy lasers

    NASA Astrophysics Data System (ADS)

    Wang, Chad; Geske, Jonathan; Garrett, Henry; Cardellino, Terri; Talantov, Fedor; Berdin, Glen; Millenheft, David; Renner, Daniel; Klemer, Daniel

    2012-06-01

    Practical, large-area, high-power diode pumps for one micron (Nd, Yb) as well as eye-safer wavelengths (Er, Tm, Ho) are critical to the success of any high energy diode pumped solid state laser. Diode efficiency, brightness, availability and cost will determine how realizable a fielded high energy diode pumped solid state laser will be. 2-D Vertical-Cavity Surface-Emitting Laser (VCSEL) arrays are uniquely positioned to meet these requirements because of their unique properties, such as low divergence circular output beams, reduced wavelength drift with temperature, scalability to large 2-D arrays through low-cost and high-volume semiconductor photolithographic processes, high reliability, no catastrophic optical damage failure, and radiation and vacuum operation tolerance. Data will be presented on the status of FLIR-EOC's VCSEL pump arrays. Analysis of the key aspects of electrical, thermal and mechanical design that are critical to the design of a VCSEL pump array to achieve high power efficient array performance will be presented.

  18. 7 CFR 1770.5 - Periods of retention.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., would have been expected to accomplish the desired result consistent with cost effectiveness... lowest reasonable cost consistent with cost effectiveness, reliability, safety, and expedition. (b... utility service, all removal and restoration activities are completed, and all costs are retired from the...

  19. Hawaii electric system reliability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  20. Hawaii Electric System Reliability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loose, Verne William; Silva Monroy, Cesar Augusto

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
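
    The cost-integration method both reports describe can be sketched as minimizing capacity cost plus expected outage cost (value of lost load times expected unserved energy) over the reserve level. The cost figures and the exponential outage curve below are hypothetical, chosen only to make the trade-off concrete.

```python
import math

# Hypothetical sketch of optimal resource adequacy: total cost = linear
# capacity cost of reserves + expected outage cost, with an assumed
# exponentially decaying expected-unserved-energy curve.

def total_cost(reserve_mw, cap_cost=20.0, voll=1000.0):
    expected_unserved = 100.0 * math.exp(-reserve_mw / 50.0)  # MWh, assumed
    return cap_cost * reserve_mw + voll * expected_unserved

# Grid search for the reserve level that minimizes total cost
costs = {r: total_cost(r) for r in range(0, 301, 10)}
optimal_reserve = min(costs, key=costs.get)
```

    With these numbers the optimum sits where the marginal capacity cost equals the marginal reduction in expected outage cost, around 230 MW of reserves; changing the assumed value of lost load shifts that point, which is exactly the customers'-view dependence the reports emphasize.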

  1. High power disk lasers: advances and applications

    NASA Astrophysics Data System (ADS)

    Havrilla, David; Holzer, Marco

    2011-02-01

    Though the genesis of the disk laser concept dates to the early 1990s, the disk laser continues to demonstrate the flexibility and certain future of a breakthrough technology. Ongoing increases in power per disk, and improvements in beam quality and efficiency, continue to validate the genius of the disk laser concept. As of today, the disk principle has not reached any fundamental limits regarding output power per disk or beam quality, and it offers numerous advantages over other high power resonator concepts, especially over monolithic architectures. With well over 1000 high power disk laser installations, the disk laser has proven to be a robust and reliable industrial tool. With advancements in running cost, investment cost and footprint, manufacturers continue to implement disk laser technology with more vigor than ever. This paper explains important details of the TruDisk laser series and process-relevant features of the system, such as pump diode arrangement, resonator design and integrated beam guidance. In addition, advances in thick-sheet applications and very cost-efficient, high-productivity applications such as remote welding, remote cutting and cutting of thin sheets will be discussed.

  2. Frequency doubled high-power disk lasers in pulsed and continuous-wave operation

    NASA Astrophysics Data System (ADS)

    Weiler, Sascha; Hangst, Alexander; Stolzenburg, Christian; Zawischa, Ivo; Sutter, Dirk; Killi, Alexander; Kalfhues, Steffen; Kriegshaeuser, Uwe; Holzer, Marco; Havrilla, David

    2012-03-01

    The disk laser with multi-kW output power in infrared cw operation is widely used in today's manufacturing, primarily in the automotive industry. The disk technology combines high power (average and/or peak power), excellent beam quality, high efficiency and high reliability with low investment and operating costs. Additionally, the disk laser is ideally suited for frequency conversion due to its polarized output with negligible depolarization losses. Laser light in the green spectral range (~515 nm) can be created with a nonlinear crystal. Pulsed disk lasers with green output of well above 50 W (extracavity doubling) in the ps regime and several hundreds of Watts in the ns regime with intracavity doubling are already commercially available, whereas intracavity-doubled disk lasers in continuous wave operation with greater than 250 W output are in the test phase. In both operating modes (pulsed and cw) the frequency doubled disk laser offers advantages in existing and new applications. Copper welding, for example, is said to show much higher process reliability with green laser light due to its higher absorption in comparison to the infrared. This improvement has the potential to be very beneficial for the automotive industry's move to electrical vehicles, which requires reliable high-volume welding of copper as a major task for electric motors, batteries, etc.

  3. Advanced Launch System propulsion focused technology liquid methane turbopump technical implementation plan

    NASA Technical Reports Server (NTRS)

    Csomor, A.; Nielson, C. E.

    1989-01-01

    This program will focus on the integration of all functional disciplines of the design, manufacturing, materials, fabrication and producibility to define and demonstrate a highly reliable, easily maintained, low cost liquid methane turbopump as a component for the STBE (Space Transportation Booster Engine) using the STME (main engine) oxygen turbopump. A cost model is to be developed to predict the recurring cost of production hardware and operations. A prime objective of the program is to design the liquid methane turbopump to be used in common with a LH2 turbopump optimized for the STME. Time phasing of the effort is presented and interrelationship of the tasks is defined. Major subcontractors are identified and their roles in the program are described.

  4. X-33/RLV System Health Management/ Vehicle Health Management

    NASA Technical Reports Server (NTRS)

    Garbos, Raymond J.; Mouyos, William

    1998-01-01

    To reduce operations cost, the RLV must include the following elements: highly reliable, robust subsystems designed for simple repair access, with a simplified servicing infrastructure and expedited decision making about faults and anomalies. A key component of the Single Stage to Orbit (SSTO) RLV system used to meet these objectives is System Health Management (SHM). SHM deals with the vehicle component, Vehicle Health Management (VHM); the ground processing associated with the fleet (GVHM); and Ground Infrastructure Health Management (GIHM). The objective is to provide an automated collection and paperless health decision, maintenance and logistics system. Many critical technologies are necessary to make SHM (and more specifically VHM) practical, reliable and cost effective. Sanders is leading the design, development and integration of the SHM system for the RLV and for X-33 SHM (a sub-scale, sub-orbital Advanced Technology Demonstrator). This paper will present the X-33 SHM design, which forms the baseline for RLV SHM. This paper will also discuss other applications of these technologies.

  5. Development of a Whole Slide Imaging System on Smartphones and Evaluation With Frozen Section Samples.

    PubMed

    Yu, Hong; Gao, Feng; Jiang, Liren; Ma, Shuoxin

    2017-09-15

    The aim was to develop scalable Whole Slide Imaging (sWSI), a WSI system based on mainstream smartphones coupled with regular optical microscopes. This ultra-low-cost solution should offer diagnostic-ready imaging quality on par with standalone scanners, support both oil and dry objective lenses of different magnifications, and deliver reasonably high throughput, with all performance metrics evaluated by expert pathologists and matched against those of high-end scanners. In the sWSI design, the digitization process is split asynchronously between lightweight clients on smartphones and powerful cloud servers. The client apps automatically capture FoVs at up to 12-megapixel resolution and process them in real time to track the operation of users, then give instant feedback and guidance. The servers first restitch each pair of FoVs, then automatically correct the unknown nonlinear distortion introduced by the smartphone lens on the fly, based on pair-wise stitching, before finally combining all FoVs into one gigapixel VS for each scan. These VSs can be viewed using Internet browsers anywhere. In the evaluation experiment, 100 frozen section slides from patients randomly selected among in-patients of the participating hospital were scanned by both a high-end Leica scanner and sWSI. All VSs were examined by senior pathologists, whose diagnoses were compared against those made using optical microscopy as ground truth to evaluate the image quality.
The sWSI system is developed for both Android and iPhone smartphones and is currently being offered to the public. The image quality is reliable and throughput is approximately 1 FoV per second, digitizing a 15-by-15 mm slide under a 20X objective lens in approximately 30-35 minutes, with little training required for the operator. The expected setup cost is approximately US $100 and scanning each slide costs between US $1 and $10, making sWSI highly cost-effective for infrequent or low-throughput usage. In the clinical evaluation of sample-wise diagnostic reliability, average accuracy scores achieved by sWSI-scan-based diagnoses were as follows: 0.78 for breast, 0.88 for uterine corpus, 0.68 for thyroid, and 0.50 for lung samples. The respective low-sensitivity rates were 0.05, 0.05, 0.13, and 0.25, while the respective low-specificity rates were 0.18, 0.08, 0.20, and 0.25. The participating pathologists agreed that the overall quality of sWSI was generally on par with that produced by high-end scanners and did not affect diagnosis in most cases. Pathologists confirmed that sWSI is reliable enough for standard diagnoses of most tissue categories, while it can be used for quick screening of difficult cases. As an ultra-low-cost alternative to whole slide scanners, the sWSI solution achieves diagnosis-ready VS quality and robustness for commercial usage. Operated on mainstream smartphones mounted on standard optical microscopes, sWSI readily offers affordable and reliable WSI to resource-limited or infrequent clinical users. ©Hong Yu, Feng Gao, Liren Jiang, Shuoxin Ma. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 15.09.2017.

  6. 2nd Generation RLV Risk Reduction Definition Program: Pratt & Whitney Propulsion Risk Reduction Requirements Program (TA-3 & TA-4)

    NASA Technical Reports Server (NTRS)

    Matlock, Steve

    2001-01-01

    This is the final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of 3 cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendation for future studies.

  7. Reimagining cost recovery in Pakistan's irrigation system through willingness-to-pay estimates for irrigation water from a discrete choice experiment

    NASA Astrophysics Data System (ADS)

    Bell, Andrew Reid; Shah, M. Azeem Ali; Ward, Patrick S.

    2014-08-01

    It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies.
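
    The way willingness to pay falls out of a discrete choice experiment can be sketched simply: in a linear-in-attributes conditional logit, WTP for a one-unit attribute improvement is the negative ratio of the attribute and cost coefficients. The coefficient values below are hypothetical, not the paper's estimates.

```python
# WTP from conditional logit coefficients: with utility linear in attributes
# and cost, WTP = -beta_attribute / beta_cost. Coefficients are hypothetical.

def willingness_to_pay(beta_attribute, beta_cost):
    return -beta_attribute / beta_cost

# Assume utility rises 0.8 per unit of surface water reliability and falls
# 0.002 per rupee of irrigation fee, implying WTP of 400 rupees per unit
wtp_reliability = willingness_to_pay(0.8, -0.002)
```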

  8. Reimagining cost recovery in Pakistan's irrigation system through willingness-to-pay estimates for irrigation water from a discrete choice experiment

    PubMed Central

    Bell, Andrew Reid; Shah, M Azeem Ali; Ward, Patrick S

    2014-01-01

    It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies. PMID:25552779

  9. System Architectural Considerations on Reliable Guidance, Navigation, and Control (GN and C) for Constellation Program (CxP) Spacecraft

    NASA Technical Reports Server (NTRS)

    Dennehy, Cornelius J.

    2010-01-01

    This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.

  10. Using Reliability to Meet Z540.3's 2 percent Rule

    NASA Technical Reports Server (NTRS)

    Mimbs, Scott M.

    2011-01-01

    NASA's Kennedy Space Center (KSC) undertook implementation of ANSI/NCSL Z540.3-2006 in October 2008. Early in the implementation, KSC identified that the largest cost driver of Z540.3 implementation is measurement uncertainty analyses for legacy calibration processes. NASA, like other organizations, has a significant inventory of measuring and test equipment (MTE) that have documented calibration procedures without documented measurement uncertainties. This paper provides background information to support the rationale for using high in-tolerance reliability as evidence of compliance to the 2% probability of false acceptance (PFA) quality metric of ANSI/NCSL Z540.3-2006 allowing use of qualifying legacy processes. NASA is adopting this as policy and is recommending NCSL International consider this as a method of compliance to Z540.3. Topics covered include compliance issues, using end-of-period reliability (EOPR) to estimate test point uncertainty, reliability data influences within the PFA model, the validity of EOPR data, and an appendix covering "observed" versus "true" EOPR.
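
    The intuition behind using high end-of-period reliability as evidence of low false-accept risk can be illustrated with a Monte Carlo sketch; this is not KSC's actual analysis, and the normal distributions, sigmas, and tolerance below are assumed purely for illustration. When the unit-under-test population sits well inside the tolerance and the measurement process has a reasonable accuracy ratio, few out-of-tolerance items measure as passing.

```python
import random

# Monte Carlo estimate of false-accept probability (PFA): the fraction of
# items whose true value is out of tolerance but whose measured value falls
# inside the acceptance limits. All distributions are assumed, illustrative.

def simulate_pfa(sigma_uut=0.4, sigma_meas=0.25, tol=1.0, n=200_000, seed=1):
    rng = random.Random(seed)
    false_accepts = 0
    for _ in range(n):
        true_value = rng.gauss(0.0, sigma_uut)               # item's true error
        measured = true_value + rng.gauss(0.0, sigma_meas)   # calibration reading
        if abs(measured) <= tol and abs(true_value) > tol:
            false_accepts += 1                               # accepted, out of tol
    return false_accepts / n

pfa = simulate_pfa()
```

    With these assumed parameters the in-tolerance probability is around 99%, and the simulated PFA lands well under the 2% limit, which is the qualitative argument for letting high observed reliability stand in for a full uncertainty analysis.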

  11. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
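
    The classical safety-index expression the method builds on can be shown in a few lines: for independent, normally distributed strength R and stress S, the index is the standardized margin between them and reliability is the normal CDF of that index. The strength and stress statistics below are illustrative only, not from the paper.

```python
import math

# First-order safety index for independent normal strength R and stress S:
# beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2), reliability = Phi(beta).
# The input statistics are invented for illustration.

def safety_index(mu_r, sig_r, mu_s, sig_s):
    beta = (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)
    reliability = 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))  # Phi(beta)
    return beta, reliability

# Strength 60 +/- 3 ksi against applied stress 45 +/- 4 ksi
beta, rel = safety_index(60.0, 3.0, 45.0, 4.0)
```

    Here the margin of 15 ksi against a combined scatter of 5 ksi gives beta = 3, i.e. a reliability of about 0.99865; a specified reliability can then be inverted into the design factor the paper uses in place of the conventional safety factor.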

  12. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to handle applications with irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural-network-based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  13. Accelerated Comparative Fatigue Strength Testing of Belt Adhesive Joints

    NASA Astrophysics Data System (ADS)

    Bajda, Miroslaw; Blazej, Ryszard; Jurdziak, Leszek

    2017-12-01

    Belt joints are the weakest link in the serial structure that forms an endless loop of spliced belt segments. This is due not only to the lower strength of adhesive joints in textile belts compared with vulcanized splices, but also to the replacement of traditional glues with more ecological ones that have different strength parameters. It is reflected in the lowered durability of adhesive joints, which in underground coal mines is barely half the operating time of the belts. Vulcanized splices require high precision in execution, need a long time for cross-linking of the friction mixture, and, above all, require specialized equipment (a vulcanization press) that is not readily available and often takes considerable time to deliver underground, which means reduced mining output or even downtime. All this reduces the reliability and durability of adhesive joints. In addition, due to consolidation on the Polish coal market, mines are merged into large economic units serviced by a smaller number of processing plants. The consequence is longer transport routes downstream and higher reliability requirements. A greater number of conveyors in the chain reduces the reliability of supply and increases production losses. With the high fixed costs of underground mines, a reduction in mining output is reflected in an increase in unit costs, and at low coal prices on the market this can mean substantial losses for mines. The paper describes a comparative study of the fatigue strength of shortened samples of adhesive joints, conducted to compare many different variants of joints (various adhesives and materials). Shortened samples were exposed to accelerated fatigue in the usually long-lasting dynamic studies, allowing more variants to be tested at the same time. The high correlation between results obtained for shortened (100 mm) and traditional full-length (3×250 mm) samples renders accelerated tests possible.
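    The claim of high correlation between shortened and full-length samples rests on a plain Pearson correlation between the two strength series. The sketch below uses invented joint-strength values purely to illustrate the computation; it does not reproduce the paper's data.

```python
# Pearson correlation between two paired measurement series, computed
# from scratch. The strength values (kN/m) are invented for illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

short_100mm = [310, 295, 402, 350, 288]   # invented shortened-sample strengths
full_length = [320, 301, 395, 361, 290]   # invented full-length strengths
print(round(pearson(short_100mm, full_length), 3))  # close to 1: tests agree
```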

  14. Design of a heatpipe-cooled Mars-surface fission reactor

    NASA Astrophysics Data System (ADS)

    Poston, David I.; Kapernick, Richard J.; Guffee, Ray M.; Reid, Robert S.; Lipinski, Ronald J.; Wright, Steven A.; Talandis, Regina A.

    2002-01-01

    The next generation of robotic missions to Mars will most likely require robust power sources in the range of 3 to 20 kWe. Fission systems are well suited to provide safe, reliable, and economic power within this range. The goal of this study is to design a compact, low-mass fission system that meets Mars-surface power requirements while maintaining a high level of safety and reliability at a relatively low cost. The Heatpipe Power System (HPS) is one possible approach for producing near-term, low-cost space fission power. The goal of the HPS project is to devise an attractive space fission system that can be developed quickly and affordably. The primary ways of doing this are by using existing technology and by designing the system for inexpensive testing. If the system can be designed to allow highly prototypic testing with electrical heating, then an exhaustive test program can be carried out quickly and inexpensively, and thorough testing of the actual flight unit can be performed, which is a major benefit to reliability. Over the past 4 years, three small HPS proof-of-concept technology demonstrations have been conducted, and each has been highly successful. The Heatpipe-Operated Mars Exploration Reactor (HOMER) is a derivative of the HPS designed especially for producing power on the surface of Mars. The HOMER-15 is a 15-kWt reactor that couples with a 3-kWe Stirling engine power system. The reactor contains stainless-steel (SS)-clad uranium nitride (UN) fuel pins that are structurally and thermally bonded to SS/sodium heatpipes. Fission energy is conducted from the fuel pins to the heatpipes, which then carry the heat to the Stirling engine. This paper describes the attributes, specifications, and performance of a 15-kWt HOMER reactor.

  15. Summary: High Temperature Downhole Motor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raymond, David W.

    2017-10-01

    Directional drilling can be used to enable multi-lateral completions from a single well pad to improve well productivity and decrease environmental impact. Downhole rotation is typically developed with a motor in the Bottom Hole Assembly (BHA) that develops drilling power (speed and torque) necessary to drive rock reduction mechanisms (i.e., the bit) apart from the rotation developed by the surface rig. Historically, wellbore deviation has been introduced by a “bent-sub,” located in the BHA, that introduces a small angular deviation, typically less than 3 degrees, to allow the bit to drill off-axis with orientation of the BHA controlled at the surface. The development of a high temperature downhole motor would allow reliable use of bent subs for geothermal directional drilling. Sandia National Laboratories is pursuing the development of a high temperature motor that will operate on either drilling fluid (water-based mud) or compressed air to enable drilling high temperature, high strength, fractured rock. The project consists of designing a power section based upon geothermal drilling requirements; modeling and analysis of potential solutions; and design, development and testing of prototype hardware to validate the concept. Drilling costs contribute substantially to geothermal electricity production costs. The present development will result in more reliable access to deep, hot geothermal resources and allow preferential wellbore trajectories to be achieved. This will enable development of geothermal wells with multi-lateral completions resulting in improved geothermal resource recovery, decreased environmental impact and enhanced well construction economics.

  16. Implications of scaling on static RAM bit cell stability and reliability

    NASA Astrophysics Data System (ADS)

    Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael

    1993-01-01

    In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long-term reliability while migrating to higher-density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high-density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. The results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.
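    The 100-FIT criterion quoted above converts directly between observed failures, device-hours, and failure rate. A worked example with invented life-test numbers:

```python
# FIT (failures-in-time) arithmetic: 100 FITs = 100 failures per 1e9
# device-hours. The test population and duration below are illustrative.

def fit_rate(failures, devices, hours):
    """Observed failure rate in FITs from a life test."""
    return failures / (devices * hours) * 1e9

# e.g. 2 failures across 10,000 SRAMs each tested for 2,000 hours
rate = fit_rate(2, 10_000, 2_000)
print(rate)                 # 100.0 FITs -- right at the criterion
mtbf_hours = 1e9 / rate     # equivalent mean time between failures
print(mtbf_hours)           # 10,000,000 device-hours per failure
```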

  17. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    PubMed Central

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-01-01

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474
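    The idea of a link cost metric that selects the most reliable routing path can be sketched with a standard shortest-path search: if each ISL is assigned a stability value interpreted as a survival probability, the most reliable path maximizes the product of stabilities, i.e., minimizes the sum of -log(stability). This is a hedged illustration, not the paper's RAT algorithm; the topology and stability values are invented.

```python
# Dijkstra over link costs -log(p): the minimum-cost path is the route
# whose product of link stabilities is highest. Graph data is invented.
import heapq, math

def most_reliable_path(links, src, dst):
    graph = {}
    for u, v, p in links:               # undirected ISLs, stability p
        graph.setdefault(u, []).append((v, -math.log(p)))
        graph.setdefault(v, []).append((u, -math.log(p)))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):   # stale queue entry
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])

links = [("A", "B", 0.99), ("B", "C", 0.90), ("A", "C", 0.80)]
path, reliability = most_reliable_path(links, "A", "C")
print(path, round(reliability, 3))  # two-hop route beats the weak direct link
```

    The two-hop route A-B-C (0.99 × 0.90 = 0.891) is preferred over the direct but less stable link A-C (0.80), which is the behavior a stability-based link cost metric is meant to produce.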

  18. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    PubMed

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-02-23

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites' relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  19. Process for the physical segregation of minerals

    DOEpatents

    Yingling, Jon C.; Ganguli, Rajive

    2004-01-06

    With highly heterogeneous groups or streams of minerals, physical segregation using online quality measurements is an economically important first stage of the mineral beneficiation process. Segregation enables high quality fractions of the stream to bypass processing, such as cleaning operations, thereby reducing the associated costs and avoiding the yield losses inherent in any downstream separation process. The present invention includes various methods for reliably segregating a mineral stream into at least one fraction meeting desired quality specifications while at the same time maximizing yield of that fraction.

  20. Induction annealing and subsequent quenching: effect on the thermoelectric properties of boron-doped nanographite ensembles.

    PubMed

    Xie, Ming; Lee, Chee Huei; Wang, Jiesheng; Yap, Yoke Khin; Bruno, Paola; Gruen, Dieter; Singh, Dileep; Routbort, Jules

    2010-04-01

    Boron-doped nanographite ensembles (NGEs) are interesting thermoelectric nanomaterials for high temperature applications. Rapid induction annealing and quenching has been applied to boron-doped NGEs using a relatively low-cost, highly reliable, laboratory built furnace to show that substantial improvements in thermoelectric power factors can be achieved using this methodology. Details of the design and performance of this compact induction furnace as well as results of the thermoelectric measurements will be reported here.

  1. Space solar array reliability: A study and recommendations

    NASA Astrophysics Data System (ADS)

    Brandhorst, Henry W., Jr.; Rodiek, Julie A.

    2008-12-01

    Providing reliable power over the anticipated mission life is critical to all satellites; therefore solar arrays are one of the most vital links to satellite mission success. Furthermore, solar arrays are exposed to the harshest environment of virtually any satellite component. In the past 10 years 117 satellite solar array anomalies have been recorded with 12 resulting in total satellite failure. Through an in-depth analysis of satellite anomalies listed in the Airclaim's Ascend SpaceTrak database, it is clear that solar array reliability is a serious, industry-wide issue. Solar array reliability directly affects the cost of future satellites through increased insurance premiums and a lack of confidence by investors. Recommendations for improving reliability through careful ground testing, standardization of testing procedures such as the emerging AIAA standards, and data sharing across the industry will be discussed. The benefits of creating a certified module and array testing facility that would certify in-space reliability will also be briefly examined. Solar array reliability is an issue that must be addressed to both reduce costs and ensure continued viability of the commercial and government assets on orbit.

  2. Modeling Imperfect Generator Behavior in Power System Operation Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krad, Ibrahim

    A key component in power system operations is the use of computer models to quickly study and analyze different operating conditions and futures in an efficient manner. The output of these models is sensitive to the data used in them as well as to the assumptions made during their execution. One typical assumption is that generators and load assets perfectly follow operator control signals. While this is a valid simulation assumption, generators may not always accurately follow control signals. This imperfect response of generators could impact cost and reliability metrics. This paper proposes a generator model that captures this imperfect behavior and examines its impact on production costs and reliability metrics using a steady-state power system operations model. Preliminary analysis shows that while costs remain relatively unchanged, there could be significant impacts on reliability metrics.
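    The imperfect-response idea can be sketched as a generator that tracks its setpoint through a first-order lag with a ramp-rate limit instead of following it exactly. The model form and all parameter values below are invented for illustration; they are not the paper's generator model.

```python
# One invented "imperfect generator": each step it moves a lagged
# fraction of the way toward the setpoint, clipped by a ramp limit.
import math

def respond(output, setpoint, tau=4.0, ramp_limit=5.0, dt=1.0):
    """One time step of a lagged, ramp-rate-limited response (MW)."""
    desired_move = (setpoint - output) * (1 - math.exp(-dt / tau))
    move = max(-ramp_limit * dt, min(ramp_limit * dt, desired_move))
    return output + move

out = 100.0
for _ in range(10):            # operator setpoint steps from 100 to 140 MW
    out = respond(out, 140.0)
print(round(out, 1))           # still lags short of the 140 MW target
```

    A perfectly responsive generator would sit at 140 MW immediately; the persistent gap is the kind of deviation that leaves costs nearly unchanged but can degrade reliability metrics.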

  3. What does an MRI scan cost?

    PubMed

    Young, David W

    2015-11-01

    Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
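    The difference between the two methods is easy to see in a toy calculation. The department totals and activity rates below are invented; the point is only that RCC scales cost with charge, while ABC sums the resources a scan actually consumes.

```python
# RCC vs. ABC costing for one hypothetical MRI scan. All figures invented.

dept_cost, dept_charges = 1_200_000, 3_000_000   # annual totals ($)

# RCC: every scan's cost is assumed proportional to its charge
rcc = dept_cost / dept_charges
scan_charge = 1_000
print(rcc * scan_charge)          # RCC estimate: $400 per scan

# ABC: cost built up from the activities the scan actually consumes
activities = {
    "scanner_time": 45 * 4.0,     # minutes x cost-per-minute
    "technologist": 45 * 1.2,
    "contrast_supplies": 60.0,
    "scheduling_overhead": 25.0,
}
print(sum(activities.values()))   # ABC estimate: $319 per scan
```

    The gap between the two estimates is exactly the inaccuracy the abstract attributes to the RCC method.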

  4. Multiple mini-interviews: same concept, different approaches.

    PubMed

    Knorr, Mirjana; Hissbach, Johanna

    2014-12-01

    Increasing numbers of educational institutions in the medical field choose to replace their conventional admissions interviews with a multiple mini-interview (MMI) format because the latter has superior reliability values and reduces interviewer bias. As the MMI format can be adapted to the conditions of each institution, the question of under which circumstances an MMI is most expedient remains unresolved. This article systematically reviews the existing MMI literature to identify the aspects of MMI design that have an impact on the reliability, validity, and cost-efficiency of the format. Three electronic databases (OVID, PubMed, Web of Science) were searched for any publications in which MMIs and related approaches were discussed. Sixty-six publications were included in the analysis. Forty studies reported reliability values. Generally, raising the number of stations has more impact on reliability than raising the number of raters per station. Other factors with positive influence include the exclusion of stations that are too easy, and the use of normative anchored rating scales or skills-based rater training. Data on criterion-related validities and analyses of dimensionality were found in 31 studies. Irrespective of design differences, the relationship between MMI results and academic measures is small to zero. The McMaster University MMI predicts in-programme and licensing examination performance. Construct validity analyses are mostly exploratory and their results are inconclusive. Seven publications gave information on required resources or provided suggestions on how to save costs. The most relevant cost factors additional to those of conventional interviews are the costs of station development and actor payments. The MMI literature provides useful recommendations for reliable and cost-efficient MMI designs, but some important aspects have not yet been fully explored. More theory-driven research is needed concerning dimensionality and construct validity, the predictive validity of MMIs other than those of McMaster University, the comparison of station types, and a cost-efficient station development process. © 2014 John Wiley & Sons Ltd.
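    The "more stations" finding can be illustrated with the Spearman-Brown prophecy formula, treating each added station as a parallel measurement (a simplification that ignores rater variance). The baseline single-station reliability used here is an assumed value, not one taken from the reviewed studies.

```python
# Spearman-Brown prophecy: predicted reliability when a test is
# lengthened k-fold. Baseline reliability r_single is an assumption.

def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

r = 0.30                                  # assumed single-station reliability
print(round(spearman_brown(r, 8), 2))     # 8 stations
print(round(spearman_brown(r, 12), 2))    # 12 stations: further gain
```

    Even a weak single-station reliability compounds into a usable overall value as stations are added, which is why station count dominates MMI design decisions.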

  5. Reliable cues and signals of fruit quality are contingent on the habitat in black elder (Sambucus nigra).

    PubMed

    Schaefer, H Martin; Braun, Julius

    2009-06-01

    Communication mediates interactions between organisms and can be based on signals or cues. Signals are selected for their signaling function, whereas cues evolve for reasons other than signaling. To be evolutionarily stable, communication needs to be reliable on average, but the mechanisms that enforce reliability are hotly debated in light of strong environmental influence on signals and cues. While fruit quality in black elder (Sambucus nigra) is unrelated to fruit color, it is indicated by alternative pedicel phenotypes. Information on fruit quality has thus been transferred from the fruit to the developmentally associated pedicels, which are environmentally determined cues. Within each phenotype, color variation indicates fruit quality. Communication by black elder is thus reliable, but the proximate mechanisms enforcing reliability are habitat specific. High irradiance increases both the contrasts of the visual cue and fruit quality in the anthocyanin-based red pedicel phenotype, while shaded plants of the chlorophyll-based green phenotype apparently use signals by forgoing photosynthesis. This is because lower chlorophyll content in green pedicels creates contrasting pedicels, and higher contrasts indicate higher sugar content in the fruits of green pedicels. Because anthocyanins are light-induced, plants use cues when exposed to high irradiance, whereas they apparently use costly signals in the shade by reducing chlorophyll content in the pedicels. In behavioral field and laboratory experiments we document that avian seed dispersers select among pedicel phenotypes that indicate different fruit quality. Plants can thus increase their reproductive success by sending highly informative cues. Our results indicate how reliable information transfer can be maintained both in cues and signals in spite of substantial environmental influence on visual traits.

  6. High sample throughput genotyping for estimating C-lineage introgression in the dark honeybee: an accurate and cost-effective SNP-based tool.

    PubMed

    Henriques, Dora; Browne, Keith A; Barnett, Mark W; Parejo, Melanie; Kryger, Per; Freeman, Tom C; Muñoz, Irene; Garnery, Lionel; Highet, Fiona; Jonhston, J Spencer; McCormack, Grace P; Pinto, M Alice

    2018-06-04

    The natural distribution of the honeybee (Apis mellifera L.) has been changed by humans in recent decades to such an extent that the formerly widest-spread European subspecies, Apis mellifera mellifera, is threatened by extinction through introgression from highly divergent commercial strains in large tracts of its range. Conservation efforts for A. m. mellifera are underway in multiple European countries requiring reliable and cost-efficient molecular tools to identify purebred colonies. Here, we developed four ancestry-informative SNP assays for high sample throughput genotyping using the iPLEX Mass Array system. Our customized assays were tested on DNA from individual and pooled, haploid and diploid honeybee samples extracted from different tissues using a diverse range of protocols. The assays had a high genotyping success rate and yielded accurate genotypes. Performance assessed against whole-genome data showed that individual assays behaved well, although the most accurate introgression estimates were obtained for the four assays combined (117 SNPs). The best compromise between accuracy and genotyping costs was achieved when combining two assays (62 SNPs). We provide a ready-to-use cost-effective tool for accurate molecular identification and estimation of introgression levels to more effectively monitor and manage A. m. mellifera conservatories.

  7. Highly Conductive and Reliable Copper-Filled Isotropically Conductive Adhesives Using Organic Acids for Oxidation Prevention

    NASA Astrophysics Data System (ADS)

    Chen, Wenjun; Deng, Dunying; Cheng, Yuanrong; Xiao, Fei

    2015-07-01

    The easy oxidation of copper is one critical obstacle to high-performance copper-filled isotropically conductive adhesives (ICAs). In this paper, a facile method to prepare highly reliable, highly conductive, and low-cost ICAs is reported. The copper fillers were treated with organic acids for oxidation prevention. Compared with ICA filled with untreated copper flakes, the ICA filled with copper flakes treated by different organic acids exhibited much lower bulk resistivity. The lowest bulk resistivity achieved was 4.5 × 10⁻⁵ Ω cm, which is comparable to that of commercially available Ag-filled ICA. After 500 h of 85°C/85% relative humidity (RH) aging, the treated ICAs showed quite stable bulk resistivity and relatively stable contact resistance. Through analyzing the results of x-ray diffraction, x-ray photoelectron spectroscopy, and thermogravimetric analysis, we found that, with the assistance of organic acids, the treated copper flakes exhibited resistance to oxidation, thus guaranteeing good performance.

  8. VHSIC Electronics and the Cost of Air Force Avionics in the 1990s

    DTIC Science & Technology

    1990-11-01

    circuit. LRM Line replaceable module. LRU Line replaceable unit. LSI Large-scale integration. LSTTL Low-power Schottky Transistor-to-Transistor Logic...displays, communications/navigation/identification, electronic combat equipment, dispensers, and computers. These CERs, which statistically relate the...some of the reliability numbers, and adding the F-15 and F-16 to obtain the data sample shown in Table 6. Both suite costs and reliability statistics

  9. Short communication: Genotyping of cows to speed up availability of genomic estimated breeding values for direct health traits in Austrian Fleckvieh (Simmental) cattle--genetic and economic aspects.

    PubMed

    Egger-Danner, C; Schwarzenbacher, H; Willam, A

    2014-07-01

    The aim of this study was to quantify the impact of genotyping cows with reliable phenotypes for direct health traits on annual monetary genetic gain (AMGG) and discounted profit. The calculations were based on a deterministic approach using ZPLAN software (University of Hohenheim, Stuttgart, Germany). It was assumed that increases in reliability of the total merit index (TMI) of 5, 15, and 25 percentage points were achieved through genotyping 5,000, 25,000, and 50,000 cows, respectively. Costs for phenotyping, genotyping, and genomic estimated breeding values vary between €20 and €150 per cow. The gain from genotyping cows is greater for traits with medium to high heritability than for direct health traits with low heritability. The AMGG is increased by 1.5% if the reliability of the TMI is 5 percentage points higher (i.e., 5,000 cows genotyped), and a 6.53% higher AMGG can be expected when the reliability of the TMI is increased by 25 percentage points (i.e., 50,000 cows genotyped). The discounted profit depends not only on the costs of genotyping but also on the population size. This study indicates that genotyping cows with reliable phenotypes is a feasible way to speed up the availability of genomic estimated breeding values for direct health traits. However, because of the large number of valid phenotypes and genotypes needed to establish an efficient genomic evaluation, it is likely that financial constraints will be the main limiting factor for implementation into breeding programs such as that of Fleckvieh Austria. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  10. Independent predictors of reliability between full time employee-dependent acquisition of functional outcomes compared to non-full time employee-dependent methodologies: a prospective single institutional study.

    PubMed

    Adogwa, Owoicho; Elsamadicy, Aladine A; Cheng, Joseph; Bagley, Carlos

    2016-03-01

    The prospective acquisition of reliable patient-reported outcomes (PRO) measures demonstrating the effectiveness of spine surgery, or lack thereof, remains a challenge. The aims of this study are to compare the reliability of functional outcome metrics obtained using full-time employee (FTE)-dependent vs. non-FTE-dependent methodologies and to determine the independent predictors of response reliability using non-FTE-dependent methodologies. One hundred and nineteen adult patients (male: 65, female: 54) undergoing one- and two-level lumbar fusions at Duke University Medical Center were enrolled in this prospective study. Enrollment criteria included available demographic, clinical, and baseline functional outcomes data. All patients were administered two similar sets of baseline questionnaires: (I) phone interviews (FTE-dependent) and (II) hardcopy in clinic (patient self-survey, non-FTE-dependent). All patients had at least a two-week washout period between phone interviews and in-clinic self-surveys to minimize the effect of recall. Questionnaires included the Oswestry disability index (ODI) and Visual Analog Back and Leg Pain Scales (VAS-BP/LP). Reliability was assessed by the degree to which patient responses to baseline questionnaires differed between both time points. Overall, 26.89% had a history of an anxiety disorder and 28.57% reported a history of depression. At least 97.47% of patients had a High School Diploma or GED, with 49.57% attaining a 4-year college degree or post-graduate degree. 29.94% reported full-time employment and 14.28% were on disability. There was a very high correlation between baseline PRO data captured by FTE-dependent compared to non-FTE-dependent methodologies (r=0.89). In a multivariate logistic regression model, the absence of anxiety and depression, higher levels of education (college or greater), and full-time employment were independently associated with high response reliability using non-FTE-dependent methodologies. Our study suggests that capturing health-related quality of life data using non-FTE-dependent methodologies is highly reliable and may be a more cost-effective alternative. Well-educated patients who are employed full-time appear to be the most reliable.

  11. Air-bridged Ohmic contact on vertically aligned si nanowire arrays: application to molecule sensors.

    PubMed

    Han, Hee; Kim, Jungkil; Shin, Ho Sun; Song, Jae Yong; Lee, Woo

    2012-05-02

    A simple, cost-effective, and highly reliable method for constructing an air-bridged electrical contact on large arrays of vertically aligned nanowires was developed. The present method may open up new opportunities for developing advanced nanowire-based devices for energy harvest and storage, power generation, and sensing applications. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Investigation of test methods, material properties, and processes for solar cell encapsulants

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Photovoltaic (PV) modules consist of a string of electrically interconnected silicon solar cells capable of producing practical quantities of electrical power when exposed to sunlight. To ensure high reliability and long-term performance, the functional components of the solar cell module must be adequately protected from the environment by some encapsulation technique. The encapsulation system must provide mechanical support for the cells and corrosion protection for the electrical components. The goal of the program is to identify and develop encapsulation systems consistent with the PV module operating requirements of a 30-year life and a target cost of $0.70 per peak watt ($70 per square meter) (1980 dollars). Assuming a module efficiency of ten percent, which is equivalent to a power output of 100 watts per square meter in midday sunlight, the capital cost of the modules may be calculated to be $70.00 per square meter. Of this cost goal, only 20 percent is available for encapsulation, due to the high cost of the cells, interconnects, and other related components. The encapsulation cost allocation may then be stated as $14.00 per square meter, including all coatings, pottant, and mechanical supports for the cells.
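    The cost allocation stated in the abstract follows from two lines of arithmetic, reproduced here:

```python
# Reproducing the abstract's encapsulation budget (1980 dollars).
target_cost_per_wp = 0.70          # $ per peak watt
watts_per_m2 = 100                 # 10% efficient module in midday sunlight

module_cost_per_m2 = target_cost_per_wp * watts_per_m2
encapsulation_share = 0.20         # fraction left after cells, interconnects
encapsulation_budget = module_cost_per_m2 * encapsulation_share
print(module_cost_per_m2, encapsulation_budget)  # 70.0 14.0 ($/m^2)
```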

  13. High gain solar photovoltaics

    NASA Astrophysics Data System (ADS)

    MacDonald, B.; Finot, M.; Heiken, B.; Trowbridge, T.; Ackler, H.; Leonard, L.; Johnson, E.; Chang, B.; Keating, T.

    2009-08-01

    Skyline Solar Inc. has developed a novel silicon-based PV system to simultaneously reduce energy cost and improve scalability of solar energy. The system achieves high gain through a combination of high capacity factor and optical concentration. The design approach drives innovation not only into the details of the system hardware, but also into manufacturing and deployment-related costs and bottlenecks. The result of this philosophy is a modular PV system whose manufacturing strategy relies only on currently existing silicon solar cell, module, reflector and aluminum parts supply chains, as well as turnkey PV module production lines and metal fabrication industries that already exist at enormous scale. Furthermore, with a high gain system design, the generating capacity of all components is multiplied, leading to a rapidly scalable system. The product design and commercialization strategy cooperate synergistically to promise dramatically lower LCOE with substantially lower risk relative to materials-intensive innovations. In this paper, we will present the key design aspects of Skyline's system, including aspects of the optical, mechanical and thermal components, revealing the ease of scalability, low cost and high performance. Additionally, we will present performance and reliability results on modules and the system, using ASTM and UL/IEC methodologies.
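    The LCOE (levelized cost of energy) figure behind claims like these is typically computed by annualizing capital cost with a capital recovery factor. The sketch below is a generic textbook-style calculation with invented inputs, not Skyline Solar's actual cost model.

```python
# Simple LCOE: annualized capital cost plus O&M, divided by annual energy.
# All inputs (capex, O&M, yield, discount rate, lifetime) are invented.

def lcoe(capex, opex_per_year, annual_kwh, rate=0.08, years=25):
    # capital recovery factor spreads the up-front cost over the lifetime
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + opex_per_year) / annual_kwh

print(round(lcoe(capex=2_000_000, opex_per_year=20_000,
                 annual_kwh=1_800_000), 3))  # $/kWh
```

    Lowering capex per watt or raising annual yield (the "high gain" levers in the abstract) both push this figure down.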

  14. Ring-like reliable PON planning with physical constraints for a smart grid

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gu, Rentao; Ji, Yuefeng

    2016-01-01

    Due to the high reliability requirements in the communication networks of a smart grid, a ring-like reliable PON is an ideal choice to carry power distribution information. Economical network planning is also very important for the smart grid communication infrastructure. Although the ring-like reliable PON has been widely used in real applications, as far as we know, little research has been done on network optimization for the ring-like reliable PON. Most PON planning studies only consider a star-like topology or a cascaded PON network, which barely guarantees the reliability requirements of the smart grid. In this paper, we mainly investigate the economical network planning problem for the ring-like reliable PON of the smart grid. To address this issue, we build a mathematical model for the planning problem of the ring-like reliable PON, with the objective of minimizing the total deployment costs under physical constraints. The model is simplified such that all of the nodes have the same properties, except the OLT, because each potential splitter site can be located in the same ONU position in power communication networks. The simplified model is used to construct an optimal main tree topology in the complete graph and a backup-protected tree topology in the residual graph. An efficient heuristic algorithm, called the Constraints and Minimal Weight Oriented Fast Searching Algorithm (CMW-FSA), is proposed. In CMW-FSA, a feasible solution can be obtained directly with oriented constraints and a few recursive search processes. From the simulation results, the proposed planning model and CMW-FSA are verified to be accurate (error rates below 0.4%) and effective compared with the exact solution (CAESA), especially in small and sparse scenarios. The CMW-FSA significantly reduces the computation time compared with the CAESA, and its time complexity is acceptable at T(n) = O(n³). After evaluating the effects of the parameters of the two PON systems, the total planning costs of each scenario show a general declining trend and reach a threshold as the respective maximal transmission distances and maximal time delays increase.
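    The CMW-FSA itself is not reproduced here, but its core step of growing a minimum-weight main tree outward from the OLT can be sketched with a Prim-style greedy search (a simplified illustration only; the node names and link costs below are hypothetical, not from the paper):

```python
import heapq

def min_cost_tree(nodes, cost, root):
    """Grow a minimum-weight spanning tree outward from the OLT
    (Prim-style greedy search over link deployment costs)."""
    visited = {root}
    edges = [(cost[root][n], root, n) for n in nodes if n != root]
    heapq.heapify(edges)
    tree = []
    while edges and len(visited) < len(nodes):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue  # a cheaper edge already reached this node
        visited.add(v)
        tree.append((u, v, w))
        for n in nodes:
            if n not in visited:
                heapq.heappush(edges, (cost[v][n], v, n))
    return tree

# Hypothetical 4-node deployment: the OLT plus three ONU/splitter sites.
c = {
    "OLT": {"A": 4, "B": 2, "C": 7},
    "A": {"OLT": 4, "B": 1, "C": 3},
    "B": {"OLT": 2, "A": 1, "C": 5},
    "C": {"OLT": 7, "A": 3, "B": 5},
}
tree = min_cost_tree(list(c), c, "OLT")
print(sum(w for _, _, w in tree))  # total deployment cost of the main tree
```

    The full algorithm additionally enforces transmission-distance and time-delay constraints and builds a backup-protected tree in the residual graph, which this sketch omits.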

  15. Minimize system cost by choosing optimal subsystem reliability and redundancy

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1993-01-01

    The basic question which we address in this paper is how to choose among competing subsystems. This paper utilizes both reliabilities and costs to find the subsystems with the lowest overall expected cost. The paper begins by reviewing some of the concepts of expected value. We then address the problem of choosing among several competing subsystems. These concepts are then applied to k-out-of-n: G subsystems. We illustrate the use of the authors' basic program in viewing a range of possible solutions for several different examples. We then discuss the implications of various solutions in these examples.
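    The selection criterion described above can be illustrated for k-out-of-n:G subsystems, where the subsystem with the lowest overall expected cost is preferred (a simplified sketch; the unit costs, reliabilities, and failure penalty below are hypothetical):

```python
from math import comb

def k_out_of_n_reliability(p, k, n):
    """Probability that at least k of n independent components, each with
    reliability p, are working (the k-out-of-n:G success condition)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def expected_cost(unit_cost, n, p, k, failure_cost):
    """Hardware cost plus the expected penalty incurred when fewer than
    k of the n components survive."""
    r = k_out_of_n_reliability(p, k, n)
    return n * unit_cost + (1 - r) * failure_cost

# Hypothetical comparison: one expensive, highly reliable unit versus a
# 2-out-of-3:G arrangement of cheaper, less reliable units.
single = expected_cost(unit_cost=90.0, n=1, p=0.99, k=1, failure_cost=1000.0)
redundant = expected_cost(unit_cost=30.0, n=3, p=0.95, k=2, failure_cost=1000.0)
print(round(single, 2), round(redundant, 2))  # prints 100.0 97.25
```

    Despite the lower per-unit reliability, the redundant configuration wins here because the large failure penalty rewards the higher system-level reliability (0.99275 versus 0.99).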

  16. Compact and low-cost fiber optic thermometer

    NASA Astrophysics Data System (ADS)

    Sun, Mei H.

    1997-06-01

    Commercial fiber optic thermometers have been available for a number of years. The early products were unreliable and high in price. However, the continuing effort in the development of new sensing techniques, along with the breakthroughs made in many areas of optoelectronics in recent years, has made the production of cost-competitive and reliable systems feasible. A fluorescence decay-time based system has been demonstrated to successfully meet both cost and performance requirements for various medical applications. A critical element in the success of this low-cost and compact fiber optic thermometer is the fluorescent sensor material. Its very high quantum efficiency, operating wavelengths, and temperature sensitivity helped significantly to simplify the design requirements for the optics and the electronics. The one- to eight-channel unit contains one to eight modules of a simple optical assembly: an LED light source, a small lens, and a filter housed in an injection-molded plastic container. Both the electronics and the optics reside on a small printed circuit board of approximately 6 inches by 3 inches. This system can be packaged as a stand-alone unit or embedded in original equipment manufacturer (OEM) systems.

  17. High-Temperature Storage Testing of ACF Attached Sensor Structures

    PubMed Central

    Lahokallio, Sanna; Hoikkanen, Maija; Vuorinen, Jyrki; Frisk, Laura

    2015-01-01

    Several electronic applications must withstand elevated temperatures during their lifetime. Materials and packages for use in high temperatures have been designed, but they are often very expensive, have limited compatibility with materials, structures, and processing techniques, and are less readily available than traditional materials. Thus, there is an increasing interest in using low-cost polymer materials in high temperature applications. This paper studies the performance and reliability of sensor structures attached with anisotropically conductive adhesive film (ACF) on two different organic printed circuit board (PCB) materials: FR-4 and Rogers. The test samples were aged at 200 °C and 240 °C and monitored electrically during the test. Material characterization techniques were also used to analyze the behavior of the materials. Rogers PCB was observed to be more stable at high temperatures in spite of degradation observed, especially during the first 120 h of aging. The electrical reliability was very good with Rogers. At 200 °C, the failures occurred after 2000 h of testing, and even at 240 °C the interconnections were functional for 400 h. The study indicates that, even though these ACFs were not designed for use in high temperatures, with stable PCB material they are promising interconnection materials at elevated temperatures, especially at 200 °C. However, the fragility of the structure due to material degradation may cause reliability problems in long-term high temperature exposure. PMID:28793735

  18. The cost of construction delays and traffic control for life-cycle cost analysis of pavements

    DOT National Transportation Integrated Search

    2002-03-01

    The objective of this report is to provide the Kentucky Transportation Cabinet a reliable approach to quantifying/calculating "Road User Cost"--often referred to as total user delay costs. To meet this objective, this report is divided into three mai...

  19. Low cost high efficiency GaAs monolithic RF module for SARSAT distress beacons

    NASA Technical Reports Server (NTRS)

    Petersen, W. C.; Siu, D. P.; Cook, H. F.

    1991-01-01

    Low-cost, high-performance (5 W output) 406 MHz beacons are urgently needed to realize the maximum utilization of the Search and Rescue Satellite-Aided Tracking (SARSAT) system spearheaded in the U.S. by NASA. Although current technology can produce beacons meeting the output power requirement, power consumption is high due to the low efficiency of available transmitters. Field performance is currently unsatisfactory due to the lack of safe and reliable high-density batteries capable of operation at -40 C. Low-cost production is also a crucial but elusive requirement for the ultimate wide-scale utilization of this system. Microwave Monolithics Incorporated (MMInc.) has proposed to make both the technical and cost goals for the SARSAT beacon attainable by developing a monolithic GaAs chip set for the RF module. This chip set consists of a high-efficiency power amplifier and a bi-phase modulator. In addition to implementing the RF module in Monolithic Microwave Integrated Circuit (MMIC) form to minimize ultimate production costs, the power amplifier has a power-added efficiency nearly twice that attained with current commercial technology. A distress beacon built using this RF module chip set will be significantly smaller in size and lighter in weight due to a smaller battery requirement, since the 406 MHz signal source and the digital controller have far lower power consumption than the 5 W power amplifier. All the program tasks have been successfully completed. The GaAs MMIC RF module chip set has been designed to be compatible with the present 406 MHz signal source and digital controller. A complete high-performance, low-cost SARSAT beacon can be realized with only minor additional iteration and systems integration.

  20. Photogrammetric Point Clouds Generation in Urban Areas from Integrated Image Matching and Segmentation

    NASA Astrophysics Data System (ADS)

    Ye, L.; Wu, B.

    2017-09-01

    High-resolution imagery is an attractive option for surveying and mapping applications due to the advantages of high quality imaging, short revisit time, and lower cost. Automated reliable and dense image matching is essential for photogrammetric 3D data derivation. Such matching, in urban areas, however, is extremely difficult, owing to the complexity of urban textures and severe occlusion problems on the images caused by tall buildings. Aimed at exploiting high-resolution imagery for 3D urban modelling applications, this paper presents an integrated image matching and segmentation approach for reliable dense matching of high-resolution imagery in urban areas. The approach is based on the framework of our existing self-adaptive triangulation constrained image matching (SATM), but incorporates three novel aspects to tackle the image matching difficulties in urban areas: 1) occlusion filtering based on image segmentation, 2) segment-adaptive similarity correlation to reduce the similarity ambiguity, 3) improved dense matching propagation to provide more reliable matches in urban areas. Experimental analyses were conducted using aerial images of Vaihingen, Germany and high-resolution satellite images in Hong Kong. The photogrammetric point clouds were generated, from which digital surface models (DSMs) were derived. They were compared with the corresponding airborne laser scanning data and the DSMs generated from the Semi-Global matching (SGM) method. The experimental results show that the proposed approach is able to produce dense and reliable matches comparable to SGM in flat areas, while for densely built-up areas, the proposed method performs better than SGM. The proposed method offers an alternative solution for 3D surface reconstruction in urban areas.

  1. Using computerised patient-level costing data for setting DRG weights: the Victorian (Australia) cost weight studies.

    PubMed

    Jackson, T

    2001-05-01

    Casemix-funding systems for hospital inpatient care require a set of resource weights which will not inadvertently distort patterns of patient care. Few health systems have very good sources of cost information, and specific studies to derive empirical cost relativities are themselves costly. This paper reports a 5-year program of research into the use of data from hospital management information systems (clinical costing systems) to estimate resource relativities for inpatient hospital care used in Victoria's DRG-based payment system. The paper briefly describes international approaches to cost weight estimation. It describes the architecture of clinical costing systems, and contrasts process and job costing approaches to cost estimation. Techniques of data validation and reliability testing developed in the conduct of four of the first five of the Victorian Cost Weight Studies (1993-1998) are described. Improvements in sampling, data validity, and reliability are documented over the course of the research program, and the advantages of patient-level data are highlighted. The usefulness of these byproduct data for estimation of relative resource weights and other policy applications may be an important factor in hospital and health system decisions to invest in clinical costing technology.

  2. Bipolar Nickel-hydrogen Batteries for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Koehler, C. W.; Vanommering, G.; Puester, N. H.; Puglisi, V. J.

    1984-01-01

    A bipolar nickel-hydrogen battery which effectively addresses all key requirements for a spacecraft power system, including long-term reliability and low mass, is discussed. The design of this battery is discussed in the context of system requirements and nickel-hydrogen battery technology in general. To achieve the ultimate goal of an aerospace application of a bipolar Ni-H2 battery several objectives must be met in the design and development of the system. These objectives include: maximization of reliability and life; high specific energy and energy density; reasonable cost of manufacture, test, and integration; and ease in scaling for growth in power requirements. These basic objectives translate into a number of specific design requirements, which are discussed.

  3. Terrapin technologies manned Mars mission proposal

    NASA Technical Reports Server (NTRS)

    Amato, Michael; Bryant, Heather; Coleman, Rodney; Compy, Chris; Crouse, Patrick; Crunkleton, Joe; Hurtado, Edgar; Iverson, Eirik; Kamosa, Mike; Kraft, Lauri (Editor)

    1990-01-01

    A Manned Mars Mission (M3) design study is proposed. The purpose of M3 is to transport 10 personnel and a habitat with all required support systems and supplies from low Earth orbit (LEO) to the surface of Mars and, after an eight-man surface expedition of 3 months, to return the personnel safely to LEO. The proposed hardware design is based on systems and components of demonstrated high capability and reliability. The mission design builds on past mission experience, but incorporates innovative design approaches to achieve mission priorities. Those priorities, in decreasing order of importance, are safety, reliability, minimum personnel transfer time, minimum weight, and minimum cost. The design demonstrates the feasibility and flexibility of a Waverider transfer module.

  4. Robot-Powered Reliability Testing at NREL's ESIF

    ScienceCinema

    Harrison, Kevin

    2018-02-14

    With auto manufacturers expected to roll out fuel cell electric vehicles in the 2015 to 2017 timeframe, the need for a reliable hydrogen fueling infrastructure is greater than ever. That's why the National Renewable Energy Laboratory (NREL) is using a robot in its Energy Systems Integration Facility (ESIF) to assess the durability of hydrogen fueling hoses, a largely untested—and currently costly—component of hydrogen fueling stations. The automated machine mimics the repetitive stress of a human bending and twisting the hose to refuel a vehicle—all under the high pressure and low temperature required to deliver hydrogen to a fuel cell vehicle's onboard storage tank.

  5. Optimizing energy for a ‘green’ vaccine supply chain

    PubMed Central

    Lloyd, John; McCarney, Steve; Ouhichi, Ramzi; Lydon, Patrick; Zaffran, Michel

    2015-01-01

    This paper describes an approach piloted in the Kasserine region of Tunisia to increase the energy efficiency of the distribution of vaccines and temperature-sensitive drugs. The objectives of the approach, known as the 'net zero energy' (NZE) supply chain, were demonstrated within the first year of operation. The existing distribution system was modified to store vaccines and medicines in the same buildings and to transport them according to pre-scheduled and optimized delivery circuits. Electric utility vehicles, dedicated to the integrated delivery of vaccines and medicines, improved the regularity and reliability of the supply chains. Solar energy, linked to the electricity grid at regional and district stores, supplied over 100% of consumption, meeting all energy needs for storage, cooling, and transportation. Significant benefits to the quality and costs of distribution were demonstrated: supply trips were scheduled, integrated, and reliable; energy consumption was reduced; the recurrent cost of electricity was eliminated; and the release of carbon to the atmosphere was reduced. Although the initial capital cost of scaling up implementation of NZE remains high today, commercial forecasts predict cost reductions for solar energy and electric vehicles that may permit a step-wise implementation over the next 7–10 years. Efficiency in the use of energy and in the deployment of transport is already a critical component of distribution logistics in both the private and public sectors of industrialized countries. The NZE approach has an intensified rationale in countries where energy costs threaten the maintenance of public health services in areas of low population density. In these countries, where the mobility of health personnel and the timely arrival of supplies are at risk, NZE has the potential to reduce energy costs and release recurrent budget to other needs of service delivery while also improving the supply chain. PMID:25444811

  6. Novel Low-Cost Sensor for Human Bite Force Measurement

    PubMed Central

    Fastier-Wooller, Jarred; Phan, Hoang-Phuong; Dinh, Toan; Nguyen, Tuan-Khoa; Cameron, Andrew; Öchsner, Andreas; Dao, Dzung Viet

    2016-01-01

    This paper presents the design and development of a low cost and reliable maximal voluntary bite force sensor which can be manufactured in-house by using an acrylic laser cutting machine. The sensor has been designed for ease of fabrication, assembly, calibration, and safe use. The sensor is capable of use within an hour of commencing production, allowing for rapid prototyping/modifications and practical implementation. The measured data shows a good linear relationship between the applied force and the electrical resistance of the sensor. The output signal has low drift, excellent repeatability, and a large measurable range of 0 to 700 N. A high signal-to-noise response to human bite forces was observed, indicating the high potential of the proposed sensor for human bite force measurement. PMID:27509496

  7. The Beam Characteristics of High Power Diode Laser Stack

    NASA Astrophysics Data System (ADS)

    Gu, Yuanyuan; Fu, Yueming; Lu, Hui; Cui, Yan

    2018-03-01

    Direct diode lasers have some of the most attractive features of any laser. They are very efficient, compact, wavelength versatile, low cost, and highly reliable. However, the full potential of direct diode lasers has yet to be realized: the poor quality of the diode laser beam itself directly limits its range of applications. To make better use of a diode laser stack, an appropriate corrective optical system is needed, which in turn requires an accurate understanding of the diode laser beam characteristics. Diode lasers make practical application feasible because their rectangular beam patterns are well suited to producing a fine bead with less power. Diode laser cladding will therefore open a new field of repair for damaged machinery parts, contributing to the recycling of used machines and to cost savings.

  8. Advanced chip designs and novel cooling techniques for brightness scaling of industrial, high power diode laser bars

    NASA Astrophysics Data System (ADS)

    Heinemann, S.; McDougall, S. D.; Ryu, G.; Zhao, L.; Liu, X.; Holy, C.; Jiang, C.-L.; Modak, P.; Xiong, Y.; Vethake, T.; Strohmaier, S. G.; Schmidt, B.; Zimer, H.

    2018-02-01

    The advance of high power semiconductor diode laser technology is driven by the rapidly growing industrial laser market, with such high power solid state laser systems requiring ever more reliable diode sources with higher brightness and efficiency at lower cost. In this paper we report simulation and experimental data demonstrating most recent progress in high brightness semiconductor laser bars for industrial applications. The advancements are in three principle areas: vertical laser chip epitaxy design, lateral laser chip current injection control, and chip cooling technology. With such improvements, we demonstrate disk laser pump laser bars with output power over 250W with 60% efficiency at the operating current. Ion implantation was investigated for improved current confinement. Initial lifetime tests show excellent reliability. For direct diode applications <1 um smile and >96% polarization are additional requirements. Double sided cooling deploying hard solder and optimized laser design enable single emitter performance also for high fill factor bars and allow further power scaling to more than 350W with 65% peak efficiency with less than 8 degrees slow axis divergence and high polarization.

  9. Challenges for Wireless Mesh Networks to provide reliable carrier-grade services

    NASA Astrophysics Data System (ADS)

    von Hugo, D.; Bayer, N.

    2011-08-01

    Provision of mobile and wireless services today within a competitive environment, driven by a huge number of steadily emerging new services and applications, is both a challenge and an opportunity for radio network operators. Deployment and operation of an infrastructure for mobile and wireless broadband connectivity generally require planning effort and large investments. A promising approach to reducing the expense of radio access networking is offered by Wireless Mesh Networks (WMNs). Here, traditional dedicated backhaul connections to each access point are replaced by wireless multi-hop links between neighbouring access nodes and a few gateways to the backbone, employing standard radio technology. Such a solution provides high flexibility in both deployment and the amount of offered capacity while reducing overall expenses. On the other hand, currently available mesh solutions do not provide carrier-grade service quality and reliability, and often fail to cope with high traffic load. The EU project CARMEN (CARrier grade MEsh Networks) was initiated to incorporate different heterogeneous technologies and new protocols that allow for reliable transmission over "best effort" radio channels, support reliable mobility and network management, self-configuration, and dynamic resource usage, and thus offer permanent or temporary broadband access with high cost efficiency. The contribution provides an overview of preliminary project results, focusing on the main technical challenges from a research and implementation point of view. In particular, the impact of mesh topology on overall system performance in terms of throughput and connection reliability, and aspects of a dedicated hybrid mobility-management solution, will be discussed.

  10. High power visible diode laser for the treatment of eye diseases by laser coagulation

    NASA Astrophysics Data System (ADS)

    Heinrich, Arne; Hagen, Clemens; Harlander, Maximilian; Nussbaumer, Bernhard

    2015-03-01

    We present a high power visible diode laser enabling low-cost treatment of eye diseases by laser coagulation, including the two leading causes of blindness worldwide (diabetic retinopathy and age-related macular degeneration) as well as retinopathy of prematurely born children, intraocular tumors, and retinal detachment. Laser coagulation requires exposure of the eye to visible laser light and relies on the high absorption of the retina. The need for treatment is constantly increasing due to the demographic trend, increasing average life expectancy, and medical care demand in developing countries. The World Health Organization has responded to this demand with global programs like VISION 2020 "The right to sight" and the subsequent Universal Eye Health within its Global Action Plan (2014-2019). One major point is to motivate companies and research institutes to make eye treatment cheaper and more easily accessible. It is therefore essential to provide the ophthalmology market with cost-competitive, simple, and reliable technologies. Our laser is based on direct second harmonic generation of the light emitted from a tapered laser diode and has already shown reliable optical performance. All components are produced in wafer-scale processes, and the resulting strong economy of scale results in a price-competitive laser. In a broader perspective, the technology behind our laser has huge potential in non-medical applications like welding, cutting, marking, and laser-illuminated projection.

  11. Design for low-power and reliable flexible electronics

    NASA Astrophysics Data System (ADS)

    Huang, Tsung-Ching (Jim)

    Flexible electronics are emerging as an alternative to conventional Si electronics for large-area, low-cost applications such as e-paper, smart sensors, and disposable RFID tags. By utilizing inexpensive manufacturing methods such as ink-jet printing and roll-to-roll imprinting, flexible electronics can be made on low-cost plastics just like printing a newspaper. However, the key elements of flexible electronics, thin-film transistors (TFTs), have slower operating speeds and less reliability than their Si counterparts. Furthermore, depending on the material properties, TFTs are usually mono-type (either p- or n-type) devices. Making air-stable complementary TFT circuits is very challenging and not applicable to most TFT technologies. Existing design methodologies for Si electronics, therefore, cannot be directly applied to flexible electronics. Other inhibiting factors such as high supply voltage, large process variation, and lack of trustworthy device modeling also make designing larger-scale and robust TFT circuits a significant challenge. The major goal of this dissertation is to provide a viable solution for robust circuit design in flexible electronics. I will first introduce a reliability simulation framework that can predict the degraded performance of TFT circuits under bias stress. This framework has been validated using the amorphous-silicon (a-Si) TFT scan driver for TFT-LCD displays. To reuse the existing CMOS design flow for flexible electronics, I propose a Pseudo-CMOS cell library that makes TFT circuits operable under low supply voltage and offers post-fabrication tunability for reliability and performance enhancement. This cell library has been validated using 2V self-assembled-monolayer (SAM) organic TFTs with a low-cost shadow-mask deposition process.
I will also demonstrate a 3-bit 1.25KS/s Flash ADC in a-Si TFTs, which is based on the proposed Pseudo-CMOS cell library, and explore more possibilities in display, energy, and sensing applications.

  12. Difficult Decisions Made Easier

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.

  13. Reliability Testing of NASA Piezocomposite Actuators

    NASA Technical Reports Server (NTRS)

    Wilkie, W.; High, J.; Bockman, J.

    2002-01-01

    NASA Langley Research Center has developed a low-cost piezocomposite actuator which has application for controlling vibrations in large inflatable smart space structures, space telescopes, and high performance aircraft. Tests show the NASA piezocomposite device is capable of producing large, directional, in-plane strains on the order of 2000 parts-per-million peak-to-peak, with no reduction in free-strain performance to 100 million electrical cycles. This paper describes methods, measurements, and preliminary results from our reliability evaluation of the device under externally applied mechanical loads and at various operational temperatures. Tests performed to date show no net reductions in actuation amplitude while the device was moderately loaded through 10 million electrical cycles. Tests were performed at both room temperature and at the maximum operational temperature of the epoxy resin system used in manufacture of the device. Initial indications are that actuator reliability is excellent, with no actuator failures or large net reduction in actuator performance.

  14. A Step Made Toward Designing Microelectromechanical System (MEMS) Structures With High Reliability

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    2003-01-01

    The mechanical design of microelectromechanical systems, particularly for micropower generation applications, requires the ability to predict the strength capacity of load-carrying components over the service life of the device. These microdevices, which typically are made of brittle materials such as polysilicon, show wide scatter (stochastic behavior) in strength as well as a different average strength for different-sized structures (size effect). These behaviors necessitate either costly and time-consuming trial-and-error designs or, more efficiently, the development of a probabilistic design methodology for MEMS. Over the years, the NASA Glenn Research Center's Life Prediction Branch has developed the CARES/Life probabilistic design methodology to predict the reliability of advanced ceramic components. In this study, done in collaboration with Johns Hopkins University, the ability of the CARES/Life code to predict the reliability of polysilicon microsized structures with stress concentrations is successfully demonstrated.
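    The stochastic strength and size-effect behavior described above is conventionally modeled with two-parameter Weibull statistics, the basis of probabilistic design codes of this kind. A minimal sketch (the Weibull modulus and characteristic strength below are illustrative values, not CARES/Life outputs):

```python
from math import exp

def failure_probability(sigma, m, sigma0, volume, v0=1.0):
    """Two-parameter Weibull failure probability with volume scaling,
    P_f = 1 - exp(-(V/V0) * (sigma/sigma0)**m): the Weibull size effect
    used in probabilistic design of brittle components."""
    return 1.0 - exp(-(volume / v0) * (sigma / sigma0) ** m)

# Illustrative polysilicon-like numbers: Weibull modulus m = 10,
# characteristic strength 3.0 GPa for a unit reference volume.
small = failure_probability(sigma=2.0, m=10, sigma0=3.0, volume=1.0)
large = failure_probability(sigma=2.0, m=10, sigma0=3.0, volume=10.0)
assert large > small  # a larger structure is more likely to contain a critical flaw
```

    The assertion captures the size effect in the abstract: at the same applied stress, a larger stressed volume has a higher probability of containing a strength-limiting flaw.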

  15. A Robust Compositional Architecture for Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara

    2006-01-01

    Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications, and high costs make direct operations impossible while demanding operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for the resulting autonomous systems.

  16. Advances in a high efficiency commercial pulse tube cooler

    NASA Astrophysics Data System (ADS)

    Zhang, Yibing; Li, Haibing; Wang, Xiaotao; Dai, Wei; Yang, Zhaohui; Luo, Ercang

    2017-12-01

    The pulse tube cryocooler has the advantage of no moving part at the cold end and offers high reliability. To further extend its use in commercial applications, efforts are still needed to improve efficiency, reliability, and cost effectiveness. This paper summarizes several key innovations in our newest cooler. The cooler consists of a moving-magnet compressor with dual-opposed pistons and a co-axial cold finger. Ambient displacers are employed to recover the expansion work and increase cooling efficiency. Inside the cold finger, the conventional flow-straightener screens are replaced by a tapered throat between the cold heat exchanger and the pulse tube, strengthening its immunity to working-gas contamination as well as simplifying the manufacturing processes. The cold heat exchanger is made by a copper forging process, which further reduces the cost. Inside the compressor, a new gas bearing design has brought assembly simplicity and running reliability. Besides the cooler itself, the electronic controller is also important for actual applications. A dual-channel, dual-driving-mode control mechanism has been selected, which reduces vibration to a minimum while speeding cool-down and improving run-time efficiency. With these innovations, the cooler TC4189 reached a no-load temperature of 44 K and provided 15 W of cooling power at 80 K, with an input electric power of 244 W and a cooling water temperature of 23 °C. The efficiency reached 16.9% of Carnot at 80 K. The whole system has a total mass of 4.3 kg.
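    As a back-of-envelope check on the quoted percent-of-Carnot figure (our recomputation, not from the paper; the small discrepancy likely reflects rounding of the operating temperatures):

```python
def percent_of_carnot(cooling_w, input_w, t_cold_k, t_hot_k):
    """Cooler efficiency as a fraction of the Carnot limit:
    actual COP = Q_cold / W_input, COP_Carnot = Tc / (Th - Tc)."""
    cop = cooling_w / input_w
    cop_carnot = t_cold_k / (t_hot_k - t_cold_k)
    return cop / cop_carnot

# Quoted operating point: 15 W at 80 K for 244 W input, 23 °C (~296 K) reject.
frac = percent_of_carnot(15.0, 244.0, 80.0, 296.0)
print(round(100 * frac, 1))  # prints 16.6, close to the quoted 16.9% of Carnot
```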

  17. Effects of imperfect automation on decision making in a simulated command and control task.

    PubMed

    Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja

    2007-02-01

    Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental workload, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to a greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.

  18. Fiber Access Networks: Reliability Analysis and Swedish Broadband Market

    NASA Astrophysics Data System (ADS)

    Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp

    Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. While Swedish operators in particular prefer AON, this may not be the case for operators in other countries; the choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, and this should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability by duplicating network resources (and the associated capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network lifetime.
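
The protection trade-off the paper analyzes can be illustrated with a minimal availability calculation. All per-component availabilities below are hypothetical, chosen only to show the effect of duplicating the shared feeder fiber:

```python
# Minimal availability sketch of the CAPEX-vs-reliability trade-off.
def series(*availabilities):
    """Unprotected chain: every component must be up."""
    a = 1.0
    for x in availabilities:
        a *= x
    return a

def parallel(a1, a2):
    """Protected (duplicated) component: at least one must be up."""
    return 1.0 - (1.0 - a1) * (1.0 - a2)

a_feeder, a_splitter, a_drop = 0.9995, 0.99995, 0.9998  # assumed values

unprotected = series(a_feeder, a_splitter, a_drop)
protected = series(parallel(a_feeder, a_feeder), a_splitter, a_drop)
print(f"unprotected: {unprotected:.6f}  feeder-protected: {protected:.6f}")
```

Duplicating only the feeder already removes most of the dominant unavailability term, which is the kind of comparison that drives the architecture decision against the cost of the extra fiber.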

  19. Phase 1 of the First Solar Small Power System Experiment (experimental System No. 1). Volume 3: Appendix E - N

    NASA Technical Reports Server (NTRS)

    Clark, T. B. (Editor)

    1979-01-01

    The design of a solar electric power plant for a small community is reported. Topics covered include: (1) control configurations and interface requirements for the baseline power system; (2) annual small power system output; (3) energy requirements for operation of the collectors and control building; (4) life cycle costs and reliability predictions; (5) thermal conductivities and costs of receiver insulation materials; (6) transient thermal modelling for the baseline receiver/thermal transport system under normal and inclement operating conditions; (7) high temperature use of sodium; (8) shading in a field of parabolic collectors; and (9) buffer storage materials.

  20. Development of Camera Electronics for the Advanced Gamma-ray Imaging System (AGIS)

    NASA Astrophysics Data System (ADS)

    Tajima, Hiroyasu

    2009-05-01

    AGIS, a next-generation atmospheric Cherenkov telescope array, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in the energy band of 40 GeV to 100 TeV. Such improvement requires reducing the cost of individual components while maintaining high reliability, in order to equip the roughly 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. We have developed test systems for some of these concepts and present their test results here.

  1. Developments in the design, analysis, and fabrication of advanced technology transmission elements

    NASA Technical Reports Server (NTRS)

    Drago, R. J.; Lenski, J. W., Jr.

    1982-01-01

    Over the last decade, the presently reported proprietary development program for the reduction of helicopter drive system weight and cost and the enhancement of reliability and survivability has produced high speed roller bearings, resin-matrix composite rotor shafts and transmission housings, gear/bearing/shaft system integrations, photoelastic investigation methods for gear tooth strength, and the automatic generation of complex FEM models for gear/shaft systems. After describing the design features and performance capabilities of the hardware developed, attention is given to the prospective benefits to be derived from application of these technologies, with emphasis on the relationship between helicopter drive system performance and cost.

  2. Design, performance and economics of the DAF Indal 50 kW and 375 kW vertical axis wind turbine

    NASA Astrophysics Data System (ADS)

    Schienbein, L. A.; Malcolm, D. J.

    1982-03-01

    A review of the development and performance of the DAF Indal 50 kW vertical axis Darrieus wind turbines shows that a high level of technical development and reliability has been achieved. Features of the drive train, braking and control systems are discussed and performance details are presented. A description is given of a wind-diesel hybrid presently being tested. Details are also presented of a 375 kW VAWT planned for production in late 1982. A discussion of the economics of both the 50 kW and 375 kW VAWTs is included, showing the effects of charge rate, installed cost, operating cost, performance and efficiency. The energy outputs are translated into diesel fuel cost savings for remote communities.

  3. Hybrid propulsion technology program

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Technology was identified which will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that is mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvements over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle cost and reliability analyses; (4) identification of those hybrid-specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large scale demonstration plan.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin; Tuffner, Frank; Elizondo, Marcelo

    Regulated electricity utilities are required to provide safe and reliable service to their customers at a reasonable cost. To balance the objectives of reliable service and reasonable cost, utilities build and operate their systems to operate under typical historic conditions. As a result, when abnormal events such as major storms or disasters occur, it is not uncommon to have extensive interruptions in service to the end-use customers. Because it is not cost effective to make the existing electrical infrastructure 100% reliable, society has come to expect disruptions during abnormal events. However, with the increasing number of abnormal weather events, the public is becoming less tolerant of these disruptions. One possible solution is to deploy microgrids as part of a coordinated resiliency plan to minimize the interruption of power to essential loads. This paper evaluates the feasibility of using microgrids as a resiliency resource, including their possible benefits and the associated technical challenges. A use-case of an operational microgrid is included.

  5. Reliability and cost-effectiveness of complete lymph node dissection under tumescent local anaesthesia vs. general anaesthesia: a retrospective analysis in patients with malignant melanoma AJCC stage III.

    PubMed

    Stoffels, I; Dissemond, J; Schulz, A; Hillen, U; Schadendorf, D; Klode, J

    2012-02-01

    Complete lymph node dissection (CLND) in melanoma patients with a positive sentinel lymph node (SLN) is currently being debated, as it is a cost-intensive surgical intervention with potentially high morbidity. This clinical study seeks to clarify the effectiveness, reliability and cost-effectiveness of CLND performed under tumescent local anaesthesia (TLA) compared with procedures under general anaesthesia (GA). We retrospectively analysed the data from 60 patients with primary malignant melanoma American Joint Committee on Cancer stage III who underwent CLND. Altogether 26 (43.3%) patients underwent CLND under TLA and 34 (56.7%) patients underwent CLND under GA. Fifteen of 43 (34.9%) patients had a complication, such as development of seromas and/or wound infections. The rate of complications was 25.0% (3/12) in the axilla subgroup and 28.6% (4/14) in the groin subgroup of the TLA group. In the GA group, the complication rate was 31.3% (5/16) in the axilla subgroup and 44.4% (8/18) in the groin subgroup. The costs for CLND were significantly less for the CLND in a procedure room performed under TLA (mean €67.26) compared with CLND in an operating room under GA (mean €676.20, P < 0.0001). In conclusion, this study confirms that TLA is an excellent, safe, effective and cost-efficient alternative to GA for CLND in melanoma patients. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.

  6. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE PAGES

    Butler, Troy; Wildey, Timothy

    2018-01-01

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
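
The "reliable sample" idea in this abstract can be sketched in a few lines. In the toy model below (all model functions and the error bound are assumed stand-ins, not the paper's), the surrogate is trusted wherever its error bound cannot flip the event indicator, and the high-fidelity model is called only for the remaining samples; the mixed estimate then matches the pure high-fidelity Monte Carlo estimate exactly:

```python
import random

random.seed(0)

def high_fidelity(x):
    return x * x                  # expensive "truth" model (toy stand-in)

def surrogate(x):
    return x * x + 0.05 * x       # cheap approximation

def error_bound(x):
    return abs(0.05 * x)          # assumed computable bound on |surrogate - truth|

threshold = 0.25                  # event of interest: Q(x) > threshold
samples = [random.uniform(0.0, 1.0) for _ in range(10_000)]

hits, hf_calls = 0, 0
for x in samples:
    q, e = surrogate(x), error_bound(x)
    if abs(q - threshold) > e:    # reliable: the bound cannot change the indicator
        hits += q > threshold
    else:                         # unreliable: resolve with the true model
        hf_calls += 1
        hits += high_fidelity(x) > threshold

exact_hits = sum(high_fidelity(x) > threshold for x in samples)
print(hits == exact_hits, hf_calls)  # identical estimate, few high-fidelity calls
```

Only samples whose surrogate value lands within the error bound of the limit state trigger a high-fidelity evaluation, which is why the savings grow as the surrogate improves near the limit state.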

  7. Utilizing Adjoint-Based Error Estimates for Surrogate Models to Accurately Predict Probabilities of Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Troy; Wildey, Timothy

    In this study, we develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.

  8. Fast Entanglement Establishment via Local Dynamics for Quantum Repeater Networks

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    Quantum entanglement is a necessity for future quantum communication networks, quantum internet, and long-distance quantum key distribution. Current approaches to entanglement distribution require high-delay entanglement transmission, entanglement swapping to extend the range of entanglement, high-cost entanglement purification, and long-lived quantum memories. We introduce a fundamental protocol for establishing entanglement in quantum communication networks. The proposed scheme does not require entanglement transmission between the nodes, high-cost entanglement swapping, entanglement purification, or long-lived quantum memories. The protocol reliably establishes a maximally entangled system between the remote nodes via dynamics generated by local Hamiltonians. The method eliminates the main drawbacks of current schemes, allowing fast entanglement establishment with minimized delay. Our solution provides a fundamental method for future long-distance quantum key distribution, quantum repeater networks, quantum internet, and quantum-networking protocols. This work was partially supported by the GOP-1.1.1-11-2012-0092 project sponsored by the EU and European Structural Fund, by the Hungarian Scientific Research Fund - OTKA K-112125, and by the COST Action MP1006.

  9. Highlights of recent balance of system research and evaluation

    NASA Astrophysics Data System (ADS)

    Thomas, M. G.; Stevens, J. W.

    The cost of most photovoltaic (PV) systems is more a function of the balance of system (BOS) components than of the collectors. The exception to this rule is the grid-tied system, whose cost is related more directly to the collectors and secondarily to the inverter/controls. In fact, recent procurements throughout the country document that collector costs for roof-mounted, utility-tied systems (Russell, PV Systems Workshop, 7/94) represent 60% to 70% of the system cost. This contrasts with the current market for packaged stand-alone all-PV or PV-hybrid systems, where collectors represent only 25% to 35% of the total. Not only are the BOS components the cost drivers in the current cost-effective PV system marketplace, they are also the least reliable components. This paper discusses the impact that BOS issues have on component performance, system performance, and system cost and reliability. We also look at recent recommended changes in system design based upon performance evaluations of fielded PV systems.

  10. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
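
The interaction entropy term described here is commonly written as -TΔS = kT·ln⟨exp(ΔE_int/kT)⟩, where ΔE_int is the fluctuation of the protein-ligand interaction energy about its mean along the trajectory. The sketch below illustrates that formula with synthetic energies standing in for MD samples; it is an illustration of the published formula, not the authors' code:

```python
import math
import random

KT = 0.593  # kT in kcal/mol at ~298 K

def interaction_entropy(e_int, kt=KT):
    """-T*dS from a series of interaction energies (kcal/mol)."""
    mean_e = sum(e_int) / len(e_int)
    # Ensemble average of exp(fluctuation / kT), then kT * ln(...).
    boltzmann = [math.exp((e - mean_e) / kt) for e in e_int]
    return kt * math.log(sum(boltzmann) / len(boltzmann))

random.seed(1)
# Synthetic stand-in for MD-sampled protein-ligand interaction energies.
energies = [random.gauss(-40.0, 1.5) for _ in range(5000)]
print(f"-TdS = {interaction_entropy(energies):.2f} kcal/mol")
```

By Jensen's inequality the result is always non-negative, and for near-Gaussian fluctuations it is approximately σ²/(2kT), which makes clear why the term grows quickly with the width of the interaction-energy distribution.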

  11. Degradable transportation network with the addition of electric vehicles: Network equilibrium analysis

    PubMed Central

    Zhang, Rui; Yao, Enjian; Yang, Yang

    2017-01-01

    Introducing electric vehicles (EVs) into an urban transportation network brings higher requirements for travel time reliability and charging reliability. Specifically, travel time reliability is believed to be a key factor influencing travelers’ route choice. Meanwhile, due to their limited cruising range, EV drivers need to know the energy required for the whole trip in order to decide whether and where to charge (i.e., charging reliability). Since EV energy consumption is highly related to travel speed, network uncertainty significantly affects travel time and charging demand estimation. Considering the network uncertainty resulting from link degradation, which influences the distribution of travel demand on the transportation network and of energy demand on the power network, this paper develops a reliability-based network equilibrium framework that accommodates degradable road conditions with the addition of EVs. First, based on the link travel time distribution, the mean and variance of route travel time and of the monetary expenses related to energy consumption are derived, and the charging time distribution of EVs with charging demand is estimated. Then, a nested structure is used to capture the differences in route-choice behavior arising from the different degrees of uncertainty between routes with and without degradable links. Given the expected generalized travel cost and a psychological safety margin, a traffic assignment model incorporating EVs is formulated, and a heuristic solution algorithm is developed to solve it. Finally, the effects of travelers’ risk attitude, network degradation degree, and EV penetration rate on network performance are illustrated through an example network. The numerical results show that differences in travelers’ risk attitudes do affect route choice, and that widespread adoption of EVs can effectively reduce total system travel cost when the transportation network is more reliable. PMID:28886167
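
The "psychological safety margin" idea can be illustrated with a two-route toy example (route names, times, and variances below are invented for the sketch): a traveler ranks routes by mean travel time plus a risk multiple of its standard deviation, so risk attitude alone can flip the choice:

```python
import math

def effective_cost(mean_min: float, var_min: float, risk_lambda: float) -> float:
    """Mean travel time plus a risk multiple of its standard deviation."""
    return mean_min + risk_lambda * math.sqrt(var_min)

routes = {"reliable_arterial": (30.0, 4.0),   # (mean, variance) in minutes
          "degradable_link":   (27.0, 49.0)}

choice = {lam: min(routes, key=lambda r: effective_cost(*routes[r], lam))
          for lam in (0.0, 1.0)}              # risk-neutral vs risk-averse
print(choice)
```

The risk-neutral traveler (λ = 0) picks the faster but unreliable route; the risk-averse one (λ = 1) pays three extra minutes of mean time for reliability, which is the behavioral difference the paper's nested structure captures.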

  12. Degradable transportation network with the addition of electric vehicles: Network equilibrium analysis.

    PubMed

    Zhang, Rui; Yao, Enjian; Yang, Yang

    2017-01-01

    Introducing electric vehicles (EVs) into an urban transportation network brings higher requirements for travel time reliability and charging reliability. Specifically, travel time reliability is believed to be a key factor influencing travelers' route choice. Meanwhile, due to their limited cruising range, EV drivers need to know the energy required for the whole trip in order to decide whether and where to charge (i.e., charging reliability). Since EV energy consumption is highly related to travel speed, network uncertainty significantly affects travel time and charging demand estimation. Considering the network uncertainty resulting from link degradation, which influences the distribution of travel demand on the transportation network and of energy demand on the power network, this paper develops a reliability-based network equilibrium framework that accommodates degradable road conditions with the addition of EVs. First, based on the link travel time distribution, the mean and variance of route travel time and of the monetary expenses related to energy consumption are derived, and the charging time distribution of EVs with charging demand is estimated. Then, a nested structure is used to capture the differences in route-choice behavior arising from the different degrees of uncertainty between routes with and without degradable links. Given the expected generalized travel cost and a psychological safety margin, a traffic assignment model incorporating EVs is formulated, and a heuristic solution algorithm is developed to solve it. Finally, the effects of travelers' risk attitude, network degradation degree, and EV penetration rate on network performance are illustrated through an example network. The numerical results show that differences in travelers' risk attitudes do affect route choice, and that widespread adoption of EVs can effectively reduce total system travel cost when the transportation network is more reliable.

  13. Small Aerostationary Telecommunications Orbiter Concept for Mars in the 2020s

    NASA Technical Reports Server (NTRS)

    Lock, Robert E.; Edwards, Charles D., Jr.; Nicholas, Austin; Woolley, Ryan; Bell, David J.

    2016-01-01

    Current Mars science orbiters carry UHF proximity payloads to provide limited access and data services to landers and rovers on the Mars surface. In the era of human spaceflight to Mars, very high rate and reliable relay services will be needed to serve a large number of supporting vehicles, habitats, and orbiters, as well as astronaut EVAs. These will likely be provided by a robust network of orbiting assets in very high orbits, such as areostationary orbits. In the decade leading to that era, telecommunications orbiters could be operated at areostationary orbit that could support a significant population of robotic precursor missions and build the network capabilities needed for the human spaceflight era. These orbiters could demonstrate the capabilities and services needed for the future, but without the high bandwidth and high reliability requirements needed for human spaceflight. Telecommunications orbiters of modest size and cost, delivered by solar electric propulsion to areostationary orbit, could provide continuous access at very high data rates to users on the surface and in Mars orbit. Two examples highlighting the wide variety of orbiter delivery and configuration options were shown that could provide high-performance service to users.

  14. Direct determination of total sulfur in wine using a continuum-source atomic-absorption spectrometer and an air-acetylene flame.

    PubMed

    Huang, Mao Dong; Becker-Ross, Helmut; Florek, Stefan; Heitmann, Uwe; Okruss, Michael

    2005-08-01

    Determination of sulfur in wine is an important analytical task, particularly with regard to food safety legislation, wine trade, and oenology. Existing methods for sulfur determination all have specific drawbacks, for example high cost and time consumption, poor precision or selectivity, or matrix effects. In this paper a new method, with low running costs, is introduced for direct, reliable, rapid, and accurate determination of the total sulfur content of wine samples. The method is based on measurement of the molecular absorption of carbon monosulfide (CS) in an ordinary air-acetylene flame by using a high-resolution continuum-source atomic-absorption spectrometer that includes a novel high-intensity short-arc xenon lamp. First results for total sulfur concentrations in different wine samples were compared with data from comparative ICP-MS measurements. Very good agreement, within a few percent, was obtained.

  15. Final Technical Report for Automated Manufacturing of Innovative CPV/PV Modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okawa, David

    Cogenra’s Dense Cell Interconnect system was designed to use traditional front-contact cells and string them together into high efficiency and high reliability “supercells”. This novel stringer allows one to take advantage of the ~100 GW/year of existing cell production capacity and create a solar product for the customer that will produce more power and last longer than traditional PV products. The goal for this program was for Cogenra Solar to design and develop a first-of-kind automated solar manufacturing line that produces strings of overlapping cells or “supercells” based on Cogenra’s Dense Cell Interconnect (DCI) technology for their Low Concentration Photovoltaic (LCPV) systems. This would enable the commercialization of DCI technology to improve the efficiency, reliability, and economics of their LCPV systems. In this program, Cogenra Solar successfully designed, developed, built, installed, and started up the ground-breaking manufacturing tools required to assemble supercells. Cogenra then demonstrated operation of the integrated line at high yield and at throughput far exceeding expectations. The development of a supercell production line represents a critical step toward a high volume, low cost Low Concentration Photovoltaic module with Dense Cell Interconnect technology and has enabled the evaluation of the technology for reliability and yield. Unfortunately, performance and cost headwinds on Low Concentration Photovoltaic systems, including lack of diffuse capture (a 10-15% hit) and more expensive tracker requirements, resulted in a move away from LCPV technology.
    Fortunately, the versatility of Dense Cell Interconnect technology allows for application to flat-plate module technology as well, and Cogenra has worked with the DOE to utilize the learning from this grant to commercialize DCI technology for the solar market through the on-going grant: Catalyzing PV Manufacturing in the US With Cogenra Solar’s Next-Generation Dense Cell Interconnect PV Module Manufacturing Technology. That program is now building off of this work and commercializing the technology to enable increased solar adoption.

  16. In-Space Propulsion Technology Program Solar Electric Propulsion Technologies

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.

    2006-01-01

    NASA's In-Space Propulsion (ISP) Technology Project is developing new propulsion technologies that can enable or enhance near- and mid-term NASA science missions. The Solar Electric Propulsion (SEP) technology area has been investing in NASA's Evolutionary Xenon Thruster (NEXT), the High Voltage Hall Accelerator (HiVHAC), lightweight reliable feed systems, wear testing, and thruster modeling. These investments are specifically targeted to increase planetary science payload capability, expand the envelope of planetary science destinations, and significantly reduce the travel times, risk, and cost of NASA planetary science missions. The status and expected capabilities of the SEP technologies are reviewed in this presentation. The SEP technology area supports numerous mission studies and architecture analyses to determine which investments will give the greatest benefit to science missions. Both the NEXT and HiVHAC thrusters have modified their nominal throttle tables to better utilize diminished solar array power on outbound missions. A new life-extension mechanism has been implemented on HiVHAC to increase the throughput capability of low-power systems to meet the needs of cost-capped missions. Lower-complexity, more reliable feed system components common to all electric propulsion (EP) systems are being developed. ISP has also leveraged commercial investments to further validate new ion and Hall thruster technologies and to potentially lower EP mission costs.

  17. Design-for-reliability (DfR) of aerospace electronics: Attributes and challenges

    NASA Astrophysics Data System (ADS)

    Bensoussan, A.; Suhir, E.

    The next generation of multi-beam satellite systems that would be able to provide effective interactive communication services will have to operate within a highly flexible architecture. One option to develop such flexibility is to employ microwave and/or optoelectronic components and to make them reliable. The use of optoelectronic devices, equipment, and systems will indeed result in significant improvement in the state of the art, but only provided that the new designs suggest a novel and effective architecture that combines the merits of good functional performance, satisfactory mechanical (structural) reliability, and high cost effectiveness. The obvious challenge is the ability to design and fabricate equipment based on EEE components that would be able to successfully withstand harsh space environments for the entire duration of the mission. It is imperative that the major players in the space industry, such as manufacturers, industrial users, and space agencies, understand the importance and the limits of the achievable quality and reliability of optoelectronic devices operated in harsh environments. It is equally imperative that the physics of possible failures be well understood and, if necessary, minimized, and that adequate quality standards be developed and employed. The space community has to identify and develop a strategic approach for validating optoelectronic products. This should be done with consideration of the numerous intrinsic and extrinsic requirements for the systems' performance. When considering a particular next-generation optoelectronic space system, the space community needs to address the following major issues: proof of concept for the system, proof of reliability, and proof of performance. This should be done taking into account the specifics of the anticipated application.
    High operational reliability cannot be left to the prognostics and health monitoring/management (PHM) effort and stage, no matter how important and effective such an effort might be. Reliability should be pursued at all stages of the equipment lifetime: design, product development, manufacturing, burn-in testing and, of course, subsequent PHM after the space apparatus is launched and operated.

  18. Orbit transfer vehicle engine study. Volume 2: Technical report

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The orbit transfer vehicle (OTV) engine study provided parametric performance, engine programmatic, and cost data on the complete propulsive spectrum that is available for a variety of high energy, space maneuvering missions. Candidate OTV engines from the near term RL 10 (and its derivatives) to advanced high performance expander and staged combustion cycle engines were examined. The RL 10/RL 10 derivative performance, cost and schedule data were updated and provisions defined which would be necessary to accommodate extended low thrust operation. Parametric performance, weight, envelope, and cost data were generated for advanced expander and staged combustion OTV engine concepts. A prepoint design study was conducted to optimize thrust chamber geometry and cooling, engine cycle variations, and controls for an advanced expander engine. Operation at low thrust was defined for the advanced expander engine and the feasibility and design impact of kitting was investigated. An analysis of crew safety and mission reliability was conducted for both the staged combustion and advanced expander OTV engine candidates.

  19. The 25 kWe solar thermal Stirling hydraulic engine system: Conceptual design

    NASA Technical Reports Server (NTRS)

    White, Maurice; Emigh, Grant; Noble, Jack; Riggle, Peter; Sorenson, Torvald

    1988-01-01

    The conceptual design and analysis of a solar thermal free-piston Stirling hydraulic engine system designed to deliver 25 kWe when coupled to an 11 meter test bed concentrator is documented. A manufacturing cost assessment for 10,000 units per year was made. The design meets all program objectives, including a 60,000 hr design life, dynamic balancing, fully automated control, more than 33.3 percent overall system efficiency, properly conditioned power, maximum utilization of annualized insolation, and projected production costs. The system incorporates a simple, rugged, reliable pool boiler reflux heat pipe to transfer heat from the solar receiver to the Stirling engine. The free-piston engine produces high pressure hydraulic flow which powers a commercial hydraulic motor that, in turn, drives a commercial rotary induction generator. The Stirling hydraulic engine uses hermetic bellows seals to separate the helium working gas from the hydraulic fluid, which provides hydrodynamic lubrication to all moving parts. Maximum utilization of highly refined, field proven commercial components for electric power generation minimizes development cost and risk.
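An overall system efficiency such as the 33.3 percent quoted above is the product of the subsystem efficiencies in the solar-to-electric chain (receiver, Stirling engine, hydraulic transmission, generator). A minimal sketch of that arithmetic, using hypothetical subsystem values chosen only to illustrate the calculation, not taken from the report:

```python
# Hypothetical subsystem efficiencies (illustrative only, not from the report).
stages = {
    "receiver": 0.90,
    "stirling_engine": 0.42,
    "hydraulic_transmission": 0.93,
    "induction_generator": 0.95,
}

# Overall efficiency is the product of the stage efficiencies in series.
overall = 1.0
for name, eff in stages.items():
    overall *= eff

print(f"overall system efficiency: {overall:.3f}")
```

The chain structure shows why every stage matters: even a modest loss in the hydraulic transmission or generator multiplies through the whole system.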

  20. RTM: Cost-effective processing of composite structures

    NASA Technical Reports Server (NTRS)

    Hasko, Greg; Dexter, H. Benson

    1991-01-01

    Resin transfer molding (RTM) is a promising method for cost effective fabrication of high strength, low weight composite structures from textile preforms. In this process, dry fibers are placed in a mold, resin is introduced either by vacuum infusion or pressure, and the part is cured. RTM has been used in many industries, including automotive, recreation, and aerospace. Each of these industries has different requirements for material strength, weight, reliability, environmental resistance, cost, and production rate. These requirements drive the selection of fibers and resins, fiber volume fractions, fiber orientations, mold design, and processing equipment. Research is under way into applying RTM to primary aircraft structures, which require high strength and stiffness at low density. The material requirements of the various industries are discussed, along with methods of orienting and distributing fibers, mold configurations, and processing parameters. Processing and material parameters such as resin viscosity, preform compaction and permeability, and tool design concepts are discussed. Experimental methods to measure preform compaction and permeability are presented.
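The resin viscosity and preform permeability mentioned above govern mold filling through Darcy's law; for one-dimensional constant-pressure injection the standard fill-time estimate is t = phi * mu * L^2 / (2 * K * dP), where phi is preform porosity, mu resin viscosity, L flow length, K permeability, and dP injection pressure. A hedged sketch with illustrative values (not from this study):

```python
def fill_time(porosity, viscosity_pa_s, length_m, permeability_m2, delta_p_pa):
    """1-D Darcy fill time for constant-pressure resin injection into a dry preform."""
    return (porosity * viscosity_pa_s * length_m ** 2) / (
        2.0 * permeability_m2 * delta_p_pa
    )

# Illustrative values: 50% porosity, 0.2 Pa*s resin, 0.5 m flow length,
# 1e-10 m^2 permeability, 200 kPa injection pressure.
t = fill_time(0.5, 0.2, 0.5, 1e-10, 2e5)
print(f"estimated fill time: {t:.0f} s")
```

The quadratic dependence on flow length is what makes large aircraft parts challenging: doubling the flow length quadruples the fill time, pushing designers toward higher injection pressures or multiple injection gates.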
