Inhibition in task switching: The reliability of the n - 2 repetition cost.
Kowalczyk, Agnieszka W; Grange, James A
2017-12-01
The n - 2 repetition cost seen in task switching is the effect of slower response times performing a recently completed task (e.g. an ABA sequence) compared to performing a task that was not recently completed (e.g. a CBA sequence). This cost is thought to reflect cognitive inhibition of task representations and as such, the n - 2 repetition cost has begun to be used as an assessment of individual differences in inhibitory control; however, the reliability of this measure has not been investigated in a systematic manner. The current study addressed this important issue. Seventy-two participants performed three task switching paradigms; participants were also assessed on rumination traits and processing speed-measures of individual differences potentially modulating the n - 2 repetition cost. We found significant n - 2 repetition costs for each paradigm. However, split-half reliability tests revealed that this cost was not reliable at the individual-difference level. Neither rumination tendencies nor processing speed predicted this cost. We conclude that the n - 2 repetition cost is not reliable as a measure of individual differences in inhibitory control.
System engineering of complex optical systems for mission assurance and affordability
NASA Astrophysics Data System (ADS)
Ahmad, Anees
2017-08-01
Affordability and reliability are equally important as the performance and development time for many optical systems for military, space and commercial applications. These characteristics are even more important for the systems meant for space and military applications where total lifecycle costs must be affordable. Most customers are looking for high performance optical systems that are not only affordable but are designed with "no doubt" mission assurance, reliability and maintainability in mind. Both US military and commercial customers are now demanding an optimum balance between performance, reliability and affordability. Therefore, it is important to employ a disciplined systems design approach for meeting the performance, cost and schedule targets while keeping affordability and reliability in mind. The US Missile Defense Agency (MDA) now requires all of their systems to be engineered, tested and produced according to the Mission Assurance Provisions (MAP). These provisions or requirements are meant to ensure complex and expensive military systems are designed, integrated, tested and produced with the reliability and total lifecycle costs in mind. This paper describes a system design approach based on the MAP document for developing sophisticated optical systems that are not only cost-effective but also deliver superior and reliable performance during their intended missions.
Thermal Management and Reliability of Automotive Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E
Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence development of components that meet aggressive performance and cost targets through development and characterization of cooling technologies, and thermal characterization and improvements of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost-and time-effective design.
Plastic packaged microcircuits: Quality, reliability, and cost issues
NASA Astrophysics Data System (ADS)
Pecht, Michael G.; Agarwal, Rakesh; Quearry, Dan
1993-12-01
Plastic encapsulated microcircuits (PEMs) find their main application in commercial and telecommunication electronics. The advantages of PEMs in cost, size, weight, performance, and market lead-time, have attracted 97% of the market share of worldwide microcircuit sales. However, PEMs have always been resisted in US Government and military applications due to the perception that PEM reliability is low. This paper surveys plastic packaging with respect to the issues of reliability, market lead-time, performance, cost, and weight as a means to guide part-selection and system-design.
Total systems design analysis of high performance structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1993-01-01
Designer-control parameters were identified at interdiscipline interfaces to optimize structural systems performance and downstream development and operations with reliability and least life-cycle cost. Interface tasks and iterations are tracked through a matrix of performance disciplines integration versus manufacturing, verification, and operations interactions for a total system design analysis. Performance integration tasks include shapes, sizes, environments, and materials. Integrity integrating tasks are reliability and recurring structural costs. Significant interface designer control parameters were noted as shapes, dimensions, probability range factors, and cost. Structural failure concept is presented, and first-order reliability and deterministic methods, benefits, and limitations are discussed. A deterministic reliability technique combining benefits of both is proposed for static structures which is also timely and economically verifiable. Though launch vehicle environments were primarily considered, the system design process is applicable to any surface system using its own unique filed environments.
Quality assurance and reliability in the Japanese electronics industry
NASA Astrophysics Data System (ADS)
Pecht, Michael; Boulton, William R.
1995-02-01
Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.
Quality assurance and reliability in the Japanese electronics industry
NASA Technical Reports Server (NTRS)
Pecht, Michael; Boulton, William R.
1995-01-01
Quality and reliability are two attributes required for all Japanese products, although the JTEC panel found these attributes to be secondary to customer cost requirements. While our Japanese hosts gave presentations on the challenges of technology, cost, and miniaturization, quality and reliability were infrequently the focus of our discussions. Quality and reliability were assumed to be sufficient to meet customer needs. Fujitsu's slogan, 'quality built-in, with cost and performance as prime consideration,' illustrates this point. Sony's definition of a next-generation product is 'one that is going to be half the size and half the price at the same performance of the existing one'. Quality and reliability are so integral to Japan's electronics industry that they need no new emphasis.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well a manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
1985-09-01
CoC S~04 COMPARISON OF QUANTITY VERSUS QUALITY USING PERFORMANCE, RELIABILITY, AND LIFE CYCLE COST DATA. A CASE STUDY OF THE F-15, F-16, AND A-10...CYCLE COSTIATU.AT CAE AIR ORE HEO OG .- jAITR UIVERSITY W right.,Patterson Air Force Base, Ohio .! 5ൔ ,6 198 C.IT. U AF’IT/GSL,4/L3Q/65:S Ŗ J...COMPARISON OF QUANTITY VERSUS QUALITY USING PERFORMANCE, RELIABILITY, AND LIFE CYCLE COST DATA. A CASE STUDY OF THE F-15, F-16, AND A-10 AIRCRAFT THESIS David
2nd Generation Reusable Launch Vehicle (2G RLV). Revised
NASA Technical Reports Server (NTRS)
Matlock, Steve; Sides, Steve; Kmiec, Tom; Arbogast, Tim; Mayers, Tom; Doehnert, Bill
2001-01-01
This is a revised final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of 3 cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendation for future studies.
Modeling and Simulation Reliable Spacecraft On-Board Computing
NASA Technical Reports Server (NTRS)
Park, Nohpill
1999-01-01
The proposed project will investigate modeling and simulation-driven testing and fault tolerance schemes for Spacecraft On-Board Computing, thereby achieving reliable spacecraft telecommunication. A spacecraft communication system has inherent capabilities of providing multipoint and broadcast transmission, connectivity between any two distant nodes within a wide-area coverage, quick network configuration /reconfiguration, rapid allocation of space segment capacity, and distance-insensitive cost. To realize the capabilities above mentioned, both the size and cost of the ground-station terminals have to be reduced by using reliable, high-throughput, fast and cost-effective on-board computing system which has been known to be a critical contributor to the overall performance of space mission deployment. Controlled vulnerability of mission data (measured in sensitivity), improved performance (measured in throughput and delay) and fault tolerance (measured in reliability) are some of the most important features of these systems. The system should be thoroughly tested and diagnosed before employing a fault tolerance into the system. Testing and fault tolerance strategies should be driven by accurate performance models (i.e. throughput, delay, reliability and sensitivity) to find an optimal solution in terms of reliability and cost. The modeling and simulation tools will be integrated with a system architecture module, a testing module and a module for fault tolerance all of which interacting through a centered graphical user interface.
NASA Technical Reports Server (NTRS)
Greenhill, L. M.
1990-01-01
The Air Force/NASA Advanced Launch System (ALS) Liquid Hydrogen Fuel Turbopump (FTP) has primary design goals of low cost and high reliability, with performance and weight having less importance. This approach is atypical compared with other rocket engine turbopump design efforts, such as on the Space Shuttle Main Engine (SSME), which emphasized high performance and low weight. Similar to the SSME turbopumps, the ALS FTP operates supercritically, which implies that stability and bearing loads strongly influence the design. In addition, the use of low cost/high reliability features in the ALS FTP such as hydrostatic bearings, relaxed seal clearances, and unshrouded turbine blades also have a negative influence on rotordynamics. This paper discusses the analysis conducted to achieve a balance between low cost and acceptable rotordynamic behavior, to ensure that the ALS FTP will operate reliably without subsynchronous instabilities or excessive bearing loads.
NASA Technical Reports Server (NTRS)
Babrauckas, Theresa
2000-01-01
The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer work-stations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector Supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 Supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.
NASA Technical Reports Server (NTRS)
Matlock, Steve
2001-01-01
This is the final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, reliability baseline (space shuttle main engine QRAS), and component level reliability/performance/cost for the six baseline cycles, and selection of 3 cycles for further study. This report further addresses technology improvement selection and component level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans, and recommendation for future studies.
1977-03-01
system acquisition cycle since they provide necessary inputs to comparative analyses, cost/benefit trade -offs, and system simulations. In addition, the...Management Program from above performs the function of analyzing the system trade -offs with respect to reliability to determine a reliability goal...one encounters the problem of comparing present dollars with future dollars. In this analysis, we are trading off costs expended initially (or at
Effects of imperfect automation on decision making in a simulated command and control task.
Rovira, Ericka; McGarry, Kathleen; Parasuraman, Raja
2007-02-01
Effects of four types of automation support and two levels of automation reliability were examined. The objective was to examine the differential impact of information and decision automation and to investigate the costs of automation unreliability. Research has shown that imperfect automation can lead to differential effects of stages and levels of automation on human performance. Eighteen participants performed a "sensor to shooter" targeting simulation of command and control. Dependent variables included accuracy and response time of target engagement decisions, secondary task performance, and subjective ratings of mental work-load, trust, and self-confidence. Compared with manual performance, reliable automation significantly reduced decision times. Unreliable automation led to greater cost in decision-making accuracy under the higher automation reliability condition for three different forms of decision automation relative to information automation. At low automation reliability, however, there was a cost in performance for both information and decision automation. The results are consistent with a model of human-automation interaction that requires evaluation of the different stages of information processing to which automation support can be applied. If fully reliable decision automation cannot be guaranteed, designers should provide users with information automation support or other tools that allow for inspection and analysis of raw data.
Rovira, Ericka; Cross, Austin; Leitch, Evan; Bonaceto, Craig
2014-09-01
The impact of a decision support tool designed to embed contextual mission factors was investigated. Contextual information may enable operators to infer the appropriateness of data underlying the automation's algorithm. Research has shown the costs of imperfect automation are more detrimental than perfectly reliable automation when operators are provided with decision support tools. Operators may trust and rely on the automation more appropriately if they understand the automation's algorithm. The need to develop decision support tools that are understandable to the operator provides the rationale for the current experiment. A total of 17 participants performed a simulated rapid retasking of intelligence, surveillance, and reconnaissance (ISR) assets task with manual, decision automation, or contextual decision automation differing in two levels of task demand: low or high. Automation reliability was set at 80%, resulting in participants experiencing a mixture of reliable and automation failure trials. Dependent variables included ISR coverage and response time of replanning routes. Reliable automation significantly improved ISR coverage when compared with manual performance. Although performance suffered under imperfect automation, contextual decision automation helped to reduce some of the decrements in performance. Contextual information helps overcome the costs of imperfect decision automation. Designers may mitigate some of the performance decrements experienced with imperfect automation by providing operators with interfaces that display contextual information, that is, the state of factors that affect the reliability of the automation's recommendation.
The Joint Confidence Level Paradox: A History of Denial
NASA Technical Reports Server (NTRS)
Butts, Glenn; Linton, Kent
2009-01-01
This paper is intended to provide a reliable methodology for those tasked with generating price tags on construction (C0F) and research and development (R&D) activities in the NASA performance world. This document consists of a collection of cost-related engineering detail and project fulfillment information from early agency days to the present. Accurate historical detail is the first place to start when determining improved methodologies for future cost and schedule estimating. This paper contains a beneficial proposed cost estimating method for arriving at more reliable numbers for future submits. When comparing current cost and schedule methods with earlier cost and schedule approaches, it became apparent that NASA's organizational performance paradigm has morphed. Mission fulfillment speed has slowed and cost calculating factors have increased in 21st Century space exploration.
Highlights of recent balance of system research and evaluation
NASA Astrophysics Data System (ADS)
Thomas, M. G.; Stevens, J. W.
The cost of most photovoltaic (PV) systems is more a function of the balance of system (BOS) components than the collectors. The exception to this rule is the grid-tied system whose cost is related more directly to the collectors, and secondarily to the inverter/controls. In fact, recent procurements throughout the country document that collector costs for roof-mounted, utility-tied systems (Russell, PV Systems Workshop, 7/94) represent 60% to 70% of the system cost. This contrasts with the current market for packaged stand-alone all PV or PV-hybrid systems where collectors represent only 25% to 35% of the total. Not only are the BOS components the cost drivers in the current cost-effective PV system market place, they are also the least reliable components. This paper discusses the impact that BOS issues have on component performance, system performance, and system cost and reliability. We will also look at recent recommended changes in system design based upon performance evaluations of fielded PV systems.
2013-08-01
cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/ STP method is presented in Section 4, including accuracy bounds and computational effort
The influence of various test plans on mission reliability. [for Shuttle Spacelab payloads
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
Methods have been developed for the evaluation of cost effective vibroacoustic test plans for Shuttle Spacelab payloads. The shock and vibration environments of components have been statistically represented, and statistical decision theory has been used to evaluate the cost effectiveness of five basic test plans with structural test options for two of the plans. Component, subassembly, and payload testing have been performed for each plan along with calculations of optimum test levels and expected costs. The tests have been ranked according to both minimizing expected project costs and vibroacoustic reliability. It was found that optimum costs may vary up to $6 million with the lowest plan eliminating component testing and maintaining flight vibration reliability via subassembly tests at high acoustic levels.
Mission Reliability Estimation for Repairable Robot Teams
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen
2010-01-01
A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today?s mandated budget, manpower, and schedule constraints. Stand ard based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in the development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, low cost, and tight schedule.
Analysis of the Seismic Performance of Isolated Buildings according to Life-Cycle Cost
Dang, Yu; Han, Jian-ping; Li, Yong-tao
2015-01-01
This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment. PMID:25653677
Analysis of the seismic performance of isolated buildings according to life-cycle cost.
Dang, Yu; Han, Jian-Ping; Li, Yong-Tao
2015-01-01
This paper proposes an indicator of seismic performance based on life-cycle cost of a building. It is expressed as a ratio of lifetime damage loss to life-cycle cost and determines the seismic performance of isolated buildings. Major factors are considered, including uncertainty in hazard demand and structural capacity, initial costs, and expected loss during earthquakes. Thus, a high indicator value indicates poor building seismic performance. Moreover, random vibration analysis is conducted to measure structural reliability and evaluate the expected loss and life-cycle cost of isolated buildings. The expected loss of an actual, seven-story isolated hospital building is only 37% of that of a fixed-base building. Furthermore, the indicator of the structural seismic performance of the isolated building is much lower in value than that of the structural seismic performance of the fixed-base building. Therefore, isolated buildings are safer and less risky than fixed-base buildings. The indicator based on life-cycle cost assists owners and engineers in making investment decisions in consideration of structural design, construction, and expected loss. It also helps optimize the balance between building reliability and building investment.
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2017-06-01
Standalone photovoltaic (SAPV) systems are seen as a promoting method of electrifying areas of developing world that lack power grid infrastructure. Proliferations of these systems require a design procedure that is simple, reliable and exhibit good performance over its life time. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array size (area), performance curves are obtained for optimal design of SAPV system with high amount of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with conventional design using monthly average daily load and insolation.
NREL to Lead New Consortium to Improve Reliability and Performance of Solar
for photovoltaics (PV) and lower the cost of electricity generated by solar power. The Durable Module the cost of electricity from photovoltaics." The Energy Department's Office of Energy Efficiency , DuraMat will address the substantial opportunities that exist for durable, high-performance, low-cost
Customer Dissatisfaction Index and its Improvement Costs
NASA Astrophysics Data System (ADS)
Lvovs, Aleksandrs; Mutule, Anna
2010-01-01
The paper gives description of customer dissatisfaction index (CDI) that can be used as reliability level characterizing factor. The factor is directly joined with customer satisfaction of power supply and can be used for control of reliability level of power supply for residential customers. CDI relations with other reliability indices are shown. Paper also gives a brief overview of legislation of Latvia in power industry that is the base for CDI introduction. Calculations of CDI improvement costs are performed in the paper too.
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and risk implications is managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler has partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost where the likely scenario lead-time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
Smart Water Conservation System for Irrigated Landscape. ESTCP Cost and Performance Report
2016-10-01
water use by as much as 70% in support of meeting EO 13693. Additional performance objectives were to validate energy reduction, cost effectiveness ...Additional performance objectives were to validate energy reduction, cost effectiveness , and system reliability while maintaining satisfactory plant health...developments. The demonstration was conducted for two different climatic regions in the southwestern part of the United States (U.S.), where a typical
Reducing maintenance costs in agreement with CNC machine tools reliability
NASA Astrophysics Data System (ADS)
Ungureanu, A. L.; Stan, G.; Butunoi, P. A.
2016-08-01
Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.
On Reliable and Efficient Data Gathering Based Routing in Underwater Wireless Sensor Networks.
Liaqat, Tayyaba; Akbar, Mariam; Javaid, Nadeem; Qasim, Umar; Khan, Zahoor Ali; Javaid, Qaisar; Alghamdi, Turki Ali; Niaz, Iftikhar Azim
2016-08-30
This paper presents cooperative routing scheme to improve data reliability. The proposed protocol achieves its objective, however, at the cost of surplus energy consumption. Thus sink mobility is introduced to minimize the energy consumption cost of nodes as it directly collects data from the network nodes at minimized communication distance. We also present delay and energy optimized versions of our proposed RE-AEDG to further enhance its performance. Simulation results prove the effectiveness of our proposed RE-AEDG in terms of the selected performance matrics.
Reliability of hospital cost profiles in inpatient surgery.
Grenda, Tyler R; Krell, Robert W; Dimick, Justin B
2016-02-01
With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
Organize to manage reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricketts, R.
An analysis of maintenance costs in hydrocarbon processing industry (HPI) plants has revealed that attitudes and practices of personnel are the major single bottom line factor. In reaching this conclusion, Solomon Associates examined comparative analysis of plant records over the past decade. The authors learned that there was a wide range of performance independent of refinery age, capacity, processing complexity, and location. Facilities of all extremes in these attributes are included in both high-cost and low-cost categories. Those in the lowest quartile of performance posted twice the resource consumption as the best quartile. Furthermore, there was almost no similarity betweenmore » refineries within a single company. The paper discusses cost versus availability, maintenance spending, two organizational approaches used (repair focused and reliability focused), and organizational style and structure.« less
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, Human Errors have caused costly failures and workplace injuries. These have included: airline mishaps, medical malpractice, administration of medication and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments that range from identifying the most likely areas for concern to detailed assessments with human error failure probabilities calculated. Which methodology to use would be based on a variety of factors that would include: 1) how people react and act in different industries, and differing expectations based on industries standards, 2) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 3) type and availability of data and 4) how the industry views risk & reliability influences ( types of emergencies, contingencies and routine tasks versus cost based concerns). The Human Reliability Assessments should be the first step to reduce, mitigate or eliminate the costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
Romero-Franco, Natalia; Jiménez-Reyes, Pedro; Montaño-Munuera, Juan A
2017-11-01
Lower limb isometric strength is a key parameter to monitor the training process or recognise muscle weakness and injury risk. However, valid and reliable methods to evaluate it often require high-cost tools. The aim of this study was to analyse the concurrent validity and reliability of a low-cost digital dynamometer for measuring isometric strength in lower limb. Eleven physically active and healthy participants performed maximal isometric strength for: flexion and extension of ankle, flexion and extension of knee, flexion, extension, adduction, abduction, internal and external rotation of hip. Data obtained by the digital dynamometer were compared with the isokinetic dynamometer to examine its concurrent validity. Data obtained by the digital dynamometer from 2 different evaluators and 2 different sessions were compared to examine its inter-rater and intra-rater reliability. Intra-class correlation (ICC) for validity was excellent in every movement (ICC > 0.9). Intra and inter-tester reliability was excellent for all the movements assessed (ICC > 0.75). The low-cost digital dynamometer demonstrated strong concurrent validity and excellent intra and inter-tester reliability for assessing isometric strength in the main lower limb movements.
Stoffels, I; Dissemond, J; Körber, A; Hillen, U; Poeppel, T; Schadendorf, D; Klode, J
2011-03-01
Sentinel lymph node excision (SLNE) for the detection of regional nodal metastases and staging of malignant melanoma has resulted in some controversies in international discussions, as it is a cost-intensive surgical intervention with potentially significant morbidity. The present retrospective study seeks to clarify the effectiveness and reliability of SLNE performed under tumescent local anaesthesia (TLA) and whether SLNE performed under TLA can reduce costs and morbidity. Therefore, our study is a comparison of SLNE performed under TLA and general anaesthesia (GA). We retrospectively analysed data from 300 patients with primary malignant melanoma with a Breslow index of ≥1.0 mm. Altogether, 211 (70.3%) patients underwent SLNE under TLA and 89 (29.7%) patients underwent SLNE under GA. A total of 637 sentinel lymph nodes (SLN) were removed. In the TLA group 1.98 SLN/patient and in the GA group 2.46 SLN/patient were removed (median value). Seventy patients (23.3%) had a positive SLN. No major complications occurred. The costs for SLNE were significantly less for the SLNE in a procedures room performed under TLA (mean € 30.64) compared with SLNE in an operating room under GA (mean € 326.14, P<0.0001). In conclusion, SLNE performed under TLA is safe, reliable, and cost-efficient and could become the new gold standard in sentinel lymph node diagnostic procedures. © 2010 The Authors. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.
NASA Astrophysics Data System (ADS)
Riggs, William R.
1994-05-01
SHARP is a Navy wide logistics technology development effort aimed at reducing the acquisition costs, support costs, and risks of military electronic weapon systems while increasing the performance capability, reliability, maintainability, and readiness of these systems. Lower life cycle costs for electronic hardware are achieved through technology transition, standardization, and reliability enhancement to improve system affordability and availability as well as enhancing fleet modernization. Advanced technology is transferred into the fleet through hardware specifications for weapon system building blocks of standard electronic modules, standard power systems, and standard electronic systems. The product lines are all defined with respect to their size, weight, I/O, environmental performance, and operational performance. This method of defining the standard is very conducive to inserting new technologies into systems using the standard hardware. This is the approach taken thus far in inserting photonic technologies into SHARP hardware. All of the efforts have been related to module packaging; i.e. interconnects, component packaging, and module developments. Fiber optic interconnects are discussed in this paper.
A low-cost, high-field-strength magnetic resonance imaging-compatible actuator.
Secoli, Riccardo; Robinson, Matthew; Brugnoli, Michele; Rodriguez y Baena, Ferdinando
2015-03-01
To perform minimally invasive surgical interventions with the aid of robotic systems within a magnetic resonance imaging scanner offers significant advantages compared to conventional surgery. However, despite the numerous exciting potential applications of this technology, the introduction of magnetic resonance imaging-compatible robotics has been hampered by safety, reliability and cost concerns: the robots should not be attracted by the strong magnetic field of the scanner and should operate reliably in the field without causing distortion to the scan data. Development of non-conventional sensors and/or actuators is thus required to meet these strict operational and safety requirements. These demands commonly result in expensive actuators, which mean that cost effectiveness remains a major challenge for such robotic systems. This work presents a low-cost, high-field-strength magnetic resonance imaging-compatible actuator: a pneumatic stepper motor which is controllable in open loop or closed loop, along with a rotary encoder, both fully manufactured in plastic, which are shown to perform reliably via a set of in vitro trials while generating negligible artifacts when imaged within a standard clinical scanner. © IMechE 2015.
Thermoelectric Outer Planets Spacecraft (TOPS)
NASA Technical Reports Server (NTRS)
1973-01-01
The research and advanced development work is reported on a ballistic-mode, outer planet spacecraft using radioisotope thermoelectric generator (RTG) power. The Thermoelectric Outer Planet Spacecraft (TOPS) project was established to provide the advanced systems technology that would allow the realistic estimates of performance, cost, reliability, and scheduling that are required for an actual flight mission. A system design of the complete RTG-powered outer planet spacecraft was made; major technical innovations of certain hardware elements were designed, developed, and tested; and reliability and quality assurance concepts were developed for long-life requirements. At the conclusion of its active phase, the TOPS Project reached its principal objectives: a development and experience base was established for project definition, and for estimating cost, performance, and reliability; an understanding of system and subsystem capabilities for successful outer planets missions was achieved. The system design answered long-life requirements with massive redundancy, controlled by on-board analysis of spacecraft performance data.
Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J
2014-05-01
Recent guidelines advocate sports medicine professionals to use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Criterion validity testing relative to a gold standard and 7 day test-retest reliability. University biomechanics laboratory. Thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (ie, scientific grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores were determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
Using Ensemble Decisions and Active Selection to Improve Low-Cost Labeling for Multi-View Data
NASA Technical Reports Server (NTRS)
Rebbapragada, Umaa; Wagstaff, Kiri L.
2011-01-01
This paper seeks to improve low-cost labeling in terms of training set reliability (the fraction of correctly labeled training items) and test set performance for multi-view learning methods. Co-training is a popular multiview learning method that combines high-confidence example selection with low-cost (self) labeling. However, co-training with certain base learning algorithms significantly reduces training set reliability, causing an associated drop in prediction accuracy. We propose the use of ensemble labeling to improve reliability in such cases. We also discuss and show promising results on combining low-cost ensemble labeling with active (low-confidence) example selection. We unify these example selection and labeling strategies under collaborative learning, a family of techniques for multi-view learning that we are developing for distributed, sensor-network environments.
Reliability based design optimization: Formulations and methodologies
NASA Astrophysics Data System (ADS)
Agarwal, Harish
Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.
Fiber Access Networks: Reliability Analysis and Swedish Broadband Market
NASA Astrophysics Data System (ADS)
Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp
Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas particularly Swedish operators prefer AON, this may not be the case for operators in other countries. The choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to the network services, connection availability is becoming one of the most crucial issues for access networks, which should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade off between the cost of protection and the level of service reliability since improving reliability performance by duplication of network resources (and capital expenditures CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works focused on PON protection schemes with reduced CAPEX the current and future effort should be put on minimizing the operational expenditures (OPEX) during the access network lifetime.
Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization
NASA Technical Reports Server (NTRS)
Baker, Robert L.
1993-01-01
The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried under NASA Langley Research Contract NAS1-17964, Task Assignment 28.
Radiation Challenges for Electronics in the Vision for Space Exploration
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2006-01-01
The slides present a brief snapshot discussing electronics and exploration-related challenges. Radiation effects have been the prime target, however, electronic parts reliability issues must also be considered. Modern electronics are designed with a 3-5 year lifetime. Upscreening does not improve reliability, merely determines inherent levels. Testing costs are driven by device complexity; they increase tester complexity, beam requirements, and facility choices. Commercial devices may improve performance, but are not cost panaceas. There is need for a more cost-effective access to high energy heavy ion facilities such as NSCL and NSRL. Costs for capable test equipment can run more than $1M for full testing.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-01-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245
Chen, Qingkui; Zhao, Deyu; Wang, Jingjuan
2017-01-01
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes’ diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services. PMID:28777325
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
Three phase power conversion system for utility interconnected PV applications
NASA Astrophysics Data System (ADS)
Porter, David G.
1999-03-01
Omnion Power Engineering Corporation has developed a new three phase inverter that improves the cost, reliability, and performance of three phase utility interconnected photovoltaic inverters. The inverter uses a new, high manufacturing volume IGBT bridge that has better thermal performance than previous designs. A custom easily manufactured enclosure was designed. Controls were simplified to increase reliability while maintaining important user features.
A study of low-cost reliable actuators for light aircraft. Part A: Chapters 1-8
NASA Technical Reports Server (NTRS)
Eijsink, H.; Rice, M.
1978-01-01
An analysis involving electro-mechanical, electro-pneumatic, and electro-hydraulic actuators was performed to study which are compatible for use in the primary and secondary flight controls of a single engine light aircraft. Actuator characteristics under investigation include cost, reliability, weight, force, volumetric requirements, power requirements, response characteristics and heat accumulation characteristics. The basic types of actuators were compared for performance characteristics in positioning a control surface model and then were mathematically evaluated in an aircraft to get the closed loop dynamic response characteristics. Conclusions were made as to the suitability of each actuator type for use in an aircraft.
The Delta Launch Vehicle Model 2914 Series
NASA Technical Reports Server (NTRS)
Gunn, C. R.
1973-01-01
The newest Delta launch vehicle configuration, Model 2914 is described for potential users together with recent flight results. A functional description of the vehicle, its performance, flight profile, flight environment, injection accuracy, spacecraft integration requirements, user organizational interfaces, launch operations, costs and reimbursable users payment plan are provided. The versatile, relatively low cost Delta has a flight demonstrated reliability record of 92 percent that has been established in 96 launches over twelve years while concurrently undergoing ten major upratings to keep pace with the ever increasing performance and reliability requirements of its users. At least 40 more launches are scheduled over the next three years from the Eastern and Western Test Ranges.
Taguchi Approach to Design Optimization for Quality and Cost: An Overview
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.
1990-01-01
Calibrations to existing cost of doing business in space indicate that to establish human presence on the Moon and Mars with the Space Exploration Initiative (SEI) will require resources, felt by many, to be more than the national budget can afford. In order for SEI to succeed, we must actually design and build space systems at lower cost this time, even with tremendous increases in quality and performance requirements, such as extremely high reliability. This implies that both government and industry must change the way they do business. Therefore, new philosophy and technology must be employed to design and produce reliable, high quality space systems at low cost. In recognizing the need to reduce cost and improve quality and productivity, Department of Defense (DoD) and National Aeronautics and Space Administration (NASA) have initiated Total Quality Management (TQM). TQM is a revolutionary management strategy in quality assurance and cost reduction. TQM requires complete management commitment, employee involvement, and use of statistical tools. The quality engineering methods of Dr. Taguchi, employing design of experiments (DOE), is one of the most important statistical tools of TQM for designing high quality systems at reduced cost. Taguchi methods provide an efficient and systematic way to optimize designs for performance, quality, and cost. Taguchi methods have been used successfully in Japan and the United States in designing reliable, high quality products at low cost in such areas as automobiles and consumer electronics. However, these methods are just beginning to see application in the aerospace industry. The purpose of this paper is to present an overview of the Taguchi methods for improving quality and reducing cost, describe the current state of applications and its role in identifying cost sensitive design parameters.
High-reliability gas-turbine combined-cycle development program: Phase II. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. This volume presents information of the reliability, availability, and maintainability (RAM) analysis of a representative plant and the preliminary design of the gas turbine, the gas turbine ancillaries, and the balance of plant including themore » steam turbine generator. To achieve the program goals, a gas turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean time between failures in excess of 40,000 hours. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of component with inherent high reliability. A digital control system was included with logic, communications, sensor redundancy, and mandual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failure of nearly twice the 3000-hour EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricity compared to present market offerings.« less
High efficiency low cost monolithic module for SARSAT distress beacons
NASA Technical Reports Server (NTRS)
Petersen, Wendell C.; Siu, Daniel P.
1992-01-01
The program objectives were to develop a highly efficient, low cost RF module for SARSAT beacons; achieve significantly lower battery current drain, amount of heat generated, and size of battery required; utilize MMIC technology to improve efficiency, reliability, packaging, and cost; and provide a technology database for GaAs based UHF RF circuit architectures. Presented in viewgraph form are functional block diagrams of the SARSAT distress beacon and beacon RF module as well as performance goals, schematic diagrams, predicted performances, and measured performances for the phase modulator and power amplifier.
Improving the Defense Acquisition System and Reducing System Costs
1981-03-30
The need for this specific commitment results from the competition among the conflicting objectives of high perform- ance, lower cost, shorter... conflict with initiatives to improve reliability and support. Whereas the fastest acquisition approach involves initiating production prxor to...their Individual thrusts result in confusion on the part of OASD who tries to implement conflicting programs, and of defense contractors performing
Application of hybrid propulsion systems to planetary missions
NASA Technical Reports Server (NTRS)
Don, J. P.; Phen, R. L.
1971-01-01
The feasibility and application of hybrid rocket propulsion to outer-planet orbiter missions is assessed in this study and guidelines regarding future development are provided. A Jupiter Orbiter Mission was selected for evaluation because it is the earliest planetary mission which may require advanced chemical propulsion. Mission and spacecraft characteristics which affect the selection and design of propulsion subsystems are presented. Alternative propulsion subsystems, including space-storable bipropellant liquids, a solid/monopropellant vernier, and a hybrid, are compared on the basis of performance, reliability, and cost. Cost-effectiveness comparisons are made for a range of assumptions including variation in (1) the level of need for spacecraft performance (determined in part by launch vehicle injected mass capability), and (2) achievable reliability at corresponding costs. The results indicated that the hybrid and space-storable bipropellant mechanizations are competitive.
Benchmarking Heavy Ion Transport Codes FLUKA, HETC-HEDS MARS15, MCNPX, and PHITS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronningen, Reginald Martin; Remec, Igor; Heilbronn, Lawrence H.
Powerful accelerators such as spallation neutron sources, muon-collider/neutrino facilities, and rare isotope beam facilities must be designed with the consideration that they handle the beam power reliably and safely, and they must be optimized to yield maximum performance relative to their design requirements. The simulation codes used for design purposes must produce reliable results. If not, component and facility designs can become costly, have limited lifetime and usefulness, and could even be unsafe. The objective of this proposal is to assess the performance of the currently available codes PHITS, FLUKA, MARS15, MCNPX, and HETC-HEDS that could be used for designmore » simulations involving heavy ion transport. We plan to access their performance by performing simulations and comparing results against experimental data of benchmark quality. Quantitative knowledge of the biases and the uncertainties of the simulations is essential as this potentially impacts the safe, reliable and cost effective design of any future radioactive ion beam facility. Further benchmarking of heavy-ion transport codes was one of the actions recommended in the Report of the 2003 RIA R&D Workshop".« less
Design optimization for cost and quality: The robust design approach
NASA Technical Reports Server (NTRS)
Unal, Resit
1990-01-01
Designing reliable, low cost, and operable space systems has become the key to future space operations. Designing high quality space systems at low cost is an economic and technological challenge to the designer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost, called Robust Design. Robust Design is an approach for design optimization. It consists of: making system performance insensitive to material and subsystem variation, thus allowing the use of less costly materials and components; making designs less sensitive to the variations in the operating environment, thus improving reliability and reducing operating costs; and using a new structured development process so that engineering time is used most productively. The objective in Robust Design is to select the best combination of controllable design parameters so that the system is most robust to uncontrollable noise factors. The robust design methodology uses a mathematical tool called an orthogonal array, from design of experiments theory, to study a large number of decision variables with a significantly small number of experiments. Robust design also uses a statistical measure of performance, called a signal-to-noise ratio, from electrical control theory, to evaluate the level of performance and the effect of noise factors. The purpose is to investigate the Robust Design methodology for improving quality and cost, demonstrate its application by the use of an example, and suggest its use as an integral part of space system design process.
Electric service reliability cost/worth assessment in a developing country
NASA Astrophysics Data System (ADS)
Pandey, Mohan Kumar
Considerable work has been done in developed countries to optimize the reliability of electric power systems on the basis of reliability cost versus reliability worth. This has yet to be considered in most developing countries, where development plans are still based on traditional deterministic measures. The difficulty with these criteria is that they cannot be used to evaluate the economic impacts of changing reliability levels on the utility and the customers, and therefore cannot lead to an optimum expansion plan for the system. The critical issue today faced by most developing countries is that the demand for electric power is high and growth in supply is constrained by technical, environmental, and most importantly by financial impediments. Many power projects are being canceled or postponed due to a lack of resources. The investment burden associated with the electric power sector has already led some developing countries into serious debt problems. This thesis focuses on power sector issues facing by developing countries and illustrates how a basic reliability cost/worth approach can be used in a developing country to determine appropriate planning criteria and justify future power projects by application to the Nepal Integrated Electric Power System (NPS). A reliability cost/worth based system evaluation framework is proposed in this thesis. Customer surveys conducted throughout Nepal using in-person interviews with approximately 2000 sample customers are presented. The survey results indicate that the interruption cost is dependent on both customer and interruption characteristics, and it varies from one location or region to another. Assessments at both the generation and composite system levels have been performed using the customer cost data and the developed NPS reliability database. The results clearly indicate the implications of service reliability to the electricity consumers of Nepal, and show that the reliability cost/worth evaluation is both possible and practical in a developing country. The average customer interruption costs of Rs 35/kWh at Hierarchical Level I and Rs 26/kWh at Hierarchical Level II evaluated in this research work led to an optimum reserve margin of 7.5%, which is considerably lower than the traditional reserve margin of 15% used in the NPS. A similar conclusion may result in other developing countries facing difficulties in power system expansion planning using the traditional approach. A new framework for system planning is therefore recommended for developing countries which would permit an objective review of the traditional system planning approach, and the evaluation of future power projects using a new approach based on fundamental principles of power system reliability and economics.
A cost assessment of reliability requirements for shuttle-recoverable experiments
NASA Technical Reports Server (NTRS)
Campbell, J. W.
1975-01-01
The relaunching of unsuccessful experiments or satellites will become a real option with the advent of the space shuttle. An examination was made of the cost effectiveness of relaxing reliability requirements for experiment hardware by allowing more than one flight of an experiment in the event of its failure. Any desired overall reliability or probability of mission success can be acquired by launching an experiment with less reliability two or more times if necessary. Although this procedure leads to uncertainty in total cost projections, because the number of flights is not known in advance, a considerable cost reduction can sometimes be achieved. In cases where reflight costs are low relative to the experiment's cost, three flights with overall reliability 0.9 can be made for less than half the cost of one flight with a reliability of 0.9. An example typical of shuttle payload cost projections is cited where three low reliability flights would cost less than $50 million and a single high reliability flight would cost over $100 million. The ratio of reflight cost to experiment cost is varied and its effect on the range in total cost is observed. An optimum design reliability selection criterion to minimize expected cost is proposed, and a simple graphical method of determining this reliability is demonstrated.
Power Electronics and Electric Machines | Transportation Research | NREL
-to resource for information from cutting-edge thermal management research, making wide-scale adoption battery, the motor, and other powertrain components. NREL's thermal management and reliability research is thermal management technologies to improve performance, cost, and reliability for power electronics and
NREL to Research Revolutionary Battery Storage Approaches in Support of
adoption by dramatically improving driving range and reliability, and by providing low-cost carbon have the potential to meet the demanding safety, cost and performance levels for EVs set by ARPA-E, but materials to develop a new low-cost battery that operates similar to a flow battery, where chemical energy
Medical image digital archive: a comparison of storage technologies
NASA Astrophysics Data System (ADS)
Chunn, Timothy; Hutchings, Matt
1998-07-01
A cost effective, high capacity digital archive system is one of the remaining key factors that will enable a radiology department to eliminate film as an archive medium. The ever increasing amount of digital image data is creating the need for huge archive systems that can reliably store and retrieve millions of images and hold from a few terabytes of data to possibly hundreds of terabytes. Selecting the right archive solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scaleability in capacity and performance, conformance to open standards, archive availability and reliability, security, cost, achievable benefits and cost savings, investment protection, and more. This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. New technologies will be discussed, such as DVD and high performance tape. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on random and pre-fetch retrieval time will be analyzed. The concept of automated migration of images from high performance, RAID disk storage devices to high capacity, NearlineR storage devices will be introduced as a viable way to minimize overall storage costs for an archive.
High-reliability gas-turbine combined-cycle development program: Phase II, Volume 3. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hecht, K.G.; Sanderson, R.A.; Smith, M.J.
This three-volume report presents the results of Phase II of the multiphase EPRI-sponsored High-Reliability Gas Turbine Combined-Cycle Development Program whose goal is to achieve a highly reliable gas turbine combined-cycle power plant, available by the mid-1980s, which would be an economically attractive baseload generation alternative for the electric utility industry. The Phase II program objective was to prepare the preliminary design of this power plant. The power plant was addressed in three areas: (1) the gas turbine, (2) the gas turbine ancillaries, and (3) the balance of plant including the steam turbine generator. To achieve the program goals, a gasmore » turbine was incorporated which combined proven reliability characteristics with improved performance features. This gas turbine, designated the V84.3, is the result of a cooperative effort between Kraftwerk Union AG and United Technologies Corporation. Gas turbines of similar design operating in Europe under baseload conditions have demonstrated mean time between failures in excess of 40,000. The reliability characteristics of the gas turbine ancillaries and balance-of-plant equipment were improved through system simplification and component redundancy and by selection of component with inherent high reliability. A digital control system was included with logic, communications, sensor redundancy, and manual backup. An independent condition monitoring and diagnostic system was also included. Program results provide the preliminary design of a gas turbine combined-cycle baseload power plant. This power plant has a predicted mean time between failure of nearly twice the 3000-h EPRI goal. The cost of added reliability features is offset by improved performance, which results in a comparable specific cost and an 8% lower cost of electricty compared to present market offerings.« less
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient in the ever-changing SCAs in WSNs. In this paper, we consider the SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF) we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Based on this model, an efficient optimization algorithm for reliability and performance of SCAs in WSNs is developed based on a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault-tolerance in WSNs. In order to examine the feasibility of our algorithm, we have evaluated the performance. Furthermore, the interrelationships between the reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
Capital and Operating Costs of Small Arsenic Removal Adsorptive Media Systems
The U.S. Environmental Protection Agency (EPA) conducted 50 full-scale demonstration projects on treatment systems removing arsenic from drinking water in 26 states throughout the U.S. The projects were conducted to evaluate the performance, reliability, and cost of arsenic remo...
NASA Astrophysics Data System (ADS)
Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.
2014-07-01
Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via the water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling, time-proportional sampling and passive sampling using flow proportional samplers. Assuming time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.
NASA Astrophysics Data System (ADS)
Audet, J.; Martinsen, L.; Hasler, B.; de Jonge, H.; Karydi, E.; Ovesen, N. B.; Kronvang, B.
2014-11-01
Eutrophication of aquatic ecosystems caused by excess concentrations of nitrogen and phosphorus may have harmful consequences for biodiversity and poses a health risk to humans via water supplies. Reduction of nitrogen and phosphorus losses to aquatic ecosystems involves implementation of costly measures, and reliable monitoring methods are therefore essential to select appropriate mitigation strategies and to evaluate their effects. Here, we compare the performances and costs of three methodologies for the monitoring of nutrients in rivers: grab sampling; time-proportional sampling; and passive sampling using flow-proportional samplers. Assuming hourly time-proportional sampling to be the best estimate of the "true" nutrient load, our results showed that the risk of obtaining wrong total nutrient load estimates by passive samplers is high despite similar costs as the time-proportional sampling. Our conclusion is that for passive samplers to provide a reliable monitoring alternative, further development is needed. Grab sampling was the cheapest of the three methods and was more precise and accurate than passive sampling. We conclude that although monitoring employing time-proportional sampling is costly, its reliability precludes unnecessarily high implementation expenses.
Reliable contact fabrication on nanostructured Bi2Te3-based thermoelectric materials.
Feng, Shien-Ping; Chang, Ya-Huei; Yang, Jian; Poudel, Bed; Yu, Bo; Ren, Zhifeng; Chen, Gang
2013-05-14
A cost-effective and reliable Ni-Au contact on nanostructured Bi2Te3-based alloys for a solar thermoelectric generator (STEG) is reported. The use of MPS SAMs creates a strong covalent binding and more nucleation sites with even distribution for electroplating contact electrodes on nanostructured thermoelectric materials. A reliable high-performance flat-panel STEG can be obtained by using this new method.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
DPSSL and FL pumps based on 980-nm telecom pump laser technology: changing the industry
NASA Astrophysics Data System (ADS)
Lichtenstein, Norbert; Schmidt, Berthold E.; Fily, Arnaud; Weiss, Stefan; Arlt, Sebastian; Pawlik, Susanne; Sverdlov, Boris; Muller, Jurgen; Harder, Christoph S.
2004-06-01
Diode-pumped solid state laser (DPSSL) and fiber laser (FL) are believed to become the dominant systems of very high power lasers in the industrial environment. Today, ranging from 100 W to 5 - 10 kW in light output power, their field of applications spread from biomedical and sensoring to material processing. Key driver for the wide spread of such systems is a competitive ratio of cost, performance and reliability. Enabling high power, highly reliable broad-area laser diodes and laser diode bars with excellent performance at the relevant wavelengths can further optimize this ratio. In this communication we present, that this can be achieved by leveraging the tremendous improvements in reliability and performance together with the high volume, low cost manufacturing areas established during the "telecom-bubble." From today's generations of 980-nm narrow-stripe laser diodes 1.8 W of maximum CW output power can be obtained fulfilling the stringent telecom reliability at operating conditions. Single-emitter broad-area lasers deliver in excess of 11 W CW while from similar 940-nm laser bars more than 160 W output power (CW) can be obtained at 200 A. In addition, introducing telecom-grade AuSn-solder mounting technology on expansion matched subassemblies enables excellent reliability performance. Degradation rates of less than 1% over 1000 h at 60 A are observed for both 808-nm and 940-nm laser bars even under harsh intermittent operation conditions.
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-01-01
This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-02-23
This article investigates the dynamic topology control problemof satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites' relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.
Tutorial: Performance and reliability in redundant disk arrays
NASA Technical Reports Server (NTRS)
Gibson, Garth A.
1993-01-01
A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boopathy, Ramaraj
2012-12-31
CPERC’s activities focused on two major themes: (a) cost-effective production of next-generation fuels with a focus on hydrogen from gasification and biofuels (primarily ethanol and butanol), and (b) efficient utilization of hydrogen and biofuels for power generation with a focus on improved performance, greater reliability and reduced energy costs.
Reliability Assessment for Low-cost Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Freeman, Paul Michael
Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
ERIC Educational Resources Information Center
Duncombe, William
2006-01-01
Reforming school finance systems to support performance standards entails estimating the cost of an adequate education. Cost of adequacy (COA) studies have been done in more than 30 states. Recently Eric Hanushek challenged the legitimacy of COA research, calling it alchemy and pseudoscience. The objectives of this study are to present reliability…
NASA Technical Reports Server (NTRS)
Gerke, R. David; Sandor, Mike; Agarwal, Shri; Moor, Andrew F.; Cooper, Kim A.
2000-01-01
Engineers within the commercial and aerospace industries are using trade-off and risk analysis to aid in reducing spacecraft system cost while increasing performance and maintaining high reliability. In many cases, Commercial Off-The-Shelf (COTS) components, which include Plastic Encapsulated Microcircuits (PEMs), are candidate packaging technologies for spacecrafts due to their lower cost, lower weight and enhanced functionality. Establishing and implementing a parts program that effectively and reliably makes use of these potentially less reliable, but state-of-the-art devices, has become a significant portion of the job for the parts engineer. Assembling a reliable high performance electronic system, which includes COTS components, requires that the end user assume a risk. To minimize the risk involved, companies have developed methodologies by which they use accelerated stress testing to assess the product and reduce the risk involved to the total system. Currently, there are no industry standard procedures for accomplishing this risk mitigation. This paper will present the approaches for reducing the risk of using PEMs devices in space flight systems as developed by two independent Laboratories. The JPL procedure involves primarily a tailored screening with accelerated stress philosophy while the APL procedure is primarily, a lot qualification procedure. Both Laboratories successfully have reduced the risk of using the particular devices for their respective systems and mission requirements.
Choosing a reliability inspection plan for interval censored data
Lu, Lu; Anderson-Cook, Christine Michaela
2017-04-19
Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Resultsmore » show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.« less
Choosing a reliability inspection plan for interval censored data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Lu; Anderson-Cook, Christine Michaela
Reliability test plans are important for producing precise and accurate assessment of reliability characteristics. This paper explores different strategies for choosing between possible inspection plans for interval censored data given a fixed testing timeframe and budget. A new general cost structure is proposed for guiding precise quantification of total cost in inspection test plan. Multiple summaries of reliability are considered and compared as the criteria for choosing the best plans using an easily adapted method. Different cost structures and representative true underlying reliability curves demonstrate how to assess different strategies given the logistical constraints and nature of the problem. Resultsmore » show several general patterns exist across a wide variety of scenarios. Given the fixed total cost, plans that inspect more units with less frequency based on equally spaced time points are favored due to the ease of implementation and consistent good performance across a large number of case study scenarios. Plans with inspection times chosen based on equally spaced probabilities offer improved reliability estimates for the shape of the distribution, mean lifetime, and failure time for a small fraction of population only for applications with high infant mortality rates. The paper uses a Monte Carlo simulation based approach in addition to the common evaluation based on the asymptotic variance and offers comparison and recommendation for different applications with different objectives. Additionally, the paper outlines a variety of different reliability metrics to use as criteria for optimization, presents a general method for evaluating different alternatives, as well as provides case study results for different common scenarios.« less
Hawaii electric system reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva Monroy, Cesar Augusto; Loose, Verne William
2012-09-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability %E2%80%9Cworth%E2%80%9D and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and formore » application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.« less
Hawaii Electric System Reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loose, Verne William; Silva Monroy, Cesar Augusto
2012-08-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and formore » application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.« less
Reliability enhancement through optimal burn-in
NASA Astrophysics Data System (ADS)
Kuo, W.
1984-06-01
A numerical reliability and cost model is defined for production line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonreparable component; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum cost is calculated among the costs and total time of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
Access information about how CHP systems work; their efficiency, environmental, economic, and reliability benefits; the cost and performance characteristics of CHP technologies; and how to calculate CHP efficiency emissions savings.
Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction
NASA Astrophysics Data System (ADS)
Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad
2018-03-01
In making bearing procurement analysis, price and its reliability must be considered as decision criteria, since price determines the direct cost as acquisition cost and reliability of bearing determine the indirect cost such as maintenance cost. Despite the indirect cost is hard to identify and measured, it has high contribution to overall cost that will be incurred. So, the indirect cost of reliability must be considered when making bearing procurement analysis. This paper tries to explain bearing evaluation method with the total cost of ownership analysis to consider price and maintenance cost as decision criteria. Furthermore, since there is a lack of failure data when bearing evaluation phase is conducted, reliability prediction method is used to predict bearing reliability from its dynamic load rating parameter. With this method, bearing with a higher price but has higher reliability is preferable for long-term planning. But for short-term planning the cheaper one but has lower reliability is preferable. This contextuality can give rise to conflict between stakeholders. Thus, the planning horizon needs to be agreed by all stakeholder before making a procurement decision.
NASA Astrophysics Data System (ADS)
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2012-09-01
The global rise in energy demands brings major obstacles to many energy organizations in providing adequate energy supply. Hence, many techniques to generate cost effective, reliable and environmentally friendly alternative energy source are being explored. One such method is the integration of photovoltaic cells, wind turbine generators and fuel-based generators, included with storage batteries. This sort of power systems are known as distributed generation (DG) power system. However, the application of DG power systems raise certain issues such as cost effectiveness, environmental impact and reliability. The modelling as well as the optimization of this DG power system was successfully performed in the previous work using Particle Swarm Optimization (PSO). The central idea of that work was to minimize cost, minimize emissions and maximize reliability (multi-objective (MO) setting) with respect to the power balance and design requirements. In this work, we introduce a fuzzy model that takes into account the uncertain nature of certain variables in the DG system which are dependent on the weather conditions (such as; the insolation and wind speed profiles). The MO optimization in a fuzzy environment was performed by applying the Hopfield Recurrent Neural Network (HNN). Analysis on the optimized results was then carried out.
Development of low cost custom hybrid microcircuit technology
NASA Technical Reports Server (NTRS)
Perkins, K. L.; Licari, J. J.
1981-01-01
Selected potentially low cost, alternate packaging and interconnection techniques were developed and implemented in the manufacture of specific NASA/MSFC hardware, and the actual cost savings achieved by their use. The hardware chosen as the test bed for this evaluation ws the hybrids and modules manufactured by Rockwell International fo the MSFC Flight Accelerometer Safety Cut-Off System (FASCOS). Three potentially low cost packaging and interconnection alternates were selected for evaluation. This study was performed in three phases: hardware fabrication and testing, cost comparison, and reliability evaluation.
Novel Low Cost, High Reliability Wind Turbine Drivetrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chobot, Anthony; Das, Debarshi; Mayer, Tyler
2012-09-13
Clipper Windpower, in collaboration with United Technologies Research Center, the National Renewable Energy Laboratory, and Hamilton Sundstrand Corporation, developed a low-cost, deflection-compliant, reliable, and serviceable chain drive speed increaser. This chain and sprocket drivetrain design offers significant breakthroughs in the areas of cost and serviceability and addresses the key challenges of current geared and direct-drive systems. The use of gearboxes has proven to be challenging; the large torques and bending loads associated with use in large multi-MW wind applications have generally limited demonstrated lifetime to 8-10 years [1]. The large cost of gearbox replacement and the required use of large,more » expensive cranes can result in gearbox replacement costs on the order of $1M, representing a significant impact to overall cost of energy (COE). Direct-drive machines eliminate the gearbox, thereby targeting increased reliability and reduced life-cycle cost. However, the slow rotational speeds require very large and costly generators, which also typically have an undesirable dependence on expensive rare-earth magnet materials and large structural penalties for precise air gap control. The cost of rare-earth materials has increased 20X in the last 8 years representing a key risk to ever realizing the promised cost of energy reductions from direct-drive generators. A common challenge to both geared and direct drive architectures is a limited ability to manage input shaft deflections. The proposed Clipper drivetrain is deflection-compliant, insulating later drivetrain stages and generators from off-axis loads. The system is modular, allowing for all key parts to be removed and replaced without the use of a high capacity crane. Finally, the technology modularity allows for scalability and many possible drivetrain topologies. These benefits enable reductions in drivetrain capital cost by 10.0%, levelized replacement and O&M costs by 26.7%, and overall cost of energy by 10.2%. This design was achieved by: (1) performing an extensive optimization study that deter-mined the preliminary cost for all practical chain drive topologies to ensure the most competitive configuration; (2) conducting detailed analysis of chain dynamics, contact stresses, and wear and efficiency characteristics over the chain's life to ensure accurate physics-based predictions of chain performance; and (3) developing a final product design, including reliability analysis, chain replacement procedures, and bearing and sprocket analysis. Definition of this final product configuration was used to develop refined cost of energy estimates. Finally, key system risks for the chain drive were defined and a comprehensive risk reduction plan was created for execution in Phase 2.« less
Systems Integration Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016-06-01
This fact sheet is an overview of the Systems Integration subprogram at the U.S. Department of Energy SunShot Initiative. The Systems Integration subprogram enables the widespread deployment of safe, reliable, and cost-effective solar energy technologies by addressing the associated technical and non-technical challenges. These include timely and cost-effective interconnection procedures, optimal system planning, accurate prediction of solar resources, monitoring and control of solar power, maintaining grid reliability and stability, and many more. To address the challenges associated with interconnecting and integrating hundreds of gigawatts of solar power onto the electricity grid, the Systems Integration program funds research, development, and demonstrationmore » projects in four broad, interrelated focus areas: grid performance and reliability, dispatchability, power electronics, and communications.« less
Methods and Costs to Achieve Ultra Reliable Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
A Survey of Reliability, Maintainability, Supportability, and Testability Software Tools
1991-04-01
designs in terms of their contributions toward forced mission termination and vehicle or function loss . Includes the ability to treat failure modes of...ABSTRACT: Inputs: MTBFs, MTTRs, support equipment costs, equipment weights and costs, available targets, military occupational specialty skill level and...US Army CECOM NAME: SPARECOST ABSTRACT: Calculates expected number of failures and performs spares holding optimization based on cost, weight , or
Benefits of barrier fuel on fuel cycle economics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowther, R.L.; Kunz, C.L.
1988-01-01
Barrier fuel rod cladding was developed to eliminate fuel rod failures from pellet/cladding stress/corrosion interaction and to eliminate the associated need to restrict the rate at which fuel rod power can be increased. The performance of barrier cladding has been demonstrated through extensive testing and through production application to many boiling water reactors (BWRs). Power reactor data have shown that barrier fuel rod cladding has a significant beneficial effect on plant capacity factor and plant operating costs and significantly increases fuel reliability. Independent of the fuel reliability benefit, it is less obvious that barrier fuel has a beneficial effect ofmore » fuel cycle costs, since barrier cladding is more costly to fabricate. Evaluations, measurements, and development activities, however, have shown that the fuel cycle cost benefits of barrier fuel are large. This paper is a summary of development activities that have shown that application of barrier fuel significantly reduces BWR fuel cycle costs.« less
Optimization of structures on the basis of fracture mechanics and reliability criteria
NASA Technical Reports Server (NTRS)
Heer, E.; Yang, J. N.
1973-01-01
Systematic summary of factors which are involved in optimization of given structural configuration is part of report resulting from study of analysis of objective function. Predicted reliability of performance of finished structure is sharply dependent upon results of coupon tests. Optimization analysis developed by study also involves expected cost of proof testing.
Configuration study for a 30 GHz monolithic receive array: Technical assessment
NASA Technical Reports Server (NTRS)
Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.
1984-01-01
The current status of monolithic microwave integrated circuits (MMICs) in phased array feeds is discussed from the point of view of cost performance, reliability, and design considerations. Transitions to MMICs, compatible antenna radiating elements and reliability considerations are addressed. Hybrid antennas, feed array antenna technology, and offset reflectors versus phased arrays are examined.
A comparison of operational performance : Washington state ferries to ferry operators worldwide.
DOT National Transportation Integrated Search
2010-06-01
This project compares eight measures of performance related to transit service quality (e.g. trip reliability, on-time departures) and cost-efficiency (e.g. farebox recovery, subsidy per passenger) between Washington State Ferries (WSF) and 23 ferry ...
DOT National Transportation Integrated Search
2015-08-31
Proper calibration of mechanistic-empirical : (M-E) design and rehabilitation performance : models to meet Texas conditions is essential : for cost-effective flexible pavement designs. : Such a calibration effort would require a : reliable source of ...
Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Gibson, Garth Alan
1990-01-01
During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade the energy pipeline integrity, due to transferring corrosive fluid or gas and interacting with corrosive environment. Corrosion defects are usually detected by periodical inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodic protected buried steel pipeline located in Mexico. First, time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length are considered in the model development and generation of the corrosion defects over time is characterized by the homogenous Poisson process. The growth models unknown parameters are evaluated based on the ILI data through the Bayesian updating method with Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have ability to consider newly generated defects since last inspection. Results of this part of study show that both depth and length growth models can predict damage quantities reasonably well and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using developed corrosion growth models considering prevailing uncertainties where three failure modes, namely small leak, large leak and rupture are considered. Performance of the pipeline is evaluated through failure probability per km (or called a sub-system) where each subsystem is considered as a series system of detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine to which incorporated parameter(s) in the growth models reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of long-term performance of the pipeline and also, impact of the statistical uncertainty in the model parameters is significant that should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life cycle-cost analysis is conducted to determine optimal inspection interval of studied pipeline. The expected total life-cycle costs consists construction cost and expected costs of inspections, repair, and failure. The repair is conducted when failure probability from any described failure mode exceeds pre-defined probability threshold after each inspection. Moreover, this study also investigates impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs, but can lower failure cost and also repair cost is less significant compared to inspection and failure costs.
Research on the optimal structure configuration of dither RLG used in skewed redundant INS
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu
2016-05-01
The actual combat effectiveness of weapon equipment is restricted by the performance of Inertial Navigation System (INS), especially in high reliability required situations such as fighter, satellite and submarine. Through the use of skewed sensor geometries, redundant technique has been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscope (RLG) are analyzed. For the dither coupling effects of the dither gyro, the system measurement errors can be amplified either the individual gyro dither frequency is near one another or the structure of the SRSINS is unreasonable. Based on the characteristics of RLG, the research on coupled vibration of dithered RLG in SRSINS is carried out. On the principle of optimal navigation performance, optimal reliability and optimal cost-effectiveness, the comprehensive evaluation scheme of the inertial sensor configuration of SRINS is given.
Design of low-cost general purpose microcontroller based neuromuscular stimulator.
Koçer, S; Rahmi Canal, M; Güler, I
2000-04-01
In this study, a general purpose, low-cost, programmable, portable and high performance stimulator is designed and implemented. For this purpose, a microcontroller is used in the design of the stimulator. The duty cycle and amplitude of the designed system can be controlled using a keyboard. The performance test of the system has shown that the results are reliable. The overall system can be used as the neuromuscular stimulator under safe conditions.
Reliability and Cost Impacts for Attritable Systems
2017-03-23
and cost risk metrics to convey the value of reliability and reparability trades. Investigation of the benefit of trading system reparability...illustrates the benefit that reliability engineering can have on total cost . 2.3.1 Contexts of System Reliability Hogge (2012) identifies two distinct...reliability and reparability trades. Investigation of the benefit of trading system reparability shows a marked increase in cost risk. Yet, trades in
The Role of Demand Response in Reducing Water-Related Power Plant Vulnerabilities
NASA Astrophysics Data System (ADS)
Macknick, J.; Brinkman, G.; Zhou, E.; O'Connell, M.; Newmark, R. L.; Miara, A.; Cohen, S. M.
2015-12-01
The electric sector depends on readily available water supplies for reliable and efficient operation. Elevated water temperatures or low water levels can trigger regulatory or plant-level decisions to curtail power generation, which can affect system cost and reliability. In the past decade, dozens of power plants in the U.S. have curtailed generation due to water temperatures and water shortages. Curtailments occur during the summer, when temperatures are highest and there is greatest demand for electricity. Climate change could alter the availability and temperature of water resources, exacerbating these issues. Constructing alternative cooling systems to address vulnerabilities can be capital intensive and can also affect power plant efficiencies. Demand response programs are being implemented by electric system planners and operators to reduce and shift electricity demands from peak usage periods to other times of the day. Demand response programs can also play a role in reducing water-related power sector vulnerabilities during summer months. Traditionally, production cost modeling and demand response analyses do not include water resources. In this effort, we integrate an electricity production cost modeling framework with water-related impacts on power plants in a test system to evaluate the impacts of demand response measures on power system costs and reliability. Specifically, we i) quantify the cost and reliability implications of incorporating water resources into production cost modeling, ii) evaluate the impacts of demand response measures on reducing system costs and vulnerabilities, and iii) consider sensitivity analyses with cooling systems to highlight a range of potential benefits of demand response measures. Impacts from climate change on power plant performance and water resources are discussed. Results provide key insights to policymakers and practitioners for reducing water-related power plant vulnerabilities via lower cost methods.
48 CFR 215.404-71-2 - Performance risk.
Code of Federal Regulations, 2011 CFR
2011-10-01
... incentive range when contract performance includes the introduction of new, significant technological innovation. Use the technology incentive range only for the most innovative contract efforts. Innovation may... reliability, or reduced costs; or (B) New products or systems that contain significant technological advances...
Application of the Systematic Sensor Selection Strategy for Turbofan Engine Diagnostics
NASA Technical Reports Server (NTRS)
Sowers, T. Shane; Kopasakis, George; Simon, Donald L.
2008-01-01
The data acquired from available system sensors forms the foundation upon which any health management system is based, and the available sensor suite directly impacts the overall diagnostic performance that can be achieved. While additional sensors may provide improved fault diagnostic performance, there are other factors that also need to be considered such as instrumentation cost, weight, and reliability. A systematic sensor selection approach is desired to perform sensor selection from a holistic system-level perspective as opposed to performing decisions in an ad hoc or heuristic fashion. The Systematic Sensor Selection Strategy is a methodology that optimally selects a sensor suite from a pool of sensors based on the system fault diagnostic approach, with the ability of taking cost, weight, and reliability into consideration. This procedure was applied to a large commercial turbofan engine simulation. In this initial study, sensor suites tailored for improved diagnostic performance are constructed from a prescribed collection of candidate sensors. The diagnostic performance of the best performing sensor suites in terms of fault detection and identification are demonstrated, with a discussion of the results and implications for future research.
Application of the Systematic Sensor Selection Strategy for Turbofan Engine Diagnostics
NASA Technical Reports Server (NTRS)
Sowers, T. Shane; Kopasakis, George; Simon, Donald L.
2008-01-01
The data acquired from available system sensors forms the foundation upon which any health management system is based, and the available sensor suite directly impacts the overall diagnostic performance that can be achieved. While additional sensors may provide improved fault diagnostic performance there are other factors that also need to be considered such as instrumentation cost, weight, and reliability. A systematic sensor selection approach is desired to perform sensor selection from a holistic system-level perspective as opposed to performing decisions in an ad hoc or heuristic fashion. The Systematic Sensor Selection Strategy is a methodology that optimally selects a sensor suite from a pool of sensors based on the system fault diagnostic approach, with the ability of taking cost, weight and reliability into consideration. This procedure was applied to a large commercial turbofan engine simulation. In this initial study, sensor suites tailored for improved diagnostic performance are constructed from a prescribed collection of candidate sensors. The diagnostic performance of the best performing sensor suites in terms of fault detection and identification are demonstrated, with a discussion of the results and implications for future research.
System Risk Assessment and Allocation in Conceptual Design
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Smith, Natasha L.; Zang, Thomas A. (Technical Monitor)
2003-01-01
As aerospace systems continue to evolve in addressing newer challenges in air and space transportation, there exists a heightened priority for significant improvement in system performance, cost effectiveness, reliability, and safety. Tools, which synthesize multidisciplinary integration, probabilistic analysis, and optimization, are needed to facilitate design decisions allowing trade-offs between cost and reliability. This study investigates tools for probabilistic analysis and probabilistic optimization in the multidisciplinary design of aerospace systems. A probabilistic optimization methodology is demonstrated for the low-fidelity design of a reusable launch vehicle at two levels, a global geometry design and a local tank design. Probabilistic analysis is performed on a high fidelity analysis of a Navy missile system. Furthermore, decoupling strategies are introduced to reduce the computational effort required for multidisciplinary systems with feedback coupling.
Barwood CNG Cab Fleet Study: Final Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whalen, P.; Kelly, K.; John, M.
1999-05-03
This report describes a fleet study conducted over a 12-month period to evaluate the operation of dedicated compress natural gas (CNG) Ford Crown Victoria sedans in a taxicab fleet. In the study, we assess the performance and reliability of the vehicles and the cost of operating the CNG vehicles compared to gasoline vehicles. The study results reveal that the CNG vehicles operated by this fleet offer both economic and environmental advantages. The total operating costs of the CNG vehicles were about 25% lower than those of the gasoline vehicles. The CNG vehicles performed as well as the gasoline vehicles, andmore » were just as reliable. Barwood representatives and drivers have come to consider the CNG vehicles an asset to their business and to the air quality of the local community.« less
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
ERIC Educational Resources Information Center
Czuchry, Andrew J.; And Others
This report provides a complete guide to the stand alone mode operation of the reliability and maintenance (R&M) model, which was developed to facilitate the performance of design versus cost trade-offs within the digital avionics information system (DAIS) acquisition process. The features and structure of the model, its input data…
Missile and Space Systems Reliability versus Cost Trade-Off Study
1983-01-01
F00-1C09 Robert C. Schneider F00-1C09 V . PERFORMING ORGANIZATION NAME AM0 ADDRESS 16 PRGRAM ELEMENT. PROJECT. TASK BoeingAerosace CmpAnyA CA WORK UNIT...reliability problems, which has the - real bearing on program effectiveness. A well planned and funded reliability effort can prevent or ferret out...failure analysis, and the in- corporation and verification of design corrections to prevent recurrence of failures. 302.2.2 A TMJ test plan shall be
A standard for test reliability in group research.
Ellis, Jules L
2013-03-01
Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7 % of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
Individual styles of professional operator's performance for the needs of interplanetary mission.
NASA Astrophysics Data System (ADS)
Boritko, Yaroslav; Gushin, Vadim; Zavalko, Irina; Smoleevskiy, Alexandr; Dudukin, Alexandr
Maintenance of the cosmonaut’s professional performance reliability is one of the priorities of long-term space flights safety. Cosmonaut’s performance during long-term space flight decreases due to combination of the microgravity effects and inevitable degradation of skills during prolonged breaks in training. Therefore, the objective of the elaboration of countermeasures against skill decrement is very relevant. During the experiment with prolonged isolation "Mars-500" in IMBP two virtual models of professional operator’s activities were used to investigate the influence of extended isolation, monotony and confinement on professional skills degradation. One is well-known “PILOT-1” (docking to the space station), another - "VIRTU" (manned operations of planet exploration). Individual resistance to the artificial sensory conflict was estimated using computerized version of “Mirror koordinograf” with GSR registration. Two different individual performance styles, referring to the different types of response to stress, have been identified. Individual performance style, called "conservative control", manifested in permanent control of parameters, conditions and results of the operator’s activity. Operators with this performance style demonstrate high reliability in performing tasks. The drawback of the style is intensive resource expenditure - both the operator (physiological "cost") and the technical system operated (fuel, time). This style is more efficient while executing tasks that require long work with high reliability required according to a detailed protocol, such as orbital flight. Individual style, called "exploratory ", manifested in the search of new ways of task fulfillment. This style is accompanied by partial, periodic lack of control of the conditions and result of operator’s activity due to flexible approach to the tasks perfect implementation. Operators spent less resource (fuel, time, lower physiological "cost") due to high self-regulation in tasks not requiring high reliability. "Exploratory" style is more effective when working in nonregulated and off-nominal situations, such as interplanetary mission, due to possibility to use nonstandard innovative solutions, save physiological resources and rapidly mobilize to demonstrate high reliability at key moments.
DOT National Transportation Integrated Search
2015-10-01
Pavement performance models describe the deterioration behavior of pavements. They are essential in a pavement management : system if the goal is to make more objective, reliable, and cost-effective decisions regarding the timing and nature of paveme...
New electrostatic coal cleaning method cuts sulfur content by 40%
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1985-12-01
An emission control system that electrically charges pollutants and coal particles promises to reduce sulfur 40% at half the cost. The dry coal cleaning processes offer superior performance and better economics than conventional flotation cleaning. Advanced Energy Dynamics, Inc. (AED) is developing both fine and ultra fine processes which increase combustion efficiency and boiler reliability and reduced operating costs. The article gives details from the performance tests and comparisons and summarizes the economic analyses. 4 tables.
Redundancy management of inertial systems.
NASA Technical Reports Server (NTRS)
Mckern, R. A.; Musoff, H.
1973-01-01
The paper reviews developments in failure detection and isolation techniques applicable to gimballed and strapdown systems. It examines basic redundancy management goals of improved reliability, performance and logistic costs, and explores mechanizations available for both input and output data handling. The meaning of redundant system reliability in terms of available coverage, system MTBF, and mission time is presented and the practical hardware performance limitations of failure detection and isolation techniques are explored. Simulation results are presented illustrating implementation coverages attainable considering IMU performance models and mission detection threshold requirements. The implications of a complete GN&C redundancy management method on inertial techniques are also explored.
Thermal Management and Reliability of Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
2016-09-19
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil - by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines are presented.« less
Towards cost-effective reliability through visualization of the reliability option space
NASA Technical Reports Server (NTRS)
Feather, Martin S.
2004-01-01
In planning a complex system's development there can be many options to improve its reliability. Typically their sum total cost exceeds the budget available, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
NASA Astrophysics Data System (ADS)
Macknick, J.; Miara, A.; Brinkman, G.; Ibanez, E.; Newmark, R. L.
2014-12-01
The reliability of the power sector is highly vulnerable to variability in the availability and temperature of water resources, including those that might result from potential climatic changes or from competition from other users. In the past decade, power plants throughout the United States have had to shut down or curtail generation due to a lack of available water or from elevated water temperatures. These disruptions in power plant performance can have negative impacts on energy security and can be costly to address. Analysis of water-related vulnerabilities requires modeling capabilities with high spatial and temporal resolution. This research provides an innovative approach to energy-water modeling by evaluating the costs and reliability of a power sector region under policy and climate change scenarios that affect water resource availability and temperatures. This work utilizes results from a spatially distributed river water temperature model coupled with a thermoelectric power plant model to provide inputs into an electricity production cost model that operates on a high spatial and temporal resolution. The regional transmission organization ISO-New England, which includes six New England states and over 32 Gigawatts of power capacity, is utilized as a case study. Hydrological data and power plant operations are analyzed over an eleven year period from 2000-2010 under four scenarios that include climate impacts on water resources and air temperatures as well as strict interpretations of regulations that can affect power plant operations due to elevated water temperatures. Results of these model linkages show how the power sector's reliability and economic performance can be affected by changes in water temperatures and water availability. The effective reliability and capacity value of thermal electric generators are quantified and discussed in the context of current as well as potential future water resource characteristics.
2001-01-01
by Peter Wright, University of York, UK and Colin Drury , University of Buffalo. Session 3 was chaired by Reiner Onken, University of Bundeswehr, GE...proper inspection intervals; too few inspections may give rise to accidents whilst too many can increase costs . Drury has reviewed human factors studies on...thus search, whilst the cost of a miss or false rejection affects the decision stage. To furnish this model of aircraft inspection, Drury performed a
Reliability Testing of NASA Piezocomposite Actuators
NASA Technical Reports Server (NTRS)
Wilkie, W.; High, J.; Bockman, J.
2002-01-01
NASA Langley Research Center has developed a low-cost piezocomposite actuator which has application for controlling vibrations in large inflatable smart space structures, space telescopes, and high performance aircraft. Tests show the NASA piezocomposite device is capable of producing large, directional, in-plane strains on the order of 2000 parts-per-million peak-to-peak, with no reduction in free-strain performance to 100 million electrical cycles. This paper describes methods, measurements, and preliminary results from our reliability evaluation of the device under externally applied mechanical loads and at various operational temperatures. Tests performed to date show no net reductions in actuation amplitude while the device was moderately loaded through 10 million electrical cycles. Tests were performed at both room temperature and at the maximum operational temperature of the epoxy resin system used in manufacture of the device. Initial indications are that actuator reliability is excellent, with no actuator failures or large net reduction in actuator performance.
Research requirements for development of regenerative engines for helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semple, R.D.
1976-12-01
The improved specific fuel consumption of the regenerative engine was compared to a simple-cycle turboshaft engine. The performance improvement and fuel saving are obtained at the expense of increased engine weight, development and production costs, and maintenance costs. Costs and schedules are estimated for the elements of the research and development program. Interaction of the regenerative engine with other technology goals for an advanced civil helicopter is examined, including its impact on engine noise, hover and cruise performance, helicopter empty weight, drive-system efficiency and weight, one-engine-inoperative hover capability, and maintenance and reliability.
Research requirements for development of regenerative engines for helicopters
NASA Technical Reports Server (NTRS)
Semple, R. D.
1976-01-01
The improved specific fuel consumption of the regenerative engine was compared to a simple-cycle turboshaft engine. The performance improvement and fuel saving are obtained at the expense of increased engine weight, development and production costs, and maintenance costs. Costs and schedules are estimated for the elements of the research and development program. Interaction of the regenerative engine with other technology goals for an advanced civil helicopter is examined, including its impact on engine noise, hover and cruise performance, helicopter empty weight, drive-system efficiency and weight, one-engine-inoperative hover capability, and maintenance and reliability.
Development of ultracapacitor modules for 42-V automotive electrical systems
NASA Astrophysics Data System (ADS)
Jung, Do Yang; Kim, Young Ho; Kim, Sun Wook; Lee, Suck-Hyun
Two types of ultracapacitor modules have been developed for use as energy-storage devices for 42-V systems in automobiles. The modules show high performance and good reliability in terms of discharge and recharge capability, long-term endurance, and high energy and power. During a 42-V system simulation test of 6-kW power boosting/regenerative braking, the modules demonstrate very good performance. In high-power applications such as 42-V and hybrid vehicle systems, ultracapacitors have many merits compared with batteries, especially with respect to specific power at high rate, thermal stability, charge-discharge efficiency, and cycle-life. Ultracapacitors are also very safe, reliable and environmentally friendly. The cost of ultracapacitors is still high compared with batteries because of the low production scale, but is decreasing very rapidly. It is estimated that the cost of ultracapacitors will decrease to US$ 300 per 42-V module in the near future. Also, the maintenance cost of the ultracapacitor is nearly zero because of its high cycle-life. Therefore, the combined cost of the capacitor and maintenance will be lower than that of batteries in the near future. Overall, comparing performance, price and other parameters of ultracapacitors with batteries, ultracapacitors are the most likely candidate for energy-storage in 42-V systems.
A low-cost Mr compatible ergometer to assess post-exercise phosphocreatine recovery kinetics.
Naimon, Niels D; Walczyk, Jerzy; Babb, James S; Khegai, Oleksandr; Che, Xuejiao; Alon, Leeor; Regatte, Ravinder R; Brown, Ryan; Parasoglou, Prodromos
2017-06-01
To develop a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems to reliably quantify metabolic parameters in human lower leg muscle using phosphorus magnetic resonance spectroscopy. We constructed an MR compatible ergometer using commercially available materials and elastic bands that provide resistance to movement. We recruited ten healthy subjects (eight men and two women, mean age ± standard deviation: 32.8 ± 6.0 years, BMI: 24.1 ± 3.9 kg/m 2 ). All subjects were scanned on a 7 T whole-body magnet. Each subject was scanned on two visits and performed a 90 s plantar flexion exercise at 40% maximum voluntary contraction during each scan. During the first visit, each subject performed the exercise twice in order for us to estimate the intra-exam repeatability, and once during the second visit in order to estimate the inter-exam repeatability of the time constant of phosphocreatine recovery kinetics. We assessed the intra and inter-exam reliability in terms of the within-subject coefficient of variation (CV). We acquired reliable measurements of PCr recovery kinetics with an intra- and inter-exam CV of 7.9% and 5.7%, respectively. We constructed a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems, which allowed us to quantify reliably PCr recovery kinetics in lower leg muscle using 31 P-MRS.
A low-cost, tablet-based option for prehospital neurologic assessment: The iTREAT Study.
Chapman Smith, Sherita N; Govindarajan, Prasanthi; Padrick, Matthew M; Lippman, Jason M; McMurry, Timothy L; Resler, Brian L; Keenan, Kevin; Gunnell, Brian S; Mehndiratta, Prachi; Chee, Christina Y; Cahill, Elizabeth A; Dietiker, Cameron; Cattell-Gordon, David C; Smith, Wade S; Perina, Debra G; Solenski, Nina J; Worrall, Bradford B; Southerland, Andrew M
2016-07-05
In this 2-center study, we assessed the technical feasibility and reliability of a low cost, tablet-based mobile telestroke option for ambulance transport and hypothesized that the NIH Stroke Scale (NIHSS) could be performed with similar reliability between remote and bedside examinations. We piloted our mobile telemedicine system in 2 geographic regions, central Virginia and the San Francisco Bay Area, utilizing commercial cellular networks for videoconferencing transmission. Standardized patients portrayed scripted stroke scenarios during ambulance transport and were evaluated by independent raters comparing bedside to remote mobile telestroke assessments. We used a mixed-effects regression model to determine intraclass correlation of the NIHSS between bedside and remote examinations (95% confidence interval). We conducted 27 ambulance runs at both sites and successfully completed the NIHSS for all prehospital assessments without prohibitive technical interruption. The mean difference between bedside (face-to-face) and remote (video) NIHSS scores was 0.25 (1.00 to -0.50). Overall, correlation of the NIHSS between bedside and mobile telestroke assessments was 0.96 (0.92-0.98). In the mixed-effects regression model, there were no statistically significant differences accounting for method of evaluation or differences between sites. Utilizing a low-cost, tablet-based platform and commercial cellular networks, we can reliably perform prehospital neurologic assessments in both rural and urban settings. Further research is needed to establish the reliability and validity of prehospital mobile telestroke assessment in live patients presenting with acute neurologic symptoms. © 2016 American Academy of Neurology.
A low-cost, tablet-based option for prehospital neurologic assessment
Chapman Smith, Sherita N.; Govindarajan, Prasanthi; Padrick, Matthew M.; Lippman, Jason M.; McMurry, Timothy L.; Resler, Brian L.; Keenan, Kevin; Gunnell, Brian S.; Mehndiratta, Prachi; Chee, Christina Y.; Cahill, Elizabeth A.; Dietiker, Cameron; Cattell-Gordon, David C.; Smith, Wade S.; Perina, Debra G.; Solenski, Nina J.; Worrall, Bradford B.
2016-01-01
Objectives: In this 2-center study, we assessed the technical feasibility and reliability of a low cost, tablet-based mobile telestroke option for ambulance transport and hypothesized that the NIH Stroke Scale (NIHSS) could be performed with similar reliability between remote and bedside examinations. Methods: We piloted our mobile telemedicine system in 2 geographic regions, central Virginia and the San Francisco Bay Area, utilizing commercial cellular networks for videoconferencing transmission. Standardized patients portrayed scripted stroke scenarios during ambulance transport and were evaluated by independent raters comparing bedside to remote mobile telestroke assessments. We used a mixed-effects regression model to determine intraclass correlation of the NIHSS between bedside and remote examinations (95% confidence interval). Results: We conducted 27 ambulance runs at both sites and successfully completed the NIHSS for all prehospital assessments without prohibitive technical interruption. The mean difference between bedside (face-to-face) and remote (video) NIHSS scores was 0.25 (1.00 to −0.50). Overall, correlation of the NIHSS between bedside and mobile telestroke assessments was 0.96 (0.92–0.98). In the mixed-effects regression model, there were no statistically significant differences accounting for method of evaluation or differences between sites. Conclusions: Utilizing a low-cost, tablet-based platform and commercial cellular networks, we can reliably perform prehospital neurologic assessments in both rural and urban settings. Further research is needed to establish the reliability and validity of prehospital mobile telestroke assessment in live patients presenting with acute neurologic symptoms. PMID:27281534
Design, performance, and economics of 50-kW and 500-kW vertical axis wind turbines
NASA Astrophysics Data System (ADS)
Schienbein, L. A.; Malcolm, D. J.
1983-11-01
A review of the development and performance of the DAF Indal 50-kW vertical axis Darrieus wind turbine shows that a high level of technical development and reliability has been achieved. Features of the drive train, braking and control systems are discussed and performance details are presented. Details are also presented of a 500-kW VAWT that is currently in production. A discussion of the economics of both the 50-kW and 500-kW VAWTs is included, showing the effects of charge rate, installed cost, operating cost, performance, and efficiency.
A simplified fuel control approach for low cost aircraft gas turbines
NASA Technical Reports Server (NTRS)
Gold, H.
1973-01-01
Reduction in the complexity of gas turbine fuel controls without loss of control accuracy, reliability, or effectiveness as a method for reducing engine costs is discussed. A description and analysis of hydromechanical approach are presented. A computer simulation of the control mechanism is given and performance of a physical model in engine test is reported.
Star field attitude sensor study for the Pioneer Venus spacecraft
NASA Technical Reports Server (NTRS)
Rudolf, W. P.; Reed, D. R.
1972-01-01
The characteristics of a star field attitude sensor for use with the Pioneer Venus spacecraft are presented. The aspects of technical feasibility, system interface considerations, and cost of flight hardware development are discussed. The tradeoffs which relate to performance, design, cost, and reliability are analyzed. The configuration of the system for installation in the spacecraft is described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Ian M.; Goldman, Charles A.; Murphy, Sean
The average cost to utilities to save a kilowatt-hour (kWh) in the United States is 2.5 cents, according to the most comprehensive assessment to date of the cost performance of energy efficiency programs funded by electricity customers. These costs are similar to those documented earlier. Cost-effective efficiency programs help ensure electricity system reliability at the most affordable cost as part of utility planning and implementation activities for resource adequacy. Building on prior studies, Berkeley Lab analyzed the cost performance of 8,790 electricity efficiency programs between 2009 and 2015 for 116 investor-owned utilities and other program administrators in 41 states. Themore » Berkeley Lab database includes programs representing about three-quarters of total spending on electricity efficiency programs in the United States.« less
Study of Multimission Modular Spacecraft (MMS) propulsion requirements
NASA Technical Reports Server (NTRS)
Fischer, N. H.; Tischer, A. E.
1977-01-01
The cost effectiveness of various propulsion technologies for shuttle-launched multimission modular spacecraft (MMS) missions was determined with special attention to the potential role of ion propulsion. The primary criterion chosen for comparison for the different types of propulsion technologies was the total propulsion related cost, including the Shuttle charges, propulsion module costs, upper stage costs, and propulsion module development. In addition to the cost comparison, other criteria such as reliability, risk, and STS compatibility are examined. Topics covered include MMS mission models, propulsion technology definition, trajectory/performance analysis, cost assessment, program evaluation, sensitivity analysis, and conclusions and recommendations.
NASA Technical Reports Server (NTRS)
Haakensen, Erik Edward
1998-01-01
The desire for low-cost reliable computing is increasing. Most current fault tolerant computing solutions are not very flexible, i.e., they cannot adapt to reliability requirements of newly emerging applications in business, commerce, and manufacturing. It is important that users have a flexible, reliable platform to support both critical and noncritical applications. Chameleon, under development at the Center for Reliable and High-Performance Computing at the University of Illinois, is a software framework. for supporting cost-effective adaptable networked fault tolerant service. This thesis details a simulation of fault injection, detection, and recovery in Chameleon. The simulation was written in C++ using the DEPEND simulation library. The results obtained from the simulation included the amount of overhead incurred by the fault detection and recovery mechanisms supported by Chameleon. In addition, information about fault scenarios from which Chameleon cannot recover was gained. The results of the simulation showed that both critical and noncritical applications can be executed in the Chameleon environment with a fairly small amount of overhead. No single point of failure from which Chameleon could not recover was found. Chameleon was also found to be capable of recovering from several multiple failure scenarios.
ERIC Educational Resources Information Center
Clifford, Matthew; Menon, Roshni; Gangi, Tracy; Condon, Christopher; Hornung, Katie
2012-01-01
This policy brief provides principal evaluation system designers information about the technical soundness and cost (i.e., time requirements) of publicly available school climate surveys. The authors focus on the technical soundness of school climate surveys because they believe that using validated and reliable surveys as an outcomes measure can…
Custom LSI plus hybrid equals cost effectiveness
NASA Astrophysics Data System (ADS)
Friedman, S. N.
The possibility to combine various technologies, such as Bi-Polar linear and CMOS/Digital makes it feasible to create systems with a tailored performance not available on a single monolithic circuit. The custom LSI 'BLOCK', especially if it is universal in nature, is proving to be a cost effective way for the developer to improve his product. The custom LSI represents a low price part in contrast to the discrete components it will replace. In addition, the hybrid assembly can realize a savings in labor as a result of the reduced parts handling and associated wire bonds. The possibility of the use of automated system manufacturing techniques leads to greater reliability as the human factor is partly eliminated. Attention is given to reliability predictions, cost considerations, and a product comparison study.
Power Electronics Packaging Reliability | Transportation Research | NREL
interface materials, are a key enabling technology for compact, lightweight, low-cost, and reliable power , reliability, and cost. High-temperature bonded interface materials are an important facilitating technology for compact, lightweight, low-cost, reliable power electronics packaging that fully utilizes the
VDA, a Method of Choosing a Better Algorithm with Fewer Validations
Kluger, Yuval
2011-01-01
The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
Steam bottoming cycle for an adiabatic diesel engine
NASA Technical Reports Server (NTRS)
Poulin, E.; Demier, R.; Krepchin, I.; Walker, D.
1984-01-01
Steam bottoming cycles using adiabatic diesel engine exhaust heat which projected substantial performance and economic benefits for long haul trucks were studied. Steam cycle and system component variables, system cost, size and performance were analyzed. An 811 K/6.90 MPa state of the art reciprocating expander steam system with a monotube boiler and radiator core condenser was selected for preliminary design. The costs of the diesel with bottoming system (TC/B) and a NASA specified turbocompound adiabatic diesel with aftercooling with the same total output were compared, the annual fuel savings less the added maintenance cost was determined to cover the increase initial cost of the TC/B system in a payback period of 2.3 years. Steam bottoming system freeze protection strategies were developed, technological advances required for improved system reliability are considered and the cost and performance of advanced systes are evaluated.
Reliability and cost: A sensitivity analysis
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
In the design phase of a system, how a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. The cost of the subsystem nor the expected cost due to subsystem failure should not be considered separately but the total of the two costs should be maximized, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure.
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
Reliability-based optimization of an active vibration controller using evolutionary algorithms
NASA Astrophysics Data System (ADS)
Saraygord Afshari, Sajad; Pourtakdoust, Seid H.
2017-04-01
Many modern industrialized systems such as aircrafts, rotating turbines, satellite booms, etc. cannot perform their desired tasks accurately if their uninhibited structural vibrations are not controlled properly. Structural health monitoring and online reliability calculations are emerging new means to handle system imposed uncertainties. As stochastic forcing are unavoidable, in most engineering systems, it is often needed to take them into the account for the control design process. In this research, smart material technology is utilized for structural health monitoring and control in order to keep the system in a reliable performance range. In this regard, a reliability-based cost function is assigned for both controller gain optimization as well as sensor placement. The proposed scheme is implemented and verified for a wing section. Comparison of results for the frequency responses is considered to show potential applicability of the presented technique.
Design, performance and economics of the DAF Indal 50 kW and 375 kW vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Schienbein, L. A.; Malcolm, D. J.
1982-03-01
A review of the development and performance of the DAF Indal 50 kW vertical axis Darrieus wind turbines shows that a high level of technical development and reliability has been achieved. Features of the drive train, braking and control systems are discussed and performance details are presented. A description is given of a wind-diesel hybrid presently being tested. Details are also presented of a 375 kW VAWT planned for production in late 1982. A discussion of the economics of both the 50 kW and 375 kW VAWTs is included, showing the effects of charge rate, installed cost, operating cost, performance and efficiency. The energy outputs are translated into diesel fuel cost savings for remote communities.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2015-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. One remaining issue is the cost of hybrids versus the existing launch propulsion systems. This paper will review the known state-of-the-art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.
Genetic Algorithm Optimization of a Cost Competitive Hybrid Rocket Booster
NASA Technical Reports Server (NTRS)
Story, George
2014-01-01
Performance, reliability and cost have always been drivers in the rocket business. Hybrid rockets have been late entries into the launch business due to substantial early development work on liquid rockets and later on solid rockets. Slowly the technology readiness level of hybrids has been increasing due to various large scale testing and flight tests of hybrid rockets. A remaining issue is the cost of hybrids vs the existing launch propulsion systems. This paper will review the known state of the art hybrid development work to date and incorporate it into a genetic algorithm to optimize the configuration based on various parameters. A cost module will be incorporated to the code based on the weights of the components. The design will be optimized on meeting the performance requirements at the lowest cost.
Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment
NASA Technical Reports Server (NTRS)
Davis, M. R.; Kamins, M.; Mooz, W. E.
1978-01-01
A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and short comings of different approaches for modeling coast and reliability estimates are discussed together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing cost reliability CRM implications in the absence of reliable generalized predictive models.
Multiple mini-interviews: same concept, different approaches.
Knorr, Mirjana; Hissbach, Johanna
2014-12-01
Increasing numbers of educational institutions in the medical field choose to replace their conventional admissions interviews with a multiple mini-interview (MMI) format because the latter has superior reliability values and reduces interviewer bias. As the MMI format can be adapted to the conditions of each institution, the question of under which circumstances an MMI is most expedient remains unresolved. This article systematically reviews the existing MMI literature to identify the aspects of MMI design that have impact on the reliability, validity and cost-efficiency of the format. Three electronic databases (OVID, PubMed, Web of Science) were searched for any publications in which MMIs and related approaches were discussed. Sixty-six publications were included in the analysis. Forty studies reported reliability values. Generally, raising the number of stations has more impact on reliability than raising the number of raters per station. Other factors with positive influence include the exclusion of stations that are too easy, and the use of normative anchored rating scales or skills-based rater training. Data on criterion-related validities and analyses of dimensionality were found in 31 studies. Irrespective of design differences, the relationship between MMI results and academic measures is small to zero. The McMaster University MMI predicts in-programme and licensing examination performance. Construct validity analyses are mostly exploratory and their results are inconclusive. Seven publications gave information on required resources or provided suggestions on how to save costs. The most relevant cost factors that are additional to those of conventional interviews are the costs of station development and actor payments. The MMI literature provides useful recommendations for reliable and cost-efficient MMI designs, but some important aspects have not yet been fully explored. More theory-driven research is needed concerning dimensionality and construct validity, the predictive validity of MMIs other than those of McMaster University, the comparison of station types, and a cost-efficient station development process. © 2014 John Wiley & Sons Ltd.
Shuttle payload vibroacoustic test plan evaluation
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
Statistical decision theory is used to evaluate seven alternate vibro-acoustic test plans for Space Shuttle payloads; test plans include component, subassembly and payload testing and combinations of component and assembly testing. The optimum test levels and the expected cost are determined for each test plan. By including all of the direct cost associated with each test plan and the probabilistic costs due to ground test and flight failures, the test plans which minimize project cost are determined. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level.
The Automated Array Assembly Task of the Low-cost Silicon Solar Array Project, Phase 2
NASA Technical Reports Server (NTRS)
Coleman, M. G.; Grenon, L.; Pastirik, E. M.; Pryor, R. A.; Sparks, T. G.
1978-01-01
An advanced process sequence for manufacturing high efficiency solar cells and modules in a cost-effective manner is discussed. Emphasis is on process simplicity and minimizing consumed materials. The process sequence incorporates texture etching, plasma processes for damage removal and patterning, ion implantation, low pressure silicon nitride deposition, and plated metal. A reliable module design is presented. Specific process step developments are given. A detailed cost analysis was performed to indicate future areas of fruitful cost reduction effort. Recommendations for advanced investigations are included.
Determining Functional Reliability of Pyrotechnic Mechanical Devices
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Multhaup, Herbert A.
1997-01-01
This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The requirement of hundreds or thousands of consecutive, successful tests on identical components for reliability predictions, using the generally accepted go/no-go statistical approach routinely ignores physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine mechanical functional margin. Finally, the data collected in establishing functional margin is analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over that of go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
NASA Astrophysics Data System (ADS)
Watson, Norman F.
The relative merits of gimballed INS based on mechanical gyroscopes and strapdown INS based on ring laser gyroscopes are compared with regard to their use in 1 nm/hr combat aircraft navigation. Navigation performance, velocity performance, attitude performance, body axis outputs, environmental influences, reliability and maintainability, cost, and physical parameters are taken into consideration. Some of the advantages which have been claimed elsewhere for the laser INS, such as dramatically lower life cycle costs than for gimballed INS, are shown to be unrealistic under reasonable assumptions.
Deterministic Ethernet for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Wolff, B.
2015-09-01
Typical spacecraft systems are distributed to be able to achieve the required reliability and availability targets of the mission. However the requirements on these systems are different for launchers, satellites, human space flight and exploration missions. Launchers require typically high reliability with very short mission times whereas satellites or space exploration missions require very high availability at very long mission times. Comparing a distributed system of launchers with satellites it shows very fast reaction times in launchers versus much slower once in satellite applications. Human space flight missions are maybe most challenging concerning reliability and availability since human lives are involved and the mission times can be very long e.g. ISS. Also the reaction times of these vehicles can get challenging during mission scenarios like landing or re-entry leading to very fast control loops. In these different applications more and more autonomous functions are required to fulfil the needs of current and future missions. This autonomously leads to new requirements with respect to increase performance, determinism, reliability and availability. On the other hand side the pressure on reducing costs of electronic components in space applications is increasing, leading to the use of more and more COTS components especially for launchers and LEO satellites. This requires a technology which is able to provide a cost competitive solution for both the high reliable and available deep-space as well as the low cost “new space” markets. Future spacecraft communication standards therefore have to be much more flexible, scalable and modular to be able to deal with these upcoming challenges. The only way to fulfill these requirements is, if they are based on open standards which are used cross industry leading to a reduction of the lifecycle costs and an increase in performance. The use of a communication network that fulfills these requirements will be essential for such spacecraft’s to allow the use in launcher, satellite, human space flight and exploration missions. Using one technology and the related infrastructure for these different applications will lead to a significant reduction of complexity and would moreover lead to significant savings in size weight and power while increasing the performance of the overall system. The paper focuses on the use of the TTEthernet technology for launchers, satellites and human spaceflight and will demonstrate the scalability of the technology for the different applications. The data used is derived from the ESA TRP 7594 on “Reliable High-Speed Data Bus/Network for Safety-Oriented Missions”.
Oil-free centrifugal hydrogen compression technology demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heshmat, Hooshang
2014-05-31
One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compresses efficiently. While efficient and cost effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil lubricated positive displacement (PD) machines. PD compression technologymore » is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings) and contaminates hydrogen with lubricating fluid. Even so called “oil-free” machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi’s solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination free operation, long life and reliability. This design meets the DOE’s performance targets and achieves an extremely aggressive, specific power metric of 0.48 kW-hr/kg and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination. The multi-stage compressor system concept has been validated through full scale performance testing of a single stage with helium similitude gas at full speed in accordance with ASME PTC-10. The experimental results indicated that aerodynamic performance, with respect to compressor discharge pressure, flow, power and efficiency exceeded theoretical prediction. Dynamic testing of a simulated multistage centrifugal compressor was also completed under a parallel program to validate the integrity and viability of the system concept. The results give strong confidence in the feasibility of the multi-stage design for use in hydrogen gas transportation and delivery from production locations to point of use.« less
Reliability and cost analysis methods
NASA Technical Reports Server (NTRS)
Suich, Ronald C.
1991-01-01
In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
NASA Astrophysics Data System (ADS)
Kulkarni, R. D.; Agarwal, Vivek
2008-08-01
An ion chamber amplifier (ICA) is used as a safety device for neutronic power (flux) measurement in regulation and protection systems of nuclear reactors. Therefore, performance reliability of an ICA is an important issue. Appropriate quality engineering is essential to achieve a robust design and performance of the ICA circuit. It is observed that the low input bias current operational amplifiers used in the input stage of the ICA circuit are the most critical devices for proper functioning of the ICA. They are very sensitive to the gamma radiation present in their close vicinity. Therefore, the response of the ICA deteriorates with exposure to gamma radiation resulting in a decrease in the overall reliability, unless desired performance is ensured under all conditions. This paper presents a performance enhancement scheme for an ICA operated in the nuclear environment. The Taguchi method, which is a proven technique for reliability enhancement, has been used in this work. It is demonstrated that if a statistical, optimal design approach, like the Taguchi method is used, the cost of high quality and reliability may be brought down drastically. The complete methodology and statistical calculations involved are presented, as are the experimental and simulation results to arrive at a robust design of the ICA.
Companies Selected for Small Wind Turbine Project
) 972-9246 Golden, Colo., Nov. 27, 1996 -- In an effort to develop cost-effective, low-maintenance wind have to perform effectively and reliably over a long period of time without maintenance. The companies
NASA Astrophysics Data System (ADS)
Harkness, Linda L.; Sjoberg, Eric S.
1996-06-01
The Georgia Tech Research Institute, sponsored by the Warner Robins Air Logistics Center, has developed an approach for efficiently postulating and evaluating methods for extending the life of radars and other avionics systems. The technique identified specific assemblies for potential replacement and evaluates the system level impact, including performance, reliability and life-cycle cost of each action. The initial impetus for this research was the increasing obsolescence of integrated circuits contained in the AN/APG-63 system. The operational life of military electronics is typically in excess of twenty years, which encompasses several generations of IC technology. GTRI has developed a systems approach to inserting modern technology components into older systems based upon identification of those functions which limit the system's performance or reliability and which are cost drivers. The presentation will discuss the above methodology and a technique for evaluating and ranking the different potential system upgrade options.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Black, Billy D.; Akhil, Abbas Ali
This is the final report on a field evaluation by the Department of the Navy of twenty 5-kW PEM fuel cells carried out during 2004 and 2005 at five Navy sites located in New York, California, and Hawaii. The key objective of the effort was to obtain an engineering assessment of their military applications. Particular issues of interest were fuel cell cost, performance, reliability, and the readiness of commercial fuel cells for use as a standalone (grid-independent) power option. Two corollary objectives of the demonstration were to promote technological advances and to improve fuel performance and reliability. From a costmore » perspective, the capital cost of PEM fuel cells at this stage of their development is high compared to other power generation technologies. Sandia National Laboratories technical recommendation to the Navy is to remain involved in evaluating successive generations of this technology, particularly in locations with greater environmental extremes, and it encourages their increased use by the Navy.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
tThis paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including riskconstraints to balance between cost and system reliability due to the fluctuation of variable genera-tion such as wind and solar power. This paper uses conditional value-at-risk (CVaR) measures to modelrisks associated with the decisions in a stochastic environment. In contrast to chance-constrained modelsrequiring extra binary variables, risk constraints based on CVaR only involve linear constraints and con-tinuous variables, making it more computationally attractive. The proposed models with risk constraintsare able to avoid over-conservative solutions butmore » still ensure system reliability represented by loss ofloads. Then numerical experiments are conducted to study the effects of non-generation resources ongenerator schedules and the difference of total expected generation costs with risk consideration. Sen-sitivity analysis based on reliability parameters is also performed to test the decision preferences ofconfidence levels and load-shedding loss allowances on generation cost reduction.« less
The Role of Probabilistic Design Analysis Methods in Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2016-01-01
For the last several years, NASA and its contractors have been working together to build space launch systems to commercialize space. Developing commercial affordable and safe launch systems becomes very important and requires a paradigm shift. This paradigm shift enforces the need for an integrated systems engineering environment where cost, safety, reliability, and performance need to be considered to optimize the launch system design. In such an environment, rule based and deterministic engineering design practices alone may not be sufficient to optimize margins and fault tolerance to reduce cost. As a result, introduction of Probabilistic Design Analysis (PDA) methods to support the current deterministic engineering design practices becomes a necessity to reduce cost without compromising reliability and safety. This paper discusses the importance of PDA methods in NASA's new commercial environment, their applications, and the key role they can play in designing reliable, safe, and affordable launch systems. More specifically, this paper discusses: 1) The involvement of NASA in PDA 2) Why PDA is needed 3) A PDA model structure 4) A PDA example application 5) PDA link to safety and affordability.
Launch vehicle systems design analysis
NASA Technical Reports Server (NTRS)
Ryan, Robert; Verderaime, V.
1993-01-01
Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least-cost can be realized through competent concurrent engineering teams and brilliance of their technical leadership.
Balancing reliability and cost to choose the best power subsystem
NASA Technical Reports Server (NTRS)
Suich, Ronald C.; Patterson, Richard L.
1991-01-01
A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Technical Reports Server (NTRS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-01-01
The results of comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--ie., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Interconnect fatigue design for terrestrial photovoltaic modules
NASA Astrophysics Data System (ADS)
Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.
1982-03-01
The results of comprehensive investigation of interconnect fatigue that has led to the definition of useful reliability-design and life-prediction algorithms are presented. Experimental data indicate that the classical strain-cycle (fatigue) curve for the interconnect material is a good model of mean interconnect fatigue performance, but it fails to account for the broad statistical scatter, which is critical to reliability prediction. To fill this shortcoming the classical fatigue curve is combined with experimental cumulative interconnect failure rate data to yield statistical fatigue curves (having failure probability as a parameter) which enable (1) the prediction of cumulative interconnect failures during the design life of an array field, and (2) the unambiguous--ie., quantitative--interpretation of data from field-service qualification (accelerated thermal cycling) tests. Optimal interconnect cost-reliability design algorithms are derived based on minimizing the cost of energy over the design life of the array field.
Reliability analysis of multicellular system architectures for low-cost satellites
NASA Astrophysics Data System (ADS)
Erlank, A. O.; Bridges, C. P.
2018-06-01
Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A
2012-08-01
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
Decision-theoretic methodology for reliability and risk allocation in nuclear power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.
1985-01-01
This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem. Finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferencesmore » could then be performed more easily on the noninferior solution set. Some results of the methodology applications to a nontrivial risk model are provided and several outstanding issues such as generic allocation and preference assessment are discussed.« less
Integrating reliability and maintainability into a concurrent engineering environment
NASA Astrophysics Data System (ADS)
Phillips, Clifton B.; Peterson, Robert R.
1993-02-01
This paper describes the results of a reliability and maintainability study conducted at the University of California, San Diego and supported by private industry. Private industry thought the study was important and provided the university access to innovative tools under cooperative agreement. The current capability of reliability and maintainability tools and how they fit into the design process is investigated. The evolution of design methodologies leading up to today's capability is reviewed for ways to enhance the design process while keeping cost under control. A method for measuring the consequences of reliability and maintainability policy for design configurations in an electronic environment is provided. The interaction of selected modern computer tool sets is described for reliability, maintainability, operations, and other elements of the engineering design process. These tools provide a robust system evaluation capability that brings life cycle performance improvement information to engineers and their managers before systems are deployed, and allow them to monitor and track performance while it is in operation.
NASA Astrophysics Data System (ADS)
Yu, Yuting; Cheng, Ming
2018-05-01
Aiming at various configuration scheme and inertial measurement units of Strapdown Inertial Navigation System, selected tetrahedron skew configuration and coaxial orthogonal configuration by nine low cost IMU to build system. Calculation and simulation the performance index, reliability and fault diagnosis ability of the navigation system. Analysis shows that the reliability and reconfiguration scheme of skew configuration is superior to the orthogonal configuration scheme, while the performance index and fault diagnosis ability of the system are similar. The work in this paper provides a strong reference for the selection of engineering applications.
Low-cost autonomous perceptron neural network inspired by quantum computation
NASA Astrophysics Data System (ADS)
Zidan, Mohammed; Abdel-Aty, Abdel-Haleem; El-Sadek, Alaa; Zanaty, E. A.; Abdel-Aty, Mahmoud
2017-11-01
Achieving low cost learning with reliable accuracy is one of the important goals to achieve intelligent machines to save time, energy and perform learning process over limited computational resources machines. In this paper, we propose an efficient algorithm for a perceptron neural network inspired by quantum computing composite from a single neuron to classify inspirable linear applications after a single training iteration O(1). The algorithm is applied over a real world data set and the results are outer performs the other state-of-the art algorithms.
Existing generating assets squeezed as new project starts slow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.B.; Tiffany, E.D.
Most forecasting reports concentrate on political or regulatory events to predict future industry trends. Frequently overlooked are the more empirical performance trends of the principal power generation technologies. Solomon and Associates queried its many power plant performance databases and crunched some numbers to identify those trends. Areas of investigation included reliability, utilization (net output factor and net capacity factor) and cost (operating costs). An in-depth analysis for North America and Europe is presented in this article, by region and by regeneration technology. 4 figs., 2 tabs.
Microwave components for cellular portable radiotelephone
NASA Astrophysics Data System (ADS)
Muraguchi, Masahiro; Aikawa, Masayoshi
1995-09-01
Mobile and personal communication systems are expected to represent a huge market for microwave components in the coming years. A number of components in silicon bipolar, silicon Bi-CMOS, GaAs MESFET, HBT and HEMT are now becoming available for system application. There are tradeoffs among the competing technologies with regard to performance, cost, reliability and time-to-market. This paper describes process selection and requirements of cost and r.f. performances to microwave semiconductor components for digital cellular and cordless telephones. Furthermore, new circuit techniques which were developed by NTT are presented.
Practical aspects of photovoltaic technology, applications and cost (revised)
NASA Technical Reports Server (NTRS)
Rosenblum, L.
1985-01-01
The purpose of this text is to provide the reader with the background, understanding, and computational tools needed to master the practical aspects of photovoltaic (PV) technology, application, and cost. The focus is on stand-alone, silicon solar cell, flat-plate systems in the range of 1 to 25 kWh/day output. Technology topics covered include operation and performance of each of the major system components (e.g., modules, array, battery, regulators, controls, and instrumentation), safety, installation, operation and maintenance, and electrical loads. Application experience and trends are presented. Indices of electrical service performance - reliability, availability, and voltage control - are discussed, and the known service performance of central station electric grid, diesel-generator, and PV stand-alone systems are compared. PV system sizing methods are reviewed and compared, and a procedure for rapid sizing is described and illustrated by the use of several sample cases. The rapid sizing procedure yields an array and battery size that corresponds to a minimum cost system for a given load requirement, insulation condition, and desired level of service performance. PV system capital cost and levelized energy cost are derived as functions of service performance and insulation. Estimates of future trends in PV system costs are made.
2016-03-01
Performance Metrics University of Waterloo Permanganate Treatment of an Emplaced DNAPL Source (Thomson et al., 2007) Table 5.6 Remediation Performance Data... permanganate vs. peroxide/Fenton’s for chemical oxidation). Poorer performance was generally observed when the Total CVOC was the contaminant metric...using a soluble carbon substrate (lactate), chemical oxidation using Fenton’s reagent, and chemical oxidation using potassium permanganate . At
Archival storage solutions for PACS
NASA Astrophysics Data System (ADS)
Chunn, Timothy
1997-05-01
While they are many, one of the inhibitors to the wide spread diffusion of PACS systems has been robust, cost effective digital archive storage solutions. Moreover, an automated Nearline solution is key to a central, sharable data repository, enabling many applications such as PACS, telemedicine and teleradiology, and information warehousing and data mining for research such as patient outcome analysis. Selecting the right solution depends on a number of factors: capacity requirements, write and retrieval performance requirements, scaleability in capacity and performance, configuration architecture and flexibility, subsystem availability and reliability, security requirements, system cost, achievable benefits and cost savings, investment protection, strategic fit and more.This paper addresses many of these issues. It compares and positions optical disk and magnetic tape technologies, which are the predominant archive mediums today. Price and performance comparisons will be made at different archive capacities, plus the effect of file size on storage system throughput will be analyzed. The concept of automated migration of images from high performance, high cost storage devices to high capacity, low cost storage devices will be introduced as a viable way to minimize overall storage costs for an archive. The concept of access density will also be introduced and applied to the selection of the most cost effective archive solution.
Forming a Turbomachinery Seals Working Group: An Overview and Discussion
NASA Technical Reports Server (NTRS)
Proctor, Margaret P.
2007-01-01
Purose: Identify technical challenges to improving turbomachinery seal leakage and wear performance, reliability and cost effectiveness. Develop a coordinated effort to resolve foundational issues for turbomachinery seal technologies. Identify and foster opportunities for collaboration. Advocate for funding.
Code of Federal Regulations, 2010 CFR
2010-10-01
... DEFENSE CONTRACT MANAGEMENT CONTRACT ADMINISTRATION AND AUDIT SERVICES Contractor Accounting Systems and..., shall maintain an accounting system and related internal controls throughout contract performance which... accounting system and cost data are reliable; (c) Risk of misallocations and mischarges are minimized; and (d...
Methane Trace-Gas Sensing Enabled by Silicon Photonic Integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, William
Fugitive methane leaks occurring during extraction at typical natural gas wells have an adverse environmental impact due to the methane’s large radiative forcing, in addition to reducing the producer’s overall efficiency and cost. Mitigation of these concerns can benefit from cost-effective sensor nodes, performing reliable, rapid and continuous tracking of methane emissions. The efficacy of laser spectroscopy has been widely demonstrated in both environmental and medical applications due to its sensitivity and specificity to the target analyte. However, the present cost and lack of manufacturing scalability of traditional free-space optical systems can limit their viability for deployment in economical wide-areamore » sensor networks. This presentation will review the development and performance of a cost-effective silicon photonic trace gas sensing platform that leverages silicon photonic waveguide and packaging technologies to perform on-chip evanescent field spectroscopy of methane.« less
Microeconomic analysis of military aircraft bearing restoration
NASA Technical Reports Server (NTRS)
Hein, G. F.
1976-01-01
The risk and cost of a bearing restoration by grinding program was analyzed. A microeconomic impact analysis was performed. The annual cost savings to U.S. Army aviation is approximately $950,000.00 for three engines and three transmissions. The capital value over an indefinite life is approximately ten million dollars. The annual cost savings for U.S. Air Force engines is approximately $313,000.00 with a capital value of approximately 3.1 million dollars. The program will result in the government obtaining bearings at lower costs at equivalent reliability. The bearing industry can recover lost profits during a period of reduced demand and higher costs.
Implementing a Reliability Centered Maintenance Program at NASA's Kennedy Space Center
NASA Technical Reports Server (NTRS)
Tuttle, Raymond E.; Pete, Robert R.
1998-01-01
Maintenance practices have long focused on time based "preventive maintenance" techniques. Components were changed out and parts replaced based on how long they had been in place instead of what condition they were in. A reliability centered maintenance (RCM) program seeks to offer equal or greater reliability at decreased cost by insuring only applicable, effective maintenance is performed and by in large part replacing time based maintenance with condition based maintenance. A significant portion of this program involved introducing non-intrusive technologies, such as vibration analysis, oil analysis and I/R cameras, to an existing labor force and management team.
Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method
NASA Astrophysics Data System (ADS)
Zhang, Xiangnan
2018-03-01
A lot of uncertain factors lie in practical engineering, such as external load environment, material property, geometrical shape, initial condition, boundary condition, etc. Reliability method measures the structural safety condition and determine the optimal design parameter combination based on the probabilistic theory. Reliability-based design optimization (RBDO) is the most commonly used approach to minimize the structural cost or other performance under uncertainty variables which combines the reliability theory and optimization. However, it cannot handle the various incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with Bayesian method are introduced.
Electronics reliability and measurement technology
NASA Technical Reports Server (NTRS)
Heyman, Joseph S. (Editor)
1987-01-01
A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.
Orbit Transfer Vehicle (OTV) engine, phase A study. Volume 2: Study
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1979-01-01
The hydrogen oxygen engine used in the orbiter transfer vehicle is described. The engine design is analyzed and minimum engine performance and man rating requirements are discussed. Reliability and safety analysis test results are presented and payload, risk and cost, and engine installation parameters are defined. Engine tests were performed including performance analysis, structural analysis, thermal analysis, turbomachinery analysis, controls analysis, and cycle analysis.
Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar.
Clack, Christopher T M; Qvist, Staffan A; Apt, Jay; Bazilian, Morgan; Brandt, Adam R; Caldeira, Ken; Davis, Steven J; Diakov, Victor; Handschy, Mark A; Hines, Paul D H; Jaramillo, Paulina; Kammen, Daniel M; Long, Jane C S; Morgan, M Granger; Reed, Adam; Sivaram, Varun; Sweeney, James; Tynan, George R; Victor, David G; Weyant, John P; Whitacre, Jay F
2017-06-27
A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060-15065] argue that it is feasible to provide "low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055", with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that relies almost exclusively on wind, solar, and hydroelectric power.
Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar
Apt, Jay; Bazilian, Morgan; Diakov, Victor; Hines, Paul D. H.; Jaramillo, Paulina; Kammen, Daniel M.; Long, Jane C. S.; Morgan, M. Granger; Reed, Adam; Sivaram, Varun; Sweeney, James; Tynan, George R.; Victor, David G.; Weyant, John P.; Whitacre, Jay F.
2017-01-01
A number of analyses, meta-analyses, and assessments, including those performed by the Intergovernmental Panel on Climate Change, the National Oceanic and Atmospheric Administration, the National Renewable Energy Laboratory, and the International Energy Agency, have concluded that deployment of a diverse portfolio of clean energy technologies makes a transition to a low-carbon-emission energy system both more feasible and less costly than other pathways. In contrast, Jacobson et al. [Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Proc Natl Acad Sci USA 112(49):15060–15065] argue that it is feasible to provide “low-cost solutions to the grid reliability problem with 100% penetration of WWS [wind, water and solar power] across all energy sectors in the continental United States between 2050 and 2055”, with only electricity and hydrogen as energy carriers. In this paper, we evaluate that study and find significant shortcomings in the analysis. In particular, we point out that this work used invalid modeling tools, contained modeling errors, and made implausible and inadequately supported assumptions. Policy makers should treat with caution any visions of a rapid, reliable, and low-cost transition to entire energy systems that relies almost exclusively on wind, solar, and hydroelectric power. PMID:28630353
Technology for low-cost PIR security sensors
NASA Astrophysics Data System (ADS)
Liddiard, Kevin C.
2008-03-01
Current passive infrared (PIR) security sensors employing pyroelectric detectors are simple, cheap and reliable, but have several deficiencies. These sensors, developed two decades ago, are essentially short-range moving-target hotspot detectors. They cannot detect slow temperature changes, and thus are unable to respond to radiation stimuli indicating potential danger such as overheating electrical appliances and developing fires. They have a poor optical resolution and limited ability to recognize detected targets. Modern uncooled thermal infrared technology has vastly superior performance but as yet is too costly to challenge the PIR security sensor market. In this paper microbolometer technology will be discussed which can provide enhanced performance at acceptable cost. In addition to security sensing the technology has numerous applications in the military, industrial and domestic markets where target range is short and low cost is paramount.
Analysis and design of digital output interface devices for gas turbine electronic controls
NASA Technical Reports Server (NTRS)
Newirth, D. M.; Koenig, E. W.
1976-01-01
A trade study was performed on twenty-one digital output interface schemes for gas turbine electronic controls to select the most promising scheme based on criteria of reliability, performance, cost, and sampling requirements. The most promising scheme, a digital effector with optical feedback of the fuel metering valve position, was designed.
Performance Assessments in Science: Hands-On Tasks and Scoring Guides.
ERIC Educational Resources Information Center
Stecher, Brian M.; Klein, Stephen P.
In 1992, RAND received a grant from the National Science Foundation to study the technical quality of performance assessments in science and to evaluate their feasibility for use in large-scale testing programs. The specific goals of the project were to assess the reliability and validity of hands-on science testing and to investigate the cost and…
Shuttle payload minimum cost vibroacoustic tests
NASA Technical Reports Server (NTRS)
Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.
1977-01-01
This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.
Determining the best catheter for sonohysterography.
Dessole, S; Farina, M; Capobianco, G; Nardelli, G B; Ambrosini, G; Meloni, G B
2001-09-01
To compare the characteristics of six different catheters for performing sonohysterography (SHG) to identify those that offer the best compromise between reliability, tolerability, and cost. Prospective study. University hospital. Six hundred ten women undergoing SHG. We performed SHG with six different types of catheters: Foleycath (Wembley Rubber Products, Sepang, Malaysia), Hysca Hysterosalpingography Catheter (GTA International Medical Devices S.A., La Caleta D.N., Dominican Republic), H/S Catheter Set (Ackrad Laboratories, Cranford, NJ), PBN Balloon Hystero-Salpingography Catheter (PBN Medicals, Stenloese, Denmark), ZUI-2.0 Catheter (Zinnanti Uterine Injection; BEI Medical System International, Gembloux, Belgium), and Goldstein Catheter (Cook, Spencer, IN). We assessed the reliability, the physician's ease of use, the time requested for the insertion of the catheter, the volume of contrast medium used, the tolerability for the patients, and the cost of the catheters. In 568 (93%) correctly performed procedures, no statistically significant differences were found among the catheters. The Foleycath was the most difficult for the physician to use and required significantly more time to position correctly. The Goldstein catheter was the best tolerated by the patients. The Foleycath was the cheapest whereas the PBN Balloon was the most expensive. The choice of the catheter must be targeted to achieving a good balance between tolerability for the patients, efficacy, cost, and the personal preference of the operator.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application for RBDO is quite challenge owing to the expensive computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
MEMS reliability: coming of age
NASA Astrophysics Data System (ADS)
Douglass, Michael R.
2008-02-01
In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.
Creating Highly Reliable Accountable Care Organizations.
Vogus, Timothy J; Singer, Sara J
2016-12-01
Accountable Care Organizations' (ACOs) pursuit of the triple aim of higher quality, lower cost, and improved population health has met with mixed results. To improve the design and implementation of ACOs we look to organizations that manage similarly complex, dynamic, and tightly coupled conditions while sustaining exceptional performance known as high-reliability organizations. We describe the key processes through which organizations achieve reliability, the leadership and organizational practices that enable it, and the role that professionals can play when charged with enacting it. Specifically, we present concrete practices and processes from health care organizations pursuing high-reliability and from early ACOs to illustrate how the triple aim may be met by cultivating mindful organizing, practicing reliability-enhancing leadership, and identifying and supporting reliability professionals. We conclude by proposing a set of research questions to advance the study of ACOs and high-reliability research. © The Author(s) 2016.
Operations and support cost modeling of conceptual space vehicles
NASA Technical Reports Server (NTRS)
Ebeling, Charles
1994-01-01
The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center which documents the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years in which a reliability and maintainability model was developed to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. A revised cost element structure (CES), which is currently under study by NASA, was used to established the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allowed for other cost categories such as RDT&E and production costs to be addressed. Secondary tasks performed concurrent with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology and a computer implementation of the methodology to provide for timely operations and support cost analysis during the conceptual design activities.
Reliability of two social cognition tests: The combined stories test and the social knowledge test.
Thibaudeau, Élisabeth; Cellard, Caroline; Legendre, Maxime; Villeneuve, Karèle; Achim, Amélie M
2018-04-01
Deficits in social cognition are common in psychiatric disorders. Validated social cognition measures with good psychometric properties are necessary to assess and target social cognitive deficits. Two recent social cognition tests, the Combined Stories Test (COST) and the Social Knowledge Test (SKT), respectively assess theory of mind and social knowledge. Previous studies have shown good psychometric properties for these tests, but the test-retest reliability has never been documented. The aim of this study was to evaluate the test-retest reliability and the inter-rater reliability of the COST and the SKT. The COST and the SKT were administered twice to a group of forty-two healthy adults, with a delay of approximately four weeks between the assessments. Excellent test-retest reliability was observed for the COST, and a good test-retest reliability was observed for the SKT. There was no evidence of practice effect. Furthermore, an excellent inter-rater reliability was observed for both tests. This study shows a good reliability of the COST and the SKT that adds to the good validity previously reported for these two tests. These good psychometrics properties thus support that the COST and the SKT are adequate measures for the assessment of social cognition. Copyright © 2018. Published by Elsevier B.V.
The major objective of the Soliditech, Inc., SITE demonstration was to develop reliable performance and cost information about the Soliditech solidification, stabilization technology. The Soliditech process mixes hazardous waste materials with Portland cement or pozzolanic m...
Exploiting New Data Sources to Quantify Arterial Congestion and Performance Measures.
DOT National Transportation Integrated Search
2017-01-01
Transit travel time, operating speed and reliability all influence service attractiveness, operating cost and system efficiency. These metrics have a long-term impact on system effectiveness through a change in ridership. As part of its bus dispatch ...
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach to almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separate Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user defined parameters. Finally, several possibilities for the future work that could be done in this area of research are presented.
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing large number of subsystems, components, and parts. The trend to even larger and more complex system is continuing. The liquid rocket engineers have been focusing mainly on performance driven designs to increase payload delivery of a launch vehicle for a given mission. In otherwords, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Up till now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges; which include, defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) opposed to being contained in a single UAV (monolithic). The case study based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
Reliability and risk assessment of structures
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1991-01-01
Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.
Proposed Reliability/Cost Model
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1982-01-01
New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine Configuration Study is to contribute to the ALS development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the Space Transportation Booster Engine (STBE) Configuration Study were: (1) to identify engine development configurations which enhance vehicle performance and provide operational flexibility at low cost; and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.
Modeling new coal projects: supercritical or subcritical?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrino, A.J.; Jones, R.B.
Decisions made on new build coal-fired plants are driven by several factors - emissions, fuel logistics and electric transmission access all provide constraints. The crucial economic decision whether to build supercritical or subcritical units often depends on assumptions concerning the reliability/availability of each technology, the cost of on-fuel operations including maintenance, the generation efficiencies and the potential for emissions credits at some future value. Modeling the influence of these key factors requires analysis and documentation to assure the assets actually meet the projected financial performance. This article addresses some of the issue related to the trade-offs that have the potentialmore » to be driven by the supercritical/subcritical decision. Solomon Associates has been collecting cost, generation and reliability data on coal-fired power generation assets for approximately 10 years using a strict methodology and taxonomy to categorize and compare actual plant operations data. This database provides validated information not only on performance, but also on alternative performance scenarios, which can provide useful insights in the pro forma financial analysis and models of new plants. 1 ref., 1 fig., 3 tabs.« less
NREL in the News | Transportation Research | NREL
Promises Power Electronics Innovation Wide bandgap (WBG) technology promises to dramatically increase performance, reduce cost, and improve reliability of electronics packaging in electric-drive vehicles and Department's new Manufacturing Innovation Institute for Next Generation Power Electronics to accelerate
49 CFR 639.27 - Minimum criteria.
Code of Federal Regulations, 2010 CFR
2010-10-01
... dollar value to any non-financial factors that are considered by using performance-based specifications... used where possible and appropriate: (a) Operation costs; (b) Reliability of service; (c) Maintenance... related to timing of acquisition of asset. (h) Value of asset at expiration of the lease. ...
The purpose of the field demonstration program is to gather technically reliable cost and performance information on selected condition assessment technologies under defined field conditions. The selected technologies include zoom camera, focused electrode leak location (FELL), ...
Reliability-Based Model to Analyze the Performance and Cost of a Transit Fare Collection System.
DOT National Transportation Integrated Search
1985-06-01
The collection of transit system fares has become more sophisticated in recent years, with more flexible structures requiring more sophisticated fare collection equipment to process tickets and admit passengers. However, this new and complex equipmen...
Pediatric laryngeal simulator using 3D printed models: A novel technique.
Kavanagh, Katherine R; Cote, Valerie; Tsui, Yvonne; Kudernatsch, Simon; Peterson, Donald R; Valdez, Tulio A
2017-04-01
Simulation to acquire and test technical skills is an essential component of medical education and residency training in both surgical and nonsurgical specialties. High-quality simulation education relies on the availability, accessibility, and reliability of models. The objective of this work was to describe a practical pediatric laryngeal model for use in otolaryngology residency training. Ideally, this model would be low-cost, have tactile properties resembling human tissue, and be reliably reproducible. Pediatric laryngeal models were developed using two manufacturing methods: direct three-dimensional (3D) printing of anatomical models and casted anatomical models using 3D-printed molds. Polylactic acid, acrylonitrile butadiene styrene, and high-impact polystyrene (HIPS) were used for the directly printed models, whereas a silicone elastomer (SE) was used for the casted models. The models were evaluated for anatomic quality, ease of manipulation, hardness, and cost of production. A tissue likeness scale was created to validate the simulation model. Fleiss' Kappa rating was performed to evaluate interrater agreement, and analysis of variance was performed to evaluate differences among the materials. The SE provided the most anatomically accurate models, with the tactile properties allowing for surgical manipulation of the larynx. Direct 3D printing was more cost-effective than the SE casting method but did not possess the material properties and tissue likeness necessary for surgical simulation. The SE models of the pediatric larynx created from a casting method demonstrated high quality anatomy, tactile properties comparable to human tissue, and easy manipulation with standard surgical instruments. Their use in a reliable, low-cost, accessible, modular simulation system provides a valuable training resource for otolaryngology residents. N/A. Laryngoscope, 127:E132-E137, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.
2012-12-01
Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running GlusterFS file system, and a virtual machines execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, providing this way live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
Reliability and Maintainability Engineering - A Major Driver for Safety and Affordability
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.
2011-01-01
The United States National Aeronautics and Space Administration (NASA) is in the midst of an effort to design and build a safe and affordable heavy lift vehicle to go to the moon and beyond. To achieve that, NASA is seeking more innovative and efficient approaches to reduce cost while maintaining an acceptable level of safety and mission success. One area that has the potential to contribute significantly to achieving NASA safety and affordability goals is Reliability and Maintainability (R&M) engineering. Inadequate reliability or failure of critical safety items may directly jeopardize the safety of the user(s) and result in a loss of life. Inadequate reliability of equipment may directly jeopardize mission success. Systems designed to be more reliable (fewer failures) and maintainable (fewer resources needed) can lower the total life cycle cost. The Department of Defense (DOD) and industry experience has shown that optimized and adequate levels of R&M are critical for achieving a high level of safety and mission success, and low sustainment cost. Also, lessons learned from the Space Shuttle program clearly demonstrated the importance of R&M engineering in designing and operating safe and affordable launch systems. The Challenger and Columbia accidents are examples of the severe impact of design unreliability and process induced failures on system safety and mission success. These accidents demonstrated the criticality of reliability engineering in understanding component failure mechanisms and integrated system failures across the system elements interfaces. Experience from the shuttle program also shows that insufficient Reliability, Maintainability, and Supportability (RMS) engineering analyses upfront in the design phase can significantly increase the sustainment cost and, thereby, the total life cycle cost. Emphasis on RMS during the design phase is critical for identifying the design features and characteristics needed for time efficient processing, improved operational availability, and optimized maintenance and logistic support infrastructure. This paper discusses the role of R&M in a program acquisition phase and the potential impact of R&M on safety, mission success, operational availability, and affordability. This includes discussion of the R&M elements that need to be addressed and the R&M analyses that need to be performed in order to support a safe and affordable system design. The paper also provides some lessons learned from the Space Shuttle program on the impact of R&M on safety and affordability.
Introduction: Aims and Requirements of Future Aerospace Vehicles. Chapter 1
NASA Technical Reports Server (NTRS)
Rodriguez, Pedro I.; Smeltzer, Stanley S., III; McConnaughey, Paul (Technical Monitor)
2001-01-01
The goals and system-level requirements for the next generation aerospace vehicles emphasize safety, reliability, low-cost, and robustness rather than performance. Technologies, including new materials, design and analysis approaches, manufacturing and testing methods, operations and maintenance, and multidisciplinary systems-level vehicle development are key to increasing the safety and reducing the cost of aerospace launch systems. This chapter identifies the goals and needs of the next generation or advanced aerospace vehicle systems.
Systems design study of the Pioneer Venus spacecraft. Volume 2. Preliminary program development plan
NASA Technical Reports Server (NTRS)
1973-01-01
The preliminary development plan for the Pioneer Venus program is presented. This preliminary plan treats only developmental aspects that would have a significant effect on program cost. These significant development areas were: master program schedule planning; test planning - both unit and system testing for probes/orbiter/ probe bus; ground support equipment; performance assurance; and science integration Various test planning options and test method techniques were evaluated in terms of achieving a low-cost program without degrading mission performance or system reliability. The approaches studied and the methodology of the selected approach are defined.
Okundamiya, Michael S; Emagbetere, Joy O; Ogujor, Emmanuel A
2014-01-01
The rapid growth of the mobile telecommunication sectors of many emerging countries creates a number of problems such as network congestion and poor service delivery for network operators. This results primarily from the lack of a reliable and cost-effective power solution within such regions. This study presents a comprehensive review of the underlying principles of the renewable energy technology (RET) with the objective of ensuring a reliable and cost-effective energy solution for a sustainable development in the emerging world. The grid-connected hybrid renewable energy system incorporating a power conversion and battery storage unit has been proposed based on the availability, dynamism, and technoeconomic viability of energy resources within the region. The proposed system's performance validation applied a simulation model developed in MATLAB, using a practical load data for different locations with varying climatic conditions in Nigeria. Results indicate that, apart from being environmentally friendly, the increase in the overall energy throughput of about 4 kWh/$ of the proposed system would not only improve the quality of mobile services, by making the operations of GSM base stations more reliable and cost effective, but also better the living standards of the host communities.
Okundamiya, Michael S.; Emagbetere, Joy O.; Ogujor, Emmanuel A.
2014-01-01
The rapid growth of the mobile telecommunication sectors of many emerging countries creates a number of problems such as network congestion and poor service delivery for network operators. This results primarily from the lack of a reliable and cost-effective power solution within such regions. This study presents a comprehensive review of the underlying principles of the renewable energy technology (RET) with the objective of ensuring a reliable and cost-effective energy solution for a sustainable development in the emerging world. The grid-connected hybrid renewable energy system incorporating a power conversion and battery storage unit has been proposed based on the availability, dynamism, and technoeconomic viability of energy resources within the region. The proposed system's performance validation applied a simulation model developed in MATLAB, using a practical load data for different locations with varying climatic conditions in Nigeria. Results indicate that, apart from being environmentally friendly, the increase in the overall energy throughput of about 4 kWh/$ of the proposed system would not only improve the quality of mobile services, by making the operations of GSM base stations more reliable and cost effective, but also better the living standards of the host communities. PMID:24578673
NASA Technical Reports Server (NTRS)
1982-01-01
Primary and automatic flight controls are combined for a total flight control reliability and maintenance cost data base using information from two previous reports and additional cost data gathered from a major airline. A comparison of the current B-747 flight control system effects on reliability and operating cost with that of a B-747 designed for an active control wing load alleviation system is provided.
Performance of High-Reliability Space-Qualified Processors Implementing Software Defined Radios
2014-03-01
ADDRESS(ES) AND ADDRESS(ES) Naval Postgraduate School, Department of Electrical and Computer Engineering, 833 Dyer Road, Monterey, CA 93943-5121 8...Chairman Jeffrey D. Paduan Electrical and Computer Engineering Dean of Research iii THIS PAGE...capability. Radiation in space poses a considerable threat to modern microelectronic devices, in particular to the high-performance low-cost computing
Technology Infusion of CodeSonar into the Space Network Ground Segment (RII07)
NASA Technical Reports Server (NTRS)
Benson, Markland
2008-01-01
The NASA Software Assurance Research Program (in part) performs studies as to the feasibility of technologies for improving the safety, quality, reliability, cost, and performance of NASA software. This study considers the application of commercial automated source code analysis tools to mission critical ground software that is in the operations and sustainment portion of the product lifecycle.
Enabling technologies for fiber optic sensing
NASA Astrophysics Data System (ADS)
Ibrahim, Selwan K.; Farnan, Martin; Karabacak, Devrez M.; Singer, Johannes M.
2016-04-01
In order for fiber optic sensors to compete with electrical sensors, several critical parameters need to be addressed such as performance, cost, size, reliability, etc. Relying on technologies developed in different industrial sectors helps to achieve this goal in a more efficient and cost effective way. FAZ Technology has developed a tunable laser based optical interrogator based on technologies developed in the telecommunication sector and optical transducer/sensors based on components sourced from the automotive market. Combining Fiber Bragg Grating (FBG) sensing technology with the above, high speed, high precision, reliable quasi distributed optical sensing systems for temperature, pressure, acoustics, acceleration, etc. has been developed. Careful design needs to be considered to filter out any sources of measurement drifts/errors due to different effects e.g. polarization and birefringence, coating imperfections, sensor packaging etc. Also to achieve high speed and high performance optical sensing systems, combining and synchronizing multiple optical interrogators similar to what has been used with computer/processors to deliver super computing power is an attractive solution. This path can be achieved by using photonic integrated circuit (PIC) technology which opens the doors to scaling up and delivering powerful optical sensing systems in an efficient and cost effective way.
A Near-Term, High-Confidence Heavy Lift Launch Vehicle
NASA Technical Reports Server (NTRS)
Rothschild, William J.; Talay, Theodore A.
2009-01-01
The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.
Characterization of a low concentrator photovoltaics module
NASA Astrophysics Data System (ADS)
Butler, B. A.; van Dyk, E. E.; Vorster, F. J.; Okullo, W.; Munji, M. K.; Booysen, P.
2012-05-01
Low concentration photovoltaic (LCPV) systems have the potential to reduce the cost per kWh of electricity compared to conventional flat-plate photovoltaics (PV) by up to 50%. The cost-savings are realised by replacing expensive PV cells with relatively cheaper optical components to concentrate incident solar irradiance onto a receiver and by tracking the sun along either 1 axis or 2 axes. A LCPV module consists of three interrelated subsystems, viz., the optical, electrical and the thermal subsystems, which must be considered for optimal module design and performance. Successful integration of these subsystems requires the balancing of cost, performance and reliability. In this study LCPV experimental prototype modules were designed, built and evaluated with respect to optimisation of the three subsystems and overall performance. This paper reports on the optical and electrical evaluation of a prototype LCPV module.
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with re-design rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously designing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing on solely locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
Improving Stochastic Communication Network Performance: Reliability vs. Throughput
1991-12-01
increased to one. 2) arc survivabil.. ities will be increased in increments of one tenths. and 3) the costs to increase- arc si’rvivabilities were equal and...This reliability value is leni used to maximize the associated expected flow. For Net work A. a bIdget of (8)() pro(duces a tradcoff point at (.58.37...Network B for a buidgel of 2000 which allows a nel \\\\ork relial)ilitv of one to be achieved and a bidget of 1200 which allows for ;, maximum 57
Study of thermal management for space platform applications
NASA Technical Reports Server (NTRS)
Oren, J. A.
1980-01-01
Techniques for the management of the thermal energy of large space platforms using many hundreds of kilowatts over a 10 year life span were evaluated. Concepts for heat rejection, heat transport within the vehicle, and interfacing were analyzed and compared. The heat rejection systems were parametrically weight optimized over conditions for heat pipe and pumped fluid approaches. Two approaches to achieve reliability were compared for: performance, weight, volume, projected area, reliability, cost, and operational characteristics. Technology needs are assessed and technology advancement recommendations are made.
Signal verification can promote reliable signalling.
Broom, Mark; Ruxton, Graeme D; Schaefer, H Martin
2013-11-22
The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer-resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism.
Reliability assessment of Multichip Module technologies via the Triservice/NASA RELTECH program
NASA Astrophysics Data System (ADS)
Fayette, Daniel F.
1994-10-01
Multichip Module (MCM) packaging/interconnect technologies have seen increased emphasis from both the commercial and military communities as a means of increasing capability and performance while providing a vehicle for reducing cost, power and weight of the end item electronic application. This is accomplished through three basic Multichip module technologies, MCM-L that are laminates, MCM-C that are ceramic type substrates and MCM-D that are deposited substrates (e.g., polymer dielectric with thin film metals). Three types of interconnect structures are also used with these substrates and include, wire bond, Tape Automated Bonds (TAB) and flip chip ball bonds. Application, cost, producibility and reliability are the drivers that will determine which MCM technology will best fit a respective need or requirement. With all the benefits and technologies cited, it would be expected that the use of, or the planned use of, MCM's would be more extensive in both military and commercial applications. However, two significant roadblocks exist to implementation of these new technologies: the absence of reliability data and a single national standard for the procurement of reliable/quality MCM's. To address the preceding issues, the Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program has been established. This program, which began in May 1992, has endeavored to evaluate a cross section of MCM technologies covering all classes of MCM's previously cited. NASA and the Tri-Services (Air Force Rome Laboratory, Naval Surface Warfare Center, Crane IN and Army Research Laboratory) have teamed together with sponsorship from ARPA to evaluate the performance, reliability and producibility of MCM's for both military and commercial usage. This is done in close cooperation with our industry partners whose support is critical to the goals of the program. Several tasks are being performed by the RELTECH program and data from this effort, in conjunction with information from our industry partners as well as discussions with industry organizations (IPC, EIA, ISHM, etc.) are being used to develop the qualification and screening requirements for MCM's. Specific tasks being performed by the RELTECH program include technical assessments, product evaluations, reliability modeling, environmental testing, and failure analysis. This paper will describe the various tasks associated with the RELTECH program, status, progress and a description of the national dual use specification being developed for MCM technologies.
DOT National Transportation Integrated Search
1976-07-01
The Federal Railroad Administration (FRA) is sponsoring research, development, and demonstration programs to provide improved safety, performance, speed, reliability, and maintainability of rail transportation systems at reduced life-cycle costs. A m...
Development of formula varsity race car chassis
NASA Astrophysics Data System (ADS)
Abdullah, M. A.; Mansur, M. R.; Tamaldin, N.; Thanaraj, K.
2013-12-01
Three chassis designs have been developed using commercial computer aided design (CAD) software. The design is based on the specifications of UTeM Formula VarsityTM 2012 (FV2012). The selection of the design is derived from weighted matrix which consists of reliability, cost, time consumption and weight. The score of the matrix is formulated based on relative weighted factor among the selections. All three designs are then fabricated using selected materials available. The actual cost, time consumption and weight of the chassis's are compared with the theoretical weighted scores. Standard processes of cuttings, fittings and welding are performed in chassis mock up and fabrication. The chassis is later assembled together with suspension systems, steering linkages, brake systems, engine system, and drive shaft systems. Once the chassis is assembled, the studies of driver's ergonomic and part accessibility are performed. The completion in final fittings and assembly of the race car and its reliability demonstrate an outstanding design for manufacturing (DFM) practices of the chassis.
Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E
2015-11-01
The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant and cost-effective solutions are desirable. We chose Medical Imaging Resource Center (MIRC) to build our teaching file. It is offered by RSNA for free. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Foundation: a small control board developed as a low cost computer for schools also used in alternative projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operational system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySql (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of software was smooth. The Raspberry Pi was able to handle very well the task of hosting the teaching file repository for our division. Uptime was logged at 100 %, and loading times were similar to other MIRC sites available online. We setup two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost control board (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K
2013-01-01
While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches are explored; the Relative Value Unit RVU and the simple cost-to-time methods. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yishen; Zhou, Zhi; Liu, Cong
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for the power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides amore » reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are being scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.« less
Stretchable and high-performance supercapacitors with crumpled graphene papers.
Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe
2014-10-01
Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g(-1)), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance.
Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers
NASA Astrophysics Data System (ADS)
Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe
2014-10-01
Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g-1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance.
Stretchable and High-Performance Supercapacitors with Crumpled Graphene Papers
Zang, Jianfeng; Cao, Changyong; Feng, Yaying; Liu, Jie; Zhao, Xuanhe
2014-01-01
Fabrication of unconventional energy storage devices with high stretchability and performance is challenging, but critical to practical operations of fully power-independent stretchable electronics. While supercapacitors represent a promising candidate for unconventional energy-storage devices, existing stretchable supercapacitors are limited by their low stretchability, complicated fabrication process, and high cost. Here, we report a simple and low-cost method to fabricate extremely stretchable and high-performance electrodes for supercapacitors based on new crumpled-graphene papers. Electrolyte-mediated-graphene paper bonded on a compliant substrate can be crumpled into self-organized patterns by harnessing mechanical instabilities in the graphene paper. As the substrate is stretched, the crumpled patterns unfold, maintaining high reliability of the graphene paper under multiple cycles of large deformation. Supercapacitor electrodes based on the crumpled graphene papers exhibit a unique combination of high stretchability (e.g., linear strain ~300%, areal strain ~800%), high electrochemical performance (e.g., specific capacitance ~196 F g−1), and high reliability (e.g., over 1000 stretch/relax cycles). An all-solid-state supercapacitor capable of large deformation is further fabricated to demonstrate practical applications of the crumpled-graphene-paper electrodes. Our method and design open a wide range of opportunities for manufacturing future energy-storage devices with desired deformability together with high performance. PMID:25270673
Central Plant Optimization for Waste Energy Reduction (CPOWER). ESTCP Cost and Performance Report
2016-12-01
in the regression models. The solar radiation data did not appear reliable in the weather dataset for the location, and hence it was not used. The...and additional factors (e.g., solar insolation) may be needed to obtain a better model. 2. Inputs to optimizer: During several periods of...Location: North Carolina Energy Consumption Cost Savings $ 443,698.00 Analysis Type: FEMP PV of total savings 215,698.00$ Base Date: April 1
The measurement of maintenance function efficiency through financial KPIs
NASA Astrophysics Data System (ADS)
Galar, D.; Parida, A.; Kumar, U.; Baglee, D.; Morant, A.
2012-05-01
The measurement of the performance in the maintenance function has produced large sets of indicators that due to their nature and disparity in criteria and objectives have been grouped in different subsets lately, emphasizing the set of financial indicators. The generation of these indicators demands data collection of high reliability that is only made possible through a model of costs adapted to the special casuistry of the maintenance function, characterized by the occultism of these costs.
Hotol and Saenger are good political trump cards
NASA Astrophysics Data System (ADS)
Ruppe, Harry O.
Political and technological aspects of proposals for ESA reusable and/or SSTO launch vehicles (LVs) are examined in a critical review. The lack of reliable performance and cost estimates for such unconventional LV designs as Hotol, Saenger II, LART, ADV, and EARL is pointed out, and it is argued that progress toward the ESA goal of greater European space autonomy could be seriously endangered by abandoning or underfunding the current Ariane/Hermes LV program. The cost and reliability of expendable and reusable LV systems are discussed; two-stage and hybrid air-breathing engine concepts are compared; and the need for fundamental in-depth planning studies based on presently available technology or realistic projections is stressed. Long-term funding of such research at about 5 percent of present Ariane/Hermes levels is recommended.
NASA Technical Reports Server (NTRS)
Kimble, Michael C.; Hoberecht, Mark
2003-01-01
NASA's Next Generation Launch Technology (NGLT) program is being developed to meet national needs for civil and commercial space access with goals of reducing the launch costs, increasing the reliability, and reducing the maintenance and operating costs. To this end, NASA is considering an all- electric capability for NGLT vehicles requiring advanced electrical power generation technology at a nominal 20 kW level with peak power capabilities six times the nominal power. The proton exchange membrane (PEM) fuel cell has been identified as a viable candidate to supply this electrical power; however, several technology aspects need to be assessed. Electrochem, Inc., under contract to NASA, has developed a breadboard power generator to address these technical issues with the goal of maximizing the system reliability while minimizing the cost and system complexity. This breadboard generator operates with dry hydrogen and oxygen gas using eductors to recirculate the gases eliminating gas humidification and blowers from the system. Except for a coolant pump, the system design incorporates passive components allowing the fuel cell to readily follow a duty cycle profile and that may operate at high 6:1 peak power levels for 30 second durations. Performance data of the fuel cell stack along with system performance is presented to highlight the benefits of the fuel cell stack design and system design for NGLT vehicles.
Baur, Heiner; Groppa, Alessia Severina; Limacher, Regula; Radlinger, Lorenz
2016-02-02
Maximum strength and rate of force development (RFD) are 2 important strength characteristics for everyday tasks and athletic performance. Measurements of both parameters must be reliable. Expensive isokinetic devices with isometric modes are often used. The possibility of cost-effective measurements in a practical setting would facilitate quality control. The purpose of this study was to assess the reliability of measurements of maximum isometric strength (Fmax) and RFD on a conventional leg press. Sixteen subjects (23 ± 2 y, 1.68 ± 0.05 m, 59 ± 5 kg) were tested twice within 1 session. After warm-up, subjects performed 2 times 5 trials eliciting maximum voluntary isometric contractions on an instrumented leg press (1- and 2-legged randomized). Fmax (N) and RFD (N/s) were extracted from force-time curves. Reliability was determined for Fmax and RFD by calculating the intraclass correlation coefficient (ICC), the test-retest variability (TRV), and the bias and limits of agreement. Reliability measures revealed good to excellent ICCs of .80-.93. TRV showed mean differences between measurement sessions of 0.4-6.9%. The systematic error was low compared with the absolute mean values (Fmax 5-6%, RFD 1-4%). The implementation of a force transducer into a conventional leg press provides a viable procedure to assess Fmax and RFD. Both performance parameters can be assessed with good to excellent reliability allowing quality control of interventions.
"Fly-by-Wireless" and Wireless Sensors Update
NASA Technical Reports Server (NTRS)
Studor, George F.
2009-01-01
This slide presentation reviews the uses of wires in the Aerospace industry. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable lower cost modular and higher performance alternatives to wired data connectivity to benefit the entire vehicle and program
The purpose of the field demonstration program is to gather technically reliable cost and performance information on selected condition assessment technologies under defined field conditions. The selected technologies include zoom camera, electro-scan (FELL-41), and a multi-sens...
The major objective of the HAZCON Solidification SITE Program Demonstration Test was to develop reliable performance and cost information. The demonstration occurred at a 50-acre site of a former oil reprocessing plant at Douglassville, PA containing a wide range of organic...
In-Use Performance Comparison of Hybrid Electric, CNG, and Diesel Buses at New York City Transit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnitt, R. A.
2008-06-01
The National Renewable Energy Laboratory (NREL) evaluated the performance of diesel, compressed natural gas (CNG), and hybrid electric (equipped with BAE Systems? HybriDrive propulsion system) transit buses at New York City Transit (NYCT). CNG, Gen I and Gen II hybrid electric propulsion systems were compared on fuel economy, maintenance and operating costs per mile, and reliability.
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing probabilistic nature of material properties (Kevlar® 49) to the existing deterministic finite element analysis (FEA) of fabric based engine containment system through Monte Carlo simulations (MCS) and implementation of probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. Then the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a good prospective on results rather than a single deterministic simulation results. The next part of research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. In any case, in a deterministic optimization problem even though the structures are cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading etc.) is not represented or considered in the solution process. Reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of research starts with introduction to reliability analysis such as first order reliability analysis, second order reliability analysis followed by simulation technique that are performed to obtain probability of failure and reliability of structures. Next, decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Followed by implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
NASA Astrophysics Data System (ADS)
Cheng, Xiao; Feng, Lei; Zhou, Fanqin; Wei, Lei; Yu, Peng; Li, Wenjing
2018-02-01
With the rapid development of the smart grid, the data aggregation point (AP) in the neighborhood area network (NAN) is becoming increasingly important for forwarding the information between the home area network and wide area network. Due to limited budget, it is unable to use one-single access technology to meet the ongoing requirements on AP coverage. This paper first introduces the wired and wireless hybrid access network with the integration of long-term evolution (LTE) and passive optical network (PON) system for NAN, which allows a good trade-off among cost, flexibility, and reliability. Then, based on the already existing wireless LTE network, an AP association optimization model is proposed to make the PON serve as many APs as possible, considering both the economic efficiency and network reliability. Moreover, since the features of the constraints and variables of this NP-hard problem, a hybrid intelligent optimization algorithm is proposed, which is achieved by the mixture of the genetic, ant colony and dynamic greedy algorithm. By comparing with other published methods, simulation results verify the performance of the proposed method in improving the AP coverage and the performance of the proposed algorithm in terms of convergence.
Alternative Fuels Data Center: Minnesota School District Finds Cost
Savings, Cold-Weather Reliability with Propane Buses Minnesota School District Finds Cost Center: Minnesota School District Finds Cost Savings, Cold-Weather Reliability with Propane Buses on Facebook Tweet about Alternative Fuels Data Center: Minnesota School District Finds Cost Savings, Cold
Cost analysis of objective resident cataract surgery assessments.
Nandigam, Kiran; Soh, Jonathan; Gensheimer, William G; Ghazi, Ahmed; Khalifa, Yousuf M
2015-05-01
To compare 8 ophthalmology resident surgical training tools to determine which is most cost effective. University of Rochester Medical Center, Rochester, New York, USA. Retrospective evaluation of technology. A cost-analysis model was created to compile all relevant costs in running each tool in a medium-sized ophthalmology program. Quantitative cost estimates were obtained based on cost of tools, cost of time in evaluations, and supply and maintenance costs. For wet laboratory simulation, Eyesi was the least expensive cataract surgery simulation method; however, it is only capable of evaluating simulated cataract surgery rehearsal and requires supplementation with other evaluative methods for operating room performance and for noncataract wet lab training and evaluation. The most expensive training tool was the Eye Surgical Skills Assessment Test (ESSAT). The 2 most affordable methods for resident evaluation in operating room performance were the Objective Assessment of Skills in Intraocular Surgery (OASIS) and Global Rating Assessment of Skills in Intraocular Surgery (GRASIS). Cost-based analysis of ophthalmology resident surgical training tools are needed so residency programs can implement tools that are valid, reliable, objective, and cost effective. There is no perfect training system at this time. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Silver-free Metallization Technology for Producing High Efficiency, Industrial Silicon Solar Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelson, Lynne M.; Munoz, Krystal; Karas, Joseph
The goal of this project is to provide a commercially viable Ag-free metallization technology that will both reduce cost and increase efficiency of standard silicon solar cells. By removing silver from the front grid metallization and replacing it with lower cost nickel, copper, and tin metal, the front grid direct materials costs will decrease. This reduction in material costs should provide a path to meeting the Sunshot 2020 goal of 1 dollar / W DC. As of today, plated contacts are not widely implemented in large scale manufacturing. For organizations that wish to implement pilot scale manufacturing, only two equipmentmore » choices exist. These equipment manufacturers do not supply plating chemistry. The main goal of this project is to provide a chemistry and equipment solution to the industry that enables reliable manufacturing of plated contacts marked by passing reliability results and higher efficiencies than silver paste front grid contacts. To date, there have been several key findings that point to plated contacts performing equal to or better than the current state of the art silver paste contacts. Poor adhesion and reliability concerns are a few of the hurdles for plated contacts, specifically plated nickel directly on silicon. A key finding of the Phase 1 budget period is that the plated contacts have the same adhesion as the silver paste controls. This is a huge win for plated contacts. With very little optimization work, state of the art electrical results for plated contacts on laser ablated lines have been demonstrated with efficiencies up to 19.1% and fill factors ~80% on grid lines 40-50 um wide. The silver paste controls with similar line widths demonstrate similar electrical results. By optimizing the emitter and grid design for the plated contacts, it is expected that the electrical performance will exceed the silver paste controls. In addition, cells plated using Technic chemistry and equipment pass reliability testing; i.e. 1000 hours damp heat and 200 thermal cycles, with results similar to silver paste control cells. 100 cells have been processed through Technic’s novel demo plating tool built and installed during budget period 2. This plating tool performed consistently from cell to cell, providing gentle handling for the solar cells. An agreement has been signed with a cell manufacturer to process their cells through our plating chemistry and equipment. Their main focus for plated contacts is to reduce the direct materials cost by utilizing nickel, copper, and tin in place of silver paste. Based on current market conditions and cost model calculations, the overall savings offered by plated contacts is only 3.5% dollar/W versus silver paste contacts; however, the direct materials savings depend on the silver market. If silver prices increase, plated contacts may find a wider adoption in the solar industry in order to keep the direct materials costs down for front grid contacts.« less
Creation of Power Reserves Under the Market Economy Conditions
NASA Astrophysics Data System (ADS)
Mahnitko, A.; Gerhards, J.; Lomane, T.; Ribakov, S.
2008-09-01
The main task of the control over an electric power system (EPS) is to ensure reliable power supply at the least cost. In this case, requirements to the electric power quality, power supply reliability and cost limitations on the energy resources must be observed. The available power reserve in an EPS is the necessary condition to keep it in operation with maintenance of normal operating variables (frequency, node voltage, power flows via the transmission lines, etc.). The authors examine possibilities to create power reserves that could be offered for sale by the electric power producer. They consider a procedure of price formation for the power reserves and propose a relevant mathematical model for a united EPS, the initial data being the fuel-cost functions for individual systems, technological limitations on the active power generation and consumers' load. As the criterion of optimization the maximum profit for the producer is taken. The model is exemplified by a concentrated EPS. The computations have been performed using the MATLAB program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil - by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines are presented.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant
Increasing the number of electric-drive vehicles (EDVs) on America's roads has been identified as a strategy with near-term potential for dramatically decreasing the nation's dependence on oil -- by the U.S. Department of Energy, the federal cross-agency EV-Everywhere Challenge, and the automotive industry. Mass-market deployment will rely on meeting aggressive technical targets, including improved efficiency and reduced size, weight, and cost. Many of these advances will depend on optimization of thermal management. Effective thermal management is critical to improving the performance and ensuring the reliability of EDVs. Efficient heat removal makes higher power densities and lower operating temperatures possible, andmore » in turn enables cost and size reductions. The National Renewable Energy Laboratory (NREL), along with DOE and industry partners is working to develop cost-effective thermal management solutions to increase device and component power densities. In this presentation, the activities in recent years related to thermal management and reliability of automotive power electronics and electric machines will be presented.« less
2014-01-01
Background Multiple mini-interviews (MMIs) are a valuable tool in medical school selection due to their broad acceptance and promising psychometric properties. With respect to the high expenses associated with this procedure, the discussion about its feasibility should be extended to cost-effectiveness issues. Methods Following a pilot test of MMIs for medical school admission at Hamburg University in 2009 (HAM-Int), we took several actions to improve reliability and to reduce costs of the subsequent procedure in 2010. For both years, we assessed overall and inter-rater reliabilities based on multilevel analyses. Moreover, we provide a detailed specification of costs, as well as an extrapolation of the interrelation of costs, reliability, and the setup of the procedure. Results The overall reliability of the initial 2009 HAM-Int procedure with twelve stations and an average of 2.33 raters per station was ICC=0.75. Following the improvement actions, in 2010 the ICC remained stable at 0.76, despite the reduction of the process to nine stations and 2.17 raters per station. Moreover, costs were cut down from $915 to $495 per candidate. With the 2010 modalities, we could have reached an ICC of 0.80 with 16 single rater stations ($570 per candidate). Conclusions With respect to reliability and cost-efficiency, it is generally worthwhile to invest in scoring, rater training and scenario development. Moreover, it is more beneficial to increase the number of stations instead of raters within stations. However, if we want to achieve more than 80 % reliability, a minor improvement is paid with skyrocketing costs. PMID:24645665
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Reduction of police vehicle accidents through mechaniically aided supervision
Larson, Lynn D.; Schnelle, John F.; Kirchner, Robert; Carr, Adam F.; Domash, Michele; Risley, Todd R.
1980-01-01
Tachograph recorders were installed in 224 vehicles of a metropolitan police department to monitor vehicle operation in an attempt to reduce the rate of accidents. Police sergeants reviewed each tachograph chart and provided feedback to officers regarding their driving performance. Reliability checks and additional feedback procedures were implemented so that upper level supervisors monitored and controlled the performance of field sergeants. The tachograph intervention and components of the feedback system nearly eliminated personal injury accidents and sharply reduced accidents caused by officer negligence. A cost-benefit analysis revealed that the savings in vehicle repair and injury claims outweighed the equipment and operating costs. PMID:16795634
Development of Camera Electronics for the Advanced Gamma-ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Tajima, Hiroyasu
2009-05-01
AGIS, a next generation of atmospheric Cherenkov telescope arrays, aims to achieve a sensitivity level of a milliCrab for gamma-ray observations in in the energy band of 40 GeV to 100 TeV. Such improvement requires cost reduction of individual components with high reliability in order to equip the order of 100 telescopes necessary to achieve the sensitivity goal. We are exploring several design concepts to reduce the cost of camera electronics while improving their performance. We have developed test systems for some of these concepts and are testing their performance. Here we present test results of the test systems.
Performance Prediction and Validation: Data, Frameworks, and Considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tinnesand, Heidi
2017-05-19
Improving the predictability and reliability of wind power generation and operations will reduce costs and potentially establish a framework to attract new capital into the distributed wind sector, a key cost reduction requirement highlighted in results from the distributed wind future market assessment conducted with dWind. Quantifying and refining the accuracy of project performance estimates will also directly address several of the key challenges identified by industry stakeholders in 2015 as part of the distributed wind resource assessment workshop and be cross-cutting for several other facets of the distributed wind portfolio. This presentation covers the efforts undertaken in 2016 tomore » address these topics.« less
Developments in the design, analysis, and fabrication of advanced technology transmission elements
NASA Technical Reports Server (NTRS)
Drago, R. J.; Lenski, J. W., Jr.
1982-01-01
Over the last decade, the presently reported proprietary development program for the reduction of helicopter drive system weight and cost and the enhancement of reliability and survivability has produced high speed roller bearings, resin-matrix composite rotor shafts and transmission housings, gear/bearing/shaft system integrations, photoelastic investigation methods for gear tooth strength, and the automatic generation of complex FEM models for gear/shaft systems. After describing the design features and performance capabilities of the hardware developed, attention is given to the prospective benefits to be derived from application of these technologies, with emphasis on the relationship between helicopter drive system performance and cost.
The SITE Program demonstration of one configuration of the BioTrol Soil Washing System (BSWS) was conducted to obtain reliable performance and cost data that can be used to evaluate the potential applicability of the technology as a remediation alternative for sites contaminated ...
A USEPA-sponsored field demonstration program was conducted to gather technically reliable cost and performance information on the electro-scan (FELL -41) pipeline condition assessment technology. Electro-scan technology can be used to estimate the magnitude and location of pote...
A Framework For Fault Tolerance In Virtualized Servers
2016-06-01
effects into the system. Decrease in performance, the expansion in the total system size and weight, and a hike in the system cost can be counted in... benefit also shines out in terms of reliability. 41 4. How Data Guard Synchronizes Standby Databases Primary and standby databases in Oracle Data
Modern Efficiencies for Healthy Schools
ERIC Educational Resources Information Center
VanOort, Adam
2012-01-01
Facility managers everywhere are tasked with improving energy efficiency to control costs. Those strides cannot be achieved at the expense of system performance and reliability, or the comfort of the people within those properties. There are few places where this is truer than in schools and universities. K-12 schools and university lecture spaces…
Choosing an Optical Disc System: A Guide for Users and Resellers.
ERIC Educational Resources Information Center
Vane-Tempest, Stewart
1995-01-01
Presents a guide for selecting an optional disc system. Highlights include storage hierarchy; standards; data life cycles; security; implementing an optical jukebox system; optimizing the system; performance; quality and reliability; software; cost of online versus near-line; and growing opportunities. Sidebars provide additional information on…
Study on the stability and reliability of Clinotron at Y-band
NASA Astrophysics Data System (ADS)
Li, Shuang; Wang, Jianguo; Chen, Zaigao; Wang, Guangqiang; Wang, Dongyang; Teng, Yan
2017-11-01
To improve the stability and reliability of Clinotron at the Y-band, some key issues are researched, such as the synchronous operating mode, the heat accumulation on the slow-wave structure, and the errors in micro-fabrication. By analyzing the dispersion relationship, the working mode is determined as the TM10 mode. The problem of heat dissipation on a comb is researched to make a trade-off on the choice of suitable working conditions, making sure that the safety and efficiency of the device are guaranteed simultaneously. The study on the effect of tolerance on device's performance is also conducted to determine the acceptable error during micro-fabrication. The validity of the device and the cost for fabrication are both taken into consideration. At last, the performance of Clinotron under the optimized conditions demonstrates that it can work steadily at 315.89 GHz and the output power is about 12 W, showing advanced stability and reliability.
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
NASA Technical Reports Server (NTRS)
Sizlo, T. R.; Berg, R. A.; Gilles, D. L.
1979-01-01
An augmentation system for a 230 passenger, twin engine aircraft designed with a relaxation of conventional longitudinal static stability was developed. The design criteria are established and candidate augmentation system control laws and hardware architectures are formulated and evaluated with respect to reliability, flying qualities, and flight path tracking performance. The selected systems are shown to satisfy the interpreted regulatory safety and reliability requirements while maintaining the present DC 10 (study baseline) level of maintainability and reliability for the total flight control system. The impact of certification of the relaxed static stability augmentation concept is also estimated with regard to affected federal regulations, system validation plan, and typical development/installation costs.
Space transportation booster engine configuration study. Volume 1: Executive Summary
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine (STBE) Configuration Study is to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the Space Transportation Booster Engine (STBE) Configuration Study were to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and to explore innovative approaches to the follow-on full-scale development (FSD) phase for the STBE.
NASA Technical Reports Server (NTRS)
Traversi, M.; Piccolo, R.
1980-01-01
Tradeoff study activities and the analysis process used are described with emphasis on (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives; interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability and maintainability; life cycle costs, and operational quality. The final vehicle conceptual design is examined.
Near-term hybrid vehicle program, phase 1
NASA Technical Reports Server (NTRS)
1979-01-01
The preliminary design of a hybrid vehicle which fully meets or exceeds the requirements set forth in the Near Term Hybrid Vehicle Program is documented. Topics addressed include the general layout and styling, the power train specifications with discussion of each major component, vehicle weight and weight breakdown, vehicle performance, measures of energy consumption, and initial cost and ownership cost. Alternative design options considered and their relationship to the design adopted, computer simulation used, and maintenance and reliability considerations are also discussed.
NASA Astrophysics Data System (ADS)
Giacomel, L.; Manfrin, C.; Marchiori, G.
2008-07-01
From the first application on the VLT Telescopes till today, the linear motor identifies the best solution in terms of quality/cost for any technological application in the astronomical field. Its application also in the radio-astronomy sector with the ALMA project represents a whole of forefront technology, high reliability and minimum maintenance. The adoption of embedded electronics on each motor sector makes it a system at present modular, redundant with resetting of EMC troubles.
Analysis of field usage failure rate data for plastic encapsulated solid state devices
NASA Technical Reports Server (NTRS)
1981-01-01
Survey and questionnaire techniques were used to gather data from users and manufacturers on the failure rates in the field of plastic encapsulated semiconductors. It was found that such solid state devices are being successfully used by commercial companies which impose certain screening and qualification procedures. The reliability of these semiconductors is now adequate to support their consideration in NASA systems, particularly in low cost systems. The cost of performing necessary screening for NASA applications was assessed.
NASA Astrophysics Data System (ADS)
Hu, Ming-Che
Optimization and simulation are popular operations research and systems analysis tools for energy policy modeling. This dissertation addresses three important questions concerning the use of these tools for energy market (and electricity market) modeling and planning under uncertainty. (1) What is the value of information and cost of disregarding different sources of uncertainty for the U.S. energy economy? (2) Could model-based calculations of the performance (social welfare) of competitive and oligopolistic market equilibria be optimistically biased due to uncertainties in objective function coefficients? (3) How do alternative sloped demand curves perform in the PJM capacity market under economic and weather uncertainty? How does curve adjustment and cost dynamics affect the capacity market outcomes? To address the first question, two-stage stochastic optimization is utilized in the U.S. national MARKAL energy model; then the value of information and cost of ignoring uncertainty are estimated for three uncertainties: carbon cap policy, load growth and natural gas prices. When an uncertainty is important, then explicitly considering those risks when making investments will result in better performance in expectation (positive expected cost of ignoring uncertainty). Furthermore, eliminating the uncertainty would improve strategies even further, meaning that improved forecasts of future conditions are valuable ( i.e., a positive expected value of information). Also, the value of policy coordination shows the difference between a strategy developed under the incorrect assumption of no carbon cap and a strategy correctly anticipating imposition of such a cap. For the second question, game theory models are formulated and the existence of optimistic (positive) biases in market equilibria (both competitive and oligopoly markets) are proved, in that calculated social welfare and producer profits will, in expectation, exceed the values that will actually be received. Theoretical analyses prove the general existence of this bias for both competitive and oligopolistic models when production costs and demand curves are uncertain. Also demonstrated is an optimistic bias for the net benefits of introducing a new technology into a market when the cost of the new technology is uncertainty. The optimistic biases are quantified for a model of the northwest European electricity market (including Belgium, France, Germany and the Netherlands). Demand uncertainty results in an optimistic bias of 150,000-220,000 [Euro]/hr of total surplus and natural gas price uncertainty yields a smaller bias of 8,000-10,000 [Euro]/hr for total surplus. Further, adding a new uncertain technology (biomass) to the set of possible generation methods almost doubles the optimistic bias (14,000-18,000 [Euro]/hr). The third question concerns ex ante evaluation of the Reliability Pricing Model (RPM)---the new PJM capacity market---launched in June 2007. A Monte Carlo simulation model is developed to simulate PJM capacity market and predict market performance, producer revenue, and consumer payments. An important input to RPM is a demand curve for capacity; several alternative demand curves are compared, and sensitivity analyses conducted of those conclusions. One conclusion is that the sloped demand curves are more robust because those demand curves gives higher reliability with lower consumer payments. In addition, the performance of the curves is evaluated for a more sophisticated market design in which the demand curve can be adjusted in response to previous market outcomes and where the capital costs may change unexpectedly. The simulation shows that curve adjustment increases system reliability with lower consumer payments. Also the effect of learning-by-doing, leading to lower plant capital costs, leads to higher average reserve margin and lower consumer payments. In contrast, a the sudden rise in capital costs causes a decrease in reliability and an increase in consumer payments.
Proposed reliability cost model
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1973-01-01
The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reenforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and sometimes possible CTR's, in devising a suitable cost-effective policy.
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
Using virtual robot-mediated play activities to assess cognitive skills.
Encarnação, Pedro; Alvarez, Liliana; Rios, Adriana; Maya, Catarina; Adams, Kim; Cook, Al
2014-05-01
To evaluate the feasibility of using virtual robot-mediated play activities to assess cognitive skills. Children with and without disabilities utilized both a physical robot and a matching virtual robot to perform the same play activities. The activities were designed such that successfully performing them is an indication of understanding of the underlying cognitive skills. Participants' performance with both robots was similar when evaluated by the success rates in each of the activities. Session video analysis encompassing participants' behavioral, interaction and communication aspects revealed differences in sustained attention, visuospatial and temporal perception, and self-regulation, favoring the virtual robot. The study shows that virtual robots are a viable alternative to the use of physical robots for assessing children's cognitive skills, with the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support. Virtual robots can provide a vehicle for children to demonstrate cognitive understanding. Virtual and physical robots can be used as augmentative manipulation tools allowing children with disabilities to actively participate in play, educational and therapeutic activities. Virtual robots have the potential of overcoming limitations of physical robots such as cost, reliability and the need for on-site technical support.
A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning.
Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui
2016-05-25
In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles.
A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning
Li, Xu; Xu, Qimin; Li, Bin; Song, Xianghui
2016-01-01
In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angle are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles. PMID:27231917
Henriksen, Niel M.; Roe, Daniel R.; Cheatham, Thomas E.
2013-01-01
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 microseconds of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations. PMID:23477537
Henriksen, Niel M; Roe, Daniel R; Cheatham, Thomas E
2013-04-18
Molecular dynamics force field development and assessment requires a reliable means for obtaining a well-converged conformational ensemble of a molecule in both a time-efficient and cost-effective manner. This remains a challenge for RNA because its rugged energy landscape results in slow conformational sampling and accurate results typically require explicit solvent which increases computational cost. To address this, we performed both traditional and modified replica exchange molecular dynamics simulations on a test system (alanine dipeptide) and an RNA tetramer known to populate A-form-like conformations in solution (single-stranded rGACC). A key focus is on providing the means to demonstrate that convergence is obtained, for example, by investigating replica RMSD profiles and/or detailed ensemble analysis through clustering. We found that traditional replica exchange simulations still require prohibitive time and resource expenditures, even when using GPU accelerated hardware, and our results are not well converged even at 2 μs of simulation time per replica. In contrast, a modified version of replica exchange, reservoir replica exchange in explicit solvent, showed much better convergence and proved to be both a cost-effective and reliable alternative to the traditional approach. We expect this method will be attractive for future research that requires quantitative conformational analysis from explicitly solvated simulations.
Orbit transfer vehicle engine study. Volume 2: Technical report
NASA Technical Reports Server (NTRS)
1980-01-01
The orbit transfer vehicle (OTV) engine study provided parametric performance, engine programmatic, and cost data on the complete propulsive spectrum that is available for a variety of high energy, space maneuvering missions. Candidate OTV engines from the near term RL 10 (and its derivatives) to advanced high performance expander and staged combustion cycle engines were examined. The RL 10/RL 10 derivative performance, cost and schedule data were updated and provisions defined which would be necessary to accommodate extended low thrust operation. Parametric performance, weight, envelope, and cost data were generated for advanced expander and staged combustion OTV engine concepts. A prepoint design study was conducted to optimize thrust chamber geometry and cooling, engine cycle variations, and controls for an advanced expander engine. Operation at low thrust was defined for the advanced expander engine and the feasibility and design impact of kitting was investigated. An analysis of crew safety and mission reliability was conducted for both the staged combustion and advanced expander OTV engine candidates.
Life Cycle Systems Engineering Approach to NASA's 2nd Generation Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Safie, Fayssal; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd- generation system by 2 orders of magnitude - equivalent to a crew risk of 1 -in- 10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. Given a candidate architecture that possesses credible physical processes and realistic technology assumptions, the next set of analyses address the system's functionality across the spread of operational scenarios characterized by the design reference missions. The safety/reliability and cost/economics associated with operating the system will also be modeled and analyzed to answer the questions "How safe is it?" and "How much will it cost to acquire and operate?" The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
Candidate Technologies for the Integrated Health Management Program
NASA Technical Reports Server (NTRS)
Johnson, Neal F., Jr.; Martin, Fred H.
1993-01-01
The purpose of this report is to assess Vehicle Health Management (VHM) technologies for implementation as a demonstration. Extensive studies have been performed to determine technologies which could be implemented on the Atlas and Centaur vehicles as part of a bridging program. This paper discusses areas today where VHM can be implemented for benefits in reliability, performance, and cost reduction. VHM Options are identified and one demonstration is recommended for execution.
Optimal periodic proof test based on cost-effective and reliability criteria
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1976-01-01
An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and /or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme wih Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
NASA Astrophysics Data System (ADS)
Hittle, D. C.; Johnson, D. L.
1985-01-01
This report is one of a series on the development of heating, ventilating, and air-conditioning (HVAC) control systems that are simple, efficient, reliable, maintainable, and well-documented. This report identifies major problems associated with three currently used HVAC control systems. It also describes the development of a retrofit control system applicable to military buildings that will allow easy identification of component failures, facilitate repair, and minimize system failures. Evaluation of currently used controls showed that pneumatic temperature control equipment requires a very clean source of supply air and is also not very accurate. Pneumatic, rather than electronic, actuators should be used because they are cheaper and require less maintenance. Thermistor temperature detectors should not be used for HVAC applications because they require frequent calibration. It was found that enthalpy economy cycles cannot be used for control because the humidity sensors required for their use are prone to rapid drift, inaccurate, and hard to calibrate in the field. Performance of control systems greatly affects HVAC operating costs. Significant savings can be achieved if proportional-plus-integral control schemes are used. Use of the retrofit prototype control panel developed in this study on variable-air-volume systems should provide significant energy cost savings, improve comfort and reliability, and reduce maintenance costs.
Navigating financial and supply reliability tradeoffs in regional drought management portfolios
NASA Astrophysics Data System (ADS)
Zeff, Harrison B.; Kasprzyk, Joseph R.; Herman, Jonathan D.; Reed, Patrick M.; Characklis, Gregory W.
2014-06-01
Rising development costs and growing concerns over environmental impacts have led many communities to explore more diversified water management strategies. These "portfolio"-style approaches integrate existing supply infrastructure with other options such as conservation measures or water transfers. Diversified water supply portfolios have been shown to reduce the capacity and costs required to meet demand, while also providing greater adaptability to changing hydrologic conditions. However, this additional flexibility can also cause unexpected reductions in revenue (from conservation) or increased costs (from transfers). The resulting financial instability can act as a substantial disincentive to utilities seeking to implement more innovative water management techniques. This study seeks to design portfolios that employ financial tools (e.g., contingency funds and index insurance) to reduce fluctuations in revenues and costs, allowing these strategies to achieve improved performance without sacrificing financial stability. This analysis is applied to the development of coordinated regional supply portfolios in the "Research Triangle" region of North Carolina, an area comprising four rapidly growing municipalities. The actions of each independent utility become interconnected when shared infrastructure is utilized to enable interutility transfers, requiring the evaluation of regional tradeoffs in up to five performance and financial objectives. Diversified strategies introduce significant tradeoffs between achieving reliability goals and introducing burdensome variability in annual revenues and/or costs. Financial mitigation tools can mitigate the impacts of this variability, allowing for an alternative suite of improved solutions. This analysis provides a general template for utilities seeking to navigate the tradeoffs associated with more flexible, portfolio-style management approaches.
Stoffels, I; Dissemond, J; Schulz, A; Hillen, U; Schadendorf, D; Klode, J
2012-02-01
Complete lymph node dissection (CLND) in melanoma patients with a positive sentinel lymph node (SLN) is currently being debated, as it is a cost-intensive surgical intervention with potentially high morbidity. This clinical study seeks to clarify the effectiveness, reliability and cost-effectiveness of CLND performed under tumescent local anaesthesia (TLA) compared with procedures under general anaesthesia (GA). We retrospectively analysed the data from 60 patients with primary malignant melanoma American Joint Committee on Cancer stage III who underwent CLND. Altogether 26 (43.3%) patients underwent CLND under TLA and 34 (56.7%) patients underwent CLND under GA. Fifteen of 43 (34.9%) patients had a complication, such as development of seromas and/or wound infections. The rate of complications was 25.0% (3/12) in the axilla subgroup and 28.6% (4/14) in the groin subgroup of the TLA group. In the GA group, the complication rate was 31.3% (5/16) in the axilla subgroup and 44.4% (8/18) in the groin subgroup. The costs for CLND were significantly less for the CLND in a procedure room performed under TLA (mean €67.26) compared with CLND in an operating room under GA (mean €676.20, P < 0.0001). In conclusion, this study confirms that TLA is an excellent, safe, effective and cost-efficient alternative to GA for CLND in melanoma patients. © 2011 The Authors. Journal of the European Academy of Dermatology and Venereology © 2011 European Academy of Dermatology and Venereology.
Active and passive vibration suppression for space structures
NASA Technical Reports Server (NTRS)
Hyland, David C.
1991-01-01
The relative benefits of passive and active vibration suppression for large space structures (LSS) are discussed. The intent is to sketch the true ranges of applicability of these approaches using previously published technical results. It was found that the distinction between active and passive vibration suppression approaches is not as sharp as might be thought at first. The relative simplicity, reliability, and cost effectiveness touted for passive measures are vitiated by 'hidden costs' bound up with detailed engineering implementation issues and inherent performance limitations. At the same time, reliability and robustness issues are often cited against active control. It is argued that a continuum of vibration suppression measures offering mutually supporting capabilities is needed. The challenge is to properly orchestrate a spectrum of methods to reap the synergistic benefits of combined advanced materials, passive damping, and active control.
Design for low-power and reliable flexible electronics
NASA Astrophysics Data System (ADS)
Huang, Tsung-Ching (Jim)
Flexible electronics are emerging as an alternative to conventional Si electronics for large-area low-cost applications such as e-paper, smart sensors, and disposable RFID tags. By utilizing inexpensive manufacturing methods such as ink-jet printing and roll-to-roll imprinting, flexible electronics can be made on low-cost plastics just like printing a newspaper. However, the key elements of exible electronics, thin-film transistors (TFTs), have slower operating speeds and less reliability than their Si electronics counterparts. Furthermore, depending on the material property, TFTs are usually mono-type -- either p- or n-type -- devices. Making air-stable complementary TFT circuits is very challenging and not applicable to most TFT technologies. Existing design methodologies for Si electronics, therefore, cannot be directly applied to exible electronics. Other inhibiting factors such as high supply voltage, large process variation, and lack of trustworthy device modeling also make designing larger-scale and robust TFT circuits a significant challenge. The major goal of this dissertation is to provide a viable solution for robust circuit design in exible electronics. I will first introduce a reliability simulation framework that can predict the degraded TFT circuits' performance under bias-stress. This framework has been validated using the amorphous-silicon (a-Si) TFT scan driver for TFT-LCD displays. To reuse the existing CMOS design ow for exible electronics, I propose a Pseudo-CMOS cell library that can make TFT circuits operable under low supply voltage and which has post-fabrication tunability for reliability and performance enhancement. This cell library has been validated using 2V self-assembly-monolayer (SAM) organic TFTs with a low-cost shadow-mask deposition process. I will also demonstrate a 3-bit 1.25KS/s Flash ADC in a-Si TFTs, which is based on the proposed Pseudo-CMOS cell library, and explore more possibilities in display, energy, and sensing applications.
NASA Astrophysics Data System (ADS)
Lam, C. Y.; Ip, W. H.
2012-11-01
A higher degree of reliability in the collaborative network can increase the competitiveness and performance of an entire supply chain. As supply chain networks grow more complex, the consequences of unreliable behaviour become increasingly severe in terms of cost, effort and time. Moreover, it is computationally difficult to calculate the network reliability of a Non-deterministic Polynomial-time hard (NP-hard) all-terminal network using state enumeration, as this may require a huge number of iterations for topology optimisation. Therefore, this paper proposes an alternative approach of an improved spanning tree for reliability analysis to help effectively evaluate and analyse the reliability of collaborative networks in supply chains and reduce the comparative computational complexity of algorithms. Set theory is employed to evaluate and model the all-terminal reliability of the improved spanning tree algorithm and present a case study of a supply chain used in lamp production to illustrate the application of the proposed approach.
7 CFR 1788.2 - General insurance requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... consistent with cost-effectiveness, reliability, safety, and expedition. It is recognized that Prudent... accomplish the desired result at the lowest reasonable cost consistent with cost-effectiveness, reliability... which is used or useful in the borrower's business and which shall be covered by insurance, unless each...
Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Maness, Michael; Dykes, Katherine
Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation andmore » maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator. Preliminary analysis of leveled costs of energy indicate that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents for building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet, variability in magnet prices and solutions to address reliability issues associated with gearing and brushes can change this outlook. This suggests the need to potentially pursue fundamentally new innovations in generator designs that help avoid high capital costs but still have significant reliability related to performance.« less
Optimized Generator Designs for the DTU 10-MW Offshore Wind Turbine using GeneratorSE: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, Latha; Maness, Michael; Dykes, Katherine
Compared to land-based applications, offshore wind imposes challenges for the development of next generation wind turbine generator technology. Direct-drive generators are believed to offer high availability, efficiency, and reduced operation and maintenance requirements; however, previous research suggests difficulties in scaling to several megawatts or more in size. The resulting designs are excessively large and/or massive, which are major impediments to transportation logistics, especially for offshore applications. At the same time, geared wind turbines continue to sustain offshore market growth through relatively cheaper and lightweight generators. However, reliability issues associated with mechanical components in a geared system create significant operation andmore » maintenance costs, and these costs make up a large portion of overall system costs offshore. Thus, direct-drive turbines are likely to outnumber their gear-driven counterparts for this market, and there is a need to review the costs or opportunities of building machines with different types of generators and examining their competitiveness at the sizes necessary for the next generation of offshore wind turbines. In this paper, we use GeneratorSE, the National Renewable Energy Laboratory's newly developed systems engineering generator sizing tool to estimate mass, efficiency, and the costs of different generator technologies satisfying the electromagnetic, structural, and basic thermal design requirements for application in a very large-scale offshore wind turbine such as the Technical University of Denmark's (DTU) 10-MW reference wind turbine. For the DTU reference wind turbine, we use the previously mentioned criteria to optimize a direct-drive, radial flux, permanent-magnet synchronous generator; a direct-drive electrically excited synchronous generator; a medium-speed permanent-magnet generator; and a high-speed, doubly-fed induction generator. Preliminary analysis of leveled costs of energy indicate that for large turbines, the cost of permanent magnets and reliability issues associated with brushes in electrically excited machines are the biggest deterrents for building direct-drive systems. The advantage of medium-speed permanent-magnet machines over doubly-fed induction generators is evident, yet, variability in magnet prices and solutions to address reliability issues associated with gearing and brushes can change this outlook. This suggests the need to potentially pursue fundamentally new innovations in generator designs that help avoid high capital costs but still have significant reliability related to performance.« less
In Space Nuclear Power as an Enabling Technology for Deep Space Exploration
NASA Technical Reports Server (NTRS)
Sackheim, Robert L.; Houts, Michael
2000-01-01
Deep Space Exploration missions, both for scientific and Human Exploration and Development (HEDS), appear to be as weight limited today as they would have been 35 years ago. Right behind the weight constraints is the nearly equally important mission limitation of cost. Launch vehicles, upper stages and in-space propulsion systems also cost about the same today with the same efficiency as they have had for many years (excluding impact of inflation). Both these dual mission constraints combine to force either very expensive, mega systems missions or very light weight, but high risk/low margin planetary spacecraft designs, such as the recent unsuccessful attempts for an extremely low cost mission to Mars during the 1998-99 opportunity (i.e., Mars Climate Orbiter and the Mars Polar Lander). When one considers spacecraft missions to the outer heliopause or even the outer planets, the enormous weight and cost constraints will impose even more daunting concerns for mission cost, risk and the ability to establish adequate mission margins for success. This paper will discuss the benefits of using a safe in-space nuclear reactor as the basis for providing both sufficient electric power and high performance space propulsion that will greatly reduce mission risk and significantly increase weight (IMLEO) and cost margins. Weight and cost margins are increased by enabling much higher payload fractions and redundant design features for a given launch vehicle (higher payload fraction of IMLEO). The paper will also discuss and summarize the recent advances in nuclear reactor technology and safety of modern reactor designs and operating practice and experience, as well as advances in reactor coupled power generation and high performance nuclear thermal and electric propulsion technologies. It will be shown that these nuclear power and propulsion technologies are major enabling capabilities for higher reliability, higher margin and lower cost deep space missions design to reliably reach the outer planets for scientific exploration.
Bulk electric system reliability evaluation incorporating wind power and demand side management
NASA Astrophysics Data System (ADS)
Huang, Dange
Electric power systems are experiencing dramatic changes with respect to structure, operation and regulation and are facing increasing pressure due to environmental and societal constraints. Bulk electric system reliability is an important consideration in power system planning, design and operation particularly in the new competitive environment. A wide range of methods have been developed to perform bulk electric system reliability evaluation. Theoretically, sequential Monte Carlo simulation can include all aspects and contingencies in a power system and can be used to produce an informative set of reliability indices. It has become a practical and viable tool for large system reliability assessment technique due to the development of computing power and is used in the studies described in this thesis. The well-being approach used in this research provides the opportunity to integrate an accepted deterministic criterion into a probabilistic framework. This research work includes the investigation of important factors that impact bulk electric system adequacy evaluation and security constrained adequacy assessment using the well-being analysis framework. Load forecast uncertainty is an important consideration in an electrical power system. This research includes load forecast uncertainty considerations in bulk electric system reliability assessment and the effects on system, load point and well-being indices and reliability index probability distributions are examined. There has been increasing worldwide interest in the utilization of wind power as a renewable energy source over the last two decades due to enhanced public awareness of the environment. Increasing penetration of wind power has significant impacts on power system reliability, and security analyses become more uncertain due to the unpredictable nature of wind power. The effects of wind power additions in generating and bulk electric system reliability assessment considering site wind speed correlations and the interactive effects of wind power and load forecast uncertainty on system reliability are examined. The concept of the security cost associated with operating in the marginal state in the well-being framework is incorporated in the economic analyses associated with system expansion planning including wind power and load forecast uncertainty. Overall reliability cost/worth analyses including security cost concepts are applied to select an optimal wind power injection strategy in a bulk electric system. The effects of the various demand side management measures on system reliability are illustrated using the system, load point, and well-being indices, and the reliability index probability distributions. The reliability effects of demand side management procedures in a bulk electric system including wind power and load forecast uncertainty considerations are also investigated. The system reliability effects due to specific demand side management programs are quantified and examined in terms of their reliability benefits.
Lee, Robert H; Bott, Marjorie J; Forbes, Sarah; Redford, Linda; Swagerty, Daniel L; Taunton, Roma Lee
2003-01-01
Understanding how quality improvement affects costs is important. Unfortunately, low-cost, reliable ways of measuring direct costs are scarce. This article builds on the principles of process improvement to develop a costing strategy that meets both criteria. Process-based costing has 4 steps: developing a flowchart, estimating resource use, valuing resources, and calculating direct costs. To illustrate the technique, this article uses it to cost the care planning process in 3 long-term care facilities. We conclude that process-based costing is easy to implement; generates reliable, valid data; and allows nursing managers to assess the costs of new or modified processes.
Implications of scaling on static RAM bit cell stability and reliability
NASA Astrophysics Data System (ADS)
Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael
1993-01-01
In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criteria. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. Examples of these analysis techniques which are presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. These results are used to provide comprehensive bit cell characterization which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed in reliability which can be accomplished during the early stages of product development.
A low cost real-time motion tracking approach using webcam technology.
Krishnan, Chandramouli; Washabaugh, Edward P; Seetharaman, Yogesh
2015-02-05
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject's limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. Copyright © 2014 Elsevier Ltd. All rights reserved.
A low cost real-time motion tracking approach using webcam technology
Krishnan, Chandramouli; Washabaugh, Edward P.; Seetharaman, Yogesh
2014-01-01
Physical therapy is an important component of gait recovery for individuals with locomotor dysfunction. There is a growing body of evidence that suggests that incorporating a motor learning task through visual feedback of movement trajectory is a useful approach to facilitate therapeutic outcomes. Visual feedback is typically provided by recording the subject’s limb movement patterns using a three-dimensional motion capture system and displaying it in real-time using customized software. However, this approach can seldom be used in the clinic because of the technical expertise required to operate this device and the cost involved in procuring a three-dimensional motion capture system. In this paper, we describe a low cost two-dimensional real-time motion tracking approach using a simple webcam and an image processing algorithm in LabVIEW Vision Assistant. We also evaluated the accuracy of this approach using a high precision robotic device (Lokomat) across various walking speeds. Further, the reliability and feasibility of real-time motion-tracking were evaluated in healthy human participants. The results indicated that the measurements from the webcam tracking approach were reliable and accurate. Experiments on human subjects also showed that participants could utilize the real-time kinematic feedback generated from this device to successfully perform a motor learning task while walking on a treadmill. These findings suggest that the webcam motion tracking approach is a feasible low cost solution to perform real-time movement analysis and training. PMID:25555306
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Astrophysics Data System (ADS)
Alhorn, Dean C.
2005-02-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the moon and extraterrestrial environments. Traditionally, flight elements are complete sub-systems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety, numerous space walks at a significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures. This concept is an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is autonomously performed when two components join after determining that specifications are correct. Local sensors continue monitor joint integrity post assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors. Thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on earth and extraterrestrial environments.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the moon and extraterrestrial environments. Traditionally, flight elements are complete subsystems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety, numerous space walks at a significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures. This concept is an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is autonomously performed when two components join after determining that specifications are correct. Local sensors continue monitor joint integrity post assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors. Thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on earth and extraterrestrial environments.
Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing
NASA Technical Reports Server (NTRS)
Dobbs, Carl, Sr.
2012-01-01
A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, for implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory, and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software
77 FR 57949 - Federal Acquisition Regulation; Positive Law Codification of Title 41
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-18
..., environmental, public health and safety effects, distributive impacts, and equity). E.O. 13563 emphasizes the... Work Hours and 40 U.S.C. chapter 37 Contract Work Hours Safety Standards Act. and Safety Standards... at improving performance, reliability, quality, safety, and life-cycle costs 41 U.S.C. 1711). For use...
Atlas Centaur Rocket With Reusable Booster Engines
NASA Technical Reports Server (NTRS)
Martin, James A.
1993-01-01
Proposed modification of Atlas Centaur enables reuse of booster engines. Includes replacement of current booster engines with engine of new design in which hydrogen used for both cooling and generation of power. Use of hydrogen in new engine eliminates coking and clogging and improves performance significantly. Primary advantages: reduction of cost; increased reliability; and increased payload.
2016-03-01
Tinker DRA-3 Chem. Ox. Potassium permanganate 10 2.2 Advantages and Limitations Potential advantages and disadvantages of our dataset, and...Washington DC. Thomson, N.R., E.D. Hood, and G.J. Farquhar, 2007. “ Permanganate Treatment of an Emplaced DNAPL Source,” Ground Water Monitoring
A new software-based architecture for quantum computer
NASA Astrophysics Data System (ADS)
Wu, Nan; Song, FangMin; Li, Xiangdong
2010-04-01
In this paper, we study a reliable architecture of a quantum computer and a new instruction set and machine language for the architecture, which can improve the performance and reduce the cost of the quantum computing. We also try to address some key issues in detail in the software-driven universal quantum computers.
Body postures and patterns as amplifiers of physical condition.
Taylor, P W; Hasson, O; Clark, D L
2000-01-01
The question of why receivers accept a selfish signaller's message as reliable or 'honest' has fuelled ample controversy in discussions of communication. The handicap mechanism is now widely accepted as a potent constraint on cheating. Handicap signals are deemed reliable by their costs: signallers must choose between investing in the signal or in other aspects of fitness. Accordingly, resources allocated to the signal come to reflect the signaller's fitness budget and, on average, cheating is uneconomic. However, that signals may also be deemed reliable by their design, regardless of costs, is not widely appreciated. Here we briefly describe indices and amplifiers, reliable signals that may be essentially cost free. Indices are reliable because they bear a direct association with the signalled quality rather than costs. Amplifiers do not directly provide information about signaller quality, but they facilitate assessment by increasing the apparency of pre-existing cues and signals that are associated with quality. We present results of experiments involving a jumping spider (Plexippus paykulli) to illustrate how amplifiers can facilitate assessment of cues associated with physical condition without invoking the costs required for handicap signalling. PMID:10853735
Van de Weijer-Bergsma, Eva; Kroesbergen, Evelyn H; Prast, Emilie J; Van Luit, Johannes E H
2015-09-01
Working memory is an important predictor of academic performance, and of math performance in particular. Most working memory tasks depend on one-to-one administration by a testing assistant, which makes the use of such tasks in large-scale studies time-consuming and costly. Therefore, an online, self-reliant visual-spatial working memory task (the Lion game) was developed for primary school children (6-12 years of age). In two studies, the validity and reliability of the Lion game were investigated. The results from Study 1 (n = 442) indicated satisfactory six-week test-retest reliability, excellent internal consistency, and good concurrent and predictive validity. The results from Study 2 (n = 5,059) confirmed the results on the internal consistency and predictive validity of the Lion game. In addition, multilevel analysis revealed that classroom membership influenced Lion game scores. We concluded that the Lion game is a valid and reliable instrument for the online computerized and self-reliant measurement of visual-spatial working memory (i.e., updating).
Modelling utility-scale wind power plants. Part 1: Economics
NASA Astrophysics Data System (ADS)
Milligan, Michael R.
1999-10-01
As the worldwide use of wind turbine generators continues to increase in utility-scale applications, it will become increasingly important to assess the economic and reliability impact of these intermittent resources. Although the utility industry in the United States appears to be moving towards a restructured environment, basic economic and reliability issues will continue to be relevant to companies involved with electricity generation. This article is the first of two which address modelling approaches and results obtained in several case studies and research projects at the National Renewable Energy Laboratory (NREL). This first article addresses the basic economic issues associated with electricity production from several generators that include large-scale wind power plants. An important part of this discussion is the role of unit commitment and economic dispatch in production cost models. This paper includes overviews and comparisons of the prevalent production cost modelling methods, including several case studies applied to a variety of electric utilities. The second article discusses various methods of assessing capacity credit and results from several reliability-based studies performed at NREL.
The 747 primary flight control systems reliability and maintenance study
NASA Technical Reports Server (NTRS)
1979-01-01
The major operational characteristics of the 747 Primary Flight Control Systems (PFCS) are described. Results of reliability analysis for separate control functions are presented. The analysis makes use of a NASA computer program which calculates reliability of redundant systems. Costs for maintaining the 747 PFCS in airline service are assessed. The reliabilities and cost will provide a baseline for use in trade studies of future flight control system design.
Sensor Systems for Prognostics and Health Management
Cheng, Shunfeng; Azarian, Michael H.; Pecht, Michael G.
2010-01-01
Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented. PMID:22219686
Sensor systems for prognostics and health management.
Cheng, Shunfeng; Azarian, Michael H; Pecht, Michael G
2010-01-01
Prognostics and health management (PHM) is an enabling discipline consisting of technologies and methods to assess the reliability of a product in its actual life cycle conditions to determine the advent of failure and mitigate system risk. Sensor systems are needed for PHM to monitor environmental, operational, and performance-related characteristics. The gathered data can be analyzed to assess product health and predict remaining life. In this paper, the considerations for sensor system selection for PHM applications, including the parameters to be measured, the performance needs, the electrical and physical attributes, reliability, and cost of the sensor system, are discussed. The state-of-the-art sensor systems for PHM and the emerging trends in technologies of sensor systems for PHM are presented.
Government and industry interactions in the development of clock technology
NASA Technical Reports Server (NTRS)
Hellwig, H.
1981-01-01
It appears likely that everyone in the time and frequency community can agree on goals to be realized through the expenditure of resources. These goals are the same as found in most fields of technology: lower cost, better performance, increased reliability, small size and lower power. Related aspects are examined in the process of clock and frequency standard development. Government and industry are reviewed in a highly interactive role. These interactions include judgements on clock performance, what kind of clock, expenditure of resources, transfer of ideas or hardware concepts from government to industry, and control of production. Successful clock development and production requires a government/industry relationship which is characterized by long-term continuity, multidisciplinary team work, focused funding and a separation of reliability and production oriented tasks from performance improvement/research type efforts.
Thick resist for MEMS processing
NASA Astrophysics Data System (ADS)
Brown, Joe; Hamel, Clifford
2001-11-01
The need for technical innovation is always present in today's economy. Microfabrication methods have evolved in support of the demand for smaller and faster integrated circuits with price performance improvements always in the scope of the manufacturing design engineer. The dispersion of processing technology spans well beyond IC fabrication today with batch fabrication and wafer scale processing lending advantages to MEMES applications from biotechnology to consumer electronics from oil exploration to aerospace. Today the demand for innovative processing techniques that enable technology is apparent where only a few years ago appeared too costly or not reliable. In high volume applications where yield and cost improvements are measured in fractions of a percent it is imperative to have process technologies that produce consistent results. Only a few years ago thick resist coatings were limited to thickness less than 20 microns. Factors such as uniformity, edge bead and multiple coatings made high volume production impossible. New developments in photoresist formulation combined with advanced coating equipment techniques that closely controls process parameters have enable thick photoresist coatings of 70 microns with acceptable uniformity and edge bead in one pass. Packaging of microelectronic and micromechanical devices is often a significant cost factor and a reliability issue for high volume low cost production. Technologies such as flip- chip assembly provide a solution for cost and reliability improvements over wire bond techniques. The processing for such technology demands dimensional control and presents a significant cost savings if it were compatible with mainstream technologies. Thick photoresist layers, with good sidewall control would allow wafer-bumping technologies to penetrate the barriers to yield and production where costs for technology are the overriding issue. Single pass processing is paramount to the manufacturability of packaging technology. Uniformity and edge bead control defined the success of process implementation. Today advanced packaging solutions are created with thick photoresist coatings. The techniques and results will be presented.
The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods
ERIC Educational Resources Information Center
Yeh, Stuart S.
2012-01-01
This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stokes, David
This program seeks to demonstrate a solution to enhance existing biomass cookstove performance through the use of RTI’s Thermoelectric Enhanced Cookstove Add-on (TECA) device. The self-powered TECA device captures a portion of heat from the stove and converts it to electricity through a thermoelectric (TE) device to power a blower. Colorado State University and Envirofit International are partners to support the air injection design and commercialization to enhance combustion in the stove and reduce emissions. Relevance: By demonstrating a proof of concept of the approach with the Envirofit M-5000 stove and TECA device, we hope to apply this technology tomore » existing stoves that are already in use and reduce emissions for stoves that have already found user acceptance to provide a true health benefit. Challenges: The technical challenges include achieving Tier 4 emissions from a biomass stove and for such a stove to operate reliably in the harsh field environment. Additional challenges include the fact that it is difficult to develop a cost effective solution and insure adoption and proper use in the field. Outcomes: In this program we have demonstrated PM emissions at 82 mg/MJd, a 70% reduction as compared to baseline stove operation. We have also developed a stove optimization approach that reduces the number of costly experiments. We have evaluated component-level reliability and will be testing the stove prototype in the field for performance and reliability.« less
Thin-film module circuit design: Practical and reliability aspects
NASA Technical Reports Server (NTRS)
Daiello, R. V.; Twesme, E. N.
1985-01-01
This paper will address several aspects of the design and construction of submodules based on thin film amorphous silicon (a-Si) p i n solar cells. Starting from presently attainable single cell characteristics, and a realistic set of specifications, practical module designs are discussed from the viewpoints of efficient designs, the fabrication requirements, and reliability concerns. The examples center mostly on series interconnected modules of the superstrate type with detailed discussions of each portion of the structure in relation to its influence on module efficiency. Emphasis is placed on engineering topics such as: area coverage, optimal geometries, and cost and reliability. Practical constraints on achieving optimal designs, along with some examples of potential pitfalls in the manufacture and subsequent performance of a-Si modules are discussed.
Losa-Iglesias, Marta Elena; Becerro-de-Bengoa-Vallejo, Ricardo; Becerro-de-Bengoa-Losa, Klark Ricardo
2016-06-01
There are downloadable applications (Apps) for cell phones that can measure heart rate in a simple and painless manner. The aim of this study was to assess the reliability of this type of App for a Smartphone using an Android system, compared to the radial pulse and a portable pulse oximeter. We performed a pilot observational study of diagnostic accuracy, randomized in 46 healthy volunteers. The patients' demographic data and cardiac pulse were collected. Radial pulse was measured by palpation of the radial artery with three fingers at the wrist over the radius; a low-cost portable, liquid crystal display finger pulse oximeter; and a Heart Rate Plus for Samsung Galaxy Note®. This study demonstrated high reliability and consistency between systems with respect to the heart rate parameter of healthy adults using three systems. For all parameters, ICC was > 0.93, indicating excellent reliability. Moreover, CVME values for all parameters were between 1.66-4.06 %. We found significant correlation coefficients and no systematic differences between radial pulse palpation and pulse oximeter and a high precision. Low-cost pulse oximeter and App systems can serve as valid instruments for the assessment of heart rate in healthy adults. © The Author(s) 2014.
2013-01-01
Background This study investigates the reliability of muscle performance tests using cost- and time-effective methods similar to those used in clinical practice. When conducting reliability studies, great effort goes into standardising test procedures to facilitate a stable outcome. Therefore, several test trials are often performed. However, when muscle performance tests are applied in the clinical setting, clinicians often only conduct a muscle performance test once as repeated testing may produce fatigue and pain, thus variation in test results. We aimed to investigate whether cervical muscle performance tests, which have shown promising psychometric properties, would remain reliable when examined under conditions similar to those of daily clinical practice. Methods The intra-rater (between-day) and inter-rater (within-day) reliability was assessed for five cervical muscle performance tests in patients with (n = 33) and without neck pain (n = 30). The five tests were joint position error, the cranio-cervical flexion test, the neck flexor muscle endurance test performed in supine and in a 45°-upright position and a new neck extensor test. Results Intra-rater reliability ranged from moderate to almost perfect agreement for joint position error (ICC ≥ 0.48-0.82), the cranio-cervical flexion test (ICC ≥ 0.69), the neck flexor muscle endurance test performed in supine (ICC ≥ 0.68) and in a 45°-upright position (ICC ≥ 0.41) with the exception of a new test (neck extensor test), which ranged from slight to moderate agreement (ICC = 0.14-0.41). Likewise, inter-rater reliability ranged from moderate to almost perfect agreement for joint position error (ICC ≥ 0.51-0.75), the cranio-cervical flexion test (ICC ≥ 0.85), the neck flexor muscle endurance test performed in supine (ICC ≥ 0.70) and in a 45°-upright position (ICC ≥ 0.56). However, only slight to fair agreement was found for the neck extensor test (ICC = 0.19-0.25). Conclusions Intra- and inter-rater reliability ranged from moderate to almost perfect agreement with the exception of a new test (neck extensor test), which ranged from slight to moderate agreement. The significant variability observed suggests that tests like the neck extensor test and the neck flexor muscle endurance test performed in a 45°-upright position are too unstable to be used when evaluating neck muscle performance. PMID:24299621
The space station: Human factors and productivity
NASA Technical Reports Server (NTRS)
Gillan, D. J.; Burns, M. J.; Nicodemus, C. L.; Smith, R. L.
1986-01-01
Human factor researchers and engineers are making inputs into the early stages of the design of the Space Station to improve both the quality of life and work on-orbit. Effective integration of the human factors information related to various Intravehicular Activity (IVA), Extravehicular Activity (EVA), and teletobotics systems during the Space Station design will result in increased productivity, increased flexibility of the Space Stations systems, lower cost of operations, improved reliability, and increased safety for the crew onboard the Space Station. The major features of productivity examined include the cognitive and physical effort involved in work, the accuracy of worker output and ability to maintain performance at a high level of accuracy, the speed and temporal efficiency with which a worker performs, crewmember satisfaction with their work environment, and the relation between performance and cost.
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of a NASA In-Space Technology Experiments Program based on a concept of inflatable deployable structures is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit and gravity-induced deflection of the structure precludes any meaningful ground-based demonstrations of functions performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Michael; Schellenberg, Josh; Blundell, Marshall
2015-01-01
This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can bemore » generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, David; Margolis, Robert; Denholm, Paul
Declining costs of both solar photovoltaics (PV) and battery storage have raised interest in the creation of “solar-plus-storage” systems to provide dispatchable energy and reliable capacity. There has been limited deployment of PV-plus-energy storage systems (PV+ESS), and the actual configuration and performance of these systems for dispatchable energy are in the early stages of being defined. In contrast, concentrating solar power with thermal energy storage (CSP+TES) has been deployed at scale with the proven capability of providing a dispatchable, reliable source of renewable generation. A key question moving forward is how to compare the relative costs and benefits of PV+ESSmore » and CSP+TES. While both technologies collect solar radiation and produce electricity, they do so through very different mechanisms, which creates challenges for direct comparison. Nonetheless, it is important to establish a framework for comparison and to identify cost and performance targets to aid meeting the nation’s goals for clean energy deployment. In this paper, we provide a preliminary assessment comparing the cost of energy from CSP+TES and PV+ESS that focuses on a single metric: levelized cost of energy (LCOE). We begin by defining the configuration of each system, which is particularly important for PV+ESS systems. We then examine a range of projected cost declines for PV, batteries, and CSP. Finally, we summarize the estimated LCOE over a range of configuration and cost estimates. We conclude by acknowledging that differences in these technologies present challenges for comparison using a single performance metric. We define systems with similar configurations in some respects. In reality, because of inherent differences in CSP+TES and PV+ESS systems, they will provide different grid services and different value. For example, depending on its configuration, a PV+ESS system may provide additional value over CSP+TES by providing more flexible operation, including certain ancillary services and the ability to store off-peak grid energy. Alternatively, direct thermal energy storage allows a greater capture of solar energy, reducing the potential for curtailments in very high solar scenarios. So while this analysis evaluates a key performance metric (cost per unit of generation) under a range of cost projections, additional analysis of the value per unit of generation will be needed to comprehensively assess the relative competitiveness of solar energy systems deployed with energy storage.« less
Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Hoffman, F. E.
1984-01-01
The following cost parameters of the space shuttle were undertaken: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of various payload classes on the cost of: reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability enhancing activities.
Breaking Barriers to Low-Cost Modular Inverter Production & Use
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogdan Borowy; Leo Casey; Jerry Foshage
2005-05-31
The goal of this cost share contract is to advance key technologies to reduce size, weight and cost while enhancing performance and reliability of Modular Inverter Product for Distributed Energy Resources (DER). Efforts address technology development to meet technical needs of DER market protection, isolation, reliability, and quality. Program activities build on SatCon Technology Corporation inverter experience (e.g., AIPM, Starsine, PowerGate) for Photovoltaic, Fuel Cell, Energy Storage applications. Efforts focused four technical areas, Capacitors, Cooling, Voltage Sensing and Control of Parallel Inverters. Capacitor efforts developed a hybrid capacitor approach for conditioning SatCon's AIPM unit supply voltages by incorporating several typesmore » and sizes to store energy and filter at high, medium and low frequencies while minimizing parasitics (ESR and ESL). Cooling efforts converted the liquid cooled AIPM module to an air-cooled unit using augmented fin, impingement flow cooling. Voltage sensing efforts successfully modified the existing AIPM sensor board to allow several, application dependent configurations and enabling voltage sensor galvanic isolation. Parallel inverter control efforts realized a reliable technique to control individual inverters, connected in a parallel configuration, without a communication link. Individual inverter currents, AC and DC, were balanced in the paralleled modules by introducing a delay to the individual PWM gate pulses. The load current sharing is robust and independent of load types (i.e., linear and nonlinear, resistive and/or inductive). It is a simple yet powerful method for paralleling both individual devices dramatically improves reliability and fault tolerance of parallel inverter power systems. A patent application has been made based on this control technology.« less
Inspection planning development: An evolutionary approach using reliability engineering as a tool
NASA Technical Reports Server (NTRS)
Graf, David A.; Huang, Zhaofeng
1994-01-01
This paper proposes an evolutionary approach for inspection planning which introduces various reliability engineering tools into the process and assess system trade-offs among reliability, engineering requirement, manufacturing capability and inspection cost to establish an optimal inspection plan. The examples presented in the paper illustrate some advantages and benefits of the new approach. Through the analysis, reliability and engineering impacts due to manufacturing process capability and inspection uncertainty are clearly understood; the most cost effective and efficient inspection plan can be established and associated risks are well controlled; some inspection reductions and relaxations are well justified; and design feedbacks and changes may be initiated from the analysis conclusion to further enhance reliability and reduce cost. The approach is particularly promising as global competitions and customer quality improvement expectations are rapidly increasing.
Proactive replica checking to assure reliability of data in cloud storage with minimum replication
NASA Astrophysics Data System (ADS)
Murarka, Damini; Maheswari, G. Uma
2017-11-01
The two major issues for cloud storage systems are data reliability and storage costs. For data reliability protection, multi-replica replication strategy which is used mostly in current clouds acquires huge storage consumption, leading to a large storage cost for applications within the loud specifically. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back the cloud storage consumption. PRCR ensures data reliability of large cloud information with the replication that might conjointly function as a price effective benchmark for replication. The duplication shows that when resembled to the standard three-replica approach, PRCR will scale back to consume only a simple fraction of the cloud storage from one-third of the storage, thence considerably minimizing the cloud storage price.
NASA Software Cost Estimation Model: An Analogy Based Estimation Model
NASA Technical Reports Server (NTRS)
Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James
2015-01-01
The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA especially as the software systems become larger and more complex. As an example MSL (Mars Scientific Laboratory) developed at the Jet Propulsion Laboratory launched with over 2 million lines of code making it the largest robotic spacecraft ever flown (Based on the size of the software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K- nearest neighbor prediction model performance on the same data set.
NASA Astrophysics Data System (ADS)
Kumar, Manasvi; Sharifi Dehsari, Hamed; Anwar, Saleem; Asadi, Kamal
2018-03-01
Organic bistable diodes based on phase-separated blends of ferroelectric and semiconducting polymers have emerged as promising candidates for non-volatile information storage for low-cost solution processable electronics. One of the bottlenecks impeding upscaling is stability and reliable operation of the array in air. Here, we present a memory array fabricated with an air-stable amine-based semiconducting polymer. Memory diode fabrication and full electrical characterizations were carried out in atmospheric conditions (23 °C and 45% relative humidity). The memory diodes showed on/off ratios greater than 100 and further exhibited robust and stable performance upon continuous write-read-erase-read cycles. Moreover, we demonstrate a 4-bit memory array that is free from cross-talk with a shelf-life of several months. Demonstration of the stability and reliable air operation further strengthens the feasibility of the resistance switching in ferroelectric memory diodes for low-cost applications.
Electromechanical actuation for thrust vector control applications
NASA Technical Reports Server (NTRS)
Roth, Mary Ellen
1990-01-01
The advanced launch system (ALS), is a launch vehicle that is designed to be cost-effective, highly reliable, and operationally efficient with a goal of reducing the cost per pound to orbit. An electromechanical actuation (EMA) system is being developed as an attractive alternative to the hydraulic systems. The controller will integrate 20 kHz resonant link power management and distribution (PMAD) technology and pulse population modulation (PPM) techniques to implement field-oriented vector control (FOVC) of a new advanced induction motor. The driver and the FOVC will be microprocessor controlled. For increased system reliability, a built-in test (BITE) capability will be included. This involves introducing testability into the design of a system such that testing is calibrated and exercised during the design, manufacturing, maintenance, and prelaunch activities. An actuator will be integrated with the motor controller for performance testing of the EMA thrust vector control (TVC) system. The EMA system and work proposed for the future are discussed.
Development of SiC Large Tapered Crystal Growth
NASA Technical Reports Server (NTRS)
Neudeck, Phil
2010-01-01
Majority of very large potential benefits of wide band gap semiconductor power electronics have NOT been realized due in large part to high cost and high defect density of commercial wafers. Despite 20 years of development, present SiC wafer growth approach is yet to deliver majority of SiC's inherent performance and cost benefits to power systems. Commercial SiC power devices are significantly de-rated in order to function reliably due to the adverse effects of SiC crystal dislocation defects (thousands per sq cm) in the SiC wafer.
Monolithic Microwave Integrated Circuits Based on GaAs Mesfet Technology
NASA Astrophysics Data System (ADS)
Bahl, Inder J.
Advanced military microwave systems are demanding increased integration, reliability, radiation hardness, compact size and lower cost when produced in large volume, whereas the microwave commercial market, including wireless communications, mandates low cost circuits. Monolithic Microwave Integrated Circuit (MMIC) technology provides an economically viable approach to meeting these needs. In this paper the design considerations for several types of MMICs and their performance status are presented. Multifunction integrated circuits that advance the MMIC technology are described, including integrated microwave/digital functions and a highly integrated transceiver at C-band.
NASA Technical Reports Server (NTRS)
1989-01-01
The objective of the Space Transportation Booster Engine (STBE) Configuration Study is to contribute to the Advanced Launch System (ALS) development effort by providing highly reliable, low cost booster engine concepts for both expendable and reusable rocket engines. The objectives of the space Transportation Booster Engine (STBE) Configuration Study were: (1) to identify engine configurations which enhance vehicle performance and provide operational flexibility at low cost, and (2) to explore innovative approaches to the follow-on Full-Scale Development (FSD) phase for the STBE.
Abercrombie, Robert K; Sheldon, Frederick T; Ferragut, Erik M
2014-06-24
A system evaluates reliability, performance and/or safety by automatically assessing the targeted system's requirements. A cost metric quantifies the impact of failures as a function of failure cost per unit of time. The metrics or measurements may render real-time (or near real-time) outcomes by initiating active response against one or more high ranked threats. The system may support or may be executed in many domains including physical domains, cyber security domains, cyber-physical domains, infrastructure domains, etc. or any other domains that are subject to a threat or a loss.
Engine for the next-generation launcher
NASA Astrophysics Data System (ADS)
Beichel, Rudi; Grey, Jerry
1995-05-01
The proposed dual-fuel/dual-expansion engine for the Reusable Launch Vehicle (RLV) could solve the vehicle's need for a high-performance, lightweight, low-cost, maintainable engine. The features that make dual-fuel/dual-expansion engine a prime candidate for RLV include oxygen-rich combustion, high-pressure staged-combustion cycle and dual-fuel operation. Cost-reducing, reliability-enhancing innovations such as the elimination of regenerative cooling, elimination of gimbaling and replacement of kerosene-based hydrocarbon fuel by subcooled propane have also made the this type of engine an attractive option.
A case for Redundant Arrays of Inexpensive Disks (RAID)
NASA Technical Reports Server (NTRS)
Patterson, David A.; Gibson, Garth; Katz, Randy H.
1988-01-01
Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.
TDRSS telecommunications system, PN code analysis
NASA Technical Reports Server (NTRS)
Dixon, R.; Gold, R.; Kaiser, F.
1976-01-01
The pseudo noise (PN) codes required to support the TDRSS telecommunications services are analyzed and the impact of alternate coding techniques on the user transponder equipment, the TDRSS equipment, and all factors that contribute to the acquisition and performance of these telecommunication services is assessed. Possible alternatives to the currently proposed hybrid FH/direct sequence acquisition procedures are considered and compared relative to acquisition time, implementation complexity, operational reliability, and cost. The hybrid FH/direct sequence technique is analyzed and rejected in favor of a recommended approach which minimizes acquisition time and user transponder complexity while maximizing probability of acquisition and overall link reliability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization that is needed to address the current underperformance, failures, and expenses plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has been eluded by the complicated nature of the wind farm design problem, especially the sophisticated interaction between atmospheric phenomena and wake dynamics and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevantmore » physics for predicting power performance and loads while maintaining low computational cost to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.« less
INNOVATIVE TECHNOLOGY VERIFICATION REPORT " ...
The EnSys Petro Test System developed by Strategic Diagnostics Inc. (SDI), was demonstrated under the U.S. Environmental Protection Agency Superfund Innovative Technology Evaluation Program in June 2000 at the Navy Base Ventura County site in Port Hueneme, California. The purpose of the demonstration was to collect reliable performance and cost data for the EnSys Petro Test System and six other field measurement devices for total petroleum hydrocarbons (TPH) in soil. In addition to assessing ease of device operation, the key objectives of the demonstration included determining the (1) method detection limit, (2) accuracy and precision, (3) effects of interferents and soil moisture content on TPH measurement, (4) sample throughput, and (5) TPH measurement costs for each device. The demonstration involved analysis of both performance evaluation samples and environmental samples collected in four areas contaminated with gasoline, diesel, or other petroleum products. The performance and cost results for a given field measurement device were compared to those for an off-site laboratory reference method,
INNOVATIVE TECHNOLOGY VERIFICATION REPORT " ...
The Synchronous Scanning Luminoscope (Luminoscope) developed by the Oak Ridge National Laboratory in collaboration with Environmental Systems Corporation (ESC) was demonstrated under the U.S. Environmental Protection Agency Superfund Innovative Technology Evaluation Program in June 2000 at the Navy Base Ventura County site in Port Hueneme, California. The purpose of the demonstration was to collect reliable performance and cost data for the Luminoscope and six other field measurement devices for total petroleum hydrocarbons (TPH) in soil. In addition to assessing ease of device operation, the key objectives of the demonstration included determining the (1) method detection limit, (2) accuracy and precision, (3) effects of interferents and soil moisture content on TPH measurement, (4) sample throughput, and (5) TPH measurement costs for each device. The demonstration involved analysis of both performance evaluation samples and environmental samples collected in five areas contaminated with gasoline, diesel, lubricating oil, or other petroleum products. The performance and cost results for a given field measurement device were compared to those for an off-site laboratory reference method,
Designing magnetic systems for reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitzenroeder, P.J.
1991-01-01
Designing magnetic system is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward such as utilities connection design and implementation, to the most sophisticated such as advanced finite element analyses, receives a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that the predominance of magnet failures tend not tomore » be in the most intensively engineered areas, but are associated with insulation, leads, ad unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly the majority of causes of loss of magnet reliability at PPPL has not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next-devices such as BPX and ITER are most costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years the lessons learned from them, and magnet design and fabrication practices which have been found to contribute to magnet reliability.« less
Temperature and leakage aware techniques to improve cache reliability
NASA Astrophysics Data System (ADS)
Akaaboune, Adil
Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors and thus arises the need for power efficient cache memories. Cache is the simplest cost effective method to attain high speed memory hierarchy and, its performance is extremely critical for high speed computers. Cache is used by the microprocessor for channeling the performance gap between processor and main memory (RAM) hence the memory bandwidth is frequently a bottleneck which can affect the peak throughput significantly. In the design of any cache system, the tradeoffs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent works have focused on low power design especially for portable devices and media-processing systems, however fewer research has been done on the relationship between heat management, Leakage power and cost per die. Lately, the focus of power dissipation in the new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that causes battery charge to drain and shutdown too early due the waste of energy. The problem has been aggravated by the aggressive scaling of process; device level method used originally by designers to enhance performance, conserve dissipation and reduces the sizes of digital circuits that are increasingly condensed. This dissertation studies the impact of hotspots, in the cache memory, on leakage consumption and microprocessor reliability and durability. The work will first prove that by eliminating hotspots in the cache memory, leakage power will be reduced and therefore, the reliability will be improved. The second technique studied is data quality management that improves the quality of the data stored in the cache to reduce power consumption. The initial work done on this subject focuses on the type of data that increases leakage consumption and ways to manage without impacting the performance of the microprocessor. The second phase of the project focuses on managing the data storage in different blocks of the cache to smooth the leakage power as well as dynamic power consumption. The last technique is a voltage controlled cache to reduce the leakage consumption of the cache while in execution and even in idle state. Two blocks of the 4-way set associative cache go through a voltage regulator before getting to the voltage well, and the other two are directly connected to the voltage well. The idea behind this technique is to use the replacement algorithm information to increase or decrease voltage of the two blocks depending on the need of the information stored on them.
Optimizing digital 8mm drive performance
NASA Technical Reports Server (NTRS)
Schadegg, Gerry
1993-01-01
The experience of attaching over 350,000 digital 8mm drives to 85-plus system platforms has uncovered many factors which can reduce cartridge capacity or drive throughput, reduce reliability, affect cartridge archivability and actually shorten drive life. Some are unique to an installation. Others result from how the system is set up to talk to the drive. Many stem from how applications use the drive, the work load that's present, the kind of media used and, very important, the kind of cleaning program in place. Digital 8mm drives record data at densities that rival those of disk technology. Even with technology this advanced, they are extremely robust and, given proper usage, care and media, should reward the user with a long productive life. The 8mm drive will give its best performance using high-quality 'data grade' media. Even though it costs more, good 'data grade' media can sustain the reliability and rigorous needs of a data storage environment and, with proper care, give users an archival life of 30 years or more. Various factors, taken individually, may not necessarily produce performance or reliability problems. Taken in combination, their effects can compound, resulting in rapid reductions in a drive's serviceable life, cartridge capacity, or drive performance. The key to managing media is determining the importance one places upon their recorded data and, subsequently, setting media usage guidelines that can deliver data reliability. Various options one can implement to optimize digital 8mm drive performance are explored.
Many-objective optimization and visual analytics reveal key trade-offs for London's water supply
NASA Astrophysics Data System (ADS)
Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.
2015-12-01
In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows many-objective evolutionary optimization coupled with state-of-the art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto approximate portfolios cluster into distinct groups of water supply options; for example implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost reliability constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.
Solar Energy Technologies Office Fact Sheet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solar Energy Technologies Office
The U.S. Department of Energy Solar Energy Technologies Office (SETO) supports early-stage research and development to improve the affordability, reliability, and performance of solar technologies on the grid. The office invests in innovative research efforts that securely integrate more solar energy into the grid, enhance the use and storage of solar energy, and lower solar electricity costs.
ERIC Educational Resources Information Center
Callaway, Andrew J.; Cobb, Jon E.
2012-01-01
Where as video cameras are a reliable and established technology for the measurement of kinematic parameters, accelerometers are increasingly being employed for this type of measurement due to their ease of use, performance, and comparatively low cost. However, the majority of accelerometer-based studies involve a single channel due to the…
ERIC Educational Resources Information Center
Woodell, Eric A.
2013-01-01
Information Technology (IT) professionals use the Information Technology Infrastructure Library (ITIL) process to better manage their business operations, measure performance, improve reliability and lower costs. This study examined the operational results of those data centers using ITIL against those that do not, and whether the results change…
Standards for space automation and robotics
NASA Technical Reports Server (NTRS)
Kader, Jac B.; Loftin, R. B.
1992-01-01
The AIAA's Committee on Standards for Space Automation and Robotics (COS/SAR) is charged with the identification of key functions and critical technologies applicable to multiple missions that reflect fundamental consideration of environmental factors. COS/SAR's standards/practices/guidelines implementation methods will be based on reliability, performance, and operations, as well as economic viability and life-cycle costs, simplicity, and modularity.
2014-10-27
a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems . Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions
Space Station Engineering and Technology Development
NASA Technical Reports Server (NTRS)
1985-01-01
The evolving space station program will be examined through a series of more specific studies: maintainability; research and technology in space; solar thermodynamics research and technology; program performance; onboard command and control; and research and technology road maps. The purpose is to provide comments on approaches to long-term, reliable operation at low cost in terms of funds and crew time.
Overview of the National Timber Bridge Inspection Study
James P. Wacker; Brian K. Brashaw; Frank Jalinoos
2013-01-01
As many engineers begin to implement life cycle cost analyses within the preliminary bridge design phase, there is a significant need for more reliable data on the expected service life of highway bridges. Many claims are being made about the expected longevity of concrete and steel bridges, but few are based on actual performance data. Because engineers are least...
Evaluating the Technical and Economic Performance of PV Plus Storage Power Plants: Report Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul L.; Margolis, Robert M.; Eichman, Joshua D.
The decreasing costs of both PV and energy storage technologies have raised interest in the creation of combined PV plus storage systems to provide dispatchable energy and reliable capacity. In this study, we examine the tradeoffs among various PV plus storage configurations and quantify the impact of configuration on system net value.
Evaluating the Technical and Economic Performance of PV Plus Storage Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denholm, Paul L.; Margolis, Robert M.; Eichman, Joshua D.
The decreasing costs of both PV and energy storage technologies have raised interest in the creation of combined PV plus storage systems to provide dispatchable energy and reliable capacity. In this study, we examine the tradeoffs among various PV plus storage configurations and quantify the impact of configuration on system net value.
ERIC Educational Resources Information Center
Goclowski, John C.; And Others
The Reliability, Maintainability, and Cost Model (RMCM) described in this report is an interactive mathematical model with a built-in sensitivity analysis capability. It is a major component of the Life Cycle Cost Impact Model (LCCIM), which was developed as part of the DAIS advanced development program to be used to assess the potential impacts…
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
Digital electronic engine control history
NASA Technical Reports Server (NTRS)
Putnam, T. W.
1984-01-01
Full authority digital electronic engine controls (DEECs) were studied, developed, and ground tested because of projected benefits in operability, improved performance, reduced maintenance, improved reliability, and lower life cycle costs. The issues of operability and improved performance, however, are assessed in a flight test program. The DEEC on a F100 engine in an F-15 aircraft was demonstrated and evaluated. The events leading to the flight test program are chronicled and important management and technical results are identified.
Advanced composites characterization with x-ray technologies
NASA Astrophysics Data System (ADS)
Baaklini, George Y.
1993-12-01
Recognizing the critical need to advance new composites for the aeronautics and aerospace industries, we are focussing on advanced test methods that are vital to successful modeling and manufacturing of future generations of high temperature and durable composite materials. These newly developed composites are necessary to reduce propulsion cost and weight, to improve performance and reliability, and to address longer-term national strategic thrusts for sustaining global preeminence in high speed air transport and in high performance military aircraft.
Perspective: The future of quantum dot photonic integrated circuits
NASA Astrophysics Data System (ADS)
Norman, Justin C.; Jung, Daehwan; Wan, Yating; Bowers, John E.
2018-03-01
Direct epitaxial integration of III-V materials on Si offers substantial manufacturing cost and scalability advantages over heterogeneous integration. The challenge is that epitaxial growth introduces high densities of crystalline defects that limit device performance and lifetime. Quantum dot lasers, amplifiers, modulators, and photodetectors epitaxially grown on Si are showing promise for achieving low-cost, scalable integration with silicon photonics. The unique electrical confinement properties of quantum dots provide reduced sensitivity to the crystalline defects that result from III-V/Si growth, while their unique gain dynamics show promise for improved performance and new functionalities relative to their quantum well counterparts in many devices. Clear advantages for using quantum dot active layers for lasers and amplifiers on and off Si have already been demonstrated, and results for quantum dot based photodetectors and modulators look promising. Laser performance on Si is improving rapidly with continuous-wave threshold currents below 1 mA, injection efficiencies of 87%, and output powers of 175 mW at 20 °C. 1500-h reliability tests at 35 °C showed an extrapolated mean-time-to-failure of more than ten million hours. This represents a significant stride toward efficient, scalable, and reliable III-V lasers on on-axis Si substrates for photonic integrate circuits that are fully compatible with complementary metal-oxide-semiconductor (CMOS) foundries.
Automotive applications of chromogenic materials
NASA Astrophysics Data System (ADS)
Lynam, Niall R.
1990-03-01
Automobiles present both opportunities and challenges for large-area chromogenics. Opportunities include optical and thermal control of vehicle glazing along with optical control of rearview mirrors and privacy glass. Challenges include cost-effectively meeting automotive safety, performance, and reliability standards. Worldwide automobile production' for 1987 is listed in Table 1. Of the roughly 33 million cars produced annually, approximately 8% are luxury models which are candidates for features such as auto- matically dimming rearview mirrors or variable opacity sunroofs. Thus copious commercial opportunities await whatever chromogenic technologies qualify for use in automobiles. This review will describe the performance, safety, and reliability/durability required for automotive use. Commercial opportunities and challenges will be discussed including cost factors and specifications. Chromogenic technologies such as electrochromism, liquid crystals and thermochromism will be reviewed in terms of how publicly announced technical developments match automotive needs and expectations. Construction and performance of ex- isting or imminent chromogenic devices will be described. Finally, how opportunities and challenges of the automotive environment translate to other applications for chromogenic materials such as architectural or information display devices will be discussed. The objective is to generally review the applications, the technologies appropriate to these applications, and the automotive chromogenic devices available at the time of writing to match these applications.
Topology design and performance analysis of an integrated communication network
NASA Technical Reports Server (NTRS)
Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.
1985-01-01
A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It is begun with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis: network reliability and message delays are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm coded in PASCAL is included as an appendix.
Economics of small fully reusable launch systems (SSTO vs. TSTO)
NASA Astrophysics Data System (ADS)
Koelle, Dietrich E.
1997-01-01
The paper presents a design and cost comparison of an SSTO vehicle concept with two TSTO vehicle options. It is shown that the ballistic SSTO concept feasibility is NOT a subject of technology but of proper vehicle SIZING. This also allows to design for sufficient performance margin. The cost analysis has been performed with the TRANSCOST- Model, also using the "Standardized Cost per Flight" definition for the CpF comparison. The results show that a present-technology SSTO for LEO missions is about 30 % less expensive than any TSTO vehicle, based on Life-Cycle-Cost analysis, in addition to the inherent operational/ reliability advantages of a single-stage vehicle. In case of a commercial development and operation it is estimated that an SSTO vehicle with 400 Mg propellant mass can be flown for some 9 Million per mission (94/95) with 14 Mg payload to LEO, 7 Mg to the Space Station Orbit, or 2 Mg to a 200/800 km polar orbit. This means specific transportation cost of 650 /kg (300 $/lb), resp.3.2 MYr/Mg, to LEO which is 6 -10% of present expendable launch vehicles.
Jimenez, Krystal; Vargas, Cristina; Garcia, Karla; Guzman, Herlinda; Angulo, Marco; Billimek, John
2017-02-01
Purpose The purpose of this study was to examine the reliability and validity of a Spanish version of the Beliefs about Medicines Questionnaire (BMQ) as a measure to evaluate beliefs about medications and to differentiate adherent from nonadherent patients among low-income Latino patients with diabetes in the United States. Methods Seventy-three patients were administered the BMQ and surveyed for evidence of medication nonadherence. Internal consistency of the BMQ was assessed by Cronbach's alpha along with performing a confirmatory factor analysis. Criterion validity was assessed by comparing mean scores on 3 subscales of the BMQ (General Overuse, General Harm, and Specific Necessity-Concerns difference score) between adherent patients and patients reporting nonadherence for 3 different reasons (unintentional nonadherence, cost-related nonadherence, and nonadherence due to reasons other than cost) using independent samples t tests. Results The BMQ is a reliable instrument to examine beliefs about medications in this Spanish-speaking population. Construct validity testing shows nearly identical factor loading as the original construct map. General Overuse scores were significantly more negative for patients reporting each reason for nonadherence compared with their adherent counterparts. Necessity-Concerns difference scores were significantly more negative for patients reporting nonadherence for reasons other than cost compared with those who did not report this reason for nonadherence. Conclusion The Spanish version of the BMQ is appropriate to assess beliefs about medications in Latino patients with type 2 diabetes in the United States and may help identify patients who become nonadherent to medications for reasons other than out-of-pocket costs.
Static test induced loads verification beyond elastic limit
NASA Technical Reports Server (NTRS)
Verderaime, V.; Harrington, F.
1996-01-01
Increasing demands for reliable and least-cost high-performance aerostructures are pressing design analyses, materials, and manufacturing processes to new and narrowly experienced performance and verification technologies. This study assessed the adequacy of current experimental verification of the traditional binding ultimate safety factor which covers rare events in which no statistical design data exist. Because large high-performance structures are inherently very flexible, boundary rotations and deflections under externally applied loads approaching fracture may distort their transmission and unknowingly accept submarginal structures or prematurely fracturing reliable ones. A technique was developed, using measured strains from back-to-back surface mounted gauges, to analyze, define, and monitor induced moments and plane forces through progressive material changes from total-elastic to total-inelastic zones within the structural element cross section. Deviations from specified test loads are identified by the consecutively changing ratios of moment-to-axial load.
Static test induced loads verification beyond elastic limit
NASA Technical Reports Server (NTRS)
Verderaime, V.; Harrington, F.
1996-01-01
Increasing demands for reliable and least-cost high performance aerostructures are pressing design analyses, materials, and manufacturing processes to new and narrowly experienced performance and verification technologies. This study assessed the adequacy of current experimental verification of the traditional binding ultimate safety factor which covers rare events in which no statistical design data exist. Because large, high-performance structures are inherently very flexible, boundary rotations and deflections under externally applied loads approaching fracture may distort their transmission and unknowingly accept submarginal structures or prematurely fracturing reliable ones. A technique was developed, using measured strains from back-to-back surface mounted gauges, to analyze, define, and monitor induced moments and plane forces through progressive material changes from total-elastic to total inelastic zones within the structural element cross section. Deviations from specified test loads are identified by the consecutively changing ratios of moment-to-axial load.
King, Robert; Parker, Simon; Mouzakis, Kon; Fletcher, Winston; Fitzgerald, Patrick
2007-11-01
The Integrated Task Modeling Environment (ITME) is a user-friendly software tool that has been developed to automatically recode low-level data into an empirical record of meaningful task performance. The present research investigated and validated the performance of the ITME software package by conducting complex simulation missions and comparing the task analyses produced by ITME with taskanalyses produced by experienced video analysts. A very high interrater reliability (> or = .94) existed between experienced video analysts and the ITME for the task analyses produced for each mission. The mean session time:analysis time ratio was 1:24 using video analysis techniques and 1:5 using the ITME. It was concluded that the ITME produced task analyses that were as reliable as those produced by experienced video analysts, and significantly reduced the time cost associated with these analyses.
Projecting technology change to improve space technology planning and systems management
NASA Astrophysics Data System (ADS)
Walk, Steven Robert
2011-04-01
Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
Affordable Launch Services using the Sport Orbit Transfer System
NASA Astrophysics Data System (ADS)
Goldstein, D. J.
2002-01-01
Despite many advances in small satellite technology, a low-cost, reliable method is needed to place spacecraft in their de- sired orbits. AeroAstro has developed the Small Payload ORbit Transfer (SPORTTM) system to provide a flexible low-cost orbit transfer capability, enabling small payloads to use low-cost secondary launch opportunities and still reach their desired final orbits. This capability allows small payloads to effectively use a wider variety of launch opportunities, including nu- merous under-utilized GTO slots. Its use, in conjunction with growing opportunities for secondary launches, enable in- creased access to space using proven technologies and highly reliable launch vehicles such as the Ariane family and the Starsem launcher. SPORT uses a suite of innovative technologies that are packaged in a simple, reliable, modular system. The command, control and data handling of SPORT is provided by the AeroAstro BitsyTM core electronics module. The Bitsy module also provides power regulation for the batteries and optional solar arrays. The primary orbital maneuvering capability is provided by a nitrous oxide monopropellant propulsion system. This system exploits the unique features of nitrous oxide, which in- clude self-pressurization, good performance, and safe handling, to provide a light-weight, low-cost and reliable propulsion capability. When transferring from a higher energy orbit to a lower energy orbit (i.e. GTO to LEO), SPORT uses aerobraking technol- ogy. After using the propulsion system to lower the orbit perigee, the aerobrake gradually slows SPORT via atmospheric drag. After the orbit apogee is reduced to the target level, an apogee burn raises the perigee and ends the aerobraking. At the conclusion of the orbit transfer maneuver, either the aerobrake or SPORT can be shed, as desired by the payload. SPORT uses a simple design for high reliability and a modular architecture for maximum mission flexibility. This paper will discuss the launch system and its application to small satellite launch without increasing risk. It will also discuss relevant issues such as aerobraking operations and radiation issues, as well as existing partnerships and patents for the system.
The relationship between cost estimates reliability and BIM adoption: SEM analysis
NASA Astrophysics Data System (ADS)
Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.
2018-02-01
This paper presents the usage of Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and full structural model. The SEM models were then validated through the assessment on their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationship exists amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive to accelerate their cost estimating task through BIM adoption.
Do aggressive signals evolve towards higher reliability or lower costs of assessment?
Ręk, P
2014-12-01
It has been suggested that the evolution of signals must be a wasteful process for the signaller, aimed at the maximization of signal honesty. However, the reliability of communication depends not only on the costs paid by signallers but also on the costs paid by receivers during assessment, and less attention has been given to the interaction between these two types of costs during the evolution of signalling systems. A signaller and receiver may accept some level of signal dishonesty by choosing signals that are cheaper in terms of assessment but that are stabilized with less reliable mechanisms. I studied the potential trade-off between signal reliability and the costs of signal assessment in the corncrake (Crex crex). I found that the birds prefer signals that are less costly regarding assessment rather than more reliable. Despite the fact that the fundamental frequency of calls was a strong predictor of male size, it was ignored by receivers unless they could directly compare signal variants. My data revealed a response advantage of costly signals when comparison between calls differing with fundamental frequencies is fast and straightforward, whereas cheap signalling is preferred in natural conditions. These data might improve our understanding of the influence of receivers on signal design because they support the hypothesis that fully honest signalling systems may be prone to dishonesty based on the effects of receiver costs and be replaced by signals that are cheaper in production and reception but more susceptible to cheating. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Anti-aliasing filter design on spaceborne digital receiver
NASA Astrophysics Data System (ADS)
Yu, Danru; Zhao, Chonghui
2009-12-01
In recent years, with the development of satellite observation technologies, more and more active remote sensing technologies are adopted in spaceborne system. The spaceborne precipitation radar will depend heavily on high performance digital processing to collect meaningful rain echo data. It will increase the complexity of the spaceborne system and need high-performance and reliable digital receiver. This paper analyzes the frequency aliasing in the intermediate frequency signal sampling of digital down conversion in spaceborne radar, and gives an effective digital filter. By analysis and calculation, we choose reasonable parameters of the half-band filters to suppress the frequency aliasing on DDC. Compared with traditional filter, the FPGA resources cost in our system are reduced by over 50%. This can effectively reduce the complexity in the spaceborne digital receiver and improve the reliability of system.
Ise, Yuya; Wako, Tetsuya; Miura, Yoshihiko; Katayama, Shirou; Shimizu, Hisanori
2009-12-01
The present study was undertaken to determine the pharmacoeconomics of switching from sustained-release morphine tablet to matrix type (MT) of transdermal fontanel or sustained-release Oxycodone tablet. Cost-effective analysis was performed using a simulation model along with decision analysis. The analysis was done from the payer's perspective. The cost-effective ratio/patient of transdermal MT fontanel (22, 539 yen)was lower than that of sustained -release Oxycodone tablet (23, 630 yen), although a sensitivity analysis could not indicate that this result was reliable. These results suggest the possibility that transdermal MT fontanel was much less expensive than a sustained-release Oxycodone tablet.
Evaluating alternative service contracts for medical equipment.
De Vivo, L; Derrico, P; Tomaiuolo, D; Capussotto, C; Reali, A
2004-01-01
Managing medical equipments is a formidable task that has to be pursued maximizing the benefits within a highly regulated and cost-constrained environment. Clinical engineers are uniquely equipped to determine which policies are the most efficacious and cost effective for a health care institution to ensure that medical devices meet appropriate standards of safety, quality and performance. Part of this support is a strategy for preventive and corrective maintenance. This paper describes an alternative scheme of OEM (Original Equipment Manufacturer) service contract for medical equipment that combines manufacturers' technical support and in-house maintenance. An efficient and efficacious organization can reduce the high cost of medical equipment maintenance while raising reliability and quality. Methodology and results are discussed.
Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente
2018-02-06
Nowadays, the current vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through the variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. For this goal, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems, which allow acquiring data from sensors and executing the developed algorithms to estimate and to control with novel higher speed computing. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been done. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison has been made from tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices.
Díaz, Vicente
2018-01-01
Nowadays, the current vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through the variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. For this goal, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems, which allow acquiring data from sensors and executing the developed algorithms to estimate and to control with novel higher speed computing. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been done. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison has been made from tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices. PMID:29415507
Photovoltaic-system evaluation at the Northeast Residential Experiment Station
NASA Astrophysics Data System (ADS)
Russell, M. C.
1983-01-01
Five residential photovoltaic systems were tested and the systems' performance and cost was evaluated. The five systems each consist of an unoccupied structure employing a roof mounted photovoltaic array and a utility connected power inverter capable of sending excess PV generated energy to the local utility system. The photovoltaic systems are designed to meet at least 50% of the total annual electrical demand of residences in the cold climate regions of the country. The following specific issues were investigated: photovoltaic array and inverter system power rating and performance characterization, system energy production, reliability and system cost/worth. Summary load data from five houses in the vicinity of the Northeast Residential Experiment Station, and meteorological data from the station's weather station are also presented.
Loss of Load Probability Calculation for West Java Power System with Nuclear Power Plant Scenario
NASA Astrophysics Data System (ADS)
Azizah, I. D.; Abdullah, A. G.; Purnama, W.; Nandiyanto, A. B. D.; Shafii, M. A.
2017-03-01
Loss of Load Probability (LOLP) index showing the quality and performance of an electrical system. LOLP value is affected by load growth, the load duration curve, forced outage rate of the plant, number and capacity of generating units. This reliability index calculation begins with load forecasting to 2018 using multiple regression method. Scenario 1 with compositions of conventional plants produce the largest LOLP in 2017 amounted to 71.609 days / year. While the best reliability index generated in scenario 2 with the NPP amounted to 6.941 days / year in 2015. Improved reliability of systems using nuclear power more efficiently when compared to conventional plants because it also has advantages such as emission-free, inexpensive fuel costs, as well as high level of plant availability.
Optical Measurements for Intelligent Aerospace Propulsion
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.
2003-01-01
There is growing interest in applying intelligent technologies to aerospace propulsion systems to reap expected benefits in cost, performance, and environmental compliance. Cost benefits span the engine life cycle from development, operations, and maintenance. Performance gains are anticipated in reduced fuel consumption, increased thrust-toweight ratios, and operability. Environmental benefits include generating fewer pollutants and less noise. Critical enabling technologies to realize these potential benefits include sensors, actuators, logic, electronics, materials, and structures. For propulsion applications, the challenge is to increase the robustness of these technologies so that they can withstand harsh temperatures, vibrations, and grime while providing extremely reliable performance. This paper addresses the role that optical metrology is playing in providing solutions to these challenges. Optics for ground-based testing (development cycle), flight sensing (operations), and inspection (maintenance) are described. Opportunities for future work are presented.
Magnetic resonance angiography for the nonpalpable testis: a cost and cancer risk analysis.
Eggener, S E; Lotan, Y; Cheng, E Y
2005-05-01
For the unilateral nonpalpable testis standard management is open surgical or laparoscopic exploration. An ideal imaging technique would reliably identify testicular nubbins and safely allow children to forgo surgical exploration without compromising future health or fertility. Our goal was to perform a cost and risk analysis of magnetic resonance angiography (MRA) for unilateral nonpalpable cryptorchid testes. A search of the English medical literature revealed 3 studies addressing the usefulness of MRA for the nonpalpable testicle. We performed a meta-analysis and applied the results to a hypothetical set of patients using historical testicular localization data. Analysis was then performed using 3 different management protocols-MRA with removal of testicular nubbin tissue, MRA with observation of testicular nubbin tissue and diagnostic laparoscopy. A cancer risk and cost analysis was then performed. MRA with observation of testicular nubbin tissue results in 29% of patients avoiding surgery without any increased cost of care. Among the 29% of boys with testicular nubbins left in situ and observed the highest estimated risk was 1 in 300 of cancer developing, and 1 in 5,300 of dying of cancer. A protocol using MRA with observation of inguinal nubbins results in nearly a third of boys avoiding surgical intervention at a similar cost to standard care without any significant increased risk of development of testis cancer.
Probabilistic Risk Assessment (PRA): A Practical and Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia L.; Ingegneri, Antonino J.; Djam, Melody
2006-01-01
The Lunar Reconnaissance Orbiter (LRO) is the first mission of the Robotic Lunar Exploration Program (RLEP), a space exploration venture to the Moon, Mars and beyond. The LRO mission includes spacecraft developed by NASA Goddard Space Flight Center (GSFC) and seven instruments built by GSFC, Russia, and contractors across the nation. LRO is defined as a measurement mission, not a science mission. It emphasizes the overall objectives of obtaining data to facilitate returning mankind safely to the Moon in preparation for an eventual manned mission to Mars. As the first mission in response to the President's commitment of the journey of exploring the solar system and beyond: returning to the Moon in the next decade, then venturing further into the solar system, ultimately sending humans to Mars and beyond, LRO has high-visibility to the public but limited resources and a tight schedule. This paper demonstrates how NASA's Lunar Reconnaissance Orbiter Mission project office incorporated reliability analyses in assessing risks and performing design tradeoffs to ensure mission success. Risk assessment is performed using NASA Procedural Requirements (NPR) 8705.5 - Probabilistic Risk Assessment (PRA) Procedures for NASA Programs and Projects to formulate probabilistic risk assessment (PRA). As required, a limited scope PRA is being performed for the LRO project. The PRA is used to optimize the mission design within mandated budget, manpower, and schedule constraints. The technique that LRO project office uses to perform PRA relies on the application of a component failure database to quantify the potential mission success risks. To ensure mission success in an efficient manner, low cost and tight schedule, the traditional reliability analyses, such as reliability predictions, Failure Modes and Effects Analysis (FMEA), and Fault Tree Analysis (FTA), are used to perform PRA for the large system of LRO with more than 14,000 piece parts and over 120 purchased or contractor built components.
NASA Astrophysics Data System (ADS)
Mittchell, Richard L.; Symko-Davies, Martha; Thomas, Holly P.; Witt, C. Edwin
1999-03-01
The Photovoltaic Manufacturing Technology (PVMaT) Project is a government/industry research and development (R&D) partnership between the U.S. federal government (through the U.S. Department of Energy [DOE]) and members of the U.S. PV industry. The goals of PVMaT are to assist the U.S. PV industry improve module manufacturing processes and equipment; accelerate manufacturing cost reductions for PV modules, balance-of-systems components, and integrated systems; increase commercial product performance and reliability; and enhance investment opportunities for substantial scale-ups of U.S.-based PV manufacturing plant capacities. The approach for PVMaT has been to cost-share the R&D risk as industry explores new manufacturing options and ideas for improved PV modules and components, advances system and product integration, and develops new system designs. These activities will lead to overall reduced system life-cycle costs for reliable PV end-products. The 1994 PVMaT Product-Driven BOS and Systems activities, as well as Product-Driven Module Manufacturing R&D activities, are just being completed. Fourteen new subcontracts have just been awarded in the areas of PV System and Component Technology and Module Manufacturing Technology. Government funding, subcontractor cost-sharing, and a comparison of the relative efforts by PV technology throughout the PVMaT project are also discussed.
Instructions for Plastic Encapsulated Microcircuit(PEM) Selection, Screening and Qualification.
NASA Technical Reports Server (NTRS)
King, Terry; Teverovsky, Alexander; Leidecker, Henning
2002-01-01
The use of Plastic Encapsulated Microcircuits (PEMs) is permitted on NASA Goddard Space Flight Center (GSFC) spaceflight applications, provided each use is thoroughly evaluated for thermal, mechanical, and radiation implications of the specific application and found to meet mission requirements. PEMs shall be selected for their functional advantage and availability, not for cost saving; the steps necessary to ensure reliability usually negate any initial apparent cost advantage. A PEM shall not be substituted for a form, fit and functional equivalent, high reliability, hermetic device in spaceflight applications. Due to the rapid change in wafer-level designs typical of commercial parts and the unknown traceability between packaging lots and wafer lots, lot specific testing is required for PEMs, unless specifically excepted by the Mission Assurance Requirements (MAR) for the project. Lot specific qualification, screening, radiation hardness assurance analysis and/or testing, shall be consistent with the required reliability level as defined in the MAR. Developers proposing to use PEMs shall address the following items in their Performance Assurance Implementation Plan: source selection (manufacturers and distributors), storage conditions for all stages of use, packing, shipping and handling, electrostatic discharge (ESD), screening and qualification testing, derating, radiation hardness assurance, test house selection and control, data collection and retention.
Systems definition summary. Earth Observatory Satellite system definition study (EOS)
NASA Technical Reports Server (NTRS)
1974-01-01
A standard spacecraft bus for performing a variety of earth orbit missions in the late 1970's and 1980's is defined. Emphasis is placed on a low-cost, multimission capability, benefitting from the space shuttle system. The subjects considered are as follows: (1) performance requirements, (2) internal interfaces, (3) redundancy and reliability, (4) communications and data handling module design, (5) payload data handling, (6) application of the modular design to various missions, and (7) the verification concept.
Results from conceptual design study of potential early commercial MHD/steam power plants
NASA Technical Reports Server (NTRS)
Hals, F.; Kessler, R.; Swallom, D.; Westra, L.; Zar, J.; Morgan, W.; Bozzuto, C.
1981-01-01
This paper presents conceptual design information for a potential early MHD power plant developed in the second phase of a joint study of such plants. Conceptual designs of plant components and equipment with performance, operational characteristics and costs are reported on. Plant economics and overall performance including full and part load operation are reviewed. Environmental aspects and the methods incorporated in plant design for emission control of sulfur and nitrogen oxides are reviewed. Results from reliability/availability analysis conducted are also included.
Encapsulation materials research
NASA Technical Reports Server (NTRS)
Willis, P. B.
1984-01-01
Encapsulation materials for solar cells were investigated. The different phases consisted of: (1) identification and development of low cost module encapsulation materials; (2) materials reliability examination; and (3) process sensitivity and process development. It is found that outdoor photothermal aging devices (OPT) are the best accelerated aging methods, simulate worst case field conditions, evaluate formulation and module performance and have a possibility for life assessment. Outdoor metallic copper exposure should be avoided, self priming formulations have good storage stability, stabilizers enhance performance, and soil resistance treatment is still effective.
Monolithic microwave integrated circuits: Interconnections and packaging considerations
NASA Astrophysics Data System (ADS)
Bhasin, K. B.; Downey, A. N.; Ponchak, G. E.; Romanofsky, R. R.; Anzic, G.; Connolly, D. J.
Monolithic microwave integrated circuits (MMIC's) above 18 GHz were developed because of important potential system benefits in cost reliability, reproducibility, and control of circuit parameters. The importance of interconnection and packaging techniques that do not compromise these MMIC virtues is emphasized. Currently available microwave transmission media are evaluated to determine their suitability for MMIC interconnections. An antipodal finline type of microstrip waveguide transition's performance is presented. Packaging requirements for MMIC's are discussed for thermal, mechanical, and electrical parameters for optimum desired performance.
Monolithic microwave integrated circuits: Interconnections and packaging considerations
NASA Technical Reports Server (NTRS)
Bhasin, K. B.; Downey, A. N.; Ponchak, G. E.; Romanofsky, R. R.; Anzic, G.; Connolly, D. J.
1984-01-01
Monolithic microwave integrated circuits (MMIC's) above 18 GHz were developed because of important potential system benefits in cost reliability, reproducibility, and control of circuit parameters. The importance of interconnection and packaging techniques that do not compromise these MMIC virtues is emphasized. Currently available microwave transmission media are evaluated to determine their suitability for MMIC interconnections. An antipodal finline type of microstrip waveguide transition's performance is presented. Packaging requirements for MMIC's are discussed for thermal, mechanical, and electrical parameters for optimum desired performance.
State-of-the-Art for Small Satellite Propulsion Systems
NASA Technical Reports Server (NTRS)
Parker, Khary I.
2016-01-01
SmallSats are a low cost access to space with an increasing need for propulsion systems. NASA, and other organizations, will be using SmallSats that require propulsion systems to: a) Conduct high quality near and far reaching on-orbit research and b) Perform technology demonstrations. Increasing call for high reliability and high performing for SmallSat components. Many SmallSat propulsion technologies are currently under development: a) Systems at various levels of maturity and b) Wide variety of systems for many mission applications.
ERIC Educational Resources Information Center
Campbell, David; Picard-Aitken, Michelle; Cote, Gregoire; Caruso, Julie; Valentim, Rodolfo; Edmonds, Stuart; Williams, Gregory Thomas; Macaluso, Benoit; Robitaille, Jean-Pierre; Bastien, Nicolas; Laframboise, Marie-Claude; Lebeau, Louis-Michel; Mirabel, Philippe; Lariviere, Vincent; Archambault, Eric
2010-01-01
As bibliometric indicators are objective, reliable, and cost-effective measures of peer-reviewed research outputs, they are expected to play an increasingly important role in research assessment/management. Recently, a bibliometric approach was developed and integrated within the evaluation framework of research funded by the National Cancer…
Microprocessor control of a wind turbine generator
NASA Technical Reports Server (NTRS)
Gnecco, A. J.; Whitehead, G. T.
1978-01-01
A microprocessor based system was used to control the unattended operation of a wind turbine generator. The turbine and its microcomputer system are fully described with special emphasis on the wide variety of tasks performed by the microprocessor for the safe and efficient operation of the turbine. The flexibility, cost and reliability of the microprocessor were major factors in its selection.
Risk Leading Indicators for DOD Acquisition Programs
2014-08-12
Adverse consequences include development time and cost overrun, technical performance and reliability shortfall, and excessive production , operation...prior to contract award, but is outside the scope of this paper. Risk exposure early warning complements the risk identification practices and...likelihood and magnitude are priorities for tracking and mitigation. The practices and procedures in the guide start with identifying risk events. Risk
Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei
2017-08-15
Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
Software IV and V Research Priorities and Applied Program Accomplishments Within NASA
NASA Technical Reports Server (NTRS)
Blazy, Louis J.
2000-01-01
The mission of this research is to be world-class creators and facilitators of innovative, intelligent, high performance, reliable information technologies that enable NASA missions to (1) increase software safety and quality through error avoidance, early detection and resolution of errors, by utilizing and applying empirically based software engineering best practices; (2) ensure customer software risks are identified and/or that requirements are met and/or exceeded; (3) research, develop, apply, verify, and publish software technologies for competitive advantage and the advancement of science; and (4) facilitate the transfer of science and engineering data, methods, and practices to NASA, educational institutions, state agencies, and commercial organizations. The goals are to become a national Center Of Excellence (COE) in software and system independent verification and validation, and to become an international leading force in the field of software engineering for improving the safety, quality, reliability, and cost performance of software systems. This project addresses the following problems: Ensure safety of NASA missions, ensure requirements are met, minimize programmatic and technological risks of software development and operations, improve software quality, reduce costs and time to delivery, and improve the science of software engineering
Implications of Transitioning from De Facto to Engineered Water Reuse for Power Plant Cooling.
Barker, Zachary A; Stillwell, Ashlynn S
2016-05-17
Thermoelectric power plants demand large quantities of cooling water, and can use alternative sources like treated wastewater (reclaimed water); however, such alternatives generate many uncertainties. De facto water reuse, or the incidental presence of wastewater effluent in a water source, is common at power plants, representing baseline conditions. In many cases, power plants would retrofit open-loop systems to cooling towers to use reclaimed water. To evaluate the feasibility of reclaimed water use, we compared hydrologic and economic conditions at power plants under three scenarios: quantified de facto reuse, de facto reuse with cooling tower retrofits, and modeled engineered reuse conditions. We created a genetic algorithm to estimate costs and model optimal conditions. To assess power plant performance, we evaluated reliability metrics for thermal variances and generation capacity loss as a function of water temperature. Applying our analysis to the greater Chicago area, we observed high de facto reuse for some power plants and substantial costs for retrofitting to use reclaimed water. Conversely, the gains in reliability and performance through engineered reuse with cooling towers outweighed the energy investment in reclaimed water pumping. Our analysis yields quantitative results of reclaimed water feasibility and can inform sustainable management of water and energy.
Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.
Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul
2017-02-01
Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of typical applications supported by IoT, denoted as a re-engineering and a modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.
Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid
Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul
2017-01-01
Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, as one of typical applications supported by IoT, denoted as a re-engineering and a modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data shows that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid1. PMID:29354654
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
High day-to-day reliability in lower leg volume measured by water displacement.
Pasley, Jeffrey D; O'Connor, Patrick J
2008-07-01
The day-to-day reliability of lower leg volume is poorly documented. This investigation determined the day-to-day reliability of lower leg volume (soleus and gastrocnemius) measured using water displacement. Thirty young adults (15 men and 15 women) had their right lower leg volume measured by water displacement on five separate occasions. The participants performed normal activities of daily living and were measured at the same time of day after being seated for 30 min. The results revealed a high day-to-day reliability for lower leg volume. The mean percentage change in lower leg volume across days compared to day 1 ranged between 0 and 0.37%. The mean within subjects coefficient of variation in lower leg volume was 0.72% and the coefficient of variation for the entire sample across days ranged from 5.66 to 6.32%. A two way mixed model intraclass correlation (30 subjects x 5 days) showed that the lower leg volume measurement was highly reliable (ICC = 0.972). Foot and total lower leg volumes showed similarly high reliability. Water displacement offers a cost effective and reliable solution for the measurement of lower leg edema across days.
Fuzzy probabilistic design of water distribution networks
NASA Astrophysics Data System (ADS)
Fu, Guangtao; Kapelan, Zoran
2011-05-01
The primary aim of this paper is to present a fuzzy probabilistic approach for optimal design and rehabilitation of water distribution systems, combining aleatoric and epistemic uncertainties in a unified framework. The randomness and imprecision in future water consumption are characterized using fuzzy random variables whose realizations are not real but fuzzy numbers, and the nodal head requirements are represented by fuzzy sets, reflecting the imprecision in customers' requirements. The optimal design problem is formulated as a two-objective optimization problem, with minimization of total design cost and maximization of system performance as objectives. The system performance is measured by the fuzzy random reliability, defined as the probability that the fuzzy head requirements are satisfied across all network nodes. The satisfactory degree is represented by necessity measure or belief measure in the sense of the Dempster-Shafer theory of evidence. An efficient algorithm is proposed, within a Monte Carlo procedure, to calculate the fuzzy random system reliability and is effectively combined with the nondominated sorting genetic algorithm II (NSGAII) to derive the Pareto optimal design solutions. The newly proposed methodology is demonstrated with two case studies: the New York tunnels network and Hanoi network. The results from both cases indicate that the new methodology can effectively accommodate and handle various aleatoric and epistemic uncertainty sources arising from the design process and can provide optimal design solutions that are not only cost-effective but also have higher reliability to cope with severe future uncertainties.
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Arnold, James O. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have reached a consensus. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is then set accordingly. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, [SVM + TRL]/ESM, with appropriate weighting and scaling. The total value is the sum of SVM and TRL. Cost is represented by ESM. The paper provides a detailed description and example application of the suggested System Value Metric.
Progress in GaN devices performances and reliability
NASA Astrophysics Data System (ADS)
Saunier, P.; Lee, C.; Jimenez, J.; Balistreri, A.; Dumka, D.; Tserng, H. Q.; Kao, M. Y.; Chowdhury, U.; Chao, P. C.; Chu, K.; Souzis, A.; Eliashevich, I.; Guo, S.; del Alamo, J.; Joh, J.; Shur, M.
2008-02-01
With the DARPA Wide Bandgap Semiconductor Technology RF Thrust Contract, TriQuint Semiconductor and its partners, BAE Systems, Lockheed Martin, IQE-RF, II-VI, Nitronex, M.I.T., and R.P.I. are achieving great progress towards the overall goal of making Gallium Nitride a revolutionary RF technology ready to be inserted in defense and commercial applications. Performance and reliability are two critical components of success (along with cost and manufacturability). In this paper we will discuss these two aspects. Our emphasis is now operation at 40 V bias voltage (we had been working at 28 V). 1250 µm devices have power densities in the 6 to 9 W/mm with associated efficiencies in the low- to mid 60 % and associated gain in the 12 to 12.5 dB at 10 GHz. We are using a dual field-plate structure to optimize these performances. Very good performances have also been achieved at 18 GHz with 400 µm devices. Excellent progress has been made in reliability. Our preliminary DC and RF reliability tests at 40 V indicate a MTTF of 1E6hrs with1.3 eV activation energy at 150 0C channel temperature. Jesus Del Alamo at MIT has greatly refined our initial findings leading to a strain related theory of degradation that is driven by electric fields. Degradation can occur on the drain edge of the gate due to excessive strain given by inverse piezoelectric effect.
NASA Astrophysics Data System (ADS)
Bechou, L.; Deshayes, Y.; Aupetit-Berthelemot, C.; Guerin, A.; Tronche, C.
Space missions for Earth Observation are called upon to carry a growing number of instruments in their payload, whose performances are increasing. Future space systems are therefore intended to generate huge amounts of data and a key challenge in coming years will therefore lie in the ability to transmit that significant quantity of data to ground. Thus very high data rate Payload Telemetry (PLTM) systems will be required to face the demand of the future Earth Exploration Satellite Systems and reliability is one of the major concern of such systems. An attractive approach associated with the concept of predictive modeling consists in analyzing the impact of components malfunctioning on the optical link performances taking into account the network requirements and experimental degradation laws. Reliability estimation is traditionally based on life-testing and a basic approach is to use Telcordia requirements (468GR) for optical telecommunication applications. However, due to the various interactions between components, operating lifetime of a system cannot be taken as the lifetime of the less reliable component. In this paper, an original methodology is proposed to estimate reliability of an optical communication system by using a dedicated system simulator for predictive modeling and design for reliability. At first, we present frameworks of point-to-point optical communication systems for space applications where high data rate (or frequency bandwidth), lower cost or mass saving are needed. Optoelectronics devices used in these systems can be similar to those found in terrestrial optical network. Particularly we report simulation results of transmission performances after introduction of DFB Laser diode parameters variations versus time extrapolated from accelerated tests based on terrestrial or submarine telecommunications qualification standards. Simulations are performed to investigate and predict the consequence of degradations of the Laser diode (acting as a - requency carrier) on system performances (eye diagram, quality factor and BER). The studied link consists in 4× 2.5 Gbits/s WDM channels with direct modulation and equally spaced (0,8 nm) around the 1550 nm central wavelength. Results clearly show that variation of fundamental parameters such as bias current or central wavelength induces a penalization of dynamic performances of the complete WDM link. In addition different degradation kinetics of aged Laser diodes from a same batch have been implemented to build the final distribution of Q-factor and BER values after 25 years. When considering long optical distance, fiber attenuation, EDFA noise, dispersion, PMD, ... penalize network performances that can be compensated using Forward Error Correction (FEC) coding. Three methods have been investigated in the case of On-Off Keying (OOK) transmission over an unipolar optical channel corrupted by Gaussian noise. Such system simulations highlight the impact of component parameter degradations on the whole network performances allowing to optimize various time and cost consuming sensitivity analyses at the early stage of the system development. Thus the validity of failure criteria in relation with mission profiles can be evaluated representing a significant part of the general PDfR effort in particular for aerospace applications.
Study of turboprop systems reliability and maintenance costs
NASA Technical Reports Server (NTRS)
1978-01-01
The overall reliability and maintenance costs (R&MC's) of past and current turboprop systems were examined. Maintenance cost drivers were found to be scheduled overhaul (40%), lack of modularity particularly in the propeller and reduction gearbox, and lack of inherent durability (reliability) of some parts. Comparisons were made between the 501-D13/54H60 turboprop system and the widely used JT8D turbofan. It was found that the total maintenance cost per flight hour of the turboprop was 75% higher than that of the JT8D turbofan. Part of this difference was due to propeller and gearbox costs being higher than those of the fan and reverser, but most of the difference was in the engine core where the older technology turboprop core maintenance costs were nearly 70 percent higher than for the turbofan. The estimated maintenance cost of both the advanced turboprop and advanced turbofan were less than the JT8D. The conclusion was that an advanced turboprop and an advanced turbofan, using similar cores, will have very competitive maintenance costs per flight hour.
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R(2) = 0.98). This model also proved reliable in forecasting (R(2) = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
Advanced Cogeneration Technology Economic Optimization Study (ACTEOS)
NASA Technical Reports Server (NTRS)
Nanda, P.; Ansu, Y.; Manuel, E. H., Jr.; Price, W. G., Jr.
1980-01-01
The advanced cogeneration technology economic optimization study (ACTEOS) was undertaken to extend the results of the cogeneration technology alternatives study (CTAS). Cost comparisons were made between designs involving advanced cogeneration technologies and designs involving either conventional cogeneration technologies or not involving cogeneration. For the specific equipment cost and fuel price assumptions made, it was found that: (1) coal based cogeneration systems offered appreciable cost savings over the no cogeneration case, while systems using coal derived liquids offered no costs savings; and (2) the advanced cogeneration systems provided somewhat larger cost savings than the conventional systems. Among the issues considered in the study included: (1) temporal variations in steam and electric demands; (2) requirements for reliability/standby capacity; (3) availability of discrete equipment sizes; (4) regional variations in fuel and electricity prices; (5) off design system performance; and (6) separate demand and energy charges for purchased electricity.
Time-domain diffuse optical tomography using silicon photomultipliers: feasibility study.
Di Sieno, Laura; Zouaoui, Judy; Hervé, Lionel; Pifferi, Antonio; Farina, Andrea; Martinenghi, Edoardo; Derouard, Jacques; Dinten, Jean-Marc; Mora, Alberto Dalla
2016-11-01
Silicon photomultipliers (SiPMs) have been very recently introduced as the most promising detectors in the field of diffuse optics, in particular due to the inherent low cost and large active area. We also demonstrate the suitability of SiPMs for time-domain diffuse optical tomography (DOT). The study is based on both simulations and experimental measurements. Results clearly show excellent performances in terms of spatial localization of an absorbing perturbation, thus opening the way to the use of SiPMs for DOT, with the possibility to conceive a new generation of low-cost and reliable multichannel tomographic systems.
Design of the Space Station Freedom power system
NASA Technical Reports Server (NTRS)
Thomas, Ronald L.; Hallinan, George J.
1989-01-01
The design of Space Station Freedom's electric power system (EPS) is reviewed, highlighting the key design goals of performance, low cost, reliability and safety. Tradeoff study results that illustrate the competing factors responsible for many of the more important design decisions are discussed. When Freedom's EPS is compared with previous space power designs, two major differences stand out. The first is the size of the EPS, which is larger than any prior system. The second major difference between the EPS and other space power designs is the indefinite expected life of Freedom; 30 years has been used for life-cycle-cost calculations.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
Diverse Redundant Systems for Reliable Space Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
Reliable life support systems are required for deep space missions. The probability of a fatal life support failure should be less than one in a thousand in a multi-year mission. It is far too expensive to develop a single system with such high reliability. Using three redundant units would require only that each have a failure probability of one in ten over the mission. Since the system development cost is inverse to the failure probability, this would cut cost by a factor of one hundred. Using replaceable subsystems instead of full systems would further cut cost. Using full sets of replaceable components improves reliability more than using complete systems as spares, since a set of components could repair many different failures instead of just one. Replaceable components would require more tools, space, and planning than full systems or replaceable subsystems. However, identical system redundancy cannot be relied on in practice. Common cause failures can disable all the identical redundant systems. Typical levels of common cause failures will defeat redundancy greater than two. Diverse redundant systems are required for reliable space life support. Three, four, or five diverse redundant systems could be needed for sufficient reliability. One system with lower level repair could be substituted for two diverse systems to save cost.
NASA Astrophysics Data System (ADS)
Sembiring, N.; Panjaitan, N.; Angelita, S.
2018-02-01
PT. XYZ is a company owned by non-governmental organizations engaged in the field of production of rubber processing becoming crumb rubber. Part of the production is supported by some of machines and interacting equipment to achieve optimal productivity. Types of the machine that are used in the production process are Conveyor Breaker, Breaker, Rolling Pin, Hammer Mill, Mill Roll, Conveyor, Shredder Crumb, and Dryer. Maintenance system in PT. XYZ is corrective maintenance i.e. repairing or replacing the engine components after the crash on the machine. Replacement of engine components on corrective maintenance causes the machine to stop operating during the production process is in progress. The result is in the loss of production time due to the operator must replace the damaged engine components. The loss of production time can impact on the production targets which were not reached and lead to high loss costs. The cost for all components is Rp. 4.088.514.505. This cost is really high just for maintaining a Mill Roll Machine. Therefore PT. XYZ is needed to do preventive maintenance i.e. scheduling engine components and improving maintenance efficiency. The used methods are Reliability Engineering and Maintenance Value Stream Mapping (MVSM). The needed data in this research are the interval of time damage to engine components, opportunity cost, labor cost, component cost, corrective repair time, preventive repair time, Mean Time To Opportunity (MTTO), Mean Time To Repair (MTTR), and Mean Time To Yield (MTTY). In this research, the critical components of Mill Roll machine are Spier, Bushing, Bearing, Coupling and Roll. Determination of damage distribution, reliability, MTTF, cost of failure, cost of preventive, current state map, and future state map are done so that the replacement time for each critical component with the lowest maintenance cost and preparation of Standard Operation Procedure (SOP) are developed. For the critical component that has been determined, the Spier component replacement time interval is 228 days with a reliability value of 0,503171, Bushing component is 240 days with reliability value of 0.36861, Bearing component is 202 days with reliability value of 0,503058, Coupling component is 247 days with reliability value of 0,50108 and Roll component is 301 days with reliability value of 0,373525. The results show that the cost decreases from Rp 300,688,114 to Rp 244,384,371 obtained from corrective maintenance to preventive maintenance. While maintenance efficiency increases with the application of preventive maintenance i.e. for Spier component from 54,0540541% to 74,07407%, Bushing component from 52,3809524% to 68,75%, Bearing component from 40% to 52,63158%, Coupling component from 60.9756098% to 71.42857%, and Roll components from 64.516129% to 74.7663551%.
1984-10-01
appears to have cost $6.54 to produce 1,000,000 Btu’s of heat. This equation took into account the cost of repair and replacement parts, consumable...waste incineration rate, thermal efficiency, and steam cost . Actual results for incinerating waste to produce steam were: reliability 58% (75% of design...87% of goal); incineration rate 1.75 tons/hr (105% of goal); and cost of steam $6.05/MBtu. The HRI was expected to save $26,600/yr from landfill
An Investment Level Decision Method to Secure Long-term Reliability
NASA Astrophysics Data System (ADS)
Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji
The slowdown in power demand increase and facility replacement causes the aging and lower reliability in power facility. And the aging is followed by the rapid increase of repair and replacement when many facilities reach their lifetime in future. This paper describes a method to estimate the repair and replacement costs in future by applying the life-cycle cost model and renewal theory to the historical data. This paper also describes a method to decide the optimum investment plan, which replaces facilities in the order of cost-effectiveness by setting replacement priority formula, and the minimum investment level to keep the reliability. Estimation examples applied to substation facilities show that the reasonable and leveled future cash-out can keep the reliability by lowering the percentage of replacements caused by fatal failures.
Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.
Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E
2017-02-01
Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, were measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
Jimenez, Krystal; Vargas, Cristina; Garcia, Karla; Guzman, Herlinda; Angulo, Marco; Billimek, John
2018-01-01
Purpose The purpose of this study was to examine the reliability and validity of a Spanish version of the Beliefs About Medicines Questionnaire (BMQ) as a measure to evaluate beliefs about medications and to differentiate adherent from nonadherent patients among low-income Latino patients with diabetes in the United States. Methods Seventy-three patients were administered the BMQ and surveyed for evidence of medication nonadherence. Internal consistency of the BMQ was assessed by Cronbach’s alpha along with performing a confirmatory factor analysis. Criterion validity was assessed by comparing mean scores on three subscales of the BMQ (General Overuse, General Harm, and Specific Necessity-Concerns difference score) between adherent patients and patients reporting nonadherence for three different reasons (unintentional nonadherence, cost-related nonadherence, and nonadherence due to reasons other than cost) using independent samples t-tests. Results The BMQ is a reliable instrument to examine beliefs about medications in this Spanish-speaking population. Construct validity testing shows nearly identical factor loading as the original construct map. General Overuse scores were significantly more negative for patients reporting each reason for nonadherence compared to their adherent counterparts. Necessity-concerns difference scores were significantly more negative for patients reporting nonadherence for reasons other than cost compared to those who did not report this reason for nonadherence. Conclusions The Spanish version of the BMQ is appropriate to assess beliefs about medications in Latino patients with type 2 diabetes in the United States, and may help identify patients who become nonadherent to medications for reasons other than out of pocket costs. PMID:27831521
A Framework for Assessing the Reusability of Hardware (Reusable Rocket Engines)
NASA Technical Reports Server (NTRS)
Childress-Thompson, Rhonda; Farrington, Philip; Thomas, Dale
2016-01-01
Within the space flight community, reusability has taken center stage as the new buzzword. In order for reusable hardware to be competitive with its expendable counterpart, two major elements must be closely scrutinized. First, recovery and refurbishment costs must be lower than the development and acquisition costs. Additionally, the reliability for reused hardware must remain the same (or nearly the same) as "first use" hardware. Therefore, it is imperative that a systematic approach be established to enhance the development of reusable systems. However, before the decision can be made on whether it is more beneficial to reuse hardware or to replace it, the parameters that are needed to deem hardware worthy of reuse must be identified. For reusable hardware to be successful, the factors that must be considered are reliability (integrity, life, number of uses), operability (maintenance, accessibility), and cost (procurement, retrieval, refurbishment). These three factors are essential to the successful implementation of reusability while enabling the ability to meet performance goals. Past and present strategies and attempts at reuse within the space industry will be examined to identify important attributes of reusability that can be used to evaluate hardware when contemplating reusable versus expendable options. This paper will examine why reuse must be stated as an initial requirement rather than included as an afterthought in the final design. Late in the process, changes in the overall objective/purpose of components typically have adverse effects that potentially negate the benefits. A methodology for assessing the viability of reusing hardware will be presented by using the Space Shuttle Main Engine (SSME) to validate the approach. Because reliability, operability, and costs are key drivers in making this critical decision, they will be used to assess requirements for reuse as applied to components of the SSME.
A Design Heritage-Based Forecasting Methodology for Risk Informed Management of Advanced Systems
NASA Technical Reports Server (NTRS)
Maggio, Gaspare; Fragola, Joseph R.
1999-01-01
The development of next generation systems often carries with it the promise of improved performance, greater reliability, and reduced operational costs. These expectations arise from the use of novel designs, new materials, advanced integration and production technologies intended for functionality replacing the previous generation. However, the novelty of these nascent technologies is accompanied by lack of operational experience and, in many cases, no actual testing as well. Therefore some of the enthusiasm surrounding most new technologies may be due to inflated aspirations from lack of knowledge rather than actual future expectations. This paper proposes a design heritage approach for improved reliability forecasting of advanced system components. The basis of the design heritage approach is to relate advanced system components to similar designs currently in operation. The demonstrated performance of these components could then be used to forecast the expected performance and reliability of comparable advanced technology components. In this approach the greater the divergence of the advanced component designs from the current systems the higher the uncertainty that accompanies the associated failure estimates. Designers of advanced systems are faced with many difficult decisions. One of the most common and more difficult types of these decisions are those related to the choice between design alternatives. In the past decision-makers have found these decisions to be extremely difficult to make because they often involve the trade-off between a known performing fielded design and a promising paper design. When it comes to expected reliability performance the paper design always looks better because it is on paper and it addresses all the know failure modes of the fielded design. On the other hand there is a long, and sometimes very difficult road, between the promise of a paper design and its fulfillment; with the possibility that sometimes the reliability promise is not fulfilled at all. Decision makers in advanced technology areas have always known to discount the performance claims of a design to a degree in proportion to its stage of development, and at times have preferred the more mature design over the one of lesser maturity even with the latter promising substantially better performance once fielded. As with the broader measures of performance this has also been true for projected reliability performance. Paper estimates of potential advances in design reliability are to a degree uncertain in proportion to the maturity of the features being proposed to secure those advances. This is especially true when performance-enhancing features in other areas are also planned to be part of the development program.
Limitations of Reliability for Long-Endurance Human Spaceflight
NASA Technical Reports Server (NTRS)
Owens, Andrew C.; de Weck, Olivier L.
2016-01-01
Long-endurance human spaceflight - such as missions to Mars or its moons - will present a never-before-seen maintenance logistics challenge. Crews will be in space for longer and be farther way from Earth than ever before. Resupply and abort options will be heavily constrained, and will have timescales much longer than current and past experience. Spare parts and/or redundant systems will have to be included to reduce risk. However, the high cost of transportation means that this risk reduction must be achieved while also minimizing mass. The concept of increasing system and component reliability is commonly discussed as a means to reduce risk and mass by reducing the probability that components will fail during a mission. While increased reliability can reduce maintenance logistics mass requirements, the rate of mass reduction decreases over time. In addition, reliability growth requires increased test time and cost. This paper assesses trends in test time requirements, cost, and maintenance logistics mass savings as a function of increase in Mean Time Between Failures (MTBF) for some or all of the components in a system. In general, reliability growth results in superlinear growth in test time requirements, exponential growth in cost, and sublinear benefits (in terms of logistics mass saved). These trends indicate that it is unlikely that reliability growth alone will be a cost-effective approach to maintenance logistics mass reduction and risk mitigation for long-endurance missions. This paper discusses these trends as well as other options to reduce logistics mass such as direct reduction of part mass, commonality, or In-Space Manufacturing (ISM). Overall, it is likely that some combination of all available options - including reliability growth - will be required to reduce mass and mitigate risk for future deep space missions.
On reliable control system designs. Ph.D. Thesis; [actuators
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
FY12 End of Year Report for NEPP DDR2 Reliability
NASA Technical Reports Server (NTRS)
Guertin, Steven M.
2013-01-01
This document reports the status of the NASA Electronic Parts and Packaging (NEPP) Double Data Rate 2 (DDR2) Reliability effort for FY2012. The task expanded the focus of evaluating reliability effects targeted for device examination. FY11 work highlighted the need to test many more parts and to examine more operating conditions, in order to provide useful recommendations for NASA users of these devices. This year's efforts focused on development of test capabilities, particularly focusing on those that can be used to determine overall lot quality and identify outlier devices, and test methods that can be employed on components for flight use. Flight acceptance of components potentially includes considerable time for up-screening (though this time may not currently be used for much reliability testing). Manufacturers are much more knowledgeable about the relevant reliability mechanisms for each of their devices. We are not in a position to know what the appropriate reliability tests are for any given device, so although reliability testing could be focused for a given device, we are forced to perform a large campaign of reliability tests to identify devices with degraded reliability. With the available up-screening time for NASA parts, it is possible to run many device performance studies. This includes verification of basic datasheet characteristics. Furthermore, it is possible to perform significant pattern sensitivity studies. By doing these studies we can establish higher reliability of flight components. In order to develop these approaches, it is necessary to develop test capability that can identify reliability outliers. To do this we must test many devices to ensure outliers are in the sample, and we must develop characterization capability to measure many different parameters. For FY12 we increased capability for reliability characterization and sample size. We increased sample size this year by moving from loose devices to dual inline memory modules (DIMMs) with an approximate reduction of 20 to 50 times in terms of per device under test (DUT) cost. By increasing sample size we have improved our ability to characterize devices that may be considered reliability outliers. This report provides an update on the effort to improve DDR2 testing capability. Although focused on DDR2, the methods being used can be extended to DDR and DDR3 with relative ease.
National Launch System comparative economic analysis
NASA Technical Reports Server (NTRS)
Prince, A.
1992-01-01
Results are presented from an analysis of economic benefits (or losses), in the form of the life cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for the Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. A SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.
Life cycle cost modeling of conceptual space vehicles
NASA Technical Reports Server (NTRS)
Ebeling, Charles
1993-01-01
This paper documents progress to date by the University of Dayton on the development of a life cycle cost model for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years in which a reliability and maintainability model was developed to the initial development of a life cycle cost model. Cost categories are initially patterned after NASA's three axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. The focus will be on operations and maintenance costs and other recurring costs. Secondary tasks performed concurrent with the development of the life cycle costing model include continual support and upgrade of the R&M model. The primary result of the completed research will be a methodology and a computer implementation of the methodology to provide for timely cost analysis in support of the conceptual design activities. The major objectives of this research are: to obtain and to develop improved methods for estimating manpower, spares, software and hardware costs, facilities costs, and other cost categories as identified by NASA personnel; to construct a life cycle cost model of a space transportation system for budget exercises and performance-cost trade-off analysis during the conceptual and development stages; to continue to support modifications and enhancements to the R&M model; and to continue to assist in the development of a simulation model to provide an integrated view of the operations and support of the proposed system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, F.; Harrington, C.; Moskovitz, D.
Distributed resources can provide cost-effective reliability and energy services - in many cases, obviating the need for more expensive investments in wires and central station electricity generating facilities. Given the unique features of distributed resources, the challenge facing policymakers today is how to restructure wholesale markets for electricity and related services so as to reveal the full value that distributed resources can provide to the electric power system (utility grid). This report looks at the functions that distributed resources can perform and examines the barriers to them. It then identifies a series of policy and operational approaches to promoting DRmore » in wholesale markets. This report is one in the State Electricity Regulatory Policy and Distributed Resources series developed under contract to NREL (see Annual Technical Status Report of the Regulatory Assistance Project: September 2000-September 2001, NREL/SR-560-32733). Other titles in this series are: (1) Distributed Resource Distribution Credit Pilot Programs - Revealing the Value to Consumers and Vendors, NREL/SR-560-32499; (2) Distributed Resources and Electric System Reliability, NREL/SR-560-32498; (3) Distribution System Cost Methodologies for Distributed Generation, NREL/SR-560-32500; (4) Distribution System Cost Methodologies for Distributed Generation Appendices, NREL/SR-560-32501« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hightower, Marion Michael; Baca, Michael J.; VanderMey, Carissa
In June 2016, the Department of Energy's (DOE's) Office of Energy Efficiency and Renewable Energy (EERE) in collaboration with the Renewable Energy Branch for the Hawaii State Energy Office (HSEO), the Hawaii Community Development Authority (HCDA), the United States Navy (Navy), and Sandia National Laboratories (Sandia) established a project to 1) assess the current functionality of the energy infrastructure at the Kalaeloa Community Development District, and 2) evaluate options to use both existing and new distributed and renewable energy generation and storage resources within advanced microgrid frameworks to cost-effectively enhance energy security and reliability for critical stakeholder needs during bothmore » short-term and extended electric power outages. This report discusses the results of a stakeholder workshop and associated site visits conducted by Sandia in October 2016 to identify major Kalaeloa stakeholder and tenant energy issues, concerns, and priorities. The report also documents information on the performance and cost benefits of a range of possible energy system improvement options including traditional electric grid upgrade approaches, advanced microgrid upgrades, and combined grid/microgrid improvements. The costs and benefits of the different improvement options are presented, comparing options to see how well they address the energy system reliability, sustainability, and resiliency priorities identified by the Kalaeloa stakeholders.« less
Evaluation of the CEAS model for barley yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1981-01-01
The CEAS yield model is based upon multiple regression analysis at the CRD and state levels. For the historical time series, yield is regressed on a set of variables derived from monthly mean temperature and monthly precipitation. Technological trend is represented by piecewise linear and/or quadriatic functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test (1970-79) demonstrated that biases are small and performance as indicated by the root mean square errors are acceptable for intended application, however, model response for individual years particularly unusual years, is not very reliable and shows some large errors. The model is objective, adequate, timely, simple and not costly. It considers scientific knowledge on a broad scale but not in detail, and does not provide a good current measure of modeled yield reliability.
Detailed requirements document for the Interactive Financial Management System (IFMS), volume 1
NASA Technical Reports Server (NTRS)
Dodson, D. B.
1975-01-01
The detailed requirements for phase 1 (online fund control, subauthorization accounting, and accounts receivable functional capabilities) of the Interactive Financial Management System (IFMS) are described. This includes information on the following: systems requirements, performance requirements, test requirements, and production implementation. Most of the work is centered on systems requirements, and includes discussions on the following processes: resources authority, allotment, primary work authorization, reimbursable order acceptance, purchase request, obligation, cost accrual, cost distribution, disbursement, subauthorization performance, travel, accounts receivable, payroll, property, edit table maintenance, end-of-year, backup input. Other subjects covered include: external systems interfaces, general inquiries, general report requirements, communication requirements, and miscellaneous. Subjects covered under performance requirements include: response time, processing volumes, system reliability, and accuracy. Under test requirements come test data sources, general test approach, and acceptance criteria. Under production implementation come data base establishment, operational stages, and operational requirements.
Systematic Sensor Selection Strategy (S4) User Guide
NASA Technical Reports Server (NTRS)
Sowers, T. Shane
2012-01-01
This paper describes a User Guide for the Systematic Sensor Selection Strategy (S4). S4 was developed to optimally select a sensor suite from a larger pool of candidate sensors based on their performance in a diagnostic system. For aerospace systems, selecting the proper sensors is important for ensuring adequate measurement coverage to satisfy operational, maintenance, performance, and system diagnostic criteria. S4 optimizes the selection of sensors based on the system fault diagnostic approach while taking conflicting objectives such as cost, weight and reliability into consideration. S4 can be described as a general architecture structured to accommodate application-specific components and requirements. It performs combinational optimization with a user defined merit or cost function to identify optimum or near-optimum sensor suite solutions. The S4 User Guide describes the sensor selection procedure and presents an example problem using an open source turbofan engine simulation to demonstrate its application.
Thin-film filament-based solar cells and modules
NASA Astrophysics Data System (ADS)
Tuttle, J. R.; Cole, E. D.; Berens, T. A.; Alleman, J.; Keane, J.
1997-04-01
This concept paper describes a patented, novel photovoltaic (PV) technology that is capable of achieving near-term commercialization and profitability based upon design features that maximize product performance while minimizing initial and future manufacturing costs. DayStar Technologies plans to exploit these features and introduce a product to the market based upon these differential positions. The technology combines the demonstrated performance and reliability of existing thin-film PV product with a cell and module geometry that cuts material usage by a factor of 5, and enhances performance and manufacturability relative to standard flat-plate designs. The target product introduction price is 1.50/Watt-peak (Wp). This is approximately one-half the cost of the presently available PV product. Additional features include: increased efficiency through low-level concentration, no scribe or grid loss, simple series interconnect, high voltage, light weight, high-throughput manufacturing, large area immediate demonstration, flexibility, modularity.
Survey points to practices that reduce refinery maintenance spending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ricketts, R.
During the past decade, Solomon Associates Inc., Dallas, has conducted several comparative analyses of maintenance costs in the refining industry. These investigations have brought to light maintenance practices and reliability improvement activities that are responsible for the wide range of maintenance costs recorded by refineries. Some of the practices are of an organizational nature and thus are of interest to managers reviewing their operations. The paper discusses maintenance costs; profitability; cost trends; equipment availability; funds application; two basic organizational approached to maintenance (repair-focused organization and reliability-focused organization); low-cost practices; and organizational style.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, Jason; Annoni, Jennifer; Hayman, Greg
2017-01-01
This paper presents the development of FAST.Farm, a new multiphysics tool applicable to engineering problems in research and industry involving wind farm performance and cost optimization that is needed to address the current underperformance, failures, and expenses plaguing the wind industry. Achieving wind cost-of-energy targets - which requires improvements in wind farm performance and reliability, together with reduced uncertainty and expenditures - has been eluded by the complicated nature of the wind farm design problem, especially the sophisticated interaction between atmospheric phenomena and wake dynamics and array effects. FAST.Farm aims to balance the need for accurate modeling of the relevantmore » physics for predicting power performance and loads while maintaining low computational cost to support a highly iterative and probabilistic design process and system-wide optimization. FAST.Farm makes use of FAST to model the aero-hydro-servo-elastics of distinct turbines in the wind farm, and it is based on some of the principles of the Dynamic Wake Meandering (DWM) model, but avoids many of the limitations of existing DWM implementations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holdmann, Gwen
2016-12-20
Alaska is considered a world leader in renewable energy and microgrid technologies. Our workplan started as an analysis of existing wind-diesel systems, many of which were not performing as designed. We aimed to analyze and understand the performance of existing wind-diesel systems, to establish a knowledge baseline from which to work towards improvement and maximizing renewable energy utilization. To accomplish this, we worked with the Alaska Energy Authority to develop a comprehensive database of wind system experience, including underlying climatic and socioeconomic characteristics, actual operating data, projected vs. actual capital and O&M costs, and a catalogue of catastrophic anomalies. Thismore » database formed the foundation for the rest of the research program, with the overarching goal of delivering low-cost, reliable, and sustainable energy to diesel microgrids.« less
A new communication protocol family for a distributed spacecraft control system
NASA Technical Reports Server (NTRS)
Baldi, Andrea; Pace, Marco
1994-01-01
In this paper we describe the concepts behind and architecture of a communication protocol family, which was designed to fulfill the communication requirements of ESOC's new distributed spacecraft control system SCOS 2. A distributed spacecraft control system needs a data delivery subsystem to be used for telemetry (TLM) distribution, telecommand (TLC) dispatch and inter-application communication, characterized by the following properties: reliability, so that any operational workstation is guaranteed to receive the data it needs to accomplish its role; efficiency, so that the telemetry distribution, even for missions with high telemetry rates, does not cause a degradation of the overall control system performance; scalability, so that the network is not the bottleneck both in terms of bandwidth and reconfiguration; flexibility, so that it can be efficiently used in many different situations. The new protocol family which satisfies the above requirements is built on top of widely used communication protocols (UDP and TCP), provides reliable point-to-point and broadcast communication (UDP+) and is implemented in C++. Reliability is achieved using a retransmission mechanism based on a sequence numbering scheme. Such a scheme allows to have cost-effective performances compared to the traditional protocols, because retransmission is only triggered by applications which explicitly need reliability. This flexibility enables applications with different profiles to take advantage of the available protocols, so that the best rate between sped and reliability can be achieved case by case.
NASA Astrophysics Data System (ADS)
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free replacement and pro rata warranty policy is analysed as warranty model for one type of light bulbs. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from the tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of times to failure needed for warranty cost calculation. The results of the analysis make possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
Forecast analysis of optical waveguide bus performance
NASA Technical Reports Server (NTRS)
Ledesma, R.; Rourke, M. D.
1979-01-01
Elements to be considered in the design of a data bus include: architecture; data rate; modulation, encoding, detection; power distribution requirements; protocol, work structure; bus reliability, maintainability; interterminal transmission medium; cost; and others specific to application. Fiber- optic data bus considerations for a 32 port transmissive star architecture, are discussed in a tutorial format. General optical-waveguide bus concepts, are reviewed. The electrical and optical performance of a 32 port transmissive star bus, and the effects of temperature on the performance of optical-waveguide buses are examined. A bibliography of pertinent references and the bus receiver test results are included.
Solar power satellite system definition study. Volume 1, phase 1: Executive summary
NASA Technical Reports Server (NTRS)
1979-01-01
A systems definition study of the solar satellite system (SPS) is presented. The technical feasibility of solar power satellites based on forecasts of technical capability in the various applicable technologies is assessed. The performance, cost, operational characteristics, reliability, and the suitability of SPS's as power generators for typical commercial electricity grids are discussed. The uncertainties inherent in the system characteristics forecasts are assessed.
NASA Headquarters/Kennedy Space Center: Organization and Small Spacecraft Launch Services
NASA Technical Reports Server (NTRS)
Sierra, Albert; Beddel, Darren
1999-01-01
The objectives of the Kennedy Space Center's (KSC) Expendable Launch Vehicles (ELV) Program are to provide safe, reliable, cost effective ELV launches, maximize customer satisfaction, and perform advanced payload processing capability development. Details are given on the ELV program organization, products and services, foreign launch vehicle policy, how to get a NASA launch service, and some of the recent NASA payloads.
Preliminary candidate advanced avionics system for general aviation
NASA Technical Reports Server (NTRS)
Mccalla, T. M.; Grismore, F. L.; Greatline, S. E.; Birkhead, L. M.
1977-01-01
An integrated avionics system design was carried out to the level which indicates subsystem function, and the methods of overall system integration. Sufficient detail was included to allow identification of possible system component technologies, and to perform reliability, modularity, maintainability, cost, and risk analysis upon the system design. Retrofit to older aircraft, availability of this system to the single engine two place aircraft, was considered.
Electronics for a focal plane crystal spectrometer
NASA Technical Reports Server (NTRS)
Goeke, R. F.
1978-01-01
The HEAO-B program forced the usual constraints upon the spacecraft experiment electronics: high reliability, low power consumption, and tight packaging at reasonable cost. The programmable high voltage power supplies were unique in both application and simplicity of manufacture. The hybridized measurement chain is a modification of that used on the SAS-C program; the charge amplifier design in particular shows definite improvement in performance over previous work.
Systems Engineering of Electric and Hybrid Vehicles
NASA Technical Reports Server (NTRS)
Kurtz, D. W.; Levin, R. R.
1986-01-01
Technical paper notes systems engineering principles applied to development of electric and hybrid vehicles such that system performance requirements support overall program goal of reduced petroleum consumption. Paper discusses iterative design approach dictated by systems analyses. In addition to obvious peformance parameters of range, acceleration rate, and energy consumption, systems engineering also considers such major factors as cost, safety, reliability, comfort, necessary supporting infrastructure, and availability of materials.
Management approach recommendations. Earth Observatory Satellite system definition study (EOS)
NASA Technical Reports Server (NTRS)
1974-01-01
Management analyses and tradeoffs were performed to determine the most cost effective management approach for the Earth Observatory Satellite (EOS) Phase C/D. The basic objectives of the management approach are identified. Some of the subjects considered are as follows: (1) contract startup phase, (2) project management control system, (3) configuration management, (4) quality control and reliability engineering requirements, and (5) the parts procurement program.
Agreement and reading time for differently-priced devices for the digital capture of X-ray films.
Salazar, Antonio José; Camacho, Juan Camilo; Aguirre, Diego Andrés
2012-03-01
We assessed the reliability of three digital capture devices: a film digitizer (which cost US $15,000), a flat-bed scanner (US $1800) and a digital camera (US $450). Reliability was measured as the agreement between six observers when reading images acquired from a single device and also in terms of the pair-device agreement. The images were 136 chest X-ray cases. The variables measured were the interstitial opacities distribution, interstitial patterns, nodule size and percentage pneumothorax size. The agreement between the six readers when reading images acquired from a single device was similar for the three devices. The pair-device agreements were moderate for all variables. There were significant differences in reading-time between devices: the mean reading-time for the film digitizer was 93 s, it was 59 s for the flat-bed scanner and 70 s for the digital camera. Despite the differences in their cost, there were no substantial differences in the performance of the three devices.
NASA Astrophysics Data System (ADS)
Capps, Gregory
Semiconductor products are manufactured and consumed across the world. The semiconductor industry is constantly striving to manufacture products with greater performance, improved efficiency, less energy consumption, smaller feature sizes, thinner gate oxides, and faster speeds. Customers have pushed towards zero defects and require a more reliable, higher quality product than ever before. Manufacturers are required to improve yields, reduce operating costs, and increase revenue to maintain a competitive advantage. Opportunities exist for integrated circuit (IC) customers and manufacturers to work together and independently to reduce costs, eliminate waste, reduce defects, reduce warranty returns, and improve quality. This project focuses on electrical over-stress (EOS) and re-test okay (RTOK), two top failure return mechanisms, which both make great defect reduction opportunities in customer-manufacturer relationship. Proactive continuous improvement initiatives and methodologies are addressed with emphasis on product life cycle, manufacturing processes, test, statistical process control (SPC), industry best practices, customer education, and customer-manufacturer interaction.
A Study of Phased Array Antennas for NASA's Deep Space Network
NASA Technical Reports Server (NTRS)
Jamnejad, Vahraz; Huang, John; Cesarone, Robert J.
2001-01-01
In this paper we briefly discuss various options but focus on the feasibility of the phased arrays as a viable option for this application. Of particular concern and consideration will be the cost, reliability, and performance compared to the present 70-meter antenna system, particularly the gain/noise temperature levels in the receive mode. Many alternative phased arrays including planar horizontal arrays, hybrid mechanically/electronically steered arrays, phased array of mechanically steered reflectors, multi-faceted planar arrays, phased array-fed lens antennas, and planar reflect-arrays are compared and their viability is assessed. Although they have many advantages including higher reliability, near-instantaneous beam switching or steering capability, the cost of such arrays is presently prohibitive and it is concluded that the only viable array options at the present are the arrays of a few or many small reflectors. The active planar phased arrays, however, may become feasible options in the next decade and can be considered for deployment in smaller configurations as supplementary options.
The Potential of Energy Storage Systems with Respect to Generation Adequacy and Economic Viability
NASA Astrophysics Data System (ADS)
Bradbury, Kyle Joseph
Intermittent energy resources, including wind and solar power, continue to be rapidly added to the generation fleet domestically and abroad. The variable power of these resources introduces new levels of stochasticity into electric interconnections that must be continuously balanced in order to maintain system reliability. Energy storage systems (ESSs) offer one potential option to compensate for the intermittency of renewables. ESSs for long-term storage (1-hour or greater), aside from a few pumped hydroelectric installations, are not presently in widespread use in the U.S. The deployment of ESSs would be most likely driven by either the potential for a strong internal rate of return (IRR) on investment and through significant benefits to system reliability that independent system operators (ISOs) could incentivize. To assess the potential of ESSs three objectives are addressed. (1) Evaluate the economic viability of energy storage for price arbitrage in real-time energy markets and determine system cost improvements for ESSs to become attractive investments. (2) Estimate the reliability impact of energy storage systems on the large-scale integration of intermittent generation. (3) Analyze the economic, environmental, and reliability tradeoffs associated with using energy storage in conjunction with stochastic generation. First, using real-time energy market price data from seven markets across the U.S. and the physical parameters of fourteen ESS technologies, the maximum potential IRR of each technology from price arbitrage was evaluated in each market, along with the optimal ESS system size. Additionally, the reductions in capital cost needed to achieve a 10% IRR were estimated for each ESS. The results indicate that the profit-maximizing size of an ESS is primarily determined by its technological characteristics (round-trip charge/discharge efficiency and self-discharge) and not market price volatility, which instead increases IRR. This analysis demonstrates that few ESS technologies are likely to be implemented by investors alone. Next, the effects of ESSs on system reliability are quantified. Using historic data for wind, solar, and conventional generation, a correlation-preserving, copula-transform model was implemented in conjunction with Markov chain Monte Carlo framework for estimating system reliability indices. Systems with significant wind and solar penetration (25% or greater), even with added energy storage capacity, resulted in considerable decreases in generation adequacy. Lastly, rather than analyzing the reliability and costs in isolation of one another, system reliability, cost, and emissions were analyzed in 3-space to quantify and visualize the system tradeoffs. The modeling results implied that ESSs perform similarly to natural gas combined cycle (NGCC) systems with respect to generation adequacy and system cost, with the primary difference being that the generation adequacy improvements are less for ESSs than that of NGCC systems and the increase in LCOE is greater for ESSs than NGCC systems. Although ESSs do not appear to offer greater benefits than NGCC systems for managing energy on time intervals of 1-hour or more, we conclude that future research into short-term power balancing applications of ESSs, in particular for frequency regulation, is necessary to understand the full potential of ESSs in modern electric interconnections.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of various components designed for detection, localization, tracking, collecting, and processing of information from the systems of monitoring, telemetry, control, etc. They are required to be highly reliable in a view to correctly perform data aggregation, processing and analysis for subsequent decision making support. On design and construction phases of the manufacturing of such systems a various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signals detection, noise isolation, and erroneous commands reduction. When generating design solutions for highly reliable systems a number of restrictions and conditions such as types of components and various constrains on resources should be considered. Various types of components perform identical functions; however, they are implemented using diverse principles, approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of tasks performing and eliminates the common cause failure. We consider type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems of highly reliable safety and security systems design. Mathematical models are formalized in a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, algorithms can be used for problem solving of optimal redundancy on the basis of a variety of methods and control devices for fault and defects detection in technical systems, telecommunication networks, and energy systems.
Flight control electronics reliability/maintenance study
NASA Technical Reports Server (NTRS)
Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.
1977-01-01
Collection and analysis of data are reported that concern the reliability and maintenance experience of flight control system electronics currently in use on passenger carrying jet aircraft. Two airlines B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.
Making a Reliable Actuator Faster and More Affordable
NASA Technical Reports Server (NTRS)
2005-01-01
Before any rocket is allowed to fly and be used for a manned mission, it is first test-fired on a static test stand to verify its flight readiness. NASA s Stennis Space Center provides testing of Space Shuttle Main Engines, rocket propulsion systems, and related components with several test facilities. It has been NASA s test-launch site since 1961. The testing stations age with time and repeated use; and with aging comes maintenance; and with maintenance comes expense. NASA has been seeking ways to lower the cost of maintaining the stations, and has aided in the development of an improved reliable linear actuator that arrives onsite quickly and costs less money than other actuators. In general terms, a linear actuator is a servomechanism that supplies a measured amount of energy for the operation of another mechanical system. Accuracy, reliability, and speed of the actuator are critical to performance of the entire system, and these actuators are critical components of the engine test stands. Partnership An actuator was developed as part of a Dual-Use Cooperative Agreement between BAFCO, Inc., of Warminister, Pennsylvania, and Stennis. BAFCO identified four suppliers that manufactured actuator components that met the rigorous testing standards imposed by the Space Agency and then modified these components for application on the rocket test stands. In partnership with BAFCO, the existing commercial products size and weight were reworked, reducing cost and delivery time. Previously, these parts would cost between $20,000 and $22,000, but with the new process, they now run between $11,000 and $13,000, a substantial savings, considering NASA has already purchased over 120 of the units. Delivery time of the cost-saving actuators has also been cut from over 20 to 22 weeks to within 8 to 10 weeks. The redesigned actuator is commercially available, and the company is successfully supplying them to customers other than NASA.
Using random forest for reliable classification and cost-sensitive learning for medical diagnosis.
Yang, Fan; Wang, Hua-zhen; Mi, Hong; Lin, Cheng-de; Cai, Wei-wen
2009-01-30
Most machine-learning classifiers output label predictions for new instances without indicating how reliable the predictions are. The applicability of these classifiers is limited in critical domains where incorrect predictions have serious consequences, like medical diagnosis. Further, the default assumption of equal misclassification costs is most likely violated in medical diagnosis. In this paper, we present a modified random forest classifier which is incorporated into the conformal predictor scheme. A conformal predictor is a transductive learning scheme, using Kolmogorov complexity to test the randomness of a particular sample with respect to the training sets. Our method show well-calibrated property that the performance can be set prior to classification and the accurate rate is exactly equal to the predefined confidence level. Further, to address the cost sensitive problem, we extend our method to a label-conditional predictor which takes into account different costs for misclassifications in different class and allows different confidence level to be specified for each class. Intensive experiments on benchmark datasets and real world applications show the resultant classifier is well-calibrated and able to control the specific risk of different class. The method of using RF outlier measure to design a nonconformity measure benefits the resultant predictor. Further, a label-conditional classifier is developed and turn to be an alternative approach to the cost sensitive learning problem that relies on label-wise predefined confidence level. The target of minimizing the risk of misclassification is achieved by specifying the different confidence level for different class.
Temperature and Humidity Calibration of a Low-Cost Wireless Dust Sensor for Real-Time Monitoring.
Hojaiji, Hannaneh; Kalantarian, Haik; Bui, Alex A T; King, Christine E; Sarrafzadeh, Majid
2017-03-01
This paper introduces the design, calibration, and validation of a low-cost portable sensor for the real-time measurement of dust particles within the environment. The proposed design consists of low hardware cost and calibration based on temperature and humidity sensing to achieve accurate processing of airborne dust density. Using commercial particulate matter sensors, a highly accurate air quality monitoring sensor was designed and calibrated using real world variations in humidity and temperature for indoor and outdoor applications. Furthermore, to provide a low-cost secure solution for real-time data transfer and monitoring, an onboard Bluetooth module with AES data encryption protocol was implemented. The wireless sensor was tested against a Dylos DC1100 Pro Air Quality Monitor, as well as an Alphasense OPC-N2 optical air quality monitoring sensor for accuracy. The sensor was also tested for reliability by comparing the sensor to an exact copy of itself under indoor and outdoor conditions. It was found that accurate measurements under real-world humid and temperature varying and dynamically changing conditions were achievable using the proposed sensor when compared to the commercially available sensors. In addition to accurate and reliable sensing, this sensor was designed to be wearable and perform real-time data collection and transmission, making it easy to collect and analyze data for air quality monitoring and real-time feedback in remote health monitoring applications. Thus, the proposed device achieves high quality measurements at lower-cost solutions than commercially available wireless sensors for air quality.
Long life, low cost, rechargeable AgZn battery for non-military applications
NASA Astrophysics Data System (ADS)
Brown, Curtis C.
1996-03-01
Of the rechargeable (secondary) battery systems with mature technology, the silver oxide-zinc system (AgZn) safely offers the highest power and energy (watts and watt hours) per unit of volume and mass. As a result they have long been used for aerospace and defense applications where they have also proven their high reliability. In the past, the expense associated with the cost of silver and the resulting low production volume have limited their commercial application. However, the relative low cost of silver now make this system feasible in many applications where high energy and reliability are required. One area of commercial potential is power for a new generation of sophisticated, portable medical equipment. AgZn batteries have recently proven ``enabling technology'' for power critical, advanced medical devices. By extending the cycle calendar life to the system (offers both improved performance and lower operating cost), a combination is achieved which may enable a wide range of future electrical devices. Other areas where AgZn batteries have been used in nonmilitary applications to provide power to aid in the development of commercial equipment have been: (a) Electrically powered vehicles; (b) Remote sensing in nuclear facilities; (c) Special effects equipment for movies; (d) Remote sensing in petroleum pipe lines; (e) Portable computers; (f) Fly by wire systems for commercial aircraft; and (g) Robotics. However none of these applications have progressed to the level where the volume required will significantly lower cost.
Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.
Gao, Jian; Moran, Eileen; Almenoff, Peter L
2018-06-01
Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation. More accurate risk adjustment is desired for provider performance assessment and improvement. To develop a case-mix algorithm that hospitals and payers can use to measure and compare cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R, mean absolute percentage error, root mean square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power. The R reached 0.72 and 0.52 for the transformed and raw scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purpose.
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2010-01-01
This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
NASA Astrophysics Data System (ADS)
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte-Carlo simulation (MCS) method is considered as the most reliable method for estimating the probabilities of reliability. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
NASA Astrophysics Data System (ADS)
Strunz, Richard; Herrmann, Jeffrey W.
2011-12-01
The hot fire test strategy for liquid rocket engines has always been a concern of space industry and agency alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as example to provide the quantitative answer to the claim that a reduced thrust scale demonstrator is cost beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase of the demonstrated reliability level on schedule and affordability.
PV inverter performance and reliability: What is the role of the bus capacitor?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flicker, Jack; Kaplar, Robert; Marinella, Matthew
In order to elucidate how the degradation of individual components affects the state of the photovoltaic inverter as a whole, we have carried out SPICE simulations to investigate the voltage and current ripple on the DC bus. The bus capacitor is generally considered to be among the least reliable components of the system, so we have simulated how the degradation of bus capacitors affects the AC ripple at the terminals of the PV module. Degradation-induced ripple leads to an increased degradation rate in a positive feedback cycle. Additionally, laboratory experiments are being carried out to ascertain the reliability of metallizedmore » thin film capacitors. By understanding the degradation mechanisms and their effects on the inverter as a system, steps can be made to more effectively replace marginal components with more reliable ones, increasing the lifetime and efficiency of the inverter and decreasing its cost per watt towards the US Department of Energy goals.« less
Optimization of Adaptive Intraply Hybrid Fiber Composites with Reliability Considerations
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1994-01-01
The reliability with bounded distribution parameters (mean, standard deviation) was maximized and the reliability-based cost was minimized for adaptive intra-ply hybrid fiber composites by using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties including those in constituent material properties, fabrication variables, structure geometry, and control-related parameters. Probabilistic sensitivity factors were computed and used in the optimization procedures. For actuated change in the angle of attack of an airfoil-like composite shell structure with an adaptive torque plate, the reliability was maximized to 0.9999 probability, with constraints on the mean and standard deviation of the actuation material volume ratio (percentage of actuation composite material in a ply) and the actuation strain coefficient. The reliability-based cost was minimized for an airfoil-like composite shell structure with an adaptive skin and a mean actuation material volume ratio as the design parameter. At a O.9-mean actuation material volume ratio, the minimum cost was obtained.
Managing Reliability in the 21st Century
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dellin, T.A.
1998-11-23
The rapid pace of change at Ike end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heartmore » of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the stint of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization business objectives.« less
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judicially selecting and narrowing the follow-on packages for evaluation and testing, as well as for the low risk insertion in high-reliability applications.
NASA Technical Reports Server (NTRS)
Ghaffarian, Reza
2014-01-01
Bottom terminated components and quad flat no-lead (BTC/QFN) packages have been extensively used by commercial industry for more than a decade. Cost and performance advantages and the closeness of the packages to the boards make them especially unique for radio frequency (RF) applications. A number of high-reliability parts are now available in this style of package configuration. This report presents a summary of literature surveyed and provides a body of knowledge (BOK) gathered on the status of BTC/QFN and their advanced versions of multi-row QFN (MRQFN) packaging technologies. The report provides a comprehensive review of packaging trends and specifications on design, assembly, and reliability. Emphasis is placed on assembly reliability and associated key design and process parameters because they show lower life than standard leaded package assembly under thermal cycling exposures. Inspection of hidden solder joints for assuring quality is challenging and is similar to ball grid arrays (BGAs). Understanding the key BTC/QFN technology trends, applications, processing parameters, workmanship defects, and reliability behavior is important when judicially selecting and narrowing the follow-on packages for evaluation and testing, as well as for the low risk insertion in high-reliability applications.
Cao, Xuan; Chen, Haitian; Gu, Xiaofei; Liu, Bilu; Wang, Wenli; Cao, Yu; Wu, Fanqi; Zhou, Chongwu
2014-12-23
Semiconducting single-wall carbon nanotubes are very promising materials in printed electronics due to their excellent mechanical and electrical property, outstanding printability, and great potential for flexible electronics. Nonetheless, developing scalable and low-cost approaches for manufacturing fully printed high-performance single-wall carbon nanotube thin-film transistors remains a major challenge. Here we report that screen printing, which is a simple, scalable, and cost-effective technique, can be used to produce both rigid and flexible thin-film transistors using separated single-wall carbon nanotubes. Our fully printed top-gated nanotube thin-film transistors on rigid and flexible substrates exhibit decent performance, with mobility up to 7.67 cm2 V(-1) s(-1), on/off ratio of 10(4)∼10(5), minimal hysteresis, and low operation voltage (<10 V). In addition, outstanding mechanical flexibility of printed nanotube thin-film transistors (bent with radius of curvature down to 3 mm) and driving capability for organic light-emitting diode have been demonstrated. Given the high performance of the fully screen-printed single-wall carbon nanotube thin-film transistors, we believe screen printing stands as a low-cost, scalable, and reliable approach to manufacture high-performance nanotube thin-film transistors for application in display electronics. Moreover, this technique may be used to fabricate thin-film transistors based on other materials for large-area flexible macroelectronics, and low-cost display electronics.
Navigating Financial and Supply Reliability Tradeoffs in Regional Drought Portfolios
NASA Astrophysics Data System (ADS)
Zeff, H. B.; Herman, J. D.; Characklis, G. W.; Reed, P. M.
2013-12-01
Rising development costs and growing concerns over environmental impacts have led many communities to explore more diversified regional portfolio-type approaches to managing their water supplies. These strategies coordinate existing supply infrastructure with other ';assets' such as conservation measures or water transfers, reducing the capacity and costs required to meet demand by providing greater adaptability to changing hydrologic conditions. For many water utilities, however, this additional flexibility can also cause unexpected reductions in revenue (i.e. conservation) or increased costs (i.e. transfers), fluctuations that can be very difficult for a regulated entity to manage. Thus, despite the advantages, concerns over the resulting financial disruptions provide a disincentive for utilities to develop more adaptive methods, potentially limiting the role of some very effective tools. This study seeks to design portfolio strategies that employ financial instruments (e.g. contingency funds, index insurance) to reduce fluctuations in revenues and costs and therefore do not sacrifice financial stability for improved performance (e.g. lower expected costs, high reliability). This work describes the development of regional water supply portfolios in the ';Research Triangle' region of North Carolina, an area comprising four rapidly growing municipalities supplied by nine surface water reservoirs in two separate river basins. Disparities in growth rates and the respective individual storage capacities of the reservoirs provide the region with the opportunity to increase the efficiency of the regional supply infrastructure through inter-utility water transfers, even as each utility engages in its own conservation activities. The interdependence of multiple utilities navigating shared conveyance and treatment infrastructure to engage in transfers forces water managers to consider regional objectives, as the actions of any one utility can affect the others. Results indicate the inclusion of inter-utility water transfers allows the water utilities to improve on regional operational objectives (i.e. higher reliability and lower restriction frequencies) at a lower expected cost, while financial mitigation tools introduce a tradeoff between expected costs and cost variability. Financial mitigation schemes, including both third-party financial insurance contracts and contingency funds (i.e. self-insurance), were able to reduce cost variability at a lower expected cost than mitigation schemes which use self-insurance alone. The dynamics of the Research Triangle scenario (e.g. rapid population growth, constrained supply, and sensitivity to cost/revenue swings) suggest that this work may have the potential to more generally inform utilities on the effects of coordinated regional water supply planning and the resulting financial implications of more flexible, portfolio-type management techniques.
COSTMODL: An automated software development cost estimation tool
NASA Technical Reports Server (NTRS)
Roush, George B.
1991-01-01
The cost of developing computer software continues to consume an increasing portion of many organizations' total budgets, both in the public and private sector. As this trend develops, the capability to produce reliable estimates of the effort and schedule required to develop a candidate software product takes on increasing importance. The COSTMODL program was developed to provide an in-house capability to perform development cost estimates for NASA software projects. COSTMODL is an automated software development cost estimation tool which incorporates five cost estimation algorithms including the latest models for the Ada language and incrementally developed products. The principal characteristic which sets COSTMODL apart from other software cost estimation programs is its capacity to be completely customized to a particular environment. The estimation equations can be recalibrated to reflect the programmer productivity characteristics demonstrated by the user's organization, and the set of significant factors which effect software development costs can be customized to reflect any unique properties of the user's development environment. Careful use of a capability such as COSTMODL can significantly reduce the risk of cost overruns and failed projects.
Shaw, Andrew J; Ingham, Stephen A; Fudge, Barry W; Folland, Jonathan P
2013-12-01
This study assessed the between-test reliability of oxygen cost (OC) and energy cost (EC) in distance runners, and contrasted it with the smallest worthwhile change (SWC) of these measures. OC and EC displayed similar levels of within-subject variation (typical error < 3.85%). However, the typical error (2.75% vs 2.74%) was greater than the SWC (1.38% vs 1.71%) for both OC and EC, respectively, indicating insufficient sensitivity to confidently detect small, but meaningful, changes in OC and EC.
Anzalone, Gerald C; Glover, Alexandra G; Pearce, Joshua M
2013-04-19
The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories.
Reliability and performance experience with flat-plate photovoltaic modules
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1982-01-01
Statistical models developed to define the most likely sources of photovoltaic (PV) array failures and the optimum method of allowing for the defects in order to achieve a 20 yr lifetime with acceptable performance degradation are summarized. Significant parameters were the cost of energy, annual power output, initial cost, replacement cost, rate of module replacement, the discount rate, and the plant lifetime. Acceptable degradation allocations were calculated to be 0.0001 cell failures/yr, 0.005 module failures/yr, 0.05 power loss/yr, a 0.01 rate of power loss/yr, and a 25 yr module wear-out length. Circuit redundancy techniques were determined to offset cell failures using fault tolerant designs such as series/parallel and bypass diode arrangements. Screening processes have been devised to eliminate cells that will crack in operation, and multiple electrical contacts at each cell compensate for the cells which escape the screening test and then crack when installed. The 20 yr array lifetime is expected to be achieved in the near-term.
Anzalone, Gerald C.; Glover, Alexandra G.; Pearce, Joshua M.
2013-01-01
The high cost of what have historically been sophisticated research-related sensors and tools has limited their adoption to a relatively small group of well-funded researchers. This paper provides a methodology for applying an open-source approach to design and development of a colorimeter. A 3-D printable, open-source colorimeter utilizing only open-source hardware and software solutions and readily available discrete components is discussed and its performance compared to a commercial portable colorimeter. Performance is evaluated with commercial vials prepared for the closed reflux chemical oxygen demand (COD) method. This approach reduced the cost of reliable closed reflux COD by two orders of magnitude making it an economic alternative for the vast majority of potential users. The open-source colorimeter demonstrated good reproducibility and serves as a platform for further development and derivation of the design for other, similar purposes such as nephelometry. This approach promises unprecedented access to sophisticated instrumentation based on low-cost sensors by those most in need of it, under-developed and developing world laboratories. PMID:23604032
Low cost Earth attitude sensor
NASA Astrophysics Data System (ADS)
Liberati, Fabrizio; Perrotta, Giorgio; Verzegnassi, Fulvia
2017-11-01
A patent-pending, low-cost, moderate performance, Earth Attitude Sensor for LEO satellites is described in this paper. The paper deals with the system concepts, the technology adopted and the simulation results. The sensor comprises three or four narrow field of view mini telescopes pointed towards the Earth edge to detect and measure the variation of the off-nadir angle of the Earth-to-black sky transition using thermopile detectors suitably placed in the foci of the optical min telescopes. The system's innovation consists in the opto-mechanical configuration adopted that is sturdy and has no moving parts being , thus, inherently reliable. In addition, with a view to reducing production costs, the sensor does without hi-rel and is instead mainly based on COTS parts suitably chosen. Besides it is flexible and can be adapted to perform attitude measurement onboard spacecraft flying in orbits other than LEO with a minimum of modifications to the basic design. At present the sensor is under development by IMT and OptoService.
Testing the Feasibility of a Low-Cost Network Performance Measurement Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chevalier, Scott; Schopf, Jennifer M.; Miller, Kenneth
2016-07-01
Todays science collaborations depend on reliable, high performance networks, but monitoring the end-to-end performance of a network can be costly and difficult. The most accurate approaches involve using measurement equipment in many locations, which can be both expensive and difficult to manage due to immobile or complicated assets. The perfSONAR framework facilitates network measurement making management of the tests more reasonable. Traditional deployments have used over-provisioned servers, which can be expensive to deploy and maintain. As scientific network uses proliferate, there is a desire to instrument more facets of a network to better understand trends. This work explores low costmore » alternatives to assist with network measurement. Benefits include the ability to deploy more resources quickly, and reduced capital and operating expenditures. Finally, we present candidate platforms and a testing scenario that evaluated the relative merits of four types of small form factor equipment to deliver accurate performance measurements.« less
Cullen, Ralph H; Rogers, Wendy A; Fisk, Arthur D
2013-11-01
Diagnostic automation has been posited to alleviate the high demands of multiple-task environments; however, mixed effects have been found pertaining to performance aid success. To better understand these effects, attention allocation must be studied directly. We developed a multiple-task environment to study the effects of automation on visual attention. Participants interacted with a system providing varying levels of automation and automation reliability and then were transferred to a system with no support. Attention allocation was measured by tracking the number of times each task was viewed. We found that participants receiving automation allocated their time according to the task frequency and that tasks that benefited most from automation were most harmed when it was removed. The results suggest that the degree to which automation affects multiple-task performance is dependent on the relative attributes of the tasks involved. Moreover, there is an inverse relationship between support and cost when automation fails. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
A systematic review of the cost of data collection for performance monitoring in hospitals.
Jones, Cheryl; Gannon, Brenda; Wakai, Abel; O'Sullivan, Ronan
2015-04-01
Key performance indicators (KPIs) are used to identify where organisational performance is meeting desired standards and where performance requires improvement. Valid and reliable KPIs depend on the availability of high-quality data, specifically the relevant minimum data set ((MDS) the core data identified as the minimum required to measure performance for a KPI) elements. However, the feasibility of collecting the relevant MDS elements is always a limitation of performance monitoring using KPIs. Preferably, data should be integrated into service delivery, and, where additional data are required that are not currently collected as part of routine service delivery, there should be an economic evaluation to determine the cost of data collection. The aim of this systematic review was to synthesise the evidence base concerning the costs of data collection in hospitals for performance monitoring using KPI, and to identify hospital data collection systems that have proven to be cost minimising. We searched MEDLINE (1946 to May week 4 2014), Embase (1974 to May week 2 2014), and CINAHL (1937 to date). The database searches were supplemented by searching for grey literature through the OpenGrey database. Data was extracted, tabulated, and summarised as part of a narrative synthesis. The searches yielded a total of 1,135 publications. After assessing each identified study against specific inclusion exclusion criteria only eight studies were deemed as relevant for this review. The studies attempt to evaluate different types of data collection interventions including the installation of information communication technology (ICT), improvements to current ICT systems, and how different analysis techniques may be used to monitor performance. The evaluation methods used to measure the costs and benefits of data collection interventions are inconsistent across the identified literature. Overall, the results weakly indicate that collection of hospital data and improvements in data recording can be cost-saving. Given the limitations of this systematic review, it is difficult to conclude whether improvements in data collection systems can save money, increase quality of care, and assist performance monitoring of hospitals. With that said, the results are positive and suggest that data collection improvements may lead to cost savings and aid quality of care. PROSPERO CRD42014007450 .
Courses of Action to Optimize Heavy Bearings Cages
NASA Astrophysics Data System (ADS)
Szekely, V. G.
2016-11-01
The global expansion in the industrial, economically and technological context determines the need to develop products, technologies, processes and methods which ensure increased performance, lower manufacturing costs and synchronization of the main costs reported to the elementary values which correspond to utilization”. The development trend of the heavy bearing industry and the wide use of bearings determines the necessity of choosing the most appropriate material for a given application in order to meet the cumulative requirements of durability, reliability, strength, etc. Evaluation of commonly known or new materials represents a fundamental criterion, in order to choose the materials based on the cost, machinability and the technological process. In order to ensure the most effective basis for the decision, regarding the heavy bearing cage, in the first stage the functions of the product are established and in a further step a comparative analysis of the materials is made in order to establish the best materials which satisfy the product functions. The decision for selecting the most appropriate material is based largely on the overlapping of the material costs and manufacturing process during which the half-finished material becomes a finished product. The study is orientated towards a creative approach, especially towards innovation and reengineering by using specific techniques and methods applied in inventics. The main target is to find new efficient and reliable constructive and/or technological solutions which are consistent with the concept of sustainable development.
Cost-effective method of manufacturing a 3D MEMS optical switch
NASA Astrophysics Data System (ADS)
Carr, Emily; Zhang, Ping; Keebaugh, Doug; Chau, Kelvin
2009-02-01
growth of data and video transport networks. All-optical switching eliminates the need for optical-electrical conversion offering the ability to switch optical signals transparently: independent of data rates, formats and wavelength. It also provides network operators much needed automation capabilities to create, monitor and protect optical light paths. To further accelerate the market penetration, it is necessary to identify a path to reduce the manufacturing cost significantly as well as enhance the overall system performance, uniformity and reliability. Currently, most MEMS optical switches are assembled through die level flip-chip bonding with either epoxies or solder bumps. This is due to the alignment accuracy requirements of the switch assembly, defect matching of individual die, and cost of the individual components. In this paper, a wafer level assembly approach is reported based on silicon fusion bonding which aims to reduce the packaging time, defect count and cost through volume production. This approach is successfully demonstrated by the integration of two 6-inch wafers: a mirror array wafer and a "snap-guard" wafer, which provides a mechanical structure on top of the micromirror to prevent electrostatic snap-down. The direct silicon-to-silicon bond eliminates the CTEmismatch and stress issues caused by non-silicon bonding agents. Results from a completed integrated switch assembly will be presented, which demonstrates the reliability and uniformity of some key parameters of this MEMS optical switch.
Reusability Studies for Ares I and Ares V Propulsion
NASA Technical Reports Server (NTRS)
Williams, Thomas J.; Priskos, Alex S.; Schorr, Andrew A.; Barrett, Gregory
2008-01-01
With a mission to continue to support the goals of the International Space Station (ISS) and explore beyond Earth orbit, the United States National Aeronautics and Space Administration (NASA) is in the process of launching an entirely new space exploration initiative, the Constellation Program. Even as the Space Shuttle moves toward its final voyage, Constellation is building from nearly half a century of NASA spaceflight experience, and technological advances, including the legacy of Shuttle and earlier programs such as Apollo and the Saturn V rocket. Out of Constellation will come two new launch vehicles: the Ares I crew launch vehicle and the Ares V cargo launch vehicle. With the initial goal to seamlessly continue where the Space Shuttle leaves off, Ares will firstly service the Space Station. Ultimately, however, the intent is to push further: to establish an outpost on the Moon, and then to explore other destinations. With significant experience and a strong foundation in aerospace, NASA is now progressing toward the final design of the First Stage propulsion system for the Ares I. The new launch vehicle design will considerably increase safety and reliability, reduce the cost of accessing space, and provide a viable growth path for human space exploration. To achieve these goals, NASA is taking advantage of Space Shuttle hardware, safety, reliability, and experience. With efforts to minimize technical risk and life-cycle costs, the First Stage office is again pulling from NASA's strong legacy in aerospace exploration and development, most specifically the Space Shuttle Program. Trade studies have been conducted to evaluate lifecycle costs, expendability, and risk reduction. While many first stage features have already been determined, these trade studies are helping to resolve the operational requisites and configuration of the first stage element. This paper first presents an overview of the Ares missions and the genesis of the Ares vehicle design. It then looks at one of the most important trade studies to date, the "Ares I First Stage Expendability Trade Study." The purpose of this study was to determine the utility of flying the first stage as an expendable booster rather than making it reusable. To lower the study complexity, four operational scenarios (or cases) were defined. This assessment then included an evaluation of the development, reliability, performance, and transition impacts associated with an expendable solution. The paper looks at these scenarios from the perspectives of cost, reliability, and performance. The presentation provides an overview of the paper.
Reusability Studies for Ares I and Ares V Propulsion
NASA Technical Reports Server (NTRS)
Williams, Thomas J.; Priskos, Alex S.; Schorr, Andrew A.; Barrett, Greg
2008-01-01
With a mission to continue to support the goals of the International Space Station (ISS) and explore beyond Earth orbit, the United States National Aeronautics and Space Administration (NASA) is in the process of launching an entirely new space exploration initiative, the Constellation Program. Even as the Space Shuttle moves toward its final voyage, Constellation is building from nearly half a century of NASA spaceflight experience, and technological advances, including the legacy of Shuttle and earlier programs such as Apollo and the Saturn V rocket. Out of Constellation will come two new launch vehicles: the Ares I crew launch vehicle and the Ares V cargo launch vehicle. With the initial goal to seamlessly continue where the Space Shuttle leaves off, Ares will firstly service the Space Station. Ultimately, however, the intent is to push further: to establish an outpost on the Moon, and then to explore other destinations. With significant experience and a strong foundation in aerospace, NASA is now progressing toward the final design of the First Stage propulsion system for the Ares I. The new launch vehicle design will considerably increase safety and reliability, reduce the cost of accessing space, and provide a viable growth path for human space exploration. To achieve these goals, NASA is taking advantage of Space Shuttle hardware, safety, reliability, and experience. With efforts to minimize technical risk and life-cycle costs, the First Stage office is again pulling from NASA s strong legacy in aerospace exploration and development, most specifically the Space Shuttle Program. Trade studies have been conducted to evaluate life-cycle costs, expendability, and risk reduction. While many first stage features have already been determined, these trade studies are helping to resolve the operational requisites and configuration of the first stage element. This paper first presents an overview of the Ares missions and the genesis of the Ares vehicle design. It then looks at one of the most important trade studies to date, the "Ares I First Stage Expendability Trade Study." The purpose of this study was to determine the utility of flying the first stage as an expendable booster rather than making it reusable. To lower the study complexity, four operational scenarios (or cases) were defined. This assessment then included an evaluation of the development, reliability, performance, and transition impacts associated with an expendable solution. This paper looks at these scenarios from the perspectives of cost, reliability, and performance.
NASA Astrophysics Data System (ADS)
Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming
2018-01-01
This paper proposes a fast reliability assessing method for distribution grid with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance respectively, and the models of wind farm, solar park and local load are built for reliability assessment. Then based on power system production cost simulation probability discretization and linearization power flow, a optimal power flow objected with minimum cost of conventional power generation is to be resolved. Thus a reliability assessment for distribution grid is implemented fast and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability index, a simulation for IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessing method calculates the reliability index much faster with the accuracy ensured when compared with Monte Carlo method.
Electrical service reliability: the customer perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samsa, M.E.; Hub, K.A.; Krohm, G.C.
1978-09-01
Electric-utility-system reliability criteria have traditionally been established as a matter of utility policy or through long-term engineering practice, generally with no supportive customer cost/benefit analysis as justification. This report presents results of an initial study of the customer perspective toward electric-utility-system reliability, based on critical review of over 20 previous and ongoing efforts to quantify the customer's value of reliable electric service. A possible structure of customer classifications is suggested as a reasonable level of disaggregation for further investigation of customer value, and these groups are characterized in terms of their electricity use patterns. The values that customers assign tomore » reliability are discussed in terms of internal and external cost components. A list of options for effecting changes in customer service reliability is set forth, and some of the many policy issues that could alter customer-service reliability are identified.« less
Tabard-Fougère, Anne; Bonnefoy-Mazure, Alice; Hanquinet, Sylviane; Lascombes, Pierre; Armand, Stéphane; Dayer, Romain
2017-01-15
Test-retest study. This study aimed to evaluate the validity and reliability of rasterstereography in patients with adolescent idiopathic scoliosis (AIS) with a major curve Cobb angle (CA) between 10° and 40° for frontal, sagittal, and transverse parameters. Previous studies evaluating the validity and reliability of rasterstereography concluded that this technique had good accuracy compared with radiographs and a high intra- and interday reliability in healthy volunteers. To the best of our knowledge, the validity and reliability have not been assessed in AIS patients. Thirty-five adolescents with AIS (male = 13) aged 13.1 ± 2.0 years were included. To evaluate the validity of the scoliosis angle (SA) provided by rasterstereography, a comparison (t test, Pearson correlation) was performed with the CA obtained using 2D EOS® radiography (XR). Three rasterstereographic repeated measurements were independently performed by two operators on the same day (interrater reliability) and again by the first operator 1 week later (intrarater reliability). The variables of interest were the SA, lumbar lordosis, and thoracic kyphosis angle, trunk length, pelvic obliquity, and maximum, root mean square and amplitude of vertebral rotations. The data analyses used intraclass correlation coefficients (ICCs). The CA and SA were strongly correlated (R = 0.70) and were nonsignificantly different (P = 0.60). The intrarater reliability (same day: ICC [1, 1], n = 35; 1 week later: ICC [1, 3], n = 28) and interrater reliability (ICC [3, 3], n = 16) were globally excellent (ICC > 0.75) except for the assessment of pelvic obliquity. This study showed that the rasterstereographic system allows for the evaluation of AIS patients with a good validity compared with XR with an overall excellent intra- and interrater reliability. Based on these results, this automatic, fast, and noninvasive system can be used for monitoring the evolution of AIS in growing patients instead of repetitive radiographs, thereby reducing radiation exposure and decreasing costs. 4.
Design-for-reliability (DfR) of aerospace electronics: Attributes and challenges
NASA Astrophysics Data System (ADS)
Bensoussan, A.; Suhir, E.
The next generation of multi-beam satellite systems that would be able to provide effective interactive communication services will have to operate within a highly flexible architecture. One option to develop such flexibility is to employ microwaves and/or optoelectronic components and to make them reliable. The use of optoelectronic devices, equipments and systems will result indeed in significant improvement in the state-of-the-art only provided that the new designs will suggest a novel and effective architecture that will combine the merits of good functional performance, satisfactory mechanical (structural) reliability and high cost effectiveness. The obvious challenge is the ability to design and fabricate equipment based on EEE components that would be able to successfully withstand harsh space environments for the entire duration of the mission. It is imperative that the major players in the space industry, such as manufacturers, industrial users, and space agencies, understand the importance and the limits of the achievable quality and reliability of optoelectronic devices operated in harsh environments. It is equally imperative that the physics of possible failures is well understood and, if necessary, minimized, and that adequate Quality Standards are developed and employed. The space community has to identify and to develop the strategic approach for validating optoelectronic products. This should be done with consideration of numerous intrinsic and extrinsic requirements for the systems' performance. When considering a particular next generation optoelectronic space system, the space community needs to address the following major issues: proof of concept for this system, proof of reliability and proof of performance. This should be done with taking into account the specifics of the anticipated application. High operational reliability cannot be left to the prognostics and health monitoring/management (PHM) effort and stage, no matter how important and - ffective such an effort might be. Reliability should be pursued at all the stages of the equipment lifetime: design, product development, manufacturing, burn-in testing and, of course, subsequent PHM after the space apparatus is launched and operated.
Study of prototypes of LFoundry active CMOS pixels sensors for the ATLAS detector
NASA Astrophysics Data System (ADS)
Vigani, L.; Bortoletto, D.; Ambroz, L.; Plackett, R.; Hemperek, T.; Rymaszewski, P.; Wang, T.; Krueger, H.; Hirono, T.; Caicedo Sierra, I.; Wermes, N.; Barbero, M.; Bhat, S.; Breugnon, P.; Chen, Z.; Godiot, S.; Pangaud, P.; Rozanov, A.
2018-02-01
Current high energy particle physics experiments at the LHC use hybrid silicon detectors, in both pixel and strip configurations, for their inner trackers. These detectors have proven to be very reliable and performant. Nevertheless, there is great interest in depleted CMOS silicon detectors, which could achieve a similar performance at lower cost of production. We present recent developments of this technology in the framework of the ATLAS CMOS demonstrator project. In particular, studies of two active sensors from LFoundry, CCPD_LF and LFCPIX, are shown.
NASA Technical Reports Server (NTRS)
Trevino, Luis; Brown, Terry; Crumbley, R. T. (Technical Monitor)
2001-01-01
The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to improve overall vehicle system safety, reliability, and rocket engine performance by development of a qualitative and reliable engine control system (QRECS). Specifically, this will be addressed by enhancing rocket engine control using SCT, innovative data mining tools, and sound software engineering practices used in Marshall's Flight Software Group (FSG). The principle goals for addressing the issue of quality are to improve software management, software development time, software maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control methodologies, but to provide alternative design choices for control, implementation, performance, and sustaining engineering, all relative to addressing the issue of reliability. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion (system level), software engineering for embedded flight software systems, and soft computing technologies (i.e., neural networks, fuzzy logic, data mining, and Bayesian belief networks); some of which are briefed in this paper. For this effort, the targeted demonstration rocket engine testbed is the MC-1 engine (formerly FASTRAC) which is simulated with hardware and software in the Marshall Avionics & Software Testbed (MAST) laboratory that currently resides at NASA's Marshall Space Flight Center, building 4476, and is managed by the Avionics Department. A brief plan of action for design, development, implementation, and testing a Phase One effort for QRECS is given, along with expected results. Phase One will focus on development of a Smart Start Engine Module and a Mainstage Engine Module for proper engine start and mainstage engine operations. The overall intent is to demonstrate that by employing soft computing technologies, the quality and reliability of the overall scheme to engine controller development is further improved and vehicle safety is further insured. The final product that this paper proposes is an approach to development of an alternative low cost engine controller that would be capable of performing in unique vision spacecraft vehicles requiring low cost advanced avionics architectures for autonomous operations from engine pre-start to engine shutdown.
Li, Tuan; Zhang, Hongping; Niu, Xiaoji; Gao, Zhouzheng
2017-01-01
Dual-frequency Global Positioning System (GPS) Real-time Kinematics (RTK) has been proven in the past few years to be a reliable and efficient technique to obtain high accuracy positioning. However, there are still challenges for GPS single-frequency RTK, such as low reliability and ambiguity resolution (AR) success rate, especially in kinematic environments. Recently, multi-Global Navigation Satellite System (multi-GNSS) has been applied to enhance the RTK performance in terms of availability and reliability of AR. In order to further enhance the multi-GNSS single-frequency RTK performance in terms of reliability, continuity and accuracy, a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) is adopted in this contribution. We tightly integrate the single-frequency GPS/BeiDou/GLONASS and MEMS-IMU through the extended Kalman filter (EKF), which directly fuses the ambiguity-fixed double-differenced (DD) carrier phase observables and IMU data. A field vehicular test was carried out to evaluate the impacts of the multi-GNSS and IMU on the AR and positioning performance in different system configurations. Test results indicate that the empirical success rate of single-epoch AR for the tightly-coupled single-frequency multi-GNSS RTK/INS integration is over 99% even at an elevation cut-off angle of 40°, and the corresponding position time series is much more stable in comparison with the GPS solution. Besides, GNSS outage simulations show that continuous positioning with certain accuracy is possible due to the INS bridging capability when GNSS positioning is not available. PMID:29077070
Li, Tuan; Zhang, Hongping; Niu, Xiaoji; Gao, Zhouzheng
2017-10-27
Dual-frequency Global Positioning System (GPS) Real-time Kinematics (RTK) has been proven in the past few years to be a reliable and efficient technique to obtain high accuracy positioning. However, there are still challenges for GPS single-frequency RTK, such as low reliability and ambiguity resolution (AR) success rate, especially in kinematic environments. Recently, multi-Global Navigation Satellite System (multi-GNSS) has been applied to enhance the RTK performance in terms of availability and reliability of AR. In order to further enhance the multi-GNSS single-frequency RTK performance in terms of reliability, continuity and accuracy, a low-cost micro-electro-mechanical system (MEMS) inertial measurement unit (IMU) is adopted in this contribution. We tightly integrate the single-frequency GPS/BeiDou/GLONASS and MEMS-IMU through the extended Kalman filter (EKF), which directly fuses the ambiguity-fixed double-differenced (DD) carrier phase observables and IMU data. A field vehicular test was carried out to evaluate the impacts of the multi-GNSS and IMU on the AR and positioning performance in different system configurations. Test results indicate that the empirical success rate of single-epoch AR for the tightly-coupled single-frequency multi-GNSS RTK/INS integration is over 99% even at an elevation cut-off angle of 40°, and the corresponding position time series is much more stable in comparison with the GPS solution. Besides, GNSS outage simulations show that continuous positioning with certain accuracy is possible due to the INS bridging capability when GNSS positioning is not available.
Jones, Stephanie A H; Butler, Beverly C; Kintzel, Franziska; Johnson, Anne; Klein, Raymond M; Eskes, Gail A
2016-01-01
Attention is an important, multifaceted cognitive domain that has been linked to three distinct, yet interacting, networks: alerting, orienting, and executive control. The measurement of attention and deficits of attention within these networks is critical to the assessment of many neurological and psychiatric conditions in both research and clinical settings. The Dalhousie Computerized Attention Battery (DalCAB) was created to assess attentional functions related to the three attention networks using a range of tasks including: simple reaction time, go/no-go, choice reaction time, dual task, flanker, item and location working memory, and visual search. The current study provides preliminary normative data, test-retest reliability (intraclass correlations) and practice effects in DalCAB performance 24-h after baseline for healthy young adults (n = 96, 18-31 years). Performance on the DalCAB tasks demonstrated Good to Very Good test-retest reliability for mean reaction time, while accuracy and difference measures (e.g., switch costs, interference effects, and working memory load effects) were most reliable for tasks that require more extensive cognitive processing (e.g., choice reaction time, flanker, dual task, and conjunction search). Practice effects were common and pronounced at the 24-h interval. In addition, performance related to specific within-task parameters of the DalCAB sub-tests provides preliminary support for future formal assessment of the convergent validity of our interpretation of the DalCAB as a potential clinical and research assessment tool for measuring aspects of attention related to the alerting, orienting, and executive control networks.
Advanced Life Support System Value Metric
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Rasky, Daniel J. (Technical Monitor)
1999-01-01
The NASA Advanced Life Support (ALS) Program is required to provide a performance metric to measure its progress in system development. Extensive discussions within the ALS program have led to the following approach. The Equivalent System Mass (ESM) metric has been traditionally used and provides a good summary of the weight, size, and power cost factors of space life support equipment. But ESM assumes that all the systems being traded off exactly meet a fixed performance requirement, so that the value and benefit (readiness, performance, safety, etc.) of all the different systems designs are considered to be exactly equal. This is too simplistic. Actual system design concepts are selected using many cost and benefit factors and the system specification is defined after many trade-offs. The ALS program needs a multi-parameter metric including both the ESM and a System Value Metric (SVM). The SVM would include safety, maintainability, reliability, performance, use of cross cutting technology, and commercialization potential. Another major factor in system selection is technology readiness level (TRL), a familiar metric in ALS. The overall ALS system metric that is suggested is a benefit/cost ratio, SVM/[ESM + function (TRL)], with appropriate weighting and scaling. The total value is given by SVM. Cost is represented by higher ESM and lower TRL. The paper provides a detailed description and example application of a suggested System Value Metric and an overall ALS system metric.
Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load
NASA Astrophysics Data System (ADS)
Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.
2008-09-01
A superconducting magnetic storage system (SMES) has some advantages such as rapid large power response and high storage efficiency which are superior to other energy storage systems. A flywheel motor generator (FWMG) has large scaled capacity and high reliability, and hence is broadly utilized for a large pulsed load, while it has comparatively low storage efficiency due to high mechanical loss compared with SMES. A fusion power plant such as International Thermo-Nuclear Experimental Reactor (ITER) requires a large and long pulsed load which causes a frequency deviation in a utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load, which equips the SMES and the FWMG with the utility power system. We evaluate installation cost and frequency control performance of three power systems combined with energy storage devices; (i) SMES with the utility power, (ii) FWMG with the utility power, (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency power control performance but its installation cost is high. The second system has inferior frequency control performance but its installation cost is the lowest. The third system has good frequency control performance and its installation cost is attained lower than the first power system by adjusting the ratio between SMES and FWMG.
PV O&M Cost Model and Cost Reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Andy
This is a presentation on PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering estimating PV O&M costs, polynomial expansion, and implementation of Net Present Value (NPV) and reserve account in cost models.
Smart Sensor Demonstration Payload
NASA Technical Reports Server (NTRS)
Schmalzel, John; Bracey, Andrew; Rawls, Stephen; Morris, Jon; Turowski, Mark; Franzl, Richard; Figueroa, Fernando
2010-01-01
Sensors are a critical element to any monitoring, control, and evaluation processes such as those needed to support ground based testing for rocket engine test. Sensor applications involve tens to thousands of sensors; their reliable performance is critical to achieving overall system goals. Many figures of merit are used to describe and evaluate sensor characteristics; for example, sensitivity and linearity. In addition, sensor selection must satisfy many trade-offs among system engineering (SE) requirements to best integrate sensors into complex systems [1]. These SE trades include the familiar constraints of power, signal conditioning, cabling, reliability, and mass, and now include considerations such as spectrum allocation and interference for wireless sensors. Our group at NASA s John C. Stennis Space Center (SSC) works in the broad area of integrated systems health management (ISHM). Core ISHM technologies include smart and intelligent sensors, anomaly detection, root cause analysis, prognosis, and interfaces to operators and other system elements [2]. Sensor technologies are the base fabric that feed data and health information to higher layers. Cost-effective operation of the complement of test stands benefits from technologies and methodologies that contribute to reductions in labor costs, improvements in efficiency, reductions in turn-around times, improved reliability, and other measures. ISHM is an active area of development at SSC because it offers the potential to achieve many of those operational goals [3-5].
Health management and controls for Earth-to-orbit propulsion systems
NASA Astrophysics Data System (ADS)
Bickford, R. L.
1995-03-01
Avionics and health management technologies increase the safety and reliability while decreasing the overall cost for Earth-to-orbit (ETO) propulsion systems. New ETO propulsion systems will depend on highly reliable fault tolerant flight avionics, advanced sensing systems and artificial intelligence aided software to ensure critical control, safety and maintenance requirements are met in a cost effective manner. Propulsion avionics consist of the engine controller, actuators, sensors, software and ground support elements. In addition to control and safety functions, these elements perform system monitoring for health management. Health management is enhanced by advanced sensing systems and algorithms which provide automated fault detection and enable adaptive control and/or maintenance approaches. Aerojet is developing advanced fault tolerant rocket engine controllers which provide very high levels of reliability. Smart sensors and software systems which significantly enhance fault coverage and enable automated operations are also under development. Smart sensing systems, such as flight capable plume spectrometers, have reached maturity in ground-based applications and are suitable for bridging to flight. Software to detect failed sensors has reached similar maturity. This paper will discuss fault detection and isolation for advanced rocket engine controllers as well as examples of advanced sensing systems and software which significantly improve component failure detection for engine system safety and health management.
EELV Booster Assist Options for CEV
NASA Technical Reports Server (NTRS)
McNeal, Curtis, Jr.
2005-01-01
Medium lift EELVs may still play a role in manned space flight. To be considered for manned flight, medium lift EELVs must address the short comings in their current boost assist motors. Two options exist: redesign and requalify the solid rocket motors. Replace solid rocket motors (SRMs) with hybrid rocket motors. Hybrid rocket motors are an attractive alternative. They are safer than SRMs. The TRL's Lockheed Martin Small Launch Vehicle booster development substantially lowers the development risk, cost risk, and the schedule risk for developing hybrid boost assist for EELVs. Hybrid boosters testability offsets SRMs higher inherent reliability.Hybrid booster development and recurring costs are lower than SRMs. Performance gains are readily achieved.
Reliability models: the influence of model specification in generation expansion planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1982-10-01
This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusionsmore » are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.« less
Commercialized VCSEL components fabricated at TrueLight Corporation
NASA Astrophysics Data System (ADS)
Pan, Jin-Shan; Lin, Yung-Sen; Li, Chao-Fang A.; Chang, C. H.; Wu, Jack; Lee, Bor-Lin; Chuang, Y. H.; Tu, S. L.; Wu, Calvin; Huang, Kai-Feng
2001-05-01
TrueLight Corporation was found in 1997 and it is the pioneer of VCSEL components supplier in Taiwan. We specialize in the production and distribution of VCSEL (Vertical Cavity Surface Emitting Laser) and other high-speed PIN-detector devices and components. Our core technology is developed to meet blooming demand of fiber optic transmission. Our intention is to diverse the device application into data communication, telecommunication and industrial markets. One mission is to provide the high performance, highly reliable and low-cost VCSEL components for data communication and sensing applications. For the past three years, TrueLight Corporation has entered successfully into the Gigabit Ethernet and the Fiber Channel data communication area. In this paper, we will focus on the fabrication of VCSEL components. We will present you the evolution of implanted and oxide-confined VCSEL process, device characterization, also performance in Gigabit data communication and the most important reliability issue
Study for analysis of benefit versus cost of low thrust propulsion system
NASA Technical Reports Server (NTRS)
Hamlyn, K. M.; Robertson, R. I.; Rose, L. J.
1983-01-01
The benefits and costs associated with placing large space systems (LSS) in operational orbits were investigated, and a flexible computer model for analyzing these benefits and costs was developed. A mission model for LSS was identified that included both NASA/Commercial and DOD missions. This model included a total of 68 STS launches for the NASA/Commercial missions and 202 launches for the DOD missions. The mission catalog was of sufficient depth to define the structure type, mass and acceleration limits of each LSS. Conceptual primary propulsion stages (PPS) designs for orbital transfer were developed for three low thrust LO2/LH2 engines baselined for the study. The performance characteristics for each of these PPS was compared to the LSS mission catalog to create a mission capture. The costs involved in placing the LSS in their operational orbits were identified. The two primary costs were that of the PPS and of the STS launch. The cost of the LSS was not included as it is not a function of the PPS performance. The basic relationships and algorithms that could be used to describe the costs were established. The benefit criteria for the mission model were also defined. These included mission capture, reliability, technical risk, development time, and growth potential. Rating guidelines were established for each parameter. For flexibility, each parameter is assigned a weighting factor.
Task switching costs in preschool children and adults.
Peng, Anna; Kirkham, Natasha Z; Mareschal, Denis
2018-08-01
Past research investigating cognitive flexibility has shown that preschool children make many perseverative errors in tasks that require switching between different sets of rules. However, this inflexibility might not necessarily hold with easier tasks. The current study investigated the developmental differences in cognitive flexibility using a task-switching procedure that compared reaction times and accuracy in 4- and 6-year-olds with those in adults. The experiment involved simple target detection tasks and was intentionally designed in a way that the stimulus and response conflicts were minimal together with a long preparation window. Global mixing costs (performance costs when multiple tasks are relevant in a context), and local switch costs (performance costs due to switching to an alternative task) are typically thought to engage endogenous control processes. If this is the case, we should observe developmental differences with both of these costs. Our results show, however, that when the accuracy was good, there were no age differences in cognitive flexibility (i.e., the ability to manage multiple tasks and to switch between tasks) between children and adults. Even though preschool children had slower reaction times and were less accurate, the mixing and switch costs associated with task switching were not reliably larger for preschool children. Preschool children did, however, show more commission errors and greater response repetition effects than adults, which may reflect differences in inhibitory control. Copyright © 2018 Elsevier Inc. All rights reserved.
Critical issues in assuring long lifetime and fail-safe operation of optical communications network
NASA Astrophysics Data System (ADS)
Paul, Dilip K.
1993-09-01
Major factors in assuring long lifetime and fail-safe operation in optical communications networks are reviewed in this paper. Reliable functionality to design specifications, complexity of implementation, and cost are the most critical issues. As economics is the driving force to set the goals as well as priorities for the design, development, safe operation, and maintenance schedules of reliable networks, a balance is sought between the degree of reliability enhancement, cost, and acceptable outage of services. Protecting both the link and the network with high reliability components, hardware duplication, and diversity routing can ensure the best network availability. Case examples include both fiber optic and lasercom systems. Also, the state-of-the-art reliability of photonics in space environment is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stettenheim, Joel
Norwich Technologies (NT) is developing a disruptively superior solar field for trough concentrating solar power (CSP). Troughs are the leading CSP technology (85% of installed capacity), being highly deployable and similar to photovoltaic (PV) systems for siting. NT has developed the SunTrap receiver, a disruptive alternative to vacuum-tube concentrating solar power (CSP) receivers, a market currently dominated by the Schott PTR-70. The SunTrap receiver will (1) operate at higher temperature (T) by using an insulated, recessed radiation-collection system to overcome the energy losses that plague vacuum-tube receivers at high T, (2) decrease acquisition costs via simpler structure, and (3) dramaticallymore » increase reliability by eliminating vacuum. It offers comparable optical efficiency with thermal loss reduction from ≥ 26% (at presently standard T) to ≥ 55% (at high T), lower acquisition costs, and near-zero O&M costs.« less
Implementation and Testing of Low Cost Uav Platform for Orthophoto Imaging
NASA Astrophysics Data System (ADS)
Brucas, D.; Suziedelyte-Visockiene, J.; Ragauskas, U.; Berteska, E.; Rudinskas, D.
2013-08-01
Implementation of Unmanned Aerial Vehicles for civilian applications is rapidly increasing. Technologies which were expensive and available only for military use have recently spread on civilian market. There is a vast number of low cost open source components and systems for implementation on UAVs available. Using of low cost hobby and open source components ensures considerable decrease of UAV price, though in some cases compromising its reliability. In Space Science and Technology Institute (SSTI) in collaboration with Vilnius Gediminas Technical University (VGTU) researches have been performed in field of constructing and implementation of small UAVs composed of low cost open source components (and own developments). Most obvious and simple implementation of such UAVs - orthophoto imaging with data download and processing after the flight. The construction, implementation of UAVs, flight experience, data processing and data implementation will be further covered in the paper and presentation.
Revenue Sufficiency and Reliability in a Zero Marginal Cost Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany A.
Features of existing wholesale electricity markets, such as administrative pricing rules and policy-based reliability standards, can distort market incentives from allowing generators sufficient opportunities to recover both fixed and variable costs. Moreover, these challenges can be amplified by other factors, including (1) inelastic demand resulting from a lack of price signal clarity, (2) low- or near-zero marginal cost generation, particularly arising from low natural gas fuel prices and variable generation (VG), such as wind and solar, and (3) the variability and uncertainty of this VG. As power systems begin to incorporate higher shares of VG, many questions arise about themore » suitability of the existing marginal-cost-based price formation, primarily within an energy-only market structure, to ensure the economic viability of resources that might be needed to provide system reliability. This article discusses these questions and provides a summary of completed and ongoing modelling-based work at the National Renewable Energy Laboratory to better understand the impacts of evolving power systems on reliability and revenue sufficiency.« less