System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., object-oriented) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that their products meet NASA requirements for reliability measurement. The new software reliability models of the last decade need to be brought into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS^3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability models to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models, which could then be incorporated in a tool such as SMERFS^3. This tool, with better models, would greatly add value in assessing GSFC projects.
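To make the growth-modeling discussion above concrete, the sketch below fits the classic Goel-Okumoto NHPP model, one of the model families implemented in SMERFS-class tools, to failure-time data by maximum likelihood. The failure times, optimizer choice, and starting values are illustrative assumptions, not data from the report.

```python
import numpy as np
from scipy.optimize import minimize

# Goel-Okumoto NHPP: expected cumulative failures m(t) = a * (1 - exp(-b t)),
# where a = expected total faults and b = per-fault detection rate.
def neg_log_likelihood(params, times, T):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    n = len(times)
    # sum of log-intensities at the observed failure times, minus m(T)
    return -(n * np.log(a * b) - b * np.sum(times) - a * (1 - np.exp(-b * T)))

# Hypothetical failure times (hours) observed during test, up to T = 1000 h.
times = np.array([12, 45, 80, 160, 250, 330, 480, 600, 850], dtype=float)
T = 1000.0

res = minimize(neg_log_likelihood, x0=[2.0 * len(times), 1e-3],
               args=(times, T), method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.2e}/h")
print(f"current intensity: {a_hat * b_hat * np.exp(-b_hat * T):.5f} failures/h")
```

The fitted a and b yield the expected number of residual faults and the current failure intensity, the quantities a practitioner would track from build to build.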
Reliability and coverage analysis of non-repairable fault-tolerant memory systems
NASA Technical Reports Server (NTRS)
Cox, G. W.; Carroll, B. D.
1976-01-01
A method was developed for the construction of probabilistic state-space models for nonrepairable systems. Models were developed for several systems which achieved reliability improvement by means of error-coding, modularized sparing, massive replication and other fault-tolerant techniques. From the models developed, sets of reliability and coverage equations for the systems were developed. Comparative analyses of the systems were performed using these equation sets. In addition, the effects of varying subunit reliabilities on system reliability and coverage were described. The results of these analyses indicated that a significant gain in system reliability may be achieved by use of combinations of modularized sparing, error coding, and software error control. For sufficiently reliable system subunits, this gain may far exceed the reliability gain achieved by use of massive replication techniques, yet result in a considerable saving in system cost.
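The coverage conclusion above can be illustrated with the standard Markov result for a duplex (two active units) configuration. This is a generic textbook model rather than the paper's specific memory-system models, and the failure rate is an assumed placeholder.

```python
import numpy as np

def duplex_reliability(t, lam, c):
    """Two active identical units, each with constant failure rate lam.
    The first failure is survived only if it is covered (probability c);
    any second failure is fatal. Markov solution:
    R(t) = exp(-2*lam*t) + 2*c*(exp(-lam*t) - exp(-2*lam*t))."""
    return np.exp(-2 * lam * t) + 2 * c * (np.exp(-lam * t) - np.exp(-2 * lam * t))

lam = 1e-4  # assumed failures per hour
for c in (1.0, 0.99, 0.95):
    print(f"c = {c}: R(10,000 h) = {duplex_reliability(1e4, lam, c):.5f}")
```

Even a one-percent coverage shortfall visibly erodes the redundancy gain, which is why coverage, and not just replication, dominates for highly reliable subunits.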
Statistical modelling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1991-01-01
During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies the traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on the development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
Development of a Conservative Model Validation Approach for Reliable Analysis
2015-01-01
CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 Development of a Conservative Model Validation Approach for Reliable... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
Tracking reliability for space cabin-borne equipment in development by Crow model.
Chen, J D; Jiao, S J; Sun, H L
2001-12-01
Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, composed of the Crow model and the test data conversion (TDC) method, was used. Result. The estimated and predicted values of the reliability growth conformed to expectations. Conclusion. The method can dynamically estimate and predict the reliability of the equipment by making full use of various test information gathered in the course of development. It offers not only a means of tracking equipment reliability growth, but also a reference for quality control in the design and development of manned spaceflight cabin-borne equipment.
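For readers unfamiliar with it, the Crow model (the AMSAA model) treats failures during development as a nonhomogeneous Poisson process with mean function m(t) = λt^β, where β < 1 indicates reliability growth, and its maximum likelihood fit is closed-form. A minimal sketch with hypothetical failure times:

```python
import numpy as np

def crow_amsaa_fit(failure_times, T):
    """MLE for the Crow (AMSAA) model m(t) = lam * t**beta for a
    time-truncated test ending at T. beta < 1 means reliability growth."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T**beta
    return lam, beta

times = [35, 110, 240, 490, 900, 1400, 2150]  # hypothetical test failures (h)
lam, beta = crow_amsaa_fit(times, T=2500.0)
mtbf_now = 1.0 / (lam * beta * 2500.0 ** (beta - 1.0))  # 1 / instantaneous intensity
print(f"beta = {beta:.2f}, demonstrated MTBF at T = {mtbf_now:.0f} h")
```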
An Evaluation Method of Equipment Reliability Configuration Management
NASA Astrophysics Data System (ADS)
Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan
2018-01-01
At present, many equipment development companies are aware of the great significance of reliability in equipment development, but due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management determines the reliability management capabilities of an equipment development company. Reliability is not only designed in, but also achieved through management. This paper evaluates reliability management capabilities with a reliability configuration capability maturity model (RCM-CMM) evaluation method.
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, in parallel, or in a combination of series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical reliability models that are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series, and combination systems. The models are developed by assuming the failure rates of the components to be functions of time and, as such, can be applied to processes with or without aging effects. The reliability models are further specialized to the Space Telescope Solar Array (STSA) system. The STSA consists of 20 identical solar panel assemblies (SPAs). The reliabilities of the SPAs are determined by the reliabilities of solar cell strings, interconnects, and diodes. Estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given in ESA documents. Aging effects in relation to breaks in interconnects are discussed.
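The k-of-n construction described above has a simple closed form when components are independent and identical. The sketch below uses the convention that the subsystem fails once k components have failed (the abstract may pair the special cases with the opposite convention); in the time-dependent setting, r would be the component reliability r(t) produced by the paper's failure-rate models.

```python
from math import comb

def subsystem_reliability(n, k, r):
    """Subsystem of n i.i.d. components that fails once k have failed,
    with component reliability r at the time of interest.
    k = 1 gives the series case r**n; k = n gives 1 - (1 - r)**n (parallel)."""
    return sum(comb(n, i) * (1 - r) ** i * r ** (n - i) for i in range(k))

r = 0.99
print(subsystem_reliability(5, 1, r))  # series of 5:   0.99**5
print(subsystem_reliability(5, 5, r))  # parallel of 5: 1 - 0.01**5
print(subsystem_reliability(5, 3, r))  # fails on the third component failure
```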
Software reliability models for critical applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pham, H.; Pham, M.
This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault-tolerant software reliability models and their related issues, (2) a proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second phase. 407 refs., 4 figs., 2 tabs.
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., computer time) before the results are reliable. However, construction of confidence intervals (CIs) about critical output parameters from the simulation model makes it possible to determine the point at which model results are reliable. If the results are...
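The run-length question above is usually answered with a sequential stopping rule: keep replicating until the confidence-interval half-width falls below a target precision. A minimal sketch assuming independent replications; the toy output model and the 5% relative-precision target are illustrative.

```python
import random
import statistics

def run_until_reliable(simulate_once, rel_halfwidth=0.05, batch=100, max_runs=100_000):
    """Replicate a stochastic simulation until the approximate 95% CI
    half-width of the mean output is within rel_halfwidth of the mean."""
    data = []
    while len(data) < max_runs:
        data.extend(simulate_once() for _ in range(batch))
        mean = statistics.fmean(data)
        half = 1.96 * statistics.stdev(data) / len(data) ** 0.5
        if half <= rel_halfwidth * abs(mean):
            break
    return mean, half, len(data)

# Toy output: exponential "time in system" whose mean we want to within 5%.
mean, half, n = run_until_reliable(lambda: random.expovariate(1 / 8.0))
print(f"mean = {mean:.2f} +/- {half:.2f} after {n} replications")
```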
Reliability based fatigue design and maintenance procedures
NASA Technical Reports Server (NTRS)
Hanagud, S.
1977-01-01
A stochastic model has been developed to describe the probability of the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the probability of a crack of a given length at a given location after a certain number of cycles or amount of time. Quantitative estimation with the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective, fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.
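The bridge from a varying hazard rate to a crack probability is the standard survival relation F(t) = 1 - exp(-∫₀ᵗ h(u) du). A short sketch with an assumed increasing (aging) hazard; the scale and shape values are placeholders, not the paper's fitted model.

```python
import numpy as np

def crack_probability(t, h, npts=2001):
    """P(critical crack by time t) = 1 - exp(-integral of hazard h over [0, t]),
    with the integral evaluated by the trapezoid rule."""
    u = np.linspace(0.0, t, npts)
    hu = h(u)
    integral = np.sum((hu[1:] + hu[:-1]) * np.diff(u)) / 2.0
    return 1.0 - np.exp(-integral)

# Assumed Weibull-like aging hazard: h(u) = (b/a) * (u/a)**(b-1).
a, b = 5e4, 2.2  # hypothetical scale (cycles) and shape
h = lambda u: (b / a) * (u / a) ** (b - 1.0)
print(f"P(crack by 20,000 cycles) = {crack_probability(2e4, h):.3f}")
```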
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-18
... NUCLEAR REGULATORY COMMISSION [NRC-2011-0109] NUREG/CR-XXXX, Development of Quantitative Software..., "Development of Quantitative Software Reliability Models for Digital Protection Systems of Nuclear Power Plants"... of Risk Analysis, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission...
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time-to-failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support a decreasing-failure-rate Weibull distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long-duration manned spacecraft for a possible Mars mission. Comparisons to the constant-failure-rate model are presented and the ramifications of doing so are discussed.
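The contrast with the constant-rate model comes down to the Weibull shape parameter: β < 1 gives a failure rate that decreases with operating time, so the Weibull model credits long-running hardware with higher reliability than the exponential model does. A small numeric sketch with assumed, illustrative parameters:

```python
import numpy as np

# Weibull reliability R(t) = exp(-(t/eta)**beta); beta < 1 is the
# decreasing-failure-rate case reported for much space hardware.
def weibull_R(t, beta, eta):
    return np.exp(-((t / eta) ** beta))

t = 8760.0                   # one year of operation, hours
beta, eta = 0.8, 120_000.0   # assumed shape and scale
lam = 1.0 / 100_000.0        # constant-rate model of similar scale
print("Weibull:    ", weibull_R(t, beta, eta))
print("Exponential:", np.exp(-lam * t))
```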
A simulation model for risk assessment of turbine wheels
NASA Technical Reports Server (NTRS)
Safie, Fayssal M.; Hage, Richard T.
1991-01-01
A simulation model has been successfully developed to evaluate the risk of the Space Shuttle auxiliary power unit (APU) turbine wheels for a specific inspection policy. Besides being an effective tool for risk/reliability evaluation, the simulation model also allows the analyst to study the trade-offs between wheel reliability, wheel life, inspection interval, and rejection crack size. For example, in the APU application, sensitivity analysis results showed that the wheel life limit has the least effect on wheel reliability when compared to the effect of the inspection interval and the rejection crack size. In summary, the simulation model developed represents a flexible tool to predict turbine wheel reliability and study the risk under different inspection policies.
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
Methodology for Physics and Engineering of Reliable Products
NASA Technical Reports Server (NTRS)
Cornford, Steven L.; Gibbel, Mark
1996-01-01
Physics-of-failure approaches have gained widespread acceptance within the electronic reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.
Superior model for fault tolerance computation in designing nano-sized circuit systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, N. S. S.; Muthuvalu, M. S.; Asirvadam, V. S.
2014-10-24
As CMOS technology scales nano-metrically, reliability turns out to be a decisive subject in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nano-electronic circuits. The process of computing reliability becomes very troublesome and time consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper first looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Second, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than that by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
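The flavor of a probabilistic gate model is easiest to see on the simplest topology, a chain of single-input gates in which each gate flips its nominally correct output with probability eps. This is a generic von Neumann-style recursion for illustration only, not the generalized PGM or the BDEC formulation of the paper.

```python
def pgm_chain(p_in, eps, n_gates):
    """Probability the chain output is logically correct: at each gate the
    output is correct if the gate behaves (1 - eps) on a correct input, or
    errs (eps) on an already-wrong input."""
    p = p_in
    for _ in range(n_gates):
        p = (1 - eps) * p + eps * (1 - p)
    return p

for n in (1, 5, 20):
    print(n, pgm_chain(1.0, 0.01, n))
```

The output probability decays toward 0.5 with depth, one simple way in which circuit topology, not just the per-gate error rate, enters the reliability measure.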
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
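As a concrete miniature of the Markov state-space method (before any hierarchical decomposition), the sketch below evaluates a non-repairable triple-modular-redundant subsystem by matrix exponential; the per-unit failure rate is an assumed placeholder. Adding components multiplies states, which is exactly the growth problem the thesis attacks.

```python
import numpy as np
from scipy.linalg import expm

# CTMC for a non-repairable TMR subsystem (works while >= 2 of 3 units work).
# States: [3 good, 2 good, failed]; the failed state is absorbing.
lam = 1e-4  # assumed per-unit failure rate, /h
Q = np.array([[-3 * lam, 3 * lam, 0.0],    # 3 good -> 2 good
              [0.0, -2 * lam, 2 * lam],    # 2 good -> failed (majority lost)
              [0.0, 0.0, 0.0]])            # absorbing
p0 = np.array([1.0, 0.0, 0.0])
p = p0 @ expm(Q * 10_000.0)                # state probabilities at t = 10,000 h
print("TMR reliability:", p[0] + p[1])
```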
Analysis of whisker-toughened CMC structural components using an interactive reliability model
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.
1992-01-01
Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic Willam-Warnke failure criterion serves as the theoretical basis for the reliability model presented here. The model has been implemented in a test-bed software program, and this computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.
Comprehensive Design Reliability Activities for Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Christenson, R. L.; Whitley, M. R.; Knight, K. C.
2000-01-01
This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess safe vehicle functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models, using the SVM (Support Vector Machines) algorithm. The preliminary results obtained demonstrate the effectiveness of the proposed approach.
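In reliability terms, the warning criterion reduces to estimating P(max LTR > threshold) under uncertain speed, load, and road inputs. A minimal sketch combining a Monte Carlo failure probability with a simple mean/standard-deviation (Cornell-type) reliability index; the sample distribution and the 0.9 threshold are assumptions for illustration.

```python
import numpy as np

def rollover_risk(ltr_max_samples, threshold=0.9):
    """Monte Carlo estimate of P(max LTR > threshold), plus a Gaussian
    (Cornell-type) reliability index beta = (threshold - mean) / std."""
    x = np.asarray(ltr_max_samples)
    p_fail = np.mean(x > threshold)
    beta = (threshold - x.mean()) / x.std(ddof=1)
    return p_fail, beta

rng = np.random.default_rng(1)
# Stand-in for predicted max LTR over the bend under uncertain inputs.
samples = rng.normal(0.62, 0.12, 20_000)
p_fail, beta = rollover_risk(samples)
print(f"P(rollover) ~ {p_fail:.4f}, reliability index beta = {beta:.2f}")
```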
ERIC Educational Resources Information Center
Goclowski, John C.; And Others
The Reliability, Maintainability, and Cost Model (RMCM) described in this report is an interactive mathematical model with a built-in sensitivity analysis capability. It is a major component of the Life Cycle Cost Impact Model (LCCIM), which was developed as part of the DAIS advanced development program to be used to assess the potential impacts…
The art of fault-tolerant system reliability modeling
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1990-01-01
A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead, the use of several recently developed computer programs (SURE, ASSIST, STEM, and PAWS) which automate the generation and solution of these models is described.
Chen, J D; Sun, H L
1999-04-01
Objective. To dynamically assess and predict the reliability of equipment by making full use of various test information generated in the development of products. Method. A new reliability growth assessment method based on the Army Materiel Systems Analysis Activity (AMSAA) model was developed. The method is composed of the AMSAA model and test data conversion technology. Result. The assessment and prediction results for a space-borne equipment conformed to expectations. Conclusion. It is suggested that this method be further researched and popularized.
Cost Estimation of Software Development and the Implications for the Program Manager
1992-06-01
Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive... function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and... Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome
Estimate of the Reliability in Geological Forecasts for Tunnels: Toward a Structured Approach
NASA Astrophysics Data System (ADS)
Perello, Paolo
2011-11-01
In tunnelling, a reliable geological model often allows an effective design to be produced and the construction phase to be faced without unpleasant surprises. A geological model can be considered reliable when it is a valid support for correctly foreseeing the rock mass behaviour, therefore preventing unexpected events during the excavation. The higher the model reliability, the lower the probability of unforeseen rock mass behaviour. Unfortunately, owing to different reasons, geological models are affected by uncertainties, and a fully reliable knowledge of the rock mass is, in most cases, impossible. Therefore, estimating the degree to which a geological model is reliable becomes a primary requirement in order to save time and money and to adopt the appropriate construction strategy. The definition of geological model reliability is often achieved by engineering geologists through an unstructured analytical process and variable criteria. This paper focuses on geological models for projects of linear underground structures and represents an effort to analyse and include in a conceptual framework the factors influencing such models. An empirical parametric procedure is then developed with the aim of obtaining an index called the "geological model rating (GMR)", which can be used to provide a more standardised definition of the reliability of a geological model.
Constraining uncertainties in water supply reliability in a tropical data scarce basin
NASA Astrophysics Data System (ADS)
Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte
2015-04-01
Assessing water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability are often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data-scarce environment. The approach proposed uses a simple hydrological model but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela, with an area of approximately 2100 km², was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin, the reliability was found to be less than 30% for most months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r > 98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
Reliability of IGBT in a STATCOM for Harmonic Compensation and Power Factor Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
With smart grid integration, there is a need to characterize the reliability of a power system by including the reliability of power semiconductors in grid-related applications. In this paper, the reliability of IGBTs in a STATCOM is presented for two different applications: power factor correction and harmonic elimination. The STATCOM model is developed in EMTP, and analytical equations for the average conduction losses in an IGBT and a diode are derived and compared with experimental data. A commonly used reliability model is then used to predict the reliability of the IGBT.
Validation of urban freeway models. [supporting datasets
DOT National Transportation Integrated Search
2015-01-01
The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...
Towards cost-effective reliability through visualization of the reliability option space
NASA Technical Reports Server (NTRS)
Feather, Martin S.
2004-01-01
In planning a complex system's development there can be many options to improve its reliability. Typically, their total cost exceeds the available budget, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
NASA Technical Reports Server (NTRS)
Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.
2016-01-01
Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with a lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming, so analysts instead rely on available data to assess reliability. Finding data relevant to the specific use and environment of any project is difficult, if not impossible; instead, analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia, and vendors across many areas have spent decades collecting, analyzing, and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but it can also be compartmented by industry and difficult to find out about, access, or manipulate. One method incorporates processes for reviewing these existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes affect the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a better representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may be updated based on test data or observed operational constraints and failures, i.e., using Bayesian techniques. This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and the mathematical use of this data in early reliability estimation and analyses.
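The Bayesian update mentioned at the end has a standard conjugate form when failures are modeled as a Poisson process: a gamma prior built from the analog composite is updated with the project's own failure count and operating hours. The prior and the test evidence below are illustrative assumptions.

```python
from scipy import stats

# Gamma prior on the failure rate (per hour), derived from analog data:
# prior mean alpha0/beta0 = 2e-5 /h with modest weight (2 pseudo-failures).
alpha0, beta0 = 2.0, 1e5

# Project-specific evidence: 1 failure in 30,000 operating hours.
failures, hours = 1, 30_000.0
alpha1, beta1 = alpha0 + failures, beta0 + hours

posterior = stats.gamma(alpha1, scale=1.0 / beta1)
print("posterior mean rate:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))
```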
Reliability Stress-Strength Models for Dependent Observations with Applications in Clinical Trials
NASA Technical Reports Server (NTRS)
Kushary, Debashis; Kulkarni, Pandurang M.
1995-01-01
We consider applications of stress-strength models in studies involving clinical trials. When studying the effects and side effects of certain procedures (treatments), it is often the case that observations are correlated due to subject effects, repeated measurements, and the observation of many characteristics simultaneously. We develop the maximum likelihood estimator (MLE) and the uniform minimum variance unbiased estimator (UMVUE) of the reliability, which in clinical trial studies can be interpreted as the chance of increased side effects from one procedure compared to another. The results developed apply to both univariate and multivariate situations. Also, for univariate situations, we develop simple-to-use lower confidence bounds for the reliability. Further, we consider cases in which both stress and strength constitute time-dependent processes; we define the future reliability and obtain methods of constructing lower confidence bounds for this reliability. Finally, we conduct simulation studies to evaluate all the procedures developed and to compare the MLE and the UMVUE.
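For orientation, the quantity being estimated is R = P(X > Y) for strength X and stress Y. Under independence, the simplest distribution-free estimate is the scaled Mann-Whitney statistic; the paper's contribution is handling dependent observations, which this baseline sketch (with made-up normal samples) deliberately ignores.

```python
import numpy as np

def reliability_stress_strength(strength, stress):
    """Nonparametric estimate of R = P(strength > stress) from two
    independent samples: the Mann-Whitney count divided by n*m."""
    s = np.asarray(strength)[:, None]
    return np.mean(s > np.asarray(stress)[None, :])

rng = np.random.default_rng(0)
strength = rng.normal(60.0, 5.0, 200)  # e.g., tolerance under treatment A
stress = rng.normal(48.0, 6.0, 150)    # e.g., demand imposed by treatment B
print(f"R_hat = {reliability_stress_strength(strength, stress):.3f}")
```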
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software; currently it is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
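The best-known distribution-free estimator of a reliability (survival) function from failure-time data is Kaplan-Meier, which also handles test articles withdrawn before failure, a common situation when subsystems finish testing at different times. A compact sketch with hypothetical, distinct failure and censoring times:

```python
import numpy as np

def kaplan_meier(times, failed):
    """Distribution-free survival curve from right-censored data;
    failed[i] is False when unit i was withdrawn (censored), not failed."""
    order = np.argsort(times)
    t, d = np.asarray(times)[order], np.asarray(failed)[order]
    at_risk, S, curve = len(t), 1.0, []
    for ti, di in zip(t, d):
        if di:
            S *= 1.0 - 1.0 / at_risk  # step down only at failures
        curve.append((ti, S))
        at_risk -= 1
    return curve

times  = [120, 340, 560, 610, 900, 1200, 1500]   # hours (hypothetical)
failed = [True, True, False, True, True, False, True]
for ti, S in kaplan_meier(times, failed):
    print(f"t = {ti:6.0f}  S = {S:.3f}")
```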
2017-04-04
The research thrust areas are designed to enable the development of reliable, damage tolerant, lightweight structures with excellent thermal management... 2. Research Thrust Area: Multiscale Characterization and Modeling... 2.1 Design of Materials...
Software development predictors, error analysis, reliability models and software metric analysis
NASA Technical Reports Server (NTRS)
Basili, Victor
1983-01-01
The use of dynamic characteristics as predictors for software development was studied. It was found that there are some significant factors that could be useful as predictors. A study of software errors and complexity showed that meaningful results can be obtained that allow insight into software traits and the environment in which the software is developed. Reliability models were studied; this research included the field of program testing, because the validity of some reliability models depends on the answers to unanswered questions about testing. In studying software metrics, data collected from seven Software Engineering Laboratory (FORTRAN) projects were examined, and three effort-reporting accuracy checks were applied to demonstrate the need to validate a data base. Results are discussed.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics of reusable vehicle concepts, based on vehicle size and technology support level. An R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models, which approximate RMAT, can then be utilized for fast analysis of operational requirements, for life-cycle cost estimating, and for multidisciplinary design optimization.
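A response surface in this setting is a low-order polynomial fitted to a designed set of runs of the expensive tool, then evaluated in its place. The sketch below fits a quadratic surface in two notional inputs (dry mass and technology support level) by ordinary least squares; every number is a fabricated placeholder standing in for RMAT output.

```python
import numpy as np

# Hypothetical RMAT runs: [dry mass (Mg), technology support level].
X = np.array([[50, 1], [50, 3], [80, 1], [80, 3], [65, 2],
              [95, 2], [65, 1], [95, 3]], dtype=float)
y = np.array([0.981, 0.992, 0.974, 0.988, 0.985, 0.979, 0.983, 0.990])

def design(X):
    """Full quadratic basis in two variables: 1, m, s, m*s, m^2, s^2."""
    m, s = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(m), m, s, m * s, m**2, s**2])

coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)
new_point = np.array([[70.0, 2.0]])
print("surrogate prediction:", design(new_point) @ coef)
```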
Design Strategy for a Formally Verified Reliable Computing Platform
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Caldwell, James L.; DiVito, Ben L.
1991-01-01
This paper presents a high-level design for a reliable computing platform for real-time control applications. The design tradeoffs and analyses related to the development of a formally verified reliable computing platform are discussed. The design strategy advocated in this paper requires the use of techniques that can be completely characterized mathematically as opposed to more powerful or more flexible algorithms whose performance properties can only be analyzed by simulation and testing. The need for accurate reliability models that can be related to the behavior models is also stressed. Tradeoffs between reliability and voting complexity are explored. In particular, the transient recovery properties of the system are found to be fundamental to both the reliability analysis as well as the "correctness" models.
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
Models for evaluating the performability of degradable computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.
1982-01-01
Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices
NASA Astrophysics Data System (ADS)
Michaelides, Stylianos
Flip Chip on Board (FCOB) and Chip-Scale Packages (CSPs) are relatively new technologies that are being increasingly used in the electronic packaging industry. Compared to the more widely used face-up wirebonding and TAB technologies, flip-chips and most CSPs provide the shortest possible leads, lower inductance, higher frequency, better noise control, higher density, greater input/output (I/O), smaller device footprint and lower profile. However, due to the short history and due to the introduction of several new electronic materials, designs, and processing conditions, very limited work has been done to understand the role of material, geometry, and processing parameters on the reliability of flip-chip devices. Also, with the ever-increasing complexity of semiconductor packages and with the continued reduction in time to market, it is too costly to wait until the later stages of design and testing to discover that the reliability is not satisfactory. The objective of the research is to develop integrated process-reliability models that will take into consideration the mechanics of assembly processes to be able to determine the reliability of face-down devices under thermal cycling and long-term temperature dwelling. The models incorporate the time and temperature-dependent constitutive behavior of various materials in the assembly to be able to predict failure modes such as die cracking and solder cracking. In addition, the models account for process-induced defects and macro-micro features of the assembly. Creep-fatigue and continuum-damage mechanics models for the solder interconnects and fracture-mechanics models for the die have been used to determine the reliability of the devices. The results predicted by the models have been successfully validated against experimental data. The validated models have been used to develop qualification and test procedures for implantable medical devices. In addition, the research has helped develop innovative face-down devices without the underfill, based on the thorough understanding of the failure modes. Also, practical design guidelines for material, geometry and process parameters for reliable flip-chip devices have been developed.
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability utilize only the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate reliability even when few or no failures have been recorded, a case for which point estimates were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures that consider the entire failure record across all design stages has great intuitive appeal. A typical subsystem consists of a number of different components, and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered for estimating the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages, have lower risk than the corresponding estimators conditioned only on the most recent design's failure data. Several models were explored, and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing, with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
Development of a Standard Set of Software Indicators for Aeronautical Systems Center.
1992-09-01
29:12). The composite models listed include COCOMO and the Software Productivity, Quality, and Reliability Model (SPQR) (29:12). The SPQR model was... determine the values of the 68 input parameters. Source provides no specifics. Indicator Name: SPQR (SW Productivity, Qual, Reliability). Indicator Class:
Reliability and Maintainability model (RAM) user and maintenance manual. Part 2
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1995-01-01
This report documents the procedures for utilizing and maintaining the Reliability and Maintainability Model (RAM) developed by the University of Dayton for the NASA Langley Research Center (LaRC). The RAM model predicts reliability and maintainability (R&M) parameters for conceptual space vehicles using parametric relationships between vehicle design and performance characteristics and subsystem mean time between maintenance actions (MTBM) and manhours per maintenance action (MH/MA). These parametric relationships were developed using aircraft R&M data from over thirty different military aircraft of all types. This report describes the general methodology used within the model, the execution and computational sequence, the input screens and data, the output displays and reports, and study analyses and procedures. A source listing is provided.
NASA Astrophysics Data System (ADS)
Xia, Quan; Wang, Zili; Ren, Yi; Sun, Bo; Yang, Dezhen; Feng, Qiang
2018-05-01
With the rapid development of lithium-ion battery technology in the electric vehicle (EV) industry, the lifetime of the battery cell has increased substantially; however, the reliability of the battery pack is still inadequate. Because of the complexity of the battery pack, a reliability design method for a lithium-ion battery pack considering thermal disequilibrium is proposed in this paper based on cell redundancy. Based on this method, a three-dimensional electric-thermal-flow coupled model, a stochastic degradation model of cells under field dynamic conditions, and a multi-state system reliability model of a battery pack are established. The relationships between the multi-physics coupling model, the degradation model, and the system reliability model are first constructed to analyze the reliability of the battery pack, followed by analysis examples with different redundancy strategies. By comparing the reliability of battery packs with different numbers and configurations of redundant cells, several conclusions about the redundancy strategy are obtained. Most notably, because of the thermal disequilibrium effects, reliability does not increase monotonically with the number of redundant cells. In this work, a 6 × 5 parallel-series configuration is the optimal system structure. In addition, the effects of the cell arrangement and cooling conditions are investigated.
A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
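For the acceleration-function part of such a model, the form most often quoted for BaTiO3 MLCCs is the Prokopowicz-Vaskas voltage-temperature relation; the exponent n and activation energy Ea below are illustrative values, not the paper's fitted parameters.

```python
from math import exp

k = 8.617e-5  # Boltzmann constant, eV/K

def life_ratio(V1, T1, V2, T2, n=3.0, Ea=1.2):
    """Prokopowicz-Vaskas: t1/t2 = (V2/V1)**n * exp(Ea/k * (1/T1 - 1/T2)).
    Returns how much longer life is at (V1, T1) than at (V2, T2);
    n and Ea here are assumed, illustrative values."""
    return (V2 / V1) ** n * exp(Ea / k * (1.0 / T1 - 1.0 / T2))

# Scale a 125 C, 2x-rated-voltage test result to 85 C at rated voltage.
print(life_ratio(V1=1.0, T1=358.15, V2=2.0, T2=398.15))
```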
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating-capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects on system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which differ in one or more operating considerations. Operating considerations which are important in generating-capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, the reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failures distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations, including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models that do not recognize operating considerations.
Soft computing techniques toward modeling the water supplies of Cyprus.
Iliadis, L; Maris, F; Tachos, S
2011-10-01
This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-regression support vector machine (ε-RSVM) and fuzzy-weighted ε-RSVM models that accept five input parameters were developed. At the same time, reliable artificial neural networks were developed to perform the same job. The 5-fold cross-validation approach was employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, the fuzzy-weighted support vector regression (SVR) combined with the fuzzy partition was employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers.
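A plain (unweighted) ε-SVR with 5-fold cross-validation, the baseline such a study builds on, looks roughly like the following in scikit-learn. The five inputs and the target are synthetic stand-ins, and the paper's fuzzy-weighted variant is not implemented here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.random((200, 5))  # stand-in for five hydro-meteorological inputs
y = X @ np.array([1.5, -0.8, 0.6, 2.0, 0.3]) + 0.1 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.05, C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R^2 scores:", scores.round(3))
```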
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Reliability modelling and analysis of thermal MEMS
NASA Astrophysics Data System (ADS)
Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.
2006-04-01
This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour under specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MatLab and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and specific operational conditions.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
Reliability evaluation of microgrid considering incentive-based demand response
NASA Astrophysics Data System (ADS)
Huang, Ting-Cheng; Zhang, Yong-Jun
2017-07-01
Incentive-based demand response (IBDR) can guide customers to adjust their electricity consumption behaviour and curtail load actively. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. The paper focuses on the reliability evaluation of a microgrid considering IBDR. Firstly, the mechanism of IBDR and its impact on power supply reliability are analysed. Secondly, the IBDR dispatch model considering the customer's comprehensive assessment and the customer response model are developed. Thirdly, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.
Developing an oropharyngeal cancer (OPC) knowledge and behaviors survey.
Dodd, Virginia J; Riley Iii, Joseph L; Logan, Henrietta L
2012-09-01
To use the community participation research model to (1) develop a survey assessing knowledge about mouth and throat cancer and (2) field test and establish test-retest reliability of the newly developed instrument. Cognitive interviews were conducted with primarily rural African American adults to assess their perception and interpretation of survey items. Test-retest reliability was established with a racially diverse rural population. Test-retest reliabilities ranged from .79 to .40 for screening awareness and .74 to .19 for knowledge. Coefficients increased for composite scores. Community participation methodology provided a culturally appropriate survey instrument that demonstrated acceptable levels of reliability.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulations is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation, by estimating the numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation, so that the reliability of the numerical simulation can be improved.
Reliability of four models for clinical gait analysis.
Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P
2017-05-01
Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use the conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA on eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing than in CP participants. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths, and, therefore, is a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.
Developing Confidence Limits For Reliability Of Software
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.
1991-01-01
Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes agent autonomy fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Development of confidence limits by pivotal functions for estimating software reliability
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1987-01-01
The utility of pivotal functions is established for assessing software reliability. Based on the Moranda geometric de-eutrophication model of reliability growth, confidence limits for attained reliability and prediction limits for the time to the next failure are derived using a pivotal function approach. Asymptotic approximations to the confidence and prediction limits are considered and are shown to be inadequate in cases where only a few bugs are found in the software. Departures from the assumed exponentially distributed interfailure times in the model are also investigated. The effect of these departures is discussed relative to restricting the use of the Moranda model.
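For concreteness, the sketch below fits the Moranda geometric de-eutrophication model by maximum likelihood on hypothetical interfailure times; it gives point estimates only and does not reproduce the pivotal-function confidence limits derived in these papers.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, times):
    """Negative log-likelihood of the Moranda geometric model, in which the
    i-th interfailure time is exponential with rate D * k**(i - 1)."""
    d, k = params
    if d <= 0 or not 0 < k < 1:
        return np.inf
    i = np.arange(len(times))
    rates = d * k ** i
    return -np.sum(np.log(rates) - rates * times)

# Hypothetical interfailure times (hours) exhibiting reliability growth.
t = np.array([3.0, 5.0, 4.0, 9.0, 12.0, 20.0, 25.0, 40.0])
res = minimize(neg_log_lik, x0=[0.3, 0.8], args=(t,), method="Nelder-Mead")
d_hat, k_hat = res.x
print(f"D = {d_hat:.3f}, k = {k_hat:.3f}, "
      f"next-failure rate = {d_hat * k_hat ** len(t):.4f}")
```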
The open-source, public domain JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) API (Application Programming Interface) provides conventions and Fortran-90 modules to develop applications (computer programs) for analyzing process models. The input ...
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mayer, A. S.; Halvorsen, K. E.; Robles-Morua, A.; Kossak, D.
2016-12-01
A series of iterative participatory modeling workshops were held in Sonora, México with the goal of developing water resources management strategies in a water-stressed basin subject to hydro-climatic variability and change. A model of the water resources system, consisting of watershed hydrology, water resources infrastructure, and groundwater models, was developed deliberatively in the workshops, along with scenarios of future climate and development. Participants used the final version of the water resources systems model to select from supply-side and demand-side water resources management strategies. The performance of the strategies was based on the reliability of meeting current and future demands at a daily time scale over a year's period. Pre- and post-workshop surveys were developed and administered. The survey questions focused on evaluation of participants' modeling capacity and the utility and accuracy of the models. The selected water resources strategies and the associated, expected reliability varied widely among participants. Most participants could be clustered into three groups with roughly equal numbers of participants that varied in terms of reliance on expanding infrastructure vs. demand modification; expectations of reliability; and perceptions of social, environmental, and economic impacts. The wide range of strategies chosen and associated reliabilities indicates that there is a substantial degree of uncertainty in how future water resources decisions could be made in the region. The pre- and post-survey results indicate that participants believed their modeling abilities increased and beliefs in the utility of models increased as a result of the workshops.
solveME: fast and reliable solution of nonlinear ME models.
Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O
2016-09-22
Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45% faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models by researchers in these fields.
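The binary-search baseline that solveME improves upon can be sketched as follows; here `feasible` is a hypothetical callable wrapping a solver check that the ME model's constraints admit a solution at growth rate mu, not an actual solveME API.

```python
def max_growth(feasible, lo=0.0, hi=2.0, digits=6):
    """Binary search for the largest growth rate mu at which the model's
    constraints remain feasible. `feasible` is a hypothetical callable
    wrapping the solver; solveME replaces this loop with a direct
    quad-precision NLP solve."""
    tol = 10.0 ** (-digits)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # feasible: the optimum lies at or above mid
        else:
            hi = mid   # infeasible: the optimum lies below mid
    return lo

# Usage with a toy stand-in feasibility test (true optimum at mu = 0.7391):
print(f"mu* ≈ {max_growth(lambda mu: mu <= 0.7391):.6f}")
```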
Interpreting Variance Components as Evidence for Reliability and Validity.
ERIC Educational Resources Information Center
Kane, Michael T.
The reliability and validity of measurement are analyzed by a sampling model based on generalizability theory. A model for the relationship between a measurement procedure and an attribute is developed from an analysis of how measurements are used and interpreted in science. The model provides a basis for analyzing the concept of an error of…
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault-tolerant digital computer-based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilient control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transform (LFT) model of a transport aircraft's longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture the original nonlinear dynamics over the flight envelope with the Δ block, which contains the key varying parameters (angle of attack and velocity) and real parameter uncertainties (aerodynamic coefficient uncertainty and moment of inertia uncertainty). Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
NASA Astrophysics Data System (ADS)
Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.
2018-05-01
Algorithms for improving the metrological reliability of analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure optimum values of the metrological reliability indices for a given analogue circuit block solution.
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system with respect to reliability and safety, as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks), is analysed with respect to stack reliability requirements as a function of the predictability of critical failures and the Weibull shape factor of the failure rate distributions.
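A simplified two-state version of the 5 × 5 configuration can be written directly, assuming each set of parallel stacks functions when at least one of its stacks is up; the paper's model additionally distinguishes partially failed states and failure predictability.

```python
def stack_system_reliability(r_stack, n_parallel=5, n_series=5):
    """Reliability of n_series sets in series, where each set works if at
    least one of its n_parallel stacks works (a simplified two-state view
    of the 5 x 5 example; partially failed states are not modeled)."""
    r_set = 1.0 - (1.0 - r_stack) ** n_parallel
    return r_set ** n_series

for r in (0.90, 0.95, 0.99):
    print(f"stack R = {r:.2f} -> system R = {stack_system_reliability(r):.6f}")
```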
Life and reliability models for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Knorr, R. J.; Coy, J. J.
1982-01-01
Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
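A sketch of the underlying series-system calculation is shown below, with illustrative Weibull parameters standing in for the component capacities the paper derives; the transmission is assumed to survive only if every component survives.

```python
import numpy as np

def component_reliability(t, theta, beta):
    """Two-parameter Weibull reliability for one transmission component."""
    return np.exp(-((t / theta) ** beta))

def system_reliability(t, thetas, betas):
    """Series-system model: the transmission survives only if the sun gear,
    every planet gear, and the bearings all survive (the parameters below
    are illustrative, not the paper's derived component capacities)."""
    return np.prod([component_reliability(t, th, b)
                    for th, b in zip(thetas, betas)], axis=0)

thetas = [9000.0, 7000.0, 7000.0, 7000.0, 12000.0]  # characteristic lives, h
betas  = [2.5, 2.5, 2.5, 2.5, 1.5]                  # Weibull slopes
print(f"R(2000 h) = {system_reliability(2000.0, thetas, betas):.4f}")
```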
NASA Astrophysics Data System (ADS)
Mayer, A. S.; Vivoni, E. R.; Halvorsen, K. E.; Kossak, D.
2014-12-01
The Rio Sonora Basin (RSB) in northwest Mexico has a semi-arid and highly variable climate along with urban and agricultural pressures on water resources. Three participatory modeling workshops were held in the RSB in spring 2013. A model of the water resources system, consisting of a watershed hydrology model, a model of the water infrastructure, and groundwater models, was developed deliberatively in the workshops, along with scenarios of future climate and development. Participants were asked to design water resources management strategies by choosing from a range of supply augmentation and demand reduction measures associated with water conservation. Participants assessed water supply reliability, measured as the average daily supply divided by daily demand for historical and future periods, by probing with the climate and development scenarios. Pre- and post-workshop surveys were developed and administered, based on conceptual models of workshop participants' beliefs regarding modeling and local water resources. The survey results indicate that participants believed their modeling abilities increased and beliefs in the utility of models increased as a result of the workshops. The selected water resources strategies varied widely among participants. Wastewater reuse for industry and aquifer recharge were popular options, but significant numbers of participants thought that inter-basin transfers and desalination were viable. The majority of participants indicated that substantial increases in agricultural water efficiency could be achieved. On average, participants chose strategies that produce reliabilities over the historical and future periods of 95%, but more than 20% of participants were apparently satisfied with reliabilities lower than 80%. The wide range of strategies chosen and associated reliabilities indicates that there is a substantial degree of uncertainty in how future water resources decisions could be made in the region.
Structural reliability assessment capability in NESSUS
NASA Technical Reports Server (NTRS)
Millwater, H.; Wu, Y.-T.
1992-01-01
The principal capabilities of NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), an advanced computer code developed for probabilistic structural response analysis, are reviewed, and its structural reliability assessed. The code combines flexible structural modeling tools with advanced probabilistic algorithms in order to compute probabilistic structural response and resistance, component reliability and risk, and system reliability and risk. An illustrative numerical example is presented.
Statistical modeling of software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1992-01-01
This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that some of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
ERIC Educational Resources Information Center
Czuchry, Andrew J.; And Others
This report provides a complete guide to the stand-alone mode operation of the reliability and maintenance (R&M) model, which was developed to facilitate the performance of design versus cost trade-offs within the digital avionics information system (DAIS) acquisition process. The features and structure of the model, its input data…
Predicting wettability behavior of fluorosilica coated metal surface using optimum neural network
NASA Astrophysics Data System (ADS)
Taghipour-Gorjikolaie, Mehran; Valipour Motlagh, Naser
2018-02-01
The interactions among the variables that affect surface wettability make it very complex to predict the contact angles and sliding angles of liquid drops. In this paper, in order to address this complexity, artificial neural networks were used to develop reliable models for predicting the angles of liquid drops. Experimental data are divided into training data and testing data. By using the training data, a feed-forward structure for the neural network, and particle swarm optimization for training the neural-network-based models, the optimum models were developed. The obtained results showed that the regression indices for the proposed models for the contact angles and sliding angles are 0.9874 and 0.9920, respectively. These values are close to unity, indicating the reliable performance of the models. It can also be inferred from the results that the proposed models perform more reliably than multi-layer perceptron and radial basis function based models.
Lewicke, Aaron; Sazonov, Edward; Corwin, Michael J; Neuman, Michael; Schuckers, Stephanie
2008-01-01
Reliability of classification performance is important for many biomedical applications. A classification model that considers reliability during model development, such that unreliable segments are rejected, would be useful, particularly in large biomedical data sets. This approach is demonstrated in the development of a technique to reliably determine sleep and wake states using only the electrocardiogram (ECG) of infants. Typically, sleep state scoring is a time-consuming task in which sleep states are manually derived from many physiological signals. The method was tested with simultaneous 8-h ECG and polysomnogram (PSG) determined sleep scores from 190 infants enrolled in the collaborative home infant monitoring evaluation (CHIME) study. Learning vector quantization (LVQ) neural networks, multilayer perceptron (MLP) neural networks, and support vector machines (SVMs) are tested as the classifiers. After systematic rejection of difficult-to-classify segments, the models can achieve 85%-87% correct classification while rejecting only 30% of the data. This corresponds to a Kappa statistic of 0.65-0.68. With rejection, accuracy improves by about 8% over a model without rejection. Additionally, the impact of the PSG-scored indeterminate state epochs is analyzed. The advantages of a reliable sleep/wake classifier based only on ECG include high accuracy, simplicity of use, and low intrusiveness. Reliability of the classification can be built directly into the model, such that unreliable segments are rejected.
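The rejection idea can be sketched with a generic probabilistic classifier: segments whose maximum predicted class probability falls below a threshold are rejected, and accuracy is reported on the accepted remainder. The data, threshold, and network size below are stand-ins, not the CHIME study's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy two-class data standing in for ECG-derived sleep/wake features.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1.0, 1000) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=1)
clf.fit(X[:700], y[:700])

# Reject test segments whose top class probability is below the threshold.
proba = clf.predict_proba(X[700:])
confidence = proba.max(axis=1)
accept = confidence >= 0.80          # assumed rejection threshold
pred = proba.argmax(axis=1)

acc = (pred[accept] == y[700:][accept]).mean()
print(f"accepted {accept.mean():.0%} of segments; accuracy on them {acc:.1%}")
```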
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.
2002-01-01
A generalized reliability model was developed for use in the design of structural components made from brittle, homogeneous anisotropic materials such as single crystals. The model is based on the Weibull distribution and incorporates a variable strength distribution and any equivalent stress failure criterion. In addition to the reliability model, an energy based failure criterion for elastically anisotropic materials was formulated. The model is different from typical Weibull-based models in that it accounts for strength anisotropy arising from fracture toughness anisotropy and thereby allows for strength and reliability predictions of brittle, anisotropic single crystals subjected to multiaxial stresses. The model is also applicable to elastically isotropic materials exhibiting strength anisotropy due to an anisotropic distribution of flaws. In order to develop and experimentally verify the model, the uniaxial and biaxial strengths of a single crystal nickel aluminide were measured. The uniaxial strengths of the <100> and <110> crystal directions were measured in three- and four-point flexure. The biaxial strength was measured by subjecting <100> plates to a uniform pressure in a test apparatus that was developed and experimentally verified. The biaxial strengths of the single crystal plates were estimated by extending and verifying the displacement solution for a circular, anisotropic plate to the case of a variable radius and thickness. The best correlation between the experimental strength data and the model predictions occurred when an anisotropic stress analysis was combined with the normal stress criterion and the strength parameters associated with the <110> crystal direction.
The Reliability of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Livingston, Samuel A.
The assumptions of the classical test-theory model are used to develop a theory of reliability for criterion-referenced measures which parallels that for norm-referenced measures. It is shown that the Spearman-Brown formula holds for criterion-referenced measures and that the criterion-referenced reliability coefficient can be used to correct…
Feasibility of developing LSI microcircuit reliability prediction models
NASA Technical Reports Server (NTRS)
Ryerson, C. M.
1972-01-01
In the proposed modeling approach, when any of the essential key factors are not known initially, they can be approximated in various ways with a known impact on the accuracy of the final predictions. For example, on any program where reliability predictions are started at interim states of project completion, a priori approximate estimates of the key factors are established for making preliminary predictions. Later these are refined for greater accuracy as subsequent program information of a more definitive nature becomes available. Specific steps to develop, validate and verify these new models are described.
Evaluation methodologies for an advanced information processing system
NASA Technical Reports Server (NTRS)
Schabowsky, R. S., Jr.; Gai, E.; Walker, B. K.; Lala, J. H.; Motyka, P.
1984-01-01
The system concept and requirements for an Advanced Information Processing System (AIPS) are briefly described, but the emphasis of this paper is on the evaluation methodologies being developed and utilized in the AIPS program. The evaluation tasks include hardware reliability, maintainability and availability, software reliability, performance, and performability. Hardware RMA and software reliability are addressed with Markov modeling techniques. The performance analysis for AIPS is based on queueing theory. Performability is a measure of merit which combines system reliability and performance measures. The probability laws of the performance measures are obtained from the Markov reliability models. Scalar functions of these laws, such as the mean and variance, provide measures of merit in the AIPS performability evaluations.
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
A Research Roadmap for Computation-Based Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Joe, Jeffrey
2015-08-01
The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.
Development of reliable pavement models.
DOT National Transportation Integrated Search
2011-05-01
The current report proposes a framework for estimating the reliability of a given pavement structure as analyzed by the Mechanistic-Empirical Pavement Design Guide (MEPDG). The methodology proposes using a previously fit response surface, in plac...
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1992-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to supply a top-down design description of the system from which a Markov reliability model is automatically constructed. Thus, the user is relieved of the tedious and error-prone process of model construction, an efficient exploration of the design space is permitted, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao
2017-03-15
As oil transport increases in the Texas bays, greater risk of ship collisions will become a challenge, with oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. State-of-the-art operational oil spill forecast modeling systems have moved oil spill response into a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast results, leading to misdirection in contingency planning. Understanding forecast uncertainty and reliability therefore becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A HyosPy-based simple statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
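The probability-map construction can be sketched as follows, with a toy surrogate (`advect`, a hypothetical stand-in for the hydrodynamic and spill transport models): each Monte Carlo run perturbs the forcing, and the cellwise hit frequency becomes the forecast probability map.

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of a Monte Carlo forecast probability map: run the transport model
# many times with perturbed forcing and count how often each cell is oiled.
NX, NY, N_RUNS = 50, 50, 200

def advect(wind_err):
    """Toy surrogate: the oiled patch shifts with the sampled wind error."""
    hit = np.zeros((NX, NY), dtype=bool)
    cx = 25 + int(5 * wind_err[0])
    cy = 25 + int(5 * wind_err[1])
    hit[max(cx - 3, 0):cx + 3, max(cy - 3, 0):cy + 3] = True
    return hit

counts = np.zeros((NX, NY))
for _ in range(N_RUNS):
    counts += advect(rng.normal(0.0, 1.0, size=2))

probability_map = counts / N_RUNS  # cellwise probability of oiling
print(f"max cell probability: {probability_map.max():.2f}")
```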
NASA Astrophysics Data System (ADS)
Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman
2013-06-01
This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.
Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.
Chatzis, Sotirios P; Andreou, Andreas S
2015-11-01
Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics, and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.
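As a point of reference for count-data regression on software metrics, a standard log-linear Poisson GLM is sketched below on stand-in features; this is the conventional baseline, not the paper's max-margin Bayesian model.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Baseline count-data regression for defect prediction: a log-linear
# Poisson GLM. The features are stand-ins for software metrics such as
# module size and churn; the paper's max-margin model is not reproduced.
rng = np.random.default_rng(3)
X = rng.normal(size=(400, 4))
true_rate = np.exp(0.5 + X @ np.array([0.6, 0.3, -0.2, 0.1]))
y = rng.poisson(true_rate)  # simulated defect counts per module

glm = PoissonRegressor(alpha=1e-3, max_iter=500).fit(X, y)
print("coefficients:", glm.coef_.round(2), "intercept:", round(glm.intercept_, 2))
print("predicted counts for first 5 modules:", glm.predict(X[:5]).round(1))
```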
NASA Technical Reports Server (NTRS)
Motyka, P.
1983-01-01
A methodology is developed and applied for quantitatively analyzing the reliability of a dual, fail-operational redundant strapdown inertial measurement unit (RSDIMU). A Markov evaluation model is defined in terms of the operational states of the RSDIMU to predict system reliability. A 27-state model is defined based upon a candidate redundancy management system which can detect and isolate a spectrum of failure magnitudes. The results of parametric studies are presented which show the effect on reliability of the gyro failure rate; the gyro and accelerometer failure rates together; false alarms; the probability of failure detection; the probability of failure isolation; the probability of damage effects; and mission time. A technique is developed and evaluated for generating dynamic thresholds for detecting and isolating failures of the dual, separated IMU. Special emphasis is given to the detection of multiple, nonconcurrent failures. Digital simulation time histories are presented which show the thresholds obtained and their effectiveness in detecting and isolating sensor failures.
NASA Astrophysics Data System (ADS)
Bieniek, T.; Janczyk, G.; Dobrowolski, R.; Wojciechowska, K.; Malinowska, A.; Panas, A.; Nieprzecki, M.; Kłos, H.
2016-11-01
This paper covers research results on the development of cantilever beam test structures for interconnect reliability and robustness investigation. The presented results include the design, modelling, simulation, optimization and, finally, fabrication stages, performed on 4-inch Si wafers using the ITE microfabrication facility. The paper also covers experimental results from the characterization of the test structures.
The relationship between cost estimates reliability and BIM adoption: SEM analysis
NASA Astrophysics Data System (ADS)
Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.
2018-02-01
This paper presents the usage of the Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption on improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using the SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and the full structural model. The SEM models were then validated through assessment of their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259, indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationships among the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive that BIM adoption accelerates their cost estimating task.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes throughout the testing phase; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: the failures detected might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and using fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance.
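A minimal NHPP fitting example is given below using the basic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) on hypothetical failure counts; the paper's model extends this form with testing coverage, fault introduction, and imperfect removal efficiency terms that are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def goel_okumoto(t, a, b):
    """Basic NHPP mean value function m(t) = a(1 - exp(-b t)); the paper's
    model extends this with coverage and removal-efficiency terms."""
    return a * (1.0 - np.exp(-b * t))

# Hypothetical cumulative failure counts by test week.
t = np.arange(1, 13, dtype=float)
m = np.array([8, 15, 21, 26, 30, 33, 36, 38, 40, 41, 42, 43], dtype=float)

(a_hat, b_hat), _ = curve_fit(goel_okumoto, t, m, p0=[50.0, 0.2])
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"predicted m(16) = {goel_okumoto(16.0, a_hat, b_hat):.1f}")
```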
Tutorial: Advanced fault tree applications using HARP
NASA Technical Reports Server (NTRS)
Dugan, Joanne Bechta; Bavuso, Salvatore J.; Boyd, Mark A.
1993-01-01
Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paret, Paul
The National Renewable Energy Laboratory (NREL) will conduct thermal and reliability modeling on three sets of power modules for the development of a next generation inverter for electric traction drive vehicles. These modules will be chosen by General Motors (GM) to represent three distinct technological approaches to inverter power module packaging. Likely failure mechanisms will be identified in each package and a physics-of-failure-based reliability assessment will be conducted.
Spector, Aimee; Hebditch, Molly; Stoner, Charlotte R; Gibbor, Luke
2016-09-01
The ability to identify biological, social, and psychological issues for people with dementia is an important skill for healthcare professionals. Therefore, valid and reliable measures are needed to assess this ability. This study involves the development of a vignette-style measure to capture the extent to which health professionals use "biopsychosocial" thinking in dementia care (VIG-Dem), based on the framework of the model developed by Spector and Orrell (2010). The development process consisted of Phase 1: developing and refining the vignettes; Phase 2: field testing (N = 9); and Phase 3: a pilot study to assess reliability and validity (N = 131). The VIG-Dem, consisting of two vignettes with open-ended questions and a standardized scoring scheme, was developed. Evidence for good inter-rater reliability, convergent validity, and test-retest reliability was established. The VIG-Dem has good psychometric properties and may provide a useful tool in dementia care research and practice.
Reliability of Total Test Scores When Considered as Ordinal Measurements
ERIC Educational Resources Information Center
Biswas, Ajoy Kumar
2006-01-01
This article studies the ordinal reliability of (total) test scores. The study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small examinee populations is developed. This measure is extended to large…
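A small sketch of a tau-based ordinal reliability computation is given below, under the assumption that two administrations of the same test are compared; the data are toy values, not from the article.

```python
from itertools import combinations

def kendall_tau_a(x, y):
    """Kendall's tau-a between two score vectors: (concordant - discordant)
    pairs divided by all n(n-1)/2 pairs (tied pairs count as neither)."""
    n, conc, disc = len(x), 0, 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / (n * (n - 1) / 2)

# Toy example: tau-a between two administrations of a test serves as an
# ordinal analogue of a reliability coefficient.
form_1 = [12, 15, 9, 20, 17, 11]
form_2 = [13, 14, 10, 19, 18, 12]
print(f"tau-a = {kendall_tau_a(form_1, form_2):.3f}")
```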
Modeling, implementation, and validation of arterial travel time reliability.
DOT National Transportation Integrated Search
2013-11-01
Previous research funded by Florida Department of Transportation (FDOT) developed a method for estimating travel time reliability for arterials. This method was not initially implemented or validated using field data. This project evaluated and r...
Development of system reliability models for railway bridges.
DOT National Transportation Integrated Search
2012-07-01
Performance of the railway transportation network depends on the reliability of railway bridges, which can be affected by various forms of deterioration and extreme environmental conditions. More than half of the railway bridges in the US were built ...
Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A
2015-12-01
The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, there are no data available to date to demonstrate the suitability of such models to make reliable projections on the development time for this species in field conditions. In the present study, 19 models available in the literature were tested regarding their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Only 11 out of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely and therefore should be used with caution for pest management purposes.
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long-term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10-year period, is a combined analytic and simulative technique. The analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. The simulative component is the Gate Logic Software Simulator capability, or GLOSS. The numerous factors that potentially degrade system reliability are examined, along with the ways in which factors peculiar to highly reliable fault-tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
Muehrer, Rebecca J; Lanuza, Dorothy M; Brown, Roger L; Djamali, Arjang
2015-01-01
This study describes the development and psychometric testing of the Sexual Concerns Questionnaire (SCQ) in kidney transplant (KTx) recipients. Construct validity was assessed using the Kroonenberg and Lewis exploratory/confirmatory procedure and by testing hypothesized relationships with established questionnaires. Configural and weak invariance were examined across gender, dialysis history, relationship status, and transplant type. Reliability was assessed with Cronbach's alpha, composite reliability, and test-retest reliability. Factor analysis resulted in a 7-factor solution and suggested good model fit. Construct validity was also supported by the tests of hypothesized relationships. Configural and weak invariance were supported for all subgroups. Reliability of the SCQ was also supported. Findings indicate the SCQ is a valid and reliable measure of KTx recipients' sexual concerns.
SITE CHARACTERIZATION TO SUPPORT CONCEPTUAL MODEL DEVELOPMENT FOR SUBSURFACE RADIONUCLIDE TRANSPORT
Remediation of radionuclide contaminants in ground water often begins with the development of conceptual and analytical models that guide our understanding of the processes controlling radionuclide transport. The reliability of these models is often predicated on the collection o...
NASA Astrophysics Data System (ADS)
Asfaroh, Jati Aurum; Rosana, Dadan; Supahar
2017-08-01
This research aims to develop a valid and reliable CIPP evaluation instrument model and to determine its feasibility and practicality. The CIPP evaluation instrument model evaluates the implementation of project assessment on the topic of optics, measuring the problem-solving skills of junior high school class VIII students in the Yogyakarta region. The research follows the 4-D development model. Subjects of the product trials were class VIII students at SMP N 1 Galur and SMP N 1 Sleman. Data collection used non-test techniques, including interviews, questionnaires, and observations. Validity was analyzed using Aiken's V, and reliability was analyzed using the intraclass correlation coefficient (ICC). Seven raters took part: two expert lecturers (expert judgment), two practitioners (science teachers), and three colleagues. The result of this research is a CIPP evaluation instrument model for evaluating the implementation of the project assessment instruments. The instrument's Aiken's V values range from 0.86 to 1, indicating validity, and its reliability value of 0.836 falls into the good category, so it is suitable for use as an evaluation instrument.
Modeling human disease using organotypic cultures.
Schweiger, Pawel J; Jensen, Kim B
2016-12-01
Reliable disease models are needed in order to improve the quality of healthcare. This includes gaining a better understanding of disease mechanisms, developing new therapeutic interventions, and personalizing treatment. To date, the majority of our knowledge about disease states comes from in vivo animal models and in vitro cell culture systems. However, it has been exceedingly difficult to model disease at the tissue level. Recently, the gap between cell line studies and in vivo modeling has been narrowing thanks to progress in biomaterials and stem cell research. The development of reliable 3D culture systems has enabled a rapid expansion of sophisticated in vitro models. Here we focus on some of the latest advances and future perspectives in 3D organoids for human disease modeling.
Predicting Software Suitability Using a Bayesian Belief Network
NASA Technical Reports Server (NTRS)
Beaver, Justin M.; Schiavone, Guy A.; Berrios, Joseph S.
2005-01-01
The ability to reliably predict the end quality of software under development presents a significant advantage for a development team. It provides an opportunity to address high risk components earlier in the development life cycle, when their impact is minimized. This research proposes a model that captures the evolution of the quality of a software product, and provides reliable forecasts of the end quality of the software being developed in terms of product suitability. Development team skill, software process maturity, and software problem complexity are hypothesized as driving factors of software product quality. The cause-effect relationships between these factors and the elements of software suitability are modeled using Bayesian Belief Networks, a machine learning method. This research presents a Bayesian Network for software quality, and the techniques used to quantify the factors that influence and represent software quality. The developed model is found to be effective in predicting the end product quality of small-scale software development efforts.
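The paper's network structure and probabilities are not reproduced here; purely as a hedged illustration of how a Bayesian belief network turns parent factors into a quality forecast, the sketch below marginalizes a single binary quality node over three binary parents (team skill, process maturity, problem complexity) by enumeration. The CPT shape and every number are assumptions.

```python
from itertools import product

# P(parent = True); values are invented for illustration
p_parent = {"skill": 0.7, "maturity": 0.6, "complexity": 0.4}

def p_quality_good(skill, maturity, complexity):
    # Assumed CPT shape: skill and maturity raise the probability of
    # good quality, complexity lowers it
    base = 0.2
    if skill:
        base += 0.35
    if maturity:
        base += 0.25
    if complexity:
        base -= 0.15
    return min(max(base, 0.0), 1.0)

# Marginal P(quality = good) = sum over parent states of
# P(parents) * P(quality = good | parents)
total = 0.0
for s, m, c in product([True, False], repeat=3):
    w = ((p_parent["skill"] if s else 1 - p_parent["skill"])
         * (p_parent["maturity"] if m else 1 - p_parent["maturity"])
         * (p_parent["complexity"] if c else 1 - p_parent["complexity"]))
    total += w * p_quality_good(s, m, c)

print(f"marginal P(quality = good) = {total:.3f}")
```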
NASA Astrophysics Data System (ADS)
Biryuk, V. V.; Tsapkova, A. B.; Larin, E. A.; Livshiz, M. Y.; Sheludko, L. P.
2018-01-01
A set of mathematical models for calculating the reliability indexes of structurally complex multifunctional combined installations in heat and power supply systems was developed. Reliability of energy supply is considered a necessary condition for the creation and operation of heat and power supply systems. The optimal value of the power supply system coefficient F is based on an economic assessment of the consumers' losses caused by the under-supply of electric power and the additional system expenses for the creation and operation of an emergency capacity reserve. Rationing of the reliability indexes (RI) of industrial heat supply is based on the concept of a technological margin of safety for technological processes. The definition of rationed RI values for the heat supply of communal consumers is based on the air temperature level inside the heated premises. The complex allows solving a number of practical tasks for ensuring the reliability of heat supply for consumers. A probabilistic model is developed for calculating the reliability indexes of combined multipurpose heat and power plants in heat and power supply systems. The complex of models and calculation programs can be used to solve a wide range of specific tasks in the optimization of schemes and parameters of combined heat and power plants and systems, as well as in determining the efficiency of various redundancy methods to ensure the specified reliability of power supply.
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
NASA Technical Reports Server (NTRS)
Smith, R. M.
1991-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states may have reward rate zero associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long-term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well-defined real-time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
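A minimal numerical sketch of the expected steady-state reward rate described above, assuming a three-state availability model with invented failure and repair rates: solve pi Q = 0 subject to the probabilities summing to one, then weight the reward vector.

```python
import numpy as np

# Generator matrix for a 2-unit system: state 0 = both up,
# 1 = one up, 2 = down. Rates are illustrative.
lam, mu = 0.01, 0.5
Q = np.array([
    [-2 * lam,        2 * lam,  0.0],
    [      mu, -(mu + lam),     lam],
    [     0.0,            mu,   -mu],
])

# Reward rates: delivered capacity in each state
r = np.array([1.0, 0.5, 0.0])

# Steady state: pi Q = 0 with sum(pi) = 1, solved by least squares
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state probabilities:", pi)
print("expected steady-state reward rate:", pi @ r)
```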
Using Model Replication to Improve the Reliability of Agent-Based Models
NASA Astrophysics Data System (ADS)
Zhong, Wei; Kim, Yushim
The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication, in NetLogo and by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.
A Review on VSC-HVDC Reliability Modeling and Evaluation Techniques
NASA Astrophysics Data System (ADS)
Shen, L.; Tang, Q.; Li, T.; Wang, Y.; Song, F.
2017-05-01
With the fast development of power electronics, voltage-source converter (VSC) HVDC technology presents cost-effective ways for bulk power transmission. An increasing number of VSC-HVDC projects have been installed worldwide. Their reliability affects the profitability of the system and therefore has a major impact on potential investors. In this paper, an overview of the recent advances in the area of reliability evaluation for VSC-HVDC systems is provided. Taking into account the latest multi-level converter topology, the VSC-HVDC system is categorized into several sub-systems, and the reliability data for the key components are discussed based on sources with academic and industrial backgrounds. The development of reliability evaluation methodologies is reviewed and the issues surrounding the different computation approaches are briefly analysed. A general VSC-HVDC reliability evaluation procedure is illustrated in this paper.
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1996-01-01
This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. This Manual updates and supersedes the 1995 RAM User and Maintenance Manual. Changes and enhancements from the 1995 version of the model are primarily a result of the addition of more recent aircraft and shuttle R&M data.
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered one of the system parameters, like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
Quantitative metal magnetic memory reliability modeling for welded joints
NASA Astrophysics Data System (ADS)
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic fields, and measurement noise make MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum Kvs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical law of Kvs is investigated, which shows that Kvs obeys a Gaussian distribution. Kvs is therefore a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, for the first time, based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R1 and the verification reliability degree R2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
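Since Kvs is reported to be Gaussian, the stress-strength interference step in such a model admits a closed form for independent normal stress and strength: R = Phi((mu_strength - mu_stress) / sqrt(sd_strength^2 + sd_stress^2)). A minimal sketch with invented parameters, not the paper's data:

```python
from math import sqrt
from scipy.stats import norm

# Stress-strength interference with independent Gaussian variables.
# Parameters are illustrative, not taken from the experiments.
mu_strength, sd_strength = 320.0, 25.0   # limiting Kvs level
mu_stress, sd_stress = 240.0, 30.0       # measured Kvs level

# R = P(strength > stress) for independent normals
z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
R = norm.cdf(z)
print(f"reliability degree R = {R:.4f}")
```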
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPPs have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) during the testing phase, the fault detection rate always changes; 2) as a result of imperfect debugging, fault removal is accompanied by a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e., they seldom consider imperfect fault removal efficiency. In the practical software development process, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might still exist, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency, and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to account for fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs using three sets of real failure data based on five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
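The coverage and removal-efficiency terms of the proposed model are not reproduced here; as a baseline illustration of how an NHPP SRGM is fitted to failure data, the sketch below fits the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)) to made-up cumulative fault counts.

```python
import numpy as np
from scipy.optimize import curve_fit

def m(t, a, b):
    # Goel-Okumoto NHPP mean value function
    return a * (1.0 - np.exp(-b * t))

t = np.array([1, 2, 4, 8, 12, 16, 20], dtype=float)           # test weeks
faults = np.array([10, 18, 30, 45, 52, 56, 58], dtype=float)  # cumulative

(a, b), _ = curve_fit(m, t, faults, p0=(60.0, 0.1))
print(f"estimated total faults a = {a:.1f}, detection rate b = {b:.3f}")
print(f"expected faults remaining after week 20: {a - m(20.0, a, b):.1f}")
```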
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
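For reference, the equilibrium (stationary-excess) distribution applied here to the fault-detection time distribution is the standard renewal-theory construct; with F the fault-detection time distribution and mu its mean:

```latex
\[
  F_e(t) \;=\; \frac{1}{\mu}\int_0^t \bigl(1 - F(x)\bigr)\,dx,
  \qquad
  \mu \;=\; \int_0^\infty \bigl(1 - F(x)\bigr)\,dx .
\]
```

One sanity check: for the exponential distribution F(t) = 1 - e^{-lambda t}, the equilibrium distribution equals F itself, so exponential-based SRMs are unchanged by the transformation.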
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
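A minimal sketch of the estimation step described above, assuming invented error states and transition counts: maximum-likelihood transition probabilities are row-normalized counts, and a normal approximation gives rough confidence half-widths of the kind the reliability computation would consume.

```python
import numpy as np

# Observed transition counts n[i][j] between error states
# (0 = no version in error, 1 = one version in error,
#  2 = coincident errors); counts are invented.
counts = np.array([
    [940, 50, 10],
    [ 60, 30, 10],
    [  5,  5, 90],
], dtype=float)

row_totals = counts.sum(axis=1, keepdims=True)
P_hat = counts / row_totals          # MLE of transition matrix
print("estimated transition matrix:\n", P_hat)

# Approximate 95% half-widths for each p_ij (normal approximation)
half = 1.96 * np.sqrt(P_hat * (1 - P_hat) / row_totals)
print("95% CI half-widths:\n", half)
```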
NASA Technical Reports Server (NTRS)
Stiffler, J. J.; Bryant, L. A.; Guccione, L.
1979-01-01
A computer program to aid in assessing the reliability of fault-tolerant avionics systems was developed. A simple mathematical expression was used to evaluate the reliability of any redundant configuration over any interval during which the failure rates and coverage parameters remained unaffected by configuration changes. Provision was made for convolving such expressions in order to evaluate the reliability of a dual-mode system. A coverage model was also developed to determine the various relevant coverage coefficients as a function of the available hardware and software fault detector characteristics, and subsequent isolation and recovery delay statistics.
NASA Technical Reports Server (NTRS)
Dunham, J. R. (Editor); Knight, J. C. (Editor)
1982-01-01
The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.
Thermal Management and Reliability of Automotive Power Electronics and Electric Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narumanchi, Sreekant V; Bennion, Kevin S; Cousineau, Justine E
Low-cost, high-performance thermal management technologies are helping meet aggressive power density, specific power, cost, and reliability targets for power electronics and electric machines. The National Renewable Energy Laboratory is working closely with numerous industry and research partners to help influence the development of components that meet aggressive performance and cost targets through the development and characterization of cooling technologies, and through thermal characterization and improvement of passive stack materials and interfaces. Thermomechanical reliability and lifetime estimation models are important enablers for industry in cost- and time-effective design.
Model testing for reliability and validity of the Outcome Expectations for Exercise Scale.
Resnick, B; Zimmerman, S; Orwig, D; Furstenberg, A L; Magaziner, J
2001-01-01
Development of a reliable and valid measure of outcome expectations for exercise appropriate for older adults will help establish the relationship between outcome expectations and exercise. Once established, this measure can be used to facilitate the development of interventions to strengthen outcome expectations and improve adherence to regular exercise in older adults. Building on initial psychometrics of the Outcome Expectations for Exercise (OEE) Scale, the purpose of the current study was to use structural equation modeling to provide additional support for the reliability and validity of this measure. The OEE scale is a 9-item measure specifically focusing on the perceived consequences of exercise for older adults. The OEE scale was given to 191 residents in a continuing care retirement community. The mean age of the participants was 85 ± 6.1 years, and the majority were female (76%), White (99%), and unmarried (76%). Using structural equation modeling, reliability was based on R2 values, and validity was based on a confirmatory factor analysis and path coefficients. There was continued evidence for reliability of the OEE based on R2 values ranging from .42 to .77, and validity with path coefficients ranging from .69 to .87, and evidence of model fit (χ2 = 69, df = 27, p < .05, NFI = .98, RMSEA = .07). The evidence of reliability and validity of this measure has important implications for clinical work and research. The OEE scale can be used to identify older adults who have low outcome expectations for exercise, and interventions can then be implemented to strengthen these expectations and thereby improve exercise behavior.
Sazonovas, A; Japertas, P; Didziapetris, R
2010-01-01
This study presents a new type of acute toxicity (LD(50)) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). Analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces--a combination called Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both compound's similarity to the training set (that accounts for similar trends in LD(50) variations within multiple bootstraps) and consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.
An integrated approach to system design, reliability, and diagnosis
NASA Technical Reports Server (NTRS)
Patterson-Hine, F. A.; Iverson, David L.
1990-01-01
The requirement for ultradependability of computer systems in future avionics and space applications necessitates a top-down, integrated systems engineering approach for design, implementation, testing, and operation. The functional analyses of hardware and software systems must be combined by models that are flexible enough to represent their interactions and behavior. The information contained in these models must be accessible throughout all phases of the system life cycle in order to maintain consistency and accuracy in design and operational decisions. One approach being taken by researchers at Ames Research Center is the creation of an object-oriented environment that integrates information about system components required in the reliability evaluation with behavioral information useful for diagnostic algorithms. Procedures have been developed at Ames that perform reliability evaluations during design and failure diagnoses during system operation. These procedures utilize information from a central source, structured as object-oriented fault trees. Fault trees were selected because they are a flexible model widely used in aerospace applications and because they give a concise, structured representation of system behavior. The utility of this integrated environment for aerospace applications in light of our experiences during its development and use is described. The techniques for reliability evaluation and failure diagnosis are discussed, and current extensions of the environment and areas requiring further development are summarized.
Cost prediction model for various payloads and instruments for the Space Shuttle Orbiter
NASA Technical Reports Server (NTRS)
Hoffman, F. E.
1984-01-01
Two objectives were undertaken: (1) to develop a cost prediction model for various payload classes of instruments and experiments for the Space Shuttle Orbiter; and (2) to show the implications of various payload classes on the costs of reliability analysis, quality assurance, environmental design requirements, documentation, parts selection, and other reliability-enhancing activities.
Creating High Reliability in Health Care Organizations
Pronovost, Peter J; Berenholtz, Sean M; Goeschel, Christine A; Needham, Dale M; Sexton, J Bryan; Thompson, David A; Lubomski, Lisa H; Marsteller, Jill A; Makary, Martin A; Hunt, Elizabeth
2006-01-01
Objective The objective of this paper was to present a comprehensive approach to help health care organizations reliably deliver effective interventions. Context Reliability in healthcare translates into using valid rate-based measures. Yet high reliability organizations have proven that the context in which care is delivered, called organizational culture, also has important influences on patient safety. Model for Improvement Our model to improve reliability, which also includes interventions to improve culture, focuses on valid rate-based measures. This model includes (1) identifying evidence-based interventions that improve the outcome, (2) selecting interventions with the most impact on outcomes and converting to behaviors, (3) developing measures to evaluate reliability, (4) measuring baseline performance, and (5) ensuring patients receive the evidence-based interventions. The comprehensive unit-based safety program (CUSP) is used to improve culture and guide organizations in learning from mistakes that are important, but cannot be measured as rates. Conclusions We present how this model was used in over 100 intensive care units in Michigan to improve culture and eliminate catheter-related blood stream infections—both were accomplished. Our model differs from existing models in that it incorporates efforts to improve a vital component for system redesign—culture, it targets 3 important groups—senior leaders, team leaders, and front line staff, and facilitates change management—engage, educate, execute, and evaluate for planned interventions. PMID:16898981
NASA Technical Reports Server (NTRS)
Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.
2005-01-01
This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
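As a worked illustration of how three of these drivers interact (engine count, engine-out design, and catastrophic fraction), the sketch below computes stage reliability when at most one benign engine shutdown can be tolerated. This is a generic textbook-style calculation with assumed numbers, not the paper's model.

```python
from math import comb

n = 4                 # engines on the stage (assumed)
R_e = 0.995           # single-engine mission reliability (assumed)
catastrophic = 0.15   # fraction of engine failures that are catastrophic

p_fail = 1.0 - R_e
benign = (1.0 - catastrophic) * p_fail   # survivable shutdown probability

# Stage succeeds if all engines run, or exactly one shuts down benignly
R_stage = R_e**n + comb(n, 1) * benign * R_e**(n - 1)
print(f"stage reliability with one engine-out tolerated: {R_stage:.5f}")
print(f"same stage with no engine-out capability: {R_e**n:.5f}")
```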
Reliability and Maintainability Analysis: A Conceptual Design Model
1972-03-01
Elements for a System: I. Research and Development: A. Preliminary design and engineering; B. Fabrication of test equipment; C. Test operations; D. ... reliability requirements, little, if any, modularization and automatic test features would be incorporated in the subsystem design, limited reliability/maintainability testing and monitoring would be conducted during development, and little Quality Control effort in the reliability/maintainability ...
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1989-01-01
Presently there are many opportunities for the application of ceramic materials at elevated temperatures. In the near future ceramic materials are expected to supplant high temperature metal alloys in a number of applications. It thus becomes essential to develop a capability to predict the time-dependent response of these materials. The creep rupture phenomenon is discussed, and a time-dependent reliability model is outlined that integrates continuum damage mechanics principles and Weibull analysis. Several features of the model are presented in a qualitative fashion, including predictions of both reliability and hazard rate. In addition, a comparison of the continuum and the microstructural kinetic equations highlights a strong resemblance in the two approaches.
E-Service Quality Evaluation on E-Government Website: Case Study BPJS Kesehatan Indonesia
NASA Astrophysics Data System (ADS)
Rasyid, A.; Alfina, I.
2017-01-01
This research develops a model to evaluate the quality of e-services in e-government. The proposed model consists of seven dimensions: web design, reliability, responsiveness, privacy and security, personalization, information, and ease of use. The model is used to measure the quality of the e-registration service of BPJS Kesehatan, an Indonesian government health insurance program. Validity and reliability testing show that, of the seven dimensions proposed, only four are suitable for the case study. The results show that the BPJS Kesehatan e-registration service performs well on the reliability and responsiveness dimensions, while on the web design and ease-of-use dimensions the e-service still needs to be optimized.
ECO-DRIVING MODELING ENVIRONMENT
DOT National Transportation Integrated Search
2015-11-01
This research project aims to examine the eco-driving modeling capabilities of different traffic modeling tools available and to develop a driver-simulator-based eco-driving modeling tool to evaluate driver behavior and to reliably estimate or measur...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Mandelli, Diego; Rasmussen, Martin
2016-06-01
This report presents an application of a computation-based human reliability analysis (HRA) framework called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER). HUNTER has been developed not as a standalone HRA method but rather as a framework that ties together different HRA methods to model the dynamic risk of human activities as part of an overall probabilistic risk assessment (PRA). While we have adopted particular methods to build an initial model, the HUNTER framework is meant to be intrinsically flexible to new pieces that achieve particular modeling goals. In the present report, the HUNTER implementation has the following goals:
• Integration with a high-fidelity thermal-hydraulic model capable of modeling nuclear power plant behaviors and transients
• Consideration of a PRA context
• Incorporation of a solid psychological basis for operator performance
• Demonstration of a functional dynamic model of a plant upset condition and appropriate operator response
This report outlines these efforts and presents the case study of a station blackout scenario to demonstrate the various modules developed to date under the HUNTER research umbrella.
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
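One common way to encode such a prior, consistent with the point-estimate-plus-spread description above, is a lognormal distribution whose error factor is widened as the applicability of the generic source weakens. A minimal sketch; the numbers are assumptions, not values from the presentation.

```python
import numpy as np
from scipy.stats import lognorm

median = 1e-5        # generic database failure rate (per hour), assumed
error_factor = 10.0  # EF = 95th percentile / median; widened to reflect
                     # weak applicability of the generic source

# For a lognormal, EF = exp(1.645 * sigma)
sigma = np.log(error_factor) / 1.645
prior = lognorm(s=sigma, scale=median)

print("prior 5th/50th/95th percentiles:", prior.ppf([0.05, 0.50, 0.95]))
```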
Chapter 15: Reliability of Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Shuangwen; O'Connor, Ryan
The global wind industry has witnessed exciting developments in recent years. The future will be even brighter with further reductions in capital and operation and maintenance costs, which can be accomplished with improved turbine reliability, especially when turbines are installed offshore. One opportunity for the industry to improve wind turbine reliability is through the exploration of reliability engineering life data analysis based on readily available data or maintenance records collected at typical wind plants. If adopted and conducted appropriately, these analyses can quickly save operation and maintenance costs in a potentially impactful manner. This chapter discusses wind turbine reliability by highlighting the methodology of reliability engineering life data analysis. It first briefly discusses fundamentals of wind turbine reliability and the current industry status. Then, the reliability engineering method for life analysis, including data collection, model development, and forecasting, is presented in detail and illustrated through two case studies. The chapter concludes with some remarks on potential opportunities to improve wind turbine reliability. An owner and operator's perspective is taken, and mechanical components are used to exemplify the potential benefits of reliability engineering analysis to improve wind turbine reliability and availability.
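A minimal sketch of the life data analysis the chapter highlights, assuming a small set of invented times-to-failure for one mechanical component: fit a two-parameter Weibull and read off a B10 life.

```python
import numpy as np
from scipy.stats import weibull_min

# Times-to-failure in operating hours (invented, not plant data)
ttf = np.array([3200.0, 4100.0, 5200.0, 6100.0, 7400.0, 8900.0, 10300.0])

# Two-parameter Weibull fit (location fixed at zero)
shape, loc, scale = weibull_min.fit(ttf, floc=0)
print(f"Weibull shape (beta) = {shape:.2f}, scale (eta) = {scale:.0f} h")

# B10 life: time by which 10% of units are expected to have failed
b10 = scale * (-np.log(0.9)) ** (1.0 / shape)
print(f"B10 life = {b10:.0f} h")
```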
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
NASA Astrophysics Data System (ADS)
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on limited experimental data and expert judgments, we divide distribution-hypothesis-based reliability estimation into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model for the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by imprecise judgments about the distribution hypothesis.
Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Wang, Xue; Adams, John L; Chow, Warren B; Lawson, Elise H; Bilimoria, Karl Y; Richards, Karen; Ko, Clifford Y
2013-03-01
To develop a reliable, robust, parsimonious, risk-adjusted 30-day composite colectomy outcome measure. A fundamental aspect in the pursuit of high-quality care is the development of valid and reliable performance measures in surgery. Colon resection is associated with appreciable morbidity and mortality and therefore is an ideal quality improvement target. From 2010 American College of Surgeons National Surgical Quality Improvement Program data, patients were identified who underwent colon resection for any indication. A composite outcome of death or any serious morbidity within 30 days of the index operation was established. A 6-predictor, parsimonious model was developed and compared with a more complex model with more variables. National caseload requirements were calculated on the basis of increasing reliability thresholds. From 255 hospitals, 22,346 patients were accrued who underwent a colon resection in 2010, most commonly for neoplasm (46.7%). A mortality or serious morbidity event occurred in 4461 patients (20.0%). At the hospital level, the median composite event rate was 20.7% (interquartile range: 15.8%-26.3%). The parsimonious model performed similarly to the full model (Akaike information criterion: 19,411 vs 18,988), and hospital-level performance comparisons were highly correlated (R = 0.97). At a reliability threshold of 0.4, 56 annual colon resections would be required and achievable at an estimated 42% of US and 69% of American College of Surgeons National Surgical Quality Improvement Program hospitals. This 42% of US hospitals performed approximately 84% of all colon resections in the country in 2008. It is feasible to design a measure with a composite outcome of death or serious morbidity after colon surgery that has a low burden for data collection, has substantial clinical importance, and has acceptable reliability.
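The caseload threshold quoted above is consistent with the standard hierarchical-model expression for profiling reliability, reliability(n) = var_between / (var_between + var_within / n); the sketch below shows the shape of that relationship with variance components chosen, as an assumption rather than from the paper, so that 56 annual cases yield reliability 0.4.

```python
# Hospital-profiling reliability as a function of annual caseload n.
# Variance components are invented to reproduce the quoted threshold.
var_between = 0.05   # between-hospital variance (assumed)
var_within = 4.2     # within-hospital variance (assumed)

def reliability(n):
    return var_between / (var_between + var_within / n)

for n in (20, 56, 150, 500):
    print(f"n = {n:3d}  reliability = {reliability(n):.2f}")
```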
Lifetime Reliability Prediction of Ceramic Structures Under Transient Thermomechanical Loads
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama J.; Gyekenyesi, John P.
2005-01-01
An analytical methodology is developed to predict the probability of survival (reliability) of ceramic components subjected to harsh thermomechanical loads that can vary with time (transient reliability analysis). This capability enables more accurate prediction of ceramic component integrity against fracture in situations such as turbine startup and shutdown, operational vibrations, atmospheric reentry, or other rapid heating or cooling situations (thermal shock). The transient reliability analysis methodology developed herein incorporates the following features: fast-fracture transient analysis (reliability analysis without slow crack growth, SCG); transient analysis with SCG (reliability analysis with time-dependent damage due to SCG); a computationally efficient algorithm to compute the reliability for components subjected to repeated transient loading (block loading); cyclic fatigue modeling using a combined SCG and Walker fatigue law; proof testing for transient loads; and Weibull and fatigue parameters that are allowed to vary with temperature or time. Component-to-component variation in strength (stochastic strength response) is accounted for with the Weibull distribution, and either the principle of independent action or the Batdorf theory is used to predict the effect of multiaxial stresses on reliability. The reliability analysis can be performed either as a function of the component surface (for surface-distributed flaws) or component volume (for volume-distributed flaws). The transient reliability analysis capability has been added to the NASA CARES/ Life (Ceramic Analysis and Reliability Evaluation of Structures/Life) code. CARES/Life was also updated to interface with commercially available finite element analysis software, such as ANSYS, when used to model the effects of transient load histories. Examples are provided to demonstrate the features of the methodology as implemented in the CARES/Life program.
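As a small illustration of the volume-flaw Weibull reliability combined with the principle of independent action that CARES/Life implements in far greater generality, the sketch below multiplies element survival probabilities for a coarsely discretized component; the Weibull parameters, element volumes, and stresses are illustrative only, and consistent units are assumed to be folded into the scale parameter.

```python
import math

m = 10.0         # Weibull modulus (assumed)
sigma_0 = 400.0  # Weibull scale parameter with volume units folded in

# (element volume, max principal stress in MPa) -- invented values
elements = [
    (1e-6, 180.0),
    (2e-6, 220.0),
    (1e-6, 260.0),
]

# Survival probability of each element: exp(-V * (sigma / sigma_0)^m);
# PIA treats elements as independent, so component reliability is the
# product of element reliabilities.
R = 1.0
for V, sigma in elements:
    R *= math.exp(-V * (sigma / sigma_0) ** m)

print(f"component survival probability: {R:.6f}")
```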
ERIC Educational Resources Information Center
Wesolowski, Brian C.; Amend, Ross M.; Barnstead, Thomas S.; Edwards, Andrew S.; Everhart, Matthew; Goins, Quentin R.; Grogan, Robert J., III; Herceg, Amanda M.; Jenkins, S. Ira; Johns, Paul M.; McCarver, Christopher J.; Schaps, Robin E.; Sorrell, Gary W.; Williams, Jonathan D.
2017-01-01
The purpose of this study was to describe the development of a valid and reliable rubric to assess secondary-level solo instrumental music performance based on principles of invariant measurement. The research questions that guided this study included (1) What is the psychometric quality (i.e., validity, reliability, and precision) of a scale…
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of the uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering the single best model, variances that stem from uncertainty in the model structure will be ignored. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, considering the single best model, the calculated reliability will differ from the desirable reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance constrained optimization along with the HBMA framework to find the optimal location and pumping rate for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: a very high extraction rate drives the prediction variances of chloride concentration at the production wells toward zero regardless of which HBMA models are used.
Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). 
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
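The Monte Carlo NPV argument can be made concrete with a short simulation. The sketch below is a minimal illustration, not the paper's model: the Poisson failure counts, the lognormal loss per failure, the revenue figure, and the discount rate are all invented parameters, chosen only to show how the simulation exposes NPV variability that an expected-value cash-flow model hides.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions (not the paper's parameters): failures per year
# are Poisson, and each failure carries a lognormal loss, so losses differ
# between failures -- exactly the situation the framework addresses.
YEARS = 10
DISCOUNT_RATE = 0.08      # annual discount rate
FAILURE_RATE = 1.5        # expected failures per year
ANNUAL_REVENUE = 2.0e6    # gross cash inflow per year
N_TRIALS = 20_000

def simulate_npv():
    npv = 0.0
    for year in range(1, YEARS + 1):
        n_failures = rng.poisson(FAILURE_RATE)
        losses = rng.lognormal(mean=11.8, sigma=0.6, size=n_failures).sum()
        npv += (ANNUAL_REVENUE - losses) / (1.0 + DISCOUNT_RATE) ** year
    return npv

npvs = np.array([simulate_npv() for _ in range(N_TRIALS)])
# Unlike an expected-value cash-flow model, the simulation exposes the full
# variability of the NPV induced by the random number and size of failures.
print(f"mean NPV        = {npvs.mean():,.0f}")
print(f"5th percentile  = {np.percentile(npvs, 5):,.0f}")
print(f"95th percentile = {np.percentile(npvs, 95):,.0f}")
```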
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. The study concludes with fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during software development.
NASA Astrophysics Data System (ADS)
El-Jaat, Majda; Hulley, Michael; Tétreault, Michel
2018-02-01
Despite the broad impact and importance of saltwater intrusion in coastal aquifers, little research has been directed towards forecasting saltwater intrusion in areas where the source of saltwater is uncertain. Saline contamination in inland groundwater supplies is a concern for numerous communities in the southern US, including the city of Deltona, Florida. Furthermore, conventional numerical tools for forecasting saltwater contamination are heavily dependent on reliable characterization of the physical properties of the underlying aquifers, information that is often absent or challenging to obtain. To overcome these limitations, a reliable alternative data-driven model for forecasting salinity in a groundwater supply was developed for Deltona using the fast orthogonal search (FOS) method. FOS was applied to monthly water-demand data and corresponding chloride concentrations at water supply wells. Groundwater salinity measurements from Deltona water supply wells were used to evaluate the forecasting capability and accuracy of the FOS model. Accurate and reliable groundwater salinity forecasting is necessary to support effective and sustainable coastal-water resource planning and management. The 27 available water supply wells for Deltona were randomly split into three test groups for the purposes of FOS model development and performance assessment. Based on four performance indices (RMSE, RSR, NSEC, and R), the FOS model proved to be a reliable and robust forecaster of groundwater salinity. FOS is relatively inexpensive to apply, is not based on rigorous physical characterization of the water supply aquifer, and yields reliable estimates of groundwater salinity in active water supply wells.
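The four performance indices reported for the FOS model can be computed directly from observed and forecast series. The sketch below assumes the standard hydrologic definitions of RMSE, RSR, Nash-Sutcliffe efficiency, and Pearson's R; the example chloride concentrations are made up.

```python
import numpy as np

def performance_indices(obs, sim):
    """RMSE, RSR, Nash-Sutcliffe efficiency (NSEC) and Pearson's R."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    rmse = np.sqrt(np.mean(err ** 2))
    rsr = rmse / np.std(obs)        # RMSE normalized by st. dev. of observations
    nsec = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"RMSE": rmse, "RSR": rsr, "NSEC": nsec, "R": r}

# Made-up monthly chloride concentrations (mg/L): observed vs FOS forecast
observed = [210, 230, 250, 262, 280, 301, 315, 322]
forecast = [205, 238, 247, 270, 275, 310, 309, 330]
print(performance_indices(observed, forecast))
```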
Social franchising primary healthcare clinics--a model for South African National Health Insurance?
Robinson, Andrew Ken Lacey
2015-09-21
This article describes the first government social franchise initiative in the world to deliver a 'brand' of quality primary healthcare (PHC) clinic services. Quality and standards of care are not uniformly and reliably delivered across government PHC clinics in North West Province, South Africa, despite government support, numerous policies, guidelines and in-service training sessions provided to staff. Currently the strongest predictor of good-quality service is the skill and dedication of the facility manager. A project utilising the social franchising business model, harvesting best practices, has been implemented with the aim of developing a system to ensure reliably excellent healthcare service provision in every facility in North West. The services of social franchising consultants have been procured to develop the business model to drive this initiative. Best practices have been benchmarked, and policies, guidelines and clinic support systems have been reviewed, evaluated and assessed, and incorporated into the business plan. A pilot clinic has been selected to refine and develop a working social franchise model. This will then be replicated in one clinic to confirm proof of concept before further scale-up. The social franchise business model can provide solutions to a reliable and recognisable 'brand' of quality universal coverage of healthcare services.
Modeling and experimental characterization of electromigration in interconnect trees
NASA Astrophysics Data System (ADS)
Thompson, C. V.; Hau-Riege, S. P.; Andleigh, V. K.
1999-11-01
Most modeling and experimental characterization of interconnect reliability is focused on simple straight lines terminating at pads or vias. However, laid-out integrated circuits often have interconnects with junctions and wide-to-narrow transitions. In carrying out circuit-level reliability assessments it is important to be able to assess the reliability of these more complex shapes, generally referred to as 'trees.' An interconnect tree consists of continuously connected high-conductivity metal within one layer of metallization. Trees terminate at diffusion barriers at vias and contacts, and, in the general case, can have more than one terminating branch when they include junctions. We have extended the understanding of 'immortality', demonstrated and analyzed for straight stud-to-stud lines, to trees of arbitrary complexity. This leads to a hierarchical approach for identifying immortal trees for specific circuit layouts and models for operation. To complete a circuit-level reliability analysis, it is also necessary to estimate the lifetimes of the mortal trees. We have developed simulation tools that allow modeling of stress evolution and failure in arbitrarily complex trees. We are testing our models and simulations through comparisons with experiments on simple trees, such as lines broken into two segments with different currents in each segment. Models, simulations, and early experimental results on the reliability of interconnect trees are shown to be consistent.
Engine System Model Development for Nuclear Thermal Propulsion
NASA Technical Reports Server (NTRS)
Nelson, Karl W.; Simpson, Steven P.
2006-01-01
In order to design, analyze, and evaluate conceptual Nuclear Thermal Propulsion (NTP) engine systems, an improved NTP design and analysis tool has been developed. The NTP tool utilizes the Rocket Engine Transient Simulation (ROCETS) system tool and many of the routines from the Enabler reactor model found in Nuclear Engine System Simulation (NESS). Improved non-nuclear component models and an external shield model were added to the tool. With the addition of a nearly complete system reliability model, the tool will provide performance, sizing, and reliability data for NERVA-Derived NTP engine systems. A new detailed reactor model is also being developed and will replace Enabler. The new model will allow more flexibility in reactor geometry and include detailed thermal hydraulics and neutronics models. A description of the reactor, component, and reliability models is provided. Another key feature of the modeling process is the use of comprehensive spreadsheets for each engine case. The spreadsheets include individual worksheets for each subsystem with data, plots, and scaled figures, making the output very useful to each engineering discipline. Sample performance and sizing results with the Enabler reactor model are provided including sensitivities. Before selecting an engine design, all figures of merit must be considered including the overall impacts on the vehicle and mission. Evaluations based on key figures of merit of these results and results with the new reactor model will be performed. The impacts of clustering and external shielding will also be addressed. Over time, the reactor model will be upgraded to design and analyze other NTP concepts with CERMET and carbide fuel cores.
Validation of urban freeway models.
DOT National Transportation Integrated Search
2015-01-01
This report describes the methodology, data, conclusions, and enhanced models regarding the validation of two sets of models developed in the Strategic Highway Research Program 2 (SHRP 2) Reliability Project L03, Analytical Procedures for Determining...
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
ERIC Educational Resources Information Center
Lee, Jihyun; Jang, Seonyoung
2014-01-01
Instructional design (ID) models have been developed to promote understandings of ID reality and guide ID performance. As the number and diversity of ID practices grows, implicit doubts regarding the reliability, validity, and usefulness of ID models suggest the need for methodological guidance that would help to generate ID models that are…
Reliability Prediction Approaches For Domestic Intelligent Electric Energy Meter Based on IEC62380
NASA Astrophysics Data System (ADS)
Li, Ning; Tong, Guanghua; Yang, Jincheng; Sun, Guodong; Han, Dongjun; Wang, Guixian
2018-01-01
The reliability of the intelligent electric energy meter is a crucial issue considering its large-scale application and the safety of the national intelligent grid. This paper develops a reliability prediction procedure for the domestic intelligent electric energy meter according to IEC62380, with particular attention to determining model parameters under domestic working conditions. A case study is provided to demonstrate the effectiveness and validity of the approach.
Reliability and accuracy of Crystaleye spectrophotometric system.
Chen, Li; Tan, Jian Guo; Zhou, Jian Feng; Yang, Xu; Du, Yang; Wang, Fang Ping
2010-01-01
To develop an in vitro shade-measuring model to evaluate the reliability and accuracy of the Crystaleye spectrophotometric system, a newly developed spectrophotometer. Four shade guides, VITA Classical, VITA 3D-Master, Chromascop and Vintage Halo NCC, were measured with the Crystaleye spectrophotometer in a standardised model, ten times for each of the 107 shade tabs. The shade-matching results and the CIE L*a*b* values of the cervical, body and incisal regions for each measurement were automatically analysed using the supporting software. Reliability and accuracy were calculated for each shade tab, both as a percentage and as a colour difference (ΔE). Differences were analysed by one-way ANOVA in the cervical, body and incisal regions. The range of reliability was 88.81% to 98.97% and 0.13 to 0.24 ΔE units, and that of accuracy was 44.05% to 91.25% and 1.03 to 1.89 ΔE units. Significant differences in reliability and accuracy were found between the body region and the cervical and incisal regions. Comparisons made among regions and shade guides revealed that evaluation in ΔE was more prone to disclose the differences. Measurements with the Crystaleye spectrophotometer had similar, high reliability in different shade guides and regions, indicating predictable repeated measurements. Accuracy in the body region was high and less variable compared with the cervical and incisal regions.
Bishop, Chris; Paul, Gunther; Thewlis, Dominic
2013-04-01
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when wearing shoes. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive to describe the kinematics of the foot-shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four-segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC=0.75-0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint--MDD90=2.17-9.36°, tarsometatarsal joint--MDD90=1.03-9.29° and the metatarsophalangeal joint--MDD90=1.75-9.12°. The proposed thresholds are specific to the description of shod motion, and can be used in future research aimed at comparing different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
Reliability of the Suicide Opinion Questionnaire.
ERIC Educational Resources Information Center
Rogers, James R.; DeShon, Richard P.
The lack of systematic psychometric information on the Suicide Opinion Questionnaire (SOQ) was addressed by investigating the factor structure and reliability of the eight-factor clinical scale model (mental illness, cry for help, right to die, religion, impulsivity, normality, aggression, and moral evil), developed for interpreting responses to…
Sabater-Galindo, Marta; Sabater-Hernández, Daniel; Ruiz de Maya, Salvador; Gastelurrutia, Miguel Angel; Martínez-Martínez, Fernando; Benrimoj, Shalom I
2017-06-01
Professional pharmaceutical services may impact patients' health behaviour as well as influence patients' perceptions of the pharmacist image. The Health Belief Model predicts health-related behaviours using patients' beliefs. However, health beliefs (HBs) could transcend the prediction of health behaviour and may have an impact on patients' perceptions of the pharmacist image. The objective of this study was to develop and test a model that relates patients' HBs to their perception of the image of the pharmacist, and to assess whether the provision of pharmacy services (intervention group, IG) influences this perception compared with usual care (control group). A qualitative study was undertaken and a questionnaire was created for the development of the model. The content, dimensions, validity and reliability of the questionnaire were pre-tested qualitatively and in a pilot mail survey. The reliability and validity of the proposed model were tested using confirmatory factor analysis (CFA). Structural equation modelling (SEM) was used to explain relationships between dimensions of the final model and to analyse differences between groups. As a result, a final model was developed. CFA concluded that the model was valid and reliable (goodness-of-fit indices: χ²(80) = 125.726, p = .001, RMSEA = .04, SRMR = .04, GFI = .997, NFI = .93, CFI = .974). SEM indicated that 'Perceived benefits' were significantly associated with 'Perceived pharmacist image' in the whole sample. Differences were found in the IG, with 'Self-efficacy' also significantly influencing 'Perceived pharmacist image'. A model of patients' HBs related to their image of the pharmacist was developed and tested. When pharmacists deliver professional services, these services modify some patients' HBs, which in turn influence the public perception of the pharmacist.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.; Britt, J.; Birkmire, R.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
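The practical consequence of the semi-Markov finding is that exponential holding times with the correct mean still misstate the tails. A minimal illustration with invented numbers: a Weibull holding time (shape < 1, the heavy-tailed form often seen in error data) and an exponential one share the same mean yet assign very different probabilities to long holding times.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

# The study found that holding times in key operational and error states are
# not simple exponentials. With the mean held fixed, compare the tail of an
# exponential distribution (the Markov assumption) against a heavy-tailed
# Weibull (a semi-Markov choice). All numbers are illustrative assumptions.
mean_up = 100.0                          # mean holding time (hours)
shape = 0.5                              # Weibull shape < 1: heavy tail
scale = mean_up / gamma(1.0 + 1.0 / shape)   # keeps the mean at mean_up

exp_samples = rng.exponential(mean_up, 100_000)
weib_samples = scale * rng.weibull(shape, 100_000)

for thresh in (300.0, 500.0):
    p_exp = (exp_samples > thresh).mean()
    p_weib = (weib_samples > thresh).mean()
    print(f"P(holding time > {thresh:.0f} h): "
          f"exponential {p_exp:.4f} vs Weibull {p_weib:.4f}")
```

A Markov model fitted to the same mean would badly underestimate the frequency of very long holding times, which is why the sensitivity analysis in the study matters.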
Increasing the reliability of ecological models using modern software engineering techniques
Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff
2009-01-01
Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
NASA Technical Reports Server (NTRS)
Karns, James
1993-01-01
The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This two-part study aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work, as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework based on the HFACS model (Shappell and Wiegmann, 2001a,b) for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
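Cohen's kappa is the agreement statistic being tracked here, and it is simple to reproduce. In the sketch below, the rater labels are hypothetical stand-ins for HFACS-style top-tier categories; scikit-learn's cohen_kappa_score does the computation.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of the same audit findings by two raters,
# using HFACS-style top-tier categories (labels invented for illustration).
rater_a = ["unsafe_act", "precondition", "unsafe_act", "supervision",
           "organizational", "unsafe_act", "precondition", "supervision"]
rater_b = ["unsafe_act", "precondition", "precondition", "supervision",
           "organizational", "unsafe_act", "unsafe_act", "supervision"]

# Values between 0.5 and 0.8, as reported in the study, are usually read
# as moderate-to-substantial inter-rater agreement.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")
```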
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Liu, Lie; Liu, S. T.; Yuan, Fuh-Gwo; Beard, Shawn
2011-04-01
Materials State Awareness (MSA) goes beyond traditional NDE and SHM in its challenge to characterize the current state of material damage before the onset of macro-damage such as cracks. A highly reliable, minimally invasive system for MSA of aerospace structures, naval structures, and next-generation space systems is critically needed. Development of such a system will require a reliable SHM system that can detect the onset of damage well before a flaw grows to a critical size. Therefore, it is important to develop an integrated SHM system that not only detects macroscale damage in structures but also provides an early indication of flaw precursors and microdamage. The early warning for flaw precursors and their evolution provided by an SHM system can then be used to define remedial strategies before the structural damage leads to failure, and to significantly improve the safety and reliability of the structures. Thus, this article discusses a preliminary concept for developing the Hybrid Distributed Sensor Network Integrated with Self-learning Symbiotic Diagnostic Algorithms and Models to accurately and reliably detect the precursors to damage that occur in the structure. Experiments conducted in a laboratory environment show the potential of the proposed technique.
Emission of pesticides into the air
Van den Berg; Kubiak, R.; Benjey, W.G.; Majewski, M.S.; Yates, S.R.; Reeves, G.L.; Smelt, J.H.; Van Der Linden, A. M. A.
1999-01-01
During and after the application of a pesticide in agriculture, a substantial fraction of the dosage may enter the atmosphere and be transported over varying distances downwind of the target. The rate and extent of the emission during application, predominantly as spray particle drift, depends primarily on the application method (equipment and technique), the formulation and environmental conditions, whereas the emission after application depends primarily on the properties of the pesticide, soils, crops and environmental conditions. The fraction of the dosage that misses the target area may be high in some cases and more experimental data on this loss term are needed for various application types and weather conditions. Such data are necessary to test spray drift models, and for further model development and verification as well. Following application, the emission of soil fumigants and soil incorporated pesticides into the air can be measured and computed with reasonable accuracy, but further model development is needed to improve the reliability of the model predictions. For soil surface applied pesticides reliable measurement methods are available, but there is not yet a reliable model. Further model development is required which must be verified by field experiments. Few data are available on pesticide volatilization from plants and more field experiments are also needed to study the fate processes on the plants. Once this information is available, a model needs to be developed to predict the volatilization of pesticides from plants, which, again, should be verified with field measurements. For regional emission estimates, a link between data on the temporal and spatial pesticide use and a geographical information system for crops and soils with their characteristics is needed.
Reliability Analysis of the Adult Mentoring Assessment for Extension Professionals
ERIC Educational Resources Information Center
Denny, Marina D'Abreau
2017-01-01
The Adult Mentoring Assessment for Extension Professionals will help mentors develop an accurate profile of their mentoring style with adult learners and identify areas of proficiency and deficiency based on six constructs--relationship, information, facilitation, confrontation, modeling, and vision. This article reports on the reliability of this…
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
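Architecture-based predictions of this kind are commonly computed with Cheung's discrete-time Markov chain model, which the sketch below follows; the component names, transfer probabilities, and per-visit reliabilities are invented for illustration and are not taken from the case study.

```python
import numpy as np

# Control-flow transfer probabilities between tool components (hypothetical):
# 0 = acquisition, 1 = parsing, 2 = analysis, 3 = reporting (exit component).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.3, 0.0, 0.7],   # analysis sometimes loops back to parsing
    [0.0, 0.0, 0.0, 0.0],
])
R = np.array([0.999, 0.995, 0.990, 0.998])   # per-visit component reliabilities

# Cheung-style model: weight each transfer by the reliability of the component
# being left, then sum over all execution paths with the fundamental matrix.
Q = R[:, None] * P
S = np.linalg.inv(np.eye(len(R)) - Q)
system_reliability = S[0, -1] * R[-1]
print(f"system reliability = {system_reliability:.4f}")
```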
DEVELOPMENT OF A LAND-SURFACE MODEL PART I: APPLICATION IN A MESOSCALE METEOROLOGY MODEL
Parameterization of land-surface processes and consideration of surface inhomogeneities are very important to mesoscale meteorological modeling applications, especially those that provide information for air quality modeling. To provide crucial, reliable information on the diurn...
The reliability of the Australasian Triage Scale: a meta-analysis
Ebrahimi, Mohsen; Heydari, Abbas; Mazlom, Reza; Mirhaghi, Amir
2015-01-01
BACKGROUND: Although the Australasian Triage Scale (ATS) was developed two decades ago, its reliability has not been defined; therefore, we present a meta-analysis of the reliability of the ATS in order to reveal to what extent the ATS is reliable. DATA SOURCES: Electronic databases were searched to March 2014. The included studies were those that reported sample size, reliability coefficients, and an adequate description of the ATS reliability assessment. The Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were used. Two reviewers independently examined abstracts and extracted data. The effect size was obtained by the z-transformation of reliability coefficients. Data were pooled with random-effects models, and meta-regression was done based on the method-of-moments estimator. RESULTS: Six studies were finally included. The pooled coefficient for the ATS was substantial at 0.428 (95% CI 0.340–0.509). The rate of mis-triage was less than fifty percent. Agreement on the adult version is higher than on the pediatric version. CONCLUSION: The ATS has shown an acceptable level of overall reliability in the emergency department, but it needs further development to reach an almost perfect agreement. PMID:26056538
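The pooling step can be sketched in a few lines. The example below uses the Fisher z-transformation with a simple inverse-variance (fixed-effect) pool, whereas the meta-analysis itself used random-effects models with a method-of-moments estimator; the coefficients and sample sizes are illustrative, not the included studies' values.

```python
import numpy as np

# Illustrative reliability coefficients and sample sizes for six studies.
kappas = np.array([0.40, 0.45, 0.38, 0.52, 0.41, 0.44])
n = np.array([120, 200, 90, 150, 110, 180])

# Fisher z-transformation stabilises the variance of the coefficients;
# a fixed-effect inverse-variance pool is shown for brevity.
z = np.arctanh(kappas)
w = n - 3                                  # approximate inverse variance of z
z_pooled = np.sum(w * z) / np.sum(w)
se = 1.0 / np.sqrt(np.sum(w))
lo, hi = np.tanh(z_pooled - 1.96 * se), np.tanh(z_pooled + 1.96 * se)
print(f"pooled coefficient = {np.tanh(z_pooled):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```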
Hall, William J
2016-11-01
This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability testing, and confirmatory factor analysis. A sample of 275 middle school students was used to examine the psychometric properties and factor structure of the BullyHARM, which consists of 22 items and 6 subscales: physical bullying, verbal bullying, social/relational bullying, cyber-bullying, property bullying, and sexual bullying. First-order and second-order factor models were evaluated. Results demonstrate that the first-order factor model had superior fit. Results of reliability testing indicate that the BullyHARM scale and subscales have very good internal consistency reliability. Findings indicate that the BullyHARM has good properties regarding content validation and respondent-related validation and is a promising instrument for measuring bullying victimization in school.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, John M.; Coffin, Peter; Robbins, Brian A.
Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
An experiment in software reliability: Additional analyses using data from automated replications
NASA Technical Reports Server (NTRS)
Dunham, Janet R.; Lauterbach, Linda A.
1988-01-01
A study undertaken to collect software error data of laboratory quality, for use in the development of credible methods for predicting the reliability of software used in life-critical applications, is summarized. The software error data reported were acquired through automated repetitive-run testing of three independent implementations of a launch interceptor condition module of a radar tracking problem. The results are based on 100 test applications, to accumulate a sufficient sample size for error rate estimation. The data collected are used to confirm the results of two Boeing studies, reported in NASA-CR-165836, Software Reliability: Repetitive Run Experimentation and Modeling, and NASA-CR-172378, Software Reliability: Additional Investigations into Modeling With Replicated Experiments, respectively. That is, the results confirm the log-linear pattern of software error rates and reject the hypothesis of equal error rates per individual fault. This rejection casts doubt on the assumption that a program's failure rate is a constant multiple of the number of residual bugs, an assumption which underlies some of the current models of software reliability. The data also raise new questions concerning the phenomenon of interacting faults.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
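The core aggregation idea can be sketched compactly. The nested series/parallel architecture description below is an assumed format of our own, not the patent's; it only shows how stored low-level reliability figures roll up into a system-level number.

```python
# A minimal sketch of aggregating stored low-level reliability models into a
# system-level model from an architecture description. The description format
# ('series'/'parallel' nesting) and all values are illustrative assumptions.
LOW_LEVEL_MODELS = {          # component -> probability of surviving mission
    "cpu": 0.999, "bus": 0.9995, "sensor": 0.99, "actuator": 0.995,
}

def system_reliability(node):
    """Evaluate a nested ('series'|'parallel', [children...]) description."""
    if isinstance(node, str):
        return LOW_LEVEL_MODELS[node]
    kind, children = node
    rs = [system_reliability(c) for c in children]
    if kind == "series":                  # every part must survive
        out = 1.0
        for r in rs:
            out *= r
        return out
    fail = 1.0                            # parallel: one redundant part suffices
    for r in rs:
        fail *= 1.0 - r
    return 1.0 - fail

# Dual-redundant sensors feeding a series chain of cpu, bus, and actuator
arch = ("series", ["cpu", "bus", ("parallel", ["sensor", "sensor"]), "actuator"])
print(f"system reliability = {system_reliability(arch):.6f}")
```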
Reliability Analysis and Modeling of ZigBee Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Numerous wireless sensor networks have adopted the ZigBee open standard to develop services that promote improved communication quality in our daily lives. System and network reliability has become more important in providing stable services, because those services stop if the system or network fails. The ZigBee standard defines three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes each layer's reliability and mean time to failure (MTTF). Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. For star and tree networks, a series-system model and the reliability block diagram (RBD) technique suffice to solve the reliability problem. Because mesh networks are more complex, a division technique is applied: a mesh network is decomposed into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved as a series-parallel system under the proposed scheme. The numerical results demonstrate that mesh-network reliability increases as the number of edges in the parallel systems increases, while reliability drops quickly for all three network types as the numbers of edges and nodes grow. Greater resource usage is another factor that decreases reliability. Overall, lower network reliability results from network complexity, greater resource usage, and complex object relationships.
The 20 GHz solid state transmitter design, impatt diode development and reliability assessment
NASA Technical Reports Server (NTRS)
Picone, S.; Cho, Y.; Asmus, J. R.
1984-01-01
A single drift gallium arsenide (GaAs) Schottky barrier IMPATT diode and related components were developed. The IMPATT diode reliability was assessed. A proof of concept solid state transmitter design and a technology assessment study were performed. The transmitter design utilizes technology which, upon implementation, will demonstrate readiness for development of a POC model within the 1982 time frame and will provide an information base for flight hardware capable of deployment in a 1985 to 1990 demonstrational 30/20 GHz satellite communication system. Life test data for Schottky barrier GaAs diodes and grown junction GaAs diodes are described. The results demonstrate the viability of GaAs IMPATTs as high performance, reliable RF power sources which, based on the recommendation made herein, will surpass device reliability requirements consistent with a ten year spaceborne solid state power amplifier mission.
Nutrition environment measures survey-vending: development, dissemination, and reliability.
Voss, Carol; Klein, Susan; Glanz, Karen; Clawson, Margaret
2012-07-01
Researchers determined a need to develop an instrument to assess the vending machine environment that was comparable in reliability and validity to other Nutrition Environment Measures Survey tools and that would provide consistent and comparable data for businesses, schools, and communities. Tool development, reliability testing, and dissemination of the Nutrition Environment Measures Survey-Vending (NEMS-V) involved a collaboration of students, professionals, and community leaders. Interrater reliability testing showed high levels of agreement among trained raters on the products and evaluations of products. NEMS-V can benefit public health partners implementing policy and environmental change initiatives as part of their community wellness activities. The vending machine project will support a policy calling for state facilities to provide a minimum of 30% of foods and beverages in vending machines as healthy options, based on NEMS-V criteria, which will be used as a model for other businesses.
Refinement, Validation and Benchmarking of a Model for E-Government Service Quality
NASA Astrophysics Data System (ADS)
Magoutas, Babis; Mentzas, Gregoris
This paper presents the refinement and validation of a model for the Quality of e-Government Services (QeGS). We built upon our previous work, in which a conceptual model was identified, and focus here on the confirmatory phase of the model development process in order to arrive at a valid and reliable QeGS model. The validated model, which was benchmarked with very positive results against similar models found in the literature, can be used for measuring QeGS in a reliable and valid manner. This will form the basis for a continuous quality improvement process, unleashing the full potential of e-government services for both citizens and public administrations.
NASA Astrophysics Data System (ADS)
Kwok, Yu Fat
The main objective of this study is to develop a model for the determination of the optimum testing interval (OTI) of non-redundant standby plants. This study focuses on the emergency power generators in tall buildings in Hong Kong. The reliability model developed is applicable to any non-duplicated standby plant. In a tall building, the mobilisation of occupants is constrained by its height and the building's internal layout. Occupants' safety, amongst other safety considerations, depends highly on the reliability of the fire detection and protection system, which in turn depends on the reliability of the emergency power generation plants. A thorough literature survey shows that the practice used in determining the OTI in nuclear plants is generally applicable. Historically, the OTI in these plants is determined by balancing the testing downtime against the reliability gained from frequent testing. However, testing downtime does not exist in plants such as emergency power generators. Subsequently, sophisticated models have taken repair downtime into consideration. In this study, the algorithms for the determination of the OTI, and hence the reliability of standby plants, are reconsidered. A new concept is introduced into the subject, and a new model is developed which embraces more realistic factors found in practice. System aging and the finite life cycle of the standby plant are considered. More pragmatically, the optimum overhauling interval can also be determined from this new model. System unavailability grows with time, but can be reset by a test or overhaul. Contrary to fixed testing intervals, the OTI is determined whenever the system point unavailability exceeds a certain level, which depends on the reliability requirement of the standby system. An optimum testing plan for lowering this level to the 'minimum useful unavailability' level (see section 9.1 for more elaboration) can be determined by the new model presented. Cost effectiveness is accounted for by a new parameter 'tau min', the minimum testing interval (MTI). The MTI optimises the total number of tests and the total number of overhauls when the costs of each are available. The model sets up criteria for test and overhaul, and 'announces' the end of system life. The usefulness of the model is validated by a detailed analysis of the operating parameters from 8,500 maintenance records collected for emergency power generation plants in high-rise buildings in Hong Kong. (Abstract shortened by UMI.)
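The classical starting point that this thesis refines is the textbook trade-off for a periodically tested standby unit: unavailability accumulates between tests, but each test itself costs some downtime. The sketch below shows that balancing argument with invented rate and downtime values; it is the standard first-order result, not the thesis's refined model.

```python
from math import sqrt

# Textbook first-order model for a periodically tested standby unit:
# average unavailability U(T) ~= lam*T/2 + c/T, where lam is the standby
# failure rate, T the testing interval, and c the per-test downtime or
# repair exposure. Setting dU/dT = 0 gives T* = sqrt(2c/lam).
lam = 1.0e-4   # standby failures per hour (illustrative)
c = 4.0        # hours of unavailability attributable to each test (illustrative)

t_opt = sqrt(2.0 * c / lam)
u_opt = lam * t_opt / 2.0 + c / t_opt
print(f"optimum testing interval ~ {t_opt:.0f} h, unavailability ~ {u_opt:.4f}")
```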
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion of some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources of uncertainty, and the nature of those uncertainties are discussed. The requirement for reliability also is discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and the computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and its potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables for achieving high reliability in the Stirling Radioisotope Generator design.
Developing safety performance functions incorporating reliability-based risk measures.
Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek
2011-11-01
Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
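The SPF comparison described can be sketched with a negative binomial GLM. Everything in the example below is synthetic: the segment data, the P(nc) values, and the coefficients used to generate counts are invented, and the model form ln(mu) = b0 + b1 ln(AADT) + b2 ln(L) + b3 P(nc) is an assumed, typical SPF specification rather than the paper's exact one.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic two-lane rural highway segments (all values invented).
n_seg = 400
aadt = rng.uniform(1_000, 12_000, n_seg)      # traffic exposure
length_km = rng.uniform(0.5, 5.0, n_seg)      # segment length
p_nc = rng.beta(2, 8, n_seg)                  # probability of non-compliance

# Generate collision counts from an assumed NB process for the demonstration.
mu = np.exp(-6.0 + 0.8 * np.log(aadt) + 0.9 * np.log(length_km) + 1.5 * p_nc)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

# SPF with the reliability-based risk measure as a covariate.
X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length_km), p_nc]))
spf = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(spf.params)   # a positive b3 means higher non-compliance risk -> more collisions
```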
A female black bear denning habitat model using a geographic information system
Clark, J.D.; Hayes, S.G.; Pledger, J.M.
1998-01-01
We used the Mahalanobis distance statistic and a raster geographic information system (GIS) to model potential black bear (Ursus americanus) denning habitat in the Ouachita Mountains of Arkansas. The Mahalanobis distance statistic was used to represent the standard squared distance between sample variates in the GIS database (forest cover type, elevation, slope, aspect, distance to streams, distance to roads, and forest cover richness) and variates at known bear dens. Two models were developed: a generalized model for all den locations and another specific to dens in rock cavities. Differences between habitat at den sites and habitat across the study area were represented in 2 new GIS themes as Mahalanobis distance values. Cells similar to the mean vector derived from the known dens had low Mahalanobis distance values, and dissimilar cells had high values. The reliability of the predictive model was tested by overlaying den locations collected subsequent to original model development on the resultant den habitat themes. Although the generalized model demonstrated poor reliability, the model specific to rock dens had good reliability. Bears were more likely to choose rock den locations with low Mahalanobis distance values and less likely to choose those with high values. The model can be used to plan the timing and extent of management actions (e.g., road building, prescribed fire, timber harvest) most appropriate for those sites with high or low denning potential.
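The Mahalanobis distance computation at the heart of the model is straightforward to reproduce. In the sketch below, the habitat variables and their values are hypothetical stand-ins for the study's GIS layers; cells whose variates lie close to the mean vector of known dens score low.

```python
import numpy as np

rng = np.random.default_rng(7)

# Habitat variables at known den sites (rows): elevation, slope, distance to
# roads -- hypothetical stand-ins for the GIS layers used in the study.
dens = np.column_stack([
    rng.normal(450, 40, 30),     # elevation (m)
    rng.normal(25, 6, 30),       # slope (degrees)
    rng.normal(900, 250, 30),    # distance to roads (m)
])
mean_vec = dens.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(dens, rowvar=False))

def mahalanobis_sq(cells):
    """Squared Mahalanobis distance of each raster cell to the den signature."""
    d = cells - mean_vec
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Score a few candidate cells: low values resemble known den habitat.
cells = np.array([[460, 27, 950], [300, 5, 100], [440, 22, 1200]])
print(mahalanobis_sq(cells))
```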
Mud, models, and managers: Reaching consensus on a watershed strategy for sediment load reduction
NASA Astrophysics Data System (ADS)
Wilcock, P. R.; Cho, S. J.; Gran, K.; Belmont, P.; Hobbs, B. F.; Heitkamp, B.; Marr, J. D.
2017-12-01
Agricultural nonpoint source sediment pollution is a leading cause of impairment of U.S. waters. Sediment sources are often on private land, such that solutions require not only considerable investment, but broad acceptance among landowners. We present the story of a participatory modeling exercise whose goal was to develop a consensus strategy for reducing sediment loading from the Greater Blue Earth River Basin, a large (9,200 km²) watershed in southern Minnesota dominated by row crop agriculture. The Collaborative for Sediment Source Reduction was a stakeholder group of farmers, industry representatives, conservation groups, and regulatory agencies. We used a participatory modeling approach to promote understanding of the problem, to define the scope of solutions acceptable to farmers, to develop confidence in a watershed model, and to reach consensus on a watershed strategy. We found that no existing watershed model could provide a reliable estimate of sediment response to management actions and developed a purpose-built model that could provide reliable, transparent, and fast answers. Because increased stream flow was identified as an important driver of sediment loading, the model and solutions included both hydrologic and sediment transport components. The model was based on an annual sediment budget with management actions serving to proportionally reduce both sediment sources and sediment delivery. Importantly, the model was developed in collaboration with stakeholders, such that a shared understanding emerged regarding the modeling challenges and the reliability of information used to strongly constrain model output. The simplicity of the modeling approach supported stakeholder engagement and understanding, thereby lowering the social barrier between expert modeler and concerned stakeholder. The consensus strategy focused on water storage higher in the watershed in order to reduce river discharge and the large supply of sediment from near-channel sources. Because water storage must occur largely on private farmland, this strategy was initially opposed by some stakeholders, such that model simplicity and transparency were essential in reaching a consensus strategy.
Closing Report for NASA Cooperative Agreement NASA-1-242
NASA Technical Reports Server (NTRS)
Maung, Khin Maung
1999-01-01
Reliable estimates of exposures due to ionizing radiation are of paramount importance in achieving human exploration and development of space, and in several technologically important and scientifically significant areas impacting industrial and public health. For proper assessment of radiation exposures, reliable transport codes are needed. An essential input to the transport codes is information about the interaction of ions and neutrons with matter. Most of this information is supplied as nuclear cross-section data. In order to obtain an accurate parameterization of cross-section data, theoretical input is indispensable, especially for processes where there is little or no experimental data available. During the grant period, a reliable database was developed along with a phenomenological model for the total absorption cross sections, valid for any charged/uncharged light, medium, and heavy collision pairs over the entire energy range. It is gratifying to note the success of the model. The cross-section model has been adopted and is in use in NASA cosmic ray detector development projects, the radiation protection and shielding programs, and several DoE laboratories and institutions. A list of the publications based on the work done during the grant period is given below, and a sample copy of one of the papers is enclosed with this report.
Development and evaluation of social cognitive measures related to adolescent physical activity.
Dewar, Deborah L; Lubans, David Revalds; Morgan, Philip James; Plotnikoff, Ronald C
2013-05-01
This study aimed to develop and evaluate the construct validity and reliability of modernized social cognitive measures relating to physical activity behaviors in adolescents. An instrument was developed based on constructs from Bandura's Social Cognitive Theory and included the following scales: self-efficacy, situation (perceived physical environment), social support, behavioral strategies, and outcome expectations and expectancies. The questionnaire was administered in a sample of 171 adolescents (age = 13.6 ± 1.2 years, females = 61%). Confirmatory factor analysis was employed to examine model-fit for each scale using multiple indices, including chi-square index, comparative-fit index (CFI), goodness-of-fit index (GFI), and the root mean square error of approximation (RMSEA). Reliability properties were also examined (ICC and Cronbach's alpha). Each scale represented a statistically sound measure: fit indices indicated each model to be an adequate-to-exact fit to the data; internal consistency was acceptable to good (α = 0.63-0.79); rank order repeatability was strong (ICC = 0.82-0.91). Results support the validity and reliability of social cognitive scales relating to physical activity among adolescents. As such, the developed scales have utility for the identification of potential social cognitive correlates of youth physical activity, mediators of physical activity behavior changes and the testing of theoretical models based on Social Cognitive Theory.
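Internal consistency of scales like these is usually summarized with Cronbach's alpha, which is easy to compute directly; the sketch below implements the standard formula on a hypothetical response matrix (the items and scores are invented, not the study's data).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1.0 - item_var / total_var)

# Hypothetical 5-point responses of six adolescents to a 4-item scale
scores = [[4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4],
          [2, 3, 2, 2], [4, 4, 5, 4], [3, 4, 3, 3]]
print(f"alpha = {cronbach_alpha(scores):.2f}")   # the study reports 0.63-0.79
```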
NASA Technical Reports Server (NTRS)
Eshleman, R. L.; Meyers, A. P.; Davidson, W. A.; Gortowski, R. C.; Anderson, M. E.
1973-01-01
The development, performance, and test results for the spaceborne magnetic tape transport are discussed. An analytical model of the tape transport was used to optimize its conceptual design. Each of the subsystems was subjected to reliability analyses which included structural integrity, maintenance of system performance within acceptable bounds, and avoidance of fatigue failure. These subsystems were also compared with each other in order to evaluate reliability characteristics. The transport uses no mechanical couplings. Four drive motors, one for each reel and one for each of two capstans, are used in a differential mode. There are two hybrid, spherical, cone tapered-crown rollers for tape guidance. Storage of the magnetic tape is provided by a reel assembly which includes the reel, a reel support structure and bearings, dust seals, and a dc drive motor. A summary of transport test results on tape guidance, flutter, and skew is provided.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
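The Markov reliability prediction mentioned above can be illustrated with a much-reduced sketch. Assuming, hypothetically, a pool of processors failing independently at a constant rate, with the system operational while a voting majority survives (the SIFT report's actual model is more detailed and includes imperfect coverage and reconfiguration), the state probabilities follow the Chapman-Kolmogorov equations and can be evaluated with a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameters (not from the SIFT report): 5 processors,
# per-processor failure rate lam, system needs >= 3 healthy processors.
N, MIN_OK, lam = 5, 3, 1e-4     # failures per hour

# CTMC states 0..N = number of healthy processors; pure death process.
Q = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    Q[n, n - 1] = n * lam       # one of the n healthy processors fails
    Q[n, n] = -n * lam

def reliability(t_hours: float) -> float:
    p0 = np.zeros(N + 1)
    p0[N] = 1.0                                # start with all healthy
    p_t = p0 @ expm(Q * t_hours)               # Chapman-Kolmogorov solution
    return p_t[MIN_OK:].sum()                  # P(enough processors remain)

print(f"R(10 h) = {reliability(10.0):.8f}")
```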
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... Requirement R3.1 of MOD-001-1. C. Benchmarking 14. In the Final Rule, the Commission directed the ERO to develop benchmarking and updating requirements for the MOD Reliability Standards to measure modeled... requirements should specify the frequency for benchmarking and updating the available transfer and flowgate...
ERIC Educational Resources Information Center
Schweig, Jonathan
2013-01-01
Measuring school and classroom environments has become central in a nation-wide effort to develop comprehensive programs that measure teacher quality and teacher effectiveness. Formulating successful programs necessitates accurate and reliable methods for measuring these environmental variables. This paper uses a generalizability theory framework…
Markov modeling and reliability analysis of urea synthesis system of a fertilizer plant
NASA Astrophysics Data System (ADS)
Aggarwal, Anil Kr.; Kumar, Sanjeev; Singh, Vikram; Garg, Tarun Kr.
2015-12-01
This paper deals with the Markov modeling and reliability analysis of the urea synthesis system of a fertilizer plant. The system was modeled using a Markov birth-death process under the assumption that the failure and repair rates of each subsystem follow exponential distributions. The first-order Chapman-Kolmogorov differential equations are developed using the mnemonic rule, and these equations are solved with the fourth-order Runge-Kutta method. The long-run availability, reliability, and mean time between failures are computed for various choices of failure and repair rates of the subsystems. The findings of the paper are discussed with the plant personnel so that suitable maintenance policies/strategies can be adopted and practiced to enhance the performance of the urea synthesis system of the fertilizer plant.
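A minimal sketch of the solution approach described above, using a hand-coded fourth-order Runge-Kutta integrator on a toy two-subsystem birth-death model (the rates are placeholders, not the plant's):

```python
import numpy as np

# Illustrative two-subsystem birth-death model: state probabilities p
# obey the Chapman-Kolmogorov equations dp/dt = p @ Q.
lam, mu = 0.02, 0.5               # failure and repair rates (per hour)
# States: 0 = both subsystems up, 1 = one down, 2 = both down.
Q = np.array([[-2*lam,   2*lam,     0.0],
              [    mu, -(mu+lam),   lam],
              [   0.0,      mu,     -mu]])

def rk4_step(p, h):
    """One fourth-order Runge-Kutta step for dp/dt = p @ Q."""
    k1 = p @ Q
    k2 = (p + 0.5*h*k1) @ Q
    k3 = (p + 0.5*h*k2) @ Q
    k4 = (p + h*k3) @ Q
    return p + (h/6.0) * (k1 + 2*k2 + 2*k3 + k4)

p = np.array([1.0, 0.0, 0.0])     # start with both subsystems up
h, T = 0.1, 1000.0
for _ in range(int(T / h)):
    p = rk4_step(p, h)
print("long-run availability (up unless both down):", p[0] + p[1])
```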
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1992-01-01
This paper is concerned with methods of measuring and developing quality software. Reliable flight and ground support software is a highly important factor in the successful operation of the space shuttle program. Reliability is probably the most important of the characteristics inherent in the concept of 'software quality'. It is the probability of failure-free operation of a computer program for a specified time and environment.
Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota
2017-11-25
Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) relationship between actual step count and step count detected by sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and driving. On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability, with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504, and the relative error ratios of the sensor-detected step count to the actual step count were ±48.7%-49.4%. The level of agreement between units of the same model was ICC(3,1): 0.992-0.998. The false detection rates differed between the sitting conditions. These results suggest the need for appropriate regulation of step counts measured by sensors, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation.
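For readers who want to reproduce the agreement statistic used above, ICC(3,1) (two-way mixed model, single measures, following Shrout and Fleiss) can be computed directly from the ANOVA mean squares; the step counts below are invented, not the study's:

```python
import numpy as np

def icc_3_1(Y: np.ndarray) -> float:
    """ICC(3,1): two-way mixed, single measures (Shrout & Fleiss)."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape                      # n targets (trials), k raters (devices)
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()
    ss_tot = ((Y - grand) ** 2).sum()
    ss_err = ss_tot - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                    # between-targets mean square
    ms_e = ss_err / ((n - 1) * (k - 1))         # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)

# Hypothetical step counts: 4 walk trials x 3 devices (not the study's data)
steps = np.array([[640, 655, 650],
                  [712, 705, 718],
                  [580, 592, 586],
                  [668, 661, 673]])
print(f"ICC(3,1) = {icc_3_1(steps):.3f}")
```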
Development of a sheep challenge model for Rift Valley fever
USDA-ARS?s Scientific Manuscript database
Rift Valley fever (RVF) is a zoonotic disease that causes severe epizootic disease in ruminants, characterized by mass abortion and high mortality rates in younger animals. The development of a reliable challenge model is an important prerequisite for evaluation of existing and novel vaccines. A stu...
Wisconsin's Model Academic Standards for Music.
ERIC Educational Resources Information Center
Nikolay, Pauli; Grady, Susan; Stefonek, Thomas
To assist parents and educators in preparing students for the 21st century, Wisconsin citizens have become involved in the development of challenging academic standards in 12 curricular areas. Having clear standards for students and teachers makes it possible to develop rigorous local curricula and valid, reliable assessments. This model of…
Modeling of turbulence and transition
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing
1992-01-01
The first objective is to evaluate current two-equation and second order closure turbulence models using available direct numerical simulations and experiments, and to identify the models which represent the state of the art in turbulence modeling. The second objective is to study the near-wall behavior of turbulence, and to develop reliable models for an engineering calculation of turbulence and transition. The third objective is to develop a two-scale model for compressible turbulence.
Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian
2016-05-18
Single- or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three-dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4.31% and 5.26% for the two pulse sequences, respectively, while the ICCs were 0.99 for both. For rotation measures, the CVs were 3.19% and 2.44% for the two pulse sequences, with the ICCs being 0.98 and 0.97, respectively. A novel biplanar imaging approach also yielded high reliability, with mean CVs of 2.66% and 3.39% for translation in the x- and z-planes, respectively, and ICCs of 0.97 in both planes. This work provides basic proof-of-concept for a reliable, marker-less, non-ionizing-radiation-based quasi-dynamic motion quantification technique that can potentially be developed into a tool for real-time joint kinematics analysis.
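The between-session agreement analyses named above (Bland-Altman limits of agreement and coefficients of variation) are straightforward to compute; a hedged sketch with invented paired measurements follows:

```python
import numpy as np

def bland_altman_limits(a: np.ndarray, b: np.ndarray):
    """Bias and 95% limits of agreement between two measurement sessions."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired translation measurements (mm) from two rotoscoping
# sessions -- illustrative values, not the study's data.
s1 = np.array([2.10, 4.05, 5.98, 8.12, 10.03])
s2 = np.array([2.04, 4.11, 6.10, 7.95, 10.11])
bias, lo, hi = bland_altman_limits(s1, s2)
# A simple CV-style index: mean percent difference between sessions.
cv = 100 * np.mean(np.abs(s1 - s2) / ((s1 + s2) / 2))
print(f"bias={bias:.3f} mm, LoA=[{lo:.3f}, {hi:.3f}] mm, CV~{cv:.2f}%")
```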
Silva, Wanderson Roberto; Costa, David; Pimenta, Filipa; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2016-07-21
The objectives of this study were to develop a unified Portuguese-language version, for use in Brazil and Portugal, of the Body Shape Questionnaire (BSQ) and to estimate its validity, reliability, and internal consistency in Brazilian and Portuguese female university students. Confirmatory factor analysis was performed using both original (34-item) and shortened (8-item) versions. The model's fit was assessed with χ²/df, CFI, NFI, and RMSEA. Concurrent and convergent validity were assessed. Reliability was estimated through internal consistency and composite reliability (α). Transnational invariance of the BSQ was tested using multi-group analysis. The original 32-item model was refined to present a better fit and adequate validity and reliability. The shortened model was stable in both independent samples and in transnational samples (Brazil and Portugal). The use of this unified version is recommended for the assessment of body shape concerns in both Brazilian and Portuguese college students.
A pragmatic decision model for inventory management with heterogeneous suppliers
NASA Astrophysics Data System (ADS)
Nakandala, Dilupa; Lau, Henry; Zhang, Jingjing; Gunasekaran, Angappa
2018-05-01
For enterprises, it is imperative that the trade-off between the cost of inventory and its risk implications be managed in the most efficient manner. To explore this, we use the common example of a wholesaler operating in an environment where suppliers demonstrate heterogeneous reliability. The wholesaler places partial orders with dual suppliers and uses lateral transshipments. While supplier reliability is a key concern in inventory management, reliable suppliers are more expensive, and investment in strategic approaches that improve supplier performance carries a high cost. Here we consider the operational strategy of dual sourcing with reliable and unreliable suppliers and model the total inventory cost for the likely scenario in which the lead time of the unreliable suppliers extends beyond the scheduling period. We then develop a Customized Integer Programming Optimization Model to determine the optimum size of partial orders with multiple suppliers. In addition to the objective of total cost optimization, this study takes into account the volatility of the cost associated with the uncertainty of an inventory system.
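As a hedged illustration of the order-splitting idea (a toy problem, not the authors' Customized Integer Programming Optimization Model), a small mixed-integer program that splits a fixed demand between a reliable and an unreliable supplier can be solved with SciPy's milp (available in SciPy 1.9+); all numbers are placeholders:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical instance: split an order of 100 units between a reliable
# supplier (higher unit cost, rarely late) and a cheaper unreliable one
# whose late-arriving units incur an expected delay penalty.
demand = 100
unit_cost = np.array([12.0, 9.0])      # [reliable, unreliable]
p_late = np.array([0.05, 0.35])        # chance a unit arrives late
penalty = 6.0                          # expected cost per late unit
c = unit_cost + penalty * p_late       # expected cost per unit ordered

res = milp(
    c=c,                                                  # minimize expected cost
    constraints=LinearConstraint(np.ones((1, 2)), lb=demand, ub=demand),
    integrality=np.ones(2),                               # integer order sizes
    bounds=Bounds(0, np.array([60.0, 60.0])),             # per-supplier capacity
)
print("order split [reliable, unreliable]:", res.x, "expected cost:", res.fun)
```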
Temporal eye movement strategies during naturalistic viewing
Wang, Helena X.; Freeman, Jeremy; Merriam, Elisha P.; Hasson, Uri; Heeger, David J.
2011-01-01
The deployment of eye movements to complex spatiotemporal stimuli likely involves a variety of cognitive factors. However, eye movements to movies are surprisingly reliable both within and across observers. We exploited and manipulated that reliability to characterize observers’ temporal viewing strategies. Introducing cuts and scrambling the temporal order of the resulting clips systematically changed eye movement reliability. We developed a computational model that exhibited this behavior and provided an excellent fit to the measured eye movement reliability. The model assumed that observers searched for, found, and tracked a point-of-interest, and that this process reset when there was a cut. The model did not require that eye movements depend on temporal context in any other way, and it managed to describe eye movements consistently across different observers and two movie sequences. Thus, we found no evidence for the integration of information over long time scales (greater than a second). The results are consistent with the idea that observers employ a simple tracking strategy even while viewing complex, engaging naturalistic stimuli. PMID:22262911
Software Reliability Analysis of NASA Space Flight Software: A Practical Experience
Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac
2017-01-01
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, spanning from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes of that decay. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. Such releases exhibit poor reliability growth, and hence exhibit a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management to improve the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered as a baseline for future flight software missions. PMID:29278255
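For readers unfamiliar with SRGM fitting, the delayed S-shaped NHPP model named above has mean value function m(t) = a(1 - (1 + bt)e^{-bt}) and can be fit by nonlinear least squares; the defect counts below are invented, not the mission's:

```python
import numpy as np
from scipy.optimize import curve_fit

def m_s_shaped(t, a, b):
    """Mean cumulative defects for the delayed S-shaped NHPP SRGM."""
    return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

# Hypothetical cumulative defect counts per test week (not mission data)
t = np.arange(1, 13, dtype=float)
defects = np.array([2, 7, 15, 26, 38, 49, 58, 64, 69, 72, 74, 75], float)

(a_hat, b_hat), _ = curve_fit(m_s_shaped, t, defects, p0=[80.0, 0.3])
residual_defects = a_hat - defects[-1]
print(f"a={a_hat:.1f} total expected defects, b={b_hat:.3f}, "
      f"~{residual_defects:.1f} defects predicted to remain")
```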
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Chamis, Christos C.
1991-01-01
Structural failures have occasionally been attributed to human factors in engineering design, analysis, maintenance, and fabrication processes. Every facet of the engineering process is heavily governed by human factors and the degree of uncertainty associated with them. Societal, physical, professional, psychological, and many other factors introduce uncertainties that significantly influence the reliability of human performance. Quantifying human factors and the associated uncertainties in structural reliability requires: (1) identification of the fundamental factors that influence human performance, and (2) models to describe the interaction of these factors. An approach is being developed to quantify the uncertainties associated with human performance. This approach consists of a multifactor model in conjunction with direct Monte-Carlo simulation.
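A minimal sketch of the multifactor-plus-Monte-Carlo idea; the factor structure and all distributions below are assumptions for illustration, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: human-performance factors act as multiplicative
# modifiers, sampled from distributions expressing their uncertainty,
# on a nominal structural reliability.
N = 100_000
nominal = 0.995                          # reliability under ideal performance
professional = rng.beta(9, 1, N)         # e.g., training/experience factor
psychological = rng.beta(8, 2, N)        # e.g., stress/workload factor
success = rng.random(N) < nominal * professional * psychological
print(f"simulated reliability including human factors: {success.mean():.4f}")
```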
NASA Astrophysics Data System (ADS)
Jia, Heping; Jin, Wende; Ding, Yi; Song, Yonghua; Yu, Dezhao
2017-01-01
With the expanding share of renewable energy generation and the development of smart grid technologies, flexible demand resources (FDRs) have been utilized as an approach to accommodating renewable energy. However, multiple uncertainties of FDRs may influence the reliable and secure operation of the smart grid. Multi-state reliability models for a single FDR and for aggregated FDRs are proposed in this paper, accounting for the responsive abilities of FDRs and random failures of both FDR devices and the information system. The proposed reliability evaluation technique is based on the Lz-transform method, which can formulate time-varying reliability indices. A modified IEEE-RTS is utilized as an illustration of the proposed technique.
Descriptive Model of Generic WAMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hauer, John F.; DeSteese, John G.
The Department of Energy’s (DOE) Transmission Reliability Program is supporting the research, deployment, and demonstration of various wide area measurement system (WAMS) technologies to enhance the reliability of the Nation’s electrical power grid. Pacific Northwest National Laboratory (PNNL) was tasked by the DOE National SCADA Test Bed Program to conduct a study of WAMS security. This report represents achievement of the milestone to develop a generic WAMS model description that will provide a basis for the security analysis planned in the next phase of this study.
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of the impacts of congestion improvement schemes on travel time reliability is very significant for road authorities, since travel time reliability represents the operational performance of expressway segments. In this paper, a methodology is presented to estimate travel time reliability prior to implementation of congestion relief schemes, based on modeling travel time variation as a function of demand, capacity, weather conditions, and road accidents. For subject expressway segments, traffic conditions are modeled over a whole year, considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five-minute interval by applying the Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links accident rate to traffic conditions. A whole-year analysis is performed by comparing demand and available capacity for each scenario, and queue length is estimated through shockwave analysis for each time interval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways, and the buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
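The buffer time index used above is defined from the 95th-percentile and mean travel times. A toy Monte-Carlo version of the year-long analysis (all distributions and the delay rule are illustrative placeholders, not the paper's calibrated models) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-minute-interval simulation of one segment over a year:
# demand and capacity are random, and a crude rule adds delay whenever
# demand exceeds capacity.
n = 365 * 288                                    # 5-min intervals in a year
free_flow = 10.0                                 # minutes to traverse segment
demand = rng.normal(1800, 400, n).clip(min=0)    # veh/h
capacity = rng.normal(2200, 250, n).clip(min=1)  # veh/h (weather/accident losses)
over = np.maximum(demand - capacity, 0.0)
travel_time = free_flow * (1.0 + 2.0 * (over / capacity))  # simple delay rule

mean_tt = travel_time.mean()
tt95 = np.percentile(travel_time, 95)
buffer_time_index = (tt95 - mean_tt) / mean_tt
print(f"mean={mean_tt:.2f} min, 95th={tt95:.2f} min, BTI={buffer_time_index:.3f}")
```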
Software reliability through fault-avoidance and fault-tolerance
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.
1993-01-01
Strategies and tools for the testing, risk assessment, and risk control of dependable software-based systems were developed. Part of this project consists of studies to enable the transfer of technology to industry, for example the risk management techniques for safety-conscious systems. Theoretical investigations of the Boolean and Relational Operator (BRO) testing strategy were conducted for condition-based testing. The Basic Graph Generation and Analysis tool (BGG) was extended to fully incorporate several variants of the BRO metric. Single- and multi-phase risk, coverage, and time-based models are being developed to provide additional theoretical and empirical basis for estimation of the reliability and availability of large, highly dependable software. A model for software process and risk management was developed. The use of cause-effect graphing for software specification and validation was investigated. Lastly, advanced software fault-tolerance models were studied to provide alternatives and improvements in situations where simple software fault-tolerance strategies break down.
Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning
Weisswange, Thomas H.; Rothkopf, Constantin A.; Rodemann, Tobias; Triesch, Jochen
2011-01-01
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e., weight uncertain cues according to their respective reliabilities, and even do so if reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference. PMID:21750717
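The Bayesian benchmark against which the learned behavior is compared is precision-weighted cue fusion, i.e., each cue is weighted by its inverse variance. A minimal sketch (all values are placeholders):

```python
import numpy as np

def integrate_cues(mu1, var1, mu2, var2):
    """Bayesian (precision-weighted) fusion of two Gaussian cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)    # weight of the first cue
    mu = w1 * mu1 + (1 - w1) * mu2             # fused estimate
    var = 1 / (1 / var1 + 1 / var2)            # fused uncertainty
    return mu, var

# Illustrative: a reliable visual cue and a noisier auditory cue about
# a target position; numbers are not from the paper's simulations.
mu, var = integrate_cues(mu1=2.0, var1=0.5, mu2=4.0, var2=2.0)
print(f"fused estimate {mu:.2f}, variance {var:.2f}")  # pulled toward the reliable cue
```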
NASA Astrophysics Data System (ADS)
Zheng, W.; Gao, J. M.; Wang, R. X.; Chen, K.; Jiang, Y.
2017-12-01
This paper puts forward a new method of technical characteristics deployment based on Reliability Function Deployment (RFD), motivated by an analysis of the advantages and shortcomings of related research on mechanical reliability design. The matrix decomposition structure of RFD was used to describe the correlative relations among failure mechanisms, soft failures, and hard failures. By considering the correlation of multiple failure modes, the reliability loss of one failure mode with respect to the whole part was defined, and a calculation and analysis model for reliability loss was presented. According to the reliability loss, the reliability index value of the whole part was allocated to each failure mode. On the basis of the deployment of the reliability index value, the inverse reliability method was employed to acquire the values of the technical characteristics. The feasibility and validity of the proposed method were illustrated by a development case of a machining centre's transmission system.
NASA Technical Reports Server (NTRS)
French, V. (Principal Investigator)
1982-01-01
The CEAS models evaluated use historic trend and meteorological and agroclimatic variables to forecast soybean yields in Iowa, Illinois, and Indiana. Indicators of yield reliability and current measures of modeled yield reliability were obtained from bootstrap tests on the end-of-season models. Indicators of yield reliability show that the state models are consistently better than the crop reporting district (CRD) models. One CRD model is especially poor. At the state level, the bias of each model is less than one-half quintal/hectare. The standard deviation is between one and two quintals/hectare. The models are adequate in terms of coverage and are, to a certain extent, consistent with scientific knowledge. Timely yield estimates can be made during the growing season using truncated models. The models are easy to understand and use and are not costly to operate. Other than the specification of values used to determine evapotranspiration, the models are objective. Because the method of variable selection used in the model development is not adequately documented, no evaluation can be made of the objectivity and cost of redevelopment of the model.
Development and Exemplification of a Model for Teacher Assessment in Primary Science
ERIC Educational Resources Information Center
Davies, D. J.; Earle, S.; McMahon, K.; Howe, A.; Collier, C.
2017-01-01
The Teacher Assessment in Primary Science project is funded by the Primary Science Teaching Trust and based at Bath Spa University. The study aims to develop a whole-school model of valid, reliable and manageable teacher assessment to inform practice and make a positive impact on primary-aged children's learning in science. The model is based on a…
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As aging power systems increasingly work closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. Computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete-event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
Finite element modelling of aluminum alloy 2024-T3 under transverse impact loading
NASA Astrophysics Data System (ADS)
Abdullah, Ahmad Sufian; Kuntjoro, Wahyu; Yamin, A. F. M.
2017-12-01
The fiber metal laminate GLARE is a new aerospace material with great potential to be widely used in future lightweight aircraft. It consists of aluminum alloy 2024-T3 and glass-fiber reinforced laminate. In order to produce a reliable finite element model of the impact response or crashworthiness of a structure made of GLARE, one can initially model and validate the finite element models of the impact response of its constituents separately. The objective of this study was to develop a reliable finite element model of aluminum alloy 2024-T3 under low-velocity transverse impact loading using the commercial software ABAQUS. Johnson-Cook plasticity and damage models were used to predict the alloy's material properties and impact behavior. The results of the finite element analysis were compared to an experiment with similar material and impact conditions. The results showed good correlations in terms of impact forces, deformation, and failure progression, leading to the conclusion that the finite element model of 2024-T3 aluminum alloy under low-velocity transverse impact, using the Johnson-Cook plasticity and damage models, is reliable.
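For orientation, the Johnson-Cook flow stress law used above has the form sigma = (A + B*eps^n)(1 + C*ln(rate*))(1 - T*^m). The sketch below uses placeholder constants that are not calibrated values from this study; real values must come from material characterization:

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_room=293.0, T_melt=775.0):
    """Johnson-Cook flow stress: (A + B*eps^n)(1 + C*ln(rate*))(1 - T*^m)."""
    rate_term = 1.0 + C * np.log(max(eps_rate / eps_rate0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)      # homologous temperature
    return (A + B * eps**n) * rate_term * (1.0 - np.clip(T_star, 0.0, 1.0)**m)

# Placeholder constants for demonstration only (Pa); not from this paper.
sigma = johnson_cook_stress(eps=0.05, eps_rate=100.0, T=293.0,
                            A=370e6, B=680e6, n=0.73, C=0.008, m=1.7)
print(f"flow stress ~ {sigma / 1e6:.0f} MPa")
```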
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1977-01-01
Models, measures, and techniques were developed for evaluating the effectiveness of aircraft computing systems. The concept of effectiveness involves aspects of system performance, reliability, and worth. Specifically, a model hierarchy was developed in detail at the mission, functional task, and computational task levels. An appropriate class of stochastic models was investigated to serve as bottom-level models in the hierarchical scheme. A unified measure of effectiveness called 'performability' was defined and formulated.
Energy Systems Integration News - October 2016
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
Risk Management of New Microelectronics for NASA: Radiation Knowledge-base
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.
2004-01-01
Contents include the following: (1) NASA missions: implications for reliability and radiation constraints; (2) approach to insertion of new technologies; (3) technology knowledge-base development; (4) technology model/tool development and validation; and (5) summary comments.
Final Report: System Reliability Model for Solid-State Lighting (SSL) Luminaires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, J. Lynn
2017-05-31
The primary objective of this project was to develop and validate reliability models and accelerated stress testing (AST) methodologies for predicting the lifetime of integrated SSL luminaires. This study examined the likely failure modes for SSL luminaires, including abrupt failure, excessive lumen depreciation, unacceptable color shifts, and increased power consumption. Data on the relative distribution of these failure modes were acquired through extensive accelerated stress tests and combined with industry data and other sources of information on LED lighting. These data were compiled and utilized to build models of the aging behavior of key luminaire optical and electrical components.
Open and Distance Education Accreditation Standards Scale: Validity and Reliability Studies
ERIC Educational Resources Information Center
Can, Ertug
2016-01-01
The purpose of this study is to develop, and test the validity and reliability of a scale for the use of researchers to determine the accreditation standards of open and distance education based on the views of administrators, teachers, staff and students. This research was designed according to the general descriptive survey model since it aims…
Water Awareness Scale for Pre-Service Science Teachers: Validity and Reliability Study
ERIC Educational Resources Information Center
Filik Iscen, Cansu
2015-01-01
The role of teachers in the formation of environmentally sensitive behaviors in students is quite high. Thus, the water awareness of teachers, who represent role models for students, is rather important. The main purpose of this study is to identify the reliability and validity study outcomes of the Water Awareness Scale, which was developed to…
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
The ability to make reliable decisions about the extent of subsurface contamination and approaches to restoration of contaminated ground water is dependent on the development of an accurate conceptual site model (CSM). The accuracy of the CSM is dependent on the quality of site ...
The Development and Testing of a Tool for Analysis of Computer-Mediated Conferencing Transcripts.
ERIC Educational Resources Information Center
Fahy, Patrick J.; Crawford, Gail; Ally, Mohamed; Cookson, Peter; Keller, Verna; Prosser, Frank
2000-01-01
The Zhu model for analyzing computer mediated communications was further developed by an Athabasca University (Alberta) distance education research team based on ease of use, reliability, validity, theoretical support, and cross-discipline utility. Five classification categories of the new model are vertical questioning, horizontal questioning,…
Performance-Based Service Quality Model: An Empirical Study on Japanese Universities
ERIC Educational Resources Information Center
Sultan, Parves; Wong, Ho
2010-01-01
Purpose: This paper aims to develop and empirically test the performance-based higher education service quality model. Design/methodology/approach: The study develops 67-item instrument for measuring performance-based service quality with a particular focus on the higher education sector. Scale reliability is confirmed using the Cronbach's alpha.…
The aim of this work is to develop group-contribution+ (GC+) method (combined group-contribution (GC) method and atom connectivity index (CI) method) based property models to provide reliable estimations of environment-related properties of organic chemicals together with uncert...
NASA Technical Reports Server (NTRS)
Connelly, L. C.
1977-01-01
The mission planning processor is a user oriented tool for consumables management and is part of the total consumables subsystem management concept. The approach to be used in developing a working model of the mission planning processor is documented. The approach includes top-down design, structured programming techniques, and application of NASA approved software development standards. This development approach: (1) promotes cost effective software development, (2) enhances the quality and reliability of the working model, (3) encourages the sharing of the working model through a standard approach, and (4) promotes portability of the working model to other computer systems.
Wagner, Flávia; Martel, Michelle M; Cogo-Moreira, Hugo; Maia, Carlos Renato Moreira; Pan, Pedro Mario; Rohde, Luis Augusto; Salum, Giovanni Abrahão
2016-01-01
The best structural model for attention-deficit/hyperactivity disorder (ADHD) symptoms remains a matter of debate. The objective of this study is to test the fit and factor reliability of competing models of the dimensional structure of ADHD symptoms in a sample of randomly selected and high-risk children and pre-adolescents from Brazil. Our sample comprised 2512 children aged 6-12 years from 57 schools in Brazil. The ADHD symptoms were assessed using parent report on the development and well-being assessment (DAWBA). Fit indexes from confirmatory factor analysis were used to test unidimensional, correlated, and bifactor models of ADHD, the latter including "g" ADHD and "s" symptom domain factors. Reliability of all models was measured with omega coefficients. A bifactor model with one general factor and three specific factors (inattention, hyperactivity, impulsivity) exhibited the best fit to the data, according to fit indices, as well as the most consistent factor loadings. However, based on omega reliability statistics, the specific inattention, hyperactivity, and impulsivity dimensions provided very little reliable information after accounting for the reliable general ADHD factor. Our study presents some psychometric evidence that ADHD specific ("s") factors might be unreliable after taking common ("g" factor) variance into account. These results are in accordance with the lack of longitudinal stability among subtypes, the absence of dimension-specific molecular genetic findings and non-specific effects of treatment strategies. Therefore, researchers and clinicians might most effectively rely on the "g" ADHD to characterize ADHD dimensional phenotype, based on currently available symptom items.
Carlisle, Daren M.; Wolock, David M.; Howard, Jeannette K.; Grantham, Theodore E.; Fesenmyer, Kurt; Wieczorek, Michael
2016-12-12
Because natural patterns of streamflow are a fundamental property of the health of streams, there is a critical need to quantify the degree to which human activities have modified natural streamflows. A requirement for assessing streamflow modification in a given stream is a reliable estimate of flows expected in the absence of human influences. Although there are many techniques to predict streamflows in specific river basins, there is a lack of approaches for making predictions of natural conditions across large regions and over many decades. In this study conducted by the U.S. Geological Survey, in cooperation with The Nature Conservancy and Trout Unlimited, the primary objective was to develop empirical models that predict natural (that is, unaffected by land use or water management) monthly streamflows from 1950 to 2012 for all stream segments in California. Models were developed using measured streamflow data from the existing network of streams where daily flow monitoring occurs, but where the drainage basins have minimal human influences. Widely available data on monthly weather conditions and the physical attributes of river basins were used as predictor variables. Performance of regional-scale models was comparable to that of published mechanistic models for specific river basins, indicating the models can be reliably used to estimate natural monthly flows in most California streams. A second objective was to develop a model that predicts the likelihood that streams experience modified hydrology. New models were developed to predict modified streamflows at 558 streamflow monitoring sites in California where human activities affect the hydrology, using basin-scale geospatial indicators of land use and water management. Performance of these models was less reliable than that for the natural-flow models, but results indicate the models could be used to provide a simple screening tool for identifying, across the State of California, which streams may be experiencing anthropogenic flow modification.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study, published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence intervals and distributions of the model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of computer biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model predictions, calculated on the basis of the model parameter uncertainties, were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; the same phenomenon was observed for other organs and tissues. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameters strongly influences the model prediction, and that the correlation of the model input parameters affects the model prediction to a certain extent, depending on the strength of the correlation. In the case of model prediction, a qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty of the prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study is used in the second part of the paper for a sensitivity analysis of the Zr biokinetic models.
Tan, Christine L; Hassali, Mohamed A; Saleem, Fahad; Shafie, Asrul A; Aljadhey, Hisham; Gan, Vincent B
2015-01-01
(i) To develop the Pharmacy Value-Added Services Questionnaire (PVASQ) using emerging themes generated from interviews. (ii) To establish the reliability and validity of the questionnaire instrument. Using an extended Theory of Planned Behavior as the theoretical model, face-to-face interviews generated salient beliefs about pharmacy value-added services. The PVASQ was constructed initially in English, incorporating important themes, and later translated into the Malay language with forward and backward translation. Intention (INT) to adopt pharmacy value-added services is predicted by attitudes (ATT), subjective norms (SN), perceived behavioral control (PBC), knowledge, and expectations. Using a 7-point Likert-type scale and a dichotomous scale, test-retest reliability (N=25) was assessed by administering the questionnaire instrument twice at an interval of one week. Internal consistency was measured by Cronbach's alpha, and construct validity between the two administrations was assessed using the kappa statistic and the intraclass correlation coefficient (ICC). Confirmatory Factor Analysis (CFA, N=410) was conducted to assess the construct validity of the PVASQ. The kappa coefficients indicate a moderate to almost perfect strength of agreement between test and retest. The ICC for all scales tested for intra-rater (test-retest) reliability was good. The overall Cronbach's alpha (N=25) was 0.912 and 0.908 at the two time points. The results of the CFA (N=410) showed that most items loaded strongly and correctly onto their corresponding factors. Only one item was eliminated. This study is the first to develop and establish the reliability and validity of the Pharmacy Value-Added Services Questionnaire instrument using the Theory of Planned Behavior as the theoretical model. The translated Malay-language version of the PVASQ is reliable and valid for predicting Malaysian patients' intention to adopt pharmacy value-added services to collect partial medicine supply.
Development and Validation of the Primary Care Team Dynamics Survey
Song, Hummy; Chien, Alyna T; Fisher, Josephine; Martin, Julia; Peters, Antoinette S; Hacker, Karen; Rosenthal, Meredith B; Singer, Sara J
2015-01-01
Objective: To develop and validate a survey instrument designed to measure team dynamics in primary care. Data Sources/Study Setting: We studied 1,080 physician and nonphysician health care professionals working at 18 primary care practices participating in a learning collaborative aimed at improving team-based care. Study Design: We developed a conceptual model and administered a cross-sectional survey addressing team dynamics, and we assessed reliability and discriminant validity of survey factors and the overall survey's goodness-of-fit using structural equation modeling. Data Collection: We administered the survey between September 2012 and March 2013. Principal Findings: Overall response rate was 68 percent (732 respondents). Results support a seven-factor model of team dynamics, suggesting that conditions for team effectiveness, shared understanding, and three supportive processes are associated with acting and feeling like a team and, in turn, perceived team effectiveness. This model demonstrated adequate fit (goodness-of-fit index: 0.91), scale reliability (Cronbach's alphas: 0.71–0.91), and discriminant validity (average factor correlations: 0.49). Conclusions: It is possible to measure primary care team dynamics reliably using a 29-item survey. This survey may be used in ambulatory settings to study teamwork and explore the effect of efforts to improve team-based care. Future studies should demonstrate the importance of team dynamics for markers of team effectiveness (e.g., work satisfaction, care quality, clinical outcomes). PMID:25423886
Development and construct validity of the Classroom Strategies Scale-Observer Form.
Reddy, Linda A; Fabiano, Gregory; Dudek, Christopher M; Hsu, Louis
2013-12-01
Research on progress monitoring has almost exclusively focused on student behavior and not on teacher practices. This article presents the development and validation of a new teacher observational assessment (Classroom Strategies Scale) of classroom instructional and behavioral management practices. The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. The Classroom Strategies Scale (CSS) evidenced overall good reliability estimates, including internal consistency, interrater reliability, test-retest reliability, and freedom from item bias on important teacher demographics (age, educational degree, years of teaching experience). Confirmatory factor analyses (CFAs) of CSS data from 317 classrooms were carried out to assess the level of empirical support for (a) a theory positing four first-order factors underlying teachers' instructional practices, and (b) a theory positing four first-order factors underlying teachers' behavior management practices. Several fit indices indicated acceptable fit of the (a) and (b) CFA models to the data, as well as acceptable fit of less parsimonious alternative CFA models that included 1 or 2 second-order factors. Information-theory-based indices generally suggested that the (a) and (b) CFA models fit better than some more parsimonious alternative CFA models that included constraints on relations of first-order factors. Overall, CFA first-order and higher-order factor results support the CSS-Observer Total, Composite, and subscales. Suggestions for future measurement development efforts are outlined. PsycINFO Database Record (c) 2013 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Filis, Avishai; Pundak, Nachman; Barak, Moshe; Porat, Ze'ev; Jaeger, Mordechai
2011-06-01
The growing demand for EO applications that work around the clock, 24 hours a day and 7 days a week, such as in border surveillance systems, emphasizes the need for a highly reliable cryocooler having increased operational availability and decreased integrated logistic support (ILS) cost. In order to meet this need, RICOR has developed a new rotary Stirling cryocooler, model K508N, intended to double the K508's operating MTTF, achieving 20,000 operating hours MTTF. The K508N employs RICOR's latest mechanical design technologies, such as optimized bearings and greases, bearing preloading, advanced seals, a laser-welded cold finger, and a robust structure with increased natural frequency compared to the K508 model. The cooler's enhanced MTTF was demonstrated by a Validation and Verification (V&V) plan comprising analytical means and a comparative accelerated life test between the standard K508 and the K508N models. In particular, a point estimate and a confidence interval for the MTTF improvement factor were calculated periodically during and after the test. The V&V effort revealed that the K508N meets its MTTF design goal. The paper will focus on the technical and engineering aspects of the new design. In addition, it will discuss market needs and expectations, investigate the reliability data of the present reference K508 model, and report the accelerated life test data and the statistical analysis methodology as well as its underlying assumptions and results.
Fatigue reliability of deck structures subjected to correlated crack growth
NASA Astrophysics Data System (ADS)
Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.
2013-12-01
The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, and based on these the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
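The abstract's Monte Carlo treatment of the Paris-Erdogan law can be sketched as below; the material constants, stress range, crack sizes, and geometry function Y(a) are all assumed placeholders, not the paper's values, and the yearly Euler stepping is deliberately coarse.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, years = 5_000, 20
cycles_per_year = 5e6                 # assumed wave-load cycles per year
delta_sigma = 30.0                    # MPa, assumed stress range
a0, a_crit = 0.5e-3, 50e-3            # m, assumed initial/critical crack size
m = 3.0                               # Paris exponent
C = rng.lognormal(np.log(1e-11), 0.5, n_sim)  # scatter in the Paris constant

def Y(a):                             # placeholder geometry correction
    return 1.0 + 0.5 * (a / a_crit)

cum_failures = np.zeros(years)
for i in range(n_sim):
    a = a0
    for yr in range(years):
        dK = delta_sigma * np.sqrt(np.pi * a) * Y(a)
        a += C[i] * dK**m * cycles_per_year   # coarse one-year Euler step
        if a >= a_crit:
            cum_failures[yr:] += 1            # failed in this and later years
            break

print("cumulative P(failure) per year:", np.round(cum_failures / n_sim, 3))
```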
Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.
Li, Ben; Stenstrom, M K
2014-01-01
One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method, as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and the Engquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
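To make the numerical setting concrete, here is a generic Godunov-type finite-volume sketch for the 1-D batch settling equation dC/dt + d(f(C))/dz = 0 with a Vesilind flux; this is a standard scalar conservation-law scheme in the spirit of the reliable solvers discussed above, not the YRD implementation itself, and the parameters, grid, and initial condition are illustrative.

```python
import numpy as np

v0, kv = 5.0, 0.4                        # assumed Vesilind parameters
f = lambda C: v0 * C * np.exp(-kv * C)   # batch settling flux, kg/(m2*h)

def godunov_flux(cl, cr, samples=64):
    """Exact Godunov flux for a scalar conservation law via sampling."""
    c = np.linspace(min(cl, cr), max(cl, cr), samples)
    return f(c).min() if cl <= cr else f(c).max()

nz, depth, T = 50, 4.0, 1.0          # cells, column depth (m), end time (h)
dz = depth / nz
C = np.full(nz, 3.0)                 # uniform initial concentration, kg/m3
t = 0.0
while t < T:
    cgrid = np.linspace(0.0, C.max(), 256)
    speed = np.abs(np.gradient(f(cgrid), cgrid)).max()  # max wave speed
    dt = min(0.4 * dz / speed, T - t)                   # CFL-limited step
    flux = np.zeros(nz + 1)          # closed column: zero boundary fluxes
    for i in range(1, nz):
        flux[i] = godunov_flux(C[i - 1], C[i])
    C -= dt / dz * (flux[1:] - flux[:-1])
    t += dt

print("final profile, top to bottom (kg/m3):", np.round(C, 2))
```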
NASA Technical Reports Server (NTRS)
Tamayo, Tak Chai
1987-01-01
Quality of software not only is vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, as well as launching schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software has the characteristics of no wearout and costly redundancies, thus making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to provide a representation of the number as well as types of failures occurring during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives and methods to estimate the expected number of fixes required are also presented.
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS?s Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Gaining an understanding of degradation mechanisms and characterizing them are critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions, going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
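Two textbook acceleration-factor forms are often used for the tests named above: Arrhenius for damp heat and Coffin-Manson for thermal cycling. The sketch below shows both; the activation energy and fatigue exponent are placeholders, not values validated in the paper.

```python
import math

K_B = 8.617e-5  # eV/K, Boltzmann constant

def af_arrhenius(t_use_c, t_stress_c, ea_ev=0.7):
    """AF = exp[(Ea/kB) * (1/T_use - 1/T_stress)], temperatures in Celsius."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

def af_coffin_manson(dt_use, dt_stress, m=2.5):
    """AF = (dT_stress / dT_use)^m for thermal-cycling fatigue."""
    return (dt_stress / dt_use) ** m

# 85C damp-heat chamber vs. an assumed ~45C field operating temperature:
print("Arrhenius AF:", round(af_arrhenius(45, 85), 1))
# Chamber cycles with a 125K swing vs. an assumed ~40K daily field swing:
print("Coffin-Manson AF:", round(af_coffin_manson(40, 125), 1))
```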
Integrated Model to Assess Cloud Deployment Effectiveness When Developing an IT-strategy
NASA Astrophysics Data System (ADS)
Razumnikov, S.; Prankevich, D.
2016-04-01
Developing an IT-strategy of cloud deployment is a complex issue, since even the stage of its formation necessitates revealing which applications will best meet the requirements of a company's business-strategy, evaluating the reliability and safety of cloud providers, and analyzing staff satisfaction. A system of criteria, as well as an integrated model to assess cloud deployment effectiveness, is offered. The model makes it possible to identify which applications already at the disposal of a company, as well as new tools to be deployed, are reliable and safe enough for implementation in the cloud environment. Data on practical use of the procedure to assess cloud deployment effectiveness by a provider of telecommunication services are presented. The model was used to calculate the values of integral indexes of the services to be assessed; then those meeting the criteria and answering the business-strategy of the company were selected.
Reference values for the muscle power sprint test in 6- to 12-year-old children.
Douma-van Riet, Danielle; Verschuren, Olaf; Jelsma, Dorothee; Kruitwagen, Cas; Smits-Engelsman, Bouwien; Takken, Tim
2012-01-01
The aims of this study were (1) to develop centile reference values for anaerobic performance of Dutch children tested using the Muscle Power Sprint Test (MPST) and (2) to examine the test-retest reliability of the MPST. Children who were developing typically (178 boys and 201 girls) and aged 6 to 12 years (mean = 8.9 years) were recruited. The MPST was administered to 379 children, and test-retest reliability was examined in 47 children. MPST scores were transformed into centile curves, which were created using generalized additive models for location, scale, and shape. Height-related reference curves were created for both genders. Excellent (intraclass correlation coefficient = 0.98) test-retest reliability was demonstrated. The reference values for the MPST of children who are developing typically and aged 6 to 12 years can serve as a clinical standard in pediatric physical therapy practice. The MPST is a reliable and practical method for determining anaerobic performance in children.
2016-07-27
… and reliability, is a common requirement for aircraft, rockets, and hypersonic vehicles. The Aerospace Fuels Quality Test and Model Development (AFQTMoDev) project was initiated to mature fuel quality assurance practices for rocket grade kerosene, thereby ensuring operational readiness of conventional and …
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
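The modularization idea can be illustrated with a toy calculation: compute per-module reliabilities separately, then combine them for the total system. The module structure and failure rates below are invented; REST/RML itself is far richer (failure-mode simulation, message passing, parallel solution).

```python
import math

def r_exp(failure_rate, hours):
    """Reliability of a component with a constant failure rate (per hour)."""
    return math.exp(-failure_rate * hours)

def r_series(rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

def r_parallel(rs):
    fail = 1.0
    for r in rs:
        fail *= (1.0 - r)
    return 1.0 - fail

t = 10_000.0  # mission hours (assumed)
cpu = r_parallel([r_exp(1e-5, t)] * 3)   # triplicated processor module
bus = r_series([r_exp(2e-6, t)] * 2)     # two bus segments in series
sensor = r_exp(5e-6, t)
print("system reliability:", round(r_series([cpu, bus, sensor]), 6))
```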
Software reliability models for fault-tolerant avionics computers and related topics
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1987-01-01
Software reliability research is briefly described. General research topics are reliability growth models, quality of software reliability prediction, the complete monotonicity property of reliability growth, conceptual modelling of software failure behavior, assurance of ultrahigh reliability, and analysis techniques for fault-tolerant systems.
Sleeper, Mark D; Kenyon, Lisa K; Elliott, James M; Cheng, M Samuel
2016-12-01
Despite the availability of various field-tests for many competitive sports, a reliable and valid test specifically developed for use in men's gymnastics has not yet been developed. The Men's Gymnastics Functional Measurement Tool (MGFMT) was designed to assess sport-specific physical abilities in male competitive gymnasts. The purpose of this study was to develop the MGFMT by establishing a scoring system for individual test items and to initiate the process of establishing test-retest reliability and construct validity. A total of 83 competitive male gymnasts ages 7-18 underwent testing using the MGFMT. Thirty of these subjects underwent re-testing one week later in order to assess test-retest reliability. Construct validity was assessed using a simple regression analysis between total MGFMT scores and the gymnasts' USA-Gymnastics competitive level to calculate the coefficient of determination (r²). Test-retest reliability was analyzed using Model 1 Intraclass correlation coefficients (ICC). Statistical significance was set at the p<0.05 level. The relationship between total MGFMT scores and subjects' current USA-Gymnastics competitive level was found to be good (r² = 0.63). Reliability testing of the MGFMT composite test score showed excellent test-retest reliability over a one-week period (ICC = 0.97). Test-retest reliability of the individual component tests ranged from good to excellent (ICC = 0.75-0.97). The results of this study provide initial support for the construct validity and test-retest reliability of the MGFMT. Level 3.
Nicholson, Patricia; Griffin, Patrick; Gillis, Shelley; Wu, Margaret; Dunning, Trisha
2013-09-01
Concern about the process of identifying underlying competencies that contribute to effective nursing performance has been debated, with a lack of consensus surrounding an approved measurement instrument for assessing clinical performance. Although a number of methodologies are noted in the development of competency-based assessment measures, these studies are not without criticism. The primary aim of the study was to develop and validate a Performance Based Scoring Rubric, which included both analytical and holistic scales, and to examine the validity and reliability of the rubric, which was designed to measure clinical competencies in the operating theatre. The fieldwork observations of 32 nurse educators and preceptors assessing the performance of 95 instrument nurses in the operating theatre were used in the calibration of the rubric. The Rasch model, a particular model among Item Response Models, was used in the calibration of each item in the rubric to improve the measurement properties of the scale. This is done by establishing the 'fit' of the data to the conditions demanded by the Rasch model. Acceptable reliability estimates, specifically a high Cronbach's alpha reliability coefficient (0.940), as well as empirical support for construct and criterion validity of the rubric, were achieved. Calibration of the Performance Based Scoring Rubric using the Rasch model revealed that the fit statistics for most items were acceptable. The use of the Rasch model offers a number of features in developing and refining healthcare competency-based assessments, improving confidence in measuring clinical performance. The Rasch model was shown to be useful in developing and validating a competency-based assessment for measuring the competence of the instrument nurse in the operating theatre, with implications for use in other areas of nursing practice. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
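For orientation, the dichotomous form of the Rasch model underlying this kind of calibration is sketched below: the probability that a person of ability theta succeeds on an item of difficulty b. The values are illustrative only; the study itself used rating scales, for which a partial-credit extension applies.

```python
import math

def rasch_p(theta, b):
    """P(success) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

for theta in (-1.0, 0.0, 1.0, 2.0):
    print(f"ability {theta:+.1f}, item difficulty +0.5: "
          f"P = {rasch_p(theta, 0.5):.2f}")
```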
Validation of A Global Hydrological Model
NASA Astrophysics Data System (ADS)
Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.
Freshwater availability has been recognized as a global issue, and its consistent quantification not only in individual river basins but also at the global scale is required to support the sustainable use of water. The Global Hydrology Model WGHM, which is a submodel of the global water use and availability model WaterGAP 2, computes surface runoff, groundwater recharge and river discharge at a spatial resolution of 0.5°. WGHM is based on the best global data sets currently available, including a newly developed drainage direction map and a data set of wetlands, lakes and reservoirs. It calculates both natural and actual discharge by simulating the reduction of river discharge by human water consumption (as computed by the water use submodel of WaterGAP 2). WGHM is calibrated against observed discharge at 724 gauging stations (representing about 50% of the global land area) by adjusting a parameter of the soil water balance. It not only computes the long-term average water resources but also water availability indicators that take into account the interannual and seasonal variability of runoff and discharge. The reliability of the model results is assessed by comparing observed and simulated discharges at the calibration stations and at selected other stations. We conclude that reliable results can be obtained for basins of more than 20,000 km². In particular, the 90% reliable monthly discharge is simulated well. However, there is the tendency that semi-arid and arid basins are modeled less satisfactorily than humid ones, which is partially due to neglecting river channel losses and evaporation of runoff from small ephemeral ponds in the model. Also, the hydrology of highly developed basins with large artificial storages, basin transfers and irrigation schemes cannot be simulated well. The seasonality of discharge in snow-dominated basins is overestimated by WGHM, and if the snow-dominated basin is uncalibrated, discharge is likely to be underestimated due to the precipitation measurement errors. Even though the explicit modeling of wetlands and lakes leads to a much improved modeling of both the vertical water balance and the lateral transport of water, not enough information is included in WGHM to accurately capture the hydrology of these water bodies. Certainly, the reliability of model results is highest at the locations at which WGHM was calibrated. The validation indicates that reliability for cells inside calibrated basins is satisfactory if the basin is relatively homogeneous. Analyses of the few available stations outside of calibrated basins indicate a reasonably high model reliability, particularly in humid regions.
NASA Astrophysics Data System (ADS)
Zhou, Xunfei; Hsieh, Sheng-Jen
2017-05-01
After years of development, Fused Deposition Modeling (FDM) has become the most popular technique in commercial 3D printing due to its cost effectiveness and easy-to-operate fabrication process. Mechanical strength and dimensional accuracy are two of the most important factors for reliability of FDM products. However, the solid-liquid-solid state changes of material in the FDM process make it difficult to monitor and model. In this paper, an experimental model was developed to apply cost-effective infrared thermography imaging method to acquire temperature history of filaments at the interface and their corresponding cooling mechanism. A three-dimensional finite element model was constructed to simulate the same process using element "birth and death" feature and validated with the thermal response from the experimental model. In 6 of 9 experimental conditions, a maximum of 13% difference existed between the experimental and numerical models. This work suggests that numerical modeling of FDM process is reliable and can facilitate better understanding of bead spreading and road-to-road bonding mechanics during fabrication.
Identification of reliable gridded reference data for statistical downscaling methods in Alberta
NASA Astrophysics Data System (ADS)
Eum, H. I.; Gupta, A.
2017-12-01
Climate models provide essential information to assess impacts of climate change at regional and global scales. However, statistical downscaling methods have been applied to prepare climate model data for various applications, such as hydrologic and ecologic modelling at a watershed scale. As the reliability and (spatial and temporal) resolution of statistically downscaled climate data depend mainly on the reference data, identifying the most reliable reference data is crucial for statistical downscaling. A growing number of gridded climate products are available for key climate variables, which are the main input data to regional modelling systems. However, inconsistencies in these climate products, for example, different combinations of climate variables, varying data domains and lengths, and data accuracy that varies with the physiographic characteristics of the landscape, have caused significant challenges in selecting the most suitable reference climate data for various environmental studies and modelling. Employing various observation-based daily gridded climate products available in the public domain, i.e., thin plate spline regression products (ANUSPLIN and TPS), an inverse distance method (Alberta Townships), a numerical climate model (North American Regional Reanalysis) and an optimum interpolation technique (Canadian Precipitation Analysis), this study evaluates the accuracy of the climate products at each grid point by comparing with the Adjusted and Homogenized Canadian Climate Data (AHCCD) observations for precipitation, minimum and maximum temperature over the province of Alberta. Based on the performance of the climate products at AHCCD stations, we ranked the reliability of these publicly available climate products for station elevations discretized into several classes. According to the rank of climate products for each elevation class, we identified the most reliable climate products based on the elevation of target points. A web-based system was developed to allow users to easily select the most reliable reference climate data at each target point based on the elevation of the grid cell. By constructing the best combination of reference data for the study domain, the accuracy and reliability of statistically downscaled climate projections could be significantly improved.
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2009-01-01
A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. Solution to stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traced out an inverted-S-shaped graph. The center of the inverted-S graph corresponded to 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas it can be reduced to a small extent for a raked wingtip. The SDO capability is obtained by combining three codes: (1) The MSC/Nastran code was the deterministic analysis tool, (2) The fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
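The kind of reliability-based constraint described above can be illustrated with a classical load-strength interference calculation under normality assumptions; the means and standard deviations below are invented, not Boeing 767-400 data.

```python
import math

def failure_probability(mu_s, sd_s, mu_l, sd_l):
    """Normal load L and strength S: P_f = Phi(-beta)."""
    beta = (mu_s - mu_l) / math.sqrt(sd_s**2 + sd_l**2)   # reliability index
    p_f = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # std normal CDF
    return beta, p_f

beta, pf = failure_probability(mu_s=450.0, sd_s=30.0, mu_l=300.0, sd_l=25.0)
print(f"beta = {beta:.2f}, P_f = {pf:.2e}, reliability p = {1 - pf:.6f}")
```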
Savage, Trevor Nicholas; McIntosh, Andrew Stuart
2017-03-01
It is important to understand factors contributing to and directly causing sports injuries to improve the effectiveness and safety of sports skills. The characteristics of injury events must be evaluated and described meaningfully and reliably. However, many complex skills cannot be effectively investigated quantitatively because of ethical, technological and validity considerations. Increasingly, qualitative methods are being used to investigate human movement for research purposes, but there are concerns about reliability and measurement bias of such methods. Using the tackle in Rugby union as an example, we outline a systematic approach for developing a skill analysis protocol with a focus on improving objectivity, validity and reliability. Characteristics for analysis were selected using qualitative analysis and biomechanical theoretical models and epidemiological and coaching literature. An expert panel comprising subject matter experts provided feedback and the inter-rater reliability of the protocol was assessed using ten trained raters. The inter-rater reliability results were reviewed by the expert panel and the protocol was revised and assessed in a second inter-rater reliability study. Mean agreement in the second study improved and was comparable (52-90% agreement and ICC between 0.6 and 0.9) with other studies that have reported inter-rater reliability of qualitative analysis of human movement.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on utilizing failure rates derived from similar equipment or simply expert judgment.
1988-12-01
The software development scene is often characterized by schedule and cost estimates that are grossly inaccurate. … Among the cost estimation models surveyed are the SPQR Model (Jones) and COPMO (Thebaut). T. Capers Jones has developed a software cost estimation model called the Software Productivity, Quality, and Reliability (SPQR) model; the basic approach is similar to that of Boehm's. … The time T (in seconds) is simply derived from the effort E by dividing by the Stroud number S: T = E/S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calderer, Antoni; Yang, Xiaolei; Angelidis, Dionysios
2015-10-30
The present project involves the development of modeling and analysis design tools for assessing offshore wind turbine technologies. The computational tools developed herein are able to resolve the effects of the coupled interaction of atmospheric turbulence and ocean waves on aerodynamic performance and structural stability and reliability of offshore wind turbines and farms. Laboratory scale experiments have been carried out to derive data sets for validating the computational models.
Al Abboud, Safaa Ahmed; Ahmad, Sohail; Bidin, Mohamed Badrulnizam Long; Ismail, Nahlah Elkudssiah
2016-11-01
Diabetes Mellitus (DM) is a common silent epidemic disease with frequent morbidity and mortality. Psychological and psychosocial health factors negatively influence glycaemic control in diabetic patients. Therefore, various questionnaires were developed to address the psychological and psychosocial well-being of diabetic patients. Most of these questionnaires were first developed in English and then translated into different languages to make them useful for local communities. The main aim of this study was to translate and validate the Malaysian versions of the Perceived Diabetes Self-Management Scale (PDSMS) and the Medication Understanding and Use Self-Efficacy Scale (MUSE), and to revalidate the 8-item Morisky Medication Adherence Scale (MMAS-8), by the Partial Credit Rasch Model (Modern Test Theory). Permission was obtained from the respective authors to translate the English versions of the PDSMS, MUSE and MMAS-8 into the Malay language according to established standard international translation guidelines. In this cross-sectional study, 62 adult DM patients were recruited from Hospital Kuala Lumpur by a purposive sampling method. The data were extracted from the self-administered questionnaires and entered manually in the Ministeps (Winsteps) software for Partial Credit Rasch Model analysis. The item and person reliability, infit/outfit Z-Standard (ZSTD), infit/outfit Mean Square (MNSQ) and point measure correlation (PTMEA Corr) values were analysed for the reliability analyses and construct validation. The Malay versions of the PDSMS, MUSE and MMAS-8 were found to be valid and reliable instruments for Malaysian diabetic adults. The instrument showed good overall reliability values of 0.76 and 0.93 for item and person reliability, respectively. The values of infit/outfit ZSTD, infit/outfit MNSQ, and PTMEA Corr were also within the stipulated range of the Rasch Model, supporting the validity of the questionnaire's item constructs. The translated Malay version of the PDSMS, MUSE and MMAS-8 was found to be a highly reliable and valid questionnaire by the Partial Credit Model. The Malay version was conceptually equivalent to the original version, easy to understand, and can be used for Malaysian adult diabetic patients in future studies.
Sabater-Galindo, Marta; Ruiz de Maya, Salvador; Benrimoj, Shalom I; Gastelurrutia, Miguel Angel; Martínez-Martínez, Fernando; Sabater-Hernández, Daniel
The roles of community pharmacists are evolving to include provision of expanded professional pharmacy services, thus leading to an increased interest in pharmacist-patient interactions. Role theory can be used to explain the interaction between this pair of individuals, by focusing on the roles performed by each one. To develop and test a model that relates patients' image of the pharmacist to their expectations of the pharmacist's role, and how this then influences patients' reactions toward the pharmacist's role. A qualitative study was undertaken, and a questionnaire was created for the development of the model, based on role theory. The content, dimensions, validity and reliability of the questionnaire were pre-tested qualitatively and in a pilot mail survey. The reliability and validity of the proposed model were tested using confirmatory factor analysis (CFA). Structural equation modelling (SEM) was used to explain relationships between dimensions of the final model. A final model was developed. CFA concluded that the model was valid and reliable (Goodness of Fit indices: χ²(109) = 227.662, P = 0.000, RMSEA = 0.05, SRMR = 0.05, GFI = 1.00, NNFI = 0.90, CFI = 0.92). SEM indicated that "perceived pharmacist image" was associated positively and significantly with both "professional expectations" (the standardized path coefficient of (H) = 0.719, P < 0.05), as well as "courtesy expectations" (the standardized path coefficient of (H) = 0.582, P < 0.05). At the same time, "professional expectations" were associated positively and significantly with "positive reactions" (the standardized path coefficient of (H) = 0.358, P < 0.05), but negatively with "negative reactions" (the standardized path coefficient of (H) = -0.427, P < 0.05). "Courtesy expectations" were associated positively and significantly with "positive reactions" (the standardized path coefficient of (H) = 0.081, P < 0.05), as well as "negative reactions" (the standardized path coefficient of (H) = 0.450, P < 0.05). A valid and reliable model of patients' image of the pharmacist related to their expectations and reactions to the pharmacist's role was developed and tested. When the perceived image of the pharmacist is enhanced, patients' expectations of the pharmacist are heightened; in turn, these expectations were associated with reactions of patients. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Prayogi, S.; Yuanita, L.; Wasis
2018-01-01
This study aimed to develop a Critical-Inquiry-Based-Learning (CIBL) model to promote the critical thinking (CT) ability of preservice teachers. The CIBL learning model was developed to meet the criteria of validity, practicality, and effectiveness. Validation of the model involved 4 expert validators through a focus group discussion (FGD) mechanism. The CIBL learning model was declared valid to promote CT ability, with a validity level (Va) of 4.20 and reliability (r) of 90.1% (very reliable). The practicality of the model was evaluated when it was implemented with 17 preservice teachers. The CIBL learning model was declared practical, as measured by learning feasibility (LF) with very good criteria (LF score = 4.75). The effectiveness of the model was evaluated from the improvement in CT ability after its implementation. CT ability was evaluated using a scoring technique adapted from the Ennis-Weir Critical Thinking Essay Test. The average CT score on the pretest was -1.53 (uncritical criteria), whereas on the posttest it was 8.76 (critical criteria), with an N-gain score of 0.76 (high criteria). Based on the results of this study, it can be concluded that the developed CIBL learning model is feasible to promote the CT ability of preservice teachers.
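The N-gain statistic quoted above follows Hake's normalized-gain definition, sketched below. The maximum score is an assumption (a value of 12 happens to reproduce the reported 0.76 from the quoted pretest and posttest means), since the abstract does not state it.

```python
def n_gain(pre, post, max_score):
    """Hake's normalized gain: actual gain over maximum possible gain."""
    return (post - pre) / (max_score - pre)

pre, post = -1.53, 8.76                             # means quoted above
print(round(n_gain(pre, post, max_score=12.0), 2))  # max_score assumed
```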
Development of an Environment for Software Reliability Model Selection
1992-09-01
Research is now directed to other related problems, such as tools for model selection, multiversion programming, and software fault tolerance modeling. … Hardware can be repaired by spare modules, which is not the case for software. … Preventive maintenance is very important …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., with the assistance of NREL's PV Manufacturing R&D program, have continued the advancement of CIGS production technology through the development of trajectory-oriented predictive/control models, fault-tolerance control, control-platform development, in-situ sensors, and process improvements. Modeling activities to date include the development of physics-based and empirical models for CIGS and sputter-deposition processing, implementation of model-based control, and application of predictive models to the construction of new evaporation sources and for control. Model-based control is enabled through implementation of reduced or empirical models into a control platform. Reliability improvement activities include implementation of preventive maintenance schedules; detection of failed sensors/equipment and reconfiguration to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which, in turn, has been enabled by the control and reliability improvements due to this PV Manufacturing R&D program. This has resulted in substantial improvements of flexible CIGS PV module performance and efficiency.
R&D of high reliable refrigeration system for superconducting generators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosoya, T.; Shindo, S.; Yaguchi, H.
1996-12-31
Super-GM carries out R&D of 70 MW class superconducting generators (model machines), refrigeration systems and superconducting wires to apply superconducting technology to electric power apparatuses. The helium refrigeration system for keeping the field windings of a superconducting generator (SCG) in a cryogenic environment must meet the requirement of high reliability for uninterrupted long-term operation of the SCG. In FY 1992, a highly reliable conventional refrigeration system for the model machines was integrated by combining components such as a compressor unit, a higher-temperature cold box and a lower-temperature cold box, which were manufactured utilizing various fundamental technologies developed in the early stage of the project since 1988. Since FY 1993, its performance tests have been carried out. It has been confirmed that its performance fulfilled the development target of a liquefaction capacity of 100 L/h and removal of impurities in the helium gas to < 0.1 ppm. Furthermore, its operation method and performance were clarified for all different modes, such as how to control the liquefaction rate and how to supply liquid helium from a dewar to the model machine. In addition, the authors have made performance tests and system performance analyses of oil-free screw type and turbo type compressors, which greatly improve the reliability of conventional refrigeration systems. The operation performance and operational control method of the compressors have been clarified through the tests and analysis.
Development of an Integrated Agricultural Planning Model Considering Climate Change
NASA Astrophysics Data System (ADS)
Santikayasa, I. P.
2016-01-01
The goal of this study is to develop an agricultural planning model to sustain future water use, based on estimates of crop water requirements, water availability and future climate projections. For this purpose, the Citarum river basin, located in West Java, Indonesia, is selected as the study area. Two emission scenarios, A2 and B2, were selected. For the crop water requirement estimation, the output of the HadCM3 AOGCM is statistically downscaled using SDSM and used as the input for the WEAP model developed by SEI (Stockholm Environment Institute). The reliability of water use is assessed by comparing the irrigation water demand and the water allocation for the irrigation area. The water supply resources are assessed using the water planning tool. This study shows that temperature and precipitation over the study area are projected to increase in the future. The water availability was projected to increase under both the A2 and B2 emission scenarios. The irrigation water requirement is expected to decrease in the future under the A2 and B2 scenarios. By comparing the irrigation water demand and the water allocation for irrigation, the reliability of agricultural water use is expected to change in the 2050s and 2080s, while it will not change in the 2020s. The reliability under the A2 scenario is expected to be higher than under the B2 scenario. The combination of WEAP and SDSM is significant for assessing and allocating water resources in the region.
Callwood, Alison; Cooke, Debbie; Allan, Helen
2014-12-01
Published research has demonstrated that the multiple mini-interview (MMI) is a reliable assessment instrument in medical and nursing student selection. There is a dearth of evidence specifically relating to the advancement and subsequent evaluation of MMIs in the context of student midwife selection. To develop, pilot and examine the reliability of MMIs in pre-registration student midwife selection in a UK setting. DeVellis' framework for questionnaire development underpinned the generation of MMI scenarios. BSc (Hons) Midwifery Studies students at a Higher Education Institution in the UK volunteered to participate in 'mock' MMI circuits during the first week of their programme. An eight-station model was piloted. Communication skills were rated at each station as a generic attribute. Station-specific attributes assessed included: compassion and empathy; respect for difference and diversity; honesty and integrity; intellectual curiosity and reflective nature; advocacy; respect for privacy and dignity; team working and initiative; the role of the midwife and motivation to become a midwife. Participants' responses to scenario questions were rated on a 7-point scale. Cronbach's alpha scores measuring internal consistency ranged from 0.91 to 0.97. CONCLUSION: The systematic development of the MMI model and scenarios resulted in 'excellent' reliability across all stations. These findings endorse the MMI technique as a reliable alternative to the personal interview in informing final decisions in pre-registration student midwife selection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model
ERIC Educational Resources Information Center
de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.
2011-01-01
Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…
Importance of Nuclear Physics to NASA's Space Missions
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.; Cucinotta, F. A.
2001-01-01
We show that nuclear physics is extremely important for accurate risk assessments for space missions. Due to the paucity of experimental radiation-interaction data, it is imperative to develop reliable, accurate models of the interaction of radiation with matter. State-of-the-art nuclear cross-section models have been developed at the NASA Langley Research Center and are discussed.
ERIC Educational Resources Information Center
Clemens, Elysia V.; Carey, John C.; Harrington, Karen M.
2010-01-01
This article details the initial development of the School Counseling Program Implementation Survey and psychometric results including reliability and factor structure. An exploratory factor analysis revealed a three-factor model that accounted for 54% of the variance of the intercorrelation matrix and a two-factor model that accounted for 47% of…
The reliability of the Adelaide in-shoe foot model.
Bishop, Chris; Hillier, Susan; Thewlis, Dominic
2017-07-01
Understanding the biomechanics of the foot is essential for many areas of research and clinical practice, such as orthotic interventions and footwear development. Despite the widespread attention paid to the biomechanics of the foot during gait, what largely remains unknown is how the foot moves inside the shoe. This study investigated the reliability of the Adelaide In-Shoe Foot Model, which was designed to quantify in-shoe foot kinematics and kinetics during walking. Intra-rater reliability was assessed in 30 participants over five walking trials whilst wearing shoes during two data collection sessions, separated by one week. Sufficient reliability for use was interpreted as a coefficient of multiple correlation and intra-class correlation coefficient of >0.61. Inter-rater reliability was investigated separately in a second sample of 10 adults by two researchers with experience in applying markers for the purpose of motion analysis. The results indicated good consistency in waveform estimation for most kinematic and kinetic data, as well as good inter- and intra-rater reliability. The exceptions were the peak medial ground reaction force, the minimum abduction angle and the peak abduction/adduction external hindfoot joint moments, which showed less than acceptable repeatability. Based on our results, the Adelaide In-Shoe Foot Model can be used with confidence for 24 commonly measured biomechanical variables during shod walking. Copyright © 2017 Elsevier B.V. All rights reserved.
Software Technology for Adaptable, Reliable Systems (STARS)
1994-03-25
Models cited (with counts): Timeline (3), SECOMO (3), SEER (3), GSFC Software Engineering Lab Model (1), SLIM (4), SEER-SEM (1), SPQR (2), PRICE-S (2), internally-developed models (3), APMSS (1), SASET (Software Architecture Sizing Estimating Tool) (2), MicroMan II (2), and LCM (Logistics Cost Model) (2).
A comparison of hydrologic models for ecological flows and water availability
Peter V. Caldwell; Jonathan G. Kennen; Ge Sun; Julie E. Kiang; Jon B. Butcher; Michele C. Eddy; Lauren E. Hay; Jacob H. LaFontaine; Ernie F. Hain; Stacy A. C. Nelson; Steve G. McNulty
2015-01-01
Robust hydrologic models are needed to help manage water resources for healthy aquatic ecosystems and reliable water supplies for people, but there is a lack of comprehensive model comparison studies that quantify differences in streamflow predictions among model applications developed to answer management questions. We assessed differences in daily streamflow...
Reliable models for assessing human exposures are important for understanding health risks from chemicals. The Stochastic Human Exposure and Dose Simulation model for multimedia, multi-route/pathway chemicals (SHEDS-Multimedia), developed by EPA’s Office of Research and Developm...
Development of modelling algorithm of technological systems by statistical tests
NASA Astrophysics Data System (ADS)
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economic assessment of design efficiency for various technological systems at the operation stage. The modelling algorithm for a technological system, based on statistical tests and taking the reliability index into account, allows the level of machinery technical excellence to be estimated and the efficiency of design reliability to be assessed against its performance. The economic feasibility of its application is to be determined on the basis of the service quality of the technological system, with further forecasting of the volumes and range of spare parts supply.
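A minimal statistical-test (Monte Carlo) sketch in the spirit of the paper is shown below: draw random component lifetimes, run many virtual operation periods, and estimate the reliability index empirically. The lifetime distributions and the two-component series structure are assumed for illustration.

```python
import random

def simulate_reliability(t_mission, n_trials=100_000, seed=7):
    rng = random.Random(seed)
    survivals = 0
    for _ in range(n_trials):
        t_mech = rng.weibullvariate(12_000, 1.8)  # mechanical subsystem, h
        t_elec = rng.expovariate(1 / 40_000)      # electrical subsystem, h
        if min(t_mech, t_elec) > t_mission:       # series system survives
            survivals += 1
    return survivals / n_trials

print("estimated R(5000 h):", simulate_reliability(5_000))
```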
From bedside to bench and back again: research issues in animal models of human disease.
Tkacs, Nancy C; Thompson, Hilaire J
2006-07-01
To improve outcomes for patients with many serious clinical problems, multifactorial research approaches by nurse scientists, including the use of animal models, are necessary. Animal models serve as analogies for clinical problems seen in humans and must meet certain criteria, including validity and reliability, to be useful in moving research efforts forward. This article describes research considerations in the development of rodent models. As the standard of diabetes care evolves to emphasize intensive insulin therapy, rates of severe hypoglycemia are increasing among patients with type 1 and type 2 diabetes mellitus. A consequence of this change in clinical practice is an increase in rates of two hypoglycemia-related diabetes complications: hypoglycemia-associated autonomic failure (HAAF) and resulting hypoglycemia unawareness. Work on an animal model of HAAF is in an early developmental stage, with several labs reporting different approaches to model this complication of type 1 diabetes mellitus. This emerging model serves as an example illustrating how evaluation of validity and reliability is critically important at each stage of developing and testing animal models to support inquiry into human disease.
Boerebach, Benjamin C M; Lombarts, Kiki M J M H; Arah, Onyebuchi A
2016-03-01
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians' teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians' teaching performance. © The Author(s) 2014.
Reliability models: the influence of model specification in generation expansion planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stremel, J.P.
1982-10-01
This paper is a critical evaluation of reliability methods used for generation expansion planning. It is shown that the methods for treating uncertainty are critical for determining the relative reliability value of expansion alternatives. It is also shown that the specification of the reliability model will not favor all expansion options equally. Consequently, the model is biased. In addition, reliability models should be augmented with an economic value of reliability (such as the cost of emergency procedures or energy not served). Generation expansion evaluations which ignore the economic value of excess reliability can be shown to be inconsistent. The conclusions are that, in general, a reliability model simplifies generation expansion planning evaluations. However, for a thorough analysis, the expansion options should be reviewed for candidates which may be unduly rejected because of the bias of the reliability model. And this implies that for a consistent formulation in an optimization framework, the reliability model should be replaced with a full economic optimization which includes the costs of emergency procedures and interruptions in the objective function.
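A classic computation behind such generation-reliability models is the capacity-outage probability table, built by convolution, from which the loss-of-load probability (LOLP) is read off. The sketch below uses invented unit sizes, forced-outage rates, and load.

```python
import numpy as np

units = [(200, 0.05), (200, 0.05), (300, 0.08), (100, 0.02)]  # (MW, FOR)
total = sum(mw for mw, _ in units)

# outage[k] = probability that exactly k MW of capacity is on forced outage
outage = np.zeros(total + 1)
outage[0] = 1.0
for mw, q in units:
    nxt = outage * (1.0 - q)        # unit available: outage state unchanged
    nxt[mw:] += outage[:-mw] * q    # unit failed: outage shifted up by mw
    outage = nxt

load = 550  # MW
lolp = outage[total - load + 1:].sum()  # available capacity < load
print(f"LOLP at {load} MW load: {lolp:.4f}")
```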
Paap, Kenneth R; Sawi, Oliver
2016-12-01
Studies testing for individual or group differences in executive functioning can be compromised by unknown test-retest reliability. Test-retest reliabilities across an interval of about one week were obtained from performance in the antisaccade, flanker, Simon, and color-shape switching tasks. There is a general trade-off between the greater reliability of single mean RT measures, and the greater process purity of measures based on contrasts between mean RTs in two conditions. The individual differences in RT model recently developed by Miller and Ulrich was used to evaluate the trade-off. Test-retest reliability was statistically significant for 11 of the 12 measures, but was of moderate size, at best, for the difference scores. The test-retest reliabilities for the Simon and flanker interference scores were lower than those for switching costs. Standard practice evaluates the reliability of executive-functioning measures using split-half methods based on data obtained in a single day. Our test-retest measures of reliability are lower, especially for difference scores. These reliability measures must also take into account possible day effects that classical test theory assumes do not occur. Measures based on single mean RTs tend to have acceptable levels of reliability and convergent validity, but are "impure" measures of specific executive functions. The individual differences in RT model shows that the impurity problem is worse than typically assumed. However, the "purer" measures based on difference scores have low convergent validity that is partly caused by deficiencies in test-retest reliability. Copyright © 2016 Elsevier B.V. All rights reserved.
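The low reliability of difference scores noted above follows directly from the classical-test-theory formula for the reliability of a difference, sketched below; the standard deviations, reliabilities, and inter-condition correlation are invented but typical in magnitude.

```python
def diff_score_reliability(sx, sy, rxx, ryy, rxy):
    """Classical reliability of the difference D = X - Y."""
    num = sx**2 * rxx + sy**2 * ryy - 2 * sx * sy * rxy
    den = sx**2 + sy**2 - 2 * sx * sy * rxy
    return num / den

# Two reliable (0.9), highly correlated (0.8) conditions, RT SDs ~100 ms:
print(round(diff_score_reliability(100.0, 105.0, 0.9, 0.9, 0.8), 2))
```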
Outdoor Leader Career Development: Exploration of a Career Path
ERIC Educational Resources Information Center
Wagstaff, Mark
2016-01-01
The purpose of this study was to assess the efficacy of the proposed Outdoor Leader Career Development Model (OLCDM) through the development of the Outdoor Leader Career Development Inventory (OLCDI). I assessed the reliability and validity of the OLCDI through exploratory factor analysis, principal component analysis, and varimax rotation, based…
Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E
2018-05-10
On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors, and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario, connecting a five-LiDAR sensor network, is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
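The Q-learning step can be illustrated with a stateless (bandit-style) simplification; the reward model below, trading an assumed detection reliability against a per-sensor cost, is a stand-in and not the authors' formulation.

    # Sketch of the Q-learning idea: learn how many LiDAR sensors to activate.
    # The reward model is an assumption for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    actions = [1, 2, 3, 4, 5]              # number of active sensors
    q = np.zeros(len(actions))
    alpha, eps = 0.1, 0.2

    def reward(n_sensors):
        detect = 1 - 0.35 ** n_sensors     # assumed detection reliability
        cost = 0.05 * n_sensors            # assumed energy/bandwidth cost
        return (rng.random() < detect) - cost

    for t in range(5000):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q))
        q[a] += alpha * (reward(actions[a]) - q[a])   # single-state Q-update

    print("learned values:", q.round(3), "-> choose", actions[int(np.argmax(q))])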
Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P
2017-10-01
Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate throughout deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single dynamic positioning controller. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The redundant hardware configuration takes the form of a triple-redundant hot-standby arrangement comprising three identical operator stations and three real-time control computers interconnected through dual networks. The motion control and redundancy management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture, consisting of an application layer, a real-time layer, and a physical layer, was planned during software development. The behavior of the DP-3 dynamic positioning control system was modeled by a Markov model to analyze its reliability. The effects of variation in parameters on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
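As a rough illustration of the Markov reliability analysis, the sketch below integrates the state probabilities of a triple-redundant, majority-voting controller; the failure rate and coverage values are assumptions for demonstration, not DP-3 data.

    # Sketch: Markov reliability of a triple-redundant controller with majority
    # voting (system fails once fewer than two units remain). Rates assumed.
    import numpy as np
    from scipy.integrate import solve_ivp

    lam = 1e-4   # per-hour failure rate of one control computer (assumption)
    c = 0.99     # coverage: probability a failure is detected/handled (assumption)

    # States: 0 = three good, 1 = two good, 2 = failed (absorbing)
    def dpdt(t, p):
        p3, p2, pf = p
        return [-3 * lam * p3,
                3 * lam * c * p3 - 2 * lam * p2,
                3 * lam * (1 - c) * p3 + 2 * lam * p2]

    sol = solve_ivp(dpdt, [0, 8760], [1.0, 0.0, 0.0], t_eval=[8760])
    print("1-year reliability: %.6f" % (1 - sol.y[2, -1]))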
Mentiplay, Benjamin F; Tan, Dawn; Williams, Gavin; Adair, Brooke; Pua, Yong-Hao; Bower, Kelly J; Clark, Ross A
2018-04-27
Isometric rate of torque development examines how quickly force can be exerted and may resemble everyday task demands more closely than isometric strength. Rate of torque development may provide further insight into the relationship between muscle function and gait following stroke. The aims of this study were to examine the test-retest reliability of hand-held dynamometry for measuring isometric rate of torque development following stroke, to examine associations between strength and rate of torque development, and to compare the relationships of strength and rate of torque development to gait velocity. Sixty-three post-stroke adults participated (mean age 60 years, 34 male). Gait velocity was assessed using the fast-paced 10 m walk test. Isometric strength and rate of torque development of seven lower-limb muscle groups were assessed with hand-held dynamometry. Intraclass correlation coefficients were calculated for reliability and Spearman's rho correlations were calculated for associations. Regression analyses using partial F-tests were used to compare strength and rate of torque development in their relationship with gait velocity. Good to excellent reliability was shown for strength and rate of torque development (0.82-0.97). Strong associations were found between strength and rate of torque development (0.71-0.94). Despite high correlations between strength and rate of torque development, rate of torque development failed to add significant value to regression models that already contained strength. Assessment of isometric rate of torque development with hand-held dynamometry is reliable following stroke; however, isometric strength demonstrated stronger relationships with gait velocity. Further research should examine the relationship between dynamic measures of muscle strength/torque and gait after stroke. Copyright © 2018 Elsevier Ltd. All rights reserved.
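The nested-model comparison reported here can be reproduced in outline with a partial F-test on synthetic data; the variable names and effect sizes below are illustrative assumptions, not the study's measurements.

    # Sketch: does rate of torque development (RTD) add predictive value for
    # gait velocity beyond strength? Nested OLS models compared by partial F.
    import numpy as np
    from scipy.stats import f as f_dist

    rng = np.random.default_rng(2)
    n = 63
    strength = rng.normal(1.0, 0.3, n)
    rtd = 0.9 * strength + rng.normal(0, 0.15, n)   # RTD tracks strength
    gait = 0.8 * strength + rng.normal(0, 0.2, n)

    def rss(X, y):
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(np.sum((y - X @ beta) ** 2)), X.shape[1]

    rss1, p1 = rss(strength, gait)                          # strength only
    rss2, p2 = rss(np.column_stack([strength, rtd]), gait)  # strength + RTD
    F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
    print("partial F = %.2f, p = %.3f" % (F, f_dist.sf(F, p2 - p1, n - p2)))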
Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko
2012-02-24
This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield water, since their composition often differs significantly from the average (in component concentrations and/or matrix). Fast and reliable method development therefore has to be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision-making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested through validation of its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study. Copyright © 2011 Elsevier B.V. All rights reserved.
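A minimal sketch of the multi-criteria selection step is shown below; the candidate conditions, criterion values, and weights are invented for illustration and are not the paper's.

    # Sketch: rank candidate separation conditions by weighted criteria.
    # Candidates, values, and weights are assumptions, not the study's data.
    candidates = {
        "gradient A": {"objective": 0.92, "robustness": 0.80, "time_min": 18, "min_sep": 0.9},
        "gradient B": {"objective": 0.88, "robustness": 0.95, "time_min": 12, "min_sep": 1.4},
    }
    weights = {"objective": 0.3, "robustness": 0.3, "time_min": -0.02, "min_sep": 0.4}

    def score(criteria):
        # weighted sum; analysis time enters with a negative weight (shorter is better)
        return sum(weights[k] * v for k, v in criteria.items())

    best = max(candidates, key=lambda name: score(candidates[name]))
    print(best, round(score(candidates[best]), 3))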
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Xu, Degang; Hu, Chunyang
2015-01-01
The services composition technology provides flexible methods for building service composition applications (SCAs) in wireless sensor networks (WSNs). The high reliability and high performance of SCAs help services composition technology promote the practical application of WSNs. The optimization methods for reliability and performance used for traditional software systems are mostly based on the instantiations of software components, which are inapplicable and inefficient for the ever-changing SCAs in WSNs. In this paper, we consider SCAs with fault tolerance in WSNs. Based on a Universal Generating Function (UGF), we propose a reliability and performance model of SCAs in WSNs, which generalizes a redundancy optimization problem to a multi-state system. Building on this model, an efficient optimization algorithm for the reliability and performance of SCAs in WSNs is developed using a Genetic Algorithm (GA) to find the optimal structure of SCAs with fault tolerance in WSNs. To examine the feasibility of our algorithm, we evaluated its performance. Furthermore, the interrelationships between reliability, performance and cost are investigated. In addition, a distinct approach to determine the most suitable parameters in the suggested algorithm is proposed. PMID:26561818
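The UGF formalism itself is compact enough to sketch: each component is a discrete distribution over performance levels, and structure operators compose them; the levels and probabilities below are illustrative assumptions, not the paper's.

    # Sketch of the universal generating function (UGF) idea: combine discrete
    # performance distributions with structure operators. Numbers illustrative.
    from collections import defaultdict
    from itertools import product

    def compose(u, v, op):
        """Combine two UGFs {perf_level: prob} with a structure operator op."""
        w = defaultdict(float)
        for (g1, p1), (g2, p2) in product(u.items(), v.items()):
            w[op(g1, g2)] += p1 * p2
        return dict(w)

    sensor = {0: 0.05, 10: 0.95}      # performance level: probability
    link = {0: 0.02, 10: 0.98}
    # Series: throughput limited by the slower element; parallel: capacities add.
    series = compose(sensor, link, min)
    redundant = compose(series, series, lambda a, b: a + b)
    demand = 10
    print("P(perf >= demand) =", sum(p for g, p in redundant.items() if g >= demand))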
A road map for integrating eco-evolutionary processes into biodiversity models.
Thuiller, Wilfried; Münkemüller, Tamara; Lavergne, Sébastien; Mouillot, David; Mouquet, Nicolas; Schiffers, Katja; Gravel, Dominique
2013-05-01
The demand for projections of the future distribution of biodiversity has triggered an upsurge in modelling at the crossroads between ecology and evolution. Despite the enthusiasm around these so-called biodiversity models, most approaches are still criticised for not integrating key processes known to shape species ranges and community structure. Developing an integrative modelling framework for biodiversity distribution promises to improve the reliability of predictions and to give a better understanding of the eco-evolutionary dynamics of species and communities under changing environments. In this article, we briefly review some eco-evolutionary processes and interplays among them, which are essential to provide reliable projections of species distributions and community structure. We identify gaps in theory, quantitative knowledge and data availability hampering the development of an integrated modelling framework. We argue that model development relying on a strong theoretical foundation is essential to inspire new models, manage complexity and maintain tractability. We support our argument with an example of a novel integrated model for species distribution modelling, derived from metapopulation theory, which accounts for abiotic constraints, dispersal, biotic interactions and evolution under changing environmental conditions. We hope such a perspective will motivate exciting and novel research, and challenge others to improve on our proposed approach. © 2013 John Wiley & Sons Ltd/CNRS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok
The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I&C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
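The arithmetic commonly associated with this kind of statistical software testing is a confidence bound from failure-free demands, following the relation (1 - p)^n = 1 - C; a minimal sketch with illustrative numbers (not from the paper) follows.

    # Sketch: confidence bounds from n failure-free test demands.
    import math

    def upper_bound(n_tests, confidence=0.95):
        # Upper confidence bound on per-demand failure probability p,
        # from (1 - p)^n = 1 - confidence.
        return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

    def tests_needed(p_target, confidence=0.95):
        # Failure-free demands required to claim p <= p_target at confidence.
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_target))

    print(upper_bound(5000))    # ~6.0e-4 after 5000 failure-free tests
    print(tests_needed(1e-4))   # ~29,956 tests for p <= 1e-4 at 95%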
Jalalian, Mehrdad; Latiff, Latiffah; Hassan, Syed Tajuddin Syed; Hanachi, Parichehr; Othman, Mohamed
2010-05-01
University students are a target group for blood donor programs. To develop a blood donation culture among university students, it is important to identify factors used to predict their intent to donate blood. This study attempted to develop a valid and reliable measurement tool to be employed in assessing variables in a blood donation behavior model based on the Theory of Planned Behavior (TPB), a commonly used theoretical foundation for social psychology studies. We employed an elicitation study, in which we determined the commonly held behavioral and normative beliefs about blood donation. We used the results of the elicitation study and a standard format for creating questionnaire items for all constructs of the TPB model to prepare the first draft of the measurement tool. After piloting the questionnaire, we prepared the final draft of the questionnaire to be used in our main study. Examination of internal consistency using Cronbach's alpha coefficient and item-total statistics indicated the constructs "Intention" and "Self efficacy" had the highest reliability. Removing one item from each of the constructs "Attitude," "Subjective norm," "Self efficacy," or "Behavioral beliefs" can considerably increase the reliability of the measurement tool; however, such action is controversial, especially for the variables "attitude" and "subjective norm." We therefore retained all items of the first-draft questionnaire in our main study to keep it a reliable measurement tool.
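Cronbach's alpha and the alpha-if-item-deleted statistic used above are straightforward to compute; the sketch below uses synthetic item scores, not the study's data.

    # Sketch: Cronbach's alpha and alpha-if-item-deleted on a synthetic
    # respondents x items score matrix.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(0, 1, (200, 1))
    items = latent + rng.normal(0, 0.8, (200, 5))   # 5 items, one construct
    print("alpha = %.2f" % cronbach_alpha(items))
    for j in range(items.shape[1]):                 # alpha if item deleted
        rest = np.delete(items, j, axis=1)
        print("without item %d: %.2f" % (j, cronbach_alpha(rest)))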
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Brunett, Acacia J.; Passerini, Stefano
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory (Argonne) participated in a two year collaboration to modernize and update the probabilistic risk assessment (PRA) for the PRISM sodium fast reactor. At a high level, the primary outcome of the project was the development of a next-generation PRA that is intended to enable risk-informed prioritization of safety- and reliability-focused research and development. A central Argonne task during this project was a reliability assessment of passive safety systems, which included the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedbacks of the metal fuel core. Both systems were examined utilizing a methodology derived from the Reliability Method for Passive Safety Functions (RMPS), with an emphasis on developing success criteria based on mechanistic system modeling while also maintaining consistency with the Fuel Damage Categories (FDCs) of the mechanistic source term assessment. This paper provides an overview of the reliability analyses of both systems, including highlights of the FMEAs, the construction of best-estimate models, uncertain parameter screening and propagation, and the quantification of system failure probability. In particular, special focus is given to the methodologies to perform the analysis of uncertainty propagation and the determination of the likelihood of violating FDC limits. Additionally, important lessons learned are also reviewed, such as optimal sampling methodologies for the discovery of low likelihood failure events and strategies for the combined treatment of aleatory and epistemic uncertainties.
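A heavily simplified sketch of the quantification step, propagating assumed input uncertainties through a stand-in system model and counting violations of an assumed limit, is given below; nothing in it reflects actual PRISM parameters.

    # Sketch: Monte Carlo propagation of uncertain parameters against an
    # assumed fuel-damage-category limit. Model and numbers are stand-ins.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000
    decay_power = rng.normal(1.00, 0.05, n)     # uncertain input (assumed)
    rvacs_htc = rng.lognormal(0.0, 0.10, n)     # heat-removal capability (assumed)
    peak_temp = 650 + 120 * decay_power / rvacs_htc   # stand-in system model (C)

    limit = 800.0                               # assumed FDC temperature limit
    pf = np.mean(peak_temp > limit)
    se = np.sqrt(pf * (1 - pf) / n)             # sampling standard error
    print("P(violate limit) = %.2e +/- %.1e" % (pf, se))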
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
NASA Astrophysics Data System (ADS)
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources from the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near real-time service, including load-balancing, real-time monitoring, and instance cloning. We will also briefly discuss the progress achieved on NOAA's Big Earth Data Initiative (BEDI) funded project to develop an API interface to our Enhanced Magnetic Model (EMM).
Dasgupta, Gargi; BenMohamed, Lbachir
2011-01-01
Herpes simplex virus type 1 and type 2 (HSV-1 and HSV-2) specific CD8+ T cells that reside in sensory ganglia appear to control recurrent herpetic disease by aborting or reducing spontaneous and sporadic reactivations of latent virus. A reliable animal model is the ultimate key factor in testing the efficacy of therapeutic vaccines that boost the level and the quality of sensory ganglia-resident CD8+ T cells against spontaneous herpes reactivation from sensory neurons, yet its relevance has often been overlooked. Herpes vaccinologists are hesitant about using the mouse as a model in pre-clinical development of therapeutic vaccines because mice do not adequately mimic spontaneous viral shedding or recurrent symptomatic disease, as occur in humans. Alternatives to mouse models are rabbits and guinea pigs, in which reactivation arises spontaneously with clinical features relevant to human disease. However, while rabbits and guinea pigs develop spontaneous HSV reactivation and recurrent ocular and genital disease, none of them can mount CD8+ T cell responses specific to Human Leukocyte Antigen- (HLA-) restricted epitopes. In this review, we discuss the advantages and limitations of these animal models and describe a novel “humanized” HLA transgenic rabbit, which shows spontaneous HSV-1 reactivation and recurrent ocular disease and mounts CD8+ T cell responses to HLA-restricted epitopes. Adequate investments are needed to develop reliable preclinical animal models, such as HLA class I and class II double transgenic rabbits and guinea pigs, to balance the ethical and financial concerns associated with the rising number of unsuccessful clinical trials for therapeutic vaccine formulations tested in unreliable mouse models. PMID:21718746
NDE reliability and probability of detection (POD) evolution and paradigm shift
NASA Astrophysics Data System (ADS)
Singh, Surendra
2014-02-01
The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed "Have Cracks - Will Travel", or in short "Have Cracks", by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework for POD estimation by Berens and Hovey in 1981, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of a reliability assessment that consists of six phases: 1) sample selection using design of experiments (DOE), 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability & Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, when estimating POD. Another objective is to list gaps between "hoped for" versus validated or fielded failed hardware.
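For readers unfamiliar with the log-odds POD model behind MIL-HDBK-1823A style analysis, the following sketch fits one to synthetic hit/miss data by maximum likelihood; all data and fitted quantities are illustrative assumptions.

    # Sketch: log-odds POD model fit to synthetic hit/miss inspection data.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    size = rng.uniform(0.2, 3.0, 400)                 # flaw size, mm
    true_pod = 1 / (1 + np.exp(-(np.log(size) - np.log(1.0)) / 0.25))
    hit = rng.random(400) < true_pod                  # simulated inspections

    def nll(theta):
        mu, sigma = theta
        p = 1 / (1 + np.exp(-(np.log(size) - mu) / sigma))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

    fit = minimize(nll, x0=[0.0, 0.5], method="Nelder-Mead")
    mu, sigma = fit.x
    sigma = abs(sigma)
    a50 = np.exp(mu)                                  # size with POD = 50%
    a90 = np.exp(mu + sigma * np.log(9))              # size with POD = 90%
    print("a50 = %.2f mm, a90 = %.2f mm" % (a50, a90))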
NASA Technical Reports Server (NTRS)
Lawrence, Stella
1991-01-01
The objective of this project was to develop and calibrate quantitative models for predicting the quality of software. Reliable flight and supporting ground software is a highly important factor in the successful operation of the space shuttle program. The models used in the present study came from SMERFS (Statistical Modeling and Estimation of Reliability Functions for Software), which contains ten models. A first run modeling the cumulative number of failures versus execution time gave fairly good results for our data. Plots of cumulative software failures versus calendar weeks were made and the model results were compared with the historical data on the same graph. If a model agrees with actual historical behavior for a set of data, then there is confidence in future predictions for those data. Considering the quality of the data, the models have given some significant results, even at this early stage. With better care in data collection, data analysis, recording of the fixing of failures, and CPU execution times, the models should prove extremely helpful in making predictions regarding the future pattern of failures, including an estimate of the number of errors remaining in the software and the additional testing time required for the software quality to reach acceptable levels. It appears that there is no one 'best' model for all cases; it is for this reason that the aim of this project was to test several models. One of the recommendations resulting from this study is that great care must be taken in the collection of data. When using a model, the data should satisfy the model assumptions.
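As an example of the SMERFS model family, the sketch below fits the Goel-Okumoto NHPP mean-value function m(t) = a(1 - e^(-bt)) to a synthetic failure history; the data are invented, not the shuttle data discussed above.

    # Sketch: Goel-Okumoto NHPP fit to a synthetic cumulative failure history.
    import numpy as np
    from scipy.optimize import curve_fit

    weeks = np.arange(1, 21, dtype=float)
    failures = np.array([5, 9, 14, 17, 21, 24, 26, 29, 30, 32,
                         33, 35, 36, 36, 37, 38, 38, 39, 39, 40], dtype=float)

    def m(t, a, b):
        # expected cumulative failures by time t
        return a * (1 - np.exp(-b * t))

    (a, b), _ = curve_fit(m, weeks, failures, p0=[50.0, 0.1])
    print("estimated total faults a = %.1f, remaining = %.1f" % (a, a - failures[-1]))
    print("expected failures next week: %.2f" % (m(21, a, b) - m(20, a, b)))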
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
This report describes the structure and operation of prototype computer programs developed for a Monte Carlo simulation model, GENESIS, and for two analytical models, OPCON and OPPLAN. It includes input data requirements and sample test cases.
Effectiveness of back-to-back testing
NASA Technical Reports Server (NTRS)
Vouk, Mladen A.; Mcallister, David F.; Eckhardt, David E.; Caglayan, Alper; Kelly, John P. J.
1987-01-01
Three models of back-to-back testing processes are described. Two models treat the case where there is no intercomponent failure dependence. The third model describes the more realistic case where there is correlation among the failure probabilities of the functionally equivalent components. The theory indicates that back-to-back testing can, under the right conditions, provide a considerable gain in software reliability. The models are used to analyze the data obtained in a fault-tolerant software experiment. It is shown that the expected gain is indeed achieved, and exceeded, provided the intercomponent failure dependence is sufficiently small. However, even with relatively high correlation, the use of several functionally equivalent components coupled with back-to-back testing may provide a considerable reliability gain. An implication of this finding is that multiversion software development is a feasible and cost effective approach to providing highly reliable software components intended for fault-tolerant software systems, on condition that special attention is directed at early detection and elimination of correlated faults.
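The effect of correlated faults on back-to-back testing can be illustrated by simulation: identical common-mode failures produce agreeing (wrong) outputs and so escape the output comparison. The sketch below uses assumed failure probabilities, not the experiment's data.

    # Sketch: back-to-back testing of two functionally equivalent versions,
    # with and without correlated (common-mode) faults. Numbers assumed.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 1_000_000                       # test cases
    p = 1e-3                            # per-version failure probability

    def detected_fraction(p_common):
        common = rng.random(n) < p_common             # both fail identically
        a = common | (rng.random(n) < p - p_common)   # independent residual faults
        b = common | (rng.random(n) < p - p_common)
        fails = a | b
        disagree = a ^ b                              # comparison fires only on disagreement
        return disagree.sum() / max(fails.sum(), 1)

    for p_common in [0.0, 0.3 * p, 0.7 * p]:
        print("common-fault share %.1f: detected %.2f of failing runs"
              % (p_common / p, detected_fraction(p_common)))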
Byrne, Patrick A; Crawford, J Douglas
2010-06-01
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
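The core MLE weighting is compact: each cue is weighted by its inverse variance, and the paper's stability heuristic can be caricatured as an extra multiplier on the allocentric reliability. The sketch below is a schematic assumption, not the authors' fitted model.

    # Sketch: inverse-variance (MLE) cue weighting with an assumed stability
    # multiplier on the allocentric (landmark) cue.
    def mle_weights(var_ego, var_allo, stability=1.0):
        r_ego = 1.0 / var_ego
        r_allo = stability * (1.0 / var_allo)   # heuristic down-weighting
        s = r_ego + r_allo
        return r_ego / s, r_allo / s

    print(mle_weights(4.0, 2.0))                 # reliability alone
    print(mle_weights(4.0, 2.0, stability=0.4))  # vibrating (unstable) landmarks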
2007-09-30
…combined with measured sediment properties, to test the validity of sediment acoustic models, and in particular the poroelastic (Biot) model. LONG-TERM GOALS: 1. Development of accurate models for acoustic scattering from, penetration into, and propagation within shallow water ocean sediments. 2. Development of reliable methods for modeling acoustic detection of buried objects at subcritical grazing angles. 3. Improving our…
A Fresh Start for Flood Estimation in Ungauged Basins
NASA Astrophysics Data System (ADS)
Woods, R. A.
2017-12-01
The two standard methods for flood estimation in ungauged basins, regression-based statistical models and rainfall-runoff models using a design rainfall event, have survived relatively unchanged as the methods of choice for more than 40 years. Their technical implementation has developed greatly, but the models' representation of hydrological processes has not, despite a large volume of hydrological research. I suggest it is time to introduce more hydrology into flood estimation. The reliability of the current methods can be unsatisfactory. For example, despite the UK's relatively straightforward hydrology, regression estimates of the index flood are uncertain by +/- a factor of two (for a 95% confidence interval), an impractically large uncertainty for design. The standard error of rainfall-runoff model estimates is not usually known, but available assessments indicate poorer reliability than statistical methods. There is a practical need for improved reliability in flood estimation. Two promising candidates to supersede the existing methods are (i) continuous simulation by rainfall-runoff modelling and (ii) event-based derived distribution methods. The main challenge with continuous simulation methods in ungauged basins is to specify the model structure and parameter values when calibration data are not available. This has been an active area of research for more than a decade, and this activity is likely to continue. The major challenges for the derived distribution method in ungauged catchments include not only the correct specification of model structure and parameter values, but also antecedent conditions (e.g. seasonal soil water balance). However, a much smaller community of researchers is active in developing or applying the derived distribution approach, and as a result slower progress is being made. A change is needed: surely we have learned enough about hydrology in the last 40 years that we can make a practical hydrological advance on our methods for flood estimation! A shift to new methods for flood estimation will not be taken lightly by practitioners. However, the standard for change is clear - can we develop new methods which give significant improvements in reliability over those existing methods which are demonstrably unsatisfactory?
Integrating Reliability Analysis with a Performance Tool
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael
1995-01-01
A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.
A probabilistic based failure model for components fabricated from anisotropic graphite
NASA Astrophysics Data System (ADS)
Xiao, Chengfeng
The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and graphite that can be characterized as transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data. Aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants. A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and, moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function, dependent on the random strength variables, over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method.
Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can be (and have been) characterized by multiaxial failure tests. Comparisons of model predictions with failure data are made, along with a brief comparison to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
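A minimal sketch of Monte Carlo with importance sampling for a reliability integral is shown below; the stress-strength limit state and distributions are generic illustrations, not the graphite model developed in the dissertation.

    # Sketch: importance sampling for a reliability integral, shifting the
    # strength sampling density toward the failure region. Values assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 50_000
    # Failure when applied stress exceeds strength: g = strength - stress < 0
    stress_mu, stress_sd = 30.0, 3.0
    strength_mu, strength_sd = 45.0, 4.0

    # Proposal density for strength shifted toward low (failure-side) values
    shift = strength_mu - 3 * strength_sd
    s = rng.normal(shift, strength_sd, n)
    w = (stats.norm.pdf(s, strength_mu, strength_sd)
         / stats.norm.pdf(s, shift, strength_sd))     # likelihood ratio weights
    stress = rng.normal(stress_mu, stress_sd, n)
    pf = np.mean((s < stress) * w)                    # unbiased failure estimate
    print("P(failure) ~ %.2e" % pf)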
Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J
1997-01-01
Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
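The modeling pipeline described, principal components of design measures feeding a classifier of fault-prone modules, can be sketched as follows; the data are synthetic, and scikit-learn's MLP is used as a convenient stand-in for the EMERALD network.

    # Sketch: PCA of software design measures feeding a small neural-network
    # classifier of fault-prone modules. Data and sizes are assumptions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(8)
    n = 2000
    X = rng.normal(size=(n, 9))                 # nine design measures
    risk = X[:, :3].sum(axis=1) + rng.normal(0, 1.0, n)
    y = (risk > 1.5).astype(int)                # fault-prone membership

    model = make_pipeline(StandardScaler(), PCA(n_components=4),
                          MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
    print("CV accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())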
[Animal experimentation, computer simulation and surgical research].
Carpentier, Alain
2009-11-01
We live in a digital world. In medicine, computers are providing new tools for data collection, imaging, and treatment. During the research and development of complex technologies and devices such as artificial hearts, computer simulation can provide more reliable information than experimentation on large animals. In these specific settings, animal experimentation should serve more to validate computer models of complex devices than to demonstrate their reliability.
2009-01-01
Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354
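A generic sketch of the shape of such a growth model, a von Bertalanffy-style anabolism/catabolism balance with a temperature multiplier, is given below; the functional forms and constants are assumptions for illustration, not the published model.

    # Sketch: temperature-modulated fish growth ODE, dW/dt = gain - maintenance.
    # All forms and constants are generic assumptions.
    import numpy as np
    from scipy.integrate import solve_ivp

    def temp_factor(T, T_opt=16.0, width=6.0):
        return np.exp(-((T - T_opt) / width) ** 2)    # assumed temperature response

    def dWdt(t, W, T=14.0):
        anabolism = 0.25 * temp_factor(T) * W ** (2 / 3)  # intake-driven gain
        catabolism = 0.03 * W                             # maintenance loss
        return anabolism - catabolism

    sol = solve_ivp(dWdt, [0, 180], [50.0], t_eval=[0, 60, 120, 180])
    print("weight (g) over 6 months:", sol.y[0].round(1))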
The process group approach to reliable distributed computing
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.
A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty
Xu, Ye; Huang, Guohe; Xu, Ling
2014-01-01
In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a study case for demonstration. Compared with traditional fuzzy models, the FRO model made an improvement by considering as the objective function the minimization of the weighted summation among the expected objective values, the differences between two extreme possible objective values, and the penalty of constraint violation, instead of relying purely on the minimization of the expected value. Such an improvement leads to enhanced system reliability, and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model, including (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration of both. The balanced scenario solution was recommended for decision makers, since it respected both system economy and reliability. The model proved valuable in providing a comprehensive profile of the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies. PMID:25317037
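The structure of the FRO objective described above can be sketched as a weighted sum of expected cost, spread between extreme outcomes, and a constraint-violation penalty; the toy decision variable, weights, and fuzzy cost levels below are assumptions, not the study's formulation.

    # Sketch: FRO-style objective = weighted (expected cost + spread + penalty).
    # Weights, fuzzy cost levels, and the single decision variable are assumed.
    from scipy.optimize import minimize

    w1, w2, w3 = 1.0, 0.5, 100.0                      # assumed weights
    cost_low, cost_mid, cost_high = 8.0, 10.0, 14.0   # fuzzy unit-cost levels
    capacity = 100.0

    def fro_objective(xs):
        x = xs[0]                                     # tonnes allocated
        expected = cost_mid * x
        spread = (cost_high - cost_low) * x           # gap between extreme outcomes
        violation = max(0.0, 90.0 - x)                # assumed demand constraint (>= 90)
        return w1 * expected + w2 * spread + w3 * violation

    res = minimize(fro_objective, x0=[50.0], bounds=[(0, capacity)])
    print("waste allocation: %.1f" % res.x[0])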
Kenyon, Lisa K.; Elliott, James M; Cheng, M. Samuel
2016-01-01
Purpose/Background Despite the availability of various field-tests for many competitive sports, a reliable and valid test specifically developed for use in men's gymnastics has not yet been developed. The Men's Gymnastics Functional Measurement Tool (MGFMT) was designed to assess sport-specific physical abilities in male competitive gymnasts. The purpose of this study was to develop the MGFMT by establishing a scoring system for individual test items and to initiate the process of establishing test-retest reliability and construct validity. Methods A total of 83 competitive male gymnasts ages 7-18 underwent testing using the MGFMT. Thirty of these subjects underwent re-testing one week later in order to assess test-retest reliability. Construct validity was assessed using a simple regression analysis between total MGFMT scores and the gymnasts’ USA-Gymnastics competitive level to calculate the coefficient of determination (r2). Test-retest reliability was analyzed using Model 1 Intraclass correlation coefficients (ICC). Statistical significance was set at the p<0.05 level. Results The relationship between total MGFMT scores and subjects’ current USA-Gymnastics competitive level was found to be good (r2 = 0.63). Reliability testing of the MGFMT composite test score showed excellent test-retest reliability over a one-week period (ICC = 0.97). Test-retest reliability of the individual component tests ranged from good to excellent (ICC = 0.75-0.97). Conclusions The results of this study provide initial support for the construct validity and test-retest reliability of the MGFMT. Level of Evidence Level 3 PMID:27999723
Perceived experiences of atheist discrimination: Instrument development and evaluation.
Brewster, Melanie E; Hammer, Joseph; Sawyer, Jacob S; Eklund, Austin; Palamar, Joseph
2016-10-01
The present 2 studies describe the development and initial psychometric evaluation of a new instrument, the Measure of Atheist Discrimination Experiences (MADE), which may be used to examine the minority stress experiences of atheist people. Items were created from prior literature, revised by a panel of expert researchers, and assessed psychometrically. In Study 1 (N = 1,341 atheist-identified people), an exploratory factor analysis with 665 participants suggested the presence of 5 related dimensions of perceived discrimination. However, bifactor modeling via confirmatory factor analysis and model-based reliability estimates with data from the remaining 676 participants affirmed the presence of a strong "general" factor of discrimination and mixed to poor support for substantive subdimensions. In Study 2 (N = 1,057 atheist-identified people), another confirmatory factor analysis and model-based reliability estimates strongly supported the bifactor model from Study 1 (i.e., 1 strong "general" discrimination factor) and poor support for subdimensions. Across both studies, the MADE general factor score demonstrated evidence of good reliability (i.e., Cronbach's alphas of .94 and .95; omega hierarchical coefficients of .90 and .92), convergent validity (i.e., with stigma consciousness, β = .56; with awareness of public devaluation, β = .37), and preliminary evidence for concurrent validity (i.e., with loneliness β = .18; with psychological distress β = .27). Reliability and validity evidence for the MADE subscale scores was not sufficient to warrant future use of the subscales. Limitations and implications for future research and clinical work with atheist individuals are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; O'Neill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
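The model-order question raised here is typically settled with an information criterion; the sketch below simulates an ARMA(2,1) series and selects (p, q) by AIC using statsmodels, purely as an illustration with synthetic data.

    # Sketch: ARMA order selection by AIC on a simulated series.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(9)
    e = rng.normal(size=600)
    y = np.zeros(600)
    for t in range(2, 600):                  # simulate an ARMA(2,1) process
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]

    best = min(((p, q) for p in range(4) for q in range(3)),
               key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().aic)
    print("order selected by AIC:", best)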
Niedermann, K; Forster, A; Hammond, A; Uebelhart, D; de Bie, R
2007-03-15
Joint protection (JP) is an important part of the treatment concept for patients with rheumatoid arthritis (RA). The Joint Protection Behavior Assessment short form (JPBA-S) assesses the use of hand JP methods by patients with RA while preparing a hot drink. The purpose of this study was to develop a German version of the JPBA-S (D-JPBA-S) and to test its validity and reliability. A manual was developed through consensus with 8 occupational therapist (OT) experts as the reference for assessing patients' JP behavior. Twenty-four patients with RA and 10 healthy individuals were videotaped while performing 10 tasks reflecting the activity of preparing instant coffee. Recordings were repeated after 3 months for test-retest analysis. One rater assessed all available patient recordings (n = 23, recorded twice) for test-retest reliability. The video recordings of 10 randomly selected patients and all healthy individuals were independently assessed for interrater reliability by 6 OTs who were explicitly asked to follow the manual. Rasch analysis was performed to test construct validity and transform ordinal raw data into interval data for reliability calculations. Nine of the 10 tasks fit the Rasch model. The D-JPBA-S, consisting of 9 valid tasks, had an intraclass correlation coefficient of 0.77 for interrater reliability and 0.71 for test-retest reliability. The D-JPBA-S provides a valid and reliable instrument for assessing JP behavior of patients with RA and can be used in German-speaking countries.
Developing Cognitive Models for Social Simulation from Survey Data
NASA Astrophysics Data System (ADS)
Alt, Jonathan K.; Lieberman, Stephen
The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.
Hewson, Kylie; Noormohammadi, Amir H; Devlin, Joanne M; Mardani, Karim; Ignjatovic, Jagoda
2009-01-01
Infectious bronchitis virus (IBV) is a coronavirus that causes upper respiratory, renal and/or reproductive diseases with high morbidity in poultry. Classification of IBV is important for the implementation of vaccination strategies to control the disease in commercial poultry. Currently, the lengthy process of sequence analysis of the IBV S1 gene is considered the gold standard for IBV strain identification, with a high nucleotide identity (e.g. ≥95%) indicating related strains. However, this gene has a high propensity to mutate and/or undergo recombination, and alone it may not be reliable for strain identification. A real-time polymerase chain reaction (RT-PCR) combined with high-resolution melt (HRM) curve analysis was developed based on the 3'UTR of IBV for rapid detection and classification of IBV from commercial poultry. HRM curves generated from 230 to 435-bp PCR products of several IBV strains were subjected to further analysis using a mathematical model also developed during this study. It was shown that a combination of HRM curve analysis and the mathematical model could reliably group 189 out of 190 comparisons of pairs of IBV strains in accordance with their 3'UTR and S1 gene identities. The newly developed RT-PCR/HRM curve analysis model could detect and rapidly identify novel and vaccine-related IBV strains, as confirmed by S1 gene and 3'UTR nucleotide sequences. This model is a rapid, reliable, accurate and non-subjective system for the detection of IBVs in poultry flocks.
Lessons Learned from OSIRIS-Rex Autonomous Navigation Using Natural Feature Tracking
NASA Technical Reports Server (NTRS)
Lorenz, David A.; Olds, Ryan; May, Alexander; Mario, Courtney; Perry, Mark E.; Palmer, Eric E.; Daly, Michael
2017-01-01
The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft is scheduled to launch in September 2016 to embark on an asteroid sample return mission. It is expected to rendezvous with the asteroid Bennu, navigate to the surface, collect a sample (July 2020), and return the sample to Earth (September 2023). The original mission design called for using one of two Flash Lidar units to provide autonomous navigation to the surface. Following preliminary design and initial development of the Lidars, reliability issues with the hardware and test program prompted the project to begin development of an alternative navigation technique to be used as a backup to the Lidar. At the critical design review, Natural Feature Tracking (NFT) was added to the mission. NFT is an onboard optical navigation system that compares observed images to a set of asteroid terrain models which are rendered in real-time from a catalog stored in memory on the flight computer. Onboard knowledge of the spacecraft state is then updated by a Kalman filter using the measured residuals between the rendered reference images and the actual observed images. The asteroid terrain models used by NFT are built from a shape model generated from observations collected during earlier phases of the mission and include both terrain shape and albedo information about the asteroid surface. As a result, the success of NFT is highly dependent on selecting a set of topographic features that can be both identified during descent and reliably rendered using the shape model data available. During development, the OSIRIS-REx team faced significant challenges in developing a process conducive to robust operation. This was especially true for terrain models to be used as the spacecraft gets close to the asteroid and higher fidelity models are required for reliable image correlation. This paper will present some of the challenges and lessons learned from the development of the NFT system, which includes not just the flight hardware and software but also the development of the terrain models used to generate the onboard rendered images.
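The NFT measurement update described above follows the standard Kalman form; the sketch below shows one such update on a toy two-state descent model, with dimensions and noise values that are illustrative, not flight parameters.

    # Sketch: Kalman measurement update from a feature-tracking residual.
    # State, covariances, and noise values are illustrative assumptions.
    import numpy as np

    x = np.array([1000.0, -2.0])        # state: altitude (m), descent rate (m/s)
    P = np.diag([50.0**2, 0.5**2])      # state covariance
    H = np.array([[1.0, 0.0]])          # residual observes altitude only
    R = np.array([[20.0**2]])           # image-correlation measurement noise

    z = np.array([962.0])               # altitude implied by matched feature
    y = z - H @ x                       # innovation (observed minus rendered)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    print("updated state:", x.round(2))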
ERIC Educational Resources Information Center
Truckenmiller, James L.
The former HEW National Strategy for Youth Development model was a community-based planning and procedural tool to enhance youth development and to prevent delinquency through a process of youth needs assessments, needs-targeted programs, and program impact evaluation. The program's 12 Impact Scales have been found to have acceptable reliabilities, substantial…
ERIC Educational Resources Information Center
Stockdale, Susan L.; Brockett, Ralph G.
2011-01-01
The purpose of this study was to develop a reliable and valid instrument to measure self-directedness in learning among college students based on an operationalization of the personal responsibility orientation (PRO) model of self-direction in learning. The resultant 25-item Personal Responsibility Orientation to Self-Direction in Learning Scale…
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
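The mechanical-equivalence idea can be made concrete with a small example: a 1-D steady heat-conduction problem assembled and solved exactly like a structural stiffness system (K u = F, with kA/L playing the role of axial stiffness); all values below are assumed for illustration.

    # Sketch: heat conduction solved with the structural stiffness pattern.
    # Conductivity, geometry, and loads are assumed values.
    import numpy as np

    k, A, L, n = 50.0, 1e-4, 0.1, 4     # conductivity (W/m-K), area, length, elems
    ke = k * A / L                      # element conductance, analogue of stiffness
    K = np.zeros((n + 1, n + 1))
    for e in range(n):                  # assemble exactly like bar elements
        K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

    Q = np.zeros(n + 1)
    Q[-1] = 5.0                         # 5 W injected at the free end
    rhs = Q[1:].copy()
    rhs[0] += ke * 100.0                # fixed temperature T = 100 C at node 0
    T = np.linalg.solve(K[1:, 1:], rhs)
    print("nodal temperatures:", np.r_[100.0, T].round(1))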
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
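To make the multidisciplinary system-reliability idea in the two abstracts above concrete, the following Python sketch (not the NESSUS implementation) estimates, by Monte Carlo sampling, the reliability of a heat-exchanger-like series system with one limit state per discipline. Every distribution, limit, and variable name here is an invented illustration.

import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical limit states; g_i > 0 means "safe". All inputs are illustrative.
stress = rng.normal(300e6, 30e6, n)      # structural stress, Pa
strength = rng.normal(450e6, 40e6, n)    # material strength, Pa
wall_temp = rng.normal(600.0, 25.0, n)   # heat-transfer result, K
temp_limit = 700.0                       # assumed allowable temperature, K
flow = rng.normal(2.0, 0.3, n)           # coolant flow, kg/s
flow_min = 1.2                           # assumed minimum required flow

g_struct = strength - stress
g_thermal = temp_limit - wall_temp
g_flow = flow - flow_min

# Series system: any discipline's failure mode fails the system.
system_fail = (g_struct <= 0) | (g_thermal <= 0) | (g_flow <= 0)
print("estimated system reliability:", 1.0 - system_fail.mean())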
Middle Cerebral Artery Occlusion Model of Stroke in Rodents: A Step-by-Step Approach
Shahjouei, Shima; Cai, Peter Y.; Ansari, Saeed; Sharififar, Sharareh; Azari, Hassan; Ganji, Sarah; Zand, Ramin
2016-01-01
Stroke is one of the leading causes of morbidity and mortality in developed countries, and an immense amount of medical care resources is devoted to combating the post-stroke debilitating consequences. The key to developing effective and clinically applicable treatment methodologies is a better understanding of the pathophysiology of the disease, including the root causes and targets for pharmacology. Developing these foundations requires the use of standard animal models that mimic the physicochemical process of the disease and can reliably replicate results in order to test and fine-tune therapeutic modalities. Middle cerebral artery occlusion (MCAO), endothelin-1-induced ischemic stroke, photothrombosis, devascularization, embolization, and spontaneous infarction using hemorrhage are some examples of different animal models. The reliability of MCAO has been proven, and because it can induce reperfusion similar to tissue plasminogen activator (tPA) therapy, this model is widely used in preclinical studies. Here, we describe a detailed methodology on how to develop MCAO stroke in rodents using intra-arterial insertion of a filament to occlude the middle cerebral artery. This approach allows for the study of a wide array of basic pathophysiology mechanisms, regenerative medicine, and rehabilitation therapy. PMID:26958146
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2001-01-01
A literature survey related to EBC/TBC (environmental barrier coating/thermal barrier coating) life models and failure mechanisms in EBC/TBC was completed, an initial work plan for the proposed EBC/TBC life prediction method development was developed, and a finite element model for the thermal/stress analysis of the GRC-developed EBC system was prepared. A technical report for these activities is given in the subsequent sections.
Exponential order statistic models of software reliability growth
NASA Technical Reports Server (NTRS)
Miller, D. R.
1985-01-01
Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, and many additional examples exist. Various characterizations, properties, and examples of this class of models are developed and presented.
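A brief, hedged sketch of the model class just described: failure times arise as the order statistics of independent exponentials, with the Jelinski-Moranda case recovered by identical rates and a Littlewood-type case by gamma-distributed rates. All rate values below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # assumed number of latent faults

# Jelinski-Moranda special case: identical per-fault rates.
lam_jm = np.full(N, 0.02)
t_jm = np.sort(rng.exponential(1.0 / lam_jm))

# Littlewood-type case: nonidentical rates drawn from a gamma distribution.
lam_lw = rng.gamma(shape=2.0, scale=0.01, size=N)
t_lw = np.sort(rng.exponential(1.0 / lam_lw))

print("first five JM failure times:", t_jm[:5].round(1))
print("first five Littlewood-type failure times:", t_lw[:5].round(1))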
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
PSHFT - COMPUTERIZED LIFE AND RELIABILITY MODELLING FOR TURBOPROP TRANSMISSIONS
NASA Technical Reports Server (NTRS)
Savage, M.
1994-01-01
The computer program PSHFT calculates the life of a variety of aircraft transmissions. A generalized life and reliability model is presented for turboprop and parallel-shaft geared prop-fan aircraft transmissions. The transmission life and reliability model is a combination of the individual reliability models for all the bearings and gears in the main load paths. The bearing and gear reliability models are based on the statistical two-parameter Weibull failure distribution method and classical fatigue theories. The computer program developed to calculate the transmission model is modular. In its present form, the program can analyze five different transmission arrangements. Moreover, the program can be easily modified to include additional transmission arrangements. PSHFT uses the properties of a common-block two-dimensional array to separate the component and transmission property values from the analysis subroutines. The rows correspond to specific components, with the first row containing the values for the entire transmission. Columns contain the values for specific properties. Since the subroutines (which determine the transmission life and dynamic capacity) interface solely with this property array, they are separated from any specific transmission configuration. The system analysis subroutines work in an identical manner for all transmission configurations considered. Thus, other configurations can be added to the program by simply adding component property determination subroutines. PSHFT consists of a main program, a series of configuration-specific subroutines, generic component property analysis subroutines, system analysis subroutines, and a common block. The main program selects the routines to be used in the analysis and sequences their operation. The configuration-specific subroutines input the configuration data, perform the component force and life analyses (with the help of the generic component property analysis subroutines), fill the property array, call the system analysis routines, and finally print out the analysis results for the system and components. PSHFT is written in FORTRAN 77 and compiled with a Microsoft FORTRAN compiler. The program will run on an IBM PC AT compatible with at least 104k bytes of memory. The program was developed in 1988.
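The core calculation PSHFT performs, combining two-parameter Weibull component models into a series-system transmission life, can be sketched in a few lines of Python. The component list, Weibull slopes, and characteristic lives below are invented placeholders, not PSHFT data.

import numpy as np

components = [                # (name, Weibull slope, characteristic life in h)
    ("input bearing", 1.5, 9000.0),
    ("output bearing", 1.5, 12000.0),
    ("gear mesh", 2.5, 15000.0),
]

def system_reliability(t):
    # Series system: transmission survives only if every component survives.
    r = 1.0
    for _, slope, eta in components:
        r *= np.exp(-(t / eta) ** slope)
    return r

# Bisection for the system L10 life (time at which reliability drops to 0.90).
lo, hi = 1.0, 20000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if system_reliability(mid) > 0.90 else (lo, mid)
print("approximate system L10 life (h):", round(lo, 1))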
Vaughan, Brett
2018-01-01
Clinical teaching evaluations are common in health profession education programs to ensure students are receiving a quality clinical education experience. Questionnaires students use to evaluate their clinical teachers have been developed in professions such as medicine and nursing. The development of a questionnaire specifically for the osteopathy on-campus, student-led clinic environment is warranted. Previous work developed the 30-item Osteopathy Clinical Teaching Questionnaire. The current study utilised Rasch analysis to investigate the construct validity of the Osteopathy Clinical Teaching Questionnaire and provide evidence for the validity argument through fit to the Rasch model. Senior osteopathy students at four institutions in Australia, New Zealand and the United Kingdom rated their clinical teachers using the Osteopathy Clinical Teaching Questionnaire. Three hundred and ninety-nine valid responses were received, and the data were evaluated for fit to the Rasch model. Reliability estimations (Cronbach's alpha and McDonald's omega) were also evaluated for the final model. The initial analysis demonstrated that the data did not fit the Rasch model. Accordingly, modifications to the questionnaire were made, including removing items, removing person responses, and rescoring one item. The final model contained 12 items, and fit to the Rasch model was adequate. Support for unidimensionality was demonstrated through the principal components analysis/t-test and the Cronbach's alpha and McDonald's omega reliability estimates. Analysis of the questionnaire using McDonald's omega hierarchical supported a general factor (quality of clinical teaching in osteopathy). The evidence for unidimensionality and the presence of a general factor support the calculation of a total score for the questionnaire as a sufficient statistic. Further work is now required to investigate the reliability of the 12-item Osteopathy Clinical Teaching Questionnaire to provide evidence for the validity argument.
An improved mounting device for attaching intracranial probes in large animal models.
Dunster, Kimble R
2015-12-01
The rigid support of intracranial probes can be difficult when using animal models, as mounting devices suitable for the probes are either not available or are designed for human use and not suitable for animal skulls. A cheap and reliable mounting device for securing intracranial probes in large animal models is described. Using commonly available clinical consumables, a universal mounting device for securing intracranial probes to the skull of large animals was developed and tested. A simply made mounting device that holds probes from 500 μm to 1.3 mm in diameter to the skull was developed. The device was used to hold probes to the skulls of sheep for up to 18 h. No adhesives or cements were used. The described device provides a reliable method of securing probes to the skull of animals.
Tensile failure criteria for fiber composite materials
NASA Technical Reports Server (NTRS)
Rosen, B. W.; Zweben, C. H.
1972-01-01
The analysis provides insight into the failure mechanics of these materials and defines criteria which serve as tools for preliminary design material selection and for material reliability assessment. The model incorporates both dispersed and propagation-type failures and includes the influence of material heterogeneity. The important effects of localized matrix damage and post-failure matrix shear stress transfer are included in the treatment. The model is used to evaluate the influence of key parameters on the failure of several commonly used fiber-matrix systems. Analyses of three possible failure modes were developed: the fiber break propagation mode, the cumulative group fracture mode, and the weakest link mode. Application of the new model to composite material systems has indicated several results which require attention in the development of reliable structural composites. Prominent among these are the size effect and the influence of fiber strength variability.
ERIC Educational Resources Information Center
Ebert-May, Diane; Derting, Terry L.; Henkel, Timothy P.; Maher, Jessica Middlemis; Momsen, Jennifer L.; Arnold, Bryan; Passmore, Heather A.
2015-01-01
The availability of reliable evidence for teaching practices after professional development is limited across science, technology, engineering, and mathematics disciplines, making the identification of professional development "best practices" and effective models for change difficult. We aimed to determine the extent to which…
A probabilistic maintenance model for diesel engines
NASA Astrophysics Data System (ADS)
Pathirana, Shan; Abeygunawardane, Saranga Kumudu
2018-02-01
In this paper, a probabilistic maintenance model is developed for inspection-based preventive maintenance of diesel engines, based on the practical model concepts discussed in the literature. The developed model is solved using real data obtained from inspection and maintenance histories of diesel engines and from experts' views. Reliability indices and costs are calculated for the present maintenance policy of diesel engines. A sensitivity analysis is conducted to observe the effect of inspection-based preventive maintenance on the life cycle cost of diesel engines.
Developing and testing the CHORDS: Characteristics of Responsible Drinking Survey.
Barry, Adam E; Goodson, Patricia
2011-01-01
Report on the development and psychometric testing of a theoretically and evidence-grounded instrument, the Characteristics of Responsible Drinking Survey (CHORDS). The instrument was subjected to four phases of pretesting (cognitive validity, cognitive and motivational qualities, pilot test, and item evaluation) and a final posttest implementation at a large public university in Texas, using a randomly selected convenience sample (n = 729) of currently enrolled students. This 78-item questionnaire measures individuals' responsible drinking beliefs, motivations, intentions, and behaviors. Cronbach's α, split-half reliability, principal components analysis, and Spearman's ρ were used to investigate reliability, stability, and validity. Measures in the CHORDS exhibited high internal consistency reliability and strong split-half reliability correlations. Factor analyses indicated five distinct scales were present, as proposed in the theoretical model. Subscale composite scores also correlated with alcohol consumption behaviors, indicating concurrent validity. The CHORDS represents the first instrument specifically designed to assess responsible drinking beliefs and behaviors. It was found to elicit valid and reliable data among a college student sample. This instrument holds much promise for practitioners who desire to empirically investigate dimensions of responsible drinking.
Multiple Subtypes among Vocationally Undecided College Students: A Model and Assessment Instrument.
ERIC Educational Resources Information Center
Jones, Lawrence K.; Chenery, Mary Faeth
1980-01-01
A model of vocational decision status was developed, and an instrument was constructed and used to assess its three dimensions. Results demonstrated the utility of the model, supported the reliability and validity of the instrument, and illustrated the value of viewing vocationally undecided students as multiple subtypes. (Author)
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models can be used to estimate aggregate electric grid reliability.
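A minimal sketch of the approach in the preceding abstract, under strong simplifying assumptions (random node failures, fraction of nodes left in the largest connected component as a crude aggregate reliability proxy rather than the paper's power system index), using the networkx implementation of the Barabási-Albert model:

import networkx as nx
import random

random.seed(1)
G = nx.barabasi_albert_graph(n=1000, m=2, seed=1)   # scale-free grid surrogate

def surviving_fraction(G, n_failures):
    # Remove randomly chosen nodes and measure the largest surviving component.
    H = G.copy()
    H.remove_nodes_from(random.sample(list(H.nodes), n_failures))
    if H.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(H), key=len)
    return len(giant) / G.number_of_nodes()

for k in (10, 50, 200):
    trials = [surviving_fraction(G, k) for _ in range(20)]
    print(f"{k} failed nodes -> mean served fraction {sum(trials)/len(trials):.3f}")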
Dong, Ren G; Welcome, Daniel E; McDowell, Thomas W; Wu, John Z
2013-11-25
The relationship between the vibration transmissibility and driving-point response functions (DPRFs) of the human body is important for understanding vibration exposures of the system and for developing valid models. This study identified their theoretical relationship and demonstrated that the sum of the DPRFs can be expressed as a linear combination of the transmissibility functions of the individual mass elements distributed throughout the system. The relationship is verified using several human vibration models. This study also clarified the requirements for reliably quantifying transmissibility values used as references for calibrating the system models. As an example application, this study used the developed theory to perform a preliminary analysis of a method for calibrating models using both vibration transmissibility and DPRFs. The results of the analysis show that the combined method can theoretically result in a unique and valid solution of the model parameters, at least for linear systems. However, validation of the method itself does not guarantee validation of the calibrated model, because the latter also depends on the model structure and on the reliability and appropriate representation of the reference functions. The basic theory developed in this study is also applicable to the vibration analyses of other structures.
NASA Technical Reports Server (NTRS)
Shooman, Martin L.
1991-01-01
Many of the most challenging reliability problems of our present decade involve complex distributed systems such as interconnected telephone switching computers, air traffic control centers, aircraft and space vehicles, and local area and wide area computer networks. In addition to the challenge of complexity, modern fault-tolerant computer systems require very high levels of reliability, e.g., avionic computers with MTTF goals of one billion hours. Most analysts find that it is too difficult to model such complex systems without computer-aided design programs. In response to this need, NASA has developed a suite of computer-aided reliability modeling programs, beginning with CARE 3 and including a group of new programs such as HARP, HARP-PC, the Reliability Analysts Workbench (a combination of the model solvers SURE, STEM, and PAWS with the common front-end model ASSIST), and the Fault Tree Compiler. The HARP program is studied, and the ease with which a user can model systems with it is investigated. One important objective is to determine how user-friendly the program is, e.g., how easy it is to model the system, provide the input information, and interpret the results. The experiences of the author and his graduate students, who used HARP in two graduate courses, are described. Some brief comparisons were made with the ARIES program, which the students also used. Theoretical studies of the modeling techniques used in HARP are also included. Of course, no answer can be more accurate than the fidelity of the model, so an appendix is included which discusses modeling accuracy. A broad viewpoint is taken, and all problems which occurred in the use of HARP are discussed. Such problems include computer system problems, installation manual problems, user manual problems, program inconsistencies, program limitations, confusing notation, long run times, and accuracy problems.
Lin, Dexin; Wu, Xianbin; Ji, Xiaoke; Zhang, Qiyu; Lin, YuanWei; Chen, WeiJian; Jin, Wangxun; Deng, Liming; Chen, Yunzhi; Chen, Bicheng; Li, Jianmin
2012-01-01
Current large animal models that closely resemble the typical features of cirrhotic portal hypertension in humans have not been well established. Thus, we aimed to develop and describe a reliable and reproducible canine cirrhosis model of portal hypertension. A total of 30 mongrel dogs were randomly divided into four groups: 1 (control; n = 5), 2 (portal vein stenosis [PVS]; n = 5), 3 (thioacetamide [TAA]; n = 5), and 4 (PVS plus TAA; n = 15). After a 4-month modeling period, liver and spleen CT perfusion, abdominal CT scans, portal hemodynamics, gastroscopy, hepatic function, routine blood tests, and bone marrow, liver, and spleen histology were studied. The animals in group 2 (PVS) developed extrahepatic portosystemic collateral circulation, particularly esophageal varices, without hepatic cirrhosis and portal hypertension. Animals from group 3 (TAA) presented mild cirrhosis and portal hypertension without significant esophageal varices or hypersplenism. In contrast, animals from group 4 (PVS + TAA) showed well-developed micronodular and macronodular cirrhosis, associated with significant portal hypertension and hypersplenism. The combination of PVS and TAA represents a novel, reliable, and reproducible canine cirrhosis model of portal hypertension, which is associated with the typical characteristics of portal hypertension, including hypersplenism.
Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven; Ring, Robert
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources, such as commercially available reliability databases, using reliability prediction methodologies such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which the hardware is used. In addition, some qualification of the applicability of the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty for a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines, while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example of assigning environmental factor uncertainty when translating between operating environments for microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
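The kind of environment translation with an epistemic uncertainty band described above can be sketched as follows. The pi_E factors, base failure rate, and lognormal error factor are illustrative placeholders, not MIL-HDBK-217F values.

import numpy as np

rng = np.random.default_rng(7)
lambda_ground = 0.05          # failures per million hours, from a generic source
pi_e = {"ground_benign": 1.0, "missile_launch": 13.0}   # assumed factors

# Point estimate translated to the target operating environment.
ratio = pi_e["missile_launch"] / pi_e["ground_benign"]
point = lambda_ground * ratio

# Epistemic uncertainty: lognormal with an assumed error factor of 3,
# i.e., a 90% band between point/3 and point*3.
ef = 3.0
sigma = np.log(ef) / 1.645
samples = rng.lognormal(np.log(point), sigma, 100_000)
print("point estimate:", point)
print("5th-95th percentile:", np.percentile(samples, [5, 95]).round(3))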
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for onboard satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft; therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the functional division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path task reliability are also implemented. Using this tool, we analyzed several cases with typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of SpaceWire network systems.
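A toy sketch of the task-based idea (not the VC tool itself): a task succeeds if at least one of its redundant routes works, and a route works only if every unit on it works. Unit names and reliabilities are invented, and the independent-route combination below is an approximation, since real routes may share units.

unit_r = {"node_a": 0.999, "router_1": 0.995, "router_2": 0.995, "node_b": 0.999}

routes = [["node_a", "router_1", "node_b"],    # primary route
          ["node_a", "router_2", "node_b"]]    # redundant route

def route_reliability(route):
    # A route is a series chain of units.
    r = 1.0
    for unit in route:
        r *= unit_r[unit]
    return r

# Independent-route approximation: R_task = 1 - prod(1 - R_route).
fail = 1.0
for route in routes:
    fail *= 1.0 - route_reliability(route)
print("task reliability (independent-route approximation):", 1.0 - fail)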
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
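As a minimal illustration of the stage-one Markov reliability model (not the paper's actual model), the following sketch solves a two-unit standby system monitored with imperfect coverage c; the rates, coverage value, and mission time are assumptions.

import numpy as np
from scipy.linalg import expm

lam, c = 1e-3, 0.95         # failure rate (per hour) and coverage probability
# States: 0 = both units ok, 1 = one unit ok, 2 = system failed (absorbing).
# An uncovered first failure (probability 1 - c) goes straight to state 2.
Q = np.array([
    [-lam,  c * lam,  (1 - c) * lam],
    [0.0,   -lam,     lam],
    [0.0,   0.0,      0.0],
])

t = 1000.0                  # mission time, hours
p = np.array([1.0, 0.0, 0.0]) @ expm(Q * t)   # transient state probabilities
print("reliability at t = 1000 h:", 1.0 - p[2])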
Lyon, Aaron R; Pullmann, Michael D; Dorsey, Shannon; Martin, Prerna; Grigore, Alexandra A; Becker, Emily M; Jensen-Doss, Amanda
2018-05-11
Measurement-based care (MBC) is an increasingly popular, evidence-based practice, but there are no tools with established psychometrics to evaluate clinician use of MBC practices in mental health service delivery. The current study evaluated the reliability, validity, and factor structure of scores generated from a brief, standardized tool to measure MBC practices, the Current Assessment Practice Evaluation-Revised (CAPER). Survey data from a national sample of 479 mental health clinicians were used to conduct exploratory and confirmatory factor analyses, as well as reliability and validity analyses (e.g., relationships between CAPER subscales and clinician MBC attitudes). Analyses revealed competing two- and three-factor models. Regardless of the model used, scores from CAPER subscales demonstrated good reliability and convergent and divergent validity with MBC attitudes in the expected directions. The CAPER appears to be a psychometrically sound tool for assessing clinician MBC practices. Future directions for development and application of the tool are discussed.
NASA Astrophysics Data System (ADS)
Strunz, Richard; Herrmann, Jeffrey W.
2011-12-01
The hot fire test strategy for liquid rocket engines has always been a concern of the space industry and agencies alike because no recognized standard exists. Previous hot fire test plans focused on the verification of performance requirements but did not explicitly include reliability as a dimensioning variable. The stakeholders are, however, concerned about a hot fire test strategy that balances reliability, schedule, and affordability. A multiple-criteria test planning model is presented that provides a framework to optimize the hot fire test strategy with respect to stakeholder concerns. The Staged Combustion Rocket Engine Demonstrator, a program of the European Space Agency, is used as an example to provide a quantitative answer to the claim that a reduced-thrust-scale demonstrator is cost-beneficial for a subsequent flight engine development. Scalability aspects of major subsystems are considered in the prior information definition inside the Bayesian framework. The model is also applied to assess the impact of an increase in the demonstrated reliability level on schedule and affordability.
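A simple example of the kind of reliability-versus-schedule trade such a model quantifies: how many consecutive successful hot fire tests are needed to demonstrate a reliability goal at a given confidence. The goal, confidence, and uniform Beta(1, 1) prior below are assumptions, not the paper's values.

import math
from scipy.stats import beta

goal, confidence = 0.99, 0.90

# Classical success-run formula: n = ln(1 - C) / ln(R).
n_classical = math.ceil(math.log(1 - confidence) / math.log(goal))

# Bayesian version: smallest n with P(R > goal | n successes, 0 failures) >= C,
# where the posterior under a uniform prior is Beta(1 + n, 1).
n = 0
while beta.sf(goal, 1 + n, 1) < confidence:
    n += 1

print("classical success-run tests:", n_classical)
print("Bayesian (uniform prior) tests:", n)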
Modeling Reliability Growth in Accelerated Stress Testing
2013-12-01
Dissertation by Jason K. Freels, Air Force Institute of Technology, Report No. AFIT-ENS-DS-13-D-02; approved for public release, distribution unlimited.
Cabin Atmosphere Monitoring System (CAMS), pre-prototype model development continuation
NASA Technical Reports Server (NTRS)
Bursack, W. W.; Harris, W. A.
1975-01-01
The development of the Cabin Atmosphere Monitoring System (CAMS) is described. Attention was directed toward improving stability and reliability of the design using flight application guidelines. Considerable effort was devoted to the development of a temperature-stable RF/DC generator used for excitation of the quadrupole mass filter. Minor design changes were made in the preprototype model. Specific gas measurement examples are included along with a discussion of the measurement rationale employed.
Tokudome, Yuko; Okumura, Keiko; Kumagai, Yoshiko; Hirano, Hirohiko; Kim, Hunkyung; Morishita, Shiho; Watanabe, Yutaka
2017-11-01
Because few Japanese questionnaires assess the elderly's appetite, there is an urgent need to develop an appetite questionnaire with verified reliability, validity, and reproducibility. We translated and back-translated the Council on Nutrition Appetite Questionnaire (CNAQ), which has eight items, into Japanese (CNAQ-J), as well as the Simplified Nutritional Appetite Questionnaire (SNAQ-J), which includes four CNAQ-J-derived items. Using structural equation modeling, we examined the CNAQ-J structure based on data from 649 Japanese elderly people collected in 2013, including individuals having a certain degree of cognitive impairment, and we developed the SNAQ for the Japanese elderly (SNAQ-JE) according to an exploratory factor analysis. Confirmatory factor analyses of the appetite questionnaires were conducted to probe fit to the model. We computed Cronbach's α coefficients and criterion-referenced/-related validity figures examining associations of the three appetite battery scores with body mass index (BMI) values and with nutrition-related questionnaire values. Test-retest reproducibility of the appetite tools was scrutinized over an approximately 2-week interval. An exploratory factor analysis demonstrated that the CNAQ-J was constructed of one factor (appetite), yielding the SNAQ-JE, which includes four questions derived from the CNAQ-J. The three appetite instruments showed almost equivalent fit to the model and reproducibility. The CNAQ-J and SNAQ-JE demonstrated satisfactory reliability and significant criterion-referenced/-related validity values, including BMIs, but the SNAQ-J included a low factor-loading item, exhibited less satisfactory reliability, and had a non-significant relationship to BMI. The CNAQ-J and SNAQ-JE may be applied to assess the appetite of Japanese elderly, including persons with some cognitive impairment. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high-level architecture decisions. Once these key components have been identified, two main approaches to improving a system using them exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds between adding redundancy and improving the reliability of components to most cost-effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance by coupling different life cycle metrics into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user-defined parameters. Finally, several possibilities for future work in this area of research are presented.
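The redundancy-versus-improvement question at the heart of this work can be illustrated with a toy calculation; all reliability values below are invented and stand in for the outcome of spending a fixed budget one way or the other.

def redundant(r, k):
    """Reliability of k parallel units, each with reliability r."""
    return 1.0 - (1.0 - r) ** k

r0 = 0.90                # baseline unit reliability (assumed)
improved = 0.97          # assumed result of spending the budget on the unit

print("two redundant baseline units:", round(redundant(r0, 2), 4))
print("one improved unit           :", improved)
print("two improved units          :", round(redundant(improved, 2), 4))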
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Guthrie, Michael A.
2013-01-01
A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
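For the special case of a linear limit state with Gaussian capacity and demand, the Hasofer-Lind index reduces to a closed form, sketched below. The moments are invented stand-ins for modal strain energy at failure (capacity) and peak modal strain energy in the shock environment (demand).

import math
from scipy.stats import norm

mu_cap, sd_cap = 120.0, 15.0     # strain energy at failure (assumed units)
mu_dem, sd_dem = 70.0, 20.0      # peak strain energy in the environment

# g = capacity - demand; for independent Gaussians the HL index is:
beta = (mu_cap - mu_dem) / math.sqrt(sd_cap**2 + sd_dem**2)
print("Hasofer-Lind reliability index:", round(beta, 3))
print("implied failure probability:", norm.sf(beta))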
Identification of the contribution of the ankle and hip joints to multi-segmental balance control
2013-01-01
Background: Human stance involves multiple segments, including the legs and trunk, and requires coordinated actions of both. A novel method was developed that reliably estimates the contribution of the left and right leg (i.e., the ankle and hip joints) to the balance control of individual subjects. Methods: The method was evaluated using simulations of a double-inverted pendulum model, and its applicability was demonstrated in an experiment with seven healthy participants and one Parkinsonian participant. Model simulations indicated that two perturbations are required to reliably estimate the dynamics of a double-inverted pendulum balance control system. In the experiment, two multisine perturbation signals were applied simultaneously. The dynamic behaviour of the participants' balance control system was estimated by frequency response functions (FRFs), which relate ankle and hip joint angles to joint torques, using a multivariate closed-loop system identification technique. Results: In the model simulations, the FRFs were reliably estimated, also in the presence of realistic levels of noise. In the experiment, the participants responded consistently to the perturbations, indicated by low noise-to-signal ratios of the ankle angle (0.24), hip angle (0.28), ankle torque (0.07), and hip torque (0.33). The developed method could detect that the Parkinson patient controlled his balance asymmetrically, that is, the right ankle and hip joints produced more corrective torque. Conclusion: The method allows for a reliable estimate of the multisegmental feedback mechanism that stabilizes stance, for individual participants and for separate legs. PMID:23433148
DOT National Transportation Integrated Search
2015-10-01
Pavement performance models describe the deterioration behavior of pavements. They are essential in a pavement management system if the goal is to make more objective, reliable, and cost-effective decisions regarding the timing and nature of paveme...
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2005-01-01
A mission-systems architecture based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2007-01-01
A mission-systems architecture based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update, and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and the reliability parameters of equipment. ORMS can support the decision-making process of operators and managers in nuclear power plants.
A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows
NASA Astrophysics Data System (ADS)
Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi
2016-09-01
Dispersed multiphase turbulent flows are present in many industrial and commercial applications, such as internal combustion engines, turbofans, dispersion of contaminants, and steam turbines. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large eddy simulation (LES) offers good accuracy and reliable results together with reasonable computational requirements, making it an attractive basis for numerical tools for particle-laden turbulent flows. Nonetheless, in dispersed multiphase flows an additional difficulty arises in LES: the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. To solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work, a new model for the reconstruction of subgrid-scale effects on the dispersed phase is presented and assessed. This methodology is based on the reconstruction of statistics via probability density functions (PDFs).
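A hedged sketch of one common way to reconstruct the subgrid velocity seen by a particle (a generic Gaussian sampling approach, not necessarily the paper's specific PDF model): add an isotropic fluctuation whose variance follows from the subgrid kinetic energy k_sgs.

import numpy as np

rng = np.random.default_rng(3)

def velocity_seen(u_filtered, k_sgs):
    """Filtered LES velocity plus a sampled isotropic subgrid fluctuation."""
    sigma = np.sqrt(2.0 * k_sgs / 3.0)   # per-component rms fluctuation
    return u_filtered + rng.normal(0.0, sigma, size=3)

u_f = np.array([10.0, 0.5, -0.2])        # m/s, illustrative filtered velocity
print(velocity_seen(u_f, k_sgs=0.6))     # k_sgs value is an assumption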
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
An overview of the mathematical and statistical analysis component of RICIS
NASA Technical Reports Server (NTRS)
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation, and verification are analyzed. Two software reliability models have been developed: one for estimating program completion levels and one on which to base system acceptance. The DAVE system, which performs flow analysis and error detection, has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
Optimization Testbed Cometboards Extended into Stochastic Domain
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software tool. It was originally developed for deterministic calculation and has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of a structure, is also obtained as a function of reliability. Weight versus reliability traces out an inverted-S-shaped graph. The center of the graph corresponds to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, corresponding to a reliability of unity. Weight can be reduced to a small value for the most failure-prone design, with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool, the fast probability integration (FPI) module of the NESSUS software was the probabilistic calculator, and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.
A Methodology for Quantifying Certain Design Requirements During the Design Phase
NASA Technical Reports Server (NTRS)
Adams, Timothy; Rhodes, Russel
2005-01-01
A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case of the binomial distribution approximates the commonly known exponential distribution, or "constant failure rate" distribution; either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft, as with missiles.
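The three math models named above are standard distributions, so they can be evaluated directly with scipy.stats; the numbers below (a 10-flight program, per-flight reliability, element reliabilities, expected maintenance actions) are illustrative assumptions, not the paper's values.

from scipy.stats import binom, poisson

r, n = 0.995, 10

# (1) Binomial, greater-than-or-equal-to case: P(at least 9 of 10 successes).
print("P(>=9 successes):", binom.sf(8, n, r))

# Zero-fail case, which approximates the constant-failure-rate result: r**n.
print("P(10 of 10 successes):", binom.pmf(n, n, r), "=", r**n)

# (2) Reliability of a series system: product of element reliabilities.
print("series system:", 0.999 * 0.998 * 0.995)

# (3) Poisson, less-than-or-equal-to case: P(<= 2 maintenance actions)
# given an expected 1.2 actions over the period.
print("P(<=2 actions):", poisson.cdf(2, 1.2))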
Development of a 3D bone marrow adipose tissue model.
Fairfield, Heather; Falank, Carolyne; Farrell, Mariah; Vary, Calvin; Boucher, Joshua M; Driscoll, Heather; Liaw, Lucy; Rosen, Clifford J; Reagan, Michaela R
2018-01-26
Over the past twenty years, evidence has accumulated that biochemically and spatially defined networks of extracellular matrix, cellular components, and interactions dictate cellular differentiation, proliferation, and function in a variety of tissues and diseases. Modeling in vivo systems in vitro has been undeniably necessary, but when simplified 2D conditions rather than 3D in vitro models are used, the reliability and usefulness of the data derived from these models decrease. Thus, there is a pressing need to develop and validate reliable in vitro models that reproduce specific tissue-like structures and mimic the functions and responses of cells in a more realistic manner, for both drug screening/disease modeling and tissue regeneration applications. In adipose biology and cancer research, these models serve as physiologically relevant 3D platforms to bridge the divide between 2D cultures and in vivo models, yielding more reliable and translationally useful data to accelerate benchtop-to-bedside research. Currently, no model has been developed for bone marrow adipose tissue (BMAT), a novel adipose depot that had previously been overlooked as "filler tissue" but has more recently been recognized as endocrine-signaling and systemically relevant. Herein we describe the development of the first 3D BMAT model, derived from either human or mouse bone marrow (BM) mesenchymal stromal cells (MSCs). We found that BMAT models can be stably cultured for at least 3 months in vitro and that myeloma cells (5TGM1, OPM2, and MM1S cells) can be cultured on them for at least 2 weeks. Upon tumor cell co-culture, delipidation occurred in BMAT adipocytes, suggesting a bidirectional relationship between these two important cell types in the malignant BM niche. Overall, our studies suggest that 3D BMAT represents a "healthier," more realistic tissue model that may be useful for elucidating the effects of MAT on tumor cells, and of tumor cells on MAT, to identify novel therapeutic targets. In addition, proteomic characterization as well as microarray data (expression of >22,000 genes) coupled with KEGG pathway analysis and gene set enrichment analysis (GSEA) supported our development of less-inflammatory 3D BMAT compared with 2D culture. In sum, we developed the first 3D, tissue-engineered bone marrow adipose tissue model, a versatile, novel model that can be used to study numerous diseases and biological processes involved with the bone marrow. Copyright © 2018. Published by Elsevier Inc.
Quality assessment of a new surgical simulator for neuroendoscopic training.
Filho, Francisco Vaz Guimarães; Coelho, Giselle; Cavalheiro, Sergio; Lyra, Marcos; Zymberg, Samuel T
2011-04-01
Ideal surgical training models should be entirely reliable, nontoxic, easy to handle, and, if possible, low cost. All available models have their advantages and disadvantages; the choice of one or another will depend on the type of surgery to be performed. The authors created an anatomical model called the S.I.M.O.N.T. (Sinus Model Oto-Rhino Neuro Trainer) Neurosurgical Endotrainer, which can provide reliable neuroendoscopic training. The aim of the present study was to assess both the quality of the model and the development of surgical skills by trainees. The S.I.M.O.N.T. is built of a synthetic thermoretractable, thermosensitive rubber called Neoderma, which, combined with different polymers, produces more than 30 different formulas. Quality assessment of the model was based on qualitative and quantitative data obtained from training sessions with 9 experienced and 13 inexperienced neurosurgeons. The techniques used for evaluation were face validation, retest and interrater reliability, and construct validation. The experts considered the S.I.M.O.N.T. capable of reproducing surgical situations as if they were real and of presenting great similarity to the human brain. Surgical results of serial training showed that the model could be considered precise. Finally, development and improvement of surgical skills by the trainees were observed and considered relevant to further training. It was also observed that the probability of any single error decreased dramatically after each training session, with a mean reduction of 41.65% (range 38.7%-45.6%). Neuroendoscopic training has some specific requirements: a unique set of instruments is required, as is a model that can resemble real-life situations. The S.I.M.O.N.T. is a new alternative model specially designed for this purpose. Validation techniques followed by precision assessments attested to the model's feasibility.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting the performance of a wastewater treatment plant are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment failures, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient methods. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, the reliability problem was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with the violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator error, physical damage, and design problems. The analytical methods are minimal cut sets (based on numerical probabilities) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. The mechanical, climate, and sewer system factors were in the subsequent tier. The literature shows that FTA has seldom been applied in previous wastewater treatment plant (WWTP) risk analysis studies. Thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
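The minimal-cut-set calculation can be sketched as follows: the top event (effluent BOD violation) occurs if all basic events in any minimal cut set occur. The cut sets and basic-event probabilities below are invented for illustration, not the study's data.

cut_sets = [
    {"operator_error"},
    {"aeration_failure", "standby_blower_out"},
    {"design_deficiency", "peak_influent_load"},
]
p = {"operator_error": 0.01, "aeration_failure": 0.05,
     "standby_blower_out": 0.10, "design_deficiency": 0.02,
     "peak_influent_load": 0.20}

def cut_prob(cs):
    # Probability that every basic event in one cut set occurs (independence assumed).
    prob = 1.0
    for event in cs:
        prob *= p[event]
    return prob

# Top event probability assuming independent cut sets, plus the rare-event bound.
survive = 1.0
for cs in cut_sets:
    survive *= 1.0 - cut_prob(cs)
print("top event (independent cut sets):", 1.0 - survive)
print("rare-event upper bound:", sum(cut_prob(cs) for cs in cut_sets))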
NASA Astrophysics Data System (ADS)
Feng, Ting
Today, GaAs-based field effect transistors (FETs) are used in a broad range of high-speed electronic military and commercial applications. However, their reliability still needs to be improved. In particular, hydrogen-induced degradation remains a large issue in the reliability of GaAs FETs, because hydrogen can easily be incorporated into devices during crystal growth and during virtually every device processing step. The main objective of this research work is to develop a new gate metallization system to reduce hydrogen-induced degradation in the gate region of GaAs-based MESFETs and HEMTs. Cu/Ti gate metallization was introduced into the GaAs MESFETs and HEMTs in this work in order to solve the hydrogen problem. The purpose of the copper is to tie up hydrogen atoms and prevent hydrogen penetration into the device active region, while keeping a low gate resistance for low-noise applications. In this work, the fabrication technology for GaAs MESFETs and AlGaAs/GaAs HEMTs with Cu/Ti metallized gates was successfully developed, and the fabricated Cu/Ti FETs showed DC performance comparable to that of similar Au-based GaAs FETs. The Cu/Ti FETs were subjected to temperature-accelerated testing under 5% hydrogen forming gas, and the experimental results show that hydrogen-induced degradation is reduced for the Cu/Ti FETs compared to commonly used AuPtTi-based GaAs FETs. Long-term reliability testing of the Cu/Ti FETs was also carried out at 200°C for up to 1000 hours, and the results show that the Cu/Ti FETs performed with adequate reliability. The failure modes were found to consist of a decrease in drain saturation current and pinch-off voltage and an increase in source ohmic contact resistance. Material characterization tools, including Rutherford backscattering spectroscopy and a back-etching technique, were applied to the Cu/Ti GaAs FETs, and pronounced gate metal copper in-diffusion and intermixed compounds at the interface between the gate and the GaAs channel layer were found. A quantitative gate-sinking degradation model was developed in order to extend device physics models to the reliability testing results of Cu/Ti GaAs FETs. The gate-sinking degradation model includes the gate metal and hydrogen in-diffusion effect, the decrease of the effective channel due to the formation of interfacial compounds, the decrease of electron mobility due to the increase of in-diffused impurities, and donor compensation from in-diffused metal impurity acceptors or hydrogen passivation. A variational charge control model was applied to simulate and understand the degradation mechanisms of Cu/Ti HEMTs, including hydrogen-induced degradation due to the neutralization of donors. The degradation model established in this study is also applicable to other Au- or Al-metallized GaAs FETs for understanding the failure mechanisms induced by gate sinking and hydrogen neutralization of donors and for correlating the device physics model with reliability testing results.
Improved reliability of wind turbine towers with active tuned mass dampers (ATMDs)
NASA Astrophysics Data System (ADS)
Fitzgerald, Breiffni; Sarkar, Saptarshi; Staino, Andrea
2018-04-01
Modern multi-megawatt wind turbines are composed of slender, flexible, and lightly damped blades and towers. These components exhibit high susceptibility to wind-induced vibrations. As the size, flexibility, and cost of the towers have increased in recent years, the need to protect these structures against damage induced by turbulent aerodynamic loading has become apparent. This paper combines structural dynamic models and probabilistic assessment tools to demonstrate improvements in structural reliability when modern wind turbine towers are equipped with active tuned mass dampers (ATMDs). The study proposes a multi-modal wind turbine model for control design and analysis and incorporates an ATMD into the tower of this model. The model is subjected to stochastically generated wind loads of varying speeds to develop wind-induced probabilistic demand models for towers of modern multi-megawatt wind turbines under structural uncertainty. Numerical simulations have been carried out to ascertain the effectiveness of the active control system in improving the structural performance and reliability of the wind turbine. The study constructs fragility curves, which illustrate reductions in the vulnerability of towers to wind loading owing to the inclusion of the damper. Results show that the active controller is successful in increasing the reliability of the tower responses. According to the analysis carried out in this paper, a strong reduction of the probability of exceeding a given displacement at the rated wind speed has been observed.
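A fragility curve of the kind constructed in this paper gives the probability of exceeding a limit state as a function of load intensity. The sketch below builds such curves from a toy lognormal demand model; the scaling law, dispersion, displacement threshold, and the 30% median-response reduction attributed to the ATMD are all assumed for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical demand model: peak tower-top displacement (m) is lognormal
# with a median that grows with mean wind speed (m/s).
def simulate_peak_disp(v, n, rng, with_atmd=False):
    median = 0.012 * v ** 1.5 * (0.7 if with_atmd else 1.0)  # assumed scaling
    return median * rng.lognormal(mean=0.0, sigma=0.25, size=n)

rng = np.random.default_rng(0)
threshold = 0.8                      # assumed displacement limit state, m
speeds = np.arange(4, 26, 2)
for label, atmd in (("baseline", False), ("with ATMD", True)):
    frag = [np.mean(simulate_peak_disp(v, 20_000, rng, atmd) > threshold)
            for v in speeds]         # exceedance probability per wind speed
    print(label, np.round(frag, 3))
```

Plotting the exceedance probabilities against wind speed yields the fragility curves; the damped case sits below the baseline across the whole speed range.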
NASA Astrophysics Data System (ADS)
Abramov, Ivan
2018-03-01
Development of design documentation for a future construction project gives rise to a number of issues, the main one being the selection of manpower for structural units of the project's overall implementation system. Well planned and competently staffed integrated structural construction units will help achieve a high level of reliability and labor productivity and avoid negative (extraordinary) situations during the construction period, eventually ensuring improved project performance. Research priorities include the development of theoretical recommendations for enhancing the reliability of a structural unit staffed as an integrated construction crew. The author focuses on identifying destabilizing factors affecting the formation of an integrated construction crew; assessing these destabilizing factors; and, based on the developed mathematical model, highlighting the impact of these factors on the integration criterion, with subsequent identification of an efficiency and reliability criterion for the structural unit in general. The purpose of this article is to develop theoretical recommendations and scientific and methodological provisions of an organizational and technological nature in order to identify a reliability criterion for a structural unit based on manpower integration and productivity criteria. With this purpose in mind, complex scientific tasks have been defined requiring special research and the development of corresponding provisions and recommendations based on the system analysis findings presented herein.
A Reliability Estimation in Modeling Watershed Runoff With Uncertainties
NASA Astrophysics Data System (ADS)
Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.
1990-10-01
The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
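As a minimal illustration of the Monte Carlo option named in this framework (HEC-1 itself is not reproduced here), the sketch below propagates uncertainty in the SCS curve number through the standard curve-number runoff relation and expresses prediction reliability as the distribution of the estimated variable. All parameter values are assumed.

```python
import numpy as np

# Monte Carlo uncertainty propagation through the SCS curve-number relation:
# Q = (P - 0.2 S)^2 / (P + 0.8 S) for P > 0.2 S, with S = 1000/CN - 10.
rng = np.random.default_rng(42)
P = 4.0                                              # assumed rainfall, inches
CN = rng.normal(80.0, 5.0, 100_000).clip(40, 98)     # uncertain curve number
S = 1000.0 / CN - 10.0                               # potential retention, in
Q = np.where(P > 0.2 * S, (P - 0.2 * S) ** 2 / (P + 0.8 * S), 0.0)

pct = np.percentile(Q, [5, 50, 95])
print(f"runoff 5/50/95th percentiles: {pct.round(2)} in")
print(f"P(Q > 2 in) = {np.mean(Q > 2.0):.3f}")
```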
Upwelling regime off the Cabo Frio region in Brazil and impact on acoustic propagation.
Calado, Leandro; Camargo Rodríguez, Orlando; Codato, Gabriel; Contrera Xavier, Fabio
2018-03-01
This work introduces a description of the complex upwelling regime off the Cabo Frio region in Brazil and shows that ocean modeling, based on the feature-oriented regional modeling system (FORMS) technique, can produce reliable predictions of sound speed fields for the corresponding shallow water environment. This work also shows, through simulations, that the upwelling regime can be responsible for the creation of coastal shadow zones, in which the probability of detecting an acoustic source is too low. The development of the FORMS technique and its validation with real data, for the particular region of coastal upwelling off Cabo Frio, reveals the possibility of a sustainable and reliable forecast system for the corresponding (variable in space and time) underwater acoustic environment.
Probabilistic Solar Energetic Particle Models
NASA Technical Reports Server (NTRS)
Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.
2011-01-01
To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceed, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provides the worst-case differential proton spectrum. This model is based on data from the IMP-8 and GOES spacecraft, which provide a database extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
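The worst-case-at-confidence-level idea can be sketched simply: simulate many missions, record the largest event fluence seen in each, and read off the desired quantile. The event-rate and log-fluence parameters below are illustrative assumptions, not the IMP-8/GOES-derived values.

```python
import numpy as np

# Assumed model: a Poisson number of solar particle events per mission, with
# (hypothetically) lognormally distributed event fluences.
rng = np.random.default_rng(7)
n_missions = 20_000
mean_events = 12                      # assumed number of SPEs per mission
mu, sigma = np.log(1e8), 1.6          # assumed >10 MeV fluence, protons/cm^2

worst = np.zeros(n_missions)
for i, k in enumerate(rng.poisson(mean_events, n_missions)):
    if k:
        worst[i] = rng.lognormal(mu, sigma, k).max()

conf = 0.95                           # user-specified confidence level
print(f"worst-case event fluence at {conf:.0%}: {np.quantile(worst, conf):.2e}")
```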
Tan, Christine L.; Hassali, Mohamed A.; Saleem, Fahad; Shafie, Asrul A.; Aljadhey, Hisham; Gan, Vincent B.
2015-01-01
Objective: (i) To develop the Pharmacy Value-Added Services Questionnaire (PVASQ) using emerging themes generated from interviews. (ii) To establish the reliability and validity of the questionnaire instrument. Methods: Using an extended Theory of Planned Behavior as the theoretical model, face-to-face interviews generated salient beliefs about pharmacy value-added services. The PVASQ was constructed initially in English, incorporating important themes, and later translated into the Malay language with forward and backward translation. Intention (INT) to adopt pharmacy value-added services is predicted by attitudes (ATT), subjective norms (SN), perceived behavioral control (PBC), knowledge, and expectations. Using a 7-point Likert-type scale and a dichotomous scale, test-retest reliability (N=25) was assessed by administering the questionnaire instrument twice, one week apart. Internal consistency was measured by Cronbach's alpha, and agreement between the two administrations was assessed using the kappa statistic and the intraclass correlation coefficient (ICC). Confirmatory Factor Analysis, CFA (N=410), was conducted to assess the construct validity of the PVASQ. Results: The kappa coefficients indicate a moderate to almost perfect strength of agreement between test and retest. The ICC for all scales tested for intra-rater (test-retest) reliability was good. The overall Cronbach's alpha (N=25) is 0.912 and 0.908 for the two time points. The result of CFA (N=410) showed most items loaded strongly and correctly onto corresponding factors. Only one item was eliminated. Conclusions: This study is the first to develop and establish the reliability and validity of the Pharmacy Value-Added Services Questionnaire instrument using the Theory of Planned Behavior as the theoretical model. The translated Malay language version of the PVASQ is reliable and valid for predicting Malaysian patients' intention to adopt pharmacy value-added services to collect partial medicine supply. PMID:26445622
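Cronbach's alpha, used above for internal consistency, is simple to compute from the item-score matrix. The sketch below implements the standard formula on synthetic Likert data; the data are made up, only the formula is standard.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Illustrative data: 25 respondents x 10 items on a 7-point scale.
rng = np.random.default_rng(3)
latent = rng.normal(4, 1, (25, 1))                          # shared trait
scores = np.clip(np.round(latent + rng.normal(0, 0.8, (25, 10))), 1, 7)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```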
LACIE: Wheat yield models for the USSR
NASA Technical Reports Server (NTRS)
Sakamoto, C. M.; Leduc, S. K.
1977-01-01
A quantitative model determining the relationship between weather conditions and wheat yield in the U.S.S.R. was studied to provide early, reliable forecasts of the size of the U.S.S.R. wheat harvest. Separate models are developed for spring wheat and for winter wheat. Differences in yield potential and responses to stress conditions and cultural improvements necessitate models for each class.
Domeyer, Philip J; Aletras, Vassilis; Anagnostopoulos, Fotios; Katsari, Vasiliki; Niakas, Dimitris
2017-01-01
The use of generic medicines is a cost-effective policy, often dictated by fiscal restraints. To our knowledge, no fully validated tool exploring students' knowledge of and attitudes towards generic medicines exists. The aim of our study was to develop and validate a questionnaire exploring the knowledge and attitudes of M.Sc. in Health Care Management students and recent alumni towards generic drugs in Greece. The development of the questionnaire was the result of a literature review and pilot-testing of its preliminary versions with researchers and students. The final version of the questionnaire contains 18 items measuring the respondents' knowledge of and attitudes towards generic medicines on a 5-point Likert scale. Given the ordinal nature of the data, ordinal alpha and polychoric correlations were computed. The sample was randomly split into two halves. Exploratory factor analysis, performed on the first sample, was used for the creation of multi-item scales. Confirmatory factor analysis and Generalized Linear Latent and Mixed Model (GLLAMM) analysis with the rating scale model were used on the second sample to assess goodness of fit. An assessment of internal consistency reliability, test-retest reliability, and construct validity was also performed. Among 1402 persons contacted, 986 completed our questionnaire (response rate = 70.3%). Overall Cronbach's alpha was 0.871. The conjoint use of exploratory and confirmatory factor analysis resulted in a six-scale model, which seemed to fit the data well. Five of the six scales, namely trust, drug quality, state audit, fiscal impact, and drug substitution, were found to be valid and reliable, while the knowledge scale suffered only from low inter-scale correlations and a ceiling effect. However, the subsequent confirmatory factor and GLLAMM analyses indicated a good fit of the model to the data. The ATTOGEN instrument proved to be a reliable and valid tool, suitable for assessing students' knowledge of and attitudes towards generic medicines.
Data Used in Quantified Reliability Models
NASA Technical Reports Server (NTRS)
DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.
2014-01-01
Data are the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.
Analytical models for coupling reliability in identical two-magnet systems during slow reversals
NASA Astrophysics Data System (ADS)
Kani, Nickvash; Naeemi, Azad
2017-12-01
This paper follows previous works which investigated the strength of dipolar coupling in two-magnet systems. While those works focused on qualitative analyses, this manuscript elucidates reversal through dipolar coupling, culminating in analytical expressions for reversal reliability in identical two-magnet systems. The dipolar field generated by a mono-domain magnetic body can be represented by a tensor containing both longitudinal and perpendicular field components; this field changes orientation and magnitude based on the magnetization of neighboring nanomagnets. While the dipolar field does reduce to its longitudinal component at short time-scales, for slow magnetization reversals the simple longitudinal field representation greatly underestimates the scope of parameters that ensure reliable coupling. For the first time, analytical models that map the geometric and material parameters required for reliable coupling in two-magnet systems are developed. It is shown that in biaxial nanomagnets, the x̂ and ŷ components of the dipolar field contribute to the coupling, while all three dimensions contribute to the coupling between a pair of uniaxial magnets. Additionally, the ratio of the longitudinal and perpendicular components of the dipolar field is also very important. If the perpendicular components in the dipolar tensor are too large, the nanomagnet pair may come to rest in an undesirable meta-stable state away from the free axis. The analytical models formulated in this manuscript map the minimum and maximum parameters for reliable coupling. Using these models, it is shown that there is a very small range of material parameters which can facilitate reliable coupling between perpendicular-magnetic-anisotropy nanomagnets; hence, in-plane nanomagnets are more suitable for coupled systems.
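For orientation, the longitudinal and perpendicular field components discussed above can be estimated from the standard point-dipole expression, which treats a mono-domain nanomagnet as a point dipole. The sketch below does this for assumed magnet dimensions, saturation magnetization, and spacing; the paper's full tensor models are not reproduced.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic field (T) at displacement r (m) from a point dipole m (A*m^2):
    B = mu0/(4 pi r^3) * (3 (m . rhat) rhat - m)."""
    r = np.asarray(r, dtype=float)
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0 / (4 * np.pi * d ** 3) * (3 * np.dot(m, rhat) * rhat - m)

# Assumed numbers: Ms = 8e5 A/m, 60 x 60 x 10 nm^3 magnet, 100 nm spacing.
m = np.array([1.0, 0.0, 0.0]) * 8e5 * (60e-9 * 60e-9 * 10e-9)
b = dipole_field(m, [100e-9, 0.0, 0.0])
print(f"longitudinal dipolar field at the neighbor: {b[0] * 1e3:.2f} mT")
```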
Assessment of NDE reliability data
NASA Technical Reports Server (NTRS)
Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.
1975-01-01
Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
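The binomial probability-of-detection calculation at a confidence limit mentioned above can be illustrated with a one-sided Clopper-Pearson lower bound on the detection probability from hit/miss data; the 29-of-29 case below is the classic "90/95" demonstration point. This is a generic sketch of the binomial approach, not the program's actual computer code.

```python
from scipy import stats

def pod_lower_bound(detections: int, trials: int, confidence: float) -> float:
    """One-sided lower confidence bound on POD from hit/miss data
    (Clopper-Pearson, via the beta distribution)."""
    if detections == 0:
        return 0.0
    return stats.beta.ppf(1.0 - confidence, detections, trials - detections + 1)

# 29 hits in 29 trials demonstrates ~90% POD at 95% confidence.
print(f"POD lower bound: {pod_lower_bound(29, 29, 0.95):.3f}")
```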
NASA Technical Reports Server (NTRS)
Perino, Scott; Bayandor, Javid; Siddens, Aaron
2012-01-01
The anticipated NASA Mars Sample Return (MSR) mission requires a simple and reliable method of returning collected Martian samples to Earth for scientific analysis. The Multi-Mission Earth Entry Vehicle (MMEEV) is NASA's proposed solution to this MSR requirement. Key aspects of the MMEEV are its reliable and passive operation, energy-absorbing foam-composite structure, and modular impact sphere (IS) design. To aid in the development of an EEV design that can be modified for various mission requirements, two fully parametric finite element models were developed. The first model was developed in an explicit finite element code and was designed to evaluate the impact response of the vehicle and payload during the final stage of the vehicle's return to Earth. The second model was developed in an explicit code and was designed to evaluate the static and dynamic structural response of the vehicle during launch and reentry. In contrast to most other FE models, which are built through a Graphical User Interface (GUI) pre-processor, the current model was developed using a coding technique that allows the analyst to quickly change nearly all aspects of the model, including geometric dimensions, material properties, load and boundary conditions, mesh properties, and analysis controls. Using the developed design tool, a full range of proposed designs can quickly be analyzed numerically, and thus the design trade space for the EEV can be fully understood. An engineer can then quickly reach the best design for a specific mission and also adapt and optimize the general design for different missions.
Operations and support cost modeling of conceptual space vehicles
NASA Technical Reports Server (NTRS)
Ebeling, Charles
1994-01-01
The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center, documenting the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years, in which a reliability and maintainability model was developed, to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three-axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. A revised cost element structure (CES), which is currently under study by NASA, was used to establish the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allows other cost categories, such as RDT&E and production costs, to be addressed. Secondary tasks performed concurrently with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology, and a computer implementation of that methodology, to provide for timely operations and support cost analysis during conceptual design activities.
Reliability Assessment for Low-cost Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Freeman, Paul Michael
Existing low-cost unmanned aerospace systems are unreliable, and engineers must blend reliability analysis with fault-tolerant control in novel ways. This dissertation introduces the University of Minnesota unmanned aerial vehicle flight research platform, a comprehensive simulation and flight test facility for reliability and fault-tolerance research. An industry-standard reliability assessment technique, the failure modes and effects analysis, is performed for an unmanned aircraft. Particular attention is afforded to the control surface and servo-actuation subsystem. Maintaining effector health is essential for safe flight; failures may lead to loss of control incidents. Failure likelihood, severity, and risk are qualitatively assessed for several effector failure modes. Design changes are recommended to improve aircraft reliability based on this analysis. Most notably, the control surfaces are split, providing independent actuation and dual-redundancy. The simulation models for control surface aerodynamic effects are updated to reflect the split surfaces using a first-principles geometric analysis. The failure modes and effects analysis is extended by using a high-fidelity nonlinear aircraft simulation. A trim state discovery is performed to identify the achievable steady, wings-level flight envelope of the healthy and damaged vehicle. Tolerance of elevator actuator failures is studied using familiar tools from linear systems analysis. This analysis reveals significant inherent performance limitations for candidate adaptive/reconfigurable control algorithms used for the vehicle. Moreover, it demonstrates how these tools can be applied in a design feedback loop to make safety-critical unmanned systems more reliable. Control surface impairments that do occur must be quickly and accurately detected. This dissertation also considers fault detection and identification for an unmanned aerial vehicle using model-based and model-free approaches and applies those algorithms to experimental faulted and unfaulted flight test data. Flight tests are conducted with actuator faults that affect the plant input and sensor faults that affect the vehicle state measurements. A model-based detection strategy is designed and uses robust linear filtering methods to reject exogenous disturbances, e.g. wind, while providing robustness to model variation. A data-driven algorithm is developed to operate exclusively on raw flight test data without physical model knowledge. The fault detection and identification performance of these complementary but different methods is compared. Together, enhanced reliability assessment and multi-pronged fault detection and identification techniques can help to bring about the next generation of reliable low-cost unmanned aircraft.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential for the assessment of internal doses and radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr): one reported by the International Commission on Radiological Protection (ICRP), and one developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the model parameter with the most influence on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper large intestine contributes substantially in the later days, i.e., after day 300. The alimentary tract absorption factor (fA) mostly influences the integral of radioactivity of Zr in most source organs after ingestion.
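A first-order biokinetic model of this type is a linear system of compartment ODEs, and parameter sensitivity can be probed by perturbing one transfer rate at a time. The sketch below shows the pattern on a toy three-compartment model with made-up rates; it is not the ICRP or HMGU Zr model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy first-order biokinetic model (hypothetical transfer rates, 1/d):
# blood <-> soft tissue, blood -> bone, blood -> urine.
def rhs(k):
    def f(t, y):
        blood, soft, bone = y
        return [-(k["b2s"] + k["b2bone"] + k["b2u"]) * blood + k["s2b"] * soft,
                k["b2s"] * blood - k["s2b"] * soft,
                k["b2bone"] * blood]
    return f

k = dict(b2s=2.0, s2b=0.5, b2bone=0.3, b2u=1.0)
t = np.linspace(0.0, 50.0, 201)
sol = solve_ivp(rhs(k), (0.0, 50.0), [1.0, 0.0, 0.0], t_eval=t)

# Finite-difference sensitivity of 50-d bone retention to the blood->bone rate.
k2 = dict(k, b2bone=1.01 * k["b2bone"])
sol2 = solve_ivp(rhs(k2), (0.0, 50.0), [1.0, 0.0, 0.0], t_eval=t)
sens = (sol2.y[2, -1] - sol.y[2, -1]) / (0.01 * k["b2bone"])
print(f"bone content at 50 d: {sol.y[2, -1]:.3f}; d(bone)/dk_b2bone ~ {sens:.2f}")
```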
NASA Astrophysics Data System (ADS)
Xu, Shicai; Zhan, Jian; Man, Baoyuan; Jiang, Shouzhen; Yue, Weiwei; Gao, Shoubao; Guo, Chengang; Liu, Hanping; Li, Zhenhua; Wang, Jihua; Zhou, Yaoqi
2017-03-01
Reliable determination of the binding kinetics and affinity of DNA hybridization and single-base mismatches plays an essential role in systems biology and personalized and precision medicine. The standard tools are optical-based sensors that are difficult to operate at low cost and to miniaturize for high-throughput measurement. Biosensors based on nanowire field-effect transistors have been developed, but reliable and cost-effective fabrication remains a challenge. Here, we demonstrate that a graphene single-crystal domain patterned into multiple channels can measure time- and concentration-dependent DNA hybridization kinetics and affinity reliably and sensitively, with a detection limit of 10 pM for DNA. It can distinguish single-base mutations quantitatively in real time. An analytical model is developed to estimate probe density, efficiency of hybridization, and the maximum sensor response. The results suggest a promising future for cost-effective, high-throughput screening of drug candidates, genetic variations, and disease biomarkers by using an integrated, miniaturized, all-electrical, multiplexed, graphene-based DNA array.
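Hybridization kinetics on such a sensor are often described by a first-order Langmuir model, in which the observed rate depends on target concentration and the affinity follows as K_D = k_off/k_on. The sketch below evaluates the closed-form response; the rate constants are assumed for illustration, not the paper's fitted values.

```python
import numpy as np

def langmuir_response(t, c, k_on, k_off, r_max):
    """Fractional sensor response for first-order (Langmuir) hybridization:
    dtheta/dt = k_on*c*(1 - theta) - k_off*theta, theta(0) = 0."""
    k_obs = k_on * c + k_off                 # observed rate constant, 1/s
    theta_eq = k_on * c / k_obs              # equilibrium surface coverage
    return r_max * theta_eq * (1.0 - np.exp(-k_obs * t))

# Assumed kinetic constants for a complementary target (illustrative only).
t = np.linspace(0, 600, 5)                   # time, s
for c in (1e-11, 1e-10, 1e-9):               # target DNA concentration, M
    r = langmuir_response(t, c, k_on=1e6, k_off=1e-3, r_max=1.0)
    print(f"c = {c:.0e} M -> response {np.round(r, 3)}")
# With these assumed constants, K_D = k_off / k_on = 1 nM.
```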
NASA Astrophysics Data System (ADS)
Andarani, Pertiwi; Setiyo Huboyo, Haryono; Setyanti, Diny; Budiawan, Wiwik
2018-02-01
Noise is considered one of the main environmental impacts of Adi Soemarmo International Airport (ASIA), the second largest airport in Central Java Province, Indonesia. In order to manage airport noise, noise mapping is necessary. However, a model that requires simple inputs yet remains reliable was not available for ASIA. Therefore, the objectives of this study are to develop a model using Matlab software, to verify its reliability by measuring actual noise exposure, and to analyze the areas within each noise level. The model was developed based on interpolation or extrapolation of identified Noise-Power-Distance (NPD) data. In accordance with Indonesian Government Ordinance No.40/2012, the noise metric used is WECPNL (Weighted Equivalent Continuous Perceived Noise Level). Based on this model simulation, there are residential areas within noise level regions II (1.912 km2) and III (1.16 km2) and 18 school buildings in the areas of noise levels I, II, and III. These land uses are actually prohibited unless noise insulation is installed. The model using Matlab in the case of Adi Soemarmo International Airport is valid based on comparison with field measurements (6 sampling points). However, it is important to validate the model again once the case study (the airport) is changed.
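The NPD interpolation the model is built on can be illustrated briefly: given tabulated levels at reference slant distances for one aircraft and thrust setting, the level at an arbitrary distance is interpolated on a logarithmic distance axis. The study implemented its model in Matlab; the sketch below uses Python with a made-up NPD table, not ASIA data.

```python
import numpy as np

# Hypothetical Noise-Power-Distance table for one aircraft/thrust setting.
npd_dist = np.array([200, 400, 800, 1600, 3200, 6400])    # slant distance, m
npd_level = np.array([94, 88, 82, 75, 68, 60])            # noise level, dB(A)

def noise_at(distance_m):
    """Interpolate the NPD curve on a log-distance axis."""
    return float(np.interp(np.log10(distance_m), np.log10(npd_dist), npd_level))

print(f"{noise_at(1000):.1f} dB(A) at 1000 m")   # between the 800/1600 m rows
```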
Drought Water Right Curtailment
NASA Astrophysics Data System (ADS)
Walker, W.; Tweet, A.; Magnuson-Skeels, B.; Whittington, C.; Arnold, B.; Lund, J. R.
2016-12-01
California's water rights system allocates water based on priority, where lower priority, "junior" rights are curtailed first in a drought. The Drought Water Rights Allocation Tool (DWRAT) was developed to integrate water right allocation models with legal objectives to suggest water rights curtailments during drought. DWRAT incorporates water right use and priorities with a flow-forecasting model to mathematically represent water law and hydrology and suggest water allocations among water rights holders. DWRAT is compiled within an Excel workbook, with an interface and an open-source solver. By implementing California water rights law as an algorithm, DWRAT provides a precise and transparent framework for the complicated and often controversial technical aspects of curtailing water rights use during drought. DWRAT models have been developed for use in the Eel, Russian, and Sacramento river basins. In this study, an initial DWRAT model has been developed for the San Joaquin watershed, which incorporates all water rights holders in the basin and reference gage flows for major tributaries. The San Joaquin DWRAT can assess water allocation reliability by determining probability of rights holders' curtailment for a range of hydrologic conditions. Forecasted flow values can be input to the model to provide decision makers with the ability to make curtailment and water supply strategy decisions. Environmental flow allocations will be further integrated into the model to protect and improve ecosystem water reliability.
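The appropriative-priority logic that DWRAT encodes can be illustrated with a greedy loop: senior rights are served first, and junior rights are curtailed once the forecast flow is exhausted. DWRAT itself formulates this as linear programs inside an Excel workbook; the rights and flow below are invented.

```python
# Priority-based allocation sketch: serve rights in order of seniority.
rights = [  # (holder, priority year, demand in cfs) -- hypothetical
    ("A", 1914, 10.0), ("B", 1950, 25.0), ("C", 1927, 15.0), ("D", 1980, 20.0),
]
available = 35.0  # forecast flow at the relevant gage, cfs

allocations = {}
for holder, year, demand in sorted(rights, key=lambda r: r[1]):
    take = min(demand, available)     # senior rights take up to their demand
    allocations[holder] = take
    available -= take

for holder, year, demand in rights:
    status = "curtailed" if allocations[holder] < demand else "full"
    print(f"{holder} ({year}): {allocations[holder]:.1f}/{demand} cfs ({status})")
```

With these numbers the 1950 right is partially curtailed and the 1980 right receives nothing, which is exactly the seniority behavior described above.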
Probabilistic fatigue life prediction of metallic and composite materials
NASA Astrophysics Data System (ADS)
Xiang, Yibing
Fatigue is one of the most common failure modes for engineering structures, such as aircraft, rotorcraft, and aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Huge uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads, and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service. Probabilistic fatigue prognosis considering various uncertainties is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. Following this, the developed model is extended to include structural geometry effects (notch effect), environmental effects (corroded specimens), and manufacturing effects (shot peening effects). Due to inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials. A composite fatigue life prediction model is proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for uncertainty propagation in fatigue life prediction. An equivalent stress transformation has been developed to enhance computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
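The crack-growth side of such a metallic fatigue model typically rests on Paris' law integrated from an equivalent initial flaw size (EIFS) to a critical crack size. The sketch below performs that integration numerically with illustrative, aluminium-like constants; it is a generic sketch, not the study's calibrated model.

```python
import numpy as np

def cycles_to_failure(a0, ac, c, m, dsigma, y=1.0):
    """Integrate Paris' law da/dN = C*(dK)^m, with dK = Y*dsigma*sqrt(pi*a),
    from an (equivalent initial) flaw size a0 to a critical size ac."""
    a = np.linspace(a0, ac, 20_000)               # crack length, m
    dk = y * dsigma * np.sqrt(np.pi * a)          # stress intensity range
    dn_da = 1.0 / (c * dk ** m)                   # cycles per metre of growth
    return float(np.sum(0.5 * (dn_da[1:] + dn_da[:-1]) * np.diff(a)))

# Illustrative constants; the EIFS concept replaces the physical initial
# crack size a0 with a statistically back-calculated one.
N = cycles_to_failure(a0=50e-6, ac=5e-3, c=1e-11, m=3.0, dsigma=150.0)
print(f"predicted life: {N:.2e} cycles")
```

Making a0 (and, if desired, C and m) random variables and repeating this integration is the basic route from the deterministic model to the probabilistic life distribution.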
The Development of Chorus Motivation Scale (CMS) for Prospective Music Teacher
ERIC Educational Resources Information Center
Ozgul, Ilhan; Yigit, Nalan
2017-01-01
The purpose of this study was to develop a Chorus Motivation Scale (CMS) that is tested in terms of reliability and construct validity by determining the student perceptions of effective motivation strategies in Chorus training in Turkish Music Teacher Training Model. In order to develop a Chorus Motivation Scale, Questionnaire-Effective…
ERIC Educational Resources Information Center
Vezeau, Susan Lynn; Powell, Robert B.; Stern, Marc J.; Moore, D. DeWayne; Wright, Brett A.
2017-01-01
This investigation examines the development of two scales that measure elaboration and behaviors associated with stewardship in children. The scales were developed using confirmatory factor analysis to investigate their construct validity, reliability, and psychometric properties. Results suggest that a second-order factor model structure provides…
Reliable probabilities through statistical post-processing of ensemble predictions
NASA Astrophysics Data System (ADS)
Van Schaeybroeck, Bert; Vannitsem, Stéphane
2013-04-01
We develop post-processing or calibration approaches based on linear regression that make ensemble forecasts more reliable. First, we enforce climatological reliability in the sense that the total variability of the prediction equals the variability of the observations. Second, we impose ensemble reliability such that the spread of the observations around the ensemble mean coincides with that of the ensemble members. In general the attractors of the model and reality are inhomogeneous; therefore the ensemble spread displays a variability not taken into account in standard post-processing methods. We overcome this by weighting the ensemble by a variable error. The approaches are tested in the context of the Lorenz 96 model (Lorenz 1996). The forecasts become more reliable at short lead times, as reflected by a flatter rank histogram. Our best method turns out to be superior to well-established methods like EVMOS (Van Schaeybroeck and Vannitsem, 2011) and Nonhomogeneous Gaussian Regression (Gneiting et al., 2005). References [1] Gneiting, T., Raftery, A. E., Westveld, A., Goldman, T., 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Weather Rev. 133, 1098-1118. [2] Lorenz, E. N., 1996: Predictability - a problem partly solved. Proceedings, Seminar on Predictability, ECMWF. 1, 1-18. [3] Van Schaeybroeck, B., and S. Vannitsem, 2011: Post-processing through linear regression, Nonlin. Processes Geophys., 18, 147.
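The flavor of these variance-constrained regression approaches can be conveyed with a toy calibration: regress the observations on the ensemble mean, then inflate the residual spread so that the total forecast variability matches that of the observations. The sketch below does this on synthetic data; it is a simplification of the EVMOS-style constraint, not the authors' full scheme with variable error weighting.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 2_000, 20
obs = rng.normal(0.0, 1.0, n)
ens = 0.8 * obs[:, None] + rng.normal(0.0, 0.7, (n, m))   # biased, dispersive

# Linear-regression correction of the ensemble mean.
slope, intercept = np.polyfit(ens.mean(axis=1), obs, 1)
corrected = intercept + slope * ens.mean(axis=1)

# Inflate the spread so total forecast variance matches the observations.
inflation = np.sqrt(max(obs.var() - corrected.var(), 0.0))
members = corrected[:, None] + inflation * rng.normal(0.0, 1.0, (n, m))
print(f"obs var {obs.var():.2f} vs calibrated forecast var {members.var():.2f}")
```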
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.
Development, reliability and validation of an infant mammalian penetration-aspiration scale
Holman, Shaina Devi; Campbell-Malone, Regina; Ding, Peng; Gierbolini-Norat, Estela M.; Griffioen, Anne M.; Inokuchi, Haruhi; Lukasik, Stacey L.; German, Rebecca Z.
2012-01-01
A penetration-aspiration scale exists for assessing airway protection in adult videofluoroscopic and fiberoptic endoscopic swallowing studies; however, no such scale exists for animal models. The aim of this study was threefold: 1) to develop a Penetration-Aspiration Scale (PAS) for infant mammals, 2) to test the scale's intra- and inter-rater reliability, and 3) to validate the use of the scale for distinguishing between abnormal and normal animals. After discussion and review of many videos, a 7-point Infant Mammal PAS was developed. Reliability was tested by having 5 judges score 90 swallows recorded with videofluoroscopy across two time points. In these videos, the frame rate was either 30 or 60 frames per second, and the animals were either normal, had a unilateral superior laryngeal nerve (SLN) lesion, or had hard palate local anesthesia. The scale was validated by having one judge score videos of both normal and SLN-lesioned pigs and testing the difference using a t-test. Raters had high intra-rater (average kappa of 0.82, intraclass correlation coefficient (ICC) = 0.92) and high inter-rater reliability (average kappa of 0.68, ICC = 0.66). There was a significant difference in reliability between videos captured at 30 and 60 frames per second for scores of 3 and 7 (p<0.001). The scale was also validated for distinguishing between normal and abnormal pigs (p<0.001). Given the increasing number of animal studies using videofluoroscopy to study dysphagia, this scale provides a valid and reliable measure of airway protection during swallowing in infant pigs that will give these animal models increased translational significance. PMID:23129423
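Cohen's kappa, the agreement statistic reported above, is straightforward to compute from paired ratings. The sketch below implements the unweighted version for two hypothetical raters on a 7-point scale; the ratings are invented, and for ordinal scales a weighted kappa is often preferred.

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Unweighted Cohen's kappa for two raters assigning scores 1..n_categories."""
    cm = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        cm[a - 1, b - 1] += 1
    cm /= cm.sum()
    po = np.trace(cm)                        # observed agreement
    pe = cm.sum(axis=1) @ cm.sum(axis=0)     # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# Two hypothetical raters scoring 12 swallows on the 7-point scale.
rater1 = [1, 1, 3, 7, 2, 5, 6, 1, 3, 3, 7, 2]
rater2 = [1, 2, 3, 7, 2, 5, 6, 1, 3, 4, 7, 2]
print(f"kappa = {cohens_kappa(rater1, rater2, 7):.2f}")
```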
NASA Astrophysics Data System (ADS)
Moghaddam, Kamran S.; Usher, John S.
2011-07-01
In this article, a new multi-objective optimization model is developed to determine the optimal preventive maintenance and replacement schedules in a repairable and maintainable multi-component system. In this model, the planning horizon is divided into discrete and equally-sized periods in which three possible actions must be planned for each component, namely maintenance, replacement, or do nothing. The objective is to determine a plan of actions for each component in the system that simultaneously minimizes the total cost and maximizes overall system reliability over the planning horizon. Because of the complex, combinatorial, and highly nonlinear structure of the mathematical model, two metaheuristic solution methods, a generational genetic algorithm and simulated annealing, are applied to tackle the problem. The Pareto optimal solutions that provide good tradeoffs between the total cost and the overall reliability of the system can be obtained by the solution approach. Such a modeling approach should be useful for maintenance planners and engineers tasked with developing recommended maintenance plans for complex systems of components.
Reliability and cost analysis methods
NASA Technical Reports Server (NTRS)
Suich, Ronald C.
1991-01-01
In the design phase of a system, how does a design engineer or manager choose between a subsystem with .990 reliability and a more costly subsystem with .995 reliability? When is the increased cost justified? High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds since the expected cost due to subsystem failure is not the only cost involved. The subsystem itself may be very costly. We should not consider either the cost of the subsystem or the expected cost due to subsystem failure separately but should minimize the total of the two costs, i.e., the total of the cost of the subsystem plus the expected cost due to subsystem failure. This final report discusses the Combined Analysis of Reliability, Redundancy, and Cost (CARRAC) methods which were developed under Grant Number NAG 3-1100 from the NASA Lewis Research Center. CARRAC methods and a CARRAC computer program employ five models which can be used to cover a wide range of problems. The models contain an option which can include repair of failed modules.
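The core trade-off is easy to state in code: choose the subsystem that minimizes subsystem cost plus expected cost due to failure, rather than maximizing reliability alone. The numbers below are invented, and the CARRAC models themselves are not reproduced; with the assumed consequence cost, the cheaper, less reliable subsystem wins.

```python
# Minimize total cost = subsystem cost + (1 - reliability) * failure cost.
candidates = [
    {"name": "subsystem A", "reliability": 0.990, "cost": 1.0e6},
    {"name": "subsystem B", "reliability": 0.995, "cost": 1.8e6},
]
cost_of_failure = 5.0e7   # assumed mission-level consequence cost

for c in candidates:
    c["total"] = c["cost"] + (1.0 - c["reliability"]) * cost_of_failure
    print(f"{c['name']}: expected total cost ${c['total']:.3g}")

best = min(candidates, key=lambda c: c["total"])
print(f"choose {best['name']}")
```

Here subsystem A totals $1.5M against $2.05M for B, so the extra reliability is not justified; raising the assumed consequence cost eventually reverses the choice, which is exactly the question the report poses.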
NDE reliability and probability of detection (POD) evolution and paradigm shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra
2014-02-18
The subject of NDE reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs, including the important one nicknamed "Have Cracks - Will Travel", or in short "Have Cracks", by Lockheed Georgia Company for the US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD, starting from the introduction of fracture mechanics and Damage Tolerant Design (DTD), to the statistical framework of Berens and Hovey in 1981 for POD estimation, to MIL-HDBK-1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and Bayesian statistics. Each of these developments had one objective, i.e., improving the accuracy of life prediction in components, which to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have reliable detection and sizing of large flaws in components. Currently, POD is used for studying the reliability and capability of NDE methods, though POD data offer no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantification of human factors. Furthermore, reliability and POD have often been treated as synonymous, but POD is not NDE reliability. POD is a subset of reliability that consists of six phases: 1) sample selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones of the last several decades and discusses the rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables, including human factors, for estimating POD. Another objective is to list gaps between "hoped for" versus validated or fielded failed hardware.
Special methods for aerodynamic-moment calculations from parachute FSI modeling
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Boswell, Cody; Tsutsui, Yuki; Montel, Kenneth
2015-06-01
The space-time fluid-structure interaction (STFSI) methods for 3D parachute modeling are now at a level where they can bring reliable, practical analysis to some of the most complex parachute systems, such as spacecraft parachutes. The methods include the Deforming-Spatial-Domain/Stabilized ST method as the core computational technology, and a good number of special FSI methods targeting parachutes. Evaluating the stability characteristics of a parachute based on how the aerodynamic moment varies as a function of the angle of attack is one of the practical analyses that reliable parachute FSI modeling can deliver. We describe the special FSI methods we developed for this specific purpose and present the aerodynamic-moment data obtained from FSI modeling of NASA Orion spacecraft parachutes and Japan Aerospace Exploration Agency (JAXA) subscale parachutes.
Perriman, Noelyn; Davis, Deborah
2016-06-01
The objective of this systematic integrative review is to identify, summarise and communicate the findings of research relating to tools that measure maternal satisfaction with continuity of maternity care models, and in so doing to determine the most appropriate, reliable and valid tool for measuring maternal satisfaction with continuity of maternity care. A systematic integrative review of published and unpublished literature was undertaken using selected databases. Research papers were included if they measured maternal satisfaction in a continuity model of maternity care, were published in English after 1999, and if they included (or made available) the instrument used to measure satisfaction. Six hundred and thirty-two unique papers were identified and, after applying the selection criteria, four papers were included in the review. Three of these originated in Australia and one in Canada. The primary focus of all papers was not the development of a tool to measure maternal satisfaction but the comparison of outcomes in different models of care. The instruments developed varied in the degree to which they were tested for validity and reliability. Women's satisfaction with maternity services is an important measure of quality. Most satisfaction surveys in maternity appear to reflect fragmented models of care, though continuity of care models are increasing in line with the evidence demonstrating their effectiveness. It is important that robust tools are developed for this context and that there is some consistency in the way this is measured and reported for the purposes of benchmarking and quality improvement.
Validation of behave fire behavior predictions in oak savannas
Grabner, Keith W.; Dwyer, John; Cutter, Bruce E.
1997-01-01
Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (short grass), Fuel Model 2 (timber and grass), Fuel Model 3 (tall grass), and Fuel Model 9 (hardwood litter). Also, a customized oak savanna fuel model (COSFM) was created and validated. Results indicate that standardized fuel model 2 and the COSFM reliably estimate mean rate-of-spread (MROS). The COSFM did not appreciably reduce MROS variation when compared to fuel model 2. Fuel models 1, 3, and 9 did not reliably predict MROS. Neither the standardized fuel models nor the COSFM adequately predicted flame lengths. We concluded that standardized fuel model 2 should be used with BEHAVE when predicting fire rates-of-spread in established oak savannas.
Reliability model generator specification
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Mccann, Catherine
1990-01-01
The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the input interface for the reliability evaluation tool SURE, is described. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.
A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.
ERIC Educational Resources Information Center
Wood, Terry M.; Safrit, Margaret J.
1987-01-01
A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…
Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective
ERIC Educational Resources Information Center
Hadjerrouit, Said
2005-01-01
In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...
Lance A. Vickers; Thomas R. Fox; David L. Loftis; David A. Boucugnani
2013-01-01
The difficulty of achieving reliable oak (Quercus spp.) regeneration is well documented. Application of silvicultural techniques to facilitate oak regeneration largely depends on current regeneration potential. A computer model to assess regeneration potential based on existing advanced reproduction in Appalachian hardwoods was developed by David...
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services, wherein a host of facilities, equipment, and human resources exist, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; and item analysis, the Cronbach's alpha test, the ICC test (to measure reliability of the test), and the composite reliability coefficient were used to measure initial reliability. The relationship between variables and factors was confirmed at the 0.05 significance level by conducting confirmatory factor analysis (CFA) and the structural equation modeling (SEM) technique with the use of Smart-PLS. R-squared and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of the hospital, in that order. Moderation, simplification, and substitution thus carry more weight in inherent safety than minimization, whose lower weight could be due to its definition as minimizing the risk.
Learned helplessness: validity and reliability of depressive-like states in mice.
Chourbaji, S; Zacher, C; Sanchis-Segura, C; Dormann, C; Vollmayr, B; Gass, P
2005-12-01
The learned helplessness paradigm is a depression model in which animals are exposed to unpredictable and uncontrollable stress, e.g. electroshocks, and subsequently develop coping deficits for aversive but escapable situations (J.B. Overmier, M.E. Seligman, Effects of inescapable shock upon subsequent escape and avoidance responding, J. Comp. Physiol. Psychol. 63 (1967) 28-33). It represents a model with good similarity to the symptoms of depression, and with construct and predictive validity, in rats. Despite an increased need to investigate emotional, in particular depression-like, behaviors in transgenic mice, so far only a few studies have been published using the learned helplessness paradigm. One reason may be the fact that, in contrast to rats (B. Vollmayr, F.A. Henn, Learned helplessness in the rat: improvements in validity and reliability, Brain Res. Brain Res. Protoc. 8 (2001) 1-7), there is no generally accepted learned helplessness protocol available for mice. This prompted us to develop a reliable helplessness procedure in C57BL/6N mice, to exclude possible artifacts, and to establish a protocol which yields a consistent fraction of helpless mice following shock exposure. Furthermore, we validated this protocol pharmacologically using the tricyclic antidepressant imipramine. Here, we present a mouse model with good face and predictive validity that can be used for transgenic, behavioral, and pharmacological studies.
Numerical aerodynamic simulation facility. Preliminary study extension
NASA Technical Reports Server (NTRS)
1978-01-01
The production of an optimized design of key elements of the candidate facility was the primary objective of this report. This was accomplished through the following tasks: (1) to further develop, optimize, and describe the functional description of the custom hardware; (2) to delineate trade-off areas between performance, reliability, availability, serviceability, and programmability; (3) to develop metrics and models for validation of the candidate system's performance; (4) to conduct a functional simulation of the system design; (5) to perform a reliability analysis of the system design; and (6) to develop the software specifications, to include a user-level high-level programming language and a correspondence between the programming language and the instruction set, and to outline the operating system requirements.
Customer-Driven Reliability Models for Multistate Coherent Systems
1992-01-01
Dissertation, Graduate College, University of Oklahoma, Norman, Oklahoma, 1992 (Boedigheimer).
Constructing the "Best" Reliability Data for the Job
NASA Technical Reports Server (NTRS)
DeMott, D. L.; Kleinhammer, R. K.
2014-01-01
Modern business and technical decisions are based on the results of analyses. When considering assessments using "reliability data", the concern is how long a system will continue to operate as designed. Generally, the results are only as good as the data used. Ideally, a large set of pass/fail tests or observations to estimate the probability of failure of the item under test would produce the best data. However, this is a costly endeavor if used for every analysis and design. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, we attempt to develop the "best" or composite analog data to support our assessments. One method incorporates processes for reviewing existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality, and even failure modes affect the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component and can be used to support early risk or reliability trade studies, or analytical models, to establish the predicted reliability data points. Data that is more representative of reality and more project specific provides more accurate analysis, and hopefully a better final decision.
Niraula, Rewati; Norman, Laura A.; Meixner, Thomas; Callegary, James B.
2012-01-01
In most watershed-modeling studies, flow is calibrated at one monitoring site, usually at the watershed outlet. Like many arid and semi-arid watersheds, the main reach of the Santa Cruz watershed, located on the Arizona-Mexico border, is discontinuous for most of the year except during large flood events, and therefore the flow characteristics at the outlet do not represent the entire watershed. Calibration is required at multiple locations along the Santa Cruz River to improve model reliability. The objective of this study was to best portray surface water flow in this semiarid watershed and evaluate the effect of multi-gage calibration on flow predictions. In this study, the Soil and Water Assessment Tool (SWAT) was calibrated at seven monitoring stations, which improved model performance and increased the reliability of flow predictions in the Santa Cruz watershed. The most sensitive parameters affecting flow were found to be the curve number (CN2), the soil evaporation compensation coefficient (ESCO), the threshold water depth in the shallow aquifer for return flow to occur (GWQMN), the base flow alpha factor (Alpha_Bf), and the effective hydraulic conductivity of the soil layer (Ch_K2). In comparison, when the model was established with a single calibration at the watershed outlet, flow predictions at the other monitoring gages were inaccurate. This study emphasizes the importance of multi-gage calibration for developing a reliable watershed model in arid and semiarid environments. The developed model, with further calibration of water quality parameters, will be an integral part of the Santa Cruz Watershed Ecosystem Portfolio Model (SCWEPM), an online decision support tool, to assess the impacts of climate change and urban growth in the Santa Cruz watershed.
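To make the multi-gage evaluation concrete, the sketch below scores a model at several gauges with the Nash-Sutcliffe efficiency, a statistic commonly used to judge SWAT calibrations. The gauge names, flow series, and error model are invented stand-ins, not output from the study.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        # NSE: 1 is a perfect fit; values below 0 are worse than the observed mean
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(0)
    # seven hypothetical gauges, each with a year of synthetic daily flows
    gauges = {f"gauge_{i}": rng.gamma(2.0, 3.0, size=365) for i in range(1, 8)}
    for name, obs in gauges.items():
        sim = obs + rng.normal(0.0, 1.0, size=obs.size)  # stand-in for model output
        print(f"{name}: NSE = {nash_sutcliffe(obs, sim):.2f}")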
Hukkerikar, Amol Shivajirao; Kalakul, Sawitree; Sarup, Bent; Young, Douglas M; Sin, Gürkan; Gani, Rafiqul
2012-11-26
The aim of this work is to develop group-contribution(+) (GC(+)) method based property models (combining the group-contribution (GC) method and the atom connectivity index (CI) method) to provide reliable estimations of environment-related properties of organic chemicals together with uncertainties of the estimated property values. For this purpose, a systematic methodology for property modeling and uncertainty analysis is used. The methodology includes a parameter estimation step to determine the parameters of the property models and an uncertainty analysis step to establish statistical information about the quality of the parameter estimation, such as the parameter covariance, the standard errors in predicted properties, and the confidence intervals. For parameter estimation, large data sets of experimentally measured property values for a wide range of chemicals (hydrocarbons, oxygenated chemicals, nitrogenated chemicals, polyfunctional chemicals, etc.) taken from the database of the US Environmental Protection Agency (EPA) and from the database of USEtox are used. For property modeling and uncertainty analysis, the Marrero and Gani GC method and the atom connectivity index method have been considered. In total, 22 environment-related properties, which include the fathead minnow 96-h LC(50), Daphnia magna 48-h LC(50), oral rat LD(50), aqueous solubility, bioconcentration factor, permissible exposure limit (OSHA-TWA), photochemical oxidation potential, global warming potential, ozone depletion potential, acidification potential, emission to urban air (carcinogenic and noncarcinogenic), emission to continental rural air (carcinogenic and noncarcinogenic), emission to continental fresh water (carcinogenic and noncarcinogenic), emission to continental seawater (carcinogenic and noncarcinogenic), emission to continental natural soil (carcinogenic and noncarcinogenic), and emission to continental agricultural soil (carcinogenic and noncarcinogenic) have been modeled and analyzed. The application of the developed property models for the estimation of environment-related properties and uncertainties of the estimated property values is highlighted through an illustrative example. The developed property models provide reliable estimates of environment-related properties needed to perform process synthesis, design, and analysis of sustainable chemical processes and allow one to evaluate the effect of uncertainties of estimated property values on the calculated performance of processes, giving useful insights into the quality and reliability of the design of sustainable processes.
Comparison of CEAS and Williams-type models for spring wheat yields in North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Barnett, T. L. (Principal Investigator)
1982-01-01
The CEAS and Williams-type yield models are both based on multiple regression analysis of historical time series data at CRD level. The CEAS model develops a separate relation for each CRD; the Williams-type model pools CRD data to regional level (groups of similar CRDs). Basic variables considered in the analyses are USDA yield, monthly mean temperature, monthly precipitation, and variables derived from these. The Williams-type model also used soil texture and topographic information. Technological trend is represented in both by piecewise linear functions of year. Indicators of yield reliability obtained from a ten-year bootstrap test of each model (1970-1979) demonstrate that the models are very similar in performance in all respects. Both models are about equally objective, adequate, timely, simple, and inexpensive. Both consider scientific knowledge on a broad scale but not in detail. Neither provides a good current measure of modeled yield reliability. The CEAS model is considered very slightly preferable for AgRISTARS applications.
[Development of a Japanese version of a short form of the Profile of Emotional Competence].
Nozaki, Yuki; Koyasu, Masuo
2015-06-01
Emotional competence refers to individual differences in the ability to appropriately identify, understand, express, regulate, and utilize one's own emotions and those of others. This study developed a Japanese version of a short form of the Profile of Emotional Competence, a measure that allows the comprehensive assessment of intra- and interpersonal emotional competence with fewer items, and investigated its reliability and validity. In Study 1, we selected items for the short version and compared it with the full scale in terms of scores, internal consistency, and validity. In Study 2, we examined the short form's test-retest reliability. The results supported the original two-factor model, and the measure had adequate reliability and validity. We discuss the construct validity and practical applicability of the short form of the Profile of Emotional Competence.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of the vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameter estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least squares algorithm and an off-line identification method are used to provide good initial values of the model parameters, ensuring filter stability and reducing the convergence time. Thirdly, an extended Kalman filter (EKF) is applied to estimate battery SOC and model parameters on line. Because the EKF is essentially a first-order Taylor approximation of the battery model and therefore contains inevitable model errors, a proportional-integral-based error adjustment technique is employed to improve the performance of the EKF method and correct the model parameters. Finally, experimental results on lithium-ion batteries indicate that the proposed EKF with proportional-integral-based error adjustment provides a robust and accurate battery model and on-line parameter estimation.
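A minimal numerical sketch of the scheme described above: a first-order RC equivalent circuit with an EKF tracking SOC from current and terminal voltage. All parameter values (R0, Rp, Cp, capacity, the OCV curve, and the load profile) are illustrative assumptions rather than the paper's identified values, and the proportional-integral correction is omitted for brevity.

    import numpy as np

    dt, q_cap = 1.0, 7200.0            # step [s]; capacity [A*s] (2 Ah), assumed
    r0, rp, cp = 0.05, 0.03, 1000.0    # ohmic / polarization parameters, assumed
    a = np.exp(-dt / (rp * cp))        # RC relaxation factor per step

    def ocv(soc):                      # assumed linear open-circuit-voltage curve
        return 3.2 + 0.9 * soc

    x = np.array([0.5, 0.0])           # EKF state: [SOC, polarization voltage Up]
    P = np.diag([0.1, 0.01])
    Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3   # process / measurement noise

    rng = np.random.default_rng(1)
    soc, up = 0.9, 0.0                 # "true" battery state for the simulation
    for _ in range(3600):
        cur = 1.0                      # constant 1 A discharge, illustrative
        soc -= dt * cur / q_cap
        up = a * up + rp * (1 - a) * cur
        v_meas = ocv(soc) - up - r0 * cur + rng.normal(0, 0.01)

        # EKF predict step
        F = np.array([[1.0, 0.0], [0.0, a]])
        x = np.array([x[0] - dt * cur / q_cap, a * x[1] + rp * (1 - a) * cur])
        P = F @ P @ F.T + Qn
        # EKF update with the measured terminal voltage
        H = np.array([0.9, -1.0])      # [d OCV / d SOC, d V / d Up]
        K = P @ H / (H @ P @ H + Rn)
        x = x + K * (v_meas - (ocv(x[0]) - x[1] - r0 * cur))
        P = P - np.outer(K, H @ P)

    print(f"true SOC = {soc:.3f}, EKF estimate = {x[0]:.3f}")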
A Synthetic Vision Preliminary Integrated Safety Analysis
NASA Technical Reports Server (NTRS)
Hemm, Robert; Houser, Scott
2001-01-01
This report documents efforts to analyze a sample of aviation safety programs, using the LMI-developed integrated safety analysis tool to determine the change in system risk resulting from Aviation Safety Program (AvSP) technology implementation. Specifically, we have worked to modify existing system safety tools to address the safety impact of synthetic vision (SV) technology. Safety metrics include reliability, availability, and resultant hazard. This analysis of SV technology is intended to be part of a larger effort to develop a model that is capable of "providing further support to the product design and development team as additional information becomes available". The reliability analysis portion of the effort is complete and is fully documented in this report. The simulation analysis is still underway; it will be documented in a subsequent report. The specific goal of this effort is to apply the integrated safety analysis to SV technology. This report also contains a brief discussion of data necessary to expand the human performance capability of the model, as well as a discussion of human behavior and its implications for system risk assessment in this modeling environment.
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that the maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and thus are not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties in mapping and to propose a reliable map. The methodology is based on a hierarchical Bayesian model of normally distributed site responses in space, where the mean (μ), site-specific variance (σ²), and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially autocorrelated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are made using Markov chain Monte Carlo sampling from the posterior distribution. The goal is to find reliable estimates of μ that are sensitive to the uncertainties. During initial trials, we observed that the tau (= 1/s²) parameter of the CAR prior controls the μ estimation. Using the constraint s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
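The shrinkage behaviour reported above (following the site mean at low uncertainty, converging to the model mean at high uncertainty) can be seen in a two-level normal model without any MCMC machinery. The sketch below computes the conditional posterior mean of one site's response for a range of site-specific variances; all numbers are invented.

    import numpy as np

    mu, s2 = 0.0, 1.0                       # global mean, between-sites variance
    site_mean, n = 2.0, 150                 # observed site mean, events per site
    for sigma2 in [0.1, 1.0, 10.0, 100.0]:  # site-specific variance
        w = (n / sigma2) / (n / sigma2 + 1.0 / s2)   # precision weight on the site
        post_mean = w * site_mean + (1 - w) * mu     # shrinks toward mu as w falls
        print(f"sigma^2 = {sigma2:6.1f} -> posterior mean = {post_mean:.3f}")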
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
A Model-Driven Development Method for Management Information Systems
NASA Astrophysics Data System (ADS)
Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki
Traditionally, a Management Information System (MIS) has been developed without formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed, based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. An experiment showed that the effort saved exceeds 30% of the total effort.
Peer Review Documents Related to the Evaluation of ...
BMDS is one of the Agency's premier tools for risk assessment; therefore, the validity and reliability of its statistical models are of paramount importance. This page provides links to peer reviews and expert summaries of the BMDS application and its models as they were developed and eventually released, documenting the rigorous review process undertaken to provide the best science tools available for statistical modeling.
NASA Astrophysics Data System (ADS)
Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.
2016-06-01
The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide competition has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model that considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values within a multi-objective modelling approach. Since technicians are expected to carry out both routine and stochastic maintenance work, we treat these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.
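As a sketch of the goal-programming step, the toy model below minimizes weighted deviations from two hypothetical goals (a productivity target and a workload ceiling) using scipy. The goals, coefficients, and weights are placeholders, not the paper's formulation.

    import numpy as np
    from scipy.optimize import linprog

    # variables z = [x1, x2, d1-, d1+, d2-, d2+]: two decision variables
    # (e.g. assigned workloads) plus under-/over-achievement deviations per goal
    c = np.array([0, 0, 1.0, 0, 0, 1.0])   # penalize productivity shortfall (d1-)
                                           # and workload excess (d2+)
    A_eq = np.array([[3, 2, 1, -1, 0, 0],  # 3*x1 + 2*x2 + d1- - d1+ = 12 (goal 1)
                     [1, 1, 0, 0, 1, -1]]) # x1 + x2 + d2- - d2+ = 5     (goal 2)
    b_eq = np.array([12.0, 5.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
    print("decisions:", res.x[:2], "deviations:", res.x[2:])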
Hydrologic Design in the Anthropocene
NASA Astrophysics Data System (ADS)
Vogel, R. M.; Farmer, W. H.; Read, L.
2014-12-01
In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWMs), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations that ignore model error produce output which cannot reproduce the statistical properties of the observations it is intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events. Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
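The model-error point lends itself to a small demonstration: a deterministic model that captures only the smooth signal is underdispersed, and resampling calibration residuals back onto its output restores the observed variability. The flows below are synthetic placeholders, not data from any watershed.

    import numpy as np

    rng = np.random.default_rng(9)
    signal = rng.gamma(2.0, 5.0, 1000)             # "true" hydrologic signal
    obs = signal + rng.normal(0, 3.0, 1000)        # observed flows
    dwm = signal                                   # stand-in DWM: captures the
                                                   # signal, misses the error part
    resid = obs - dwm                              # calibration residuals
    sim = dwm + rng.choice(resid, size=dwm.size)   # reincorporate model error
    print("std obs / dwm / sim:",
          obs.std().round(2), dwm.std().round(2), sim.std().round(2))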
The SURE reliability analysis program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
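SURE bounds death-state probabilities of semi-Markov models; as a simple point of reference, the exact death-state probability of a pure Markov special case (a hypothetical two-unit system with an absorbing failure state, not a SURE input model) can be computed with a matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    lam = 1e-4                                # per-unit failure rate [1/h], assumed
    Q = np.array([[-2 * lam, 2 * lam, 0.0],   # state 0: both units working
                  [0.0, -lam, lam],           # state 1: one unit left
                  [0.0, 0.0, 0.0]])           # state 2: system failed (absorbing)
    p0 = np.array([1.0, 0.0, 0.0])
    for t in [10.0, 100.0, 1000.0]:
        p_fail = (p0 @ expm(Q * t))[2]        # death-state probability at time t
        print(f"t = {t:6.0f} h: P(system failure) = {p_fail:.3e}")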
NASA Astrophysics Data System (ADS)
Lu, Mengqian; Lall, Upmanu; Robertson, Andrew W.; Cook, Edward
2017-03-01
Streamflow forecasts at multiple time scales provide a new opportunity for reservoir management to address competing objectives. Market instruments such as forward contracts with specified reliability are considered as a tool that may help address the perceived risk associated with the use of such forecasts in lieu of traditional operation and allocation strategies. A water allocation process that enables multiple contracts for water supply and hydropower production with different durations, while maintaining a prescribed level of flood risk reduction, is presented. The allocation process is supported by an optimization model that considers multi-time-scale ensemble forecasts of monthly streamflow and flood volume over the upcoming season and year, the desired reliability, and the pricing of proposed contracts for hydropower and water supply. It solves for the size of contracts at each reliability level that can be allocated for each future period, while meeting target end-of-period reservoir storage with a prescribed reliability. The contracts may be insurable, given that their reliability is verified through retrospective modeling. The process can allow reservoir operators to overcome their concerns as to the appropriate skill of probabilistic forecasts, while providing water users with short-term and long-term guarantees as to how much water or energy they may be allocated. An application of the optimization model to the Bhakra Dam, India, provides an illustration of the process. The issues of forecast skill and contract performance are examined. A field engagement of the idea would be useful to develop a real-world perspective and needs a suitable institutional environment.
Hu, Guo-Qing; Rao, Ke-Qin; Sun, Zhen-Qiu
2008-12-01
To develop a public health emergency capacity questionnaire for Chinese local governments. Literature reviews, conceptual modelling, stakeholder analysis, focus groups, interviews, and the Delphi technique were employed together to develop the questionnaire. Classical test theory and a case study were used to assess the reliability and validity. (1) A two-dimension conceptual model was built, and a public health emergency preparedness and response capacity questionnaire with 10 dimensions and 204 items was developed. (2) Reliability and validity results. Internal consistency: except for dimensions 3 and 8, the Cronbach's alpha coefficients of the dimensions were higher than 0.60; the alpha coefficients of dimensions 3 and 8 were 0.59 and 0.39, respectively. Content validity: the questionnaire was endorsed by the investigatees. Construct validity: the Spearman correlation coefficients among the 10 dimensions fluctuated around 0.50, ranging from 0.26 to 0.75 (P<0.05). Discriminant validity: comparisons of the 10 dimensions among 4 provinces did not show statistical significance using one-way analysis of variance (P>0.05). Criterion-related validity: the case study showed significant differences among the 10 dimensions in Beijing between February 2003 (before the SARS event) and November 2005 (after the SARS event). The public health emergency preparedness and response capacity questionnaire is a reliable and valid tool that can be used in all provinces and municipalities in China.
Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program
NASA Astrophysics Data System (ADS)
Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward
1993-08-01
A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.
Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH) program
NASA Technical Reports Server (NTRS)
Fayette, Daniel F.; Speicher, Patricia; Stoklosa, Mark J.; Evans, Jillian V.; Evans, John W.; Gentile, Mike; Pagel, Chuck A.; Hakim, Edward
1993-01-01
A joint military-commercial effort to evaluate multichip module (MCM) structures is discussed. The program, Reliability Technology to Achieve Insertion of Advanced Packaging (RELTECH), has been designed to identify the failure mechanisms that are possible in MCM structures. The RELTECH test vehicles, technical assessment task, product evaluation plan, reliability modeling task, accelerated and environmental testing, and post-test physical analysis and failure analysis are described. The information obtained through RELTECH can be used to address standardization issues, through development of cost effective qualification and appropriate screening criteria, for inclusion into a commercial specification and the MIL-H-38534 general specification for hybrid microcircuits.
Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R
2011-04-15
In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. The method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process is thus greatly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges.
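The chance-constrained ingredient has a compact deterministic equivalent when the uncertain term is Gaussian: a constraint P(a*x <= b) >= 1 - alpha with b ~ N(mu, sigma^2) reduces to a*x <= mu + sigma * Phi^{-1}(alpha). The sketch below evaluates this bound for a few violation-risk levels; the numbers are illustrative, not taken from the Tianjin case study.

    from scipy.stats import norm

    a, mu, sigma = 2.0, 100.0, 10.0            # loading coefficient and assumed
                                               # random capacity b ~ N(mu, sigma^2)
    for alpha in [0.01, 0.05, 0.10]:           # admissible violation risk
        x_max = (mu + sigma * norm.ppf(alpha)) / a   # deterministic equivalent
        print(f"alpha = {alpha:.2f}: x <= {x_max:.2f}")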
A measure of early physical functioning (EPF) post-stroke.
Finch, Lois E; Higgins, Johanne; Wood-Dauphinee, Sharon; Mayo, Nancy E
2008-07-01
To develop a comprehensive measure of Early Physical Functioning (EPF) post-stroke, quantified through Rasch analysis and conceptualized using the International Classification of Functioning, Disability and Health (ICF). An observational cohort study. A cohort of 262 subjects (mean age 71.6 (standard deviation 12.5) years) hospitalized post-acute stroke. Functional assessments were made within 3 days of stroke with items from valid and reliable indices commonly utilized to evaluate stroke survivors. Information on important variables was also collected. Principal component and Rasch analyses confirmed the factor structure and dimensionality of the measure. Rasch analysis combined items across ICF components to develop the measure. Items were deleted iteratively; those retained fit the model and were related to the construct. Reliability and validity were assessed. A 38-item unidimensional measure of EPF met all Rasch model requirements. The item difficulty matched the person ability (mean person measure: -0.31; standard error 0.37 logits), and the reliability of the person-item hierarchy was excellent at 0.97. Initial validity was adequate. The 38-item EPF measure expands the range of assessment post-acute stroke; it covers a broad spectrum of difficulty with good initial psychometric properties that, once revalidated, can assist in planning and evaluating early interventions.
La Padula, Simone; Hersant, Barbara; SidAhmed, Mounia; Niddam, Jeremy; Meningaud, Jean Paul
2016-07-01
Most patients requesting aesthetic rejuvenation treatment expect to look healthier and younger. Some scales for ageing assessment have been proposed, but none is focused on patient age prediction. The aim of this study was to develop and validate a new facial rating scale assessing the severity of facial ageing signs. One thousand Caucasian patients were included and assessed. The Rasch model was used as part of the validation process. A score was attributed to each patient, based on the scales we developed. The correlation between real age and the scores obtained, the inter-rater reliability and the test-retest reliability were analysed. The objective was to develop a tool enabling the assignment of a patient to a specific age range based on the calculated score. All scales exceeded criteria for acceptability, reliability and validity. Real age correlated strongly with the total facial score in both sex groups. The test-retest reliability confirmed this strong correlation. We developed a facial ageing scale that could be a useful tool to assess patients before and after rejuvenation treatment and an important new metric for use in facial rejuvenation and regenerative clinical research.
Suen, Yi-Nam; Cerin, Ester; Barnett, Anthony; Huang, Wendy Y J; Mellecker, Robin R
2017-09-01
Valid instruments of parenting practices related to children's physical activity (PA) are essential to understand how parents affect preschoolers' PA. This study developed and validated a questionnaire of PA-related parenting practices for Chinese-speaking parents of preschoolers in Hong Kong. Parents (n = 394) completed a questionnaire developed using findings from formative qualitative research and literature searches. Test-retest reliability was determined on a subsample (n = 61). Factorial validity was assessed using confirmatory factor analysis. Subscale internal consistency was determined. The scale of parenting practices encouraging PA comprised 2 latent factors: Modeling, structure and participatory engagement in PA (23 items), and Provision of appropriate places for child's PA (4 items). The scale of parenting practices discouraging PA encompassed 4 latent factors: Safety concern/overprotection (6 items), Psychological/behavioral control (5 items), Promoting inactivity (4 items), and Promoting screen time (2 items). Test-retest reliabilities were moderate to excellent (0.58 to 0.82), and internal subscale reliabilities were acceptable (0.63 to 0.89). We developed a theory-based questionnaire for assessing PA-related parenting practices among Chinese-speaking parents of Hong Kong preschoolers. While some items were context and culture specific, many were similar to those previously found in other populations, indicating a degree of construct generalizability across cultures.
NASA Astrophysics Data System (ADS)
Brekke, L. D.; Scott, J.; Ferguson, I. M.; Arnold, J.; Raff, D. A.; Webb, R. S.
2012-12-01
Water managers need to understand the applicability of climate projection information available for decision-support at the scale of their applications. Applicability depends on information reliability and relevance. This need to understand applicability stems from expectations that entities rationalize adaptation investments or decisions to delay investment. It is also occurring at a time when new global climate projections are being released through the World Climate Research Programme Coupled Model Intercomparison Project phase 5 (CMIP5), which introduces new information opportunities and interpretation challenges. This project involves an interagency collaboration to evaluate the applicability of CMIP5 projections for use in water and environmental resources planning. The overarching goal is to develop and demonstrate a framework that involves dual evaluations of relevance and reliability informing an ultimate discussion and judgment of applicability, which is expected to vary with decision-making context. The framework is being developed and demonstrated within the context of reservoir systems management in California's Sacramento and San Joaquin River basins. The relevance evaluation focuses on identifying the climate variables and statistical measures relevant to long-term management questions, which may depend on satisfying multiple objectives. Past studies' results are being considered in this evaluation, along with new results from system sensitivity analyses conducted through this effort. The reliability evaluation focuses on the CMIP5 climate models' ability to simulate past conditions relative to observed references. The evaluation is being conducted across the global domain using a large menu of climate variables and statistical measures, leveraging lessons learned from similar evaluations of CMIP3 climate models. The global focus addresses a broader project goal of producing a web resource that can serve reliability information to applicability discussions around the world, with evaluation results being served through a web-portal similar to that developed by NOAA/CIRES to serve CMIP3 information on future climate extremes (http://www.esrl.noaa.gov/psd/ipcc/extremes/). The framework concludes with an applicability discussion informed by relevance and reliability results. The goal is to observe the discussion process and identify features, choice points, and challenges that might be summarized and shared with other resource management groups facing applicability questions. This presentation will discuss the project framework and preliminary results. In addition to considering CMIP5 21st century projection information, the framework is being developed to support evaluation of CMIP5 decadal predictability experiment simulations and reconcile those simulations with 21st century projections. The presentation will also discuss implications of considering the applicability of bias-corrected and downscaled information within this framework.
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model will be shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
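A Monte Carlo sketch of the ideas above: vessel strengths scatter, strength decays mostly late in life, and a proof load culls the weak tail before service. The decay law, Weibull scatter, and stress levels are invented for illustration, not the paper's fitted model.

    import numpy as np

    rng = np.random.default_rng(42)
    s0 = rng.weibull(20.0, 100_000)          # initial strengths (normalized)
    proof, service = 0.85, 0.60              # proof and service stress levels
    alive = s0 > proof                        # proof test removes weak vessels

    def strength(s0, t, t_ref=1e5, p=8.0):
        # assumed decay law: losses concentrated late in life (large p)
        return s0 * (1.0 - (t / t_ref) ** p)

    for t in [1e3, 5e4, 8e4, 9e4]:
        rel = np.mean(strength(s0[alive], t) > service)
        print(f"t = {t:8.0f} h: reliability = {rel:.5f}")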
Evapotranspiration and canopy resistance at an undeveloped prairie in a humid subtropical climate
Bidlake, W.R.
2002-01-01
Reliable estimates of evapotranspiration from areas of wildland vegetation are needed for many types of water-resource investigations. However, little is known about surface fluxes from many areally important vegetation types, and relatively few comparisons have been made to examine how well evapotranspiration models can predict evapotranspiration for soil-, climate-, or vegetation-types that differ from those under which the models have been calibrated. In this investigation at a prairie site in west-central Florida, latent heat flux (λE) computed from the energy balance and alternatively by eddy covariance during a 15-month period differed by 4 percent and 7 percent on hourly and daily time scales, respectively. Annual evapotranspiration computed from the energy balance and by eddy covariance were 978 and 944 mm, respectively. An hourly Penman-Monteith (PM) evapotranspiration model with stomatal control predicated on water-vapor-pressure deficit at canopy level, incoming solar radiation intensity, and soil water deficit was developed and calibrated using surface fluxes from eddy covariance. Model-predicted λE agreed closely with λE computed from the energy balance except when moisture from dew or precipitation covered vegetation surfaces. Finally, an hourly PM model developed for an Amazonian pasture predicted λE for the Florida prairie with unexpected reliability. Additional comparisons of PM-type models that have been developed for differing types of short vegetation could aid in assessing interchangeability of such models.
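For reference, the hourly Penman-Monteith form referred to above can be written in a few lines; the constants are common textbook values and the example inputs are plausible stand-ins, not the calibrated model of the study.

    import numpy as np

    def penman_monteith(rn, g, t_air, vpd_kpa, ra, rs):
        # latent heat flux [W/m^2] from net radiation rn and soil heat flux g
        # [W/m^2], air temperature [deg C], vapor pressure deficit [kPa], and
        # aerodynamic / bulk surface resistances ra, rs [s/m]
        rho_cp = 1.2 * 1013.0                        # air density * specific heat
        gamma = 0.066                                # psychrometric const [kPa/K]
        es = 0.6108 * np.exp(17.27 * t_air / (t_air + 237.3))
        delta = 4098.0 * es / (t_air + 237.3) ** 2   # slope of saturation curve
        return ((delta * (rn - g) + rho_cp * vpd_kpa / ra)
                / (delta + gamma * (1.0 + rs / ra)))

    print(f"lambda-E = {penman_monteith(500.0, 50.0, 28.0, 1.5, 40.0, 70.0):.0f} W/m^2")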
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
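The "traditional" calibration the authors critique is essentially inverse prediction from a fitted standard curve, as the sketch below shows on simulated standards; the Bayesian mixed-model alternative additionally pools replicate assays and lets precision vary with density. Data here are invented, not from the paper.

    import numpy as np

    rng = np.random.default_rng(7)
    log_dens = np.repeat(np.arange(1.0, 6.0), 3)           # standards, 3 replicates
    signal = 2.0 + 1.5 * log_dens + rng.normal(0, 0.3, log_dens.size)
    slope, intercept = np.polyfit(log_dens, signal, 1)     # fit the standard curve

    y_unknown = np.array([5.1, 8.3])                       # new assay readings
    est_density = 10 ** ((y_unknown - intercept) / slope)  # inverse prediction
    print("estimated densities:", est_density)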
Salum, Giovanni A; DeSousa, Diogo Araújo; Manfro, Gisele Gus; Pan, Pedro Mario; Gadelha, Ary; Brietzke, Elisa; Miguel, Eurípedes Constantino; Mari, Jair J; do Rosário, Maria Conceição; Grassi-Oliveira, Rodrigo
2016-01-01
To investigate the validity and reliability of a multi-informant approach to measuring child maltreatment (CM) comprising seven questions assessing CM administered to children and their parents in a large community sample. Our sample comprised 2,512 children aged 6 to 12 years and their parents. Child maltreatment (CM) was assessed with three questions answered by the children and four answered by their parents, covering physical abuse, physical neglect, emotional abuse and sexual abuse. Confirmatory factor analysis was used to compare the fit indices of different models. Convergent and divergent validity were tested using parent-report and teacher-report scores on the Strengths and Difficulties Questionnaire. Discriminant validity was investigated using the Development and Well-Being Assessment to divide subjects into five diagnostic groups: typically developing controls (n = 1,880), fear disorders (n = 108), distress disorders (n = 76), attention deficit hyperactivity disorder (n = 143) and oppositional defiant disorder/conduct disorder (n = 56). A higher-order model with one higher-order factor (child maltreatment) encompassing two lower-order factors (child report and parent report) exhibited the best fit to the data and this model's reliability results were acceptable. As expected, child maltreatment was positively associated with measures of psychopathology and negatively associated with prosocial measures. All diagnostic category groups had higher levels of overall child maltreatment than typically developing children. We found evidence for the validity and reliability of this brief measure of child maltreatment using data from a large survey combining information from parents and their children.
Vaingankar, Janhavi Ajit; Subramaniam, Mythily; Abdin, Edimansyah; Picco, Louisa; Chua, Boon Yiang; Eng, Goi Khia; Sambasivam, Rajeswari; Shafie, Saleha; Zhang, Yunjue; Chong, Siow Ann
2014-06-01
The 47-item positive mental health (PMH) instrument measures the level of PMH in multiethnic adult Asian populations. This study aimed to (1) develop a short PMH instrument and (2) establish its validity and reliability among the adult Singapore population. Two separate studies were conducted among adult community-dwelling Singapore residents of Chinese, Malay or Indian ethnicity where participants completed self-administered questionnaires. In the first study, secondary data analysis was conducted using confirmatory factor analysis (CFA) to shorten the PMH instrument. In the second study, the newly developed short PMH instrument and other scales were administered to 201 residents to establish its factor structure, validity and reliability. A 20-item short PMH instrument fulfilling a higher-order six-factor structure was developed following secondary analysis. The mean age of the participants in the second study was 41 years and about 53% were women. One item with poor factor loading was further removed to generate a 19-item version of the PMH instrument. CFA demonstrated a first-order six-factor model of the short PMH instrument. The PMH-19 instrument and its subscales fulfilled criterion validity hypotheses. Internal consistency and test-retest reliability of the PMH-19 instrument were high (Cronbach's α coefficient = 0.87; intraclass correlation coefficient = 0.93, respectively). The 19-item PMH instrument is multidimensional, valid and reliable, and most importantly, with its reduced administration time, the short PMH instrument can be used to measure and evaluate PMH in Asian communities.
Development and evaluation of height diameter at breast models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable models and multivariate models, according to the number of independent variables. The results show that the allometric equation of tree height with diameter at breast height as the independent variable can better reflect the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
Development and evaluation of height diameter at breast models for native Chinese Metasequoia
Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable models and multivariate models, according to the number of independent variables. The results show that the allometric equation of tree height with diameter at breast height as the independent variable can better reflect the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600
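As a sketch of how such height-dbh models are fitted, the code below estimates one common allometric form, h = 1.3 + a*dbh^b, on simulated stand-in data; this form is just one plausible candidate of the kind the authors compared, and the data are not theirs.

    import numpy as np
    from scipy.optimize import curve_fit

    def height_model(dbh, a, b):
        return 1.3 + a * dbh ** b            # 1.3 m = breast height offset

    rng = np.random.default_rng(11)
    dbh = rng.uniform(5, 120, 500)           # simulated diameters [cm]
    height = 1.3 + 1.1 * dbh ** 0.85 + rng.normal(0, 2.0, dbh.size)

    (a, b), cov = curve_fit(height_model, dbh, height, p0=(1.0, 0.8))
    print(f"fitted a = {a:.3f}, b = {b:.3f}")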
Cost effective management of space venture risks
NASA Technical Reports Server (NTRS)
Giuntini, Ronald E.; Storm, Richard E.
1986-01-01
The development of a model for the cost-effective management of space venture risks is discussed. The risk assessment and control program of insurance companies is examined. A simplified system development cycle is described, consisting of a conceptual design phase, a preliminary design phase, a final design phase, a construction phase, and a system operations and maintenance phase. The model incorporates insurance safety-risk methods together with the reliability engineering and testing practices used in the development of large aerospace and defense systems.
2010-05-01
prenatal stress as a factor in vulnerability to develop PTSD upon adult exposure to traumatic stress. Thus, although the duration of this Concept...including fear conditioning and extinction; and 3) test the combined drug treatment using the established model. Although we developed a reliable test...our lab, but had to be modified and adjusted to suit the time frame required by the design of these experiments. Others we had to develop, set up
Tabuse, Hideaki; Kalali, Amir; Azuma, Hideki; Ozaki, Norio; Iwata, Nakao; Naitoh, Hiroshi; Higuchi, Teruhiko; Kanba, Shigenobu; Shioe, Kunihiko; Akechi, Tatsuo; Furukawa, Toshi A
2007-09-30
The Hamilton Rating Scale for Depression (HAMD) is the de facto international gold standard for the assessment of depression. There are some criticisms, however, especially with regard to its inter-rater reliability, due to the lack of standardized questions or explicit scoring procedures. The GRID-HAMD was developed to provide standardized explicit scoring conventions and a structured interview guide for administration and scoring of the HAMD. We developed the Japanese version of the GRID-HAMD and examined its inter-rater reliability among experienced and inexperienced clinicians (n=70), how rater characteristics may affect it, and how training can improve it in the course of a model training program using videotaped interviews. The results showed that the inter-rater reliability of the GRID-HAMD total score was excellent to almost perfect and those of most individual items were also satisfactory to excellent, both with experienced and inexperienced raters, and both before and after the training. With its standardized definitions, questions and detailed scoring conventions, the GRID-HAMD appears to be the best achievable set of interview guides for the HAMD and can provide a solid tool for highly reliable assessment of depression severity.
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
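A minimal sketch of the component-to-system step: Beta posteriors for each component's success probability, combined by Monte Carlo into a series-system reliability distribution. The test counts below are invented, and SRFYDO's age and usage-covariate modeling is not represented.

    import numpy as np

    rng = np.random.default_rng(3)
    components = {"comp_A": (48, 2), "comp_B": (95, 5), "comp_C": (29, 1)}
    draws = []
    for successes, failures in components.values():
        # Beta(1 + s, 1 + f): a uniform prior updated by the test data
        draws.append(rng.beta(1 + successes, 1 + failures, size=50_000))
    system = np.prod(draws, axis=0)          # series system: all must work
    lo, med, hi = np.percentile(system, [5, 50, 95])
    print(f"system reliability ~ {med:.3f} (90% interval {lo:.3f}-{hi:.3f})")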
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
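The Fisher-transformation approximation under scrutiny can be coded directly. The sketch below uses Z = 0.5*ln[(1 + (k-1)rho)/(1 - rho)] with the approximate variance k/(2(k-1)(n-2)) reported in this literature; treat both the variance formula and the resulting powers as the approximation being reevaluated, not as the paper's exact procedure.

    from math import log, sqrt
    from scipy.stats import norm

    def fisher_z(rho, k):
        return 0.5 * log((1 + (k - 1) * rho) / (1 - rho))

    def approx_power(rho0, rho1, n, k, alpha=0.05):
        # two-sided test of H0: ICC = rho0 when the true ICC is rho1
        se = sqrt(k / (2 * (k - 1) * (n - 2)))
        delta = abs(fisher_z(rho1, k) - fisher_z(rho0, k))
        return norm.cdf(delta / se - norm.ppf(1 - alpha / 2))

    for n in [20, 40, 80]:
        print(f"n = {n}: power = {approx_power(0.6, 0.8, n, k=3):.3f}")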
Recombination of open-f-shell tungsten ions
NASA Astrophysics Data System (ADS)
Krantz, C.; Badnell, N. R.; Müller, A.; Schippers, S.; Wolf, A.
2017-03-01
We review experimental and theoretical efforts aimed at a detailed understanding of the recombination of electrons with highly charged tungsten ions characterised by an open 4f sub-shell. Highly charged tungsten occurs as a plasma contaminant in ITER-like tokamak experiments, where it acts as an unwanted cooling agent. Modelling of the charge state populations in a plasma requires reliable thermal rate coefficients for charge-changing electron collisions. The electron recombination of medium-charged tungsten species with open 4f sub-shells is especially challenging to compute reliably. Storage-ring experiments have been conducted that yielded recombination rate coefficients at high energy resolution and well-understood systematics. Significant deviations compared to simplified, but prevalent, computational models have been found. A new class of ab initio numerical calculations has been developed that provides reliable predictions of the total plasma recombination rate coefficients for these ions.
NASA Astrophysics Data System (ADS)
Zammouri, Mounira; Ribeiro, Luis
2017-05-01
A groundwater flow model of the transboundary Saharan aquifer system was developed in 2003 and is used for management and decision-making by Algeria, Tunisia and Libya. In decision-making processes, reliability plays a decisive role. This paper examines the reliability of the Saharan aquifer model, aiming to detect the shortcomings of a model considered properly calibrated. After presenting the calibration results of the 2003 modelling effort, the uncertainty in the model arising from the scarcity of groundwater-level and transmissivity data is analyzed using kriging and a stochastic approach. Structural analyses of the steady-state piezometry and of the logarithms of transmissivity were carried out for the Continental Intercalaire (CI) and the Complexe Terminal (CT) aquifers. The available data (piezometry and transmissivity) were compared to the calculated values using a geostatistical approach. Using a stochastic approach, 2500 realizations of a log-normal random transmissivity field of the CI aquifer were performed to assess the errors in the model output due to the uncertainty in transmissivity. Two types of poor calibration are shown. In some regions, calibration should be improved using the available data. In other areas, model refinement requires gathering new data to enhance knowledge of the aquifer system. The stochastic simulation results showed that the drawdowns calculated for 2050 could be higher than the values predicted by the calibrated model.
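The stochastic step lends itself to a compact sketch: draw log-normal transmissivity fields and push them through a model to obtain a spread of predicted 2050 drawdowns. The field statistics and the toy response function below are placeholders for the calibrated CI-aquifer model.

    import numpy as np

    rng = np.random.default_rng(5)
    n_real, n_cells = 2500, 100
    log_t = rng.normal(loc=-3.0, scale=0.8, size=(n_real, n_cells))  # log10 T
    transmissivity = 10.0 ** log_t                                   # log-normal field

    # placeholder response: a drawdown-like quantity decreasing with mean T
    drawdown_2050 = 50.0 / (1.0 + transmissivity.mean(axis=1) / 1e-3)
    print("drawdown 5-95% range:", np.percentile(drawdown_2050, [5, 95]).round(1))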
Improving reliability of a residency interview process.
Peeters, Michael J; Serres, Michelle L; Gundrum, Todd E
2013-10-14
To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; reproducibly, the form separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest variation source was candidates, while content specificity was the next largest variation source. The phase 2 g-coefficient was 0.787, while the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity.
Earthquake Source Inversion Blindtest: Initial Results and Further Developments
NASA Astrophysics Data System (ADS)
Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.
2007-12-01
Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation, and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are integral to better understanding earthquake source physics and to improving seismic hazard assessment. Therefore it is timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise that rigorously assesses the performance and reliability of current inversion methods, and to discuss future developments.
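One simple way to quantify the (dis-)similarity mentioned above is a zero-lag correlation between gridded slip models. The sketch below is an illustrative metric on synthetic arrays; it is not the blind test's actual misfit criterion.

```python
# Sketch of one plausible similarity metric for comparing an inverted slip
# distribution against the blind-test input model on a common fault grid.
import numpy as np

def slip_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two gridded slip models."""
    a, b = a.ravel(), b.ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(1)
input_model = rng.gamma(2.0, 0.5, (10, 20))             # "true" slip (m), synthetic
inverted = input_model + rng.normal(0, 0.3, (10, 20))   # one team's solution
print(f"correlation = {slip_correlation(input_model, inverted):.2f}")
```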
Modeling Student Software Testing Processes: Attitudes, Behaviors, Interventions, and Their Effects
ERIC Educational Resources Information Center
Buffardi, Kevin John
2014-01-01
Effective software testing identifies potential bugs and helps correct them, producing more reliable and maintainable software. As software development processes have evolved, incremental testing techniques have grown in popularity, particularly with introduction of test-driven development (TDD). However, many programmers struggle to adopt TDD's…
Describing the Climate of Student Organizations: The Student Organization Environment Scales.
ERIC Educational Resources Information Center
Winston, Roger B., Jr.; Bledsoe, Tyrone; Goldstein, Adam R.; Wisbey, Martha E.; Street, James L.; Brown, Steven R.; Goyen, Kenneth D.; Rounds, Linda E.
1997-01-01
Using M. R. Weisbord's model of organizational diagnosis, researchers developed the Student Organization Environment Scales to measure students' perceptions of the psychosocial environment or climate of college student organizations. Development of the instrument is described and estimates of its reliability and validity are reported. Describes…
Developing Attitude Scale, Reliability and Validity for Pre-Service Teachers towards Drama Lesson
ERIC Educational Resources Information Center
Çelik, Özkan; Bozdemir, Hafife; Uyanik, Gökhan
2016-01-01
The purpose of this study is to develop an attitude scale for pre-service teachers towards drama lesson. Survey model was used in study. The sample of study consisted of 258 pre-service teachers. "Attitude scale towards drama lesson for pre-service teachers" was developed and used as data collection tool. Exploratory and confirmatory…
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1993-01-01
Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills, which lack facilities for the collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge about both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables in order to predict a larger set of variables. To this end, we introduce a multivariate linear regression model and test its applicability using two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and the same twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables carry comparatively low analytical costs relative to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials. The model could become a standard screening tool for old landfills if its applicability and reliability are tested in additional case studies.
ERIC Educational Resources Information Center
Raphael, Dennis; And Others
1996-01-01
A conceptual model of quality of life, developed at the Centre for Health Promotion at the University of Toronto (Canada), and associated instrumentation for collecting data from persons with developmental disabilities are presented. Results from a preliminary study with 41 participants support the reliability and validity of the model's…
Test Platforms for Model-Based Flight Research
NASA Astrophysics Data System (ADS)
Dorobantu, Andrei
Demonstrating the reliability of flight control algorithms is critical to integrating unmanned aircraft systems into the civilian airspace. For many potential applications, design and certification of these algorithms will rely heavily on mathematical models of the aircraft dynamics. Therefore, the aerospace community must develop flight test platforms to support the advancement of model-based techniques. The University of Minnesota has developed a test platform dedicated to model-based flight research for unmanned aircraft systems. This thesis provides an overview of the test platform and its research activities in the areas of system identification, model validation, and closed-loop control for small unmanned aircraft.
Peer Review of EPA's Draft BMDS Document: Exponential ...
BMDS is one of the Agency's premier tools for risk assessment; the validity and reliability of its statistical models are therefore of paramount importance. This page provides links to peer reviews of the BMDS applications and models as they were developed and eventually released, documenting the rigorous review process taken to provide the best science tools available for statistical modeling.
Go, Kristina L; Delitto, Daniel; Judge, Sarah M; Gerber, Michael H; George, Thomas J; Behrns, Kevin E; Hughes, Steven J; Judge, Andrew R; Trevino, Jose G
2017-07-01
Limitations associated with current animal models serve as a major obstacle to reliable preclinical evaluation of therapies in pancreatic cancer (PC). In an effort to develop more reliable preclinical models, we have recently established a subcutaneous patient-derived xenograft (PDX) model. However, critical aspects of PC responsible for its highly lethal nature, such as the development of distant metastasis and cancer cachexia, remain underrepresented in the flank PDX model. The purpose of this study was to evaluate the degree to which an orthotopic PDX model of PC recapitulates these aspects of the human disease. Human PDX-derived PC tumors were implanted directly into the pancreas of NOD.Cg-Prkdc Il2rg/SzJ mice. Tumor growth, metastasis, and muscle wasting were then evaluated. Orthotopically implanted PDX-derived tumors consistently incorporated into the murine pancreatic parenchyma, metastasized to both the liver and lungs, and induced muscle wasting directly proportional to the size of the tumor, consistent with the cancer cachexia syndrome. Through the orthotopic implantation technique described, we demonstrate a highly reproducible model that recapitulates both local and systemic aspects of human PC.
NASA Astrophysics Data System (ADS)
Trachtenberg, I.
How a reliability model might be developed with new data from accelerated stress testing, failure mechanisms, process control monitoring, and test structure evaluations is illustrated. The effects of temperature acceleration on operating life are discussed. Test structures that further accelerate the failure rate are discussed. Corrosion testing is addressed. The uncoated structure is encapsulated in a variety of mold compounds and subjected to pressure-cooker testing.
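The temperature-acceleration idea is conventionally captured by an Arrhenius model. The sketch below computes an acceleration factor under assumed values for the activation energy and the two temperatures; none of the numbers come from the report.

```python
# Hedged sketch of Arrhenius temperature acceleration. Ea and both
# temperatures are assumed illustration values.
import math

K_B = 8.617e-5                    # Boltzmann constant, eV/K
Ea = 0.7                          # assumed activation energy, eV
T_use, T_stress = 328.0, 398.0    # use vs. stress temperatures, K

af = math.exp((Ea / K_B) * (1.0 / T_use - 1.0 / T_stress))
print(f"acceleration factor = {af:.1f}")  # stress hours per use-hour
```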
Integrating O/S models during conceptual design, part 2
NASA Technical Reports Server (NTRS)
Ebeling, Charles E.
1994-01-01
This report documents the procedures for utilizing and maintaining the Reliability & Maintainability Model (RAM) developed by the University of Dayton for the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) under NASA research grant NAG-1-1327. The purpose of the grant is to provide support to NASA in establishing operational and support parameters and costs of proposed space systems. As part of this research objective, the model described here was developed. Additional documentation concerning the development of this model may be found in Part 1 of this report. This is the second part of a three-part technical report.
Stress and Reliability Analysis of a Metal-Ceramic Dental Crown
NASA Technical Reports Server (NTRS)
Anusavice, Kenneth J; Sokolowski, Todd M.; Hojjatie, Barry; Nemeth, Noel N.
1996-01-01
Interaction of mechanical and thermal stresses with the flaws and microcracks within the ceramic region of metal-ceramic dental crowns can result in catastrophic or delayed failure of these restorations. The objective of this study was to determine the combined influence of induced functional stresses and pre-existing flaws and microcracks on the time-dependent probability of failure of a metal-ceramic molar crown. A three-dimensional finite element model of a porcelain-fused-to-metal (PFM) molar crown was developed using the ANSYS finite element program. The crown consisted of a body porcelain, opaque porcelain, and a metal substrate. The model had a 300 Newton load applied perpendicular to one cusp, a load of 300 Newtons applied at 30 degrees from the perpendicular load case, directed toward the center, and a 600 Newton vertical load. Ceramic specimens were subjected to a biaxial flexure test and the load-to-failure of each specimen was measured. The results of the finite element stress analysis and the flexure tests were incorporated in the NASA-developed CARES/LIFE program to determine the Weibull and fatigue parameters and the time-dependent fracture reliability of the PFM crown. CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program.
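In its simplest fast-fracture form, the Weibull portion of a CARES/LIFE-style analysis reduces to a two-parameter Weibull failure probability. The sketch below uses assumed values for the Weibull modulus and characteristic strength, not the study's fitted parameters.

```python
# Minimal sketch of a two-parameter Weibull failure probability,
# Pf = 1 - exp[-(sigma/sigma_0)^m]. m and sigma_0 are assumed values.
import numpy as np

m = 10.0          # assumed Weibull modulus of the porcelain
sigma_0 = 80.0    # assumed characteristic strength, MPa

def failure_probability(sigma):
    """Fast-fracture failure probability at applied stress sigma (MPa)."""
    return 1.0 - np.exp(-(np.asarray(sigma) / sigma_0) ** m)

for s in (40.0, 60.0, 80.0):
    print(f"sigma = {s:5.1f} MPa -> Pf = {failure_probability(s):.4f}")
```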
Lubans, David R; Smith, Jordan J; Harries, Simon K; Barnett, Lisa M; Faigenbaum, Avery D
2014-05-01
The aim of this study was to describe the development and assess the test-retest reliability and construct validity of the Resistance Training Skills Battery (RTSB) for adolescents. The RTSB provides an assessment of resistance training skill competency and includes 6 exercises (i.e., body weight squat, push-up, lunge, suspended row, standing overhead press, and front support with chest touches). Scoring for each skill is based on the number of performance criteria successfully demonstrated. An overall resistance training skill quotient (RTSQ) is created by adding participants' scores for the 6 skills. Participants (44 boys and 19 girls, mean age = 14.5 ± 1.2 years) completed the RTSB on 2 occasions separated by 7 days. Participants also completed the following fitness tests, which were used to create a muscular fitness score (MFS): handgrip strength, timed push-up, and standing long jump tests. Intraclass correlation (ICC), paired-samples t-tests, and typical error were used to assess test-retest reliability. To assess construct validity, gender and RTSQ were entered into a regression model predicting MFS. The rank-order repeatability of the RTSQ was high (ICC = 0.88). The model explained 39% of the variance in MFS (p ≤ 0.001) and RTSQ (r = 0.40, p ≤ 0.001) was a significant predictor. This study has demonstrated the construct validity and test-retest reliability of the RTSB in a sample of adolescents. The RTSB can reliably rank participants with regard to their resistance training competency and has the necessary sensitivity to detect small changes in resistance training skill proficiency.
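As an illustration of the test-retest statistic reported above, the sketch below computes a one-way random-effects ICC(1,1) from two synthetic administrations; the data are stand-ins, and the study's exact ICC form is not specified in the abstract.

```python
# Hedged sketch: one-way random-effects ICC(1,1) from an n x k score matrix.
# Scores are synthetic stand-ins for two RTSQ administrations.
import numpy as np

rng = np.random.default_rng(3)
true_skill = rng.normal(20, 4, 63)            # 63 adolescents
day1 = true_skill + rng.normal(0, 1.5, 63)
day2 = true_skill + rng.normal(0, 1.5, 63)
X = np.column_stack([day1, day2])             # n subjects x k = 2 trials

n, k = X.shape
grand = X.mean()
bms = k * ((X.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject MS
wms = ((X - X.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (bms - wms) / (bms + (k - 1) * wms)     # ICC(1,1)
print(f"ICC = {icc:.2f}")
```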
Space Flight Cable Model Development
NASA Technical Reports Server (NTRS)
Spak, Kaitlin
2013-01-01
This work concentrates the modeling efforts presented in last year's VSGC conference paper, "Model Development for Cable-Harnessed Beams." The focus is narrowed to modeling of space-flight cables only, as a reliable damped cable model is not yet readily available and is necessary to continue modeling cable-harnessed space structures. New experimental data is presented, eliminating the low-frequency noise that plagued the first year's efforts. The distributed transfer function method is applied to a single section of space flight cable for Euler-Bernoulli and shear beams. The work presented here will be developed into a damped cable model that can be incorporated into an interconnected beam-cable system. The overall goal of this work is to accurately predict natural frequencies and modal damping ratios for cabled space structures.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems, which serve as key components of critical infrastructures in our social life, are still ever-expanding. In particular, embedded OSS systems have been gaining a lot of attention in the embedded system area, e.g., Android, BusyBox, TRON, etc. However, poor handling of quality problems and customer support hinders the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for embedded OSS. We analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for embedded OSS. Moreover, we compare the proposed hazard rate model for embedded OSS with typical conventional hazard rate models by using goodness-of-fit comparison criteria. Furthermore, we discuss the optimal software release problem for the porting phase based on the total expected software maintenance cost.
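The goodness-of-fit comparison step can be illustrated by fitting two candidate failure-time distributions to inter-failure intervals and comparing AIC values. The interval data below are synthetic; a real analysis would use the project's failure logs, and the authors' flexible hazard-rate model is not reproduced here.

```python
# Hedged sketch: compare exponential vs. Weibull fits to inter-failure
# intervals via AIC. Smaller AIC indicates the better-fitting model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
intervals = rng.weibull(1.3, 60) * 100.0      # synthetic hours between failures

def aic(k_free, loglik):
    return 2 * k_free - 2 * loglik

exp_params = stats.expon.fit(intervals, floc=0)          # loc fixed at 0
exp_ll = stats.expon.logpdf(intervals, *exp_params).sum()
wb_params = stats.weibull_min.fit(intervals, floc=0)     # loc fixed at 0
wb_ll = stats.weibull_min.logpdf(intervals, *wb_params).sum()

print("exponential AIC:", round(aic(1, exp_ll), 1))      # 1 free parameter
print("Weibull AIC:    ", round(aic(2, wb_ll), 1))       # shape + scale
```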
Simulated Students and Classroom Use of Model-Based Intelligent Tutoring
NASA Technical Reports Server (NTRS)
Koedinger, Kenneth R.
2008-01-01
Two educational uses of models and simulations: 1) students create models and use simulations; and 2) researchers create models of learners to guide development of reliably effective materials. Cognitive tutors simulate and support tutoring; data is crucial to create an effective model. Pittsburgh Science of Learning Center: resources for modeling, authoring, and experimentation; repository of data and theory. Examples of advanced modeling efforts: SimStudent learns a rule-based model; the help-seeking model tutors metacognition; Scooter uses machine-learning detectors of student engagement.
Huang, Chih-Ling; Cheng, Chung-Ping; Huang, Hui-Wen
2013-10-01
The purpose of this study was to develop a scale to measure the social smoking motives of adult male smokers in a Chinese social context. Three phases were conducted between February 2006 and May 2009. First, the initial instrument development was guided by a literature review, interviews with smokers, and item analysis. Second, the validity and reliability of the refined scale were tested. The factor structures of the Social Smoking Measures (SSM-12) scale were validated. The final scale consists of 12 items. Two factors that account for 49.2% of the variance emerged from the exploratory factor analysis. Cronbach's alpha was .88, and test-retest reliability was .82. The results of the confirmatory factor analysis indicated that the SSM model had a two-correlated-factor structure. Field testing revealed the SSM-12 to be a reliable and valid Chinese-language instrument to measure social smoking motives, which can be used to guide nursing interventions that support culturally and socially appropriate smoking cessation programs.
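Cronbach's alpha, the internal-consistency figure quoted above, is straightforward to compute from an item-response matrix. The sketch below uses synthetic responses to 12 items; it illustrates the statistic, not the study's data.

```python
# Minimal sketch of Cronbach's alpha over a respondents x items matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(11)
latent = rng.normal(size=(200, 1))                   # common motive factor
responses = latent + rng.normal(0, 0.8, (200, 12))   # 12 items, synthetic
print(f"alpha = {cronbach_alpha(responses):.2f}")
```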
Accurate reliability analysis method for quantum-dot cellular automata circuits
NASA Astrophysics Data System (ADS)
Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo
2015-10-01
The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device called quantum-dot cellular automata (QCA). It is difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices according to different input signals. The binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely based on the importance values (IVs) of components. This method thus contributes to the construction of reliable QCA circuits.
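Under the usual independence assumptions, the fault-tree arithmetic behind such an analysis is compact: AND gates multiply failure probabilities, and OR gates complement the product of complements. The gate structure and probabilities below are illustrative, not the paper's QCA XOR trees.

```python
# Hedged sketch of fault-tree gate arithmetic with independent basic events.
from math import prod

def p_and(probs):   # all inputs must fail
    return prod(probs)

def p_or(probs):    # any single input failing suffices
    return 1.0 - prod(1.0 - p for p in probs)

cell_fail = 1e-3                        # assumed per-cell error probability
branch = p_and([cell_fail, cell_fail])  # two cells must both err
top = p_or([branch, cell_fail])         # or a single critical cell errs
print(f"top-event probability = {top:.3e}")
```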
Frey, Jennifer K.; Lewis, Jeremy C.; Guy, Rachel K.; Stuart, James N.
2013-01-01
Simple Summary We evaluated the influence of occurrence records with different reliability on predicted distribution of a unique, rare mammal in the American Southwest, the white-nosed coati (Nasua narica). We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. Abstract Species distributions are usually inferred from occurrence records. However, these records are prone to errors in spatial precision and reliability. Although influence of spatial errors has been fairly well studied, there is little information on impacts of poor reliability. Reliability of an occurrence record can be influenced by characteristics of the species, conditions during the observation, and observer’s knowledge. Some studies have advocated use of anecdotal data, while others have advocated more stringent evidentiary standards such as only accepting records verified by physical evidence, at least for rare or elusive species. Our goal was to evaluate the influence of occurrence records with different reliability on species distribution models (SDMs) of a unique mammal, the white-nosed coati (Nasua narica) in the American Southwest. We compared SDMs developed using maximum entropy analysis of combined bioclimatic and biophysical variables and based on seven subsets of occurrence records that varied in reliability and spatial precision. We found that the predicted distribution of the coati based on datasets that included anecdotal occurrence records were similar to those based on datasets that only included physical evidence. Coati distribution in the American Southwest was predicted to occur in southwestern New Mexico and southeastern Arizona and was defined primarily by evenness of climate and Madrean woodland and chaparral land-cover types. Coati distribution patterns in this region suggest a good model for understanding the biogeographic structure of range margins. We concluded that occurrence datasets that include anecdotal records can be used to infer species distributions, providing such data are used only for easily-identifiable species and based on robust modeling methods such as maximum entropy. Use of a reliability rating system is critical for using anecdotal data. PMID:26487405
Meylan, Grégoire; Reck, Barbara K; Rechberger, Helmut; Graedel, Thomas E; Schwab, Oliver
2017-10-17
Decision-makers traditionally expect "hard facts" from scientific inquiry, an expectation that the results of material flow analyses (MFAs) can hardly meet. MFA limitations are attributable to incompleteness of flowcharts, limited data quality, and model assumptions. Moreover, MFA results are, for the most part, based less on empirical observation but rather on social knowledge construction processes. Developing, applying, and improving the means of evaluating and communicating the reliability of MFA results is imperative. We apply two recently proposed approaches for making quantitative statements on MFA reliability to national minor metals systems: rhenium, gallium, and germanium in the United States in 2012. We discuss the reliability of results in policy and management contexts. The first approach consists of assessing data quality based on systematic characterization of MFA data and the associated meta-information and quantifying the "information content" of MFAs. The second is a quantification of data inconsistencies indicated by the "degree of data reconciliation" between the data and the model. A high information content and a low degree of reconciliation indicate reliable or certain MFA results. This article contributes to reliability and uncertainty discourses in MFA, exemplifying the usefulness of the approaches in policy and management, and to raw material supply discussions by providing country-level information on three important minor metals often considered critical.
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability-centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate the reliability of systems implemented according to the model-view-controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need for system-wide maintenance.
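A minimal particle swarm in the spirit of the abstract can be sketched as follows: particles search a one-dimensional maintenance-interval space for the interval minimizing an assumed cost trade-off. The cost model is a hypothetical placeholder, not the paper's formulation.

```python
# Hedged sketch: particle swarm optimization over a maintenance interval.
import numpy as np

rng = np.random.default_rng(2)

def cost(interval):
    """Toy tradeoff: frequent maintenance is costly, rare maintenance risky."""
    return 100.0 / interval + 0.5 * interval ** 1.5

n, iters = 20, 50
x = rng.uniform(1, 50, n)                 # particle positions (days)
v = np.zeros(n)
pbest, pbest_f = x.copy(), cost(x)
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 1, 50)
    f = cost(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print(f"best interval ~ {gbest:.1f} days, cost {cost(gbest):.2f}")
```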
Modeling and Verification of Dependable Electronic Power System Architecture
NASA Astrophysics Data System (ADS)
Yuan, Ling; Fan, Ping; Zhang, Xiao-fang
The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault-tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. This incorporation makes the design of such systems more complicated still. We propose a dependable electronic power system architecture that provides a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault-tolerant properties of the architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies high reliability requirements.
Predictive model of muscle fatigue after spinal cord injury in humans.
Shields, Richard K; Chang, Ya-Ju; Dudley-Javoroski, Shauna; Lin, Cheng-Hsiang
2006-07-01
The fatigability of paralyzed muscle limits its ability to deliver physiological loads to paralyzed extremities during repetitive electrical stimulation. The purposes of this study were to determine the reliability of measuring paralyzed muscle fatigue and to develop a model to predict the temporal changes in muscle fatigue that occur after spinal cord injury (SCI). Thirty-four subjects underwent soleus fatigue testing with a modified Burke electrical stimulation fatigue protocol. The between-day reliability of this protocol was high (intraclass correlation, 0.96). We fit the fatigue index (FI) data to a quadratic-linear segmental polynomial model. FI declined rapidly (0.3854 per year) for the first 1.7 years, and more slowly (0.01 per year) thereafter. The rapid decline of FI immediately after SCI implies that a "window of opportunity" exists for the clinician if the goal is to prevent these changes. Understanding the timing of change in muscle endurance properties (and, therefore, load-generating capacity) after SCI may assist clinicians when developing therapeutic interventions to maintain musculoskeletal integrity.
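The segmental model can be sketched as a quadratic segment joined smoothly to a linear tail at the reported 1.7-year knot. The intercept and curvature below are assumptions chosen only so the slopes match the reported rates; they are not the paper's fitted coefficients.

```python
# Hedged sketch of a quadratic-linear segmental FI model. The breakpoint
# (1.7 yr) and slopes (0.3854/yr initially, 0.01/yr later) come from the
# abstract; the intercept and curvature are illustration assumptions.
import numpy as np

T_BREAK = 1.7                                   # years post-SCI at the knot
A = (0.3854 - 0.01) / (2 * T_BREAK)             # curvature so slopes match

def fatigue_index(t):
    """Piecewise FI(t): quadratic early decline, then a slow linear tail."""
    t = np.asarray(t, dtype=float)
    early = 0.85 - 0.3854 * t + A * t ** 2      # assumed intercept 0.85
    fi_knot = 0.85 - 0.3854 * T_BREAK + A * T_BREAK ** 2
    late = fi_knot - 0.01 * (t - T_BREAK)
    return np.where(t < T_BREAK, early, late)

print(np.round(fatigue_index([0.0, 1.0, 1.7, 5.0]), 3))
```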
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1975-01-01
This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models under short test times. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the ms-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
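A toy version of the measurement principle: drag follows from F = ma, and if the natural-vibration component appears with opposite sign at the two accelerometer locations, averaging the traces cancels it. The opposite-phase placement, mass, and signals below are all assumptions for illustration, not the HIEST configuration.

```python
# Hedged sketch: drag from two accelerometer traces whose natural-vibration
# components are assumed to be in antiphase, so their mean cancels them.
import numpy as np

fs, t_test = 100_000, 1e-3                 # 100 kHz sampling, 1 ms window
t = np.arange(0, t_test, 1 / fs)
mass = 400.0                               # assumed model mass, kg

a_rigid = 2.5                              # rigid-body acceleration, m/s^2
vib = 1.0 * np.sin(2 * np.pi * 3000 * t)   # assumed natural vibration
acc1 = a_rigid + vib + 0.05 * np.random.default_rng(0).normal(size=t.size)
acc2 = a_rigid - vib + 0.05 * np.random.default_rng(1).normal(size=t.size)

drag = mass * 0.5 * (acc1 + acc2)          # vibration cancels in the mean
print(f"estimated drag = {drag.mean():.1f} N (true {mass * a_rigid:.1f} N)")
```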
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie
The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS "pathways," or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.
Mission Reliability Estimation for Repairable Robot Teams
NASA Technical Reports Server (NTRS)
Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen
2010-01-01
A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the current design paradigm of building a minimal number of highly robust robots may not be the best way to design robots for extended missions.
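The redundancy-versus-repairability comparison reduces, in its simplest form, to binomial survival arithmetic. The sketch below contrasts a spare robot with spare components under assumed probabilities; the numbers are illustrative, not the study's estimates.

```python
# Hedged sketch: mission success probability for a team that needs k
# working robots, with either a spare robot or spare components.
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(at least k of n independent units survive), survival prob p each."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p_robot = 0.90                  # assumed robot survival over the mission
print("3 robots, need 3:           ", round(p_at_least(3, 3, p_robot), 3))
print("4 robots (1 spare), need 3: ", round(p_at_least(3, 4, p_robot), 3))

# Repairability: a spare component restores a failed robot with assumed
# probability 0.8, raising each robot's effective survival probability.
p_repairable = p_robot + (1 - p_robot) * 0.8
print("3 repairable robots, need 3:", round(p_at_least(3, 3, p_repairable), 3))
```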
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strons, Philip; Bailey, James L.; Davis, John
2016-03-01
In this work, we apply CFD to model airflow and particulate transport. The modeling is then compared to field validation studies to both inform and validate the modeling assumptions. Based on the results of field tests, modeling assumptions and boundary conditions are refined and the process is repeated until the results are found to be reliable with a high level of confidence.
USDA-ARS?s Scientific Manuscript database
The Ogallala aquifer is the only reliable source of water in the southern High Plains (SHP) region of Texas, New Mexico and Oklahoma. Groundwater availability has fostered a strong agricultural economy that has a significant impact on global food security. Groundwater models that not only capture ...
Testing DRAINMOD-FOREST for predicting evapotranspiration in a mid-rotation pine plantation
Shiying Tian; Mohamed A. Youssef; Ge Sun; George M. Chescheir; Asko Noormets; Devendra M. Amatya; R. Wayne Skaggs; John S. King; Steve McNulty; Michael Gavazzi; Guofang Miao; Jean-Christophe Domec
2015-01-01
Evapotranspiration (ET) is a key component of the hydrologic cycle in terrestrial ecosystems and accurate description of ET processes is essential for developing reliable ecohydrological models. This study investigated the accuracy of ET prediction by the DRAINMOD-FOREST after its calibration/validation for predicting commonly measured hydrological variables. The model...
A Teacher-Friendly Instrument in Identifying Learning Styles in the Classroom.
ERIC Educational Resources Information Center
Pitts, Joseph I.
This report describes a reliability and validity study on a learning styles instrument that was developed based on the Dunn, Dunn, & Price model. That model included 104 Likert five-point scale items for investigating 24 scales grouped into five categories considered likely to affect learning. The Learning Style Preference Inventory (LSPI)…
Regional estimates of biogenic volatile organic compound (BVOC) emissions are important inputs for models of atmospheric chemistry and carbon budgets. Since forests are the primary emitters of BVOCs, it is important to develop reliable estimates of their areal coverage and BVOC e...
Measurement and modeling of intrinsic transcription terminators
Cambray, Guillaume; Guimaraes, Joao C.; Mutalik, Vivek K.; Lam, Colin; Mai, Quynh-Anh; Thimmaiah, Tim; Carothers, James M.; Arkin, Adam P.; Endy, Drew
2013-01-01
The reliable forward engineering of genetic systems remains limited by the ad hoc reuse of many types of basic genetic elements. Although a few intrinsic prokaryotic transcription terminators are used routinely, termination efficiencies have not been studied systematically. Here, we developed and validated a genetic architecture that enables reliable measurement of termination efficiencies. We then assembled a collection of 61 natural and synthetic terminators that collectively encode termination efficiencies across an ∼800-fold dynamic range within Escherichia coli. We simulated co-transcriptional RNA folding dynamics to identify competing secondary structures that might interfere with terminator folding kinetics or impact termination activity. We found that structures extending beyond the core terminator stem are likely to increase terminator activity. By excluding terminators encoding such context-confounding elements, we were able to develop a linear sequence-function model that can be used to estimate termination efficiencies (r = 0.9, n = 31) better than models trained on all terminators (r = 0.67, n = 54). The resulting systematically measured collection of terminators should improve the engineering of synthetic genetic systems and also advance quantitative modeling of transcription termination. PMID:23511967
Li, Chunxiao; Martindale, Russell; Wu, Yandan; Si, Gangyan
2018-01-01
The development of talented athletes is a priority for many countries across the world, including China. A validated Chinese 5-factor Talent Development Environment Questionnaire (TDEQ-5) would go some way in helping researchers and practitioners investigate talent development systems within China from an evidence-based perspective. For this purpose, the 25-item English TDEQ-5 was translated to Chinese through a standardised process. The translated scale was then administered to 538 talented Chinese youth athletes. Confirmatory factor analysis revealed adequate model fit of the scale. The internal reliability, concurrent and discriminant validity, and test-retest reliability of the scale were adequately supported. The scale was also invariant across gender. It is recommended that the Chinese TDEQ-5 can be used with confidence in both applied and research settings.
ERIC Educational Resources Information Center
Mishara, Brian L.; Chagnon, Francois; Daigle, Marc; Balan, Bogdan; Raymond, Sylvaine; Marcoux, Isabelle; Bardon, Cecile; Campbell, Julie K.; Berman, Alan
2007-01-01
Models of telephone crisis intervention in suicide prevention and best practices were developed from a literature review and surveys of crisis centers. We monitored 2,611 calls to 14 centers using reliable behavioral ratings to compare actual interventions with the models. Active listening and collaborative problem-solving models describe help…
Evaluation of Reliability Coefficients for Two-Level Models via Latent Variable Analysis
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2010-01-01
A latent variable analysis procedure for evaluation of reliability coefficients for 2-level models is outlined. The method provides point and interval estimates of group means' reliability, overall reliability of means, and conditional reliability. In addition, the approach can be used to test simple hypotheses about these parameters. The…
Stereotaxical Infusion of Rotenone: A Reliable Rodent Model for Parkinson's Disease
Xiong, Nian; Huang, Jinsha; Zhang, Zhentao; Zhang, Zhaowen; Xiong, Jing; Liu, Xingyuan; Jia, Min; Wang, Fang; Chen, Chunnuan; Cao, Xuebing; Liang, Zhihou; Sun, Shenggang; Lin, Zhicheng; Wang, Tao
2009-01-01
A clinically related animal model of Parkinson's disease (PD) may enable the elucidation of the etiology of the disease and assist the development of medications. However, none of the current neurotoxin-based models recapitulates the main clinical features of the disease or the pathological hallmarks, such as dopamine (DA) neuron specificity of degeneration and Lewy body formation, which limits the use of these models in PD research. To overcome these limitations, we developed a rat model by stereotaxically (ST) infusing small doses of the mitochondrial complex-I inhibitor, rotenone, into two brain sites: the right ventral tegmental area and the substantia nigra. Four weeks after ST rotenone administration, tyrosine hydroxylase (TH) immunoreactivity in the infusion side decreased by 43.7%, in contrast to a 75.8% decrease observed in rats treated systemically with rotenone (SYS). The rotenone infusion also reduced the DA content and the glutathione and superoxide dismutase activities, and induced alpha-synuclein expression, when compared to the contralateral side. This ST model displays neither peripheral toxicity nor mortality and has a high success rate. This rotenone-based ST model thus recapitulates the slow and specific loss of DA neurons and better mimics the clinical features of idiopathic PD, representing a reliable and more clinically related model for PD research. PMID:19924288
Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine
NASA Astrophysics Data System (ADS)
Kuznetsova, T. A.
2017-01-01
The paper presents an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on throttle performances and is characterized by high accuracy of engine parameter identification in stationary and dynamic modes. The proposed on-board engine model is a linearized state-space low-level simulation. Engine health is identified through the influence coefficient matrix. The influence coefficients are determined by the GTE high-level mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squares of the deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for gas-path defect detection in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve identification accuracy and ensure the robustness of the solutions, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ-sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.
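The LPτ sequence mentioned above is the Sobol low-discrepancy sequence, available in SciPy. The sketch below propagates assumed measurement-noise bounds through a toy nearest-match identification step; the two-parameter model, the grid, and the bounds are all hypothetical.

```python
# Hedged sketch: LPtau (Sobol) sampling of measurement noise around nominal
# gas-path parameters, to gauge the spread of a toy identification scheme.
import numpy as np
from scipy.stats import qmc

def identify(measured, grid):
    """Pick the grid point minimizing squared deviation from `measured`."""
    errs = ((grid - measured) ** 2).sum(axis=1)
    return grid[errs.argmin()]

sampler = qmc.Sobol(d=2, scramble=True, seed=0)
noise = qmc.scale(sampler.random(256), [-0.02, -0.02], [0.02, 0.02])

truth = np.array([0.55, 1.30])                  # nominal parameters (assumed)
grid = np.random.default_rng(4).uniform(0.4, 1.5, (2000, 2))
estimates = np.array([identify(truth + eps, grid) for eps in noise])
print("identification std per parameter:", estimates.std(axis=0).round(4))
```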
Binks-Cantrell, Emily; Joshi, R Malatesha; Washburn, Erin K
2012-10-01
Recent national reports have stressed the importance of teacher knowledge in teaching reading. However, in the past, teachers' knowledge of language and literacy constructs has typically been assessed with instruments that are not fully tested for validity. In the present study, an instrument was developed, and its reliability, item difficulty, and item discrimination were computed and examined to identify model fit by applying exploratory factor analysis. These analyses showed that the instrument demonstrated adequate estimates of reliability in assessing teachers' knowledge of language constructs. The implications for professional development of in-service teachers as well as preservice teacher education are also discussed.
Richards, Rickelle; Brown, Lora Beth; Williams, D Pauline; Eggett, Dennis L
2017-02-01
Develop a questionnaire to measure students' knowledge, attitude, behavior, self-efficacy, and environmental factors related to the use of canned foods. The Knowledge-Attitude-Behavior Model, Social Cognitive Theory, and Canned Foods Alliance survey were used as frameworks for questionnaire development. Cognitive interviews were conducted with college students (n = 8). Nutrition and survey experts assessed content validity. Reliability was measured via Cronbach α and 2 rounds (1, n = 81; 2, n = 65) of test-retest statistics. Means and frequencies were used. The 65-item questionnaire had a test-retest reliability of .69. Cronbach α scores were .87 for knowledge (9 items), .86 for attitude (30 items), .80 for self-efficacy (12 items), .68 for canned foods use (8 items), and .30 for environment (6 items). A reliable questionnaire was developed to measure perceptions and use of canned foods. Nutrition educators may find this questionnaire useful to evaluate pretest-posttest changes from canned foods-based interventions among college students.
A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors
NASA Technical Reports Server (NTRS)
Liu, Donhang
2014-01-01
The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, log normal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used to describe a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow-degradation failure is characterized by a near-linear increase in leakage current against the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects). The two identified failure modes follow different acceleration functions: catastrophic failures follow the traditional power-law relationship to the applied voltage, while slow-degradation failures fit well to an exponential-law relationship to the applied electric field. Finally, the impact of capacitor structure on the reliability of BME capacitors is discussed with respect to the number of dielectric layers in an MLCC unit, the number of BaTiO3 grains per dielectric layer, and the chip size of the capacitor device.
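The two acceleration functions can be written down compactly: a power-law voltage term for the catastrophic mode and an exponential field term for slow degradation, each multiplied by an Arrhenius temperature factor. All exponents, activation energies, and stress levels in the sketch below are assumed illustration values, not the presentation's figures.

```python
# Hedged sketch of the two acceleration-function forms named above.
import math

K_B = 8.617e-5                         # Boltzmann constant, eV/K

def af_power_law(v_stress, v_use, n, ea, t_stress, t_use):
    """Catastrophic mode: (V_s/V_u)^n times an Arrhenius temperature term."""
    return (v_stress / v_use) ** n * math.exp(
        (ea / K_B) * (1 / t_use - 1 / t_stress))

def af_exponential(e_stress, e_use, b, ea, t_stress, t_use):
    """Slow degradation: exp[b (E_s - E_u)] times an Arrhenius term."""
    return math.exp(b * (e_stress - e_use)) * math.exp(
        (ea / K_B) * (1 / t_use - 1 / t_stress))

# All arguments below are assumptions for illustration.
print(f"{af_power_law(50, 6.3, n=5, ea=1.1, t_stress=398, t_use=358):.0f}")
print(f"{af_exponential(8.0, 1.0, b=1.2, ea=1.1, t_stress=398, t_use=358):.0f}")
```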
An approach to verification and validation of a reliable multicasting protocol: Extended Abstract
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test differed between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team focuses on analysis of off-nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
Yellepeddi, Venkata; Rower, Joseph; Liu, Xiaoxi; Kumar, Shaun; Rashid, Jahidur; Sherwin, Catherine M T
2018-05-18
Physiologically based pharmacokinetic modeling and simulation is an important tool for predicting the pharmacokinetics, pharmacodynamics, and safety of drugs in pediatrics. Physiologically based pharmacokinetic modeling is applied in pediatric drug development for first-time-in-pediatric dose selection, simulation-based trial design, correlation with target organ toxicities, risk assessment by investigating possible drug-drug interactions, real-time assessment of pharmacokinetic-safety relationships, and assessment of non-systemic biodistribution targets. This review summarizes the details of a physiologically based pharmacokinetic modeling approach in pediatric drug research, emphasizing reports on pediatric physiologically based pharmacokinetic models of individual drugs. We also compare and contrast the strategies employed by various researchers in pediatric physiologically based pharmacokinetic modeling and provide a comprehensive overview of physiologically based pharmacokinetic modeling strategies and approaches in pediatrics. We discuss the impact of physiologically based pharmacokinetic models on regulatory reviews and product labels in the field of pediatric pharmacotherapy. Additionally, we examine in detail the current limitations and future directions of physiologically based pharmacokinetic modeling in pediatrics with regard to the ability to predict plasma concentrations and pharmacokinetic parameters. Despite the skepticism and concern in the pediatric community about the reliability of physiologically based pharmacokinetic models, there is substantial evidence that pediatric physiologically based pharmacokinetic models have been used successfully to predict differences in pharmacokinetics between adults and children for several drugs. It is obvious that the use of physiologically based pharmacokinetic modeling to support various stages of pediatric drug development is highly attractive and will rapidly increase, provided the robustness and reliability of these techniques are well established.
Wang, Ning; Björvell, Catrin; Hailey, David; Yu, Ping
2014-12-01
To develop an instrument, the Quality of Australian Nursing Documentation in Aged Care (QANDAC), to measure the quality of paper-based and electronic resident records in Australian aged care. The instrument was based on the nursing process model and on three attributes of documentation quality identified in a systematic review. The development process involved five phases following approaches to designing criterion-referenced measures. The face and content validity and the inter-rater reliability of the instrument were estimated using a focus group approach and a consensus model. The instrument contains 34 questions in three sections: completion of nursing history and assessment, description of the care process, and meeting the requirements of data entry. Estimates of the validity and inter-rater reliability of the instrument gave satisfactory results. The QANDAC instrument may be a useful audit tool for quality improvement and research in aged care documentation.
Walker, Lorraine O; Kirby, Russell S
2010-11-01
Early parenting practices are significant to public health because of their linkages to child health outcomes. This paper focuses on the current state of the science regarding conceptual frameworks that incorporate early parenting practices in epidemiologic research and evidence supporting reliability and validity of self-report measures of such practices. Guided by a provisional definition of early parenting practices, literature searches were conducted using PubMed and Sociological Abstracts. Twenty-five published studies that included parent-report measures of early parenting practices met inclusion criteria. Findings on conceptual frameworks were analyzed qualitatively, whereas evidence of reliability and validity were organized into four domains (safety, feeding and oral health, development promotion, and discipline) and summarized in tabular form. Quantitative estimates of measures of reliability and validity were extracted, where available. We found two frameworks incorporating early parenting: one a program theory and the other a predictive model. We found no reported evidence of the reliability or validity of parent-report measures of safety or feeding and oral health practices. Evidence for reliability and validity were reported with greater frequency for development promotion and discipline practices, but report of the most pertinent type of reliability estimation, test-retest reliability, was rare. Failure to examine associations of early parenting practices with any child outcomes within most studies resulted in missed opportunities to indirectly estimate validity of parenting practice measures. Stronger evidence concerning specific measurement properties of early parenting practices is important to advancing maternal-child research, surveillance, and practice.
System Design under Uncertainty: Evolutionary Optimization of the Gravity Probe-B Spacecraft
NASA Technical Reports Server (NTRS)
Pullen, Samuel P.; Parkinson, Bradford W.
1994-01-01
This paper discusses the application of evolutionary random-search algorithms (simulated annealing and genetic algorithms) to the problem of spacecraft design under performance uncertainty. Traditionally, spacecraft performance uncertainty has been measured by reliability. Published algorithms for reliability optimization are seldom used in practice because they oversimplify reality. The algorithm developed here uses random-search optimization to allow us to model the problem more realistically. Monte Carlo simulations are used to evaluate the objective function for each trial design solution. These methods have been applied to the Gravity Probe-B (GP-B) spacecraft being developed at Stanford University for launch in 1999. Results of the algorithm developed here for GP-B are shown, and their implications for design optimization by evolutionary algorithms are discussed.
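A minimal simulated-annealing loop with a Monte Carlo objective, echoing the approach described, is sketched below. The two-variable design space and the cost/failure model are hypothetical placeholders, not the GP-B formulation.

```python
# Hedged sketch: simulated annealing where each candidate design's cost is
# estimated by Monte Carlo simulation. The design space and cost model are
# hypothetical illustrations.
import math, random

random.seed(0)

def mc_objective(design, n_runs=200):
    """Estimate expected mission cost: unit costs plus a failure penalty."""
    n_spares, quality = design
    p_fail = 0.10 / quality                     # toy per-unit failure prob
    failures = sum(
        all(random.random() < p_fail for _ in range(n_spares + 1))
        for _ in range(n_runs))                 # primary and all spares fail
    return 5.0 * quality + 1.0 * n_spares + 100.0 * failures / n_runs

def neighbor(design):
    n_spares, quality = design
    if random.random() < 0.5:
        n_spares = max(0, n_spares + random.choice((-1, 1)))
    else:
        quality = min(5, max(1, quality + random.choice((-1, 1))))
    return (n_spares, quality)

state = (0, 1)
f_state = mc_objective(state)
temp = 10.0
for _ in range(500):
    cand = neighbor(state)
    f_cand = mc_objective(cand)
    # Accept improvements always; accept worse moves with Boltzmann prob.
    if f_cand < f_state or random.random() < math.exp((f_state - f_cand) / temp):
        state, f_state = cand, f_cand
    temp *= 0.99                                # geometric cooling schedule

print("best design (spares, quality):", state, "cost ~", round(f_state, 2))
```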
Difficult Decisions Made Easier
NASA Technical Reports Server (NTRS)
2006-01-01
NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goals of this research were manifold: performance and risk analyses of complex, multiphase missions, such as the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.
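The software itself is not named in this excerpt, but one of the listed tasks, reliability analysis of systems with redundant components, has a simple analytic core: a redundant (parallel) group fails only if every member fails, while a series chain works only if every member works. A minimal sketch under hypothetical component reliabilities:

```python
import numpy as np

def parallel(reliabilities):
    """Reliability of redundant (parallel) components: fails only if all fail."""
    r = np.asarray(reliabilities)
    return float(1.0 - np.prod(1.0 - r))

def series(reliabilities):
    """Reliability of components in series: works only if all work."""
    return float(np.prod(reliabilities))

# Hypothetical single-phase example: two redundant thrusters in series
# with a single flight computer.
r_system = series([parallel([0.95, 0.95]), 0.999])
print(f"system reliability = {r_system:.4f}")
```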
Factor structure of the Childhood Autism Rating Scale as per DSM-5.
Park, Eun-Young; Kim, Joungmin
2016-02-01
The DSM-5 recently proposed new diagnostic criteria for autism spectrum disorder (ASD). Although many new or updated tools have been developed since the DSM-IV was published in 1994, the Childhood Autism Rating Scale (CARS) has been used consistently in ASD diagnosis and research because of its technical adequacy, cost-effectiveness, and practicality. Additionally, the CARS items were not altered following the release of the revised DSM-IV, because factor analysis showed the CARS factor structure to be consistent with the revised criteria. In this study, confirmatory factor analysis was therefore used to identify the factor structure of the CARS under DSM-5. Participants (n = 150) were children with an ASD diagnosis, or children who met the criteria for broader autism or for emotional/behavior disorder with comorbid conditions such as attention-deficit hyperactivity disorder, bipolar disorder, or intellectual or developmental disabilities. Previous studies used one-, two-, and four-factor models, all of which we examined to identify the best-fitting model in confirmatory factor analysis. Appropriate comparative fit indices and root mean square errors of approximation were obtained for all of the models examined. The two-factor model, based on DSM-5 criteria, was the most valid and reliable. The inter-item consistency of the CARS was 0.926, demonstrating adequate reliability and supporting the validity and reliability of the two-factor model. Although the CARS was developed before the introduction of DSM-5, its psychometric properties, conceptual relevance, and flexible administration procedures support its continued role as a screening device in the diagnostic decision-making process. © 2015 Japan Pediatric Society.
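An inter-item consistency of 0.926 is conventionally Cronbach's alpha, though the abstract does not say so explicitly. A minimal sketch with simulated CARS-style ratings (15 items scored 1-4; the data-generating model below is illustrative, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Simulated ratings: 150 children x 15 items, each driven by a latent severity.
rng = np.random.default_rng(1)
severity = rng.normal(2.5, 0.6, size=(150, 1))
scores = np.clip(severity + rng.normal(0, 0.4, size=(150, 15)), 1, 4)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```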
Demonstrating the Alaska Ocean Observing System in Prince William Sound
NASA Astrophysics Data System (ADS)
Schoch, G. Carl; McCammon, Molly
2013-07-01
The Alaska Ocean Observing System and the Oil Spill Recovery Institute developed a demonstration project over a five-year period in Prince William Sound. The primary goal was to develop a quasi-operational system that delivers weather and ocean information in near real time to diverse user communities. This observing system now consists of atmospheric and oceanic sensors and a new generation of computer models that numerically simulate and forecast weather, waves, and ocean circulation. A state-of-the-art data management system provides access to these products from a single internet portal at http://www.aoos.org. The project culminated in a 2009 field experiment that evaluated the observing system and the performance of the model forecasts. Observations from terrestrial weather stations and weather buoys validated atmospheric circulation forecasts, and observations from wave gages on weather buoys validated forecasts of significant wave heights and periods. Emphasis was placed on validating the surface currents forecast by the ocean circulation model for oil spill response and search and rescue applications. During the 18-day field experiment, a radar array mapped surface currents and drifting buoys were deployed; hydrographic profiles at fixed stations, and by autonomous vehicles along transects, provided measurements through the water column. Terrestrial weather stations were the most reliable and least costly to operate, while in situ ocean sensors were more costly and considerably less reliable. The radar surface current mappers were the least reliable and most costly, but they provided the assimilation and validation data that most improved the ocean circulation forecasts. We describe the setting of Prince William Sound and the various observational platforms and forecast models of the observing system, and we discuss recommendations for future development.
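The abstract does not specify the validation metric used; a common skill measure for surface-current forecasts is the root-mean-square error of the vector difference between forecast and drifter-observed currents at matched times and locations. A minimal sketch with hypothetical matched pairs:

```python
import numpy as np

# Hypothetical east/north surface-current components (m/s): model forecasts
# versus drifter-derived observations at matched times and locations.
forecast = np.array([[0.12, -0.05], [0.30, 0.10], [0.25, -0.02], [0.08, 0.15]])
observed = np.array([[0.10, -0.07], [0.35, 0.08], [0.20, 0.00], [0.05, 0.12]])

# Root-mean-square error of the vector difference, a common skill metric
# for surface-current forecasts in spill-response and search-and-rescue use.
rmse = np.sqrt(np.mean(np.sum((forecast - observed) ** 2, axis=1)))
print(f"vector RMSE = {rmse:.3f} m/s")
```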