Computer-Aided Reliability Estimation
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.
1986-01-01
CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III general enough for use in evaluation of other systems as well.
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.
Calculating system reliability with SRFYDO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M
2010-01-01
SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for themore » system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.« less
General Aviation Aircraft Reliability Study
NASA Technical Reports Server (NTRS)
Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)
2001-01-01
This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation for the reliability of a multicomponent system, named N- M-cold-standby redundancy system, based on progressive Type-II censoring sample. In the system, there are N subsystems consisting of M statistically independent distributed strength components, and only one of these subsystems works under the impact of stresses at a time and the others remain as standbys. Whenever the working subsystem fails, one from the standbys takes its place. The system fails when the entire subsystems fail. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameter. The reliability of the system is estimated by using both classical and Bayesian statistical inference. Uniformly minimum variance unbiased estimator and maximum likelihood estimator for the reliability of the system are derived. Under squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed by using Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
Parts and Components Reliability Assessment: A Cost Effective Approach
NASA Technical Reports Server (NTRS)
Lee, Lydia
2009-01-01
System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today?s mandated budget, manpower, and schedule constraints. Stand ard based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in the development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, low cost, and tight schedule.
Methods and Costs to Achieve Ultra Reliable Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2012-01-01
A published Mars mission is used to explore the methods and costs to achieve ultra reliable life support. The Mars mission and its recycling life support design are described. The life support systems were made triply redundant, implying that each individual system will have fairly good reliability. Ultra reliable life support is needed for Mars and other long, distant missions. Current systems apparently have insufficient reliability. The life cycle cost of the Mars life support system is estimated. Reliability can be increased by improving the intrinsic system reliability, adding spare parts, or by providing technically diverse redundant systems. The costs of these approaches are estimated. Adding spares is least costly but may be defeated by common cause failures. Using two technically diverse systems is effective but doubles the life cycle cost. Achieving ultra reliability is worth its high cost because the penalty for failure is very high.
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabási-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using other methods and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
Evaluating North American Electric Grid Reliability Using the Barabasi-Albert Network Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chassin, David P.; Posse, Christian
2005-09-15
The reliability of electric transmission systems is examined using a scale-free model of network topology and failure propagation. The topologies of the North American eastern and western electric grids are analyzed to estimate their reliability based on the Barabasi-Albert network model. A commonly used power system reliability index is computed using a simple failure propagation model. The results are compared to the values of power system reliability indices previously obtained using standard power engineering methods, and they suggest that scale-free network models are usable to estimate aggregate electric grid reliability.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
Practical Issues in Implementing Software Reliability Measurement
NASA Technical Reports Server (NTRS)
Nikora, Allen P.; Schneidewind, Norman F.; Everett, William W.; Munson, John C.; Vouk, Mladen A.; Musa, John D.
1999-01-01
Many ways of estimating software systems' reliability, or reliability-related quantities, have been developed over the past several years. Of particular interest are methods that can be used to estimate a software system's fault content prior to test, or to discriminate between components that are fault-prone and those that are not. The results of these methods can be used to: 1) More accurately focus scarce fault identification resources on those portions of a software system most in need of it. 2) Estimate and forecast the risk of exposure to residual faults in a software system during operation, and develop risk and safety criteria to guide the release of a software system to fielded use. 3) Estimate the efficiency of test suites in detecting residual faults. 4) Estimate the stability of the software maintenance process.
Multidisciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Multi-Disciplinary System Reliability Analysis
NASA Technical Reports Server (NTRS)
Mahadevan, Sankaran; Han, Song
1997-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.
Autonomous navigation system based on GPS and magnetometer data
NASA Technical Reports Server (NTRS)
Julie, Thienel K. (Inventor); Richard, Harman R. (Inventor); Bar-Itzhack, Itzhack Y. (Inventor)
2004-01-01
This invention is drawn to an autonomous navigation system using Global Positioning System (GPS) and magnetometers for low Earth orbit satellites. As a magnetometer is reliable and always provides information on spacecraft attitude, rate, and orbit, the magnetometer-GPS configuration solves GPS initialization problem, decreasing the convergence time for navigation estimate and improving the overall accuracy. Eventually the magnetometer-GPS configuration enables the system to avoid costly and inherently less reliable gyro for rate estimation. Being autonomous, this invention would provide for black-box spacecraft navigation, producing attitude, orbit, and rate estimates without any ground input with high accuracy and reliability.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance system. The Bayesian analyses are more beneficial than the classical one in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an apriori distribution with life test data to make inferences of the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed by using Bayesian and the maximum likelihood analyses. The simulation results show that the change of the true of parameter relatively to another will change the value of standard deviation in an opposite direction. For a perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity over the shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimated value lines.
to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting inconsistent estimators with asymptotically normal distributions.
Evaluation of reliability modeling tools for advanced fault tolerant systems
NASA Technical Reports Server (NTRS)
Baker, Robert; Scheper, Charlotte
1986-01-01
The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactice Estimation System (ARIES 82) reliability tools for application to advanced fault tolerance aerospace systems were evaluated. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault tolerance aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Both CARE III and ARIES 82 were not suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault tolerant systems. It was further concluded that subject to some limitations (the difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults greater than a double near coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced tolerant systems for air transport.
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.
2010-01-01
A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
Reliability of digital reactor protection system based on extenics.
Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng
2016-01-01
After the Fukushima nuclear accident, safety of nuclear power plants (NPPs) is widespread concerned. The reliability of reactor protection system (RPS) is directly related to the safety of NPPs, however, it is difficult to accurately evaluate the reliability of digital RPS. The method is based on estimating probability has some uncertainties, which can not reflect the reliability status of RPS dynamically and support the maintenance and troubleshooting. In this paper, the reliability quantitative analysis method based on extenics is proposed for the digital RPS (safety-critical), by which the relationship between the reliability and response time of RPS is constructed. The reliability of the RPS for CPR1000 NPP is modeled and analyzed by the proposed method as an example. The results show that the proposed method is capable to estimate the RPS reliability effectively and provide support to maintenance and troubleshooting of digital RPS system.
Reliability Modeling of Microelectromechanical Systems Using Neural Networks
NASA Technical Reports Server (NTRS)
Perera. J. Sebastian
2000-01-01
Microelectromechanical systems (MEMS) are a broad and rapidly expanding field that is currently receiving a great deal of attention because of the potential to significantly improve the ability to sense, analyze, and control a variety of processes, such as heating and ventilation systems, automobiles, medicine, aeronautical flight, military surveillance, weather forecasting, and space exploration. MEMS are very small and are a blend of electrical and mechanical components, with electrical and mechanical systems on one chip. This research establishes reliability estimation and prediction for MEMS devices at the conceptual design phase using neural networks. At the conceptual design phase, before devices are built and tested, traditional methods of quantifying reliability are inadequate because the device is not in existence and cannot be tested to establish the reliability distributions. A novel approach using neural networks is created to predict the overall reliability of a MEMS device based on its components and each component's attributes. The methodology begins with collecting attribute data (fabrication process, physical specifications, operating environment, property characteristics, packaging, etc.) and reliability data for many types of microengines. The data are partitioned into training data (the majority) and validation data (the remainder). A neural network is applied to the training data (both attribute and reliability); the attributes become the system inputs and reliability data (cycles to failure), the system output. After the neural network is trained with sufficient data. the validation data are used to verify the neural networks provided accurate reliability estimates. Now, the reliability of a new proposed MEMS device can be estimated by using the appropriate trained neural networks developed in this work.
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
Reliability models applicable to space telescope solar array assembly system
NASA Technical Reports Server (NTRS)
Patil, S. A.
1986-01-01
A complex system may consist of a number of subsystems with several components in series, parallel, or combination of both series and parallel. In order to predict how well the system will perform, it is necessary to know the reliabilities of the subsystems and the reliability of the whole system. The objective of the present study is to develop mathematical models of the reliability which are applicable to complex systems. The models are determined by assuming k failures out of n components in a subsystem. By taking k = 1 and k = n, these models reduce to parallel and series models; hence, the models can be specialized to parallel, series combination systems. The models are developed by assuming the failure rates of the components as functions of time and as such, can be applied to processes with or without aging effects. The reliability models are further specialized to Space Telescope Solar Arrray (STSA) System. The STSA consists of 20 identical solar panel assemblies (SPA's). The reliabilities of the SPA's are determined by the reliabilities of solar cell strings, interconnects, and diodes. The estimates of the reliability of the system for one to five years are calculated by using the reliability estimates of solar cells and interconnects given n ESA documents. Aging effects in relation to breaks in interconnects are discussed.
NASA Astrophysics Data System (ADS)
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, more and more reliability experts have noticed the roles of subjective information in the reliability design of complex system. Therefore, based on the certain numbers of experiment data and expert judgments, we have divided the reliability estimation based on distribution hypothesis into cognition process and reliability calculation. Consequently, for an illustration of this modification, we have taken the information fusion based on intuitional fuzzy belief functions as the diagnosis model of cognition process, and finished the reliability estimation for the open function of cabin door affected by the imprecise judgment corresponding to distribution hypothesis.
Estimates Of The Orbiter RSI Thermal Protection System Thermal Reliability
NASA Technical Reports Server (NTRS)
Kolodziej, P.; Rasky, D. J.
2002-01-01
In support of the Space Shuttle Orbiter post-flight inspection, structure temperatures are recorded at selected positions on the windward, leeward, starboard and port surfaces. Statistical analysis of this flight data and a non-dimensional load interference (NDLI) method are used to estimate the thermal reliability at positions were reusable surface insulation (RSI) is installed. In this analysis, structure temperatures that exceed the design limit define the critical failure mode. At thirty-three positions the RSI thermal reliability is greater than 0.999999 for the missions studied. This is not the overall system level reliability of the thermal protection system installed on an Orbiter. The results from two Orbiters, OV-102 and OV-105, are in good agreement. The original RSI designs on the OV-102 Orbital Maneuvering System pods, which had low reliability, were significantly improved on OV-105. The NDLI method was also used to estimate thermal reliability from an assessment of TPS uncertainties that was completed shortly before the first Orbiter flight. Results fiom the flight data analysis and the pre-flight assessment agree at several positions near each other. The NDLI method is also effective for optimizing RSI designs to provide uniform thermal reliability on the acreage surface of reusable launch vehicles.
NASA Astrophysics Data System (ADS)
Kim, K.-h.; Oh, T.-s.; Park, K.-r.; Lee, J. H.; Ghim, Y.-c.
2017-11-01
One factor determining the reliability of measurements of electron temperature using a Thomson scattering (TS) system is transmittance of the optical bandpass filters in polychromators. We investigate the system performance as a function of electron temperature to determine reliable range of measurements for a given set of the optical bandpass filters. We show that such a reliability, i.e., both bias and random errors, can be obtained by building a forward model of the KSTAR TS system to generate synthetic TS data with the prescribed electron temperature and density profiles. The prescribed profiles are compared with the estimated ones to quantify both bias and random errors.
On modeling human reliability in space flights - Redundancy and recovery operations
NASA Astrophysics Data System (ADS)
Aarset, M.; Wright, J. F.
The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.
Program for computer aided reliability estimation
NASA Technical Reports Server (NTRS)
Mathur, F. P. (Inventor)
1972-01-01
A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
Examples of Nonconservatism in the CARE 3 Program
NASA Technical Reports Server (NTRS)
Dotson, Kelly J.
1988-01-01
This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed; and, the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models which incorporate transient fault occurrences was not readily apparent. However, the source of much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.
Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.
Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander
2018-04-10
A lot of models have been made for predicting software reliability. The reliability models are restricted to using particular types of methodologies and restricted number of parameters. There are a number of techniques and methodologies that may be used for reliability prediction. There is need to focus on parameters consideration while estimating reliability. The reliability of a system may increase or decreases depending on the selection of different parameters used. Thus there is need to identify factors that heavily affecting the reliability of the system. In present days, reusability is mostly used in the various area of research. Reusability is the basis of Component-Based System (CBS). The cost, time and human skill can be saved using Component-Based Software Engineering (CBSE) concepts. CBSE metrics may be used to assess those techniques which are more suitable for estimating system reliability. Soft computing is used for small as well as large-scale problems where it is difficult to find accurate results due to uncertainty or randomness. Several possibilities are available to apply soft computing techniques in medicine related problems. Clinical science of medicine using fuzzy-logic, neural network methodology significantly while basic science of medicine using neural-networks-genetic algorithm most frequently and preferably. There is unavoidable interest shown by medical scientists to use the various soft computing methodologies in genetics, physiology, radiology, cardiology and neurology discipline. CBSE boost users to reuse the past and existing software for making new products to provide quality with a saving of time, memory space, and money. This paper focused on assessment of commonly used soft computing technique like Genetic Algorithm (GA), Neural-Network (NN), Fuzzy Logic, Support Vector Machine (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). This paper presents working of soft computing techniques and assessment of soft computing techniques to predict reliability. The parameter considered while estimating and prediction of reliability are also discussed. This study can be used in estimation and prediction of the reliability of various instruments used in the medical system, software engineering, computer engineering and mechanical engineering also. These concepts can be applied to both software and hardware, to predict the reliability using CBSE.
Proposed Reliability/Cost Model
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1982-01-01
New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.
System reliability approaches for advanced propulsion system structures
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Mahadevan, S.
1991-01-01
This paper identifies significant issues that pertain to the estimation and use of system reliability in the design of advanced propulsion system structures. Linkages between the reliabilities of individual components and their effect on system design issues such as performance, cost, availability, and certification are examined. The need for system reliability computation to address the continuum nature of propulsion system structures and synergistic progressive damage modes has been highlighted. Available system reliability models are observed to apply only to discrete systems. Therefore a sequential structural reanalysis procedure is formulated to rigorously compute the conditional dependencies between various failure modes. The method is developed in a manner that supports both top-down and bottom-up analyses in system reliability.
Methodology for Software Reliability Prediction. Volume 1.
1987-11-01
SPACECRAFT 0 MANNED SPACECRAFT B ATCH SYSTEM AIRBORNE AVIONICS 0 UNMANNED EVENT C014TROL a REAL TIME CLOSED 0 UNMANNED SPACECRAFT LOOP OPERATINS SPACECRAFT...software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the...specification, prediction, estimation, and assessment of software reliability. Data from 59 systems , representing over 5 million lines of code, were
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, J.R.; Heger, A.S.; Koen, B.V.
1984-04-01
This report is the result of a preliminary feasibility study of the applicability of Stein and related parametric empirical Bayes (PEB) estimators to the Nuclear Plant Reliability Data System (NPRDS). A new estimator is derived for the means of several independent Poisson distributions with different sampling times. This estimator is applied to data from NPRDS in an attempt to improve failure rate estimation. Theoretical and Monte Carlo results indicate that the new PEB estimator can perform significantly better than the standard maximum likelihood estimator if the estimation of the individual means can be combined through the loss function or throughmore » a parametric class of prior distributions.« less
Cost Estimation of Software Development and the Implications for the Program Manager
1992-06-01
Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator ( SPQR \\20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator ( SPQR /20) and the Software Architecture Sizing and...Estimator ( SPQR /20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome
System reliability of randomly vibrating structures: Computational modeling and laboratory testing
NASA Astrophysics Data System (ADS)
Sundar, V. S.; Ammanagi, S.; Manohar, C. S.
2015-09-01
The problem of determination of system reliability of randomly vibrating structures arises in many application areas of engineering. We discuss in this paper approaches based on Monte Carlo simulations and laboratory testing to tackle problems of time variant system reliability estimation. The strategy we adopt is based on the application of Girsanov's transformation to the governing stochastic differential equations which enables estimation of probability of failure with significantly reduced number of samples than what is needed in a direct simulation study. Notably, we show that the ideas from Girsanov's transformation based Monte Carlo simulations can be extended to conduct laboratory testing to assess system reliability of engineering structures with reduced number of samples and hence with reduced testing times. Illustrative examples include computational studies on a 10-degree of freedom nonlinear system model and laboratory/computational investigations on road load response of an automotive system tested on a four-post test rig.
Jones, Terry L; Schlegel, Cara
2014-02-01
Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to insure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real time location systems (RTLS), a pilot study was conducted to assess efficacy (method agreement) of RTLS time use; inter-rater reliability of RTLS time-use estimates; and associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted. © 2013 Wiley Periodicals, Inc.
Advanced reliability modeling of fault-tolerant computer-based systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1982-01-01
Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.
PREDICTION OF RELIABILITY IN BIOGRAPHICAL QUESTIONNAIRES.
ERIC Educational Resources Information Center
STARRY, ALLAN R.
THE OBJECTIVES OF THIS STUDY WERE (1) TO DEVELOP A GENERAL CLASSIFICATION SYSTEM FOR LIFE HISTORY ITEMS, (2) TO DETERMINE TEST-RETEST RELIABILITY ESTIMATES, AND (3) TO ESTIMATE RESISTANCE TO EXAMINEE FAKING, FOR REPRESENTATIVE BIOGRAPHICAL QUESTIONNAIRES. TWO 100-ITEM QUESTIONNAIRES WERE CONSTRUCTED THROUGH RANDOM ASSIGNMENT BY CONTENT AREA OF 200…
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
NASA Technical Reports Server (NTRS)
Juhasz, A. J.; Bloomfield, H. S.
1985-01-01
A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
Adaptive vehicle motion estimation and prediction
NASA Astrophysics Data System (ADS)
Zhao, Liang; Thorpe, Chuck E.
1999-01-01
Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a significant technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering the conditions that some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
2016-04-30
Dabkowski, and Dixit (2015), we demonstrate that the DoDAF models required pre–MS A map to 14 of the 18 parameters of the Constructive Systems...engineering effort in complex systems. Saarbrücken, Germany: VDM Verlag. Valerdi, R., Dabkowski, M., & Dixit , I. (2015). Reliability improvement of...R., Dabkowski, M., & Dixit , I. (2015). Reliability Improvement of Major Defense Acquisition Program Cost Estimates – Mapping DoDAF to COSYSMO
Data Applicability of Heritage and New Hardware for Launch Vehicle System Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan Mohammad; Novack, Steven
2015-01-01
Many launch vehicle systems are designed and developed using heritage and new hardware. In most cases, the heritage hardware undergoes modifications to fit new functional system requirements, impacting the failure rates and, ultimately, the reliability data. New hardware, which lacks historical data, is often compared to like systems when estimating failure rates. Some qualification of applicability for the data source to the current system should be made. Accurately characterizing the reliability data applicability and quality under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This presentation will demonstrate a data-source classification method that ranks reliability data according to applicability and quality criteria to a new launch vehicle. This method accounts for similarities/dissimilarities in source and applicability, as well as operating environments like vibrations, acoustic regime, and shock. This classification approach will be followed by uncertainty-importance routines to assess the need for additional data to reduce uncertainty.
Estimating the Reliability of a Soyuz Spacecraft Mission
NASA Technical Reports Server (NTRS)
Lutomski, Michael G.; Farnham, Steven J., II; Grant, Warren C.
2010-01-01
Once the US Space Shuttle retires in 2010, the Russian Soyuz Launcher and Soyuz Spacecraft will comprise the only means for crew transportation to and from the International Space Station (ISS). The U.S. Government and NASA have contracted for crew transportation services to the ISS with Russia. The resulting implications for the US space program including issues such as astronaut safety must be carefully considered. Are the astronauts and cosmonauts safer on the Soyuz than the Space Shuttle system? Is the Soyuz launch system more robust than the Space Shuttle? Is it safer to continue to fly the 30 year old Shuttle fleet for crew transportation and cargo resupply than the Soyuz? Should we extend the life of the Shuttle Program? How does the development of the Orion/Ares crew transportation system affect these decisions? The Soyuz launcher has been in operation for over 40 years. There have been only two loss of life incidents and two loss of mission incidents. Given that the most recent incident took place in 1983, how do we determine current reliability of the system? Do failures of unmanned Soyuz rockets impact the reliability of the currently operational man-rated launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth and how would that be reflected in future estimates of success? NASA s next manned rocket and spacecraft development project is currently underway. Though the projects ultimate goal is to return to the Moon and then to Mars, the launch vehicle and spacecraft s first mission will be for crew transportation to and from the ISS. The reliability targets are currently several times higher than the Shuttle and possibly even the Soyuz. Can these targets be compared to the reliability of the Soyuz to determine whether they are realistic and achievable? To help answer these questions this paper will explore how to estimate the reliability of the Soyuz Launcher/Spacecraft system, compare it to the Space Shuttle, and its potential impacts for the future of manned spaceflight. Specifically it will look at estimating the Loss of Mission (LOM) probability using historical data, reliability growth, and Probabilistic Risk Assessment techniques
Reliability/safety analysis of a fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goddman, H. A.
1980-01-01
An analysis technique has been developed to estimate the reliability of a very complex, safety-critical system by constructing a diagram of the reliability equations for the total system. This diagram has many of the characteristics of a fault-tree or success-path diagram, but is much easier to construct for complex redundant systems. The diagram provides insight into system failure characteristics and identifies the most likely failure modes. A computer program aids in the construction of the diagram and the computation of reliability. Analysis of the NASA F-8 Digital Fly-by-Wire Flight Control System is used to illustrate the technique.
Pre-Proposal Assessment of Reliability for Spacecraft Docking with Limited Information
NASA Technical Reports Server (NTRS)
Brall, Aron
2013-01-01
This paper addresses the problem of estimating the reliability of a critical system function as well as its impact on the system reliability when limited information is available. The approach addresses the basic function reliability, and then the impact of multiple attempts to accomplish the function. The dependence of subsequent attempts on prior failure to accomplish the function is also addressed. The autonomous docking of two spacecraft was the specific example that generated the inquiry, and the resultant impact on total reliability generated substantial interest in presenting the results due to the relative insensitivity of overall performance to basic function reliability and moderate degradation given sufficient attempts to try and accomplish the required goal. The application of the methodology allows proper emphasis on the characteristics that can be estimated with some knowledge, and to insulate the integrity of the design from those characteristics that can't be properly estimated with any rational value of uncertainty. The nature of NASA's missions contains a great deal of uncertainty due to the pursuit of new science or operations. This approach can be applied to any function where multiple attempts at success, with or without degradation, are allowed.
Alwan, Faris M; Baharum, Adam; Hassan, Geehan S
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However a few research papers exist in literature. This paper proposes a methodology for assessing the reliability of 33/11 Kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimation for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% in each 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems for both the maintenance of power systems models and preventive maintenance models.
Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.
2013-01-01
The reliability of the electrical distribution system is a contemporary research field due to diverse applications of electricity in everyday life and diverse industries. However a few research papers exist in literature. This paper proposes a methodology for assessing the reliability of 33/11 Kilovolt high-power stations based on average time between failures. The objective of this paper is to find the optimal fit for the failure data via time between failures. We determine the parameter estimation for all components of the station. We also estimate the reliability value of each component and the reliability value of the system as a whole. The best fitting distribution for the time between failures is a three parameter Dagum distribution with a scale parameter and shape parameters and . Our analysis reveals that the reliability value decreased by 38.2% in each 30 days. We believe that the current paper is the first to address this issue and its analysis. Thus, the results obtained in this research reflect its originality. We also suggest the practicality of using these results for power systems for both the maintenance of power systems models and preventive maintenance models. PMID:23936346
Bourret, A; Garant, D
2017-03-01
Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.
Care 3 phase 2 report, maintenance manual
NASA Technical Reports Server (NTRS)
Bryant, L. A.; Stiffler, J. J.
1982-01-01
CARE 3 (Computer-Aided Reliability Estimation, version three) is a computer program designed to help estimate the reliability of complex, redundant systems. Although the program can model a wide variety of redundant structures, it was developed specifically for fault-tolerant avionics systems--systems distinguished by the need for extremely reliable performance since a system failure could well result in the loss of human life. It substantially generalizes the class of redundant configurations that could be accommodated, and includes a coverage model to determine the various coverage probabilities as a function of the applicable fault recovery mechanisms (detection delay, diagnostic scheduling interval, isolation and recovery delay, etc.). CARE 3 further generalizes the class of system structures that can be modeled and greatly expands the coverage model to take into account such effects as intermittent and transient faults, latent faults, error propagation, etc.
Reliability-Based Control Design for Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.
2005-01-01
This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.
Large-scale-system effectiveness analysis. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Foster, J.W.
1979-11-01
Objective of the research project has been the investigation and development of methods for calculating system reliability indices which have absolute, and measurable, significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is 'What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begin in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement, These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques
NASA Technical Reports Server (NTRS)
Hardy, Terry L.; Rapp, Douglas C.
1994-01-01
The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
Wagner, Brian J.; Gorelick, Steven M.
1986-01-01
A simulation nonlinear multiple-regression methodology for estimating parameters that characterize the transport of contaminants is developed and demonstrated. Finite difference contaminant transport simulation is combined with a nonlinear weighted least squares multiple-regression procedure. The technique provides optimal parameter estimates and gives statistics for assessing the reliability of these estimates under certain general assumptions about the distributions of the random measurement errors. Monte Carlo analysis is used to estimate parameter reliability for a hypothetical homogeneous soil column for which concentration data contain large random measurement errors. The value of data collected spatially versus data collected temporally was investigated for estimation of velocity, dispersion coefficient, effective porosity, first-order decay rate, and zero-order production. The use of spatial data gave estimates that were 2–3 times more reliable than estimates based on temporal data for all parameters except velocity. Comparison of estimated linear and nonlinear confidence intervals based upon Monte Carlo analysis showed that the linear approximation is poor for dispersion coefficient and zero-order production coefficient when data are collected over time. In addition, examples demonstrate transport parameter estimation for two real one-dimensional systems. First, the longitudinal dispersivity and effective porosity of an unsaturated soil are estimated using laboratory column data. We compare the reliability of estimates based upon data from individual laboratory experiments versus estimates based upon pooled data from several experiments. Second, the simulation nonlinear regression procedure is extended to include an additional governing equation that describes delayed storage during contaminant transport. The model is applied to analyze the trends, variability, and interrelationship of parameters in a mourtain stream in northern California.
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques are established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible, (2) a complex system design technique, fault tolerance, (3) system reliability dominated by errors due to flaws in the system definition, and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
Edouard, Pascal; Junge, Astrid; Kiss-Polauf, Marianna; Ramirez, Christophe; Sousa, Monica; Timpka, Toomas; Branco, Pedro
2018-03-01
The quality of epidemiological injury data depends on the reliability of reporting to an injury surveillance system. Ascertaining whether all physicians/physiotherapists report the same information for the same injury case is of major interest to determine data validity. The aim of this study was therefore to analyse the data collection reliability through the analysis of the interrater reliability. Cross-sectional survey. During the 2016 European Athletics Advanced Athletics Medicine Course in Amsterdam, all national medical teams were asked to complete seven virtual case reports on a standardised injury report form using the same definitions and classifications of injuries as the international athletics championships injury surveillance protocol. The completeness of data and the Fleiss' kappa coefficients for the inter-rater reliability were calculated for: sex, age, event, circumstance, location, type, assumed cause and estimated time-loss. Forty-one team physicians and physiotherapists of national medical teams participated in the study (response rate 89.1%). Data completeness was 96.9%. The Fleiss' kappa coefficients were: almost perfect for sex (k=1), injury location (k=0.991), event (k=0.953), circumstance (k=0.942), and age (k=0.870), moderate for type (k=0.507), fair for assumed cause (k=0.394), and poor for estimated time-loss (k=0.155). The injury surveillance system used during international athletics championships provided reliable data for "sex", "location", "event", "circumstance", and "age". More caution should be taken for "assumed cause" and "type", and even more for "estimated time-loss". This injury surveillance system displays satisfactory data quality (reliable data and high data completeness), and thus, can be recommended as tool to collect epidemiology information on injuries during international athletics championships. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Application of a truncated normal failure distribution in reliability testing
NASA Technical Reports Server (NTRS)
Groves, C., Jr.
1968-01-01
Statistical truncated normal distribution function is applied as a time-to-failure distribution function in equipment reliability estimations. Age-dependent characteristics of the truncated function provide a basis for formulating a system of high-reliability testing that effectively merges statistical, engineering, and cost considerations.
Identification of the contribution of the ankle and hip joints to multi-segmental balance control
2013-01-01
Background Human stance involves multiple segments, including the legs and trunk, and requires coordinated actions of both. A novel method was developed that reliably estimates the contribution of the left and right leg (i.e., the ankle and hip joints) to the balance control of individual subjects. Methods The method was evaluated using simulations of a double-inverted pendulum model and the applicability was demonstrated with an experiment with seven healthy and one Parkinsonian participant. Model simulations indicated that two perturbations are required to reliably estimate the dynamics of a double-inverted pendulum balance control system. In the experiment, two multisine perturbation signals were applied simultaneously. The balance control system dynamic behaviour of the participants was estimated by Frequency Response Functions (FRFs), which relate ankle and hip joint angles to joint torques, using a multivariate closed-loop system identification technique. Results In the model simulations, the FRFs were reliably estimated, also in the presence of realistic levels of noise. In the experiment, the participants responded consistently to the perturbations, indicated by low noise-to-signal ratios of the ankle angle (0.24), hip angle (0.28), ankle torque (0.07), and hip torque (0.33). The developed method could detect that the Parkinson patient controlled his balance asymmetrically, that is, the right ankle and hip joints produced more corrective torque. Conclusion The method allows for a reliable estimate of the multisegmental feedback mechanism that stabilizes stance, of individual participants and of separate legs. PMID:23433148
Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2016-02-01
This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure tomore » run. An eight hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for standby systems industry-wide frequency of start demands, and run hours per reactor year for runs of less than or equal to one hour.« less
The reliability of the Glasgow Coma Scale: a systematic review.
Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R
2016-01-01
The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems-failure mode and effects analysis-to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example. Reliability analysis appears an attractive way to integrate information from the substantial investment in detailed assessments for stockpile delivery and dispensing to provide a view of likely future response performance.
Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model
NASA Astrophysics Data System (ADS)
Yuan, Zhongda; Deng, Junxiang; Wang, Dawei
2018-02-01
Aero-engine is a complex mechanical electronic system, based on analysis of reliability of mechanical electronic system, Weibull distribution model has an irreplaceable role. Till now, only two-parameter Weibull distribution model and three-parameter Weibull distribution are widely used. Due to diversity of engine failure modes, there is a big error with single Weibull distribution model. By contrast, a variety of engine failure modes can be taken into account with mixed Weibull distribution model, so it is a good statistical analysis model. Except the concept of dynamic weight coefficient, in order to make reliability estimation result more accurately, three-parameter correlation coefficient optimization method is applied to enhance Weibull distribution model, thus precision of mixed distribution reliability model is improved greatly. All of these are advantageous to popularize Weibull distribution model in engineering applications.
On decentralized estimation. [for large linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1978-01-01
A multilevel scheme is proposed to construct decentralized estimators for large linear systems. The scheme is numerically attractive since only observability tests of low-order subsystems are required. Equally important is the fact that the constructed estimators are reliable under structural perturbations and can tolerate a wide range of nonlinearities in coupling among the subsystems.
Reliability and maintainability assessment factors for reliable fault-tolerant systems
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1984-01-01
A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. The numerous factors that potentially have a degrading effect on system reliability and the ways in which these factors that are peculiar to highly reliable fault tolerant systems are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.
Reliability of mobile systems in construction
NASA Astrophysics Data System (ADS)
Narezhnaya, Tamara; Prykina, Larisa
2017-10-01
The purpose of the article is to analyze the influence of the mobility of construction production in the article taking into account the properties of reliability and readiness. Basing on the studied systems the effectiveness and efficiency is estimated. The construction system is considered to be the complete organizational structure providing creation or updating of construction facilities. At the same time the production sphere of these systems joins the production on the building site itself, material and technical resources of the construction production and live labour in these spheres within the construction dynamics. The author concludes, that the estimation of the degree of mobility of systems the of construction production makes a great positive effect in the project.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability of failure sensitivities determined from probabilistic reliability methods as well a manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
Thermoelectric Outer Planets Spacecraft (TOPS)
NASA Technical Reports Server (NTRS)
1973-01-01
The research and advanced development work is reported on a ballistic-mode, outer planet spacecraft using radioisotope thermoelectric generator (RTG) power. The Thermoelectric Outer Planet Spacecraft (TOPS) project was established to provide the advanced systems technology that would allow the realistic estimates of performance, cost, reliability, and scheduling that are required for an actual flight mission. A system design of the complete RTG-powered outer planet spacecraft was made; major technical innovations of certain hardware elements were designed, developed, and tested; and reliability and quality assurance concepts were developed for long-life requirements. At the conclusion of its active phase, the TOPS Project reached its principal objectives: a development and experience base was established for project definition, and for estimating cost, performance, and reliability; an understanding of system and subsystem capabilities for successful outer planets missions was achieved. The system design answered long-life requirements with massive redundancy, controlled by on-board analysis of spacecraft performance data.
Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)
NASA Astrophysics Data System (ADS)
Kauffmann, Paul J.
1994-12-01
The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.
Reliability analysis in the Office of Safety, Environmental, and Mission Assurance (OSEMA)
NASA Technical Reports Server (NTRS)
Kauffmann, Paul J.
1994-01-01
The technical personnel in the SEMA office are working to provide the highest degree of value-added activities to their support of the NASA Langley Research Center mission. Management perceives that reliability analysis tools and an understanding of a comprehensive systems approach to reliability will be a foundation of this change process. Since the office is involved in a broad range of activities supporting space mission projects and operating activities (such as wind tunnels and facilities), it was not clear what reliability tools the office should be familiar with and how these tools could serve as a flexible knowledge base for organizational growth. Interviews and discussions with the office personnel (both technicians and engineers) revealed that job responsibilities ranged from incoming inspection to component or system analysis to safety and risk. It was apparent that a broad base in applied probability and reliability along with tools for practical application was required by the office. A series of ten class sessions with a duration of two hours each was organized and scheduled. Hand-out materials were developed and practical examples based on the type of work performed by the office personnel were included. Topics covered were: Reliability Systems - a broad system oriented approach to reliability; Probability Distributions - discrete and continuous distributions; Sampling and Confidence Intervals - random sampling and sampling plans; Data Analysis and Estimation - Model selection and parameter estimates; and Reliability Tools - block diagrams, fault trees, event trees, FMEA. In the future, this information will be used to review and assess existing equipment and processes from a reliability system perspective. An analysis of incoming materials sampling plans was also completed. This study looked at the issues associated with Mil Std 105 and changes for a zero defect acceptance sampling plan.
Remotely piloted vehicle: Application of the GRASP analysis method
NASA Technical Reports Server (NTRS)
Andre, W. L.; Morris, J. B.
1981-01-01
The application of General Reliability Analysis Simulation Program (GRASP) to the remotely piloted vehicle (RPV) system is discussed. The model simulates the field operation of the RPV system. By using individual component reliabilities, the overall reliability of the RPV system is determined. The results of the simulations are given in operational days. The model represented is only a basis from which more detailed work could progress. The RPV system in this model is based on preliminary specifications and estimated values. The use of GRASP from basic system definition, to model input, and to model verification is demonstrated.
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques applied to the analysis of the reliability of highly reliable computer systems for future commercial aircraft are described. The lack of credible precision in reliability estimates obtained by analytical modeling techniques is first established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Next, the technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. Use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques. Finally an illustrative example is presented to demonstrate from actual use the promise of the proposed application of emulation.
NASA Technical Reports Server (NTRS)
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
Reliability and Maintainability Data for Lead Lithium Cooling Systems
Cadwallader, Lee
2016-11-16
This article presents component failure rate data for use in assessment of lead lithium cooling systems. Best estimate data applicable to this liquid metal coolant is presented. Repair times for similar components are also referenced in this work. These data support probabilistic safety assessment and reliability, availability, maintainability and inspectability analyses.
Validity and reliability of Optojump photoelectric cells for estimating vertical jump height.
Glatthorn, Julia F; Gouge, Sylvain; Nussbaumer, Silvio; Stauffacher, Simone; Impellizzeri, Franco M; Maffiuletti, Nicola A
2011-02-01
Vertical jump is one of the most prevalent acts performed in several sport activities. It is therefore important to ensure that the measurements of vertical jump height made as a part of research or athlete support work have adequate validity and reliability. The aim of this study was to evaluate concurrent validity and reliability of the Optojump photocell system (Microgate, Bolzano, Italy) with force plate measurements for estimating vertical jump height. Twenty subjects were asked to perform maximal squat jumps and countermovement jumps, and flight time-derived jump heights obtained by the force plate were compared with those provided by Optojump, to examine its concurrent (criterion-related) validity (study 1). Twenty other subjects completed the same jump series on 2 different occasions (separated by 1 week), and jump heights of session 1 were compared with session 2, to investigate test-retest reliability of the Optojump system (study 2). Intraclass correlation coefficients (ICCs) for validity were very high (0.997-0.998), even if a systematic difference was consistently observed between force plate and Optojump (-1.06 cm; p < 0.001). Test-retest reliability of the Optojump system was excellent, with ICCs ranging from 0.982 to 0.989, low coefficients of variation (2.7%), and low random errors (±2.81 cm). The Optojump photocell system demonstrated strong concurrent validity and excellent test-retest reliability for the estimation of vertical jump height. We propose the following equation that allows force plate and Optojump results to be used interchangeably: force plate jump height (cm) = 1.02 × Optojump jump height + 0.29. In conclusion, the use of Optojump photoelectric cells is legitimate for field-based assessments of vertical jump height.
A Bayesian approach to reliability and confidence
NASA Technical Reports Server (NTRS)
Barnes, Ron
1989-01-01
The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
2011-11-01
assessment to quality of localization/characterization estimates. This protocol includes four critical components: (1) a procedure to identify the...critical factors impacting SHM system performance; (2) a multistage or hierarchical approach to SHM system validation; (3) a model -assisted evaluation...Lindgren, E. A ., Buynak, C. F., Steffes, G., Derriso, M., “ Model -assisted Probabilistic Reliability Assessment for Structural Health Monitoring
System and Software Reliability (C103)
NASA Technical Reports Server (NTRS)
Wallace, Dolores
2003-01-01
Within the last decade better reliability models (hardware. software, system) than those currently used have been theorized and developed but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are no accurate enough. New paradigms of development (e.g. OO) have appeared and associated reliability models have been proposed posed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form that they can be used on software intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assess in GSFC projects.
Reliability optimization design of the gear modification coefficient based on the meshing stiffness
NASA Astrophysics Data System (ADS)
Wang, Qianqian; Wang, Hui
2018-04-01
Since the time varying meshing stiffness of gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness to reduce vibration. Based on the effect of gear modification coefficient on the meshing stiffness, considering the random parameters, reliability optimization design of the gear modification is researched. The dimension reduction and point estimation method is used to estimate the moment of the limit state function, and the reliability is obtained by the forth moment method. The cooperation of the dynamic amplitude results before and after optimization indicates that the research is useful for the reduction of vibration and noise and the improvement of the reliability.
An approximation formula for a class of fault-tolerant computers
NASA Technical Reports Server (NTRS)
White, A. L.
1986-01-01
An approximation formula is derived for the probability of failure for fault-tolerant process-control computers. These computers use redundancy and reconfiguration to achieve high reliability. Finite-state Markov models capture the dynamic behavior of component failure and system recovery, and the approximation formula permits an estimation of system reliability by an easy examination of the model.
Reliability of Space-Shuttle Pressure Vessels with Random Batch Effects
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Kulkarni, Pandurang M.
2000-01-01
In this article we revisit the problem of estimating the joint reliability against failure by stress rupture of a group of fiber-wrapped pressure vessels used on Space-Shuttle missions. The available test data were obtained from an experiment conducted at the U.S. Department of Energy Lawrence Livermore Laboratory (LLL) in which scaled-down vessels were subjected to life testing at four accelerated levels of pressure. We estimate the reliability assuming that both the Shuttle and LLL vessels were chosen at random in a two-stage process from an infinite population with spools of fiber as the primary sampling unit. Two main objectives of this work are: (1) to obtain practical estimates of reliability taking into account random spool effects and (2) to obtain a realistic assessment of estimation accuracy under the random model. Here, reliability is calculated in terms of a 'system' of 22 fiber-wrapped pressure vessels, taking into account typical pressures and exposure times experienced by Shuttle vessels. Comparisons are made with previous studies. The main conclusion of this study is that, although point estimates of reliability are still in the 'comfort zone,' it is advisable to plan for replacement of the pressure vessels well before the expected Lifetime of 100 missions per Shuttle Orbiter. Under a random-spool model, there is simply not enough information in the LLL data to provide reasonable assurance that such replacement would not be necessary.
Constraining uncertainties in water supply reliability in a tropical data scarce basin
NASA Astrophysics Data System (ADS)
Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte
2015-04-01
Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability is often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an approximately area of 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability allowing better planning of irrigated agriculture and dependable water allocation decisions.
NDE reliability and probability of detection (POD) evolution and paradigm shift
NASA Astrophysics Data System (ADS)
Singh, Surendra
2014-02-01
The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed "Have Cracks - Will Travel" or in short "Have Cracks" by Lockheed Georgia Company for US Air Force during 1974-1978. This and other studies ultimately led to a series of developments in the field of reliability and POD starting from the introduction of fracture mechanics and Damaged Tolerant Design (DTD) to statistical framework by Bernes and Hovey in 1981 for POD estimation to MIL-STD HDBK 1823 (1999) and 1823A (2009). During the last decade, various groups and researchers have further studied the reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and applying Bayesian Statistics. All and each of these developments had one objective, i.e., improving accuracy of life prediction in components that to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have a reliable detection and sizing of large flaws in components. Currently, POD is used for studying reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have been reported alike in meaning but POD is not NDE reliability. POD is a subset of the reliability that consists of six phases: 1) samples selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability &Reproducibility (Gage R&R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discuss rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables including human factors for estimating POD. Another objective is to list gaps between "hoped for" versus validated or fielded failed hardware.
Earthquake magnitude estimation using the τ c and P d method for earthquake early warning systems
NASA Astrophysics Data System (ADS)
Jin, Xing; Zhang, Hongcai; Li, Jun; Wei, Yongxiang; Ma, Qiang
2013-10-01
Earthquake early warning (EEW) systems are one of the most effective ways to reduce earthquake disaster. Earthquake magnitude estimation is one of the most important and also the most difficult parts of the entire EEW system. In this paper, based on 142 earthquake events and 253 seismic records that were recorded by the KiK-net in Japan, and aftershocks of the large Wenchuan earthquake in Sichuan, we obtained earthquake magnitude estimation relationships using the τ c and P d methods. The standard variances of magnitude calculation of these two formulas are ±0.65 and ±0.56, respectively. The P d value can also be used to estimate the peak ground motion of velocity, then warning information can be released to the public rapidly, according to the estimation results. In order to insure the stability and reliability of magnitude estimation results, we propose a compatibility test according to the natures of these two parameters. The reliability of the early warning information is significantly improved though this test.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. None of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only tho formal verification and program testing. but also the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in the statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10-4 or higher). The coming years shall address the methodologies to realistically estimate the impacts of various V&V techniques to system reliability and include the impact of operational risk to reliability assessment. Combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing with the aim of gaining higher confidence in software reliability assessment for high-assurance applications. B) Quantify the impact of these methods on software reliability. C) Demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain certain confidence level. D) Quantify and justify the reliability estimate for systems developed using various methods.
[Research & development on computer expert system for forensic bones estimation].
Zhao, Jun-ji; Zhang, Jan-zheng; Liu, Nin-guo
2005-08-01
To build an expert system for forensic bones estimation. By using the object oriented method, employing statistical data of forensic anthropology, combining the statistical data frame knowledge representation with productions and also using the fuzzy matching and DS evidence theory method. Software for forensic estimation of sex, age and height with opened knowledge base was designed. This system is reliable and effective, and it would be a good assistant of the forensic technician.
Care 3 model overview and user's guide, first revision
NASA Technical Reports Server (NTRS)
Bavuso, S. J.; Petersen, P. L.
1985-01-01
A manual was written to introduce the CARE III (Computer-Aided Reliability Estimation) capability to reliability and design engineers who are interested in predicting the reliability of highly reliable fault-tolerant systems. It was also structured to serve as a quick-look reference manual for more experienced users. The guide covers CARE III modeling and reliability predictions for execution in the CDC CYber 170 series computers, DEC VAX-11/700 series computer, and most machines that compile ANSI Standard FORTRAN 77.
Corroded Anchor Structure Stability/Reliability (CAS_Stab-R) Software for Hydraulic Structures
2017-12-01
This report describes software that provides a probabilistic estimate of time -to-failure for a corroding anchor strand system. These anchor...stability to the structure. A series of unique pull-test experiments conducted by Ebeling et al. (2016) at the U.S. Army Engineer Research and...Reliability (CAS_Stab-R) produces probabilistic Remaining Anchor Life time estimates for anchor cables based upon the direct corrosion rate for the
NASA Technical Reports Server (NTRS)
Sizlo, T. R.; Berg, R. A.; Gilles, D. L.
1979-01-01
An augmentation system for a 230 passenger, twin engine aircraft designed with a relaxation of conventional longitudinal static stability was developed. The design criteria are established and candidate augmentation system control laws and hardware architectures are formulated and evaluated with respect to reliability, flying qualities, and flight path tracking performance. The selected systems are shown to satisfy the interpreted regulatory safety and reliability requirements while maintaining the present DC 10 (study baseline) level of maintainability and reliability for the total flight control system. The impact of certification of the relaxed static stability augmentation concept is also estimated with regard to affected federal regulations, system validation plan, and typical development/installation costs.
A Web-Based System for Bayesian Benchmark Dose Estimation.
Shao, Kan; Shapiro, Andrew J
2018-01-11
Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
Jackson, T
2001-05-01
Casemix-funding systems for hospital inpatient care require a set of resource weights which will not inadvertently distort patterns of patient care. Few health systems have very good sources of cost information, and specific studies to derive empirical cost relativities are themselves costly. This paper reports a 5 year program of research into the use of data from hospital management information systems (clinical costing systems) to estimate resource relativities for inpatient hospital care used in Victoria's DRG-based payment system. The paper briefly describes international approaches to cost weight estimation. It describes the architecture of clinical costing systems, and contrasts process and job costing approaches to cost estimation. Techniques of data validation and reliability testing developed in the conduct of four of the first five of the Victorian Cost Weight Studies (1993-1998) are described. Improvement in sampling, data validity and reliability are documented over the course of the research program, the advantages of patient-level data are highlighted. The usefulness of these byproduct data for estimation of relative resource weights and other policy applications may be an important factor in hospital and health system decisions to invest in clinical costing technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CARLSON, A.B.
The document presents updated results of the preliminary reliability, availability, maintainability analysis performed for delivery of waste feed from tanks 241-AZ-101 and 241-AN-105 to British Nuclear Fuels Limited, inc. under the Tank Waste Remediation System Privatization Contract. The operational schedule delay risk is estimated and contributing factors are discussed.
Manual and automatic locomotion scoring systems in dairy cows: a review.
Schlageter-Tello, Andrés; Bokkers, Eddie A M; Koerkamp, Peter W G Groot; Van Hertem, Tom; Viazzi, Stefano; Romanini, Carlos E B; Halachmi, Ilan; Bahr, Claudia; Berckmans, Daniël; Lokhorst, Kees
2014-09-01
The objective of this review was to describe, compare and evaluate agreement, reliability, and validity of manual and automatic locomotion scoring systems (MLSSs and ALSSs, respectively) used in dairy cattle lameness research. There are many different types of MLSSs and ALSSs. Twenty-five MLSSs were found in 244 articles. MLSSs use different types of scale (ordinal or continuous) and different gait and posture traits need to be observed. The most used MLSS (used in 28% of the references) is based on asymmetric gait, reluctance to bear weight, and arched back, and is scored on a five-level scale. Fifteen ALSSs were found that could be categorized according to three approaches: (a) the kinetic approach measures forces involved in locomotion, (b) the kinematic approach measures time and distance of variables associated to limb movement and some specific posture variables, and (c) the indirect approach uses behavioural variables or production variables as indicators for impaired locomotion. Agreement and reliability estimates were scarcely reported in articles related to MLSSs. When reported, inappropriate statistical methods such as PABAK and Pearson and Spearman correlation coefficients were commonly used. Some of the most frequently used MLSSs were poorly evaluated for agreement and reliability. Agreement and reliability estimates for the original four-, five- or nine-level MLSS, expressed in percentage of agreement, kappa and weighted kappa, showed large ranges among and sometimes also within articles. After the transformation into a two-level scale, agreement and reliability estimates showed acceptable estimates (percentage of agreement ≥ 75%; kappa and weighted kappa ≥ 0.6), but still estimates showed a large variation between articles. Agreement and reliability estimates for ALSSs were not reported in any article. Several ALSSs use MLSSs as a reference for model calibration and validation. However, varying agreement and reliability estimates of MLSSs make a clear definition of a lameness case difficult, and thus affect the validity of ALSSs. MLSSs and ALSSs showed limited validity for hoof lesion detection and pain assessment. The utilization of MLSSs and ALSSs should aim to the prevention and efficient management of conditions that induce impaired locomotion. Long-term studies comparing MLSSs and ALSSs while applying various strategies to detect and control unfavourable conditions leading to impaired locomotion are required to determine the usefulness of MLSSs and ALSSs for securing optimal production and animal welfare in practice. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi
2018-05-01
Advances in asteroseismology of solar-like stars, now provide a unique method to estimate the stellar inclination i⋆. This enables to evaluate the spin-orbit angle of transiting planetary systems, in a complementary fashion to the Rossiter-McLaughlineffect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic under-estimate (over-estimate) bias for stars with high (low) inclinations. We derive analytical criteria for the reliable asteroseismic estimate, which indicates that reliable measurements are possible in the range of 20° ≲ i⋆ ≲ 80° only for stars with high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclination. Comparison of our asteroseismic estimate of vsin i⋆ against spectroscopic measurements indicates that the latter suffers from a large uncertainty possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed vsin i⋆ ≲ 5km/s. This reinforces earlier claims, and the stellar inclination estimated from the combination of measurements from spectroscopy and photometric variation for slowly rotating stars needs to be interpreted with caution.
An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments
Guthrie, Michael A.
2013-01-01
limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two degree of freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment.more » For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.« less
Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, D.; Brunett, A.; Passerini, S.
Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), and funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. Themore » mechanistic source term assessment attempted to provide a sequence specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.« less
Transmission overhaul estimates for partial and full replacement at repair
NASA Technical Reports Server (NTRS)
Savage, M.; Lewicki, D. G.
1991-01-01
Timely transmission overhauls increase in-flight service reliability greater than the calculated design reliabilities of the individual aircraft transmission components. Although necessary for aircraft safety, transmission overhauls contribute significantly to aircraft expense. Predictions of a transmission's maintenance needs at the design stage should enable the development of more cost effective and reliable transmissions in the future. The frequency is estimated of overhaul along with the number of transmissions or components needed to support the overhaul schedule. Two methods based on the two parameter Weibull statistical distribution for component life are used to estimate the time between transmission overhauls. These methods predict transmission lives for maintenance schedules which repair the transmission with a complete system replacement or repair only failed components of the transmission. An example illustrates the methods.
DOT National Transportation Integrated Search
2003-12-01
This study explores the on-time reliability benefits to potential users of a personalized advanced traveler information system (ATIS) providing real-time pre-trip roadway information for the Seattle morning peak period through the application of Heur...
Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System
Fischer, Brian J.; Peña, Jose L.
2016-01-01
Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. SIGNIFICANCE STATEMENT In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. PMID:26888922
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation; testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observation based on analysis leading to the Ground Systems Preliminary Design Review milestone.
NASA Technical Reports Server (NTRS)
Feldstein, J. F.
1977-01-01
Failure data from 16 commercial spacecraft were analyzed to evaluate failure trends, reliability growth, and effectiveness of tests. It was shown that the test programs were highly effective in ensuring a high level of in-orbit reliability. There was only a single catastrophic problem in 44 years of in-orbit operation on 12 spacecraft. The results also indicate that in-orbit failure rates are highly correlated with unit and systems test failure rates. The data suggest that test effectiveness estimates can be used to guide the content of a test program to ensure that in-orbit reliability goals are achieved.
NASA Technical Reports Server (NTRS)
Simmons, D. B.
1975-01-01
The DOMONIC system has been modified to run on the Univac 1108 and the CDC 6600 as well as the IBM 370 computer system. The DOMONIC monitor system has been implemented to gather data which can be used to optimize the DOMONIC system and to predict the reliability of software developed using DOMONIC. The areas of quality metrics, error characterization, program complexity, program testing, validation and verification are analyzed. A software reliability model for estimating program completion levels and one on which to base system acceptance have been developed. The DAVE system which performs flow analysis and error detection has been converted from the University of Colorado CDC 6400/6600 computer to the IBM 360/370 computer system for use with the DOMONIC system.
Developing Reliable Life Support for Mars
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2017-01-01
A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development t. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveal unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is under estimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and less certain expected reliability. A plan to develop reliable life support is needed to achieve the best possible reliability.
Challenges of Estimating the Annual Caseload of Severe Acute Malnutrition: The Case of Niger
Hallarou, Mahaman; Gérard, Jean-Christophe; Donnen, Philippe; Macq, Jean
2016-01-01
Introduction Reliable prospective estimates of annual severe acute malnutrition (SAM) caseloads for treatment are needed for policy decisions and planning of quality services in the context of competing public health priorities and limited resources. This paper compares the reliability of SAM caseloads of children 6–59 months of age in Niger estimated from prevalence at the start of the year and counted from incidence at the end of the year. Methods Secondary data from two health districts for 2012 and the country overall for 2013 were used to calculate annual caseload of SAM. Prevalence and coverage were extracted from survey reports, and incidence from weekly surveillance systems. Results The prospective caseload estimate derived from prevalence and duration of illness underestimated the true burden. Similar incidence was derived from two weekly surveillance systems, but differed from that obtained from the monthly system. Incidence conversion factors were two to five times higher than recommended. Discussion Obtaining reliable prospective caseloads was challenging because prevalence is unsuitable for estimating incidence of SAM. Different SAM indicators identified different SAM populations, and duration of illness, expected contact coverage and population figures were inaccurate. The quality of primary data measurement, recording and reporting affected incidence numbers from surveillance. Coverage estimated in population surveys was rarely available, and coverage obtained by comparing admissions with prospective caseload estimates was unrealistic or impractical. Conclusions Caseload estimates derived from prevalence are unreliable and should be used with caution. Policy and service decisions that depend on these numbers may weaken performance of service delivery. Niger may improve SAM surveillance by simplifying and improving primary data collection and methods using innovative information technologies for single data entry at the first contact with the health system. Lessons may be relevant for countries with a high burden of SAM, including for targeted emergency responses. PMID:27606677
Comparing capacity value estimation techniques for photovoltaic solar power
Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul
2012-09-28
In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intense reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to themore » inclusion of sun-tracking systems.« less
NDE reliability and probability of detection (POD) evolution and paradigm shift
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Surendra
2014-02-18
The subject of NDE Reliability and POD has gone through multiple phases since its humble beginning in the late 1960s. This was followed by several programs including the important one nicknamed “Have Cracks – Will Travel” or in short “Have Cracks” by Lockheed Georgia Company for US Air Force during 1974–1978. This and other studies ultimately led to a series of developments in the field of reliability and POD starting from the introduction of fracture mechanics and Damaged Tolerant Design (DTD) to statistical framework by Bernes and Hovey in 1981 for POD estimation to MIL-STD HDBK 1823 (1999) and 1823Amore » (2009). During the last decade, various groups and researchers have further studied the reliability and POD using Model Assisted POD (MAPOD), Simulation Assisted POD (SAPOD), and applying Bayesian Statistics. All and each of these developments had one objective, i.e., improving accuracy of life prediction in components that to a large extent depends on the reliability and capability of NDE methods. Therefore, it is essential to have a reliable detection and sizing of large flaws in components. Currently, POD is used for studying reliability and capability of NDE methods, though POD data offers no absolute truth regarding NDE reliability, i.e., system capability, effects of flaw morphology, and quantifying the human factors. Furthermore, reliability and POD have been reported alike in meaning but POD is not NDE reliability. POD is a subset of the reliability that consists of six phases: 1) samples selection using DOE, 2) NDE equipment setup and calibration, 3) System Measurement Evaluation (SME) including Gage Repeatability and Reproducibility (Gage R and R) and Analysis Of Variance (ANOVA), 4) NDE system capability and electronic and physical saturation, 5) acquiring and fitting data to a model, and data analysis, and 6) POD estimation. This paper provides an overview of all major POD milestones for the last several decades and discuss rationale for using Integrated Computational Materials Engineering (ICME), MAPOD, SAPOD, and Bayesian statistics for studying controllable and non-controllable variables including human factors for estimating POD. Another objective is to list gaps between “hoped for” versus validated or fielded failed hardware.« less
Validity and Reliability of a New Device (WIMU®) for Measuring Hamstring Muscle Extensibility.
Muyor, José M
2017-09-01
The aims of the current study were 1) to evaluate the validity of the WIMU ® system for measuring hamstring muscle extensibility in the passive straight leg raise (PSLR) test using an inclinometer for the criterion and 2) to determine the test-retest reliability of the WIMU ® system to measure hamstring muscle extensibility during the PSLR test. 55 subjects were evaluated on 2 separate occasions. Data from a Unilever inclinometer and WIMU ® system were collected simultaneously. Intraclass correlation coefficients (ICCs) for the validity were very high (0.983-1); a very low systematic bias (-0.21°--0.42°), random error (0.05°-0.04°) and standard error of the estimate (0.43°-0.34°) were observed (left-right leg, respectively) between the 2 devices (inclinometer and the WIMU ® system). The R 2 between the devices was 0.999 (p<0.001) in both the left and right legs. The test-retest reliability of the WIMU ® system was excellent, with ICCs ranging from 0.972-0.995, low coefficients of variation (0.01%), and a low standard error of the estimate (0.19-0.31°). The WIMU ® system showed strong concurrent validity and excellent test-retest reliability for the evaluation of hamstring muscle extensibility in the PSLR test. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Abdenov, A. Zh; Trushin, V. A.; Abdenova, G. A.
2018-01-01
The paper considers the questions of filling the relevant SIEM nodes based on calculations of objective assessments in order to improve the reliability of subjective expert assessments. The proposed methodology is necessary for the most accurate security risk assessment of information systems. This technique is also intended for the purpose of establishing real-time operational information protection in the enterprise information systems. Risk calculations are based on objective estimates of the adverse events implementation probabilities, predictions of the damage magnitude from information security violations. Calculations of objective assessments are necessary to increase the reliability of the proposed expert assessments.
Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven; Ring, Robert
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDK-217F tables. The paper will provide one example for assigning environmental factors uncertainty when translating between operating environments for the microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.
DOT National Transportation Integrated Search
2004-03-01
The ability of Advanced Traveler Information Systems (ATIS) to improve the on-time reliability of urban truck movements is evaluated through the application of the Heuristic On-Line Web- : Linked Arrival Time Estimation (HOWLATE) methodology. In HOWL...
NASA Astrophysics Data System (ADS)
Saini, K. K.; Sehgal, R. K.; Sethi, B. L.
2008-10-01
In this paper major reliability estimators are analyzed and there comparatively result are discussed. There strengths and weaknesses are evaluated in this case study. Each of the reliability estimators has certain advantages and disadvantages. Inter-rater reliability is one of the best ways to estimate reliability when your measure is an observation. However, it requires multiple raters or observers. As an alternative, you could look at the correlation of ratings of the same single observer repeated on two different occasions. Each of the reliability estimators will give a different value for reliability. In general, the test-retest and inter-rater reliability estimates will be lower in value than the parallel forms and internal consistency ones because they involve measuring at different times or with different raters. Since reliability estimates are often used in statistical analyses of quasi-experimental designs.
2013-10-21
depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at...state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... value ), the model in the application of aviation maintenance can provide a useful estimation of reliability at multiple levels. Adjusted survival
Lower Bounds to the Reliabilities of Factor Score Estimators.
Hessen, David J
2016-10-06
Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.
A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components
NASA Technical Reports Server (NTRS)
Matthews, William T.; Neal, Donald M.
1997-01-01
This paper introduces a dual active load path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components for an active dual path system, enhanced by a stress level bias in one component. The design is applied to a baseline design; a safe life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient of variation levels. Results showed that the biased dual path system lifetimes, for both the first element failure and residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999 and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Lockheed L-1101 avionic flight control redundant systems
NASA Technical Reports Server (NTRS)
Throndsen, E. O.
1976-01-01
The Lockheed L-1011 automatic flight control systems - yaw stability augmentation and automatic landing - are described in terms of their redundancies. The reliability objectives for these systems are discussed and related to in-service experience. In general, the availability of the stability augmentation system is higher than the original design requirement, but is commensurate with early estimates. The in-service experience with automatic landing is not sufficient to provide verification of Category 3 automatic landing system estimated availability.
Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.
Cazettes, Fanny; Fischer, Brian J; Peña, Jose L
2016-02-17
Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors 0270-6474/16/362101-10$15.00/0.
The Potential of Energy Storage Systems with Respect to Generation Adequacy and Economic Viability
NASA Astrophysics Data System (ADS)
Bradbury, Kyle Joseph
Intermittent energy resources, including wind and solar power, continue to be rapidly added to the generation fleet domestically and abroad. The variable power of these resources introduces new levels of stochasticity into electric interconnections that must be continuously balanced in order to maintain system reliability. Energy storage systems (ESSs) offer one potential option to compensate for the intermittency of renewables. ESSs for long-term storage (1-hour or greater), aside from a few pumped hydroelectric installations, are not presently in widespread use in the U.S. The deployment of ESSs would be most likely driven by either the potential for a strong internal rate of return (IRR) on investment and through significant benefits to system reliability that independent system operators (ISOs) could incentivize. To assess the potential of ESSs three objectives are addressed. (1) Evaluate the economic viability of energy storage for price arbitrage in real-time energy markets and determine system cost improvements for ESSs to become attractive investments. (2) Estimate the reliability impact of energy storage systems on the large-scale integration of intermittent generation. (3) Analyze the economic, environmental, and reliability tradeoffs associated with using energy storage in conjunction with stochastic generation. First, using real-time energy market price data from seven markets across the U.S. and the physical parameters of fourteen ESS technologies, the maximum potential IRR of each technology from price arbitrage was evaluated in each market, along with the optimal ESS system size. Additionally, the reductions in capital cost needed to achieve a 10% IRR were estimated for each ESS. The results indicate that the profit-maximizing size of an ESS is primarily determined by its technological characteristics (round-trip charge/discharge efficiency and self-discharge) and not market price volatility, which instead increases IRR. This analysis demonstrates that few ESS technologies are likely to be implemented by investors alone. Next, the effects of ESSs on system reliability are quantified. Using historic data for wind, solar, and conventional generation, a correlation-preserving, copula-transform model was implemented in conjunction with Markov chain Monte Carlo framework for estimating system reliability indices. Systems with significant wind and solar penetration (25% or greater), even with added energy storage capacity, resulted in considerable decreases in generation adequacy. Lastly, rather than analyzing the reliability and costs in isolation of one another, system reliability, cost, and emissions were analyzed in 3-space to quantify and visualize the system tradeoffs. The modeling results implied that ESSs perform similarly to natural gas combined cycle (NGCC) systems with respect to generation adequacy and system cost, with the primary difference being that the generation adequacy improvements are less for ESSs than that of NGCC systems and the increase in LCOE is greater for ESSs than NGCC systems. Although ESSs do not appear to offer greater benefits than NGCC systems for managing energy on time intervals of 1-hour or more, we conclude that future research into short-term power balancing applications of ESSs, in particular for frequency regulation, is necessary to understand the full potential of ESSs in modern electric interconnections.
Reliability and economy -- Hydro electricity for Iran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahromi-Shirazi, M.J.; Zarbakhsh, M.H.
1998-12-31
Reliability is the probability that a device or system will perform its function adequately, for the period of time intended, under the operating conditions intended. Reliability and economy are two important factors in operating any system, especially in power generation. Due to the high rate in population growth in Iran, the experts have estimated that the demand for electricity will be about 63,000 MW in the next 25 years, the installed power is now about 26,000 MW. Therefore, the energy policy decision made in Iran is to go to power generation by hydroelectric plants because of reliability, availability of watermore » resources and the economics of hydroelectric power.« less
NASA Astrophysics Data System (ADS)
Fenicia, Fabrizio; Reichert, Peter; Kavetski, Dmitri; Albert, Calro
2016-04-01
The calibration of hydrological models based on signatures (e.g. Flow Duration Curves - FDCs) is often advocated as an alternative to model calibration based on the full time series of system responses (e.g. hydrographs). Signature based calibration is motivated by various arguments. From a conceptual perspective, calibration on signatures is a way to filter out errors that are difficult to represent when calibrating on the full time series. Such errors may for example occur when observed and simulated hydrographs are shifted, either on the "time" axis (i.e. left or right), or on the "streamflow" axis (i.e. above or below). These shifts may be due to errors in the precipitation input (time or amount), and if not properly accounted in the likelihood function, may cause biased parameter estimates (e.g. estimated model parameters that do not reproduce the recession characteristics of a hydrograph). From a practical perspective, signature based calibration is seen as a possible solution for making predictions in ungauged basins. Where streamflow data are not available, it may in fact be possible to reliably estimate streamflow signatures. Previous research has for example shown how FDCs can be reliably estimated at ungauged locations based on climatic and physiographic influence factors. Typically, the goal of signature based calibration is not the prediction of the signatures themselves, but the prediction of the system responses. Ideally, the prediction of system responses should be accompanied by a reliable quantification of the associated uncertainties. Previous approaches for signature based calibration, however, do not allow reliable estimates of streamflow predictive distributions. Here, we illustrate how the Bayesian approach can be employed to obtain reliable streamflow predictive distributions based on signatures. A case study is presented, where a hydrological model is calibrated on FDCs and additional signatures. We propose an approach where the likelihood function for the signatures is derived from the likelihood for streamflow (rather than using an "ad-hoc" likelihood for the signatures as done in previous approaches). This likelihood is not easily tractable analytically and we therefore cannot apply "simple" MCMC methods. This numerical problem is solved using Approximate Bayesian Computation (ABC). Our result indicate that the proposed approach is suitable for producing reliable streamflow predictive distributions based on calibration to signature data. Moreover, our results provide indications on which signatures are more appropriate to represent the information content of the hydrograph.
Source Data Applicability Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven D.; Ring, Robert W.
2016-01-01
Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system where it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for assigning uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDK-217F tables. The paper will provide a case study example by translating Ground Benign (GB) and Ground Mobile (GM) to the Airborne Uninhabited Fighter (AUF) environment for three electronic components often found in space launch vehicle control systems. The classification method will be followed by uncertainty-importance routines to assess the need to for more applicable data to reduce uncertainty.
Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan
2017-09-05
The main objective of the introduced study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture and were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and the proposed architecture can improve the accuracy and reliability of the INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes.
Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan
2017-01-01
The main objective of the introduced study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture and were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and the proposed architecture can improve the accuracy and reliability of the INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes. PMID:28872629
Development of modelling algorithm of technological systems by statistical tests
NASA Astrophysics Data System (ADS)
Shemshura, E. A.; Otrokov, A. V.; Chernyh, V. G.
2018-03-01
The paper tackles the problem of economic assessment of design efficiency regarding various technological systems at the stage of their operation. The modelling algorithm of a technological system was performed using statistical tests and with account of the reliability index allows estimating the level of machinery technical excellence and defining the efficiency of design reliability against its performance. Economic feasibility of its application shall be determined on the basis of service quality of a technological system with further forecasting of volumes and the range of spare parts supply.
Scaling Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin
2016-01-01
For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focusses on minimizing launch mass, which may be enabling for deep-space missions.
Glenn, Jordan M; Galey, Madeline; Edwards, Abigail; Rickert, Bradley; Washington, Tyrone A
2015-07-01
Ability to generate force from the core musculature is a critical factor for sports and general activities with insufficiencies predisposing individuals to injury. This study evaluated isometric force production as a valid and reliable method of assessing abdominal force using the abdominal test and evaluation systems tool (ABTEST). Secondary analysis estimated 1-repetition maximum on commercially available abdominal machine compared to maximum force and average power on ABTEST system. This study utilized test-retest reliability and comparative analysis for validity. Reliability was measured using test-retest design on ABTEST. Validity was measured via comparison to estimated 1-repetition maximum on a commercially available abdominal device. Participants applied isometric, abdominal force against a transducer and muscular activation was evaluated measuring normalized electromyographic activity at the rectus-abdominus, rectus-femoris, and erector-spinae. Test, re-test force production on ABTEST was significantly correlated (r=0.84; p<0.001). Mean electromyographic activity for the rectus-abdominus (72.93% and 75.66%), rectus-femoris (6.59% and 6.51%), and erector-spinae (6.82% and 5.48%) were observed for trial-1 and trial-2, respectively. Significant correlations for the estimated 1-repetition maximum were found for average power (r=0.70, p=0.002) and maximum force (r=0.72, p<0.001). Data indicate the ABTEST can accurately measure rectus-abdominus force isolated from hip-flexor involvement. Negligible activation of erector-spinae substantiates little subjective effort among participants in the lower back. Results suggest ABTEST is a valid and reliable method of evaluating abdominal force. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Czech results at criticality dosimetry intercomparison 2002.
Frantisek, Spurný; Jaroslav, Trousil
2004-01-01
Two criticality dosimetry systems were tested by Czech participants during the intercomparison held in Valduc, France, June 2002. The first consisted of the thermoluminescent detectors (TLDs) (Al-P glasses) and Si-diodes as passive neutron dosemeters. Second, it was studied to what extent the individual dosemeters used in the Czech routine personal dosimetry service can give a reliable estimation of criticality accident exposure. It was found that the first system furnishes quite reliable estimation of accidental doses. For routine individual dosimetry system, no important problems were encountered in the case of photon dosemeters (TLDs, film badge). For etched track detectors in contact with the 232Th or 235U-Al alloy, the track density saturation for the spark counting method limits the upper dose at approximately 1 Gy for neutrons with the energy >1 MeV.
Evaluation of air-displacement plethysmography for body composition assessment in preterm infants
USDA-ARS?s Scientific Manuscript database
Adiposity may contribute to the future risk of disease. The aim of this study was to evaluate the accuracy and reliability of an air-displacement plethysmography (ADP) system to estimate percentage fat mass (%FM) in preterm infants and to evaluate interdevice reliability in infants. A total of 70 pr...
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Experiments in fault tolerant software reliability
NASA Technical Reports Server (NTRS)
Mcallister, David F.; Tai, K. C.; Vouk, Mladen A.
1987-01-01
The reliability of voting was evaluated in a fault-tolerant software system for small output spaces. The effectiveness of the back-to-back testing process was investigated. Version 3.0 of the RSDIMU-ATS, a semi-automated test bed for certification testing of RSDIMU software, was prepared and distributed. Software reliability estimation methods based on non-random sampling are being studied. The investigation of existing fault-tolerance models was continued and formulation of new models was initiated.
Hu, Yu; Chen, Yaping
2017-07-11
Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative to quickly assess the vaccination coverage. To assess the current completeness and accuracy on the vaccination coverage derived from ZJIIS, we compared the estimates from ZJIIS with the estimates from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the enrolled 2772 children in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from ZJIIS. Therefore, periodic coverage surveys currently provide more complete and reliable results than the estimates based on ZJIIS alone. However, further improvement of completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in future.
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
A robust motion estimation system for minimal invasive laparoscopy
NASA Astrophysics Data System (ADS)
Marcinczak, Jan Marek; von Öhsen, Udo; Grigat, Rolf-Rainer
2012-02-01
Laparoscopy is a reliable imaging method to examine the liver. However, due to the limited field of view, a lot of experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ surfaces provide valuable additional information to the surgeon for a reliable diagnosis. Without an additional external tracking system the structure can be recovered from feature correspondences between different frames. In laparoscopic images blurred frames, specular reflections and inhomogeneous illumination make feature tracking a challenging task. We propose an ego-motion estimation system for minimal invasive laparoscopy that can cope with specular reflection, inhomogeneous illumination and blurred frames. To obtain robust feature correspondence, the approach combines SIFT and specular reflection segmentation with a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to compute the motion of the endoscope from multi-frame correspondence. The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope limit the motion in minimal invasive laparoscopy. These limitations are considered in our evaluation and are used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved by a robotic system and the ground truth motion is recorded. The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the proposed pose estimation system.
Schäfer, Axel; Lüdtke, Kerstin; Breuel, Franziska; Gerloff, Nikolas; Knust, Maren; Kollitsch, Christian; Laukart, Alex; Matej, Laura; Müller, Antje; Schöttker-Königer, Thomas; Hall, Toby
2018-08-01
Headache is a common and costly health problem. Although pathogenesis of headache is heterogeneous, one reported contributing factor is dysfunction of the upper cervical spine. The flexion rotation test (FRT) is a commonly used diagnostic test to detect upper cervical movement impairment. The aim of this cross-sectional study was to investigate concurrent validity of detecting high cervical ROM impairment during the FRT by comparing measurements established by an ultrasound-based system (gold standard) with eyeball estimation. Secondary aim was to investigate intra-rater reliability of FRT ROM eyeball estimation. The examiner (6 years experience) was blinded to the data from the ultrasound-based device and to the symptoms of the patients. FRT test result (positive or negative) was based on visual estimation of range of rotation less than 34° to either side. Concurrently, range of rotation was evaluated using the ultrasound-based device. A total of 43 subjects with headache (79% female), mean age of 35.05 years (SD 13.26) were included. According to the International Headache Society Classification 23 subjects had migraine, 4 tension type headache, and 16 multiple headache forms. Sensitivity and specificity were 0.96 and 0.89 for combined rotation, indicating good concurrent reliability. The area under the ROC curve was 0.95 (95% CI 0.91-0.98) for rotation to both sides. Intra-rater reliability for eyeball estimation was excellent with Fleiss Kappa 0.79 for right rotation and left rotation. The results of this study indicate that the FRT is a valid and reliable test to detect impairment of upper cervical ROM in patients with headache.
Graphical workstation capability for reliability modeling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Koppen, Sandra V.; Haley, Pamela J.
1992-01-01
In addition to computational capabilities, software tools for estimating the reliability of fault-tolerant digital computer systems must also provide a means of interfacing with the user. Described here is the new graphical interface capability of the hybrid automated reliability predictor (HARP), a software package that implements advanced reliability modeling techniques. The graphics oriented (GO) module provides the user with a graphical language for modeling system failure modes through the selection of various fault-tree gates, including sequence-dependency gates, or by a Markov chain. By using this graphical input language, a fault tree becomes a convenient notation for describing a system. In accounting for any sequence dependencies, HARP converts the fault-tree notation to a complex stochastic process that is reduced to a Markov chain, which it can then solve for system reliability. The graphics capability is available for use on an IBM-compatible PC, a Sun, and a VAX workstation. The GO module is written in the C programming language and uses the graphical kernal system (GKS) standard for graphics implementation. The PC, VAX, and Sun versions of the HARP GO module are currently in beta-testing stages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Singh, C.
1982-07-01
Existing methods for generating capacity reliability evaluation do not explicitly recognize a number of operating considerations which may have important effects in system reliability performance. Thus, current methods may yield estimates of system reliability which differ appreciably from actual observed reliability. Further, current methods offer no means of accurately studying or evaluating alternatives which may differ in one or more operating considerations. Operating considerations which are considered to be important in generating capacity reliability evaluation include: unit duty cycles as influenced by load cycle shape, reliability performance of other units, unit commitment policy, and operating reserve policy; unit start-up failuresmore » distinct from unit running failures; unit start-up times; and unit outage postponability and the management of postponable outages. A detailed Monte Carlo simulation computer model called GENESIS and two analytical models called OPCON and OPPLAN have been developed which are capable of incorporating the effects of many operating considerations including those noted above. These computer models have been used to study a variety of actual and synthetic systems and are available from EPRI. The new models are shown to produce system reliability indices which differ appreciably from index values computed using traditional models which do not recognize operating considerations.« less
Assessment of the Maximal Split-Half Coefficient to Estimate Reliability
ERIC Educational Resources Information Center
Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun
2010-01-01
The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
ERIC Educational Resources Information Center
Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.
2014-01-01
Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
Hardware and software reliability estimation using simulations
NASA Technical Reports Server (NTRS)
Swern, Frederic L.
1994-01-01
The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observation based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
Estimating soil solution nitrate concentration from dielectric spectra using PLS analysis
USDA-ARS?s Scientific Manuscript database
Fast and reliable methods for in situ monitoring of soil nitrate-nitrogen concentration are vital for reducing nitrate-nitrogen losses to ground and surface waters from agricultural systems. While several studies have been done to indirectly estimate nitrate-nitrogen concentration from time domain s...
Children's Reaction to Types of Television. Technical Report No. 28.
ERIC Educational Resources Information Center
Hines, Brainard W.
An observational system having high inter-rater reliability and providing a reliable estimate of patterns of behavior across time periods is developed and tested for use in evaluating children's responses to a number of television styles and modes of presentation. This project was designed to meet three goals: first, to develop a valid and…
NASA Astrophysics Data System (ADS)
Kuznetsov, P. A.; Kovalev, I. V.; Losev, V. V.; Kalinin, A. O.; Murygin, A. V.
2016-04-01
The article discusses the reliability of automated control systems. Analyzes the approach to the classification systems for health States. This approach can be as traditional binary approach, operating with the concept of "serviceability", and other variants of estimation of the system state. This article provides one such option, providing selective evaluation of components for the reliability of the entire system. Introduced description of various automatic control systems and their elements from the point of view of health and risk, mathematical method of determining the transition object from state to state, they differ from each other in the implementation of the objective function. Explores the interplay of elements in different States, the aggregate state of the elements connected in series or in parallel. Are the tables of various logic States and the principles of their calculation in series and parallel connection. Through simulation the proposed approach is illustrated by finding the probability of getting into the system state data in parallel and serially connected elements, with their different probabilities of moving from state to state. In general, the materials of article will be useful for analyzing of the reliability the automated control systems and engineering of the highly-reliable systems. Thus, this mechanism to determine the State of the system provides more detailed information about it and allows a selective approach to the reliability of the system as a whole. Detailed results when assessing the reliability of the automated control systems allows the engineer to make an informed decision when designing means of improving reliability.
Optimal Bi-Objective Redundancy Allocation for Systems Reliability and Risk Management.
Govindan, Kannan; Jafarian, Ahmad; Azbari, Mostafa E; Choi, Tsan-Ming
2016-08-01
In the big data era, systems reliability is critical to effective systems risk management. In this paper, a novel multiobjective approach, with hybridization of a known algorithm called NSGA-II and an adaptive population-based simulated annealing (APBSA) method is developed to solve the systems reliability optimization problems. In the first step, to create a good algorithm, we use a coevolutionary strategy. Since the proposed algorithm is very sensitive to parameter values, the response surface method is employed to estimate the appropriate parameters of the algorithm. Moreover, to examine the performance of our proposed approach, several test problems are generated, and the proposed hybrid algorithm and other commonly known approaches (i.e., MOGA, NRGA, and NSGA-II) are compared with respect to four performance measures: 1) mean ideal distance; 2) diversification metric; 3) percentage of domination; and 4) data envelopment analysis. The computational studies have shown that the proposed algorithm is an effective approach for systems reliability and risk management.
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
Sate of charge (SOC) estimation is generally acknowledged as one of the most important functions in battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, the error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
Field Demonstration of Condition Assessment Technologies for Wastewater Collection Systems
Reliable information on pipe condition is needed to accurately estimate the remaining service life of wastewater collection system assets. Although inspections with conventional closed-circuit television (CCTV) have been the mainstay of pipeline condition assessment for decades,...
Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P
2017-03-01
In human motion analysis predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capturing system. Skin surface marker positions were acquired while participants performed quite stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using 11 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
Training and Maintaining System-Wide Reliability in Outcome Management.
Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E
2014-01-01
The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real time client and program level data, and the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches for CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for both practitioners trained by external technical assistance and internal technical assistance approaches (.909-.995, .915-.997, respectively). 1 and 3-year estimates showed some drift on several scales. High and consistent reliabilities over time and training method has implications for CAFAS training of behavioral health care practitioners, and the maintenance of CAFAS as a global outcome management tool in systems of care.
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state of the art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede’s (1991)more » cultural factors and Davis’ (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.« less
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed. PMID:26167524
Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.
Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina
2015-01-01
Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental researches consist in estimation of the reliability and unreliability functions and hazard rate of the helical vertical axis wind turbine designed and patented to climatic conditions for Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.
Boerebach, Benjamin C M; Lombarts, Kiki M J M H; Arah, Onyebuchi A
2016-03-01
The System for Evaluation of Teaching Qualities (SETQ) was developed as a formative system for the continuous evaluation and development of physicians' teaching performance in graduate medical training. It has been seven years since the introduction and initial exploratory psychometric analysis of the SETQ questionnaires. This study investigates the validity and reliability of the SETQ questionnaires across hospitals and medical specialties using confirmatory factor analyses (CFAs), reliability analysis, and generalizability analysis. The SETQ questionnaires were tested in a sample of 3,025 physicians and 2,848 trainees in 46 hospitals. The CFA revealed acceptable fit of the data to the previously identified five-factor model. The high internal consistency estimates suggest satisfactory reliability of the subscales. These results provide robust evidence for the validity and reliability of the SETQ questionnaires for evaluating physicians' teaching performance. © The Author(s) 2014.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, Todd L; Hamada, Michael S
2008-01-01
Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We apply these tools to examples that were previously solvable only through the use of ingenious approximations, and use genetic algorithms to guide resource allocation.
Reliability Analysis of Systems Subject to First-Passage Failure
NASA Technical Reports Server (NTRS)
Lutes, Loren D.; Sarkani, Shahram
2009-01-01
An obvious goal of reliability analysis is the avoidance of system failure. However, it is generally recognized that it is often not feasible to design a practical or useful system for which failure is impossible. Thus it is necessary to use techniques that estimate the likelihood of failure based on modeling the uncertainty about such items as the demands on and capacities of various elements in the system. This usually involves the use of probability theory, and a design is considered acceptable if it has a sufficiently small probability of failure. This report contains findings of analyses of systems subject to first-passage failure.
Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System
Fox, Mark; Papadopoulos, Amy; Crump, Cindy
2013-01-01
Abstract Objective: The goal of this study was to assess using new metrics the reliability of a real-time health monitoring system in homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed into the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button that was connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the 10 h/day requested time. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data. The different system reliability measures identify important error sources in remote monitoring systems. PMID:23611640
NASA Technical Reports Server (NTRS)
Kleinhammer, Roger K.; Graber, Robert R.; DeMott, D. L.
2016-01-01
Reliability practitioners advocate getting reliability involved early in a product development process. However, when assigned to estimate or assess the (potential) reliability of a product or system early in the design and development phase, they are faced with lack of reasonable models or methods for useful reliability estimation. Developing specific data is costly and time consuming. Instead, analysts rely on available data to assess reliability. Finding data relevant to the specific use and environment for any project is difficult, if not impossible. Instead, analysts attempt to develop the "best" or composite analog data to support the assessments. Industries, consortia and vendors across many areas have spent decades collecting, analyzing and tabulating fielded item and component reliability performance in terms of observed failures and operational use. This data resource provides a huge compendium of information for potential use, but can also be compartmented by industry, difficult to find out about, access, or manipulate. One method used incorporates processes for reviewing these existing data sources and identifying the available information based on similar equipment, then using that generic data to derive an analog composite. Dissimilarities in equipment descriptions, environment of intended use, quality and even failure modes impact the "best" data incorporated in an analog composite. Once developed, this composite analog data provides a "better" representation of the reliability of the equipment or component. It can be used to support early risk or reliability trade studies, or analytical models to establish the predicted reliability data points. It also establishes a baseline prior that may updated based on test data or observed operational constraints and failures, i.e., using Bayesian techniques. This tutorial presents a descriptive compilation of historical data sources across numerous industries and disciplines, along with examples of contents and data characteristics. It then presents methods for combining failure information from different sources and mathematical use of this data in early reliability estimation and analyses.
Comparing reliabilities of strip and conventional patch testing.
Dickel, Heinrich; Geier, Johannes; Kreft, Burkhard; Pfützner, Wolfgang; Kuss, Oliver
2017-06-01
The standardized protocol for performing the strip patch test has proven to be valid, but evidence on its reliability is still missing. To estimate the parallel-test reliability of the strip patch test as compared with the conventional patch test. In this multicentre, prospective, randomized, investigator-blinded reliability study, 132 subjects were enrolled. Simultaneous duplicate strip and conventional patch tests were performed with the Finn Chambers ® on Scanpor ® tape test system and the patch test preparations nickel sulfate 5% pet., potassium dichromate 0.5% pet., and lanolin alcohol 30% pet. Reliability was estimated by the use of Cohen's kappa coefficient. Parallel-test reliability values of the three standard patch test preparations turned out to be acceptable, with slight advantages for the strip patch test. The differences in reliability were 9% (95%CI: -8% to 26%) for nickel sulfate and 23% (95%CI: -16% to 63%) for potassium dichromate, both favouring the strip patch test. The standardized strip patch test method for the detection of allergic contact sensitization in patients with suspected allergic contact dermatitis is reliable. Its application in routine clinical practice can be recommended, especially if the conventional patch test result is presumably false negative. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Hu, Yu; Chen, Yaping
2017-01-01
Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative to quickly assess the vaccination coverage. To assess the current completeness and accuracy on the vaccination coverage derived from ZJIIS, we compared the estimates from ZJIIS with the estimates from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the enrolled 2772 children in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from ZJIIS. Therefore, periodic coverage surveys currently provide more complete and reliable results than the estimates based on ZJIIS alone. However, further improvement of completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in future. PMID:28696387
Arterial link travel time estimation using loop detector data : phase 1
DOT National Transportation Integrated Search
1997-11-01
The envisioned operational tests of Advanced Traveler Information Systems (ATIS) and Advanced Traffic Management Systems (ATMS) in the Minneapolis/St. Paul area call for the provision of timely and reliable travel times over an entire rod network. Un...
Design of on-board Bluetooth wireless network system based on fault-tolerant technology
NASA Astrophysics Data System (ADS)
You, Zheng; Zhang, Xiangqi; Yu, Shijie; Tian, Hexiang
2007-11-01
In this paper, the Bluetooth wireless data transmission technology is applied in on-board computer system, to realize wireless data transmission between peripherals of the micro-satellite integrating electronic system, and in view of the high demand of reliability of a micro-satellite, a design of Bluetooth wireless network based on fault-tolerant technology is introduced. The reliability of two fault-tolerant systems is estimated firstly using Markov model, then the structural design of this fault-tolerant system is introduced; several protocols are established to make the system operate correctly, some related problems are listed and analyzed, with emphasis on Fault Auto-diagnosis System, Active-standby switch design and Data-Integrity process.
A stochastic hybrid systems based framework for modeling dependent failure processes
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods. PMID:28231313
A stochastic hybrid systems based framework for modeling dependent failure processes.
Fan, Mengfei; Zeng, Zhiguo; Zio, Enrico; Kang, Rui; Chen, Ying
2017-01-01
In this paper, we develop a framework to model and analyze systems that are subject to dependent, competing degradation processes and random shocks. The degradation processes are described by stochastic differential equations, whereas transitions between the system discrete states are triggered by random shocks. The modeling is, then, based on Stochastic Hybrid Systems (SHS), whose state space is comprised of a continuous state determined by stochastic differential equations and a discrete state driven by stochastic transitions and reset maps. A set of differential equations are derived to characterize the conditional moments of the state variables. System reliability and its lower bounds are estimated from these conditional moments, using the First Order Second Moment (FOSM) method and Markov inequality, respectively. The developed framework is applied to model three dependent failure processes from literature and a comparison is made to Monte Carlo simulations. The results demonstrate that the developed framework is able to yield an accurate estimation of reliability with less computational costs compared to traditional Monte Carlo-based methods.
NASA Astrophysics Data System (ADS)
Forbes, Kevin F.; St. Cyr, O. C.
2017-10-01
This paper addresses whether geomagnetic activity challenged the reliability of the electric power system during part of the declining phase of solar cycle 23. Operations by National Grid in England and Wales are examined over the period of 11 March 2003 through 31 March 2005. This paper examines the relationship between measures of geomagnetic activity and a metric of challenged electric power reliability known as the net imbalance volume (NIV). Measured in megawatt hours, NIV represents the sum of all energy deployments initiated by the system operator to balance the electric power system. The relationship between geomagnetic activity and NIV is assessed using a multivariate econometric model. The model was estimated using half-hour settlement data over the period of 11 March 2003 through 31 December 2004. The results indicate that geomagnetic activity had a demonstrable effect on NIV over the sample period. Based on the parameter estimates, out-of-sample predictions of NIV were generated for each half hour over the period of 1 January to 31 March 2005. Consistent with the existence of a causal relationship between geomagnetic activity and the electricity market imbalance, the root-mean-square error of the out-of-sample predictions of NIV is smaller; that is, the predictions are more accurate, when the statistically significant estimated effects of geomagnetic activity are included as drivers in the predictions.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident, (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in model itself and the parameters in that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative huinan reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a. Bayesian approach for modeling human reliability and its potential usefulness of quantifying model uncertainty as sensitivity analysis in the PRA model.
48 CFR 215.404-1 - Proposal analysis techniques.
Code of Federal Regulations, 2010 CFR
2010-10-01
... reliability of its estimating and accounting systems. [63 FR 55040, Oct. 14, 1998, as amended at 71 FR 69494... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Proposal analysis techniques. 215.404-1 Section 215.404-1 Federal Acquisition Regulations System DEFENSE ACQUISITION...
Accuracy and Reliability of the Klales et al. (2012) Morphoscopic Pelvic Sexing Method.
Lesciotto, Kate M; Doershuk, Lily J
2018-01-01
Klales et al. (2012) devised an ordinal scoring system for the morphoscopic pelvic traits described by Phenice (1969) and used for sex estimation of skeletal remains. The aim of this study was to test the accuracy and reliability of the Klales method using a large sample from the Hamann-Todd collection (n = 279). Two observers were blinded to sex, ancestry, and age and used the Klales et al. method to estimate the sex of each individual. Sex was correctly estimated for females with over 95% accuracy; however, the male allocation accuracy was approximately 50%. Weighted Cohen's kappa and intraclass correlation coefficient analysis for evaluating intra- and interobserver error showed moderate to substantial agreement for all traits. Although each trait can be reliably scored using the Klales method, low accuracy rates and high sex bias indicate better trait descriptions and visual guides are necessary to more accurately reflect the range of morphological variation. © 2017 American Academy of Forensic Sciences.
Reliable information on pipe condition is needed to accurately estimate the remaining service life of wastewater collection system assets. Although inspections with conventional closed-circuit television (CCTV) have been the mainstay of pipeline condition assessment for decades,...
Reliability Problems of the Datum: Solutions for Questionnaire Responses.
ERIC Educational Resources Information Center
Bastick, Tony
Questionnaires often ask for estimates, and these estimates are given with different reliabilities. It is difficult to know the different reliabilities of single estimates and to take these into account in subsequent analyses. This paper contains a practical example to show that not taking the reliability of different responses into account can…
Use of Internal Consistency Coefficients for Estimating Reliability of Experimental Tasks Scores
Green, Samuel B.; Yang, Yanyun; Alt, Mary; Brinkley, Shara; Gray, Shelley; Hogan, Tiffany; Cowan, Nelson
2017-01-01
Reliabilities of scores for experimental tasks are likely to differ from one study to another to the extent that the task stimuli change, the number of trials varies, the type of individuals taking the task changes, the administration conditions are altered, or the focal task variable differs. Given reliabilities vary as a function of the design of these tasks and the characteristics of the individuals taking them, making inferences about the reliability of scores in an ongoing study based on reliability estimates from prior studies is precarious. Thus, it would be advantageous to estimate reliability based on data from the ongoing study. We argue that internal consistency estimates of reliability are underutilized for experimental task data and in many applications could provide this information using a single administration of a task. We discuss different methods for computing internal consistency estimates with a generalized coefficient alpha and the conditions under which these estimates are accurate. We illustrate use of these coefficients using data for three different tasks. PMID:26546100
An experimental evaluation of software redundancy as a strategy for improving reliability
NASA Technical Reports Server (NTRS)
Eckhardt, Dave E., Jr.; Caglayan, Alper K.; Knight, John C.; Lee, Larry D.; Mcallister, David F.; Vouk, Mladen A.; Kelly, John P. J.
1990-01-01
The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is suggested by the success of hardware redundancy for tolerating hardware failures. Although, as generally accepted, the independence of hardware failures resulting from physical wearout can lead to substantial increases in reliability for redundant hardware structures, a similar conclusion is not immediate for software. The degree to which design faults are manifested as independent failures determines the effectiveness of redundancy as a method for improving software reliability. Interest in multi-version software centers on whether it provides an adequate measure of increased reliability to warrant its use in critical applications. The effectiveness of multi-version software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on twenty versions of an aerospace application developed and certified by sixty programmers from four universities. Descriptions of the application, development and certification processes, and operational evaluation are given together with an analysis of the twenty versions.
Dinsdale, Graham; Moore, Tonia; O'Leary, Neil; Berks, Michael; Roberts, Christopher; Manning, Joanne; Allen, John; Anderson, Marina; Cutolo, Maurizio; Hesselstrand, Roger; Howell, Kevin; Pizzorni, Carmen; Smith, Vanessa; Sulli, Alberto; Wildt, Marie; Taylor, Christopher; Murray, Andrea; Herrick, Ariane L
2017-09-01
Nailfold capillaroscopic parameters hold increasing promise as outcome measures for clinical trials in systemic sclerosis (SSc). Their inclusion as outcomes would often naturally require capillaroscopy images to be captured at several time points during any one study. Our objective was to assess repeatability of image acquisition (which has been little studied), as well as of measurement. 41 patients (26 with SSc, 15 with primary Raynaud's phenomenon) and 10 healthy controls returned for repeat high-magnification (300×) videocapillaroscopy mosaic imaging of 10 digits one week after initial imaging (as part of a larger study of reliability). Images were assessed in a random order by an expert blinded observer and 4 outcome measures extracted: (1) overall image grade and then (where possible) distal vessel locations were marked, allowing (2) vessel density (across the whole nailfold) to be calculated (3) apex width measurement and (4) giant vessel count. Intra-rater, intra-visit and intra-rater inter-visit (baseline vs. 1week) reliability were examined in 475 and 392 images respectively. A linear, mixed-effects model was used to estimate variance components, from which intra-class correlation coefficients (ICCs) were determined. Intra-visit and inter-visit reliability estimates (ICCs) were (respectively): overall image grade, 0.97 and 0.90; vessel density, 0.92 and 0.65; mean vessel width, 0.91 and 0.79; presence of giant capillary, 0.68 and 0.56. These estimates were conditional on each parameter being measurable. Within-operator image analysis and acquisition are reproducible. Quantitative nailfold capillaroscopy, at least with a single observer, provides reliable outcome measures for clinical studies including randomised controlled trials. Copyright © 2017 Elsevier Inc. All rights reserved.
Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.
Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A
2015-01-01
The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two, 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different than OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, that are similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.
An economic analysis of a commercial approach to the design and fabrication of a space power system
NASA Technical Reports Server (NTRS)
Putney, Z.; Been, J. F.
1979-01-01
A commercial approach to the design and fabrication of an economical space power system is presented. Cost reductions are projected through the conceptual design of a 2 kW space power system built with the capability for having serviceability. The approach to system costing that is used takes into account both the constraints of operation in space and commercial production engineering approaches. The cost of this power system reflects a variety of cost/benefit tradeoffs that would reduce system cost as a function of system reliability requirements, complexity, and the impact of rigid specifications. A breakdown of the system design, documentation, fabrication, and reliability and quality assurance cost estimates are detailed.
Channel Estimation for Filter Bank Multicarrier Systems in Low SNR Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driggs, Jonathan; Sibbett, Taylor; Moradiy, Hussein
Channel estimation techniques are crucial for reliable communications. This paper is concerned with channel estimation in a filter bank multicarrier spread spectrum (FBMCSS) system. We explore two channel estimator options: (i) a method that makes use of a periodic preamble and mimics the channel estimation techniques that are widely used in OFDM-based systems; and (ii) a method that stays within the traditional realm of filter bank signal processing. For the case where the channel noise is white, both methods are analyzed in detail and their performance is compared against their respective Cramer-Rao Lower Bounds (CRLB). Advantages and disadvantages of themore » two methods under different channel conditions are given to provide insight to the reader as to when one will outperform the other.« less
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parametermore » estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.« less
Clayson, Peter E; Miller, Gregory A
2017-01-01
Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.
Bayesian Approach for Reliability Assessment of Sunshield Deployment on JWST
NASA Technical Reports Server (NTRS)
Kaminskiy, Mark P.; Evans, John W.; Gallo, Luis D.
2013-01-01
Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications, for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a Bayesian approach for reliability estimation of spacecraft deployment was developed for this purpose. This approach was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the observatory's telescope and science instruments. In order to collect the prior information on deployable systems, detailed studies of "heritage information", were conducted extending over 45 years of spacecraft launches. The NASA Goddard Space Flight Center (GSFC) Spacecraft Operational Anomaly and Reporting System (SOARS) data were then used to estimate the parameters of the conjugative beta prior distribution for anomaly and failure occurrence, as the most consistent set of available data and that could be matched to launch histories. This allows for an emperical Bayesian prediction for the risk of an anomaly occurrence of the complex Sunshield deployment, with credibility limits, using prior deployment data and test information.
A Note on Structural Equation Modeling Estimates of Reliability
ERIC Educational Resources Information Center
Yang, Yanyun; Green, Samuel B.
2010-01-01
Reliability can be estimated using structural equation modeling (SEM). Two potential problems with this approach are that estimates may be unstable with small sample sizes and biased with misspecified models. A Monte Carlo study was conducted to investigate the quality of SEM estimates of reliability by themselves and relative to coefficient…
Estimating Measures of Pass-Fail Reliability from Parallel Half-Tests.
ERIC Educational Resources Information Center
Woodruff, David J.; Sawyer, Richard L.
Two methods for estimating measures of pass-fail reliability are derived, by which both theta and kappa may be estimated from a single test administration. The methods require only a single test administration and are computationally simple. Both are based on the Spearman-Brown formula for estimating stepped-up reliability. The non-distributional…
Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients
ERIC Educational Resources Information Center
Andersson, Björn; Xin, Tao
2018-01-01
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Economical space power systems
NASA Technical Reports Server (NTRS)
Burkholder, J. H.
1980-01-01
A commercial approach to design and fabrication of an economical space power system is investigated. Cost projections are based on a 2 kW space power system conceptual design taking into consideration the capability for serviceability, constraints of operation in space, and commercial production engineering approaches. A breakdown of the system design, documentation, fabrication, and reliability and quality assurance estimated costs are detailed.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
Reliability Correction for Functional Connectivity: Theory and Implementation
Mueller, Sophia; Wang, Danhong; Fox, Michael D.; Pan, Ruiqi; Lu, Jie; Li, Kuncheng; Sun, Wei; Buckner, Randy L.; Liu, Hesheng
2016-01-01
Network properties can be estimated using functional connectivity MRI (fcMRI). However, regional variation of the fMRI signal causes systematic biases in network estimates including correlation attenuation in regions of low measurement reliability. Here we computed the spatial distribution of fcMRI reliability using longitudinal fcMRI datasets and demonstrated how pre-estimated reliability maps can correct for correlation attenuation. As a test case of reliability-based attenuation correction we estimated properties of the default network, where reliability was significantly lower than average in the medial temporal lobe and higher in the posterior medial cortex, heterogeneity that impacts estimation of the network. Accounting for this bias using attenuation correction revealed that the medial temporal lobe’s contribution to the default network is typically underestimated. To render this approach useful to a greater number of datasets, we demonstrate that test-retest reliability maps derived from repeated runs within a single scanning session can be used as a surrogate for multi-session reliability mapping. Using data segments with different scan lengths between 1 and 30 min, we found that test-retest reliability of connectivity estimates increases with scan length while the spatial distribution of reliability is relatively stable even at short scan lengths. Finally, analyses of tertiary data revealed that reliability distribution is influenced by age, neuropsychiatric status and scanner type, suggesting that reliability correction may be especially important when studying between-group differences. Collectively, these results illustrate that reliability-based attenuation correction is an easily implemented strategy that mitigates certain features of fMRI signal nonuniformity. PMID:26493163
Decentralization, stabilization, and estimation of large-scale linear systems
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Vukcevic, M. B.
1976-01-01
In this short paper we consider three closely related aspects of large-scale systems: decentralization, stabilization, and estimation. A method is proposed to decompose a large linear system into a number of interconnected subsystems with decentralized (scalar) inputs or outputs. The procedure is preliminary to the hierarchic stabilization and estimation of linear systems and is performed on the subsystem level. A multilevel control scheme based upon the decomposition-aggregation method is developed for stabilization of input-decentralized linear systems Local linear feedback controllers are used to stabilize each decoupled subsystem, while global linear feedback controllers are utilized to minimize the coupling effect among the subsystems. Systems stabilized by the method have a tolerance to a wide class of nonlinearities in subsystem coupling and high reliability with respect to structural perturbations. The proposed output-decentralization and stabilization schemes can be used directly to construct asymptotic state estimators for large linear systems on the subsystem level. The problem of dimensionality is resolved by constructing a number of low-order estimators, thus avoiding a design of a single estimator for the overall system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parametermore » estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.« less
Montone, Verona O; Fraisse, Clyde W; Peres, Natalia A; Sentelhas, Paulo C; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on site LWD measurements. Specific objectives were to (i) compare model estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensors precision on LWD models performance, and (iii) compare LWD models performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease warning beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
NASA Astrophysics Data System (ADS)
Montone, Verona O.; Fraisse, Clyde W.; Peres, Natalia A.; Sentelhas, Paulo C.; Gleason, Mark; Ellis, Michael; Schnabel, Guido
2016-11-01
Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on site LWD measurements. Specific objectives were to (i) compare model estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensors precision on LWD models performance, and (iii) compare LWD models performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease warning beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.
Improving Distribution Resiliency with Microgrids and State and Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuffner, Francis K.; Williams, Tess L.; Schneider, Kevin P.
Modern society relies on low-cost reliable electrical power, both to maintain industry, as well as provide basic social services to the populace. When major disturbances occur, such as Hurricane Katrina or Hurricane Sandy, the nation’s electrical infrastructure can experience significant outages. To help prevent the spread of these outages, as well as facilitating faster restoration after an outage, various aspects of improving the resiliency of the power system are needed. Two such approaches are breaking the system into smaller microgrid sections, and to have improved insight into the operations to detect failures or mis-operations before they become critical. Breaking themore » system into smaller sections of microgrid islands, power can be maintained in smaller areas where distribution generation and energy storage resources are still available, but bulk power generation is no longer connected. Additionally, microgrid systems can maintain service to local pockets of customers when there has been extensive damage to the local distribution system. However, microgrids are grid connected a majority of the time and implementing and operating a microgrid is much different than when islanded. This report discusses work conducted by the Pacific Northwest National Laboratory that developed improvements for simulation tools to capture the characteristics of microgrids and how they can be used to develop new operational strategies. These operational strategies reduce the cost of microgrid operation and increase the reliability and resilience of the nation’s electricity infrastructure. In addition to the ability to break the system into microgrids, improved observability into the state of the distribution grid can make the power system more resilient. State estimation on the transmission system already provides great insight into grid operations and detecting abnormal conditions by leveraging existing measurements. These transmission-level approaches are expanded to using advanced metering infrastructure and other distribution-level measurements to create a three-phase, unbalanced distribution state estimation approach. With distribution-level state estimation, the grid can be operated more efficiently, and outages or equipment failures can be caught faster, improving the overall resilience and reliability of the grid.« less
Estimating the Reliability of Electronic Parts in High Radiation Fields
NASA Technical Reports Server (NTRS)
Everline, Chester; Clark, Karla; Man, Guy; Rasmussen, Robert; Johnston, Allan; Kohlhase, Charles; Paulos, Todd
2008-01-01
Radiation effects on materials and electronic parts constrain the lifetime of flight systems visiting Europa. Understanding mission lifetime limits is critical to the design and planning of such a mission. Therefore, the operational aspects of radiation dose are a mission success issue. To predict and manage mission lifetime in a high radiation environment, system engineers need capable tools to trade radiation design choices against system design and reliability, and science achievements. Conventional tools and approaches provided past missions with conservative designs without the ability to predict their lifetime beyond the baseline mission.This paper describes a more systematic approach to understanding spacecraft design margin, allowing better prediction of spacecraft lifetime. This is possible because of newly available electronic parts radiation effects statistics and an enhanced spacecraft system reliability methodology. This new approach can be used in conjunction with traditional approaches for mission design. This paper describes the fundamentals of the new methodology.
Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola
2015-11-06
Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented.
Guidi, Andrea; Salvi, Sergio; Ottaviano, Manuel; Gentili, Claudio; Bertschy, Gilles; de Rossi, Danilo; Scilingo, Enzo Pasquale; Vanello, Nicola
2015-01-01
Bipolar disorder is one of the most common mood disorders characterized by large and invalidating mood swings. Several projects focus on the development of decision support systems that monitor and advise patients, as well as clinicians. Voice monitoring and speech signal analysis can be exploited to reach this goal. In this study, an Android application was designed for analyzing running speech using a smartphone device. The application can record audio samples and estimate speech fundamental frequency, F0, and its changes. F0-related features are estimated locally on the smartphone, with some advantages with respect to remote processing approaches in terms of privacy protection and reduced upload costs. The raw features can be sent to a central server and further processed. The quality of the audio recordings, algorithm reliability and performance of the overall system were evaluated in terms of voiced segment detection and features estimation. The results demonstrate that mean F0 from each voiced segment can be reliably estimated, thus describing prosodic features across the speech sample. Instead, features related to F0 variability within each voiced segment performed poorly. A case study performed on a bipolar patient is presented. PMID:26561811
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but less than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such au architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors, than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal only occur through common biases and process noise. Covariance and monte-carlo analysis of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
NASA Astrophysics Data System (ADS)
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
2013-01-01
Background In recent years response rates on telephone surveys have been declining. Rates for the behavioral risk factor surveillance system (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods In order to assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published from 2004–2011 dealing with tests of reliability and validity of BRFSS measures was compiled and presented by topics of health risk behavior. Assessments of the quality of each publication were undertaken using a categorical rubric. Higher rankings were achieved by authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests which compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data. Literature which undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to surveys which utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others. Conclusions Limitations of the examination of the BRFSS were due to question differences among surveys used as comparisons, as well as mode of data collection differences. As the BRFSS moves to incorporating cell phone data and changing weighting methods, a review of reliability and validity research indicated that past BRFSS landline only data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data which include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on estimates in the future. PMID:23522349
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Laman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Clayson, Peter E; Miller, Gregory A
2017-01-01
Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan
2015-09-01
To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results obtained with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uterus (n = 62 women, randomly selected) constituted the study. These were classified based on real-time three-dimensional ultrasound single volume transvaginal (or transrectal in the case of virgins, 4 cases) ultrasonography findings, which were assessed by an expert rater based on the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ >0.60 and >0.80), but the interpretation of the clinically relevant cutoffs of κ-values showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if their sufficient reliability can be confirmed real-time in a large sample size. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Reliability of Fault Tolerant Control Systems. Part 2
NASA Technical Reports Server (NTRS)
Wu, N. Eva
2000-01-01
This paper reports Part II of a two part effort that is intended to delineate the relationship between reliability and fault tolerant control in a quantitative manner. Reliability properties peculiar to fault-tolerant control systems are emphasized, such as the presence of analytic redundancy in high proportion, the dependence of failures on control performance, and high risks associated with decisions in redundancy management due to multiple sources of uncertainties and sometimes large processing requirements. As a consequence, coverage of failures through redundancy management can be severely limited. The paper proposes to formulate the fault tolerant control problem as an optimization problem that maximizes coverage of failures through redundancy management. Coverage modeling is attempted in a way that captures its dependence on the control performance and on the diagnostic resolution. Under the proposed redundancy management policy, it is shown that an enhanced overall system reliability can be achieved with a control law of a superior robustness, with an estimator of a higher resolution, and with a control performance requirement of a lesser stringency.
ERIC Educational Resources Information Center
Lee, Guemin; Park, In-Yong
2012-01-01
Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
Designing Fault-Injection Experiments for the Reliability of Embedded Systems
NASA Technical Reports Server (NTRS)
White, Allan L.
2012-01-01
This paper considers the long-standing problem of conducting fault-injections experiments to establish the ultra-reliability of embedded systems. There have been extensive efforts in fault injection, and this paper offers a partial summary of the efforts, but these previous efforts have focused on realism and efficiency. Fault injections have been used to examine diagnostics and to test algorithms, but the literature does not contain any framework that says how to conduct fault-injection experiments to establish ultra-reliability. A solution to this problem integrates field-data, arguments-from-design, and fault-injection into a seamless whole. The solution in this paper is to derive a model reduction theorem for a class of semi-Markov models suitable for describing ultra-reliable embedded systems. The derivation shows that a tight upper bound on the probability of system failure can be obtained using only the means of system-recovery times, thus reducing the experimental effort to estimating a reasonable number of easily-observed parameters. The paper includes an example of a system subject to both permanent and transient faults. There is a discussion of integrating fault-injection with field-data and arguments-from-design.
Valuation Diagramming and Accounting of Transactive Energy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Hammerstrom, Donald J.; Huang, Qiuhua
Transactive energy (TE) systems support both economic and technical objectives of a power system including efficiency and reliability. TE systems utilize value-driven mechanisms to coordinate and balance responsive supply and demand in the power system. Economic performance of TE systems cannot be assessed without estimating their value. Estimating the potential value of transactive energy systems requires a systematic valuation methodology that can capture value exchanges among different stakeholders (i.e., actors) and ultimately estimate impact of one TE design and compare it against another one. Such a methodology can help decision makers choose the alternative that results in preferred outcomes. Thismore » paper presents a valuation methodology developed to assess value of TE systems. A TE use-case example is discussed, and metrics identified in the valuation process are quantified using a TE simulation program.« less
Buganè, Francesca; Benedetti, Maria Grazia; D'Angeli, Valentina; Leardini, Alberto
2014-10-21
Kinematics measures from inertial sensors have a value in the clinical assessment of pathological gait, to track quantitatively the outcome of interventions and rehabilitation programs. To become a standard tool for clinicians, it is necessary to evaluate their capability to provide reliable and comprehensible information, possibly by comparing this with that provided by the traditional gait analysis. The aim of this study was to assess by state-of-the-art gait analysis the reliability of a single inertial device attached to the sacrum to measure pelvis kinematics during level walking. The output signals of the three-axis gyroscope were processed to estimate the spatial orientation of the pelvis in the sagittal (tilt angle), frontal (obliquity) and transverse (rotation) anatomical planes These estimated angles were compared with those provided by a 8 TV-cameras stereophotogrammetric system utilizing a standard experimental protocol, with four markers on the pelvis. This was observed in a group of sixteen healthy subjects while performing three repetitions of level walking along a 10 meter walkway at slow, normal and fast speeds. The determination coefficient, the scale factor and the bias of a linear regression model were calculated to represent the differences between the angular patterns from the two measurement systems. For the intra-subject variability, one volunteer was asked to repeat walking at normal speed 10 times. A good match was observed for obliquity and rotation angles. For the tilt angle, the pattern and range of motion was similar, but a bias was observed, due to the different initial inclination angle in the sagittal plane of the inertial sensor with respect to the pelvis anatomical frame. A good intra-subject consistency has also been shown by the small variability of the pelvic angles as estimated by the new system, confirmed by very small values of standard deviation for all three angles. These results suggest that this inertial device is a reliable alternative to stereophotogrammetric systems for pelvis kinematics measurements, in addition to being easier to use and cheaper. The device can provide to the patient and to the examiner reliable feedback in real-time during routine clinical tests.
ERIC Educational Resources Information Center
Black, Ryan A.; Yang, Yanyun; Beitra, Danette; McCaffrey, Stacey
2015-01-01
Estimation of composite reliability within a hierarchical modeling framework has recently become of particular interest given the growing recognition that the underlying assumptions of coefficient alpha are often untenable. Unfortunately, coefficient alpha remains the prominent estimate of reliability when estimating total scores from a scale with…
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...
2017-08-17
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at the certain operating condition to keep the line flow within the limit. Remedial Action Scheme (RAS) offers quick control action mechanism to keep reliability and security of the power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize the wind energy integration without compromising the security and reliability of the power system based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide the fast andmore » accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system to support RAS and DLSE implementation. The proposed RAS and DLSE is validated using the modified IEEE-118 Bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of RAS. Real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ren; Srivastava, Anurag K.; Bakken, David E.
Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at the certain operating condition to keep the line flow within the limit. Remedial Action Scheme (RAS) offers quick control action mechanism to keep reliability and security of the power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize the wind energy integration without compromising the security and reliability of the power system based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide the fast andmore » accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system to support RAS and DLSE implementation. The proposed RAS and DLSE is validated using the modified IEEE-118 Bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of RAS. Real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.« less
Statistical Aspects of Reliability, Maintainability, and Availability.
1987-10-01
A total of 33 research reports were issued, and 35 papers were published in scientific journals or are in press. Research topics included optimal assembly of systems, multistate system theory , testing whether new is better than used nonparameter survival function estimation measuring information in censored models, generalizations of total positively and
NORSAR Detection Processing System.
1987-05-31
systems have been reliable. NTA/Lillestrom and Hamar will take a new initiative medio April regarding 04C. The line will be remeasured and if a certain...estimate of the ambient noise level at the site of the FINESA array, ground motion spectra were calculated for four time intervals. Two intervals were
A Comparison of Three Multivariate Models for Estimating Test Battery Reliability.
ERIC Educational Resources Information Center
Wood, Terry M.; Safrit, Margaret J.
1987-01-01
A comparison of three multivariate models (canonical reliability model, maximum generalizability model, canonical correlation model) for estimating test battery reliability indicated that the maximum generalizability model showed the least degree of bias, smallest errors in estimation, and the greatest relative efficiency across all experimental…
In vivo estimation of target registration errors during augmented reality laparoscopic surgery.
Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2018-06-01
Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
An analysis and demonstration of clock synchronization by VLBI
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1972-01-01
A prototype of a semireal-time system for synchronizing the DSN station clocks by radio interferometry was successfully demonstrated. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time synchronization estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 nsec rms were achieved between DSS 11 and DSS 12, both at Goldstone, California. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to position uncertainties of baseline and source and atmospheric effects are reached. These limitations are under ten nsec for transcontinental baselines.
NASA Astrophysics Data System (ADS)
Wei, Jingwen; Dong, Guangzhong; Chen, Zonghai
2017-10-01
With the rapid development of battery-powered electric vehicles, the lithium-ion battery plays a critical role in the reliability of vehicle system. In order to provide timely management and protection for battery systems, it is necessary to develop a reliable battery model and accurate battery parameters estimation to describe battery dynamic behaviors. Therefore, this paper focuses on an on-board adaptive model for state-of-charge (SOC) estimation of lithium-ion batteries. Firstly, a first-order equivalent circuit battery model is employed to describe battery dynamic characteristics. Then, the recursive least square algorithm and the off-line identification method are used to provide good initial values of model parameters to ensure filter stability and reduce the convergence time. Thirdly, an extended-Kalman-filter (EKF) is applied to on-line estimate battery SOC and model parameters. Considering that the EKF is essentially a first-order Taylor approximation of battery model, which contains inevitable model errors, thus, a proportional integral-based error adjustment technique is employed to improve the performance of EKF method and correct model parameters. Finally, the experimental results on lithium-ion batteries indicate that the proposed EKF with proportional integral-based error adjustment method can provide robust and accurate battery model and on-line parameter estimation.
Shift level analysis of cable yarder availability, utilization, and productive time
James R. Sherar; Chris B. LeDoux
1989-01-01
Decision makers, loggers, managers, and planners need to understand and have methods for estimating utilization and productive time of cable logging systems. In making an accurate prediction of how much area and volume a machine will log per unit time and the associated cable yarding costs, a reliable estimate of the availability, utilization, and productive time of...
Mission Status at Aura Science Team MOWG Meeting: EOS Aura
NASA Technical Reports Server (NTRS)
Fisher, Dominic
2016-01-01
Presentation at the 24797-16 Earth Observing System (EOS) Aura Science Team Meeting (Mission Operations Work Group (MOWG)) at Rotterdam, Netherlands August 29, 2016. Presentation topics include mission summary, spacecraft subsystems summary, recent and planned activities, spacecraft anomalies, data capture, propellant usage and lifetime estimates, spacecraft maneuvers and ground track history, mission highlights and past spacecraft anomalies and reliability estimates.
ECG on the road: robust and unobtrusive estimation of heart rate.
Wartzek, Tobias; Eilebrecht, Benjamin; Lem, Jeroen; Lindner, Hans-Joachim; Leonhardt, Steffen; Walter, Marian
2011-11-01
Modern automobiles include an increasing number of assistance systems to increase the driver's safety. This feasibility study investigated unobtrusive capacitive ECG measurements in an automotive environment. Electrodes integrated into the driving seat allowed to measure a reliable ECG in 86% of the drivers; when only (light) cotton clothing was worn by the drivers, this value increased to 95%. Results show that an array of sensors is needed that can adapt to the different drivers and sitting positions. Measurements while driving show that traveling on the highway does not distort the signal any more than with the car engine turned OFF, whereas driving in city traffic results in a lowered detection rate due to the driver's heavier movements. To enable robust and reliable estimation of heart rate, an algorithm is presented (based on principal component analysis) to detect and discard time intervals with artifacts. This, then, allows a reliable estimation of heart rate of up to 61% in city traffic and up to 86% on the highway: as a percentage of the total driving period with at least four consecutive QRS complexes.
NASA Astrophysics Data System (ADS)
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Objective. Motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes is recently adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus, lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on the motor thresholds, which were stochastically estimated by motor evoked potentials and chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. Approach. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed reliability of the algorithm. Main results. Computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled to estimate motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Significance. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
Kosugi, Akito; Takemi, Mitsuaki; Tia, Banty; Castagnola, Elisa; Ansaldo, Alberto; Sato, Kenta; Awiszus, Friedemann; Seki, Kazuhiko; Ricci, Davide; Fadiga, Luciano; Iriki, Atsushi; Ushiba, Junichi
2018-06-01
Motor map has been widely used as an indicator of motor skills and learning, cortical injury, plasticity, and functional recovery. Cortical stimulation mapping using epidural electrodes is recently adopted for animal studies. However, several technical limitations still remain. Test-retest reliability of epidural cortical stimulation (ECS) mapping has not been examined in detail. Many previous studies defined evoked movements and motor thresholds by visual inspection, and thus, lacked quantitative measurements. A reliable and quantitative motor map is important to elucidate the mechanisms of motor cortical reorganization. The objective of the current study was to perform reliable ECS mapping of motor representations based on the motor thresholds, which were stochastically estimated by motor evoked potentials and chronically implanted micro-electrocorticographical (µECoG) electrode arrays, in common marmosets. ECS was applied using the implanted µECoG electrode arrays in three adult common marmosets under awake conditions. Motor evoked potentials were recorded through electromyographical electrodes implanted in upper limb muscles. The motor threshold was calculated through a modified maximum likelihood threshold-hunting algorithm fitted with the recorded data from marmosets. Further, a computer simulation confirmed reliability of the algorithm. Computer simulation suggested that the modified maximum likelihood threshold-hunting algorithm enabled to estimate motor threshold with acceptable precision. In vivo ECS mapping showed high test-retest reliability with respect to the excitability and location of the cortical forelimb motor representations. Using implanted µECoG electrode arrays and a modified motor threshold-hunting algorithm, we were able to achieve reliable motor mapping in common marmosets with the ECS system.
LMI-based adaptive reliable H∞ static output feedback control against switched actuator failures
NASA Astrophysics Data System (ADS)
An, Liwei; Zhai, Ding; Dong, Jiuxiang; Zhang, Qingling
2017-08-01
This paper investigates the H∞ static output feedback (SOF) control problem for switched linear system under arbitrary switching, where the actuator failure models are considered to depend on switching signal. An active reliable control scheme is developed by combination of linear matrix inequality (LMI) method and adaptive mechanism. First, by exploiting variable substitution and Finsler's lemma, new LMI conditions are given for designing the SOF controller. Compared to the existing results, the proposed design conditions are more relaxed and can be applied to a wider class of no-fault linear systems. Then a novel adaptive mechanism is established, where the inverses of switched failure scaling factors are estimated online to accommodate the effects of actuator failure on systems. Two main difficulties arise: first is how to design the switched adaptive laws to prevent the missing of estimating information due to switching; second is how to construct a common Lyapunov function based on a switched estimate error term. It is shown that the new method can give less conservative results than that for the traditional control design with fixed gain matrices. Finally, simulation results on the HiMAT aircraft are given to show the effectiveness of the proposed approaches.
Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Novack, Steven
2015-01-01
Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
NASA Astrophysics Data System (ADS)
Nair, S. P.; Righetti, R.
2015-05-01
Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain versus time relationships for tissues undergoing creep compression have a non-linear relationship. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non linear LSE parameter estimate reliability: which we called Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While the RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
A review on prognostics approaches for remaining useful life of lithium-ion battery
NASA Astrophysics Data System (ADS)
Su, C.; Chen, H. J.
2017-11-01
Lithium-ion (Li-ion) battery is a core component for various industrial systems, including satellite, spacecraft and electric vehicle, etc. The mechanism of performance degradation and remaining useful life (RUL) estimation correlate closely to the operating state and reliability of the aforementioned systems. Furthermore, RUL prediction of Li-ion battery is crucial for the operation scheduling, spare parts management and maintenance decision for such kinds of systems. In recent years, performance degradation prognostics and RUL estimation approaches have become a focus of the research concerning with Li-ion battery. This paper summarizes the approaches used in Li-ion battery RUL estimation. Three categories are classified accordingly, i.e. model-based approach, data-based approach and hybrid approach. The key issues and future trends for battery RUL estimation are also discussed.
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
Spacecraft Conceptual Design Compared to the Apollo Lunar Lander
NASA Technical Reports Server (NTRS)
Young, C.; Bowie, J.; Rust, R.; Lenius, J.; Anderson, M.; Connolly, J.
2011-01-01
Future human exploration of the Moon will require an optimized spacecraft design with each sub-system achieving the required minimum capability and maintaining high reliability. The objective of this study was to trade capability with reliability and minimize mass for the lunar lander spacecraft. The NASA parametric concept for a 3-person vehicle to the lunar surface with a 30% mass margin totaled was considerably heavier than the Apollo 15 Lunar Module "as flown" mass of 16.4 metric tons. The additional mass was attributed to mission requirements and system design choices that were made to meet the realities of modern spaceflight. The parametric tool used to size the current concept, Envision, accounts for primary and secondary mass requirements. For example, adding an astronaut increases the mass requirements for suits, water, food, oxygen, as well as, the increase in volume. The environmental control sub-systems becomes heavier with the increased requirements and more structure was needed to support the additional mass. There was also an increase in propellant usage. For comparison, an "Apollo-like" vehicle was created by removing these additional requirements. Utilizing the Envision parametric mass calculation tool and a quantitative reliability estimation tool designed by Valador Inc., it was determined that with today?s current technology a Lunar Module (LM) with Apollo capability could be built with less mass and similar reliability. The reliability of this new lander was compared to Apollo Lunar Module utilizing the same methodology, adjusting for mission timeline changes as well as component differences. Interestingly, the parametric concept's overall estimated risk for loss of mission (LOM) and loss of crew (LOC) did not significantly improve when compared to Apollo.
Context-Aided Sensor Fusion for Enhanced Urban Navigation
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-01-01
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments. PMID:23223080
Context-aided sensor fusion for enhanced urban navigation.
Martí, Enrique David; Martín, David; García, Jesús; de la Escalera, Arturo; Molina, José Manuel; Armingol, José María
2012-12-06
The deployment of Intelligent Vehicles in urban environments requires reliable estimation of positioning for urban navigation. The inherent complexity of this kind of environments fosters the development of novel systems which should provide reliable and precise solutions to the vehicle. This article details an advanced GNSS/IMU fusion system based on a context-aided Unscented Kalman filter for navigation in urban conditions. The constrained non-linear filter is here conditioned by a contextual knowledge module which reasons about sensor quality and driving context in order to adapt it to the situation, while at the same time it carries out a continuous estimation and correction of INS drift errors. An exhaustive analysis has been carried out with available data in order to characterize the behavior of available sensors and take it into account in the developed solution. The performance is then analyzed with an extensive dataset containing representative situations. The proposed solution suits the use of fusion algorithms for deploying Intelligent Transport Systems in urban environments.
Are Validity and Reliability "Relevant" in Qualitative Evaluation Research?
ERIC Educational Resources Information Center
Goodwin, Laura D.; Goodwin, William L.
1984-01-01
The views of prominant qualitative methodologists on the appropriateness of validity and reliability estimation for the measurement strategies employed in qualitative evaluations are summarized. A case is made for the relevance of validity and reliability estimation. Definitions of validity and reliability for qualitative measurement are presented…
Improvement of automatic control system for high-speed current collectors
NASA Astrophysics Data System (ADS)
Sidorov, O. A.; Goryunov, V. N.; Golubkov, A. S.
2018-01-01
The article considers the ways of regulation of pantographs to provide quality and reliability of current collection at high speeds. To assess impact of regulation was proposed integral criterion of the quality of current collection, taking into account efficiency and reliability of operation of the pantograph. The study was carried out using mathematical model of interaction of pantograph and catenary system, allowing to assess contact force and intensity of arcing at the contact zone at different movement speeds. The simulation results allowed us to estimate the efficiency of different methods of regulation of pantographs and determine the best option.
A General Approach for Estimating Scale Score Reliability for Panel Survey Data
ERIC Educational Resources Information Center
Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.
2009-01-01
Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess…
ERIC Educational Resources Information Center
Md Desa, Zairul Nor Deana
2012-01-01
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.
Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David
2015-08-01
Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to DSM-5 field-trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved).
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high data rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially for wideband and ultra-wideband systems. In order to exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make the compressed channel estimation more feasible for practical applications, it is investigated from a perspective of Bayesian learning. Under the Bayesian learning framework, the large-scale compressed sensing problem, as well as large time delay for the estimation of the doubly selective channel over multiple consecutive OFDM symbols, can be avoided. Simulation studies show a significant improvement in channel estimation MSE and less computing time compared to the conventional compressed channel estimation techniques.
Estimating mortality using data from civil registration: a cross-sectional study in India
Rao, Chalapati; Lakshmi, PVM; Prinja, Shankar; Kumar, Rajesh
2016-01-01
Abstract Objective To analyse the design and operational status of India’s civil registration and vital statistics system and facilitate the system’s development into an accurate and reliable source of mortality data. Methods We assessed the national civil registration and vital statistics system’s legal framework, administrative structure and design through document review. We did a cross-sectional study for the year 2013 at national level and in Punjab state to assess the quality of the system’s mortality data through analyses of life tables and investigation of the completeness of death registration and the proportion of deaths assigned ill-defined causes. We interviewed registrars, medical officers and coders in Punjab state to assess their knowledge and practice. Findings Although we found the legal framework and system design to be appropriate, data collection was based on complex intersectoral collaborations at state and local level and the collected data were found to be of poor quality. The registration data were inadequate for a robust estimate of mortality at national level. A medically certified cause of death was only recorded for 965 992 (16.8%) of the 5 735 082 deaths registered. Conclusion The data recorded by India’s civil registration and vital statistics system in 2011 were incomplete. If improved, the system could be used to reliably estimate mortality. We recommend improving political support and intersectoral coordination, capacity building, computerization and state-level initiatives to ensure that every death is registered and that reliable causes of death are recorded – at least within an adequate sample of registration units within each state. PMID:26769992
Terry, Leann; Kelley, Ken
2012-11-01
Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
Han, Houzeng; Wang, Jian; Wang, Jinling; Tan, Xinglong
2015-01-01
The integration of Global Navigation Satellite Systems (GNSS) carrier phases with Inertial Navigation System (INS) measurements is essential to provide accurate and continuous position, velocity and attitude information, however it is necessary to fix ambiguities rapidly and reliably to obtain high accuracy navigation solutions. In this paper, we present the notion of combining the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS) and low-cost micro-electro-mechanical sensors (MEMS) inertial systems for reliable navigation. An adaptive multipath factor-based tightly-coupled (TC) GPS/BDS/INS integration algorithm is presented and the overall performance of the integrated system is illustrated. A twenty seven states TC GPS/BDS/INS model is adopted with an extended Kalman filter (EKF), which is carried out by directly fusing ambiguity fixed double-difference (DD) carrier phase measurements with the INS predicted pseudoranges to estimate the error states. The INS-aided integer ambiguity resolution (AR) strategy is developed by using a dynamic model, a two-step estimation procedure is applied with adaptively estimated covariance matrix to further improve the AR performance. A field vehicular test was carried out to demonstrate the positioning performance of the combined system. The results show the TC GPS/BDS/INS system significantly improves the single-epoch AR reliability as compared to that of GPS/BDS-only or single satellite navigation system integrated strategy, especially for high cut-off elevations. The AR performance is also significantly improved for the combined system with adaptive covariance matrix in the presence of low elevation multipath related to the GNSS-only case. A total of fifteen simulated outage tests also show that the time to relock of the GPS/BDS signals is shortened, which improves the system availability. The results also indicate that TC integration system achieves a few centimeters accuracy in positioning based on the comparison analysis and covariance analysis, even in harsh environments (e.g., in urban canyons), thus we can see the advantage of positioning at high cut-off elevations that the combined GPS/BDS brings. PMID:25875191
Han, Houzeng; Wang, Jian; Wang, Jinling; Tan, Xinglong
2015-04-14
The integration of Global Navigation Satellite Systems (GNSS) carrier phases with Inertial Navigation System (INS) measurements is essential to provide accurate and continuous position, velocity and attitude information, however it is necessary to fix ambiguities rapidly and reliably to obtain high accuracy navigation solutions. In this paper, we present the notion of combining the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS) and low-cost micro-electro-mechanical sensors (MEMS) inertial systems for reliable navigation. An adaptive multipath factor-based tightly-coupled (TC) GPS/BDS/INS integration algorithm is presented and the overall performance of the integrated system is illustrated. A twenty seven states TC GPS/BDS/INS model is adopted with an extended Kalman filter (EKF), which is carried out by directly fusing ambiguity fixed double-difference (DD) carrier phase measurements with the INS predicted pseudoranges to estimate the error states. The INS-aided integer ambiguity resolution (AR) strategy is developed by using a dynamic model, a two-step estimation procedure is applied with adaptively estimated covariance matrix to further improve the AR performance. A field vehicular test was carried out to demonstrate the positioning performance of the combined system. The results show the TC GPS/BDS/INS system significantly improves the single-epoch AR reliability as compared to that of GPS/BDS-only or single satellite navigation system integrated strategy, especially for high cut-off elevations. The AR performance is also significantly improved for the combined system with adaptive covariance matrix in the presence of low elevation multipath related to the GNSS-only case. A total of fifteen simulated outage tests also show that the time to relock of the GPS/BDS signals is shortened, which improves the system availability. The results also indicate that TC integration system achieves a few centimeters accuracy in positioning based on the comparison analysis and covariance analysis, even in harsh environments (e.g., in urban canyons), thus we can see the advantage of positioning at high cut-off elevations that the combined GPS/BDS brings.
A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data
ERIC Educational Resources Information Center
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert
2007-01-01
A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the…
Terada, Tasuku; Loehr, Sarah; Guigard, Emmanuel; McCargar, Linda J; Bell, Gordon J; Senior, Peter; Boulé, Normand G
2014-08-01
This study determined the test-retest reliability of a continuous glucose monitoring system (CGMS) (iPro™2; Medtronic, Northridge, CA) under standardized conditions in individuals with type 2 diabetes (T2D). Fourteen individuals with T2D spent two nonconsecutive days in a calorimetry unit. On both days, meals, medication, and exercise were standardized. Glucose concentrations were measured continuously by CGMS, from which daily mean glucose concentration (GLU(mean)), time spent in hyperglycemia (t(>10.0 mmol/L)), and meal, exercise, and nocturnal mean glucose concentrations, as well as glycemic variability (SD(w), percentage coefficient of variation [%cv(w)], mean amplitude of glycemic excursions [MAGEc, MAGE(ave), and MAGE(abs.gos)], and continuous overlapping net glycemic action [CONGA(n)]) were estimated. Absolute and relative reliabilities were investigated using coefficient of variation (CV) and intraclass correlation, respectively. Relative reliability ranged from 0.77 to 0.95 (P<0.05) for GLU(mean) and meal, exercise, and nocturnal glycemia with CV ranging from 3.9% to 11.7%. Despite significant relative reliability (R=0.93; P<0.01), t(>10.0 mmol/L) showed larger CV (54.7%). Among the different glycemic variability measures, a significant between-day difference was observed in MAGEc, MAGE(ave), CONGA6, and CONGA12. The remaining measures (i.e., SD(w), %cv(w), MAGE(abs.gos), and CONGA1-4) indicated no between-day differences and significant relative reliability. In individuals with T2D, CGMS-estimated glycemic profiles were characterized by high relative and absolute reliability for both daily and shorter-term measurements as represented by GLUmean and meal, exercise, and nocturnal glycemia. Among the different methods to calculate glycemic variability, our results showed SD(w), %cv(w), MAGE(abs.gos), and CONGAn with n ≤ 4 were reliable measures. These results suggest the usefulness of CGMS in clinical trials utilizing repeated measured.
Yuan, Xuebing; Yu, Shuai; Zhang, Shengzhi; Wang, Guoping; Liu, Sheng
2015-01-01
Inertial navigation based on micro-electromechanical system (MEMS) inertial measurement units (IMUs) has attracted numerous researchers due to its high reliability and independence. The heading estimation, as one of the most important parts of inertial navigation, has been a research focus in this field. Heading estimation using magnetometers is perturbed by magnetic disturbances, such as indoor concrete structures and electronic equipment. The MEMS gyroscope is also used for heading estimation. However, the accuracy of gyroscope is unreliable with time. In this paper, a wearable multi-sensor system has been designed to obtain the high-accuracy indoor heading estimation, according to a quaternion-based unscented Kalman filter (UKF) algorithm. The proposed multi-sensor system including one three-axis accelerometer, three single-axis gyroscopes, one three-axis magnetometer and one microprocessor minimizes the size and cost. The wearable multi-sensor system was fixed on waist of pedestrian and the quadrotor unmanned aerial vehicle (UAV) for heading estimation experiments in our college building. The results show that the mean heading estimation errors are less 10° and 5° to multi-sensor system fixed on waist of pedestrian and the quadrotor UAV, respectively, compared to the reference path. PMID:25961384
NASA Technical Reports Server (NTRS)
Turnquist, S. R.; Twombly, M.; Hoffman, D.
1989-01-01
A preliminary reliability, availability, and maintainability (RAM) analysis of the proposed Space Station Freedom electric power system (EPS) was performed using the unit reliability, availability, and maintainability (UNIRAM) analysis methodology. Orbital replacement units (ORUs) having the most significant impact on EPS availability measures were identified. Also, the sensitivity of the EPS to variations in ORU RAM data was evaluated for each ORU. Estimates were made of average EPS power output levels and availability of power to the core area of the space station. The results of assessments of the availability of EPS power and power to load distribution points in the space stations are given. Some highlights of continuing studies being performed to understand EPS availability considerations are presented.
Lindskog, Marcus; Winman, Anders; Juslin, Peter; Poom, Leo
2013-01-01
Two studies investigated the reliability and predictive validity of commonly used measures and models of Approximate Number System acuity (ANS). Study 1 investigated reliability by both an empirical approach and a simulation of maximum obtainable reliability under ideal conditions. Results showed that common measures of the Weber fraction (w) are reliable only when using a substantial number of trials, even under ideal conditions. Study 2 compared different purported measures of ANS acuity as for convergent and predictive validity in a within-subjects design and evaluated an adaptive test using the ZEST algorithm. Results showed that the adaptive measure can reduce the number of trials needed to reach acceptable reliability. Only direct tests with non-symbolic numerosity discriminations of stimuli presented simultaneously were related to arithmetic fluency. This correlation remained when controlling for general cognitive ability and perceptual speed. Further, the purported indirect measure of ANS acuity in terms of the Numeric Distance Effect (NDE) was not reliable and showed no sign of predictive validity. The non-symbolic NDE for reaction time was significantly related to direct w estimates in a direction contrary to the expected. Easier stimuli were found to be more reliable, but only harder (7:8 ratio) stimuli contributed to predictive validity. PMID:23964256
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui
2018-04-01
Considering that multi-source uncertainties from inherent nature as well as the external environment are unavoidable and severely affect the controller performance, the dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to the closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with specific implementation of random items. For nonlinear cases, the collocation set methodology and fourth Rounge-Kutta algorithm are introduced as well. Enlightened by the first-passage model in random process theory as well as by the static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for the vibration control systems and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the methodology developed.
Enhanced Component Performance Study: Turbine-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-11-01
This report presents an enhanced performance evaluation of turbine-driven pumps (TDPs) at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The TDP failure modes considered are failure to start (FTS), failure to run less than or equal to one hour (FTR=1H), failure to run more than one hour (FTR>1H), and normally running systems FTS and failure to run (FTR). The component reliability estimates and themore » reliability data are trended for the most recent 10-year period while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified for TDP unavailability, for frequency of start demands for standby TDPs, and for run hours in the first hour after start. Statistically significant decreasing trends were identified for start demands for normally running TDPs, and for run hours per reactor critical year for normally running TDPs.« less
Bottema-Beutel, Kristen; Lloyd, Blair; Carter, Erik W; Asmus, Jennifer M
2014-11-01
Attaining reliable estimates of observational measures can be challenging in school and classroom settings, as behavior can be influenced by multiple contextual factors. Generalizability (G) studies can enable researchers to estimate the reliability of observational data, and decision (D) studies can inform how many observation sessions are necessary to achieve a criterion level of reliability. We conducted G and D studies using observational data from a randomized control trial focusing on social and academic participation of students with severe disabilities in inclusive secondary classrooms. Results highlight the importance of anchoring observational decisions to reliability estimates from existing or pilot data sets. We outline steps for conducting G and D studies and address options when reliability estimates are lower than desired.
A LEO Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Bauer, Frank H. (Technical Monitor)
2000-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes.
Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.
2014-01-01
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of 3/4 for two antennas on the rotating frame and a single antenna at the reference station. PMID:25036330
Integrated GNSS attitude determination and positioning for direct geo-referencing.
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J G
2014-07-17
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0:8, matching the theoretical gain of √ 3/4 for two antennas on the rotating frame and a single antenna at the reference station.
A dynamic programming approach to estimate the capacity value of energy storage
Sioshansi, Ramteen; Madaeni, Seyed Hossein; Denholm, Paul
2013-09-17
Here, we present a method to estimate the capacity value of storage. Our method uses a dynamic program to model the effect of power system outages on the operation and state of charge of storage in subsequent periods. We combine the optimized dispatch from the dynamic program with estimated system loss of load probabilities to compute a probability distribution for the state of charge of storage in each period. This probability distribution can be used as a forced outage rate for storage in standard reliability-based capacity value estimation methods. Our proposed method has the advantage over existing approximations that itmore » explicitly captures the effect of system shortage events on the state of charge of storage in subsequent periods. We also use a numerical case study, based on five utility systems in the U.S., to demonstrate our technique and compare it to existing approximation methods.« less
System for estimating fatigue damage
DOE Office of Scientific and Technical Information (OSTI.GOV)
LeMonds, Jeffrey; Guzzo, Judith Ann; Liu, Shaopeng
In one aspect, a system for estimating fatigue damage in a riser string is provided. The system includes a plurality of accelerometers which can be deployed along a riser string and a communications link to transmit accelerometer data from the plurality of accelerometers to one or more data processors in real time. With data from a limited number of accelerometers located at sensor locations, the system estimates an optimized current profile along the entire length of the riser including riser locations where no accelerometer is present. The optimized current profile is then used to estimate damage rates to individual risermore » components and to update a total accumulated damage to individual riser components. The number of sensor locations is small relative to the length of a deepwater riser string, and a riser string several miles long can be reliably monitored along its entire length by fewer than twenty sensor locations.« less
ERIC Educational Resources Information Center
Gorev, Pavel M.; Bichurina, Svetlana Y.; Yakupova, Rufiya M.; Khairova, Irina V.
2016-01-01
Cognitive development of personality can be considered as one of the key directions of preschool education presented in the world practice, where preschool programs are educational ones, and preschool education is the first level of the general education. Thereby the purpose of the research is to create a model of reliable estimation of cognitive…
ERIC Educational Resources Information Center
Green, Samuel B.; Yang, Yanyun
2009-01-01
A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.
Deng, Zhimin; Tian, Tianhai
2014-07-29
The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
Reliable Change and Outcome Trajectories Across Levels of Care in a Mental Health System for Youth.
Jackson, David S; Keir, Scott S; Sender, Max; Mueller, Charles W
2017-01-01
Knowledge of mental health treatment outcome trajectories across various service types can be valuable for both system- and client-level decision-making. Using longitudinal youth functional impairment scores across 2807 treatment episodes, this study examined outcome trajectories and estimated the number of months required for reliable change across nine major services (or levels of care). Results indicate logarithmic improvement trajectories for a majority of levels of care and significant differences in time until improvement ranging from 4 to 12 months. Findings can guide system-level policies on lengths of treatment and service authorizations and provide expected treatment response data for client-level treatment decisions.
Tracking reliability for space cabin-borne equipment in development by Crow model.
Chen, J D; Jiao, S J; Sun, H L
2001-12-01
Objective. To study and track the reliability growth of manned spaceflight cabin-borne equipment in the course of its development. Method. A new technique of reliability growth estimation and prediction, which is composed of the Crow model and test data conversion (TDC) method was used. Result. The estimation and prediction value of the reliability growth conformed to its expectations. Conclusion. The method could dynamically estimate and predict the reliability of the equipment by making full use of various test information in the course of its development. It offered not only a possibility of tracking the equipment reliability growth, but also the reference for quality control in manned spaceflight cabin-borne equipment design and development process.
Analysis of Shuttle Orbiter Reliability and Maintainability Data for Conceptual Studies
NASA Technical Reports Server (NTRS)
Morris, W. D.; White, N. H.; Ebeling, C. E.
1996-01-01
In order to provide a basis for estimating the expected support required of new systems during their conceptual design phase, Langley Research Center has recently collected Shuttle Orbiter reliability and maintainability data from the various data base sources at Kennedy Space Center. This information was analyzed to provide benchmarks, trends, and distributions to aid in the analysis of new designs. This paper presents a summation of those results and an initial interpretation of the findings.
D'Agnese, F. A.; Faunt, C.C.; Turner, A.K.; ,
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were used to calculate discharge volumes for these area. An empirical method of groundwater recharge estimation was modified to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
The Yale-Brown Obsessive Compulsive Scale: A Reliability Generalization Meta-Analysis.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Maria; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-10-01
The Yale-Brown Obsessive Compulsive Scale (Y-BOCS) is the most frequently applied test to assess obsessive compulsive symptoms. We conducted a reliability generalization meta-analysis on the Y-BOCS to estimate the average reliability, examine the variability among the reliability estimates, search for moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the Y-BOCS. We included studies where the Y-BOCS was applied to a sample of adults and reliability estimate was reported. Out of the 11,490 references located, 144 studies met the selection criteria. For the total scale, the mean reliability was 0.866 for coefficients alpha, 0.848 for test-retest correlations, and 0.922 for intraclass correlations. The moderator analyses led to a predictive model where the standard deviation of the total test and the target population (clinical vs. nonclinical) explained 38.6% of the total variability among coefficients alpha. Finally, clinical implications of the results are discussed. © The Author(s) 2014.
Influences on and Limitations of Classical Test Theory Reliability Estimates.
ERIC Educational Resources Information Center
Arnold, Margery E.
It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…
Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples
ERIC Educational Resources Information Center
Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie
2005-01-01
This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…
Basis for the power supply reliability study of the 1 MW neutron source
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, D.G.; Fathizadeh, M.
1993-07-01
The Intense Pulsed Neutron Source (IPNS) upgrade to 1 MW requires new power supply designs. This paper describes the tools and the methodology needed to assess the reliability of the power supplies. Both the design and operation of the power supplies in the synchrotron will be taken into account. To develop a reliability budget, the experiments to be conducted with this accelerator are reviewed, and data is collected on the number and duration of interruptions possible before an experiment is required to start over. Once the budget is established, several accelerators of this type will be examined. The budget ismore » allocated to the different accelerator systems based on their operating experience. The accelerator data is usually in terms of machine availability and system down time. It takes into account mean time to failure (MTTF), time to diagnose, time to repair or replace the failed components, and time to get the machine back online. These estimated times are used as baselines for the design. Even though we are in the early stage of design, available data can be analyzed to estimate the MTTF for the power supplies.« less
Reliability reporting across studies using the Buss Durkee Hostility Inventory.
Vassar, Matt; Hale, William
2009-01-01
Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and subsequent interpretation of results requires that the instruments yield strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. Therefore, the present study was designed to address reliability reporting practices for a widely used anger assessment, the Buss Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates of scores for BDHI subscales generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interests in analyzing navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space, calculating their motion parameters represented by positions and poses. We proposed a novel method to detect reliable interest points from images of moving objects, which can be hardly detected by general purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule, considering appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point set to the case for sparse set, reducing the potential risk of greatly biased estimation. Experiments are conducted against real scenes, showing the capability of the system of detecting multiple unknown moving objects and estimating their positions and poses. PMID:29206189
NASA Astrophysics Data System (ADS)
Lyubimov, V. V.; Kurkina, E. V.
2018-05-01
The authors consider the problem of a dynamic system passing through a low-order resonance, describing an uncontrolled atmospheric descent of an asymmetric nanosatellite in the Earth's atmosphere. The authors perform mathematical and numerical modeling of the motion of the nanosatellite with a small mass-aerodynamic asymmetry relative to the center of mass. The aim of the study is to obtain new reliable approximate analytical estimates of perturbations of the angle of attack of a nanosatellite passing through resonance at angles of attack of not more than 0.5π. By using the stationary phase method, the authors were able to investigate a discontinuous perturbation in the angle of attack of a nanosatellite passing through a resonance with two different nanosatellite designs. Comparison of the results of the numerical modeling and new approximate analytical estimates of the perturbation of the angle of attack confirms the reliability of the said estimates.
A Design Heritage-Based Forecasting Methodology for Risk Informed Management of Advanced Systems
NASA Technical Reports Server (NTRS)
Maggio, Gaspare; Fragola, Joseph R.
1999-01-01
The development of next generation systems often carries with it the promise of improved performance, greater reliability, and reduced operational costs. These expectations arise from the use of novel designs, new materials, advanced integration and production technologies intended for functionality replacing the previous generation. However, the novelty of these nascent technologies is accompanied by lack of operational experience and, in many cases, no actual testing as well. Therefore some of the enthusiasm surrounding most new technologies may be due to inflated aspirations from lack of knowledge rather than actual future expectations. This paper proposes a design heritage approach for improved reliability forecasting of advanced system components. The basis of the design heritage approach is to relate advanced system components to similar designs currently in operation. The demonstrated performance of these components could then be used to forecast the expected performance and reliability of comparable advanced technology components. In this approach the greater the divergence of the advanced component designs from the current systems the higher the uncertainty that accompanies the associated failure estimates. Designers of advanced systems are faced with many difficult decisions. One of the most common and more difficult types of these decisions are those related to the choice between design alternatives. In the past decision-makers have found these decisions to be extremely difficult to make because they often involve the trade-off between a known performing fielded design and a promising paper design. When it comes to expected reliability performance the paper design always looks better because it is on paper and it addresses all the know failure modes of the fielded design. On the other hand there is a long, and sometimes very difficult road, between the promise of a paper design and its fulfillment; with the possibility that sometimes the reliability promise is not fulfilled at all. Decision makers in advanced technology areas have always known to discount the performance claims of a design to a degree in proportion to its stage of development, and at times have preferred the more mature design over the one of lesser maturity even with the latter promising substantially better performance once fielded. As with the broader measures of performance this has also been true for projected reliability performance. Paper estimates of potential advances in design reliability are to a degree uncertain in proportion to the maturity of the features being proposed to secure those advances. This is especially true when performance-enhancing features in other areas are also planned to be part of the development program.
USDA-ARS?s Scientific Manuscript database
Error in rater estimates of plant disease severity occur, and standard area diagrams (SADs) help improve accuracy and reliability. The effects of diagram number in a SAD set on accuracy and reliability is unknown. The objective of this study was to compare estimates of pecan scab severity made witho...
Reliability Estimation When a Test Is Split into Two Parts of Unknown Effective Length.
ERIC Educational Resources Information Center
Feldt, Leonard S.
2002-01-01
Considers the situation in which content or administrative considerations limit the way in which a test can be partitioned to estimate the internal consistency reliability of the total test score. Demonstrates that a single-valued estimate of the total score reliability is possible only if an assumption is made about the comparative size of the…
ERIC Educational Resources Information Center
Saupe, Joe L.; Eimers, Mardy T.
2013-01-01
The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Parameter estimation using meta-heuristics in systems biology: a comprehensive review.
Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie
2012-01-01
This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
Ahluwalia, Indu B; Helms, Kristen; Morrow, Brian
2013-01-01
We investigated the reliability and validity of three self-reported indicators from the Pregnancy Risk Assessment Monitoring System (PRAMS) survey. We used 2008 PRAMS (n=15,646) data from 12 states that had implemented the 2003 revised U.S. Certificate of Live Birth. We estimated reliability by kappa coefficient and validity by sensitivity and specificity using the birth certificate data as the reference for the following: prenatal participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); Medicaid payment for delivery; and breastfeeding initiation. These indicators were examined across several demographic subgroups. The reliability was high for all three measures: 0.81 for WIC participation, 0.67 for Medicaid payment of delivery, and 0.72 for breastfeeding initiation. The validity of PRAMS indicators was also high: WIC participation (sensitivity = 90.8%, specificity = 90.6%), Medicaid payment for delivery (sensitivity = 82.4%, specificity = 85.6%), and breastfeeding initiation (sensitivity = 94.3%, specificity = 76.0%). The prevalence estimates were higher on PRAMS than the birth certificate for each of the indicators except Medicaid-paid delivery among non-Hispanic black women. Kappa values within most subgroups remained in the moderate range (0.40-0.80). Sensitivity and specificity values were lower for Hispanic women who responded to the PRAMS survey in Spanish and for breastfeeding initiation among women who delivered very low birthweight and very preterm infants. The validity and reliability of the PRAMS data for measures assessed were high. Our findings support the use of PRAMS data for epidemiological surveillance, research, and planning.
NASA Technical Reports Server (NTRS)
Monaghan, Mark W.; Gillespie, Amanda M.
2013-01-01
During the shuttle era NASA utilized a failure reporting system called the Problem Reporting and Corrective Action (PRACA) it purpose was to identify and track system non-conformance. The PRACA system over the years evolved from a relatively nominal way to identify system problems to a very complex tracking and report generating data base. The PRACA system became the primary method to categorize any and all anomalies from corrosion to catastrophic failure. The systems documented in the PRACA system range from flight hardware to ground or facility support equipment. While the PRACA system is complex, it does possess all the failure modes, times of occurrence, length of system delay, parts repaired or replaced, and corrective action performed. The difficulty is mining the data then to utilize that data in order to estimate component, Line Replaceable Unit (LRU), and system reliability analysis metrics. In this paper, we identify a methodology to categorize qualitative data from the ground system PRACA data base for common ground or facility support equipment. Then utilizing a heuristic developed for review of the PRACA data determine what reports identify a credible failure. These data are the used to determine inter-arrival times to perform an estimation of a metric for repairable component-or LRU reliability. This analysis is used to determine failure modes of the equipment, determine the probability of the component failure mode, and support various quantitative differing techniques for performing repairable system analysis. The result is that an effective and concise estimate of components used in manned space flight operations. The advantage is the components or LRU's are evaluated in the same environment and condition that occurs during the launch process.
Toward accurate and precise estimates of lion density.
Elliot, Nicholas B; Gopalaswamy, Arjun M
2017-08-01
Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old/100 km 2 , and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy decisions. © 2016 Society for Conservation Biology.
Accelerometer-based measures in physical activity surveillance: current practices and issues.
Pedišić, Željko; Bauman, Adrian
2015-02-01
Self-reports of physical activity (PA) have been the mainstay of measurement in most non-communicable disease (NCD) surveillance systems. To these, other measures are added to summate to a comprehensive PA surveillance system. Recently, some national NCD surveillance systems have started using accelerometers as a measure of PA. The purpose of this paper was specifically to appraise the suitability and role of accelerometers for population-level PA surveillance. A thorough literature search was conducted to examine aspects of the generalisability, reliability, validity, comprehensiveness and between-study comparability of accelerometer estimates, and to gauge the simplicity, cost-effectiveness, adaptability and sustainability of their use in NCD surveillance. Accelerometer data collected in PA surveillance systems may not provide estimates that are generalisable to the target population. Accelerometer-based estimates have adequate reliability for PA surveillance, but there are still several issues associated with their validity. Accelerometer-based prevalence estimates are largely dependent on the investigators' choice of intensity cut-off points. Maintaining standardised accelerometer data collections in long-term PA surveillance systems is difficult, which may cause discontinuity in time-trend data. The use of accelerometers does not necessarily produce useful between-study and international comparisons due to lack of standardisation of data collection and processing methods. To conclude, it appears that accelerometers still have limitations regarding generalisability, validity, comprehensiveness, simplicity, affordability, adaptability, between-study comparability and sustainability. Therefore, given the current evidence, it seems that the widespread adoption of accelerometers specifically for large-scale PA surveillance systems may be premature. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly
Maier, Andrea B.; Aarts, Ronald G. K. M.; van Gerven, Joop M. A.; Arendzen, J. Hans; Schouten, Alfred C.; Meskers, Carel G. M.; van der Kooij, Herman
2016-01-01
Objectives System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed loop system identification techniques. Methods In twelve healthy elderly balance tests were performed twice a day during three days. Body sway was measured during two minutes of standing with eyes closed and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances applied at the leg and trunk segment. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on the generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM) and minimal detectable change (MDC). Results A systematic error was found between the first and second trial in the sensitivity functions. No systematic error was found in the neuromuscular controller and body sway. The reliability of 15 of 25 parameters and body sway were moderate to excellent when the results of two trials on three days were averaged. To reach an excellent reliability on one day in 7 out of 25 parameters, it was predicted that at least seven trials must be averaged. Conclusion This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in elderly. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two minutes must be performed on one day. PMID:26953694
Technical information report: Plasma melter operation, reliability, and maintenance analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, D.W.
1995-03-14
This document provides a technical report of operability, reliability, and maintenance of a plasma melter for low-level waste vitrification, in support of the Hanford Tank Waste Remediation System (TWRS) Low-Level Waste (LLW) Vitrification Program. A process description is provided that minimizes maintenance and downtime and includes material and energy balances, equipment sizes and arrangement, startup/operation/maintence/shutdown cycle descriptions, and basis for scale-up to a 200 metric ton/day production facility. Operational requirements are provided including utilities, feeds, labor, and maintenance. Equipment reliability estimates and maintenance requirements are provided which includes a list of failure modes, responses, and consequences.
ERIC Educational Resources Information Center
Lane, Ginny G.; White, Amy E.; Henson, Robin K.
2002-01-01
Conducted a reliability generalizability study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
Wilderness recreation use estimation: a handbook of methods and systems
Alan E. Watson; David N. Cole; David L. Turner; Penny S. Reynolds
2000-01-01
Documented evidence shows that managers of units within the U.S. National Wilderness Preservation System are making decisions without reliable information on the amount, types, and distribution of recreation use occurring at these areas. There are clear legislative mandates and agency policies that direct managers to monitor trends in use and conditions in wilderness....
Personnel safety with pressurized gas systems
Cadwallader, Lee C.; Zhao, Haihua
2016-09-08
In this study, selected accident case histories are described that illustrate the potential modes of injury from gas jets, pressure-driven missiles, and asphyxiants. Gas combustion hazards are also briefly mentioned. Using high-pressure helium and nitrogen, estimates of safe exclusion distances are calculated for differing pressures, temperatures, and breach sizes. Some sources for gas system reliability values are also cited.
BDS/GPS Dual Systems Positioning Based on the Modified SR-UKF Algorithm
Kong, JaeHyok; Mao, Xuchu; Li, Shaoyuan
2016-01-01
The Global Navigation Satellite System can provide all-day three-dimensional position and speed information. Currently, only using the single navigation system cannot satisfy the requirements of the system’s reliability and integrity. In order to improve the reliability and stability of the satellite navigation system, the positioning method by BDS and GPS navigation system is presented, the measurement model and the state model are described. Furthermore, the modified square-root Unscented Kalman Filter (SR-UKF) algorithm is employed in BDS and GPS conditions, and analysis of single system/multi-system positioning has been carried out, respectively. The experimental results are compared with the traditional estimation results, which show that the proposed method can perform highly-precise positioning. Especially when the number of satellites is not adequate enough, the proposed method combine BDS and GPS systems to achieve a higher positioning precision. PMID:27153068
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
Brackley, Victoria; Ball, Kevin; Tor, Elaine
2018-05-12
The effectiveness of the swimming turn is highly influential to overall performance in competitive swimming. The push-off or wall contact, within the turn phase, is directly involved in determining the speed the swimmer leaves the wall. Therefore, it is paramount to develop reliable methods to measure the wall-contact-time during the turn phase for training and research purposes. The aim of this study was to determine the concurrent validity and reliability of the Pool Pad App to measure wall-contact-time during the freestyle and backstroke tumble turn. The wall-contact-times of nine elite and sub-elite participants were recorded during their regular training sessions. Concurrent validity statistics included the standardised typical error estimate, linear analysis and effect sizes while the intraclass correlating coefficient (ICC) was used for the reliability statistics. The standardised typical error estimate resulted in a moderate Cohen's d effect size with an R 2 value of 0.80 and the ICC between the Pool Pad and 2D video footage was 0.89. Despite these measurement differences, the results from this concurrent validity and reliability analyses demonstrated that the Pool Pad is suitable for measuring wall-contact-time during the freestyle and backstroke tumble turn within a training environment.
Inference for Stochastic Chemical Kinetics Using Moment Equations and System Size Expansion.
Fröhlich, Fabian; Thomas, Philipp; Kazeroonian, Atefeh; Theis, Fabian J; Grima, Ramon; Hasenauer, Jan
2016-07-01
Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization methods and uncertainty analysis methods for MA and SSE. Efficiency and reliability of the methods are assessed using simulation examples as well as by an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RRE. Furthermore, the simulation examples revealed that the resulting estimates are more reliable for an intermediate volume regime. In this regime the estimation error is reduced and we propose methods to determine the regime boundaries. These results illustrate that inference using MA and SSE is feasible and possesses a high sensitivity.
Inference for Stochastic Chemical Kinetics Using Moment Equations and System Size Expansion
Thomas, Philipp; Kazeroonian, Atefeh; Theis, Fabian J.; Grima, Ramon; Hasenauer, Jan
2016-01-01
Quantitative mechanistic models are valuable tools for disentangling biochemical pathways and for achieving a comprehensive understanding of biological systems. However, to be quantitative the parameters of these models have to be estimated from experimental data. In the presence of significant stochastic fluctuations this is a challenging task as stochastic simulations are usually too time-consuming and a macroscopic description using reaction rate equations (RREs) is no longer accurate. In this manuscript, we therefore consider moment-closure approximation (MA) and the system size expansion (SSE), which approximate the statistical moments of stochastic processes and tend to be more precise than macroscopic descriptions. We introduce gradient-based parameter optimization methods and uncertainty analysis methods for MA and SSE. Efficiency and reliability of the methods are assessed using simulation examples as well as by an application to data for Epo-induced JAK/STAT signaling. The application revealed that even if merely population-average data are available, MA and SSE improve parameter identifiability in comparison to RRE. Furthermore, the simulation examples revealed that the resulting estimates are more reliable for an intermediate volume regime. In this regime the estimation error is reduced and we propose methods to determine the regime boundaries. These results illustrate that inference using MA and SSE is feasible and possesses a high sensitivity. PMID:27447730
Effect of Surge Current Testing on Reliability of Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2008-01-01
Tantalum capacitors manufactured per military specifications are established reliability components and have less than 0.001% of failures per 1000 hours for grades D or S, thus positioning these parts among electronic components with the highest reliability characteristics. Still, failures of tantalum capacitors do happen and when it occurs it might have catastrophic consequences for the system. To reduce this risk, further development of a screening and qualification system with special attention to the possible deficiencies in the existing procedures is necessary. The purpose of this work is evaluation of the effect of surge current stress testing on reliability of the parts at both steady-state and multiple surge current stress conditions. In order to reveal possible degradation and precipitate more failures, various part types were tested and stressed in the range of voltage and temperature conditions exceeding the specified limits. A model to estimate the probability of post-surge current testing-screening failures and measures to improve the effectiveness of the screening process has been suggested.
Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente
2018-02-06
Nowadays, the current vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through the variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. For this goal, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems, which allow acquiring data from sensors and executing the developed algorithms to estimate and to control with novel higher speed computing. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been done. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison has been made from tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices.
Díaz, Vicente
2018-01-01
Nowadays, the current vehicles are incorporating control systems in order to improve their stability and handling. These control systems need to know the vehicle dynamics through the variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are obtained or estimated from sensors. For this goal, it is necessary to mount on vehicles not only low-cost sensors, but also low-cost embedded systems, which allow acquiring data from sensors and executing the developed algorithms to estimate and to control with novel higher speed computing. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time and reliability has been done. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison has been made from tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices. PMID:29415507
NASA Technical Reports Server (NTRS)
Shin, Jong-Yeob; Belcastro, Christine
2008-01-01
Formal robustness analysis of aircraft control upset prevention and recovery systems could play an important role in their validation and ultimate certification. As a part of the validation process, this paper describes an analysis method for determining a reliable flight regime in the flight envelope within which an integrated resilent control system can achieve the desired performance of tracking command signals and detecting additive faults in the presence of parameter uncertainty and unmodeled dynamics. To calculate a reliable flight regime, a structured singular value analysis method is applied to analyze the closed-loop system over the entire flight envelope. To use the structured singular value analysis method, a linear fractional transform (LFT) model of a transport aircraft longitudinal dynamics is developed over the flight envelope by using a preliminary LFT modeling software tool developed at the NASA Langley Research Center, which utilizes a matrix-based computational approach. The developed LFT model can capture original nonlinear dynamics over the flight envelope with the ! block which contains key varying parameters: angle of attack and velocity, and real parameter uncertainty: aerodynamic coefficient uncertainty and moment of inertia uncertainty. Using the developed LFT model and a formal robustness analysis method, a reliable flight regime is calculated for a transport aircraft closed-loop system.
Software Fault Tolerance: A Tutorial
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2000-01-01
Because of our present inability to produce error-free software, software fault tolerance is and will continue to be an important consideration in software systems. The root cause of software design errors is the complexity of the systems. Compounding the problems in building correct software is the difficulty in assessing the correctness of software for highly complex systems. After a brief overview of the software development processes, we note how hard-to-detect design faults are likely to be introduced during development and how software faults tend to be state-dependent and activated by particular input sequences. Although component reliability is an important quality measure for system level analysis, software reliability is hard to characterize and the use of post-verification reliability estimates remains a controversial issue. For some applications software safety is more important than reliability, and fault tolerance techniques used in those applications are aimed at preventing catastrophes. Single version software fault tolerance techniques discussed include system structuring and closure, atomic actions, inline fault detection, exception handling, and others. Multiversion techniques are based on the assumption that software built differently should fail differently and thus, if one of the redundant versions fails, it is expected that at least one of the other versions will provide an acceptable output. Recovery blocks, N-version programming, and other multiversion techniques are reviewed.
Reliability of reservoir firm yield determined from the historical drought of record
Archfield, S.A.; Vogel, R.M.
2005-01-01
The firm yield of a reservoir is typically defined as the maximum yield that could have been delivered without failure during the historical drought of record. In the future, reservoirs will experience droughts that are either more or less severe than the historical drought of record. The question addressed here is what the reliability of such systems will be when operated at the firm yield. To address this question, we examine the reliability of 25 hypothetical reservoirs sited across five locations in the central and western United States. These locations provided a continuous 756-month streamflow record spanning the same time interval. The firm yield of each reservoir was estimated from the historical drought of record at each location. To determine the steady-state monthly reliability of each firm-yield estimate, 12,000-month synthetic records were generated using the moving-blocks bootstrap method. Bootstrapping was repeated 100 times for each reservoir to obtain an average steady-state monthly reliability R, the number of months the reservoir did not fail divided by the total months. Values of R were greater than 0.99 for 60 percent of the study reservoirs; the other 40 percent ranged from 0.95 to 0.98. Estimates of R were highly correlated with both the level of development (ratio of firm yield to average streamflow) and average lag-1 monthly autocorrelation. Together these two predictors explained 92 percent of the variability in R, with the level of development alone explaining 85 percent of the variability. Copyright ASCE 2005.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burdain. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values, while keeping constant the arterial resistance. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
NASA Technical Reports Server (NTRS)
Martin, Ken E.; Esztergalyos, J.
1992-01-01
The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.
NASA Astrophysics Data System (ADS)
Martin, Ken E.; Esztergalyos, J.
1992-07-01
The Bonneville Power Administration (BPA) uses IRIG-B transmitted over microwave as its primary system time dissemination. Problems with accuracy and reliability have led to ongoing research into better methods. BPA has also developed and deployed a unique fault locator which uses precise clocks synchronized by a pulse over microwaves. It automatically transmits the data to a central computer for analysis. A proposed system could combine fault location timing and time dissemination into a Global Position System (GPS) timing receiver and close the verification loop through a master station at the Dittmer Control Center. Such a system would have many advantages, including lower cost, higher reliability, and wider industry support. Test results indicate the GPS has sufficient accuracy and reliability for this and other current timing requirements including synchronous phase angle measurements. A phasor measurement system which provides phase angle has recently been tested with excellent results. Phase angle is a key parameter in power system control applications including dynamic braking, DC modulation, remedial action schemes, and system state estimation. Further research is required to determine the applications which can most effectively use real-time phase angle measurements and the best method to apply them.
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of necrosis factor (NF)-κ B , uncovering unidentifiabilities.
Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal
2009-01-01
This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment individual UAV FDI system. These additional estimations are obtained using images from the same planar scene taken from two different UAVs. Since accuracy and noise level of the estimation depends on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in case of faults caused by slow growing errors of absolute position estimation that cannot be detected by using local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.
Why preferring parametric forecasting to nonparametric methods?
Jabot, Franck
2015-05-07
A recent series of papers by Charles T. Perretti and collaborators have shown that nonparametric forecasting methods can outperform parametric methods in noisy nonlinear systems. Such a situation can arise because of two main reasons: the instability of parametric inference procedures in chaotic systems which can lead to biased parameter estimates, and the discrepancy between the real system dynamics and the modeled one, a problem that Perretti and collaborators call "the true model myth". Should ecologists go on using the demanding parametric machinery when trying to forecast the dynamics of complex ecosystems? Or should they rely on the elegant nonparametric approach that appears so promising? It will be here argued that ecological forecasting based on parametric models presents two key comparative advantages over nonparametric approaches. First, the likelihood of parametric forecasting failure can be diagnosed thanks to simple Bayesian model checking procedures. Second, when parametric forecasting is diagnosed to be reliable, forecasting uncertainty can be estimated on virtual data generated with the fitted to data parametric model. In contrast, nonparametric techniques provide forecasts with unknown reliability. This argumentation is illustrated with the simple theta-logistic model that was previously used by Perretti and collaborators to make their point. It should convince ecologists to stick to standard parametric approaches, until methods have been developed to assess the reliability of nonparametric forecasting. Copyright © 2015 Elsevier Ltd. All rights reserved.
An inverse problem for a mathematical model of aquaponic agriculture
NASA Astrophysics Data System (ADS)
Bobak, Carly; Kunze, Herb
2017-01-01
Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively little mathematical models exist which aim to study the system processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concetrations dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
Quantifying complexity of financial short-term time series by composite multiscale entropy measure
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun
2015-05-01
It is significant to study the complexity of financial time series since the financial market is a complex evolved dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Due to its less reliability of entropy estimation for short-term time series at large time scales, a modification method, the composite multiscale entropy, is applied to the financial market. To qualify its effectiveness, its applications in the synthetic white noise and 1 / f noise with different data lengths are reproduced first in the present paper. Then it is introduced for the first time to make a reliability test with two Chinese stock indices. After conducting on short-time return series, the CMSE method shows the advantages in reducing deviations of entropy estimation and demonstrates more stable and reliable results when compared with the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
Integrating Solar PV in Utility System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, A.; Botterud, A.; Wu, J.
2013-10-31
This study develops a systematic framework for estimating the increase in operating costs due to uncertainty and variability in renewable resources, uses the framework to quantify the integration costs associated with sub-hourly solar power variability and uncertainty, and shows how changes in system operations may affect these costs. Toward this end, we present a statistical method for estimating the required balancing reserves to maintain system reliability along with a model for commitment and dispatch of the portfolio of thermal and renewable resources at different stages of system operations. We estimate the costs of sub-hourly solar variability, short-term forecast errors, andmore » day-ahead (DA) forecast errors as the difference in production costs between a case with “realistic” PV (i.e., subhourly solar variability and uncertainty are fully included in the modeling) and a case with “well behaved” PV (i.e., PV is assumed to have no sub-hourly variability and can be perfectly forecasted). In addition, we highlight current practices that allow utilities to compensate for the issues encountered at the sub-hourly time frame with increased levels of PV penetration. In this analysis we use the analytical framework to simulate utility operations with increasing deployment of PV in a case study of Arizona Public Service Company (APS), a utility in the southwestern United States. In our analysis, we focus on three processes that are important in understanding the management of PV variability and uncertainty in power system operations. First, we represent the decisions made the day before the operating day through a DA commitment model that relies on imperfect DA forecasts of load and wind as well as PV generation. Second, we represent the decisions made by schedulers in the operating day through hour-ahead (HA) scheduling. Peaking units can be committed or decommitted in the HA schedules and online units can be redispatched using forecasts that are improved relative to DA forecasts, but still imperfect. Finally, we represent decisions within the operating hour by schedulers and transmission system operators as real-time (RT) balancing. We simulate the DA and HA scheduling processes with a detailed unit-commitment (UC) and economic dispatch (ED) optimization model. This model creates a least-cost dispatch and commitment plan for the conventional generating units using forecasts and reserve requirements as inputs. We consider only the generation units and load of the utility in this analysis; we do not consider opportunities to trade power with neighboring utilities. We also do not consider provision of reserves from renewables or from demand-side options. We estimate dynamic reserve requirements in order to meet reliability requirements in the RT operations, considering the uncertainty and variability in load, solar PV, and wind resources. Balancing reserve requirements are based on the 2.5th and 97.5th percentile of 1-min deviations from the HA schedule in a previous year. We then simulate RT deployment of balancing reserves using a separate minute-by-minute simulation of deviations from the HA schedules in the operating year. In the simulations we assume that balancing reserves can be fully deployed in 10 min. The minute-by-minute deviations account for HA forecasting errors and the actual variability of the load, wind, and solar generation. Using these minute-by-minute deviations and deployment of balancing reserves, we evaluate the impact of PV on system reliability through the calculation of the standard reliability metric called Control Performance Standard 2 (CPS2). Broadly speaking, the CPS2 score measures the percentage of 10-min periods in which a balancing area is able to balance supply and demand within a specific threshold. Compliance with the North American Electric Reliability Corporation (NERC) reliability standards requires that the CPS2 score must exceed 90% (i.e., the balancing area must maintain adequate balance for 90% of the 10-min periods). The combination of representing DA forecast errors in the DA commitments, using 1-min PV data to simulate RT balancing, and estimates of reliability performance through the CPS2 metric, all factors that are important to operating systems with increasing amounts of PV, makes this study unique in its scope.« less
Field reliability of competency and sanity opinions: A systematic review and meta-analysis.
Guarnera, Lucy A; Murrie, Daniel C
2017-06-01
We know surprisingly little about the interrater reliability of forensic psychological opinions, even though courts and other authorities have long called for known error rates for scientific procedures admitted as courtroom testimony. This is particularly true for opinions produced during routine practice in the field, even for some of the most common types of forensic evaluations-evaluations of adjudicative competency and legal sanity. To address this gap, we used meta-analytic procedures and study space methodology to systematically review studies that examined the interrater reliability-particularly the field reliability-of competency and sanity opinions. Of 59 identified studies, 9 addressed the field reliability of competency opinions and 8 addressed the field reliability of sanity opinions. These studies presented a wide range of reliability estimates; pairwise percentage agreements ranged from 57% to 100% and kappas ranged from .28 to 1.0. Meta-analytic combinations of reliability estimates obtained by independent evaluators returned estimates of κ = .49 (95% CI: .40-.58) for competency opinions and κ = .41 (95% CI: .29-.53) for sanity opinions. This wide range of reliability estimates underscores the extent to which different evaluation contexts tend to produce different reliability rates. Unfortunately, our study space analysis illustrates that available field reliability studies typically provide little information about contextual variables crucial to understanding their findings. Given these concerns, we offer suggestions for improving research on the field reliability of competency and sanity opinions, as well as suggestions for improving reliability rates themselves. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Test Assembly Implications for Providing Reliable and Valid Subscores
ERIC Educational Resources Information Center
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J.
2017-01-01
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Relative Navigation for Formation Flying of Spacecraft
NASA Technical Reports Server (NTRS)
Alonso, Roberto; Du, Ju-Young; Hughes, Declan; Junkins, John L.; Crassidis, John L.
2001-01-01
This paper presents a robust and efficient approach for relative navigation and attitude estimation of spacecraft flying in formation. This approach uses measurements from a new optical sensor that provides a line of sight vector from the master spacecraft to the secondary satellite. The overall system provides a novel, reliable, and autonomous relative navigation and attitude determination system, employing relatively simple electronic circuits with modest digital signal processing requirements and is fully independent of any external systems. Experimental calibration results are presented, which are used to achieve accurate line of sight measurements. State estimation for formation flying is achieved through an optimal observer design. Also, because the rotational and translational motions are coupled through the observation vectors, three approaches are suggested to separate both signals just for stability analysis. Simulation and experimental results indicate that the combined sensor/estimator approach provides accurate relative position and attitude estimates.
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the Consael Process (a bivariate Poisson process) were developed. Possible short comings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
NASA Technical Reports Server (NTRS)
Hurd, W. J.
1974-01-01
A prototype of a semi-real time system for synchronizing the Deep Space Net station clocks by radio interferometry was successfully demonstrated on August 30, 1972. The system utilized an approximate maximum likelihood estimation procedure for processing the data, thereby achieving essentially optimum time sync estimates for a given amount of data, or equivalently, minimizing the amount of data required for reliable estimation. Synchronization accuracies as good as 100 ns rms were achieved between Deep Space Stations 11 and 12, both at Goldstone, Calif. The accuracy can be improved by increasing the system bandwidth until the fundamental limitations due to baseline and source position uncertainties and atmospheric effects are reached. These limitations are under 10 ns for transcontinental baselines.
Test Reliability at the Individual Level
Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly
2016-01-01
Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
An experiment in software reliability
NASA Technical Reports Server (NTRS)
Dunham, J. R.; Pierce, J. L.
1986-01-01
The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.
C-5 Reliability Enhancement and Re-engining Program (C-5 RERP)
2015-12-01
Production Estimate Current APB Production Objective/Threshold Demonstrated Performance Current Estimate Time To Climb/Initial Level Off 837,000 lbs...RCR - Runway Condition Reading SDD - System Design and Development SL - Sea Level C-5 RERP December 2015 SAR March 23, 2016 16:10:28 UNCLASSIFIED 12...5.3 5.3 Acq O&M 0.0 0.0 -- 0.0 0.0 0.0 0.0 Total 7146.6 7135.7 N/A 6698.0 7694.1 7510.7 7066.6 Confidence Level Confidence Level of cost estimate
NASA presentation. [wind energy conversion systems planning
NASA Technical Reports Server (NTRS)
Thomas, R. L.
1973-01-01
The development of a wind energy system is outlined that supplies reliable energy at a cost competitive with other energy systems. A government directed industry program with strong university support is recommended that includes meteorological studies to estimate wind energy potentials and determines favorable regions and sites for wind power installations. Key phases of the overall program are wind energy conversion systems, meteorological wind studies, energy storage systems, and environmental impact studies. Performance testing with a prototype wind energy conversion and storage system is projected for Fiscal 1977.
DOT National Transportation Integrated Search
2017-12-01
For the incident response operations to be appreciated by the general public, it is essential that responsible highway agencies be capable of providing the estimated clearance duration of a detected incident at the level sufficiently reliable for mot...
A low-cost Mr compatible ergometer to assess post-exercise phosphocreatine recovery kinetics.
Naimon, Niels D; Walczyk, Jerzy; Babb, James S; Khegai, Oleksandr; Che, Xuejiao; Alon, Leeor; Regatte, Ravinder R; Brown, Ryan; Parasoglou, Prodromos
2017-06-01
To develop a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems to reliably quantify metabolic parameters in human lower leg muscle using phosphorus magnetic resonance spectroscopy. We constructed an MR compatible ergometer using commercially available materials and elastic bands that provide resistance to movement. We recruited ten healthy subjects (eight men and two women, mean age ± standard deviation: 32.8 ± 6.0 years, BMI: 24.1 ± 3.9 kg/m 2 ). All subjects were scanned on a 7 T whole-body magnet. Each subject was scanned on two visits and performed a 90 s plantar flexion exercise at 40% maximum voluntary contraction during each scan. During the first visit, each subject performed the exercise twice in order for us to estimate the intra-exam repeatability, and once during the second visit in order to estimate the inter-exam repeatability of the time constant of phosphocreatine recovery kinetics. We assessed the intra and inter-exam reliability in terms of the within-subject coefficient of variation (CV). We acquired reliable measurements of PCr recovery kinetics with an intra- and inter-exam CV of 7.9% and 5.7%, respectively. We constructed a low-cost pedal ergometer compatible with ultrahigh (7 T) field MR systems, which allowed us to quantify reliably PCr recovery kinetics in lower leg muscle using 31 P-MRS.
NASA Astrophysics Data System (ADS)
Bell, Andrew Reid; Shah, M. Azeem Ali; Ward, Patrick S.
2014-08-01
It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies.
Bell, Andrew Reid; Shah, M Azeem Ali; Ward, Patrick S
2014-01-01
It is widely argued that farmers are unwilling to pay adequate fees for surface water irrigation to recover the costs associated with maintenance and improvement of delivery systems. In this paper, we use a discrete choice experiment to study farmer preferences for irrigation characteristics along two branch canals in Punjab Province in eastern Pakistan. We find that farmers are generally willing to pay well in excess of current surface water irrigation costs for increased surface water reliability and that the amount that farmers are willing to pay is an increasing function of their existing surface water supply as well as location along the main canal branch. This explicit translation of implicit willingness-to-pay (WTP) for water (via expenditure on groundwater pumping) to WTP for reliable surface water demonstrates the potential for greatly enhanced cost recovery in the Indus Basin Irrigation System via appropriate setting of water user fees, driven by the higher WTP of those currently receiving reliable supplies. PMID:25552779
Parallelized reliability estimation of reconfigurable computer networks
NASA Technical Reports Server (NTRS)
Nicol, David M.; Das, Subhendu; Palumbo, Dan
1990-01-01
A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
DOT National Transportation Integrated Search
1999-06-01
The main purpose of Phase I of this project was to develop a methodology for predicting consequences of hazardous material (HM) crashes, such as injuries and property damage. An initial step in developing a risk assessment is to reliably estimate the...
Design and validation of an automated hydrostatic weighing system.
McClenaghan, B A; Rocchio, L
1986-08-01
The purpose of this study was to design and evaluate the validity of an automated technique to assess body density using a computerized hydrostatic weighing system. An existing hydrostatic tank was modified and interfaced with a microcomputer equipped with an analog-to-digital converter. Software was designed to input variables, control the collection of data, calculate selected measurements, and provide a summary of the results of each session. Validity of the data obtained utilizing the automated hydrostatic weighing system was estimated by: evaluating the reliability of the transducer/computer interface to measure objects of known underwater weight; comparing the data against a criterion measure; and determining inter-session subject reliability. Values obtained from the automated system were found to be highly correlated with known underwater weights (r = 0.99, SEE = 0.0060 kg). Data concurrently obtained utilizing the automated system and a manual chart recorder were also found to be highly correlated (r = 0.99, SEE = 0.0606 kg). Inter-session subject reliability was determined utilizing data collected on subjects (N = 16) tested on two occasions approximately 24 h apart. Correlations revealed high relationships between measures of underwater weight (r = 0.99, SEE = 0.1399 kg) and body density (r = 0.98, SEE = 0.00244 g X cm-1). Results indicate that a computerized hydrostatic weighing system is a valid and reliable method for determining underwater weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Brunett, Acacia J.; Passerini, Stefano
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory (Argonne) participated in a two year collaboration to modernize and update the probabilistic risk assessment (PRA) for the PRISM sodium fast reactor. At a high level, the primary outcome of the project was the development of a next-generation PRA that is intended to enable risk-informed prioritization of safety- and reliability-focused research and development. A central Argonne task during this project was a reliability assessment of passive safety systems, which included the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedbacks of the metal fuel core. Both systems were examinedmore » utilizing a methodology derived from the Reliability Method for Passive Safety Functions (RMPS), with an emphasis on developing success criteria based on mechanistic system modeling while also maintaining consistency with the Fuel Damage Categories (FDCs) of the mechanistic source term assessment. This paper provides an overview of the reliability analyses of both systems, including highlights of the FMEAs, the construction of best-estimate models, uncertain parameter screening and propagation, and the quantification of system failure probability. In particular, special focus is given to the methodologies to perform the analysis of uncertainty propagation and the determination of the likelihood of violating FDC limits. Additionally, important lessons learned are also reviewed, such as optimal sampling methodologies for the discovery of low likelihood failure events and strategies for the combined treatment of aleatory and epistemic uncertainties.« less
LESS: Link Estimation with Sparse Sampling in Intertidal WSNs
Ji, Xiaoyu; Chen, Yi-chao; Li, Xiaopeng; Xu, Wenyuan
2018-01-01
Deploying wireless sensor networks (WSN) in the intertidal area is an effective approach for environmental monitoring. To sustain reliable data delivery in such a dynamic environment, a link quality estimation mechanism is crucial. However, our observations in two real WSN systems deployed in the intertidal areas reveal that link update in routing protocols often suffers from energy and bandwidth waste due to the frequent link quality measurement and updates. In this paper, we carefully investigate the network dynamics using real-world sensor network data and find it feasible to achieve accurate estimation of link quality using sparse sampling. We design and implement a compressive-sensing-based link quality estimation protocol, LESS, which incorporates both spatial and temporal characteristics of the system to aid the link update in routing protocols. We evaluate LESS in both real WSN systems and a large-scale simulation, and the results show that LESS can reduce energy and bandwidth consumption by up to 50% while still achieving more than 90% link quality estimation accuracy. PMID:29494557
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
Pillai, Nikhil; Craig, Morgan; Dokoumetzidis, Aristeidis; Schwartz, Sorell L; Bies, Robert; Freedman, Immanuel
2018-06-19
In mathematical pharmacology, models are constructed to confer a robust method for optimizing treatment. The predictive capability of pharmacological models depends heavily on the ability to track the system and to accurately determine parameters with reference to the sensitivity in projected outcomes. To closely track chaotic systems, one may choose to apply chaos synchronization. An advantageous byproduct of this methodology is the ability to quantify model parameters. In this paper, we illustrate the use of chaos synchronization combined with Nelder-Mead search to estimate parameters of the well-known Kirschner-Panetta model of IL-2 immunotherapy from noisy data. Chaos synchronization with Nelder-Mead search is shown to provide more accurate and reliable estimates than Nelder-Mead search based on an extended least squares (ELS) objective function. Our results underline the strength of this approach to parameter estimation and provide a broader framework of parameter identification for nonlinear models in pharmacology. Copyright © 2018 Elsevier Ltd. All rights reserved.
Measurement-based reliability/performability models
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen
1987-01-01
Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A.
2017-01-01
Abstract Study Objectives: To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test–retest reliability of sleep diary estimates of school night sleep across 12 weeks. Methods: Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test–retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Results: Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights was sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test–rest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. Conclusion: We recommend at least five weekday nights of sleep dairy entries to be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks. PMID:28199718
How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?
Short, Michelle A; Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A
2017-03-01
To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test-retest reliability of sleep diary estimates of school night sleep across 12 weeks. Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test-retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights was sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test-rest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. We recommend at least five weekday nights of sleep dairy entries to be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
A Meta-Analysis of Reliability Coefficients in Second Language Research
ERIC Educational Resources Information Center
Plonsky, Luke; Derrick, Deirdre J.
2016-01-01
Ensuring internal validity in quantitative research requires, among other conditions, reliable instrumentation. Unfortunately, however, second language (L2) researchers often fail to report and even more often fail to interpret reliability estimates beyond generic benchmarks for acceptability. As a means to guide interpretations of such estimates,…
Reliability Estimates for Undergraduate Grade Point Average
ERIC Educational Resources Information Center
Westrick, Paul A.
2017-01-01
Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…
Reliability of Test Scores in Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Sijtsma, Klaas; Molenaar, Ivo W.
1987-01-01
Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four "classical" lower bounds to reliability. (Author/JAZ)
IRT-Estimated Reliability for Tests Containing Mixed Item Formats
ERIC Educational Resources Information Center
Shu, Lianghua; Schwarz, Richard D.
2014-01-01
As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's a, Feldt-Raju, stratified a, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…
Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2012-01-01
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Today's launch vehicles complex electronic and avionics systems heavily utilize Field Programmable Gate Array (FPGA) integrated circuits (IC) for their superb speed and reconfiguration capabilities. Consequently, FPGAs are prevalent ICs in communication protocols such as MILSTD- 1553B and in control signal commands such as in solenoid valve actuations. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
Reliability verification of vehicle speed estimate method in forensic videos.
Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan
2018-06-01
In various types of traffic accidents, including car-to-car crash, vehicle-pedestrian collision, and hit-and-run accident, driver overspeed is one of the critical issues of traffic accident analysis. Hence, analysis of vehicle speed at the moment of accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) applying a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing the results obtained by applying the VSEM to videos from a test vehicle driving with a global positioning system (GPS)-based Vbox speed. The VSEM verified by these procedures was applied to real traffic accident examples to evaluate the usability of the VSEM. Copyright © 2018 Elsevier B.V. All rights reserved.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication.
Katagiri, Keita; Sato, Koya; Fujii, Takeo
2018-04-12
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation.
Crowdsourcing-Assisted Radio Environment Database for V2V Communication †
Katagiri, Keita; Fujii, Takeo
2018-01-01
In order to realize reliable Vehicle-to-Vehicle (V2V) communication systems for autonomous driving, the recognition of radio propagation becomes an important technology. However, in the current wireless distributed network systems, it is difficult to accurately estimate the radio propagation characteristics because of the locality of the radio propagation caused by surrounding buildings and geographical features. In this paper, we propose a measurement-based radio environment database for improving the accuracy of the radio environment estimation in the V2V communication systems. The database first gathers measurement datasets of the received signal strength indicator (RSSI) related to the transmission/reception locations from V2V systems. By using the datasets, the average received power maps linked with transmitter and receiver locations are generated. We have performed measurement campaigns of V2V communications in the real environment to observe RSSI for the database construction. Our results show that the proposed method has higher accuracy of the radio propagation estimation than the conventional path loss model-based estimation. PMID:29649174
Temporary Losses of Highway Capacity and Impacts on Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chin, S.M.
2002-07-31
Traffic congestion and its impacts significantly affect the nation's economic performance and the public's quality of life. In most urban areas, travel demand routinely exceeds highway capacity during peak periods. In addition, events such as crashes, vehicle breakdowns, work zones, adverse weather, and suboptimal signal timing cause temporary capacity losses, often worsening the conditions on already congested highway networks. The impacts of these temporary capacity losses include delay, reduced mobility, and reduced reliability of the highway system. They can also cause drivers to re-route or reschedule trips. Prior to this study, no nationwide estimates of temporary losses of highway capacitymore » had been made by type of capacity-reducing event. Such information is vital to formulating sound public policies for the highway infrastructure and its operation. This study is an initial attempt to provide nationwide estimates of the capacity losses and delay caused by temporary capacity-reducing events. The objective of this study was to develop and implement methods for producing national-level estimates of the loss of capacity on the nation's highway facilities due to temporary phenomena as well as estimates of the impacts of such losses. The estimates produced by this study roughly indicate the magnitude of problems that are likely be addressed by the Congress during the next re-authorization of the Surface Transportation Programs. The scope of the study includes all urban and rural freeways and principal arterials in the nation's highway system for 1999. Specifically, this study attempts to quantify the extent of temporary capacity losses due to crashes, breakdowns, work zones, weather, and sub-optimal signal timing. These events can cause impacts such as capacity reduction, delays, trip rescheduling, rerouting, reduced mobility, and reduced reliability. This study focuses on the reduction of capacity and resulting delays caused by the temporary events mentioned above. Impacts other than capacity losses and delay, such as re-routing, rescheduling, reduced mobility, and reduced reliability, are not covered in this phase of research.« less
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering into a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure to assess the vehicle safe functioning. In the reliability method, computing the maximum of LTR requires to predict the vehicle dynamics over the bend which can be in some cases an intractable problem or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute the vehicle dynamics and rollover models. This is done by using the SVM (Support Vector Machines) algorithm. The preliminary obtained results demonstrate the effectiveness of the proposed approach.
López-Pina, José Antonio; Sánchez-Meca, Julio; López-López, José Antonio; Marín-Martínez, Fulgencio; Núñez-Núñez, Rosa Ma; Rosa-Alcázar, Ana I; Gómez-Conesa, Antonia; Ferrer-Requena, Josefa
2015-01-01
The Yale-Brown Obsessive-Compulsive Scale for children and adolescents (CY-BOCS) is a frequently applied test to assess obsessive-compulsive symptoms. We conducted a reliability generalization meta-analysis on the CY-BOCS to estimate the average reliability, search for reliability moderators, and propose a predictive model that researchers and clinicians can use to estimate the expected reliability of the CY-BOCS scores. A total of 47 studies reporting a reliability coefficient with the data at hand were included in the meta-analysis. The results showed good reliability and a large variability associated to the standard deviation of total scores and sample size.
NASA Astrophysics Data System (ADS)
Ablay, Gunyaz
Using traditional control methods for controller design, parameter estimation and fault diagnosis may lead to poor results with nuclear systems in practice because of approximations and uncertainties in the system models used, possibly resulting in unexpected plant unavailability. This experience has led to an interest in development of robust control, estimation and fault diagnosis methods. One particularly robust approach is the sliding mode control methodology. Sliding mode approaches have been of great interest and importance in industry and engineering in the recent decades due to their potential for producing economic, safe and reliable designs. In order to utilize these advantages, sliding mode approaches are implemented for robust control, state estimation, secure communication and fault diagnosis in nuclear plant systems. In addition, a sliding mode output observer is developed for fault diagnosis in dynamical systems. To validate the effectiveness of the methodologies, several nuclear plant system models are considered for applications, including point reactor kinetics, xenon concentration dynamics, an uncertain pressurizer model, a U-tube steam generator model and a coupled nonlinear nuclear reactor model.
Mejia, Amanda F; Nebel, Mary Beth; Barber, Anita D; Choe, Ann S; Pekar, James J; Caffo, Brian S; Lindquist, Martin A
2018-05-15
Reliability of subject-level resting-state functional connectivity (FC) is determined in part by the statistical techniques employed in its estimation. Methods that pool information across subjects to inform estimation of subject-level effects (e.g., Bayesian approaches) have been shown to enhance reliability of subject-level FC. However, fully Bayesian approaches are computationally demanding, while empirical Bayesian approaches typically rely on using repeated measures to estimate the variance components in the model. Here, we avoid the need for repeated measures by proposing a novel measurement error model for FC describing the different sources of variance and error, which we use to perform empirical Bayes shrinkage of subject-level FC towards the group average. In addition, since the traditional intra-class correlation coefficient (ICC) is inappropriate for biased estimates, we propose a new reliability measure denoted the mean squared error intra-class correlation coefficient (ICC MSE ) to properly assess the reliability of the resulting (biased) estimates. We apply the proposed techniques to test-retest resting-state fMRI data on 461 subjects from the Human Connectome Project to estimate connectivity between 100 regions identified through independent components analysis (ICA). We consider both correlation and partial correlation as the measure of FC and assess the benefit of shrinkage for each measure, as well as the effects of scan duration. We find that shrinkage estimates of subject-level FC exhibit substantially greater reliability than traditional estimates across various scan durations, even for the most reliable connections and regardless of connectivity measure. Additionally, we find partial correlation reliability to be highly sensitive to the choice of penalty term, and to be generally worse than that of full correlations except for certain connections and a narrow range of penalty values. This suggests that the penalty needs to be chosen carefully when using partial correlations. Copyright © 2018. Published by Elsevier Inc.
Predictors of validity and reliability of a physical activity record in adolescents
2013-01-01
Background Poor to moderate validity of self-reported physical activity instruments is commonly observed in young people in low- and middle-income countries. However, the reasons for such low validity have not been examined in detail. We tested the validity of a self-administered daily physical activity record in adolescents and assessed if personal characteristics or the convenience level of reporting physical activity modified the validity estimates. Methods The study comprised a total of 302 adolescents from an urban and rural area in Ecuador. Validity was evaluated by comparing the record with accelerometer recordings for seven consecutive days. Test-retest reliability was examined by comparing registrations from two records administered three weeks apart. Time spent on sedentary (SED), low (LPA), moderate (MPA) and vigorous (VPA) intensity physical activity was estimated. Bland Altman plots were used to evaluate measurement agreement. We assessed if age, sex, urban or rural setting, anthropometry and convenience of completing the record explained differences in validity estimates using a linear mixed model. Results Although the record provided higher estimates for SED and VPA and lower estimates for LPA and MPA compared to the accelerometer, it showed an overall fair measurement agreement for validity. There was modest reliability for assessing physical activity in each intensity level. Validity was associated with adolescents’ personal characteristics: sex (SED: P = 0.007; LPA: P = 0.001; VPA: P = 0.009) and setting (LPA: P = 0.000; MPA: P = 0.047). Reliability was associated with the convenience of completing the physical activity record for LPA (low convenience: P = 0.014; high convenience: P = 0.045). Conclusions The physical activity record provided acceptable estimates for reliability and validity on a group level. Sex and setting were associated with validity estimates, whereas convenience to fill out the record was associated with better reliability estimates for LPA. This tendency of improved reliability estimates for adolescents reporting higher convenience merits further consideration. PMID:24289296
Nodal failure index approach to groundwater remediation design
Lee, J.; Reeves, H.W.; Dowding, C.H.
2008-01-01
Computer simulations often are used to design and to optimize groundwater remediation systems. We present a new computationally efficient approach that calculates the reliability of remedial design at every location in a model domain with a single simulation. The estimated reliability and other model information are used to select a best remedial option for given site conditions, conceptual model, and available data. To evaluate design performance, we introduce the nodal failure index (NFI) to determine the number of nodal locations at which the probability of success is below the design requirement. The strength of the NFI approach is that selected areas of interest can be specified for analysis and the best remedial design determined for this target region. An example application of the NFI approach using a hypothetical model shows how the spatial distribution of reliability can be used for a decision support system in groundwater remediation design. ?? 2008 ASCE.
Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan
2016-08-15
This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), which is an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting the TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN-static neural network; NARX network-dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten-year seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentration at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system for missing, hazardous or costly data of interest. Copyright © 2016 Elsevier B.V. All rights reserved.
An innovative localisation algorithm for railway vehicles
NASA Astrophysics Data System (ADS)
Allotta, B.; D'Adamio, P.; Malvezzi, M.; Pugi, L.; Ridolfi, A.; Rindi, A.; Vettori, G.
2014-11-01
In modern railway automatic train protection and automatic train control systems, odometry is a safety relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; a high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the odometry estimate accuracy, data fusion of different inputs coming from a redundant sensor layout may be used. The aim of this work has been developing an innovative localisation algorithm for railway vehicles able to enhance the performances, in terms of speed and position estimation accuracy, of the classical odometry algorithms, such as the Italian Sistema Controllo Marcia Treno (SCMT). The proposed strategy consists of a sensor fusion between the information coming from a tachometer and an Inertial Measurements Unit (IMU). The sensor outputs have been simulated through a 3D multibody model of a railway vehicle. The work has provided the development of a custom IMU, designed by ECM S.p.a, in order to meet their industrial and business requirements. The industrial requirements have to be compliant with the European Train Control System (ETCS) standards: the European Rail Traffic Management System (ERTMS), a project developed by the European Union to improve the interoperability among different countries, in particular as regards the train control and command systems, fixes some standard values for the odometric (ODO) performance, in terms of speed and travelled distance estimation. The reliability of the ODO estimation has to be taken into account basing on the allowed speed profiles. The results of the currently used ODO algorithms can be improved, especially in case of degraded adhesion conditions; it has been verified in the simulation environment that the results of the proposed localisation algorithm are always compliant with the ERTMS requirements. The estimation strategy has good performance also under degraded adhesion conditions and could be put on board of high-speed railway vehicles; it represents an accurate and reliable solution. The IMU board is tested via a dedicated Hardware in the Loop (HIL) test rig: it includes an industrial robot able to replicate the motion of the railway vehicle. Through the generated experimental outputs the performances of the innovative localisation algorithm have been evaluated: the HIL test rig permitted to test the proposed algorithm, avoiding expensive (in terms of time and cost) on-track tests, obtaining encouraging results. In fact, the preliminary results show a significant improvement of the position and speed estimation performances compared to those obtained with SCMT algorithms, currently in use on the Italian railway network.
Hall, Justin M; Azar, Frederick M; Miller, Robert H; Smith, Richard; Throckmorton, Thomas W
2014-09-01
We compared accuracy and reliability of a traditional method of measurement (most cephalad vertebral spinous process that can be reached by a patient with the extended thumb) to estimates made with the shoulder in abduction to determine if there were differences between the two methods. Six physicians with fellowship training in sports medicine or shoulder surgery estimated measurements in 48 healthy volunteers. Three were randomly chosen to make estimates of both internal rotation measurements for each volunteer. An independent observer made objective measurements on lateral scoliosis films (spinous process method) or with a goniometer (abduction method). Examiners were blinded to objective measurements as well as to previous estimates. Intraclass coefficients for interobserver reliability for the traditional method averaged 0.75, indicating good agreement among observers. The difference in vertebral level estimated by the examiner and the actual radiographic level averaged 1.8 levels. The intraclass coefficient for interobserver reliability for the abduction method averaged 0.81 for all examiners, indicating near-perfect agreement. Confidence intervals indicated that estimates were an average of 8° different from the objective goniometer measurements. Pearson correlation coefficients of intraobserver reliability for the abduction method averaged 0.94, indicating near-perfect agreement within observers. Confidence intervals demonstrated repeated estimates between 5° and 10° of the original. Internal rotation estimates made with the shoulder abducted demonstrated interobserver reliability superior to that of spinous process estimates, and reproducibility was high. On the basis of this finding, we now take glenohumeral internal rotation measurements with the shoulder in abduction and use a goniometer to maximize accuracy and objectivity. Copyright © 2014 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Bowman, Gene L.; Shannon, Jackilen; Ho, Emily; Traber, Maret G.; Frei, Balz; Oken, Barry S.; Kaye, Jeffery A.; Quinn, Joseph F.
2010-01-01
Introduction There is great interest in nutritional strategies for the prevention of age-related cognitive decline, yet the best methods for nutritional assessment in populations at risk for dementia are still evolving. Our study objective was to test the reliability and validity of two common nutritional assessments (plasma nutrient biomarkers and Food Frequency Questionnaire) in people at risk for dementia. Methods Thirty-eight elders, half with amnestic -Mild Cognitive Impairment and half with intact cognition were recruited. Nutritional assessments were collected together at baseline and again at 1 month. Intraclass and Pearson correlation coefficients quantified reliability and validity. Results Twenty-six nutrients were examined and reliability was very good or better for 77% (20/26, ICC ≥ .75) of the plasma nutrient biomarkers and for 88% of the FFQ estimates. Twelve of the plasma nutrient estimates were as reliable as the commonly measured plasma cholesterol (ICC=.92). FFQ and plasma long-chain fatty acids (docosahexaenoic acid, r =.39, eicosapentaenoic acid, r = .39) and carotenoids (α-carotene, r =.49; lutein + zeaxanthin, r = .48; β-carotene, r = .43; β-cryptoxanthin, r = .41) were correlated, but no other FFQ estimates correlated with respective nutrient biomarkers. Correlations between FFQ and plasma fatty acids and carotenoids were significantly stronger after removing subjects with MCI. Conclusion The reliability and validity of plasma and FFQ nutrient estimates vary according to the nutrient of interest. Memory deficit attenuates FFQ estimate validity and inflates FFQ estimate reliability. Many plasma nutrient biomarkers have very good reliability over 1-month regardless of memory state. This method can circumvent sources of error seen in other less direct methods of nutritional assessment. PMID:20856100
Xu, Fang; Wallace, Robyn C.; Garvin, William; Greenlund, Kurt J.; Bartoli, William; Ford, Derek; Eke, Paul; Town, G. Machell
2016-01-01
Public health researchers have used a class of statistical methods to calculate prevalence estimates for small geographic areas with few direct observations. Many researchers have used Behavioral Risk Factor Surveillance System (BRFSS) data as a basis for their models. The aims of this study were to 1) describe a new BRFSS small area estimation (SAE) method and 2) investigate the internal and external validity of the BRFSS SAEs it produced. The BRFSS SAE method uses 4 data sets (the BRFSS, the American Community Survey Public Use Microdata Sample, Nielsen Claritas population totals, and the Missouri Census Geographic Equivalency File) to build a single weighted data set. Our findings indicate that internal and external validity tests were successful across many estimates. The BRFSS SAE method is one of several methods that can be used to produce reliable prevalence estimates in small geographic areas. PMID:27418213
Attitude and Trajectory Estimation Using Earth Magnetic Field Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack Y.
1996-01-01
The magnetometer has long been a reliable, inexpensive sensor used in spacecraft momentum management and attitude estimation. Recent studies show an increased accuracy potential for magnetometer-only attitude estimation systems. Since the Earth's magnetic field is a function of time and position, and since time is known quite precisely, the differences between the computer and measured magnetic field components, as measured by the magnetometers throughout the entire spacecraft orbit, are a function of both the spacecraft trajectory and attitude errors. Therefore, these errors can be used to estimate both trajectory and attitude. Traditionally, satellite attitude and trajectory have been estimated with completely separate system, using different measurement data. Recently, trajectory estimation for low earth orbit satellites was successfully demonstrated in ground software using only magnetometer data. This work proposes a single augmented extended Kalman Filter to simultaneously and autonomously estimate both spacecraft trajectory and attitude with data from a magnetometer and either dynamically determined rates or gyro-measured body rates.
Gizaw, Solomon; Goshme, Shenkute; Getachew, Tesfaye; Haile, Aynalem; Rischkowsky, Barbara; van Arendonk, Johan; Valle-Zárate, Anne; Dessie, Tadelle; Mwai, Ally Okeyo
2014-06-01
Pedigree recording and genetic selection in village flocks of smallholder farmers have been deemed infeasible by researchers and development workers. This is mainly due to the difficulty of sire identification under uncontrolled village breeding practices. A cooperative village sheep-breeding scheme was designed to achieve controlled breeding and implemented for Menz sheep of Ethiopia in 2009. In this paper, we evaluated the reliability of pedigree recording in village flocks by comparing genetic parameters estimated from data sets collected in the cooperative village and in a nucleus flock maintained under controlled breeding. Effectiveness of selection in the cooperative village was evaluated based on trends in breeding values over generations. Heritability estimates for 6-month weight recorded in the village and the nucleus flock were very similar. There was an increasing trend over generations in average estimated breeding values for 6-month weight in the village flocks. These results have a number of implications: the pedigree recorded in the village flocks was reliable; genetic parameters, which have so far been estimated based on nucleus data sets, can be estimated based on village recording; and appreciable genetic improvement could be achieved in village sheep selection programs under low-input smallholder farming systems.
A feasibility study for measuring stratospheric turbulence using metrac positioning system
NASA Technical Reports Server (NTRS)
Gage, K. S.; Jasperson, W. H.
1975-01-01
The feasibility of obtaining measurements of Lagrangian turbulence at stratospheric altitudes is demonstrated by using the METRAC System to track constant-level balloons. The basis for current estimates of diffusion coefficients are reviewed and it is pointed out that insufficient data is available upon which to base reliable estimates of vertical diffusion coefficients. It is concluded that diffusion coefficients could be directly obtained from Lagrangian turbulence measurements. The METRAC balloon tracking system is shown to possess the necessary precision in order to resolve the response of constant-level balloons to turbulence at stratospheric altitudes. A small sample of data recorded from a tropospheric tetroon flight tracked by the METRAC System is analyzed to obtain estimates of small-scale three-dimensional diffusion coefficients. It is recommended that this technique be employed to establish a climatology of diffusion coefficients and to ascertain the variation of these coefficients with altitude, season, and latitude.
OSA severity assessment based on sleep breathing analysis using ambient microphone.
Dafna, E; Tarasiuk, A; Zigel, Y
2013-01-01
In this paper, an audio-based system for severity estimation of obstructive sleep apnea (OSA) is proposed. The system estimates the apnea-hypopnea index (AHI), which is the average number of apneic events per hour of sleep. This system is based on a Gaussian mixture regression algorithm that was trained and validated on full-night audio recordings. Feature selection process using a genetic algorithm was applied to select the best features extracted from time and spectra domains. A total of 155 subjects, referred to in-laboratory polysomnography (PSG) study, were recruited. Using the PSG's AHI score as a gold-standard, the performances of the proposed system were evaluated using a Pearson correlation, AHI error, and diagnostic agreement methods. Correlation of R=0.89, AHI error of 7.35 events/hr, and diagnostic agreement of 77.3% were achieved, showing encouraging performances and a reliable non-contact alternative method for OSA severity estimation.
Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...
Extant process-based hydrologic and water quality models are indispensable to water resources planning and environmental management. However, models are only approximations of real systems and often calibrated with incomplete and uncertain data. Reliable estimates, or perhaps f...
Estimating the Reliability of the CITAR Computer Courseware Evaluation System.
ERIC Educational Resources Information Center
Micceri, Theodore
In today's complex computer-based teaching (CBT)/computer-assisted instruction market, flashy presentations frequently prove the most important purchasing element, while instructional design and content are secondary to form. Courseware purchasers must base decisions upon either a vendor's presentation or some published evaluator rating.…
Multi-Sensor Improved Sea Surface Temperature (MISST) for GODAE
2008-01-01
its methodology to add 3 retrieval error information to the US Navy operational data stream. Quantitative estimates of reliability are added to...hycom.rsmas.miami.edu/ “ POSITIV : Prototype Operational System – ISAR – Temperature Instrumentation for the VOS fleet” CIRA/CSU Joint Hurricane Testbed project
Reliability of TMS phosphene threshold estimation: Toward a standardized protocol.
Mazzi, Chiara; Savazzi, Silvia; Abrahamyan, Arman; Ruzzoli, Manuela
Phosphenes induced by transcranial magnetic stimulation (TMS) are a subjectively described visual phenomenon employed in basic and clinical research as index of the excitability of retinotopically organized areas in the brain. Phosphene threshold estimation is a preliminary step in many TMS experiments in visual cognition for setting the appropriate level of TMS doses; however, the lack of a direct comparison of the available methods for phosphene threshold estimation leaves unsolved the reliability of those methods in setting TMS doses. The present work aims at fulfilling this gap. We compared the most common methods for phosphene threshold calculation, namely the Method of Constant Stimuli (MOCS), the Modified Binary Search (MOBS) and the Rapid Estimation of Phosphene Threshold (REPT). In two experiments we tested the reliability of PT estimation under each of the three methods, considering the day of administration, participants' expertise in phosphene perception and the sensitivity of each method to the initial values used for the threshold calculation. We found that MOCS and REPT have comparable reliability when estimating phosphene thresholds, while MOBS estimations appear less stable. Based on our results, researchers and clinicians can estimate phosphene threshold according to MOCS or REPT equally reliably, depending on their specific investigation goals. We suggest several important factors for consideration when calculating phosphene thresholds and describe strategies to adopt in experimental procedures. Copyright © 2017 Elsevier Inc. All rights reserved.
Pupillary transient responses to within-task cognitive load variation.
Wong, Hoe Kin; Epps, Julien
2016-12-01
Changes in physiological signals due to task evoked cognitive load have been reported extensively. However, pupil size based approaches for estimating cognitive load on a moment-to-moment basis are not as well understood as estimating cognitive load on a task-to-task basis, despite the appeal these approaches have for continuous load estimation. In particular, the pupillary transient response to instantaneous changes in induced load has not been experimentally quantified, and the within-task changes in pupil dilation have not been investigated in a manner that allows their consistency to be quantified with a view to biomedical system design. In this paper, a variation of the digit span task is developed which reliably induces rapid changes of cognitive load to generate task-evoked pupillary responses (TEPRs) associated with large, within-task load changes. Linear modelling and one-way ANOVA reveals that increasing the rate of cognitive loading, while keeping task demands constant, results in a steeper pupillary response. Instantaneous drops in cognitive load are shown to produce statistically significantly different transient pupillary responses relative to sustained load, and when characterised using an exponential decay response, the task-evoked pupillary response time constant is in the order of 1-5 s. Within-task test-retest analysis confirms the reliability of the moment-to-moment measurements. Based on these results, estimates of pupil diameter can be employed with considerably more confidence in moment-to-moment cognitive load estimation systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Reliability of the Cooking Task in adults with acquired brain injury.
Poncet, Frédérique; Swaine, Bonnie; Taillefer, Chantal; Lamoureux, Julie; Pradat-Diehl, Pascale; Chevignard, Mathilde
2015-01-01
Acquired brain injury (ABI) often leads to deficits in executive functioning (EF) responsible for severe and long-standing disabilities in daily life activities. The Cooking Task is an ecological and valid test of EF involving multi-tasking in a real environment. Given its complex scoring system, it is important to establish the tool's reliability. The objective of the study was to examine the reliability of the Cooking Task (internal consistency, inter-rater and test-retest reliability). A total of 160 patients with ABI (113 men, mean age 37 years, SD = 14.3) were tested using the Cooking Task. For test-retest reliability, patients were assessed by the same rater on two occasions (mean interval 11 days) while two raters independently and simultaneously observed and scored patients' performances to estimate inter-rater reliability. Internal consistency was high for the global scale (Cronbach α = .74). Inter-rater reliability (n = 66) for total errors was also high (ICC = .93), however the test-retest reliability (n = 11) was poor (ICC = .36). In general the Cooking Task appears to be a reliable tool. The low test-retest results were expected given the importance of EF in the performance of novel tasks.
NASA Astrophysics Data System (ADS)
Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa
2017-05-01
This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.
Processes and Procedures for Estimating Score Reliability and Precision
ERIC Educational Resources Information Center
Bardhoshi, Gerta; Erford, Bradley T.
2017-01-01
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, a), test-retest, alternate forms, interscorer, and…
A Latent Class Approach to Estimating Test-Score Reliability
ERIC Educational Resources Information Center
van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas
2011-01-01
This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
ERIC Educational Resources Information Center
Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.
2012-01-01
This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
Use of remote sensing for land use policy formulation
NASA Technical Reports Server (NTRS)
1981-01-01
Progress in studies for using remotely sensed data for assessing crop stress and in crop estimation is reported. The estimation of acreage of small forested areas in the southern lower peninsula of Michigan using LANDSAT data is evaluated. Damage to small grains caused by the cereal leaf beetle was assessed through remote sensing. The remote detection of X-disease of peach and cherry trees and of fire blight of pear and apple trees was investigated. The reliability of improving on standard methods of crop production estimation was demonstrated. Areas of virus infestation in vineyards and blueberry fields in western and southwestern Michigan were identified. The installation and systems integration of a microcomputer system for processing and making available remotely sensed data are described.
Metallicities of Galaxies in the Local Universe
NASA Astrophysics Data System (ADS)
Hirschauer, Alec Seth
2018-01-01
The degree of heavy-element enrichment for star-forming galaxies in the universe is a fundamental astrophysical characteristic which traces the amount of stellar nucleosynthesis undertaken by the constituent population of stars. Estimating this quantity via the so-called "direct-method" is observationally challenging and requires measurement of intrinsically weak temperature-sensitive nebular emission lines, however these are typically not found for galaxies unless their emission lines are exceptionally bright. Metal abundances ("metallicities") must then therefore be estimated by empirical means utilizing ratios of strong emission lines, calibrated to sources of known abundance and/or theoretical models, which are measurable in essentially any nebular spectrum of a star-forming system. Relationships concerning metallicities in galaxies such as the luminosity-metallicity and mass-metallicity are critically dependent upon reliable estimations of abundances. Therefore, having a reliable observational constraint is paramount to developing models which accurately reflect the universe. This dissertation presentation explores metallicities for galaxies in the local universe through a variety of means. First, an attempt is made to improve calibrations of empirical relationships for estimating abundances for star-forming galaxies at high-metallicities, finding some intrinsic shortcomings but also revealing some interesting new findings regarding the computation of the electron gas of star-forming systems, as well as detecting some anomalously under-abundant, overly-luminous galaxies. Second, the development of a self-consistent scale for estimating metallicities allows for the creation of luminosity-metallicity and mass-metallicity relations for a statistically representative sample of star-forming galaxies in the local universe. Finally, a discovery is made of an extremely metal-poor star-forming galaxy, which opens the possibility to find more similar systems and to better understand star-formation in exceptionally low-abundance environments.
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
NASA Technical Reports Server (NTRS)
Al Hassan, Mohammad; Britton, Paul; Hatfield, Glen Spencer; Novack, Steven D.
2017-01-01
Field Programmable Gate Arrays (FPGAs) integrated circuits (IC) are one of the key electronic components in today's sophisticated launch and space vehicle complex avionic systems, largely due to their superb reprogrammable and reconfigurable capabilities combined with relatively low non-recurring engineering costs (NRE) and short design cycle. Consequently, FPGAs are prevalent ICs in communication protocols and control signal commands. This paper will identify reliability concerns and high level guidelines to estimate FPGA total failure rates in a launch vehicle application. The paper will discuss hardware, hardware description language, and radiation induced failures. The hardware contribution of the approach accounts for physical failures of the IC. The hardware description language portion will discuss the high level FPGA programming languages and software/code reliability growth. The radiation portion will discuss FPGA susceptibility to space environment radiation.
Wang, Jun; Zhou, Bihua; Zhou, Shudao
2016-01-01
This paper proposes an improved cuckoo search (ICS) algorithm to establish the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, the orthogonal design and simulated annealing operation are incorporated in the CS algorithm to enhance the exploitation search ability. Then the proposed algorithm is used to establish parameters of the Lorenz chaotic system and Chen chaotic system under the noiseless and noise condition, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, genetic algorithm, and particle swarm optimization algorithm, and the compared results demonstrate the method is energy-efficient and superior. PMID:26880874
Internal Consistency, Retest Reliability, and their Implications For Personality Scale Validity
McCrae, Robert R.; Kurtz, John E.; Yamagata, Shinji; Terracciano, Antonio
2010-01-01
We examined data (N = 34,108) on the differential reliability and validity of facet scales from the NEO Inventories. We evaluated the extent to which (a) psychometric properties of facet scales are generalizable across ages, cultures, and methods of measurement; and (b) validity criteria are associated with different forms of reliability. Composite estimates of facet scale stability, heritability, and cross-observer validity were broadly generalizable. Two estimates of retest reliability were independent predictors of the three validity criteria; none of three estimates of internal consistency was. Available evidence suggests the same pattern of results for other personality inventories. Internal consistency of scales can be useful as a check on data quality, but appears to be of limited utility for evaluating the potential validity of developed scales, and it should not be used as a substitute for retest reliability. Further research on the nature and determinants of retest reliability is needed. PMID:20435807
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
Operational Challenges In TDRS Post-Maneuver Orbit Determination
NASA Technical Reports Server (NTRS)
Laing, Jason; Myers, Jessica; Ward, Douglas; Lamb, Rivers
2015-01-01
The GSFC Flight Dynamics Facility (FDF) is responsible for daily and post maneuver orbit determination for the Tracking and Data Relay Satellite System (TDRSS). The most stringent requirement for this orbit determination is 75 meters total position accuracy (3-sigma) predicted over one day for Terra's onboard navigation system. To maintain an accurate solution onboard Terra, a solution is generated and provided by the FDF Four hours after a TDRS maneuver. A number of factors present challenges to this support, such as maneuver prediction uncertainty and potentially unreliable tracking from User satellities. Reliable support is provided by comparing an extended Kalman Filter (estimated using ODTK) against a Batch Least Squares system (estimated using GTDS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vanderwiel, Scott A; Wilson, Alyson G; Graves, Todd L
Both the U. S. Department of Defense (DoD) and Department of Energy (DOE) maintain weapons stockpiles: items like bullets, missiles and bombs that have already been produced and are being stored until needed. Ideally, these stockpiles maintain high reliability over time. To assess reliability, a surveillance program is implemented, where units are periodically removed from the stockpile and tested. The most definitive tests typically destroy the weapons so a given unit is tested only once. Surveillance managers need to decide how many units should be tested, how often they should be tested, what tests should be done, and how themore » resulting data are used to estimate the stockpile's current and future reliability. These issues are particularly critical from a planning perspective: given what has already been observed and our understanding of the mechanisms of stockpile aging, what is an appropriate and cost-effective surveillance program? Surveillance programs are costly, broad, and deep, especially in the DOE, where the US nuclear weapons surveillance program must 'ensure, through various tests, that the reliability of nuclear weapons is maintained' in the absence of full-system testing (General Accounting Office, 1996). The DOE program consists primarily of three types of tests: nonnuclear flight tests, that involve the actual dropping or launching of a weapon from which the nuclear components have been removed; and nonnuclear and nuclear systems laboratory tests, which detect defects due to aging, manufacturing, and design of the nonnuclear and nuclear portions of the weapons. Fully integrated analysis of the suite of nuclear weapons surveillance data is an ongoing area of research (Wilson et al., 2007). This paper introduces a simple model that captures high-level features of stockpile reliability over time and can be used to answer broad policy questions about surveillance programs. Our intention is to provide a framework that generates tractable answers that integrate expert knowledge and high-level summaries of surveillance data to allow decision-making about appropriate trade-offs between the cost of data and the precision of stockpile reliability estimates.« less
Predicting Cost/Reliability/Maintainability of Advanced General Aviation Avionics Equipment
NASA Technical Reports Server (NTRS)
Davis, M. R.; Kamins, M.; Mooz, W. E.
1978-01-01
A methodology is provided for assisting NASA in estimating the cost, reliability, and maintenance (CRM) requirements for general avionics equipment operating in the 1980's. Practical problems of predicting these factors are examined. The usefulness and short comings of different approaches for modeling coast and reliability estimates are discussed together with special problems caused by the lack of historical data on the cost of maintaining general aviation avionics. Suggestions are offered on how NASA might proceed in assessing cost reliability CRM implications in the absence of reliable generalized predictive models.
APPLICATION OF TRAVEL TIME RELIABILITY FOR PERFORMANCE ORIENTED OPERATIONAL PLANNING OF EXPRESSWAYS
NASA Astrophysics Data System (ADS)
Mehran, Babak; Nakamura, Hideki
Evaluation of impacts of congestion improvement scheme s on travel time reliability is very significant for road authorities since travel time reliability repr esents operational performance of expressway segments. In this paper, a methodology is presented to estimate travel tim e reliability prior to implementation of congestion relief schemes based on travel time variation modeling as a function of demand, capacity, weather conditions and road accident s. For subject expressway segmen ts, traffic conditions are modeled over a whole year considering demand and capacity as random variables. Patterns of demand and capacity are generated for each five minute interval by appl ying Monte-Carlo simulation technique, and accidents are randomly generated based on a model that links acci dent rate to traffic conditions. A whole year analysis is performed by comparing de mand and available capacity for each scenario and queue length is estimated through shockwave analysis for each time in terval. Travel times are estimated from refined speed-flow relationships developed for intercity expressways and buffer time index is estimated consequently as a measure of travel time reliability. For validation, estimated reliability indices are compared with measured values from empirical data, and it is shown that the proposed method is suitable for operational evaluation and planning purposes.
NASA Technical Reports Server (NTRS)
Unal, Resit; Morris, W. Douglas; White, Nancy H.; Lepsch, Roger A.; Brown, Richard W.
2000-01-01
This paper describes the development of parametric models for estimating operational reliability and maintainability (R&M) characteristics for reusable vehicle concepts, based on vehicle size and technology support level. A R&M analysis tool (RMAT) and response surface methods are utilized to build parametric approximation models for rapidly estimating operational R&M characteristics such as mission completion reliability. These models that approximate RMAT, can then be utilized for fast analysis of operational requirements, for lifecycle cost estimating and for multidisciplinary sign optimization.
NASA Astrophysics Data System (ADS)
Mohammed, Amal A.; Abraheem, Sudad K.; Fezaa Al-Obedy, Nadia J.
2018-05-01
In this paper is considered with Burr type XII distribution. The maximum likelihood, Bayes methods of estimation are used for estimating the unknown scale parameter (α). Al-Bayyatis’ loss function and suggest loss function are used to find the reliability with the least loss. So the reliability function is expanded in terms of a set of power function. For this performance, the Matlab (ver.9) is used in computations and some examples are given.
Ocean color - Availability of the global data set
NASA Technical Reports Server (NTRS)
Feldman, Gene; Kuring, Norman; Ng, Carolyn; Esaias, Wayne; Mcclain, Chuck; Elrod, Jane; Maynard, Nancy; Endres, Dan
1989-01-01
The use of satellite observations of ocean color to provide reliable estimates of marine phytoplankton biomass on synoptic scales is examined. An overview is given of the Coastal Zone Color Scanner data processing system. The archiving and distribution of ocean color data are discussed, and NASA-sponsored archive sites are listed.
A USEPA-sponsored field demonstration program was conducted to gather technically reliable cost and performance information on the electro-scan (FELL -41) pipeline condition assessment technology. Electro-scan technology can be used to estimate the magnitude and location of pote...
Modelling topographic potential for erosion and deposition using GIS
Helena Mitasova; Louis R. Iverson
1996-01-01
Modelling of erosion and deposition in complex terrain within a geographical information system (GIS) requires a high resolution digital elevation model (DEM), reliable estimation of topographic parameters, and formulation of erosion models adequate for digital representation of spatially distributed parameters. Regularized spline with tension was integrated within a...
DOT National Transportation Integrated Search
2002-03-01
In a simulated yoke study, estimates of roadway travel times are archived from web-based Advanced Traveler Information Systems (ATIS) and used to recreate hypothetical, retrospective paired driving trials between travelers with and without ATIS. Prev...
On Quality and Measures in Software Engineering
ERIC Educational Resources Information Center
Bucur, Ion I.
2006-01-01
Complexity measures are mainly used to estimate vital information about reliability and maintainability of software systems from regular analysis of the source code. Such measures also provide constant feedback during a software project to assist the control of the development procedure. There exist several models to classify a software product's…
Takahashi, Chie; Watt, Simon J.
2014-01-01
When we hold an object while looking at it, estimates from visual and haptic cues to size are combined in a statistically optimal fashion, whereby the “weight” given to each signal reflects their relative reliabilities. This allows object properties to be estimated more precisely than would otherwise be possible. Tools such as pliers and tongs systematically perturb the mapping between object size and the hand opening. This could complicate visual-haptic integration because it may alter the reliability of the haptic signal, thereby disrupting the determination of appropriate signal weights. To investigate this we first measured the reliability of haptic size estimates made with virtual pliers-like tools (created using a stereoscopic display and force-feedback robots) with different “gains” between hand opening and object size. Haptic reliability in tool use was straightforwardly determined by a combination of sensitivity to changes in hand opening and the effects of tool geometry. The precise pattern of sensitivity to hand opening, which violated Weber's law, meant that haptic reliability changed with tool gain. We then examined whether the visuo-motor system accounts for these reliability changes. We measured the weight given to visual and haptic stimuli when both were available, again with different tool gains, by measuring the perceived size of stimuli in which visual and haptic sizes were varied independently. The weight given to each sensory cue changed with tool gain in a manner that closely resembled the predictions of optimal sensory integration. The results are consistent with the idea that different tool geometries are modeled by the brain, allowing it to calculate not only the distal properties of objects felt with tools, but also the certainty with which those properties are known. These findings highlight the flexibility of human sensory integration and tool-use, and potentially provide an approach for optimizing the design of visual-haptic devices. PMID:24592245
Reliability analysis of the F-8 digital fly-by-wire system
NASA Technical Reports Server (NTRS)
Brock, L. D.; Goodman, H. A.
1981-01-01
The F-8 Digital Fly-by-Wire (DFBW) flight test program intended to provide the technology for advanced control systems, giving aircraft enhanced performance and operational capability is addressed. A detailed analysis of the experimental system was performed to estimated the probabilities of two significant safety critical events: (1) loss of primary flight control function, causing reversion to the analog bypass system; and (2) loss of the aircraft due to failure of the electronic flight control system. The analysis covers appraisal of risks due to random equipment failure, generic faults in design of the system or its software, and induced failure due to external events. A unique diagrammatic technique was developed which details the combinatorial reliability equations for the entire system, promotes understanding of system failure characteristics, and identifies the most likely failure modes. The technique provides a systematic method of applying basic probability equations and is augmented by a computer program written in a modular fashion that duplicates the structure of these equations.
Loeding, B L; Greenan, J P
1998-12-01
The study examined the validity and reliability of four assessments, with three instruments per domain. Domains included generalizable mathematics, communication, interpersonal relations, and reasoning skills. Participants were deaf, legally blind, or visually impaired students enrolled in vocational classes at residential secondary schools. The researchers estimated the internal consistency reliability, test-retest reliability, and construct validity correlations of three subinstruments: student self-ratings, teacher ratings, and performance assessments. The data suggest that these instruments are highly internally consistent measures of generalizable vocational skills. Four performance assessments have high-to-moderate test-retest reliability estimates, and were generally considered to possess acceptable validity and reliability.
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N.Eva; Klir, George J.
1998-01-01
In this paper the notion of normalized nonspecificity is introduced. The nonspecifity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called a reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control, as well as diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
Reliability of the Raven Coloured Progressive Matrices for Anglo and for Mexican-American Children.
ERIC Educational Resources Information Center
Valencia, Richard R.
1984-01-01
Investigated the internal consistency reliability estimates of the Raven Coloured Progressive Matrices (CPM) for 96 Anglo and Mexican American third-grade boys from low socioeconomic status background. The results showed that the reliability estimates of the CPM for the two ethnic groups were acceptably high and extremely similar in magnitude.…
Interval Estimation of Revision Effect on Scale Reliability via Covariance Structure Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2009-01-01
A didactic discussion of a procedure for interval estimation of change in scale reliability due to revision is provided, which is developed within the framework of covariance structure modeling. The method yields ranges of plausible values for the population gain or loss in reliability of unidimensional composites, which results from deletion or…
Parrett, Charles; Johnson, D.R.; Hull, J.A.
1989-01-01
Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined the standard errors ranged from 24% to 63 %. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimate were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
Structural Reliability Using Probability Density Estimation Methods Within NESSUS
NASA Technical Reports Server (NTRS)
Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric
2003-01-01
A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as monte carlo (MC) and latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been proposed by the Society of Automotive Engineers (SAE). The test cases compare different probabilistic methods within NESSUS because it is important that a user can have confidence that estimates of stochastic parameters of a response will be within an acceptable error limit. For each response, the mean, standard deviation, and 0.99 percentile, are repeatedly estimated which allows confidence statements to be made for each parameter estimated, and for each method. Thus, the ability of several stochastic methods to efficiently and accurately estimate density parameters is compared using four valid test cases. While all of the reliability methods used performed quite well, for the new LHS module within NESSUS it was found that it had a lower estimation error than MC when they were used to estimate the mean, standard deviation, and 0.99 percentile of the four different stochastic responses. Also, LHS required a smaller amount of calculations to obtain low error answers with a high amount of confidence than MC. It can therefore be stated that NESSUS is an important reliability tool that has a variety of sound probabilistic methods a user can employ and the newest LHS module is a valuable new enhancement of the program.
D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.
1996-01-01
The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method. PMID:28825684
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-08-21
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according the predicted image motion are accessed using a catalog partition table to speed up the tracking, called star mapping. Software simulation and night-sky experiment are performed to validate the efficiency and reliability of the proposed method.
Validity and Reliability of Assessing Body Composition Using a Mobile Application.
Macdonald, Elizabeth Z; Vehrs, Pat R; Fellingham, Gilbert W; Eggett, Dennis; George, James D; Hager, Ronald
2017-12-01
The purpose of this study was to determine the validity and reliability of the LeanScreen (LS) mobile application that estimates percent body fat (%BF) using estimates of circumferences from photographs. The %BF of 148 weight-stable adults was estimated once using dual-energy x-ray absorptiometry (DXA). Each of two administrators assessed the %BF of each subject twice using the LS app and manually measured circumferences. A mixed-model ANOVA and Bland-Altman analyses were used to compare the estimates of %BF obtained from each method. Interrater and intrarater reliabilities values were determined using multiple measurements taken by each of the two administrators. The LS app and manually measured circumferences significantly underestimated (P < 0.05) the %BF determined using DXA by an average of -3.26 and -4.82 %BF, respectively. The LS app (6.99 %BF) and manually measured circumferences (6.76 %BF) had large limits of agreement. All interrater and intrarater reliability coefficients of estimates of %BF using the LS app and manually measured circumferences exceeded 0.99. The estimates of %BF from manually measured circumferences and the LS app were highly reliable. However, these field measures are not currently recommended for the assessment of body composition because of significant bias and large limits of agreements.
Auditing of suppliers as the requirement of quality management systems in construction
NASA Astrophysics Data System (ADS)
Harasymiuk, Jolanta; Barski, Janusz
2017-07-01
The choice of a supplier of construction materials can be important factor of increase or reduction of building works costs. Construction materials present from 40 for 70% of investment task depending on kind of works being provided for realization. There is necessity of estimate of suppliers from the point of view of effectiveness of construction undertaking and necessity from the point of view of conformity of taken operation by executives of construction job and objects within the confines of systems of managements quality being initiated in their organizations. The estimate of suppliers of construction materials and subexecutives of special works is formal requirement in quality management systems, which meets the requirements of the ISO 9001 standard. The aim of this paper is to show possibilities of making use of anaudit for estimate of credibility and reliability of the supplier of construction materials. The article describes kinds of audits, that were carried in quality management systems, with particular taking into consideration audits called as second-site. One characterizes the estimate criterions of qualitative ability and method of choice of the supplier of construction materials. The paper shows also propositions of exemplary questions, that would be estimated in audit process, the way of conducting of this estimate and conditionality of estimate.
NASA Astrophysics Data System (ADS)
Holmgren, J.; Tulldahl, H. M.; Nordlöf, J.; Nyström, M.; Olofsson, K.; Rydell, J.; Willén, E.
2017-10-01
A system was developed for automatic estimations of tree positions and stem diameters. The sensor trajectory was first estimated using a positioning system that consists of a low precision inertial measurement unit supported by image matching with data from a stereo-camera. The initial estimation of the sensor trajectory was then calibrated by adjustments of the sensor pose using the laser scanner data. Special features suitable for forest environments were used to solve the correspondence and matching problems. Tree stem diameters were estimated for stem sections using laser data from individual scanner rotations and were then used for calibration of the sensor pose. A segmentation algorithm was used to associate stem sections to individual tree stems. The stem diameter estimates of all stem sections associated to the same tree stem were then combined for estimation of stem diameter at breast height (DBH). The system was validated on four 20 m radius circular plots and manual measured trees were automatically linked to trees detected in laser data. The DBH could be estimated with a RMSE of 19 mm (6 %) and a bias of 8 mm (3 %). The calibrated sensor trajectory and the combined use of circle fits from individual scanner rotations made it possible to obtain reliable DBH estimates also with a low precision positioning system.
Markov chains for testing redundant software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1988-01-01
A preliminary design for a validation experiment has been developed that addresses several problems unique to assuring the extremely high quality of multiple-version programs in process-control software. The procedure uses Markov chains to model the error states of the multiple version programs. The programs are observed during simulated process-control testing, and estimates are obtained for the transition probabilities between the states of the Markov chain. The experimental Markov chain model is then expanded into a reliability model that takes into account the inertia of the system being controlled. The reliability of the multiple version software is computed from this reliability model at a given confidence level using confidence intervals obtained for the transition probabilities during the experiment. An example demonstrating the method is provided.
Reliability of the Language ENvironment Analysis system (LENA™) in European French.
Canault, Mélanie; Le Normand, Marie-Thérèse; Foudil, Samy; Loundon, Natalie; Thai-Van, Hung
2016-09-01
In this study, we examined the accuracy of the Language ENvironment Analysis (LENA) system in European French. LENA is a digital recording device with software that facilitates the collection and analysis of audio recordings from young children, providing automated measures of the speech overheard and produced by the child. Eighteen native French-speaking children, who were divided into six age groups ranging from 3 to 48 months old, were recorded about 10-16 h per day, three days a week. A total of 324 samples (six 10-min chunks of recordings) were selected and then transcribed according to the CHAT format. Simple and mixed linear models between the LENA and human adult word count (AWC) and child vocalization count (CVC) estimates were performed, to determine to what extent the automatic and the human methods agreed. Both the AWC and CVC estimates were very reliable (r = .64 and .71, respectively) for the 324 samples. When controlling the random factors of participants and recordings, 1 h was sufficient to obtain a reliable sample. It was, however, found that two age groups (7-12 months and 13-18 months) had a significant effect on the AWC data and that the second day of recording had a significant effect on the CVC data. When noise-related factors were added to the model, only a significant effect of signal-to-noise ratio was found on the AWC data. All of these findings and their clinical implications are discussed, providing strong support for the reliability of LENA in French.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity-and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes-suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
Lunar Regenerative Fuel Cell (RFC) Reliability Testing for Assured Mission Success
NASA Technical Reports Server (NTRS)
Bents, David J.
2009-01-01
NASA's Constellation program has selected the closed cycle hydrogen oxygen Polymer Electrolyte Membrane (PEM) Regenerative Fuel Cell (RFC) as its baseline solar energy storage system for the lunar outpost and manned rover vehicles. Since the outpost and manned rovers are "human-rated," these energy storage systems will have to be of proven reliability exceeding 99 percent over the length of the mission. Because of the low (TRL=5) development state of the closed cycle hydrogen oxygen PEM RFC at present, and because there is no equivalent technology base in the commercial sector from which to draw or infer reliability information from, NASA will have to spend significant resources developing this technology from TRL 5 to TRL 9, and will have to embark upon an ambitious reliability development program to make this technology ready for a manned mission. Because NASA would be the first user of this new technology, NASA will likely have to bear all the costs associated with its development.When well-known reliability estimation techniques are applied to the hydrogen oxygen RFC to determine the amount of testing that will be required to assure RFC unit reliability over life of the mission, the analysis indicates the reliability testing phase by itself will take at least 2 yr, and could take up to 6 yr depending on the number of QA units that are built and tested and the individual unit reliability that is desired. The cost and schedule impacts of reliability development need to be considered in NASA's Exploration Technology Development Program (ETDP) plans, since life cycle testing to build meaningful reliability data is the only way to assure "return to the moon, this time to stay, then on to Mars" mission success.
Quantitative mapping of rainfall rates over the oceans utilizing Nimbus-5 ESMR data
NASA Technical Reports Server (NTRS)
Rao, M. S. V.; Abbott, W. V.
1976-01-01
The electrically scanning microwave radiometer (ESMR) data from the Nimbus 5 satellite was used to deduce estimates of precipitation amount over the oceans. An atlas of the global oceanic rainfall was prepared and the global rainfall maps analyzed and related to available ground truth information as well as to large scale processes in the atmosphere. It was concluded that the ESMR system provides the most reliable and direct approach yet known for the estimation of rainfall over sparsely documented, wide oceanic regions.
A Laboratory Study on the Reliability Estimations of the Mini-CEX
ERIC Educational Resources Information Center
de Lima, Alberto Alves; Conde, Diego; Costabel, Juan; Corso, Juan; Van der Vleuten, Cees
2013-01-01
Reliability estimations of workplace-based assessments with the mini-CEX are typically based on real-life data. Estimations are based on the assumption of local independence: the object of the measurement should not be influenced by the measurement itself and samples should be completely independent. This is difficult to achieve. Furthermore, the…
Comparability and Reliability Considerations of Adequate Yearly Progress
ERIC Educational Resources Information Center
Maier, Kimberly S.; Maiti, Tapabrata; Dass, Sarat C.; Lim, Chae Young
2012-01-01
The purpose of this study is to develop an estimate of Adequate Yearly Progress (AYP) that will allow for reliable and valid comparisons among student subgroups, schools, and districts. A shrinkage-type estimator of AYP using the Bayesian framework is described. Using simulated data, the performance of the Bayes estimator will be compared to…
Sample Size for Estimation of G and Phi Coefficients in Generalizability Theory
ERIC Educational Resources Information Center
Atilgan, Hakan
2013-01-01
Problem Statement: Reliability, which refers to the degree to which measurement results are free from measurement errors, as well as its estimation, is an important issue in psychometrics. Several methods for estimating reliability have been suggested by various theories in the field of psychometrics. One of these theories is the generalizability…
Software For Computing Reliability Of Other Software
NASA Technical Reports Server (NTRS)
Nikora, Allen; Antczak, Thomas M.; Lyu, Michael
1995-01-01
Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.
Preliminary assessment of systems for deriving liquid and gaseous fuels from waste or grown organics
NASA Technical Reports Server (NTRS)
Graham, R. W.; Reynolds, T. W.; Hsu, Y. Y.
1976-01-01
The overall feasibility of the chemical conversion of waste or grown organic matter to fuel is examined from the technical, economic, and social viewpoints. The energy contribution from a system that uses waste and grown organic feedstocks is estimated as 4 to 12 percent of our current energy consumption. Estimates of today's market prices for these fuels are included. Economic and social issues are as important as technology in determining the feasibility of such a proposal. An orderly program of development and demonstration is recommended to provide reliable data for an assessment of the viability of the proposal.
The common engine concept for ALS application - A cost reduction approach
NASA Technical Reports Server (NTRS)
Bair, E. K.; Schindler, C. M.
1989-01-01
Future launch systems require the application of propulsion systems which have been designed and developed to meet mission model needs while providing high degrees of reliability and cost effectiveness. Vehicle configurations which utilize different propellant combinations for booster and core stages can benefit from a common engine approach where a single engine design can be configured to operate on either set of propellants and thus serve as either a booster or core engine. Engine design concepts and mission application for a vehicle employing a common engine are discussed. Engine program cost estimates were made and cost savings, over the design and development of two unique engines, estimated.
A Decreasing Failure Rate, Mixed Exponential Model Applied to Reliability.
1981-06-01
Trident missile systems have been observed. The mixed exponential distribu- tion has been shown to fit the life data for the electronic equipment on...these systems . This paper discusses some of the estimation problems which occur with the decreasing failure rate mixed exponential distribution when...assumption of constant or increasing failure rate seemed to be incorrect. 2. However, the design of this electronic equipment indicated that
NASA Technical Reports Server (NTRS)
1976-01-01
This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers into a low-cost, maintainable configuration.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
Lake, CA ,93555- 6110 8. PERFORMING ORGANIZATION REPORT NUMBER 9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES) 10. SPONSOR/MONITOR’S ACRONYM(S...Activity, the systems engineering team is responsible for system and software requirements. 2 . Process Dashboard is a software planning and tracking tool... CA 93555- 6110 760-939-6989 Brad Hodgins is an interim TSP Mentor Coach, SEI-Authorized TSP Coach, SEI-Certified PSP/TSP Instructor, and SEI
Xu, Xidong; Wickens, Christopher D; Rantanen, Esa M
2007-01-15
A total of 24 pilots viewed dynamic encounters between their own aircraft and an intruder aircraft on a 2-D cockpit display of traffic information (CDTI) and estimated the point and time of closest approach. A three-level alerting system provided a correct categorical estimate of the projected miss distance on 83% of the trials. The remaining 17% of alerts were equally divided between misses and false alarms, of large and small magnitude. Roughly half the pilots depended on automation to improve estimation of miss distance relative to the baseline pilots, who viewed identical trials without the aid of automated alerts. Moreover, they did so more on the more difficult traffic trials resulting in improved performance on the 83% correct automation trials without causing harm on the 17% automation-error trials, compared to the baseline group. The automated alerts appeared to lead pilots to inspect the raw data more closely. While assisting the accurate prediction of miss distance, the automation led to an underestimate of the time remaining until the point of closest approach. The results point to the benefits of even imperfect automation in the strategic alerts characteristic of the CDTI, at least as long as this reliability remains high (above 80%).
Resonator design and performance estimation for a space-based laser transmitter
NASA Astrophysics Data System (ADS)
Agrawal, Lalita; Bhardwaj, Atul; Pal, Suranjan; Kamalakar, J. A.
2006-12-01
Development of a laser transmitter for space applications is a highly challenging task. The laser must be rugged, reliable, lightweight, compact and energy efficient. Most of these features are inherently achieved by diode pumping of solid state lasers. Overall system reliability can further be improved by appropriate optical design of the laser resonator besides selection of suitable electro-optical and opto-mechanical components. This paper presents the design details and the theoretically estimated performance of a crossed-porro prism based, folded Z-shaped laser resonator. A symmetrically pumped Nd: YAG laser rod of 3 mm diameter and 60 mm length is placed in the gain arm with total input peak power of 1800 W from laser diode arrays. Electro-optical Q-switching is achieved through a combination of a polarizer, a fractional waveplate and LiNbO 3 Q-switch crystal (9 x 9 x 25 mm) placed in the feedback arm. Polarization coupled output is obtained by optimizing azimuth angle of quarter wave plate placed in the gain arm. Theoretical estimation of laser output energy and pulse width has been carried out by varying input power levels and resonator length to analyse the performance tolerances. The designed system is capable of meeting the objective of generating laser pulses of 10 ns duration and 30 mJ energy @ 10 Hz.
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with re-design rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously designing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing on solely locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing
NASA Astrophysics Data System (ADS)
Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.
2017-12-01
Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for use in an origami-type laser-based custom manufacturing machine employing folding and cutting of sheet material to manufacture 3D objects. We have developed such a system for use in a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for a laser system based on the estimated data and user’s job requests, a robotic arm manipulating the workpiece in the work space, and middleware, named Etherware, supporting the communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.
Improving the Discipline of Cost Estimation and Analysis
NASA Technical Reports Server (NTRS)
Piland, William M.; Pine, David J.; Wilson, Delano M.
2000-01-01
The need to improve the quality and accuracy of cost estimates of proposed new aerospace systems has been widely recognized. The industry has done the best job of maintaining related capability with improvements in estimation methods and giving appropriate priority to the hiring and training of qualified analysts. Some parts of Government, and National Aeronautics and Space Administration (NASA) in particular, continue to need major improvements in this area. Recently, NASA recognized that its cost estimation and analysis capabilities had eroded to the point that the ability to provide timely, reliable estimates was impacting the confidence in planning man), program activities. As a result, this year the Agency established a lead role for cost estimation and analysis. The Independent Program Assessment Office located at the Langley Research Center was given this responsibility.
[Evaluation of the reliability of freight elevator operators].
Gosk, A; Borodulin-Nadzieja, L; Janocha, A; Salomon, E
1991-01-01
The study involved 58 workers employed at winding machines. Their reliability was estimated from the results of psychomotoric test precision, condition of the vegetative nervous system, and from the results of psychological tests. The tests were carried out at the laboratory and at the workplaces, with all distractive factors and functional connection of the work process present. We have found that the reliability of the workers may be affected by a variety of factors. Among the winding machine operators, work monotony can lead to "monotony syndrome". Among the signalists , the appreciation of great responsibility can lead to unpredictable and non-adequate reactions. From both groups, persons displaying a lower-than-average precision were isolated. All those persons demonstrated a reckless attitude and the opinion of their superiors about them was poor. Those persons constitute potential risk for the reliable operation of the discussed team.
Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.
Burdorf, A
1995-02-01
The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
The relationship between cost estimates reliability and BIM adoption: SEM analysis
NASA Astrophysics Data System (ADS)
Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.
2018-02-01
This paper presents the usage of Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and full structural model. The SEM models were then validated through the assessment on their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.079<0.08, GFI=0.824, CFI=0.962>0.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationship exists amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive to accelerate their cost estimating task through BIM adoption.
The challenge of mapping between two medical coding systems.
Wojcik, Barbara E; Stein, Catherine R; Devore, Raymond B; Hassell, L Harrison
2006-11-01
Deployable medical systems patient conditions (PCs) designate groups of patients with similar medical conditions and, therefore, similar treatment requirements. PCs are used by the U.S. military to estimate field medical resources needed in combat operations. Information associated with each of the 389 PCs is based on subject matter expert opinion, instead of direct derivation from standard medical codes. Currently, no mechanisms exist to tie current or historical medical data to PCs. Our study objective was to determine whether reliable conversion between PC codes and International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) diagnosis codes is possible. Data were analyzed for three professional coders assigning all applicable ICD-9-CM diagnosis codes to each PC code. Inter-rater reliability was measured by using Cohen's K statistic and percent agreement. Methods were developed to calculate kappa statistics when multiple responses could be selected from many possible categories. Overall, we found moderate support for the possibility of reliable conversion between PCs and ICD-9-CM diagnoses (mean kappa = 0.61). Current PCs should be modified into a system that is verifiable with real data.
Study of turboprop systems reliability and maintenance costs
NASA Technical Reports Server (NTRS)
1978-01-01
The overall reliability and maintenance costs (R&MC's) of past and current turboprop systems were examined. Maintenance cost drivers were found to be scheduled overhaul (40%), lack of modularity particularly in the propeller and reduction gearbox, and lack of inherent durability (reliability) of some parts. Comparisons were made between the 501-D13/54H60 turboprop system and the widely used JT8D turbofan. It was found that the total maintenance cost per flight hour of the turboprop was 75% higher than that of the JT8D turbofan. Part of this difference was due to propeller and gearbox costs being higher than those of the fan and reverser, but most of the difference was in the engine core where the older technology turboprop core maintenance costs were nearly 70 percent higher than for the turbofan. The estimated maintenance cost of both the advanced turboprop and advanced turbofan were less than the JT8D. The conclusion was that an advanced turboprop and an advanced turbofan, using similar cores, will have very competitive maintenance costs per flight hour.
Sleep Estimates Using Microelectromechanical Systems (MEMS)
te Lindert, Bart H. W.; Van Someren, Eus J. W.
2013-01-01
Study Objectives: Although currently more affordable than polysomnography, actigraphic sleep estimates have disadvantages. Brand-specific differences in data reduction impede pooling of data in large-scale cohorts and may not fully exploit movement information. Sleep estimate reliability might improve by advanced analyses of three-axial, linear accelerometry data sampled at a high rate, which is now feasible using microelectromechanical systems (MEMS). However, it might take some time before these analyses become available. To provide ongoing studies with backward compatibility while already switching from actigraphy to MEMS accelerometry, we designed and validated a method to transform accelerometry data into the traditional actigraphic movement counts, thus allowing for the use of validated algorithms to estimate sleep parameters. Design: Simultaneous actigraphy and MEMS-accelerometry recording. Setting: Home, unrestrained. Participants: Fifteen healthy adults (23-36 y, 10 males, 5 females). Interventions: None. Measurements: Actigraphic movement counts/15-sec and 50-Hz digitized MEMS-accelerometry. Analyses: Passing-Bablok regression optimized transformation of MEMS-accelerometry signals to movement counts. Kappa statistics calculated agreement between individual epochs scored as wake or sleep. Bland-Altman plots evaluated reliability of common sleep variables both between and within actigraphs and MEMS-accelerometers. Results: Agreement between epochs was almost perfect at the low, medium, and high threshold (kappa = 0.87 ± 0.05, 0.85 ± 0.06, and 0.83 ± 0.07). Sleep parameter agreement was better between two MEMS-accelerometers or a MEMS-accelerometer and an actigraph than between two actigraphs. Conclusions: The algorithm allows for continuity of outcome parameters in ongoing actigraphy studies that consider switching to MEMS-accelerometers. Its implementation makes backward compatibility feasible, while collecting raw data that, in time, could provide better sleep estimates and promote cross-study data pooling. Citation: te Lindert BHW; Van Someren EJW. Sleep estimates using microelectromechanical systems (MEMS). SLEEP 2013;36(5):781-789. PMID:23633761
Mohammadi, Younes; Parsaeian, Mahboubeh; Farzadfar, Farshad; Kasaeian, Amir; Mehdipour, Parinaz; Sheidaei, Ali; Mansouri, Anita; Saeedi Moghaddam, Sahar; Djalalinia, Shirin; Mahmoudi, Mahmood; Khosravi, Ardeshir; Yazdani, Kamran
2014-03-01
Calculation of burden of diseases and risk factors is crucial to set priorities in the health care systems. Nevertheless, the reliable measurement of mortality rates is the main barrier to reach this goal. Unfortunately, in many developing countries the vital registration system (VRS) is either defective or does not exist at all. Consequently, alternative methods have been developed to measure mortality. This study is a subcomponent of NASBOD project, which is currently conducting in Iran. In this study, we aim to calculate incompleteness of the Death Registration System (DRS) and then to estimate levels and trends of child and adult mortality using reliable methods. In order to estimate mortality rates, first, we identify all possible data sources. Then, we calculate incompleteness of child and adult morality separately. For incompleteness of child mortality, we analyze summary birth history data using maternal age cohort and maternal age period methods. Then, we combine these two methods using LOESS regression. However, these estimates are not plausible for some provinces. We use additional information of covariates such as wealth index and years of schooling to make predictions for these provinces using spatio-temporal model. We generate yearly estimates of mortality using Gaussian process regression that covers both sampling and non-sampling errors within uncertainty intervals. By comparing the resulted estimates with mortality rates from DRS, we calculate child mortality incompleteness. For incompleteness of adult mortality, Generalized Growth Balance, Synthetic Extinct Generation and a hybrid of two mentioned methods are used. Afterwards, we combine incompleteness of three methods using GPR, and apply it to correct and adjust the number of deaths. In this study, we develop a conceptual framework to overcome the existing challenges for accurate measuring of mortality rates. The resulting estimates can be used to inform policy-makers about past, current and future mortality rates as a major indicator of health status of a population.
The PAWS and STEM reliability analysis programs
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Stevenson, Philip H.
1988-01-01
The PAWS and STEM programs are new design/validation tools. These programs provide a flexible, user-friendly, language-based interface for the input of Markov models describing the behavior of fault-tolerant computer systems. These programs produce exact solutions of the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. PAWS uses a Pade approximation as a solution technique; STEM uses a Taylor series as a solution technique. Both programs have the capability to solve numerically stiff models. PAWS and STEM possess complementary properties with regard to their input space; and, an additional strength of these programs is that they accept input compatible with the SURE program. If used in conjunction with SURE, PAWS and STEM provide a powerful suite of programs to analyze the reliability of fault-tolerant computer systems.
NASA Astrophysics Data System (ADS)
Gurov, V. V.
2017-01-01
Software tools for educational purposes, such as e-lessons, computer-based testing system, from the point of view of reliability, have a number of features. The main ones among them are the need to ensure a sufficiently high probability of their faultless operation for a specified time, as well as the impossibility of their rapid recovery by the way of replacing it with a similar running program during the classes. The article considers the peculiarities of reliability evaluation of programs in contrast to assessments of hardware reliability. The basic requirements to reliability of software used for carrying out practical and laboratory classes in the form of computer-based training programs are given. The essential requirements applicable to the reliability of software used for conducting the practical and laboratory studies in the form of computer-based teaching programs are also described. The mathematical tool based on Markov chains, which allows to determine the degree of debugging of the training program for use in the educational process by means of applying the graph of the software modules interaction, is presented.
Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.
Pickett, Andrew C; Valdez, Danny; Barry, Adam E
2017-09-01
Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.
Effectiveness of glucose monitoring systems modified for the visually impaired.
Bernbaum, M; Albert, S G; Brusca, S; McGinnis, J; Miller, D; Hoffmann, J W; Mooradian, A D
1993-10-01
To compare three glucose meters modified for use by individuals with diabetes and visual impairment regarding accuracy, precision, and clinical reliability. Ten subjects with diabetes and visual impairment performed self-monitoring of blood glucose using each of the three commercially available blood glucose meters modified for visually impaired users (the AccuChek Freedom [Boehringer Mannheim, Indianapolis, IN], the Diascan SVM [Home Diagnostics, Eatontown, NJ], and the One Touch [Lifescan, Milpitas, CA]). The meters were independently evaluated by a laboratory technologist for precision and accuracy determinations. Only two meters were acceptable with regard to laboratory precision (coefficient of variation < 10%)--the Accuchek and the One Touch. The Accuchek and the One Touch did not differ significantly with regard to laboratory estimates of accuracy. A great discrepancy of the clinical reliability results was observed between these two meters. The Accuchek maintained a high degree of reliability (y = 0.99X + 0.44, r = 0.97, P = 0.001). The visually impaired subjects were unable to perform reliable testing using the One Touch system because of a lack of appropriate tactile landmarks and auditory signals. In addition to laboratory assessments of glucose meters, monitoring systems designed for the visually impaired must include adequate tactile and audible feedback features to allow for the acquisition and placement of appropriate blood samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the secondmore » explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.« less
Spur, helical, and spiral bevel transmission life modeling
NASA Technical Reports Server (NTRS)
Savage, Michael; Rubadeux, Kelly L.; Coe, Harold H.; Coy, John J.
1994-01-01
A computer program, TLIFE, which estimates the life, dynamic capacity, and reliability of aircraft transmissions, is presented. The program enables comparisons of transmission service life at the design stage for optimization. A variety of transmissions may be analyzed including: spur, helical, and spiral bevel reductions as well as series combinations of these reductions. The basic spur and helical reductions include: single mesh, compound, and parallel path plus revert star and planetary gear trains. A variety of straddle and overhung bearing configurations on the gear shafts are possible as is the use of a ring gear for the output. The spiral bevel reductions include single and dual input drives with arbitrary shaft angles. The program is written in FORTRAN 77 and has been executed both in the personal computer DOS environment and on UNIX workstations. The analysis may be performed in either the SI metric or the English inch system of units. The reliability and life analysis is based on the two-parameter Weibull distribution lives of the component gears and bearings. The program output file describes the overall transmission and each constituent transmission, its components, and their locations, capacities, and loads. Primary output is the dynamic capacity and 90-percent reliability and mean lives of the unit transmissions and the overall system which can be used to estimate service overhaul frequency requirements. Two examples are presented to illustrate the information available for single element and series transmissions.
Cost estimation of HVDC transmission system of Bangka's NPP candidates
NASA Astrophysics Data System (ADS)
Liun, Edwaren; Suparman
2014-09-01
Regarding nuclear power plant development in Bangka Island, it can be estimated that produced power will be oversupply for the Bangka Island and needs to transmit to Sumatra or Java Island. The distance between the regions or islands causing considerable loss of power in transmission by alternating current, and a wide range of technical and economical issues. The objective of this paper addresses to economics analysis of direct current transmission system to overcome those technical problem. Direct current transmission has a stable characteristic, so that the power delivery from Bangka to Sumatra or Java in a large scale efficiently and reliably can be done. HVDC system costs depend on the power capacity applied to the system and length of the transmission line in addition to other variables that may be different.
Mei, Wenjuan; Zeng, Xianping; Yang, Chenglin; Zhou, Xiuyun
2017-01-01
The insulated gate bipolar transistor (IGBT) is a kind of excellent performance switching device used widely in power electronic systems. How to estimate the remaining useful life (RUL) of an IGBT to ensure the safety and reliability of the power electronics system is currently a challenging issue in the field of IGBT reliability. The aim of this paper is to develop a prognostic technique for estimating IGBTs’ RUL. There is a need for an efficient prognostic algorithm that is able to support in-situ decision-making. In this paper, a novel prediction model with a complete structure based on optimally pruned extreme learning machine (OPELM) and Volterra series is proposed to track the IGBT’s degradation trace and estimate its RUL; we refer to this model as Volterra k-nearest neighbor OPELM prediction (VKOPP) model. This model uses the minimum entropy rate method and Volterra series to reconstruct phase space for IGBTs’ ageing samples, and a new weight update algorithm, which can effectively reduce the influence of the outliers and noises, is utilized to establish the VKOPP network; then a combination of the k-nearest neighbor method (KNN) and least squares estimation (LSE) method is used to calculate the output weights of OPELM and predict the RUL of the IGBT. The prognostic results show that the proposed approach can predict the RUL of IGBT modules with small error and achieve higher prediction precision and lower time cost than some classic prediction approaches. PMID:29099811
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
ERIC Educational Resources Information Center
Lincove, Jane Arnold; Osborne, Cynthia; Dillon, Amanda; Mills, Nicholas
2014-01-01
Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cadwallader, Lee C.; Zhao, Haihua
In this study, selected accident case histories are described that illustrate the potential modes of injury from gas jets, pressure-driven missiles, and asphyxiants. Gas combustion hazards are also briefly mentioned. Using high-pressure helium and nitrogen, estimates of safe exclusion distances are calculated for differing pressures, temperatures, and breach sizes. Some sources for gas system reliability values are also cited.
Software Technology for Adaptable, Reliable Systems (STARS)
1994-03-25
Tmeline(3), SECOMO(3), SEER(3), GSFC Software Engineering Lab Model(l), SLIM(4), SEER-SEM(l), SPQR (2), PRICE-S(2), internally-developed models(3), APMSS(1...3 " Timeline - 3 " SASET (Software Architecture Sizing Estimating Tool) - 2 " MicroMan 11- 2 * LCM (Logistics Cost Model) - 2 * SPQR - 2 * PRICE-S - 2
National visitor use monitoring implementation in Alaska.
Eric M. White; Joshua B. Wilson
2008-01-01
The USDA Forest Service implemented the National Visitor Use Monitoring (NVUM) program across the entire National Forest System (NFS) in calendar year 2000. The primary objective of the NVUM program is to develop reliable estimates of recreation use on NFS lands via a nationally consistent, statistically valid sampling approach. Secondary objectives of NVUM are to...
Are Handheld Computers Dependable? A New Data Collection System for Classroom-Based Observations
ERIC Educational Resources Information Center
Adiguzel, Tufan; Vannest, Kimberly J.; Parker, Richard I.
2009-01-01
Very little research exists on the dependability of handheld computers used in public school classrooms. This study addresses four dependability criteria--reliability, maintainability, availability, and safety--to evaluate a data collection tool on a handheld computer. Data were collected from five sources: (1) time-use estimations by 19 special…
A Compact VLSI System for Bio-Inspired Visual Motion Estimation.
Shi, Cong; Luo, Gang
2018-04-01
This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
The relative impact of baryons and cluster shape on weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Lee, B. E.; Le Brun, A. M. C.; Haq, M. E.; Deering, N. J.; King, L. J.; Applegate, D.; McCarthy, I. G.
2018-05-01
Weak gravitational lensing depends on the integrated mass along the line of sight. Baryons contribute to the mass distribution of galaxy clusters and the resulting mass estimates from lensing analysis. We use the cosmo-OWLS suite of hydrodynamic simulations to investigate the impact of baryonic processes on the bias and scatter of weak lensing mass estimates of clusters. These estimates are obtained by fitting NFW profiles to mock data using MCMC techniques. In particular, we examine the difference in estimates between dark matter-only runs and those including various prescriptions for baryonic physics. We find no significant difference in the mass bias when baryonic physics is included, though the overall mass estimates are suppressed when feedback from AGN is included. For lowest-mass systems for which a reliable mass can be obtained (M200 ≈ 2 × 1014M⊙), we find a bias of ≈-10 per cent. The magnitude of the bias tends to decrease for higher mass clusters, consistent with no bias for the most massive clusters which have masses comparable to those found in the CLASH and HFF samples. For the lowest mass clusters, the mass bias is particularly sensitive to the fit radii and the limits placed on the concentration prior, rendering reliable mass estimates difficult. The scatter in mass estimates between the dark matter-only and the various baryonic runs is less than between different projections of individual clusters, highlighting the importance of triaxiality.
Lunar Landing Operational Risk Model
NASA Technical Reports Server (NTRS)
Mattenberger, Chris; Putney, Blake; Rust, Randy; Derkowski, Brian
2010-01-01
Characterizing the risk of spacecraft goes beyond simply modeling equipment reliability. Some portions of the mission require complex interactions between system elements that can lead to failure without an actual hardware fault. Landing risk is currently the least characterized aspect of the Altair lunar lander and appears to result from complex temporal interactions between pilot, sensors, surface characteristics and vehicle capabilities rather than hardware failures. The Lunar Landing Operational Risk Model (LLORM) seeks to provide rapid and flexible quantitative insight into the risks driving the landing event and to gauge sensitivities of the vehicle to changes in system configuration and mission operations. The LLORM takes a Monte Carlo based approach to estimate the operational risk of the Lunar Landing Event and calculates estimates of the risk of Loss of Mission (LOM) - Abort Required and is Successful, Loss of Crew (LOC) - Vehicle Crashes or Cannot Reach Orbit, and Success. The LLORM is meant to be used during the conceptual design phase to inform decision makers transparently of the reliability impacts of design decisions, to identify areas of the design which may require additional robustness, and to aid in the development and flow-down of requirements.
Duncan, Laura; Comeau, Jinette; Wang, Li; Vitoroulis, Irene; Boyle, Michael H; Bennett, Kathryn
2018-02-19
A better understanding of factors contributing to the observed variability in estimates of test-retest reliability in published studies on standardized diagnostic interviews (SDI) is needed. The objectives of this systematic review and meta-analysis were to estimate the pooled test-retest reliability for parent and youth assessments of seven common disorders, and to examine sources of between-study heterogeneity in reliability. Following a systematic review of the literature, multilevel random effects meta-analyses were used to analyse 202 reliability estimates (Cohen's kappa = ҡ) from 31 eligible studies and 5,369 assessments of 3,344 children and youth. Pooled reliability was moderate at ҡ = .58 (CI 95% 0.53-0.63) and between-study heterogeneity was substantial (Q = 2,063 (df = 201), p < .001 and I 2 = 79%). In subgroup analysis, reliability varied across informants for specific types of psychiatric disorder (ҡ = .53-.69 for parent vs. ҡ = .39-.68 for youth) with estimates significantly higher for parents on attention deficit hyperactivity disorder, oppositional defiant disorder and the broad groupings of externalizing and any disorder. Reliability was also significantly higher in studies with indicators of poor or fair study methodology quality (sample size <50, retest interval <7 days). Our findings raise important questions about the meaningfulness of published evidence on the test-retest reliability of SDIs and the usefulness of these tools in both clinical and research contexts. Potential remedies include the introduction of standardized study and reporting requirements for reliability studies, and exploration of other approaches to assessing and classifying child and adolescent psychiatric disorder. © 2018 Association for Child and Adolescent Mental Health.
Casartelli, Nicola; Müller, Roland; Maffiuletti, Nicola A
2010-11-01
The aim of the present study was to verify the validity and reliability of the Myotest accelerometric system (Myotest SA, Sion, Switzerland) for the assessment of vertical jump height. Forty-four male basketball players (age range: 9-25 years) performed series of squat, countermovement and repeated jumps during 2 identical test sessions separated by 2-15 days. Flight height was simultaneously quantified with the Myotest system and validated photoelectric cells (Optojump). Two calculation methods were used to estimate the jump height from Myotest recordings: flight time (Myotest-T) and vertical takeoff velocity (Myotest-V). Concurrent validity was investigated comparing Myotest-T and Myotest-V to the criterion method (Optojump), and test-retest reliability was also examined. As regards validity, Myotest-T overestimated jumping height compared to Optojump (p < 0.001) with a systematic bias of approximately 7 cm, even though random errors were low (2.7 cm) and intraclass correlation coefficients (ICCs) where high (>0.98), that is, excellent validity. Myotest-V overestimated jumping height compared to Optojump (p < 0.001), with high random errors (>12 cm), high limits of agreement ratios (>36%), and low ICCs (<0.75), that is, poor validity. As regards reliability, Myotest-T showed high ICCs (range: 0.92-0.96), whereas Myotest-V showed low ICCs (range: 0.56-0.89), and high random errors (>9 cm). In conclusion, Myotest-T is a valid and reliable method for the assessment of vertical jump height, and its use is legitimate for field-based evaluations, whereas Myotest-V is neither valid nor reliable.
NASA Astrophysics Data System (ADS)
Bechou, L.; Deshayes, Y.; Aupetit-Berthelemot, C.; Guerin, A.; Tronche, C.
Space missions for Earth Observation are called upon to carry a growing number of instruments in their payload, whose performances are increasing. Future space systems are therefore intended to generate huge amounts of data and a key challenge in coming years will therefore lie in the ability to transmit that significant quantity of data to ground. Thus very high data rate Payload Telemetry (PLTM) systems will be required to face the demand of the future Earth Exploration Satellite Systems and reliability is one of the major concern of such systems. An attractive approach associated with the concept of predictive modeling consists in analyzing the impact of components malfunctioning on the optical link performances taking into account the network requirements and experimental degradation laws. Reliability estimation is traditionally based on life-testing and a basic approach is to use Telcordia requirements (468GR) for optical telecommunication applications. However, due to the various interactions between components, operating lifetime of a system cannot be taken as the lifetime of the less reliable component. In this paper, an original methodology is proposed to estimate reliability of an optical communication system by using a dedicated system simulator for predictive modeling and design for reliability. At first, we present frameworks of point-to-point optical communication systems for space applications where high data rate (or frequency bandwidth), lower cost or mass saving are needed. Optoelectronics devices used in these systems can be similar to those found in terrestrial optical network. Particularly we report simulation results of transmission performances after introduction of DFB Laser diode parameters variations versus time extrapolated from accelerated tests based on terrestrial or submarine telecommunications qualification standards. Simulations are performed to investigate and predict the consequence of degradations of the Laser diode (acting as a - requency carrier) on system performances (eye diagram, quality factor and BER). The studied link consists in 4× 2.5 Gbits/s WDM channels with direct modulation and equally spaced (0,8 nm) around the 1550 nm central wavelength. Results clearly show that variation of fundamental parameters such as bias current or central wavelength induces a penalization of dynamic performances of the complete WDM link. In addition different degradation kinetics of aged Laser diodes from a same batch have been implemented to build the final distribution of Q-factor and BER values after 25 years. When considering long optical distance, fiber attenuation, EDFA noise, dispersion, PMD, ... penalize network performances that can be compensated using Forward Error Correction (FEC) coding. Three methods have been investigated in the case of On-Off Keying (OOK) transmission over an unipolar optical channel corrupted by Gaussian noise. Such system simulations highlight the impact of component parameter degradations on the whole network performances allowing to optimize various time and cost consuming sensitivity analyses at the early stage of the system development. Thus the validity of failure criteria in relation with mission profiles can be evaluated representing a significant part of the general PDfR effort in particular for aerospace applications.
Examining the reliability of ADAS-Cog change scores.
Grochowalski, Joseph H; Liu, Ying; Siedlecki, Karen L
2016-09-01
The purpose of this study was to estimate and examine ways to improve the reliability of change scores on the Alzheimer's Disease Assessment Scale, Cognitive Subtest (ADAS-Cog). The sample, provided by the Alzheimer's Disease Neuroimaging Initiative, included individuals with Alzheimer's disease (AD) (n = 153) and individuals with mild cognitive impairment (MCI) (n = 352). All participants were administered the ADAS-Cog at baseline and 1 year, and change scores were calculated as the difference in scores over the 1-year period. Three types of change score reliabilities were estimated using multivariate generalizability. Two methods to increase change score reliability were evaluated: reweighting the subtests of the scale and adding more subtests. Reliability of ADAS-Cog change scores over 1 year was low for both the AD sample (ranging from .53 to .64) and the MCI sample (.39 to .61). Reweighting the change scores from the AD sample improved reliability (.68 to .76), but lengthening provided no useful improvement for either sample. The MCI change scores had low reliability, even with reweighting and adding additional subtests. The ADAS-Cog scores had low reliability for measuring change. Researchers using the ADAS-Cog should estimate and report reliability for their use of the change scores. The ADAS-Cog change scores are not recommended for assessment of meaningful clinical change.
Leveraging AMI data for distribution system model calibration and situational awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini
The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as the ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation andmore » regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. Moreover, we demonstrate this paper for a university distribution network with the state-of-the-art real-time and historical smart meter data infrastructure.« less
Leveraging AMI data for distribution system model calibration and situational awareness
Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...
2015-01-15
The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as the ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation andmore » regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. Moreover, we demonstrate this paper for a university distribution network with the state-of-the-art real-time and historical smart meter data infrastructure.« less
Closed-form solution of decomposable stochastic models
NASA Technical Reports Server (NTRS)
Sjogren, Jon A.
1990-01-01
Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
ERIC Educational Resources Information Center
Oakland, Thomas
New strategies for evaluation criterion referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of normed referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…
A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates
ERIC Educational Resources Information Center
Kim, Seonghoon
2012-01-01
Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
Wu, Joseph T.; Ho, Andrew; Ma, Edward S. K.; Lee, Cheuk Kwong; Chu, Daniel K. W.; Ho, Po-Lai; Hung, Ivan F. N.; Ho, Lai Ming; Lin, Che Kit; Tsang, Thomas; Lo, Su-Vui; Lau, Yu-Lung; Leung, Gabriel M.
2011-01-01
Background In an emerging influenza pandemic, estimating severity (the probability of a severe outcome, such as hospitalization, if infected) is a public health priority. As many influenza infections are subclinical, sero-surveillance is needed to allow reliable real-time estimates of infection attack rate (IAR) and severity. Methods and Findings We tested 14,766 sera collected during the first wave of the 2009 pandemic in Hong Kong using viral microneutralization. We estimated IAR and infection-hospitalization probability (IHP) from the serial cross-sectional serologic data and hospitalization data. Had our serologic data been available weekly in real time, we would have obtained reliable IHP estimates 1 wk after, 1–2 wk before, and 3 wk after epidemic peak for individuals aged 5–14 y, 15–29 y, and 30–59 y. The ratio of IAR to pre-existing seroprevalence, which decreased with age, was a major determinant for the timeliness of reliable estimates. If we began sero-surveillance 3 wk after community transmission was confirmed, with 150, 350, and 500 specimens per week for individuals aged 5–14 y, 15–19 y, and 20–29 y, respectively, we would have obtained reliable IHP estimates for these age groups 4 wk before the peak. For 30–59 y olds, even 800 specimens per week would not have generated reliable estimates until the peak because the ratio of IAR to pre-existing seroprevalence for this age group was low. The performance of serial cross-sectional sero-surveillance substantially deteriorates if test specificity is not near 100% or pre-existing seroprevalence is not near zero. These potential limitations could be mitigated by choosing a higher titer cutoff for seropositivity. If the epidemic doubling time is longer than 6 d, then serial cross-sectional sero-surveillance with 300 specimens per week would yield reliable estimates when IAR reaches around 6%–10%. Conclusions Serial cross-sectional serologic data together with clinical surveillance data can allow reliable real-time estimates of IAR and severity in an emerging pandemic. Sero-surveillance for pandemics should be considered. Please see later in the article for the Editors' Summary PMID:21990967
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight saving potential, but complicates the design of the ITPS that now has both thermal and structural failure modes. The main objectives of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of 3D ITPS (solid) model. To reduce the computational expenses involved in the structural analysis, finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses which was replaced by efficient, low fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of constraints using adaptive sampling strategies. Efficient Global Reliability Analyses (EGRA) facilitates sequentially sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum were reduced drastically, while improving the accuracy. System reliability of ITPS was estimated using Monte Carlo Simulation (MCS) based method. Separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, loading conditions of the panel and error in finite element modeling. These uncertainties further increased the computational cost of MCS techniques which was also reduced by employing surrogate models. In order to estimate the error in the probability of failure estimate, bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and large number of uncertainties using adaptive sampling techniques.
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
Prediction of seasonal climate-induced variations in global food production
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Sakuma, Hirofumi; Yokozawa, Masayuki; Luo, Jing-Jia; Challinor, Andrew J.; Brown, Molly E.; Sakurai, Gen; Yamagata, Toshio
2013-10-01
Consumers, including the poor in many countries, are increasingly dependent on food imports and are thus exposed to variations in yields, production and export prices in the major food-producing regions of the world. National governments and commercial entities are therefore paying increased attention to the cropping forecasts of important food-exporting countries as well as to their own domestic food production. Given the increased volatility of food markets and the rising incidence of climatic extremes affecting food production, food price spikes may increase in prevalence in future years. Here we present a global assessment of the reliability of crop failure hindcasts for major crops at two lead times derived by linking ensemble seasonal climatic forecasts with statistical crop models. We found that moderate-to-marked yield loss over a substantial percentage (26-33%) of the harvested area of these crops is reliably predictable if climatic forecasts are near perfect. However, only rice and wheat production are reliably predictable at three months before the harvest using within-season hindcasts. The reliabilities of estimates varied substantially by crop--rice and wheat yields were the most predictable, followed by soybean and maize. The reasons for variation in the reliability of the estimates included the differences in crop sensitivity to the climate and the technology used by the crop-producing regions. Our findings reveal that the use of seasonal climatic forecasts to predict crop failures will be useful for monitoring global food production and will encourage the adaptation of food systems toclimatic extremes.
Rapid Detection of Small Movements with GNSS Doppler Observables
NASA Astrophysics Data System (ADS)
Hohensinn, Roland; Geiger, Alain
2017-04-01
High-alpine terrain reacts very sensitively to varying environmental conditions. As an example, increasing temperatures cause thawing of permafrost areas. This, in turn causes an increasing threat by natural hazards like debris flow (e.g. rock glaciers) or rockfalls. The Institute of Geodesy and Photogrammetry is contributing to alpine mass-movement monitoring systems in different project areas in the Swiss Alps. A main focus lies on providing geodetic mass-movement information derived from GNSS static solutions on a daily and a sub-daily basis, obtained with low-cost and autonomous GNSS stations. Another focus is set on rapidly providing reliable geodetic information in real-time i.e. for an integration in early warning systems. One way to achieve this is the estimation of accurate station velocities from observations of range rates, which can be obtained as Doppler observables from time derivatives of carrier phase measurements. The key for this method lies in a precise modeling of prominent effects contributing to the observed range rates, which are satellite velocity, atmospheric delay rates and relativistic effects. A suitable observation model is then devised, which accounts for these predictions. The observation model, combined with a simple kinematic movement model forms the basis for the parameter estimation. Based on the estimated station velocities, movements are then detected using a statistical test. To improve the reliablity of the estimated parameters, another spotlight is set on an on-line quality control procedure. We will present the basic algorithms as well as results from first tests which were carried out with a low-cost GPS L1 phase receiver. With a u-blox module and a sampling rate of 5 Hz, accuracies on the mm/s level can be obtained and velocities down to 1 cm/s can be detected. Reliable and accurate station velocities and movement information can be provided within seconds.
A method of bias correction for maximal reliability with dichotomous measures.
Penev, Spiridon; Raykov, Tenko
2010-02-01
This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.
Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy
2007-02-01
Cronbach's a is widely used in social science research to estimate the internal consistency of reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's a coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's a for a scale with dichotomous items can be improved by using the upper bound of coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's a via this method. The simulation analysis showed that Cronbach's a from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's a is problematic.
Bae, Sungwoo; Kim, Myungchin
2016-01-01
In order to realize a true WoT environment, a reliable power circuit is required to ensure interconnections among a range of WoT devices. This paper presents research on sensors and their effects on the reliability and response characteristics of power circuits in WoT devices. The presented research can be used in various power circuit applications, such as energy harvesting interfaces, photovoltaic systems, and battery management systems for the WoT devices. As power circuits rely on the feedback from voltage/current sensors, the system performance is likely to be affected by the sensor failure rates, sensor dynamic characteristics, and their interface circuits. This study investigated how the operational availability of the power circuits is affected by the sensor failure rates by performing a quantitative reliability analysis. In the analysis process, this paper also includes the effects of various reconstruction and estimation techniques used in power processing circuits (e.g., energy harvesting circuits and photovoltaic systems). This paper also reports how the transient control performance of power circuits is affected by sensor interface circuits. With the frequency domain stability analysis and circuit simulation, it was verified that the interface circuit dynamics may affect the transient response characteristics of power circuits. The verification results in this paper showed that the reliability and control performance of the power circuits can be affected by the sensor types, fault tolerant approaches against sensor failures, and the response characteristics of the sensor interfaces. The analysis results were also verified by experiments using a power circuit prototype. PMID:27608020
Analytical Assessment of the Reciprocating Feed System
NASA Technical Reports Server (NTRS)
Eddleman, David E.; Blackmon, James B.; Morton, Christopher D.
2006-01-01
A preliminary analysis tool has been created in Microsoft Excel to determine deliverable payload mass, total system mass, and performance of spacecraft systems using various types of propellant feed systems. These mass estimates are conducted by inserting into the user interface the basic mission parameters (e.g., thrust, burn time, specific impulse, mixture ratio, etc.), system architecture (e.g., propulsion system type and characteristics, propellants, pressurization system type, etc.), and design properties (e.g., material properties, safety factors, etc.). Different propellant feed and pressurization systems are available for comparison in the program. This gives the user the ability to compare conventional pressure fed, reciprocating feed system (RFS), autogenous pressurization thrust augmentation (APTA RFS), and turbopump systems with the deliverable payload, inert mass, and total system mass being the primary comparison metrics. Analyses of several types of missions and spacecraft were conducted and it was found that the RFS offers a performance improvement, especially in terms of delivered payload, over conventional pressure fed systems. Furthermore, it is competitive with a turbopump system at low to moderate chamber pressures, up to approximately 1,500 psi. Various example cases estimating the system mass and deliverable payload of several types of spacecraft are presented that illustrate the potential system performance advantages of the RFS. In addition, a reliability assessment of the RFS was conducted, comparing it to simplified conventional pressure fed and turbopump systems, based on MIL-STD 756B; these results showed that the RFS offers higher reliability, and thus substantially longer periods between system refurbishment, than turbopump systems, and is competitive with conventional pressure fed systems. This is primarily the result of the intrinsic RFS fail-operational capability with three run tanks, since the system can operate with just two run tanks.
NASA Astrophysics Data System (ADS)
Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.
2005-05-01
A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
High-rate multi-GNSS: what does it mean to seismology?
NASA Astrophysics Data System (ADS)
Geng, J.
2017-12-01
GNSS precise point positioning (PPP) is capable of measuring centimeter-level positions epoch by epoch at a single station, and is thus treasured in tsunami/earthquake early warning where static displacements in the near field are critical to rapidly and reliably determining the magnitude of destructive events. However, most operational real-time PPP systems at present rely on only GPS data. The deficiency of such systems is that the high reliability and availability of precise displacements cannot be maintained continuously in real time, which is however a crucial requirement for disaster resistance and response. Multi-GNSS, including GLONASS, BeiDou, Galileo and QZSS other than only GPS, can be a solution to this problem because much more satellites per epoch (e.g. 30-40) will be available. In this case, positioning failure due to data loss or blunders can be minimized, and on the other hand, positioning initializations can be accelerated to a great extent since the satellite geometry for each epoch will be enhanced enormously. We established a prototype real-time multi-GNSS PPP service based on Asia-Pacific real-time network which can collect and stream high-rate data from all five navigation systems above. We estimated high-rate satellite clock corrections and enabled undifferenced ambiguity fixing for multi-GNSS, which therefore ensures high availability and reliability of precise displacement estimates in contrast to GPS-only systems. We will report how we can benefit from multi-GNSS for seismology, especially the noise characteristics of high-rate and sub-daily displacements. We will also use storm surge loading events to demonstrate the contribution of multi-GNSS to sub-daily transient signals.
2015-09-01
HMMWV), M1A1 Main Battle Tanks, Tank Retrievers, Armored Breeching Vehicles, Amphibious Assault Vehicles, and several variants of the Medium...MCPP-N equipment stored in the Norwegian caves. As noted earlier, Marine Corps equipment is distributed among six caves. While the current version of...according to Marine Corps Business System Integration Team officials, the initial plan was for the first version of the Global Combat Support System
NASA Technical Reports Server (NTRS)
Trivedi, K. S. (Editor); Clary, J. B. (Editor)
1980-01-01
A computer aided reliability estimation procedure (CARE 3), developed to model the behavior of ultrareliable systems required by flight-critical avionics and control systems, is evaluated. The mathematical models, numerical method, and fault-tolerant architecture modeling requirements are examined, and the testing and characterization procedures are discussed. Recommendations aimed at enhancing CARE 3 are presented; in particular, the need for a better exposition of the method and the user interface is emphasized.
Zabaleta, Haritz; Valencia, David; Perry, Joel; Veneman, Jan; Keller, Thierry
2011-01-01
ArmAssist is a wireless robot for post stroke upper limb rehabilitation. Knowing the position of the arm is essential for any rehabilitation device. In this paper, we describe a method based on an artificial landmark navigation system. The navigation system uses three optical mouse sensors. This enables the building of a cheap but reliable position sensor. Two of the sensors are the data source for odometry calculations, and the third optical mouse sensor takes very low resolution pictures of a custom designed mat. These pictures are processed by an optical symbol recognition algorithm which will estimate the orientation of the robot and recognize the landmarks placed on the mat. The data fusion strategy is described to detect the misclassifications of the landmarks in order to fuse only reliable information. The orientation given by the optical symbol recognition (OSR) algorithm is used to improve significantly the odometry and the recognition of the landmarks is used to reference the odometry to a absolute coordinate system. The system was tested using a 3D motion capture system. With the actual mat configuration, in a field of motion of 710 × 450 mm, the maximum error in position estimation was 49.61 mm with an average error of 36.70 ± 22.50 mm. The average test duration was 36.5 seconds and the average path length was 4173 mm.
Measuring eating disorder attitudes and behaviors: a reliability generalization study
2014-01-01
Background Although score reliability is a sample-dependent characteristic, researchers often only report reliability estimates from previous studies as justification for employing particular questionnaires in their research. The present study followed reliability generalization procedures to determine the mean score reliability of the Eating Disorder Inventory and its most commonly employed subscales (Drive for Thinness, Bulimia, and Body Dissatisfaction) and the Eating Attitudes Test as a way to better identify those characteristics that might impact score reliability. Methods Published studies that used these measures were coded based on their reporting of reliability information and additional study characteristics that might influence score reliability. Results Score reliability estimates were included in 26.15% of studies using the EDI and 36.28% of studies using the EAT. Mean Cronbach’s alphas for the EDI (total score = .91; subscales = .75 to .89), EAT-40 (total score = .81) and EAT-26 (total score = .86; subscales = .56 to .80) suggested variability in estimated internal consistency. Whereas some EDI subscales exhibited higher score reliability in clinical eating disorder samples than in nonclinical samples, other subscales did not exhibit these differences. Score reliability information for the EAT was primarily reported for nonclinical samples, making it difficult to characterize the effect of type of sample on these measures. However, there was a tendency for mean score reliability to be higher in the adult (vs. adolescent) samples and in female (vs. male) samples. Conclusions Overall, this study highlights the importance of assessing and reporting internal consistency during every test administration because reliability is affected by characteristics of the participants being examined. PMID:24764530
Onojima, Takayuki; Goto, Takahiro; Mizuhara, Hiroaki; Aoyagi, Toshio
2018-01-01
Synchronization of neural oscillations as a mechanism of brain function is attracting increasing attention. Neural oscillation is a rhythmic neural activity that can be easily observed by noninvasive electroencephalography (EEG). Neural oscillations show the same frequency and cross-frequency synchronization for various cognitive and perceptual functions. However, it is unclear how this neural synchronization is achieved by a dynamical system. If neural oscillations are weakly coupled oscillators, the dynamics of neural synchronization can be described theoretically using a phase oscillator model. We propose an estimation method to identify the phase oscillator model from real data of cross-frequency synchronized activities. The proposed method can estimate the coupling function governing the properties of synchronization. Furthermore, we examine the reliability of the proposed method using time-series data obtained from numerical simulation and an electronic circuit experiment, and show that our method can estimate the coupling function correctly. Finally, we estimate the coupling function between EEG oscillation and the speech sound envelope, and discuss the validity of these results.
A Height Estimation Approach for Terrain Following Flights from Monocular Vision.
Campos, Igor S G; Nascimento, Erickson R; Freitas, Gustavo M; Chaimowicz, Luiz
2016-12-06
In this paper, we present a monocular vision-based height estimation algorithm for terrain following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require the creation of new technologies to enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain following problem, as it is still unresolved for consumer available systems. Virtually every mapping aircraft carries a camera; therefore, we chose to exploit this in order to use presently available hardware to extract the height information toward performing terrain following flights. The proposed methodology consists of using optical flow to track features from videos obtained by the UAV, as well as its motion information to estimate the flying height. To determine if the height estimation is reliable, we trained a decision tree that takes the optical flow information as input and classifies whether the output is trustworthy or not. The classifier achieved accuracies of 80 % for positives and 90 % for negatives, while the height estimation algorithm presented good accuracy.
Automated reliability assessment for spectroscopic redshift measurements
NASA Astrophysics Data System (ADS)
Jamal, S.; Le Brun, V.; Le Fèvre, O.; Vibert, D.; Schmitt, A.; Surace, C.; Copin, Y.; Garilli, B.; Moresco, M.; Pozzetti, L.
2018-03-01
Context. Future large-scale surveys, such as the ESA Euclid mission, will produce a large set of galaxy redshifts (≥106) that will require fully automated data-processing pipelines to analyze the data, extract crucial information and ensure that all requirements are met. A fundamental element in these pipelines is to associate to each galaxy redshift measurement a quality, or reliability, estimate. Aim. In this work, we introduce a new approach to automate the spectroscopic redshift reliability assessment based on machine learning (ML) and characteristics of the redshift probability density function. Methods: We propose to rephrase the spectroscopic redshift estimation into a Bayesian framework, in order to incorporate all sources of information and uncertainties related to the redshift estimation process and produce a redshift posterior probability density function (PDF). To automate the assessment of a reliability flag, we exploit key features in the redshift posterior PDF and machine learning algorithms. Results: As a working example, public data from the VIMOS VLT Deep Survey is exploited to present and test this new methodology. We first tried to reproduce the existing reliability flags using supervised classification in order to describe different types of redshift PDFs, but due to the subjective definition of these flags (classification accuracy 58%), we soon opted for a new homogeneous partitioning of the data into distinct clusters via unsupervised classification. After assessing the accuracy of the new clusters via resubstitution and test predictions (classification accuracy 98%), we projected unlabeled data from preliminary mock simulations for the Euclid space mission into this mapping to predict their redshift reliability labels. Conclusions: Through the development of a methodology in which a system can build its own experience to assess the quality of a parameter, we are able to set a preliminary basis of an automated reliability assessment for spectroscopic redshift measurements. This newly-defined method is very promising for next-generation large spectroscopic surveys from the ground and in space, such as Euclid and WFIRST. A table of the reclassified VVDS redshifts and reliability is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A53
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radder, J.A.; Cramer, D.S.
1997-06-01
In order to meet Department of Energy (DOE) Defense Program requirements for tritium in the 2005-2007 time frame, new production capability must be made available. The Accelerator Production of Tritium (APT) Plant is being considered as an alternative to nuclear reactor production of tritium, which has been the preferred method in the past. The proposed APT plant will use a high-power proton accelerator to generate thermal neutrons that will be captured in {sup 3}He to produce tritium (3H). It is expected that the APT Plant will be built and operated at the DOE`s Savannah River Site (SRS) in Aiken, Southmore » Carolina. Discussion is focused on Reliability, Availability, Maintainability, and Inspectability (RAMI) modeling of recent conceptual designs for balance of plant (BOP) systems in the proposed APT Plant. In the conceptual designs for balance of plant (BOP) systems in the proposed APT Plant. In the conceptual design phase, system RAMI estimates are necessary to identify the best possible system alternative and to provide a valid picture of the cost effectiveness of the proposed system for comparison with other system alternatives. RAMI estimates in the phase must necessarily be based on generic data. The objective of the RAMI analyses at the conceptual design stage is to assist the designers in achieving an optimum design which balances the reliability and maintainability requirements among the subsystems and components.« less
Singh, Aparna; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta
2017-01-01
According to Ayurveda, the traditional system of healthcare of Indian origin, Agni is the factor responsible for digestion and metabolism. Four functional states (Agnibala) of Agni have been recognized: regular, irregular, intense, and weak. The objective of the present study was to develop and validate a self-assessment tool to estimate Agnibala The developed tool was evaluated for its reliability and validity by administering it to 300 healthy volunteers of either gender belonging to 18 to 40-year age group. Besides confirming the statistical validity and reliability, the practical utility of the newly developed tool was also evaluated by recording serum lipid parameters of all the volunteers. The results show that the lipid parameters vary significantly according to the status of Agni The tool, therefore, may be used to screen normal population to look for possible susceptibility to certain health conditions. © The Author(s) 2016.
Audible vision for the blind and visually impaired in indoor open spaces.
Yu, Xunyi; Ganz, Aura
2012-01-01
In this paper we introduce Audible Vision, a system that can help blind and visually impaired users navigate in large indoor open spaces. The system uses computer vision to estimate the location and orientation of the user, and enables the user to perceive his/her relative position to a landmark through 3D audio. Testing shows that Audible Vision can work reliably in real-life ever-changing environment crowded with people.
Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.
Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li; Bo Chen; Ho, Daniel W C; Guoqiang Hu; Li Yu; Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li
2018-06-01
State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made the security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative to study secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth constrained communication channels. A new mathematical model with compensation strategy is proposed to characterize the replay attacks and bandwidth constrains, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W.; Imel, Zac E.; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C.
2014-01-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. PMID:25242192
Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C
2015-02-01
The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
Low-flow characteristics of streams in the lower Wisconsin River basin
Gebert, W.A.
1978-01-01
Low-flow characteristics estimated for the lower Wisconsin River basin have a high degree of reliability when compared with other basins in Wisconsin, Reliable estimates appear to be related to the relatively uniform geologic features in the basin.
Anderson, Donald D; Segal, Neil A; Kern, Andrew M; Nevitt, Michael C; Torner, James C; Lynch, John A
2012-01-01
Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semi-automated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93-0.99) and good inter-rater reliability (0.84-0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.
Duggan, Brendan M; Rae, Anne M; Clements, Dylan N; Hocking, Paul M
2017-05-02
Genetic progress in selection for greater body mass and meat yield in poultry has been associated with an increase in gait problems which are detrimental to productivity and welfare. The incidence of suboptimal gait in breeding flocks is controlled through the use of a visual gait score, which is a subjective assessment of walking ability of each bird. The subjective nature of the visual gait score has led to concerns over its effectiveness in reducing the incidence of suboptimal gait in poultry through breeding. The aims of this study were to assess the reliability of the current visual gait scoring system in ducks and to develop a more objective method to select for better gait. Experienced gait scorers assessed short video clips of walking ducks to estimate the reliability of the current visual gait scoring system. Kendall's coefficients of concordance between and within observers were estimated at 0.49 and 0.75, respectively. In order to develop a more objective scoring system, gait components were visually scored on more than 4000 pedigreed Pekin ducks and genetic parameters were estimated for these components. Gait components, which are a more objective measure, had heritabilities that were as good as, or better than, those of the overall visual gait score. Measurement of gait components is simpler and therefore more objective than the standard visual gait score. The recording of gait components can potentially be automated, which may increase accuracy further and may improve heritability estimates. Genetic correlations were generally low, which suggests that it is possible to use gait components to select for an overall improvement in both economic traits and gait as part of a balanced breeding programme.
Latorre, Jorge; Llorens, Roberto; Colomer, Carolina; Alcañiz, Mariano
2018-04-27
Different studies have analyzed the potential of the off-the-shelf Microsoft Kinect, in its different versions, to estimate spatiotemporal gait parameters as a portable markerless low-cost alternative to laboratory grade systems. However, variability in populations, measures, and methodologies prevents accurate comparison of the results. The objective of this study was to determine and compare the reliability of the existing Kinect-based methods to estimate spatiotemporal gait parameters in healthy and post-stroke adults. Forty-five healthy individuals and thirty-eight stroke survivors participated in this study. Participants walked five meters at a comfortable speed and their spatiotemporal gait parameters were estimated from the data retrieved by a Kinect v2, using the most common methods in the literature, and by visual inspection of the videotaped performance. Errors between both estimations were computed. For both healthy and post-stroke participants, highest accuracy was obtained when using the speed of the ankles to estimate gait speed (3.6-5.5 cm/s), stride length (2.5-5.5 cm), and stride time (about 45 ms), and when using the distance between the sacrum and the ankles and toes to estimate double support time (about 65 ms) and swing time (60-90 ms). Although the accuracy of these methods is limited, these measures could occasionally complement traditional tools. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ortel, Terry W.; Spies, Ryan R.
2015-11-19
Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).
NASA Astrophysics Data System (ADS)
Suparta, Wayan; Rahman, Rosnani
2016-02-01
Global Positioning System (GPS) receivers are widely installed throughout the Peninsular Malaysia, but the implementation for monitoring weather hazard system such as flash flood is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element to the Earth's climate for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends on a large extent on the number of GPS receiver installations and meteorological sensors in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results at different phases (pre, onset, and post) that were evaluated showed that tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The finding suggests that the proposed method of spatial interpolation techniques are capable of handling limited data sources with high accuracy, which in turn can be used to predict future floods.
Mayo, Ann M
2015-01-01
It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.
2013-01-01
Summary of background data Recent smartphones, such as the iPhone, are often equipped with an accelerometer and magnetometer, which, through software applications, can perform various inclinometric functions. Although these applications are intended for recreational use, they have the potential to measure and quantify range of motion. The purpose of this study was to estimate the intra and inter-rater reliability as well as the criterion validity of the clinometer and compass applications of the iPhone in the assessment cervical range of motion in healthy participants. Methods The sample consisted of 28 healthy participants. Two examiners measured cervical range of motion of each participant twice using the iPhone (for the estimation of intra and inter-reliability) and once with the CROM (for the estimation of criterion validity). Estimates of reliability and validity were then established using the intraclass correlation coefficient (ICC). Results We observed a moderate intra-rater reliability for each movement (ICC = 0.65-0.85) but a poor inter-rater reliability (ICC < 0.60). For the criterion validity, the ICCs are moderate (>0.50) to good (>0.65) for movements of flexion, extension, lateral flexions and right rotation, but poor (<0.50) for the movement left rotation. Conclusion We found good intra-rater reliability and lower inter-rater reliability. When compared to the gold standard, these applications showed moderate to good validity. However, before using the iPhone as an outcome measure in clinical settings, studies should be done on patients presenting with cervical problems. PMID:23829201
Multi-particle three-dimensional coordinate estimation in real-time optical manipulation
NASA Astrophysics Data System (ADS)
Dam, J. S.; Perch-Nielsen, I.; Palima, D.; Gluckstad, J.
2009-11-01
We have previously shown how stereoscopic images can be obtained in our three-dimensional optical micromanipulation system [J. S. Dam et al, Opt. Express 16, 7244 (2008)]. Here, we present an extension and application of this principle to automatically gather the three-dimensional coordinates for all trapped particles with high tracking range and high reliability without requiring user calibration. Through deconvolving of the red, green, and blue colour planes to correct for bleeding between colour planes, we show that we can extend the system to also utilize green illumination, in addition to the blue and red. Applying the green colour as on-axis illumination yields redundant information for enhanced error correction, which is used to verify the gathered data, resulting in reliable coordinates as well as producing visually attractive images.
Fallback options for airgap sensor fault of an electromagnetic suspension system
NASA Astrophysics Data System (ADS)
Michail, Konstantinos; Zolotas, Argyrios C.; Goodall, Roger M.
2013-06-01
The paper presents a method to recover the performance of an electromagnetic suspension under faulty airgap sensor. The proposed control scheme is a combination of classical control loops, a Kalman Estimator and analytical redundancy (for the airgap signal). In this way redundant airgap sensors are not essential for reliable operation of this system. When the airgap sensor fails the required signal is recovered using a combination of a Kalman estimator and analytical redundancy. The performance of the suspension is optimised using genetic algorithms and some preliminary robustness issues to load and operating airgap variations are discussed. Simulations on a realistic model of such type of suspension illustrate the efficacy of the proposed sensor tolerant control method.
The use of remote sensing for updating extensive forest inventories
John F. Kelly
1990-01-01
The Forest Inventory and Analysis unit of the USDA Forest Service Southern Forest Experiment Station (SO-FIA) has the research task of devising an inventory updating system that can be used to provide reliable estimates of forest area, volume, growth, and removals at the State level. These updated inventories must be accomplished within current budgetary restraints....
Development of a use estimation process at a metropolitan park district
Andrew J. Mowen
2001-01-01
The need for a committed system to monitor and track visitation over time is increasingly recognized by agencies and organizations that must be responsive to staffing, budgeting, and relations with external stakeholders. This paper highlights a process that one metropolitan park agency uses to monitor visitation, discusses the role of validity and reliability in the...
AFRL/Cornell Information Assurance Institute
2007-03-01
revewing this colection ofinformation . Send connents regarding this burden estimate or any other aspect of this collection of information, indcudng...collabora- tions involving Cornell and AFRL researchers, with * AFRL researchers able to participate in Cornell research projects, fa- cilitating technology ...approach to developing a science base and technology for supporting large-scale reliable distributed systems. First, so- lutions to core problems were
Engelken, Florian; Wassilew, Georgi I; Köhlitz, Torsten; Brockhaus, Sebastian; Hamm, Bernd; Perka, Carsten; Diederichs, und Gerd
2014-01-01
The purpose of this study was to quantify the performance of the Goutallier classification for assessing fatty degeneration of the gluteus muscles from magnetic resonance (MR) images and to compare its performance to a newly proposed system. Eighty-four hips with clinical signs of gluteal insufficiency and 50 hips from asymptomatic controls were analyzed using a standard classification system (Goutallier) and a new scoring system (Quartile). Interobserver reliability and intraobserver repeatability were determined, and accuracy was assessed by comparing readers' scores with quantitative estimates of the proportion of intramuscular fat based on MR signal intensities (gold standard). The existing Goutallier classification system and the new Quartile system performed equally well in assessing fatty degeneration of the gluteus muscles, both showing excellent levels of interrater and intrarater agreement. While the Goutallier classification system has the advantage of being widely known, the benefit of the Quartile system is that it is based on more clearly defined grades of fatty degeneration. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Castellarin, A.; Montanari, A.; Brath, A.
2002-12-01
The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km 2) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and it is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.
Downs, Stephen; Marquez, Jodie; Chiarelli, Pauline
2013-06-01
What is the intra-rater and inter-rater relative reliability of the Berg Balance Scale? What is the absolute reliability of the Berg Balance Scale? Does the absolute reliability of the Berg Balance Scale vary across the scale? Systematic review with meta-analysis of reliability studies. Any clinical population that has undergone assessment with the Berg Balance Scale. Relative intra-rater reliability, relative inter-rater reliability, and absolute reliability. Eleven studies involving 668 participants were included in the review. The relative intrarater reliability of the Berg Balance Scale was high, with a pooled estimate of 0.98 (95% CI 0.97 to 0.99). Relative inter-rater reliability was also high, with a pooled estimate of 0.97 (95% CI 0.96 to 0.98). A ceiling effect of the Berg Balance Scale was evident for some participants. In the analysis of absolute reliability, all of the relevant studies had an average score of 20 or above on the 0 to 56 point Berg Balance Scale. The absolute reliability across this part of the scale, as measured by the minimal detectable change with 95% confidence, varied between 2.8 points and 6.6 points. The Berg Balance Scale has a higher absolute reliability when close to 56 points due to the ceiling effect. We identified no data that estimated the absolute reliability of the Berg Balance Scale among participants with a mean score below 20 out of 56. The Berg Balance Scale has acceptable reliability, although it might not detect modest, clinically important changes in balance in individual subjects. The review was only able to comment on the absolute reliability of the Berg Balance Scale among people with moderately poor to normal balance. Copyright © 2013 Australian Physiotherapy Association. Published by .. All rights reserved.
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1977-01-01
The author has identified the following significant results: (1) Goals of the irrigated lands project were addressed by the design and implementation of a multiphase sampling scheme that was founded on the utilization of a LANDSAT-based remote sensing system. (2) The synoptic coverage of LANDSAT and the eighteen day orbit cycle allowed the project to study agricultural test sites in a variety of environmental regions and monitor the development of crops throughout the major growing season. (3) The capability to utilize multidate imagery is crucial to the reliable estimation of irrigated acreage in California where multiple cropping is widespread and current estimation systems must rely on single data survey techniques. (4) In addition, the magnitude of agricultural acreage in California makes estimation by conventional methods impossible.
Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.
NASA Astrophysics Data System (ADS)
Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.
2006-01-01
This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation which can influence in some degree the parameter estimation. The results of our analysis are: 1) For four of the systems, namely: TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup lesser than corresponding theoretical predictions, but for all mentioned systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has proved generally that with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered as the consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of GDE given in Paper I and in the present study indicate that in the light-curve analysis one can apply the recent theoretical predictions of GDE with high confidence for stars with both convective and radiative envelopes.
A closed-loop anesthetic delivery system for real-time control of burst suppression
NASA Astrophysics Data System (ADS)
Liberman, Max Y.; Ching, ShiNung; Chemali, Jessica; Brown, Emery N.
2013-08-01
Objective. There is growing interest in using closed-loop anesthetic delivery (CLAD) systems to automate control of brain states (sedation, unconsciousness and antinociception) in patients receiving anesthesia care. The accuracy and reliability of these systems can be improved by using as control signals electroencephalogram (EEG) markers for which the neurophysiological links to the anesthetic-induced brain states are well established. Burst suppression, in which bursts of electrical activity alternate with periods of quiescence or suppression, is a well-known, readily discernible EEG marker of profound brain inactivation and unconsciousness. This pattern is commonly maintained when anesthetics are administered to produce a medically-induced coma for cerebral protection in patients suffering from brain injuries or to arrest brain activity in patients having uncontrollable seizures. Although the coma may be required for several hours or days, drug infusion rates are managed inefficiently by manual adjustment. Our objective is to design a CLAD system for burst suppression control to automate management of medically-induced coma. Approach. We establish a CLAD system to control burst suppression consisting of: a two-dimensional linear system model relating the anesthetic brain level to the EEG dynamics; a new control signal, the burst suppression probability (BSP) defining the instantaneous probability of suppression; the BSP filter, a state-space algorithm to estimate the BSP from EEG recordings; a proportional-integral controller; and a system identification procedure to estimate the model and controller parameters. Main results. We demonstrate reliable performance of our system in simulation studies of burst suppression control using both propofol and etomidate in rodent experiments based on Vijn and Sneyd, and in human experiments based on the Schnider pharmacokinetic model for propofol. Using propofol, we further demonstrate that our control system reliably tracks changing target levels of burst suppression in simulated human subjects across different epidemiological profiles. Significance. Our results give new insights into CLAD system design and suggest a control-theory framework to automate second-to-second control of burst suppression for management of medically-induced coma.
Ambulatory estimation of foot placement during walking using inertial sensors.
Martin Schepers, H; van Asseldonk, Edwin H F; Baten, Chris T M; Veltink, Peter H
2010-12-01
This study proposes a method to assess foot placement during walking using an ambulatory measurement system consisting of orthopaedic sandals equipped with force/moment sensors and inertial sensors (accelerometers and gyroscopes). Two parameters, lateral foot placement (LFP) and stride length (SL), were estimated for each foot separately during walking with eyes open (EO), and with eyes closed (EC) to analyze if the ambulatory system was able to discriminate between different walking conditions. For validation, the ambulatory measurement system was compared to a reference optical position measurement system (Optotrak). LFP and SL were obtained by integration of inertial sensor signals. To reduce the drift caused by integration, LFP and SL were defined with respect to an average walking path using a predefined number of strides. By varying this number of strides, it was shown that LFP and SL could be best estimated using three consecutive strides. LFP and SL estimated from the instrumented shoe signals and with the reference system showed good correspondence as indicated by the RMS difference between both measurement systems being 6.5 ± 1.0 mm (mean ± standard deviation) for LFP, and 34.1 ± 2.7 mm for SL. Additionally, a statistical analysis revealed that the ambulatory system was able to discriminate between the EO and EC condition, like the reference system. It is concluded that the ambulatory measurement system was able to reliably estimate foot placement during walking. Copyright © 2010 Elsevier Ltd. All rights reserved.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.
Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter
2016-01-01
Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
A Bayesian Account of Visual–Vestibular Interactions in the Rod-and-Frame Task
de Brouwer, Anouk J.; Medendorp, W. Pieter
2016-01-01
Abstract Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject’s head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities. PMID:27844055
Intelligence Assessment Instruments in Adult Prison Populations: A Systematic Review.
van Esch, A Y M; Denzel, A D; Scherder, E J A; Masthoff, E D M
2017-10-01
Detection of intellectual disability (ID) in the penitentiary system is important for the following reasons: (a) to provide assistance to people with ID in understanding their legal rights and court proceedings; (b) to facilitate rehabilitation programs tailored to ID patients, which improves the enhancement of their quality of life and reduces their risk of reoffending; and (c) to provide a reliable estimate of the risk of offence recidivism. It requires a short assessment instrument that provides a reliable estimation of a person's intellectual functioning at the earliest possible stage of this process. The aim of this systematic review is (a) to provide an overview of recent short assessment instruments that provide a full-scale IQ score in adult prison populations and (b) to achieve a quality measurement of the validation studies regarding these instruments to determine which tests are most feasible in this target population. The Preferred Reporting Items for Systematic reviews and Meta-Analyses Statement is used to ensure reliability. The Satz-Mögel, an item-reduction short form of the Wechsler Adult Intelligence Scale, shows the highest correlation with the golden standard and is described to be most reliable. Nevertheless, when it comes to applicability in prison populations, the shorter and less verbal Quick Test can be preferred over others. Without affecting these conclusions, major limitations emerge from the present systematic review, which give rise to several important recommendations for further research.
Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K
2016-04-01
Sex estimation is considered as one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods are still very imperative in identification process in spite of the advent and accomplishment of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated to derive as well as revise the available population data. These methods however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches; morphological, metric, molecular and radiographic methods in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones and hence, such direct methods of sex estimation are considered to be more reliable than the other methods. Geometric morphometric (GM) method and Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid methods and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation as well as re-evaluation of the existing ones will continue in the endeavour of forensic researchers for more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.