Science.gov

Sample records for reliability physics-of-failure based

  1. Prediction of reliability on thermoelectric module through accelerated life test and Physics-of-failure

    NASA Astrophysics Data System (ADS)

    Choi, Hyoung-Seuk; Seo, Won-Seon; Choi, Duck-Kyun

    2011-09-01

A thermoelectric cooling module (TEM) is an electric device subject to mechanical stress caused by the temperature gradient across it. This makes the structure of the TEM vulnerable from a reliability standpoint, yet its reliability has received little study. As the use of thermoelectric cooling devices grows, so does the need for life prediction and improvement. In this paper, we investigated the life distribution and shape parameter of the TEM through an accelerated life test (ALT), and we discussed how to extend the life of the TEM through physics-of-failure analysis. The ALT results showed that the thermoelectric cooling module follows a Weibull distribution with a shape parameter of 3.6. The acceleration model is Coffin-Manson with a material constant of 1.8.
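The abstract's two quantitative results, a Weibull life distribution with shape parameter 3.6 and a Coffin-Manson acceleration model with material constant 1.8, can be sketched in a few lines. The characteristic life and temperature swings below are hypothetical illustration values, not the paper's:

```python
import math

def weibull_reliability(t, eta, beta=3.6):
    """R(t) = exp(-(t/eta)**beta); beta = 3.6 is the shape parameter from the ALT."""
    return math.exp(-((t / eta) ** beta))

def coffin_manson_af(dT_test, dT_use, m=1.8):
    """Thermal-cycling acceleration factor AF = (dT_test/dT_use)**m, with m = 1.8."""
    return (dT_test / dT_use) ** m

# Hypothetical stress levels: a 100 K swing in test vs. a 40 K swing in use.
af = coffin_manson_af(dT_test=100.0, dT_use=40.0)
# Estimated cycles to failure in use = cycles to failure in test * af.
```

At `t = eta`, the Weibull reliability is `exp(-1)` regardless of the shape parameter, which is why `eta` is called the characteristic life.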

  2. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

A novel computational and inference framework of the physics-of-failure (PoF) reliability modeling for complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that the system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate, or completely redefine their role in the analysis. This property of agents, together with the ability to model interacting failure mechanisms of the system elements, makes the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.

  3. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.

  4. Reliability-based design optimization using efficient global reliability analysis.

    SciTech Connect

    Bichon, Barron J.; Mahadevan, Sankaran; Eldred, Michael Scott

    2010-05-01

    Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.

  5. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lesser computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. 
A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  6. Reliability-based pricing of electricity service

    SciTech Connect

    Hagazy, Y.A.

    1993-01-01

This research has two objectives: (a) to develop a price structure that unbundles electricity service by reliability levels, and (b) to analyze the implications of such a structure on economic welfare, system operation, load management, and energy conservation. The authors developed a pricing mechanism for electricity service that combines priority (reliability differentiation) pricing with real-time Ramsey-type pricing. The electric utility is assumed to be a single welfare-maximizing firm able to set and communicate prices instantly. At times of supply shortage, the utility has direct control over customer loads and follows a rationing method among customers willing to accept power interruptions. Customers are therefore given the choice either to be served with a high-reliability "firm" service or to be subject to interruption. To encourage customers to make rational reliability choices, a payment/compensation mechanism was integrated into the welfare-maximization model. To account for uncertainties associated with the operation of electric power systems, a stochastic production cost simulation is also integrated with the model. The stochastic production cost simulation yields estimates of the expected production cost, marginal costs, and system reliability level at total demand. The authors examine the welfare gains and the energy and reserve savings possible under different pricing schemes. The results show that reliability-based pricing yields higher economic efficiency and greater energy and power savings than both spot and Ramsey pricing when imperfect system reliability is considered. The implication of this research is therefore that reliability-based pricing provides a feasible solution for electric utilities as a substitute for power purchases and/or new capacity investment.

  7. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

A stochastic model has been developed to describe the fatigue process probabilistically by assuming a varying hazard rate. This stochastic model can be used to obtain the desired probability of a crack of a certain length at a given location after a certain number of cycles or amount of time. Quantitative estimation of the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective, fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.

  8. A Reliability-Based Track Fusion Algorithm

    PubMed Central

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

Common track fusion algorithms in multi-sensor systems have several defects: serious imbalances between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve the information quality, outliers in the local tracks are eliminated first. Then the reliability of the local tracks is calculated, and the local tracks with high reliability are chosen for the state estimation fusion. In contrast to existing methods, TFR reduces the high fusion errors at the inflection points of system tracks and obtains high accuracy with less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments. PMID:25950174
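The core idea of reliability-weighted fusion can be sketched as follows. This is a minimal illustration, not the paper's exact TFR algorithm: local track state estimates are combined with weights proportional to each sensor's reliability score, after discarding low-reliability tracks. The threshold and scores are hypothetical:

```python
def fuse_tracks(states, reliabilities, threshold=0.5):
    """Fuse local track state vectors, weighting by sensor reliability.

    Tracks whose reliability falls below the threshold are discarded before
    fusion; the remaining reliabilities are normalized into weights.
    """
    kept = [(s, r) for s, r in zip(states, reliabilities) if r >= threshold]
    total = sum(r for _, r in kept)
    dim = len(kept[0][0])
    return [sum(r * s[i] for s, r in kept) / total for i in range(dim)]

# Three local tracks reporting (position, velocity); the third is unreliable
# and is excluded from the fused estimate.
fused = fuse_tracks(states=[(10.0, 1.0), (10.4, 1.2), (14.0, 3.0)],
                    reliabilities=[0.9, 0.8, 0.2])
```

Note the design choice: dropping low-reliability tracks entirely (rather than merely down-weighting them) is what suppresses the influence of poor sensors in dense environments.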

  9. Reliability Evaluation of Next Generation Inverter: Cooperative Research and Development Final Report, CRADA Number CRD-12-478

    SciTech Connect

    Paret, Paul

    2016-10-01

    The National Renewable Energy Laboratory (NREL) will conduct thermal and reliability modeling on three sets of power modules for the development of a next generation inverter for electric traction drive vehicles. These modules will be chosen by General Motors (GM) to represent three distinct technological approaches to inverter power module packaging. Likely failure mechanisms will be identified in each package and a physics-of-failure-based reliability assessment will be conducted.

  10. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
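The abstract's point that non-exponential holding times require a semi-Markov model can be illustrated with a toy simulation. The three-state chain and Weibull parameters below are hypothetical stand-ins, not the measured IBM 3081 data:

```python
import random

# In a semi-Markov process, the holding time in each state may follow an
# arbitrary distribution (here Weibull); a Markov process would force all
# holding times to be exponential. States and parameters are hypothetical.
random.seed(0)

transition = {"normal": "error", "error": "recovery", "recovery": "normal"}
holding = {
    "normal": lambda: random.weibullvariate(100.0, 0.7),   # heavy-tailed dwell
    "error": lambda: random.weibullvariate(1.0, 2.0),
    "recovery": lambda: random.weibullvariate(5.0, 1.5),
}

def simulate(t_end, state="normal"):
    """Simulate the chain until time t_end; return (state, dwell) history."""
    t, history = 0.0, []
    while t < t_end:
        dwell = holding[state]()
        history.append((state, dwell))
        t += dwell
        state = transition[state]   # deterministic successor in this toy chain
    return history

hist = simulate(500.0)
```

Fitting an exponential to the "normal" dwells generated here would badly misestimate their tail, which is the practical reason the paper's sensitivity analysis contrasts the semi-Markov and Markov assumptions.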

  11. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
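A Monte Carlo reliability estimate of the kind described above reduces to sampling the random variables and counting violations of the buckling limit state g = strength - load. The lognormal strength and normal load parameters below are hypothetical, not the report's values:

```python
import random

random.seed(42)

def mc_failure_probability(n=100_000):
    """Crude Monte Carlo estimate of P(strength - load < 0)."""
    fails = 0
    for _ in range(n):
        strength = random.lognormvariate(mu=2.0, sigma=0.1)  # buckling strength
        load = random.normalvariate(mu=5.0, sigma=0.8)       # applied axial load
        if strength - load < 0.0:                            # limit state violated
            fails += 1
    return fails / n

pf = mc_failure_probability()
```

This brute-force approach is why the report substitutes a second-order response surface for the expensive buckling analysis: each Monte Carlo sample would otherwise require a full structural solve.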

  12. A reliability measure of protein-protein interactions and a reliability measure-based search engine.

    PubMed

    Park, Byungkyu; Han, Kyungsook

    2010-02-01

Many methods developed for estimating the reliability of protein-protein interactions are based on the topology of protein-protein interaction networks. This paper describes a new reliability measure for protein-protein interactions which does not rely on the topology of protein interaction networks, but instead expresses biological information on functional roles, sub-cellular localisations, and protein classes as a scoring schema. The new measure is useful for filtering out many spurious interactions, as well as for estimating the reliability of protein interaction data. In particular, the reliability measure can be used to search databases for protein-protein interactions with a desired reliability. The reliability-based search engine is available at http://yeast.hpid.org. We believe this is the first search engine for interacting proteins made available to the public. The search engine and the reliability measure of protein interactions should provide useful information for determining which proteins to focus on.

  13. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  14. Refining network reconstruction based on functional reliability.

    PubMed

    Zhang, Yunjun; Ouyang, Qi; Geng, Zhi

    2014-07-21

    Reliable functioning is crucial for the survival and development of the genetic regulatory networks in living cells and organisms. This functional reliability is an important feature of the networks and reflects the structural features that have been embedded in the regulatory networks by evolution. In this paper, we integrate this reliability into network reconstruction. We introduce the concept of dependency probability to measure the dependency of functional reliability on network edges. We also propose a method to estimate the dependency probability and select edges with high contributions to functional reliability. We use two real examples, the regulatory network of the cell cycle of the budding yeast and that of the fission yeast, to demonstrate that the proposed method improves network reconstruction. In addition, the dependency probability is robust in calculation and can be easily implemented in practice.

  15. System Reliability for LED-Based Products

    SciTech Connect

    Davis, J Lynn; Mills, Karmann; Lamvik, Michael; Yaga, Robert; Shepherd, Sarah D; Bittle, James; Baldasaro, Nick; Solano, Eric; Bobashev, Georgiy; Johnson, Cortina; Evans, Amy

    2014-04-07

Results from accelerated life tests (ALT) on mass-produced, commercially available 6” downlights are reported along with results from commercial LEDs. The luminaires capture many of the design features found in modern luminaires. In general, a systems perspective is required to understand the reliability of these devices, since LED failure is rare; components such as drivers, lenses, and reflectors are more likely to impact luminaire reliability than the LEDs themselves.

  16. Optimum structural design based on reliability analysis

    NASA Technical Reports Server (NTRS)

    Heer, E.; Shinozuka, M.; Yang, J. N.

    1970-01-01

Proof-load testing improves statistical confidence in the estimate of reliability; numerical examples indicate a definite advantage of the proof-load approach in terms of savings in structural weight. The cost of establishing the statistical distribution of strength of the structural material is also introduced into the cost formulation.
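The benefit of a proof-load test can be sketched as conditioning the strength distribution on survival of the proof load: a component that survives load L_p is known to have strength above L_p, so R = P(S > L | S > L_p). The normal-strength parameters below are illustrative, not the paper's:

```python
import math

def normal_sf(x, mu, sigma):
    """Survival function P(X > x) of a normal distribution."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))

def reliability_after_proof(load, proof, mu, sigma):
    """Reliability conditioned on surviving the proof load:
    P(S > load | S > proof) = P(S > max(load, proof)) / P(S > proof)."""
    return normal_sf(max(load, proof), mu, sigma) / normal_sf(proof, mu, sigma)

mu, sigma = 100.0, 10.0                         # hypothetical strength stats
r_before = normal_sf(110.0, mu, sigma)          # reliability with no proof test
r_after = reliability_after_proof(110.0, proof=105.0, mu=mu, sigma=sigma)
```

The conditional reliability is always at least the unconditional one, which is the statistical-confidence gain the abstract trades against the cost of testing.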

  17. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation involve uncertainties, cannot reflect the reliability status of the RPS dynamically, and offer little support for maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the safety-critical digital RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed with the proposed method. The results show that the proposed method can estimate RPS reliability effectively and provides support for maintenance and troubleshooting of the digital RPS.

  18. Optimal reliability-based planning of experiments for POD curves

    SciTech Connect

    Soerensen, J.D.; Faber, M.H.; Kroon, I.B.

    1995-12-31

Optimal planning of crack detection tests is considered. The tests are used to update the information on the reliability of inspection techniques modeled by probability of detection (P.O.D.) curves. It is shown how cost-optimal and reliability-based test plans can be obtained using First Order Reliability Methods in combination with life-cycle cost-optimal inspection and maintenance planning. The methodology is based on preposterior analysis from Bayesian decision theory. An illustrative example is shown.
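A POD curve of the kind being updated here is often parameterized with a log-logistic (log-odds) form; that model choice and the parameter values below are generic illustrations, not necessarily what the paper uses:

```python
import math

def pod(a, mu=math.log(2.0), sigma=0.5):
    """Log-logistic POD model: POD(a) = 1 / (1 + exp(-(ln a - mu)/sigma)).

    a is crack size; exp(mu) is the size detected with 50% probability,
    and sigma controls how sharply detection improves with size.
    """
    return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

# With mu = ln(2), a 2.0 mm crack sits exactly at the 50% detection point.
p50 = pod(2.0)
```

Test planning then amounts to choosing crack sizes and sample counts that best tighten the posterior on (mu, sigma) per unit cost.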

  19. Reliability-based Assessment of Stability of Slopes

    NASA Astrophysics Data System (ADS)

    Hsein Juang, C.; Zhang, J.; Gong, W.

    2015-09-01

    Multiple sources of uncertainties often exist in the evaluation of slope stability. When assessing stability of slopes in the face of uncertainties, it is desirable, and sometimes necessary, to adopt reliability-based approaches that consider these uncertainties explicitly. This paper focuses on the practical procedures developed recently for the reliability-based assessment of slope stability. The statistical characterization of model uncertainty and parameter uncertainty are first described, followed by an evaluation of the failure probability of a slope corresponding to a single slip surface, and the system failure probability. The availability of site-specific information then makes it possible to update the reliability of the slope through the Bayes’ theorem. Furthermore, how to perform reliability-based design when the statistics of random variables cannot be determined accurately is also discussed. Finally, case studies are presented to illustrate the benefit of performing reliability-based design and the procedure for conducting reliability-based robust design when the statistics of the random variables are incomplete.

  20. Reliability-based lifetime maintenance of aging highway bridges

    NASA Astrophysics Data System (ADS)

    Enright, Michael P.; Frangopol, Dan M.

    2000-06-01

As the nation's infrastructure continues to age, the cost of maintaining it at an acceptable safety level continues to increase. In the United States, about one of every three bridges is rated structurally deficient and/or functionally obsolete. It will require about $80 billion to eliminate the current backlog of bridge deficiencies and maintain repair levels. Unfortunately, the financial resources allocated for these activities fall extremely short of the demand. Although several existing and emerging NDT techniques are available to gather inspection data, current maintenance planning decisions for deficient bridges are based on data from subjective condition assessments and do not consider the reliability of bridge components and systems. Recently, reliability-based optimum maintenance planning strategies have been developed. They can be used to predict inspection and repair times to achieve minimum life-cycle cost of deteriorating structural systems. In this study, a reliability-based methodology which takes into account loading randomness and history, and randomness in strength and degradation resulting from aggressive environmental factors, is used to predict the time-dependent reliability of aging highway bridges. A methodology for incorporating inspection data into reliability predictions is also presented. Finally, optimal lifetime maintenance strategies are identified, in which optimal inspection/repair times are found based on minimum expected life-cycle cost under prescribed reliability constraints. The influence of the discount rate on optimum solutions is evaluated.

  1. Developing safety performance functions incorporating reliability-based risk measures.

    PubMed

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions.
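The limit state used here, g = available sight distance - stopping sight distance, gives a simple closed form for P(nc) when both distances are treated as normal, in which case the FORM result is exact for the linear limit state. The means and standard deviations below are hypothetical:

```python
import math

def p_nc(mu_avail, sd_avail, mu_demand, sd_demand):
    """Probability of non-compliance for g = available - demand sight distance.

    With independent normal supply and demand, the reliability index is
    beta = (mu_avail - mu_demand) / sqrt(sd_avail**2 + sd_demand**2)
    and P(nc) = Phi(-beta).
    """
    beta = (mu_avail - mu_demand) / math.hypot(sd_avail, sd_demand)
    return 0.5 * math.erfc(beta / math.sqrt(2.0)), beta

# Hypothetical horizontal-curve values, in meters.
pnc, beta = p_nc(mu_avail=180.0, sd_avail=15.0, mu_demand=150.0, sd_demand=20.0)
```

In the paper's setting this P(nc) then enters the negative binomial SPF as a covariate, linking the design-reliability measure to predicted collision frequency.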

  2. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.
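The stress-strength interference step can be sketched directly: with normally distributed fatigue stress and fatigue strength, reliability is R = P(strength > stress) = Phi(beta). The numbers below are illustrative, not the paper's finite-element results:

```python
import math

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference for independent normal variables:
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2),
    R = Phi(beta)."""
    beta = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * math.erfc(-beta / math.sqrt(2.0))

# Hypothetical hinge values in MPa: fatigue strength vs. maximum fatigue stress.
R = interference_reliability(mu_strength=300.0, sd_strength=20.0,
                             mu_stress=220.0, sd_stress=15.0)
```

An optimizer of the kind described would treat the hinge geometry as design variables and constrain R (or beta) while minimizing some cost such as stage size or stiffness loss.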

  3. Multi-mode reliability-based design of horizontal curves.

    PubMed

    Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed

    2016-08-01

    Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance.
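With two noncompliance modes, a system-level P(nc) can be bracketed by standard series-system bounds; the per-mode probabilities below are hypothetical, and the independence formula is shown only as a reference point (the paper's modes share random variables and are not independent):

```python
# Series-system bounds for two noncompliance modes (sight distance, skidding):
#   max(p1, p2) <= P_sys <= min(1, p1 + p2)
# with equality P_sys = 1 - (1 - p1)(1 - p2) if the modes were independent.
p_sight, p_skid = 0.04, 0.02            # hypothetical per-mode probabilities

lower = max(p_sight, p_skid)
upper = min(1.0, p_sight + p_skid)
p_independent = 1.0 - (1.0 - p_sight) * (1.0 - p_skid)
```

The gap between the single worst mode (the lower bound) and the sum (the upper bound) is exactly what a single-mode analysis ignores, which is the abstract's motivation for the system approach.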

  4. Indel Reliability in Indel-Based Phylogenetic Inference

    PubMed Central

    Ashkenazy, Haim; Cohen, Ofir; Pupko, Tal; Huchon, Dorothée

    2014-01-01

    It is often assumed that it is unlikely that the same insertion or deletion (indel) event occurred at the same position in two independent evolutionary lineages, and thus, indel-based inference of phylogeny should be less subject to homoplasy compared with standard inference which is based on substitution events. Indeed, indels were successfully used to solve debated evolutionary relationships among various taxonomical groups. However, indels are never directly observed but rather inferred from the alignment and thus indel-based inference may be sensitive to alignment errors. It is hypothesized that phylogenetic reconstruction would be more accurate if it relied only on a subset of reliable indels instead of the entire indel data. Here, we developed a method to quantify the reliability of indel characters by measuring how often they appear in a set of alternative multiple sequence alignments. Our approach is based on the assumption that indels that are consistently present in most alternative alignments are more reliable compared with indels that appear only in a small subset of these alignments. Using simulated and empirical data, we studied the impact of filtering and weighting indels by their reliability scores on the accuracy of indel-based phylogenetic reconstruction. The new method is available as a web-server at http://guidance.tau.ac.il/RELINDEL/. PMID:25409663

  5. Reliability, Compliance, and Security in Web-Based Course Assessments

    ERIC Educational Resources Information Center

    Bonham, Scott

    2008-01-01

    Pre- and postcourse assessment has become a very important tool for education research in physics and other areas. The web offers an attractive alternative to in-class paper administration, but concerns about web-based administration include reliability due to changes in medium, student compliance rates, and test security, both question leakage…

  6. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  7. Intelligent computer based reliability assessment of multichip modules

    NASA Astrophysics Data System (ADS)

    Grosse, Ian R.; Katragadda, Prasanna; Bhattacharya, Sandeepan; Kulkarni, Sarang

    1994-04-01

    To deliver reliable Multichip Modules (MCMs) in the face of rapidly changing technology, computer-based tools are needed for predicting the thermal and mechanical behavior of various MCM package designs and for selecting the most promising design in terms of performance, robustness, and reliability. The design tool must be able to address new design technologies, manufacturing processes, novel materials, application criteria, and thermal environmental conditions. Reliability is one of the most important factors for determining design quality and hence must be a central consideration in the design of multichip module packages. Clearly, design engineers need computer-based simulation tools for rapid and efficient electrical, thermal, and mechanical modeling and optimization of advanced devices. For three-dimensional thermal and mechanical simulation of advanced devices, the finite element method (FEM) is increasingly becoming the numerical method of choice. FEM is a versatile and sophisticated numerical technique for solving the partial differential equations that describe the physical behavior of complex designs. AUTOTHERM(TM) is an MCM design tool developed by Mentor Graphics for Motorola, Inc. This tool performs thermal analysis of MCM packages using finite element analysis techniques. The tool uses the philosophy of object-oriented representation of components and simplified specification of boundary conditions for the thermal analysis, so that the user need not be an expert in finite element techniques. Different package types can be assessed and environmental conditions can be modeled. It also includes a detailed reliability module which allows the user to choose a desired failure mechanism (model). All the current tools perform thermal and/or stress analysis but do not address the issues of robustness and optimality of MCM designs, and the reliability prediction techniques are based on closed-form analytical models and can often fail to predict the cycles of failure (N

  8. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient to combine evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611

  9. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory.

    PubMed

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-18

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient to combine evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis due to the fact that the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods.
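
The combination step the abstract describes can be sketched as follows: reliability-weighted averaging of the basic belief assignments, then Dempster's rule on the averaged evidence. The weights below are illustrative stand-ins for the paper's distance/entropy-based reliabilities, and the fault hypotheses `F1`/`F2` are invented for the example.

```python
def weighted_average_bba(bbas, weights):
    """Discount conflicting evidence by averaging basic belief
    assignments (BBAs) with sensor-reliability weights."""
    total = sum(weights)
    keys = set().union(*bbas)
    return {k: sum(w * b.get(k, 0.0) for w, b in zip(weights, bbas)) / total
            for k in keys}

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose
    focal elements are frozensets of hypotheses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

F1, F2 = frozenset({"F1"}), frozenset({"F2"})
# Sensor 2 conflicts with sensors 1 and 3, so it gets a low weight.
reports = [{F1: 0.9, F2: 0.1}, {F1: 0.0, F2: 1.0}, {F1: 0.8, F2: 0.2}]
avg = weighted_average_bba(reports, weights=[1.0, 0.2, 1.0])
fused = dempster_combine(avg, avg)
print(fused[F1] > fused[F2])  # True: the conflicting report no longer dominates
```

Plain Dempster combination of the raw reports would assign zero mass to `F1` because sensor 2 rules it out completely; down-weighting the unreliable sensor before combining avoids that counterintuitive outcome.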

  10. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously determining the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima. The efficiency of this method

  11. Reliability-based analysis and design optimization for durability

    NASA Astrophysics Data System (ADS)

    Choi, Kyung K.; Youn, Byeng D.; Tang, Jun; Hardee, Edward

    2005-05-01

    In Army mechanical systems, fatigue caused by external and inertial transient loads over the service life often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subject to dynamic stresses and strains, is a compute-intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and considerable data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances cause the nondeterministic nature of the fatigue life of a mechanical component. Because uncertainty propagation to structural fatigue under transient dynamic loading is not only numerically complicated but also extremely computationally expensive, it is a challenging task to develop a structural durability-based design optimization process and a reliability analysis to ascertain whether the optimal design is reliable. The objective of this paper is to demonstrate an integrated CAD-based computer-aided engineering process to effectively carry out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. This paper shows preliminary results of reliability-based durability design optimization for the Army Stryker A-Arm.

  12. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  13. Efficient reliability-based design of mooring systems

    SciTech Connect

    Larsen, K.

    1996-12-31

    Uncertainties both in the environmentally induced load effects and in the strength of mooring line components make a rational design of mooring systems a complex task. The methods of structural reliability, taking these uncertainties into account, have been applied in an efficient probabilistic analysis procedure for the tension overload limit state for mooring lines. This paper outlines the philosophy and methodology for this procedure, followed by numerical examples of a turret moored ship. Both base case annual failure probabilities and results from a number of sensitivity analyses are presented. It is demonstrated that the reliability-based design procedure can be effectively utilized to quantify the safety against failure due to tension overload of moorings. The results of the case studies indicate that the largest uncertainties are associated with the distribution parameters of the chain link and steel wire rope segment tension capacity, and the modelling of the environment. The modelling of spreading angles between waves, wind and current vs. colinearity, and double-peaked vs. single-peaked wave spectrum models are key parameters in the reliability assessment.
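
The tension overload limit state described above can be illustrated with a simple Monte Carlo sketch of g = R - S, where R is line capacity and S the annual extreme tension. The lognormal/Gumbel choices and all parameter values below are assumptions for the example; the paper itself uses calibrated structural reliability methods, not this brute-force estimator.

```python
import math
import random

def annual_failure_probability(n=200_000, seed=1):
    """Monte Carlo estimate of P(g <= 0) for the mooring-line
    tension overload limit state g = R - S, with illustrative
    distributions: capacity R ~ lognormal, annual extreme
    tension S ~ Gumbel."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        R = rng.lognormvariate(math.log(10_000), 0.10)   # capacity, kN
        # Gumbel sample via the inverse CDF: mu - beta*ln(-ln U)
        U = rng.random()
        S = 6_000 - 600 * math.log(-math.log(U))         # annual max tension, kN
        failures += R - S <= 0
    return failures / n

pf = annual_failure_probability()
print(0 < pf < 0.05)  # a small but nonzero annual failure probability
```

A sensitivity study of the kind the paper reports would repeat this calculation while perturbing the distribution parameters of the capacity and load models to see which uncertainties drive the failure probability.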

  14. Limit states and reliability-based pipeline design. Final report

    SciTech Connect

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads which act on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. This work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3) and provides results of the reliability analyses that were undertaken in order to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples in order to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice. It has been structured so that, in the future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where to facilitate understanding of the LSD method, they have been included with the text.

  15. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  16. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  17. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system and component level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationship among the design variables based on physics, mechanics, material behavior models, interaction of different components and their respective disciplines such as structures, materials, fluid, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data, which can be updated, and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationship with other components has demonstrated itself to be a superior technique to conventional reliability approaches based on failure rates derived from similar equipment or simply expert judgment.

  18. Study of vertical breakwater reliability based on copulas

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Li, Jingjing; Li, Xue; Wei, Yong

    2016-04-01

    The reliability of a vertical breakwater is calculated using direct integration methods based on joint density functions. The horizontal and uplifting wave forces on the vertical breakwater can be well fitted by the lognormal and the Gumbel distributions, respectively. The joint distribution of the horizontal and uplifting wave forces is analyzed using different probabilistic distributions, including the bivariate logistic Gumbel distribution, the bivariate lognormal distribution, and three bivariate Archimedean copula functions constructed with different marginal distributions simultaneously. We use fully nested copulas to construct multivariate distributions taking into account related variables. Different goodness-of-fit tests are carried out to determine the best bivariate copula model for wave forces on a vertical breakwater. We show that a bivariate model constructed from the Frank copula, with lognormal and Gumbel marginal distributions for the horizontal wave force and uplifting pressure on a vertical breakwater, respectively, performs best in the reliability analysis. The results show that the failure probability of the vertical breakwater calculated by the multivariate density function is comparable to that given by the Joint Committee on Structural Safety methods. As copulas are suitable for constructing a bivariate or multivariate joint distribution, they have great potential in reliability analysis for other coastal structures.
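
The copula construction above can be sketched directly: a Frank copula couples the two marginal CDFs into a joint CDF, from which a joint exceedance probability follows by inclusion-exclusion. All parameter values (marginal parameters, theta, the force levels) are invented for illustration and are not the paper's fitted values.

```python
import math

def frank_copula(u, v, theta):
    """Frank copula CDF C(u, v); theta > 0 gives positive dependence."""
    num = (math.exp(-theta * u) - 1) * (math.exp(-theta * v) - 1)
    return -math.log(1 + num / (math.exp(-theta) - 1)) / theta

def gumbel_cdf(x, mu, beta):
    return math.exp(-math.exp(-(x - mu) / beta))

def lognorm_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# Illustrative marginals: horizontal wave force H ~ lognormal,
# uplifting force V ~ Gumbel (units arbitrary).
h, v = 900.0, 400.0
u = lognorm_cdf(h, mu=math.log(600.0), sigma=0.35)   # F_H(h)
w = gumbel_cdf(v, mu=250.0, beta=60.0)               # F_V(v)
# P(H > h and V > v) by inclusion-exclusion on the joint CDF.
p_joint_exceed = 1 - u - w + frank_copula(u, w, theta=4.0)
p_independent = (1 - u) * (1 - w)
print(p_joint_exceed > p_independent)  # positive dependence raises joint risk
```

The comparison with the independence assumption shows why the copula matters for breakwater reliability: horizontal and uplifting forces come from the same waves, so treating them as independent understates the probability of both being extreme at once.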

  19. Multi-objective reliability-based optimization with stochastic metamodels.

    PubMed

    Coelho, Rajan Filomeno; Bouillard, Philippe

    2011-01-01

    This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is carried out by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.

  20. Multisite reliability of MR-based functional connectivity.

    PubMed

    Noble, Stephanie; Scheinost, Dustin; Finn, Emily S; Shen, Xilin; Papademetris, Xenophon; McEwen, Sarah C; Bearden, Carrie E; Addington, Jean; Goodyear, Bradley; Cadenhead, Kristin S; Mirzakhanian, Heline; Cornblatt, Barbara A; Olvet, Doreen M; Mathalon, Daniel H; McGlashan, Thomas H; Perkins, Diana O; Belger, Aysenil; Seidman, Larry J; Thermenos, Heidi; Tsuang, Ming T; van Erp, Theo G M; Walker, Elaine F; Hamann, Stephan; Woods, Scott W; Cannon, Tyrone D; Constable, R Todd

    2017-02-01

    Recent years have witnessed an increasing number of multisite MRI functional connectivity (fcMRI) studies. While multisite studies provide an efficient way to accelerate data collection and increase sample sizes, especially for rare clinical populations, any effects of site or MRI scanner could ultimately limit power and weaken results. Little data exists on the stability of functional connectivity measurements across sites and sessions. In this study, we assess the influence of site and session on resting state functional connectivity measurements in a healthy cohort of traveling subjects (8 subjects scanned twice at each of 8 sites) scanned as part of the North American Prodrome Longitudinal Study (NAPLS). Reliability was investigated in three types of connectivity analyses: (1) seed-based connectivity with posterior cingulate cortex (PCC), right motor cortex (RMC), and left thalamus (LT) as seeds; (2) the intrinsic connectivity distribution (ICD), a voxel-wise connectivity measure; and (3) matrix connectivity, a whole-brain, atlas-based approach to assessing connectivity between nodes. Contributions to variability in connectivity due to subject, site, and day-of-scan were quantified and used to assess between-session (test-retest) reliability in accordance with Generalizability Theory. Overall, no major site, scanner manufacturer, or day-of-scan effects were found for the univariate connectivity analyses; instead, subject effects dominated relative to the other measured factors. However, summaries of voxel-wise connectivity were found to be sensitive to site and scanner manufacturer effects. For all connectivity measures, although subject variance was three times the site variance, the residual represented 60-80% of the variance, indicating that connectivity differed greatly from scan to scan independent of any of the measured factors (i.e., subject, site, and day-of-scan). Thus, for a single 5min scan, reliability across connectivity measures was poor (ICC=0
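
The test-retest reliability metric the study reports can be illustrated with a one-way random-effects ICC computed from a subjects-by-sessions table of connectivity values. This is a simplified stand-in for the full generalizability-theory decomposition (which also separates site and day-of-scan effects), and the toy data below are invented; unlike the study's single-scan finding, subject variance dominates here, so the ICC comes out high.

```python
def icc_one_way(data):
    """One-way random-effects ICC(1,1): the share of total variance
    attributable to between-subject differences."""
    n = len(data)          # subjects
    k = len(data[0])       # sessions per subject
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - row_means[i]) ** 2
                    for i, row in enumerate(data) for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Toy table: 4 subjects, 3 sessions of a connectivity measure each.
table = [
    [0.50, 0.55, 0.48],
    [0.30, 0.35, 0.28],
    [0.42, 0.38, 0.45],
    [0.60, 0.58, 0.65],
]
icc = icc_one_way(table)
print(0.8 < icc < 1.0)  # between-subject spread dominates scan noise here
```

In the study's terms, a large residual (scan-to-scan) variance relative to subject variance would drive this ratio down, which is exactly the poor single-scan reliability the abstract reports.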

  1. Stochastic structural and reliability based optimization of tuned mass damper

    NASA Astrophysics Data System (ADS)

    Mrabet, E.; Guedri, M.; Ichchou, M. N.; Ghanmi, S.

    2015-08-01

    The purpose of the current work is to present and discuss a technique for optimizing the parameters of a vibration absorber in the presence of uncertain bounded structural parameters. The technique used in the optimization is an interval extension based on a Taylor expansion of the objective function. The technique permits the transformation of the problem, initially non-deterministic, into two independent deterministic sub-problems. Two optimization strategies are considered: the Stochastic Structural Optimization (SSO) and the Reliability Based Optimization (RBO). It has been demonstrated through two different structures that the technique is valid for the SSO problem, even for high levels of uncertainty, but it is less suitable for the RBO problem, especially when considering high levels of uncertainty.

  2. Quantifying neurotransmission reliability through metrics-based information analysis.

    PubMed

    Brasselet, Romain; Johansson, Roland S; Arleo, Angelo

    2011-04-01

    We set forth an information-theoretical measure to quantify neurotransmission reliability while taking into full account the metrical properties of the spike train space. This parametric information analysis relies on similarity measures induced by the metrical relations between neural responses as spikes flow in. Thus, in order to assess the entropy, the conditional entropy, and the overall information transfer, this method does not require any a priori decoding algorithm to partition the space into equivalence classes. It therefore allows the optimal parameters of a class of distances to be determined with respect to information transmission. To validate the proposed information-theoretical approach, we study precise temporal decoding of human somatosensory signals recorded using microneurography experiments. For this analysis, we employ a similarity measure based on the Victor-Purpura spike train metrics. We show that with appropriate parameters of this distance, the relative spike times of the mechanoreceptors' responses convey enough information to perform optimal discrimination--defined as maximum metrical information and zero conditional entropy--of 81 distinct stimuli within 40 ms of the first afferent spike. The proposed information-theoretical measure proves to be a suitable generalization of Shannon mutual information in order to consider the metrics of temporal codes explicitly. It allows neurotransmission reliability to be assessed in the presence of large spike train spaces (e.g., neural population codes) with high temporal precision.
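
The Victor-Purpura metric used above has a standard edit-distance formulation: transforming one spike train into another by deleting or inserting spikes (cost 1 each) or shifting a spike in time (cost q per unit time). A minimal dynamic-programming sketch, with invented spike times:

```python
def victor_purpura(a, b, q):
    """Victor-Purpura distance between sorted spike-time lists a and b:
    the minimal cost of editing a into b, where insertions and
    deletions cost 1 and shifting a spike by dt costs q*|dt|.
    The parameter q sets the temporal precision of the code."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)          # delete all remaining spikes of a
    for j in range(1, m + 1):
        d[0][j] = float(j)          # insert all remaining spikes of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1,                          # delete a[i-1]
                          d[i][j - 1] + 1,                          # insert b[j-1]
                          d[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]))  # shift
    return d[n][m]

t1 = [10.0, 25.0, 40.0]   # spike times in ms (illustrative)
t2 = [12.0, 25.0]
print(victor_purpura(t1, t2, q=0.0))  # 1.0: with q=0 only spike counts matter
print(victor_purpura(t1, t2, q=0.5))  # 2.0: shift 10->12 (1.0) + delete 40 (1.0)
```

Sweeping q and maximizing the resulting metrical information is how the optimal temporal precision of the distance family is determined in the analysis above.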

  3. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... STANDARDS Pt. 238, App. E Appendix E to Part 238—General Principles of Reliability-Based Maintenance... minimum total cost, including maintenance costs and the costs of residual failures. (b) Reliability-based... result of the undetected failure of a hidden function. (c) In a reliability-based maintenance...

  4. RELIABILITY BASED DESIGN OF FIXED FOUNDATION WIND TURBINES

    SciTech Connect

    Nichols, R.

    2013-10-14

    Recent analysis of offshore wind turbine foundations using both applicable API and IEC standards shows that the total load demand from wind and waves is greatest in wave-driven storms. Further, analysis of overturning moment (OTM) loads reveals that impact forces exerted by breaking waves are the largest contributor to OTM in big storms at wind speeds above the operating range of 25 m/s. Currently, no codes or standards for offshore wind power generators have been adopted by the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) for use on the Outer Continental Shelf (OCS). Current design methods based on allowable stress design (ASD) lump the uncertainty in the variation of loads transferred to the foundation and in the geotechnical capacity of the soil and rock to support those loads into a single factor of safety. Sources of uncertainty include spatial and temporal variation of engineering properties, reliability of property measurements, applicability and sufficiency of sampling and testing methods, modeling errors, and variability of estimated load predictions. In ASD these sources of variability are generally given qualitative rather than quantitative consideration. The IEC 61400-3 design standard for offshore wind turbines is based on ASD methods. Load and resistance factor design (LRFD) methods are being increasingly used in the design of structures. Uncertainties such as those listed above can be included quantitatively in the LRFD process. In LRFD, load and resistance factors are statistically based. This type of analysis recognizes that there is always some probability of failure and enables the probability of failure to be quantified. This paper presents an integrated approach consisting of field observations and numerical simulation to establish the distribution of loads from breaking waves to support the LRFD of fixed offshore foundations.
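
The ASD-versus-LRFD contrast above reduces to two inequalities: ASD scales the total unfactored load by one safety factor, while LRFD applies separate statistically based factors to each load and to the resistance. A minimal sketch; every factor and load value below is illustrative, not taken from API, IEC 61400-3, or any other code.

```python
def lrfd_ok(dead, wave, wind, capacity,
            gamma_d=1.1, gamma_w=1.35, phi=0.7):
    """LRFD check: factored load demand must not exceed factored
    resistance. Each load type gets its own load factor; the
    resistance gets a reduction factor phi."""
    demand = gamma_d * dead + gamma_w * (wave + wind)
    return demand <= phi * capacity

def asd_ok(dead, wave, wind, capacity, fs=2.0):
    """ASD check: total unfactored load times a single blanket
    factor of safety must not exceed nominal capacity."""
    return fs * (dead + wave + wind) <= capacity

# Overturning moments (units arbitrary); the breaking-wave load dominates.
loads = dict(dead=5.0, wave=40.0, wind=8.0)
print(lrfd_ok(**loads, capacity=105.0))  # True
print(asd_ok(**loads, capacity=105.0))   # False
```

With these (hypothetical) numbers the LRFD check passes a foundation that the blanket ASD safety factor rejects, which is the kind of quantified trade-off the paper's breaking-wave load distributions are meant to support.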

  5. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  6. Reliable Freestanding Position-Based Routing in Highway Scenarios

    PubMed Central

    Galaviz-Mosqueda, Gabriel A.; Aquino-Santos, Raúl; Villarreal-Reyes, Salvador; Rivera-Rodríguez, Raúl; Villaseñor-González, Luis; Edwards, Arthur

    2012-01-01

    Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering freespace propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model. PMID:23202159

  7. Reliability-based optimum inspection and maintenance procedures. [for engines

    NASA Technical Reports Server (NTRS)

    Nanagud, S.; Uppaluri, B.

    1975-01-01

    The development of reliability-based optimum inspection and maintenance schedules for engines requires an understanding of the fatigue behavior of the engines. Critical areas of the engine structure prone to fatigue damage are usually identified beforehand or after the fleet has been put into operation. In these areas, fatigue cracks initiate after several flight hours, and these cracks grow in length until failure takes place when they attain critical lengths. Crack initiation time and growth rate are considered to be random variables. Usually, the inspection (fatigue) or test data from similar engines are used as prior distributions. The existing state of the art is to ignore the different lengths of cracks observed at various inspections and to consider only the fact that a crack existed (or did not exist) at the time of inspection. In this paper, a procedure has been developed to obtain the probability of finding a crack of a given size at a certain time if the probability distributions for crack initiation and rates of growth are known. Application of the developed stochastic models to devise optimum procedures for inspection and maintenance is also discussed.
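    The detection-probability calculation described above can be sketched numerically by Monte Carlo. The distributions below (lognormal initiation time, lognormal growth rate, linear growth) and all parameter values are illustrative placeholders, not the paper's fitted priors:

```python
import math
import random

def prob_crack_at_least(size, t, n=100_000, seed=1):
    """Monte Carlo estimate of P(crack length >= size at time t).

    Assumes (hypothetically) a lognormal crack-initiation time and a
    lognormal linear growth rate; both are stand-ins for priors fitted
    from inspection data of similar engines.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        t_init = rng.lognormvariate(math.log(2000.0), 0.5)  # flight hours
        rate = rng.lognormvariate(math.log(1e-4), 0.6)      # length/hour
        length = max(0.0, t - t_init) * rate                # linear growth
        if length >= size:
            hits += 1
    return hits / n
```

Because growth is monotone in time per sampled trajectory, the estimate increases with t for a fixed seed, which is a useful sanity check.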

  8. Reliable freestanding position-based routing in highway scenarios.

    PubMed

    Galaviz-Mosqueda, Gabriel A; Aquino-Santos, Raúl; Villarreal-Reyes, Salvador; Rivera-Rodríguez, Raúl; Villaseñor-González, Luis; Edwards, Arthur

    2012-10-24

    Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering freespace propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model.

  9. A Vision for Spaceflight Reliability: NASA's Objectives Based Strategy

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Evans, John; Hall, Tony

    2015-01-01

    In defining the direction for a new Reliability and Maintainability standard, OSMA has extracted the essential objectives that our programs need to undertake a reliable mission. These objectives have been structured to lead mission planning through construction of an objective hierarchy, which defines the critical approaches for achieving high reliability and maintainability (R&M). Creating a hierarchy as a basis for assurance implementation is a proven approach; yet it holds the opportunity to enable new directions as NASA moves forward in tackling the challenges of space exploration.

  10. Integrated circuit reliability. Citations from the NTIS data base

    NASA Astrophysics Data System (ADS)

    Reed, W. E.

    1980-06-01

    The bibliography presents research pertinent to design, reliability prediction, failure and malfunction, processing techniques, and radiation damage. This updated bibliography contains 193 abstracts, 17 of which are new entries to the previous edition.

  11. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization methodology (SDO) has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, which are defined with mean values and standard deviations. A design constraint or a failure mode is specified as a function of reliability p. The solution to the stochastic optimization yields the weight of a structure as a function of reliability p. Optimum weight versus reliability p traces out an inverted-S-shaped graph. The center of the inverted-S graph corresponds to a 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponds to unity for reliability p (or p = 1). Weight can be reduced to a small value for the most failure-prone design with a reliability that approaches zero (p = 0). Reliability can be changed for different components of an airframe structure. For example, the landing gear can be designed for a very high reliability, whereas the requirement can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code was the deterministic analysis tool, (2) the fast probabilistic integrator, or the FPI module of the NESSUS software, was the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
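    The inverted-S trade between weight and reliability p can be illustrated with a toy stress-strength interference model: as p approaches 1 the required strength margin (and hence weight) diverges. The linear weight-strength relation and all parameter values here are assumptions for illustration, not the paper's SDO formulation:

```python
from statistics import NormalDist

def required_weight(p, mu_load=100.0, sigma=15.0, weight_per_strength=0.02):
    """Weight needed so that P(strength > load) = p under a simple
    stress-strength interference model with a normally distributed
    margin. All parameter values are illustrative.
    """
    beta = NormalDist().inv_cdf(p)        # reliability index for target p
    mu_strength = mu_load + beta * sigma  # mean strength achieving p
    return weight_per_strength * mu_strength
```

At p = 0.5 the required mean strength equals the mean load; the weight growth accelerates sharply as p nears 1, reproducing the heavy end of the inverted-S.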

  12. Reliability-Based Life Assessment of Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Halford, Gary R.; Korovaichuk, Igor

    2004-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions require reliable design lifetimes of up to 14 yr. The structurally critical heater head of the high-efficiency Stirling power convertor has undergone extensive computational analysis of operating temperatures, stresses, and creep resistance of the thin-walled Inconel 718 bill of material. A preliminary assessment of the effect of uncertainties in the material behavior was also performed. Creep failure resistance of the thin-walled heater head could show variation due to small deviations in the manufactured thickness and in uncertainties in operating temperature and pressure. Durability prediction and reliability of the heater head are affected by these deviations from nominal design conditions. Therefore, it is important to include the effects of these uncertainties in predicting the probability of survival of the heater head under mission loads. Furthermore, it may be possible for the heater head to experience rare incidences of small temperature excursions of short duration. These rare incidences would affect the creep strain rate and, therefore, the life. This paper addresses the effects of such rare incidences on the reliability. In addition, the sensitivities of variables affecting the reliability are quantified, and guidelines developed to improve the reliability are outlined. Heater head reliability is being quantified with data from NASA Glenn Research Center's accelerated benchmark testing program.

  13. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
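    For a linear limit state with independent Gaussian variables, the MPP distance (the reliability index) has a closed form, which makes a compact sanity check for an MPP search. This sketch is generic FORM arithmetic, not the RIA/PMA sensitivity derivatives developed in the paper:

```python
import math
from statistics import NormalDist

def form_linear(mu_r, sigma_r, mu_l, sigma_l):
    """First-order reliability for the linear limit state g = R - L with
    independent normal R (resistance) and L (load). The MPP distance in
    standard-normal space is the reliability index beta, and
    Pf = Phi(-beta); exact here because g is linear and Gaussian.
    """
    beta = (mu_r - mu_l) / math.hypot(sigma_r, sigma_l)
    pf = NormalDist().cdf(-beta)
    return beta, pf
```

An optimization-based MPP search applied to the same problem should recover this beta, which is how such implementations are typically verified.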

  14. Reliability Generalization of Curriculum-Based Measurement Reading Aloud: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Yeo, Seungsoo

    2011-01-01

    The purpose of this study was to employ the meta-analytic method of Reliability Generalization to investigate the magnitude and variability of reliability estimates obtained across studies using Curriculum-Based Measurement reading aloud. Twenty-eight studies that met the inclusion criteria were used to calculate the overall mean reliability of…

  15. Reliability-based condition assessment of steel containment and liners

    SciTech Connect

    Ellingwood, B.; Bhattacharya, B.; Zheng, R.

    1996-11-01

    Steel containments and liners in nuclear power plants may be exposed to aggressive environments that may cause their strength and stiffness to decrease during the plant service life. Among the factors recognized as having the potential to cause structural deterioration are uniform, pitting or crevice corrosion; fatigue, including crack initiation and propagation to fracture; elevated temperature; and irradiation. The evaluation of steel containments and liners for continued service must provide assurance that they are able to withstand future extreme loads during the service period with a level of reliability that is sufficient for public safety. Rational methodologies to provide such assurances can be developed using modern structural reliability analysis principles that take uncertainties in loading, strength, and degradation resulting from environmental factors into account. The research described in this report is in support of the Steel Containments and Liners Program being conducted for the US Nuclear Regulatory Commission by the Oak Ridge National Laboratory. The research demonstrates the feasibility of using reliability analysis as a tool for performing condition assessments and service life predictions of steel containments and liners. Mathematical models that describe time-dependent changes in steel due to aggressive environmental factors are identified, and statistical data supporting the use of these models in time-dependent reliability analysis are summarized. The analysis of steel containment fragility is described, and simple illustrations of the impact on reliability of structural degradation are provided. The role of nondestructive evaluation in time-dependent reliability analysis, both in terms of defect detection and sizing, is examined. A Markov model provides a tool for accounting for time-dependent changes in damage condition of a structural component or system. 151 refs.

  16. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    SciTech Connect

    Bloomfield, H.S.

    1991-06-01

    The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.
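    The redundancy combinations analyzed above reduce, in the simplest case, to a k-out-of-n binomial model. This generic sketch assumes identical, independent units, a simplification the actual powerplant analysis need not make:

```python
from math import comb

def k_of_n_reliability(k, n, r):
    """Probability that at least k of n identical, independent units
    survive, each with reliability r (binomial redundancy model)."""
    return sum(comb(n, j) * r**j * (1 - r)**(n - j) for j in range(k, n + 1))
```

Comparing, say, a 2-of-3 arrangement against a single full-capacity unit at the same unit reliability shows the reliability gain that must be weighed against the added mass.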

  17. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    NASA Technical Reports Server (NTRS)

    Bloomfield, Harvey S.

    1991-01-01

    The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.

  18. Architecture-Based Reliability Analysis of Web Services

    ERIC Educational Resources Information Center

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  19. Reliability and Validity of Curriculum-Based Informal Reading Inventories.

    ERIC Educational Resources Information Center

    Fuchs, Lynn; And Others

    A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…

  20. Neural Networks Based Approach to Enhance Space Hardware Reliability

    NASA Technical Reports Server (NTRS)

    Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.

    2011-01-01

    This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.

  1. Differences scores: regression-based reliable difference and the regression-based confidence interval.

    PubMed

    Charter, Richard A

    2009-04-01

    Over 50 years ago, Payne and Jones (1957) developed what has been labeled the traditional reliable difference formula, which continues to be useful as a significance test for the difference between two test scores. The traditional reliable difference is based on the standard error of measurement (SEM) and has been updated to a confidence interval approach. As an alternative, this article presents the regression-based reliable difference, which is based on the standard error of estimate (SEE) and estimated true scores. This new approach should be attractive to clinicians who prefer the idea of scores regressing toward the mean. The new approach is also presented in confidence interval form, with an interpretation that can be viewed as a statement of all hypotheses that are tenable and consistent with the observed data and that has the backing of several authorities. Two well-known conceptualizations for true score confidence intervals are the traditional and the regression-based. Now clinicians favoring the regression-based conceptualization are no longer restricted to the traditional model when testing score differences using confidence intervals.
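    The two standard errors contrasted in the abstract follow from classical test theory. The formulas below are the textbook CTT expressions (SEM for the traditional approach; SEE and regressed true scores for the regression-based one), offered as a sketch rather than the author's exact computation:

```python
import math

def sem(sd, r):
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - r)

def see(sd, r):
    """Standard error of estimate for regression-estimated true scores:
    SD * sqrt(r * (1 - r))."""
    return sd * math.sqrt(r * (1 - r))

def estimated_true_score(x, mean, r):
    """Observed score regressed toward the test mean by the reliability."""
    return mean + r * (x - mean)
```

Since SEE/SEM = sqrt(r) < 1, the regression-based interval is always narrower than the traditional one for the same test, one reason the two approaches can lead to different clinical conclusions.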

  2. Reliability-Based Performance Assessment of Damaged Ships

    DTIC Science & Technology

    2006-10-01

    SNAME, Arlington Virginia, II.B.1-II.B.19, 1991. Wang, Ge; Chen, Yongjun; Zhang, Hanqing; Peng, Hua. (2002). "Longitudinal strength of ships with...strength of hull girders of damaged ships. Reliability of the ship was also estimated by a first-order and second-moment method. Wang, et al (2002...to achieve a safe design. Wang (1996) have applied several methods, such as the point-crossing method, load coincidence method, Ferry Borges method, peak

  3. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  4. Analyzing reliability of seizure diagnosis based on semiology.

    PubMed

    Jin, Bo; Wu, Han; Xu, Jiahui; Yan, Jianwei; Ding, Yao; Wang, Z Irene; Guo, Yi; Wang, Zhongjin; Shen, Chunhong; Chen, Zhong; Ding, Meiping; Wang, Shuang

    2014-12-01

    This study aimed to determine the accuracy of seizure diagnosis by semiological analysis and to assess the factors that affect diagnostic reliability. A total of 150 video clips of seizures from 50 patients (each with three seizures of the same type) were observed by eight epileptologists, 12 neurologists, and 20 physicians (internists). The videos included 37 series of epileptic seizures, eight series of physiologic nonepileptic events (PNEEs), and five series of psychogenic nonepileptic seizures (PNESs). After observing each video, the doctors chose the diagnosis of epileptic seizures or nonepileptic events for the patient; if the latter was chosen, they further chose the diagnosis of PNESs or PNEEs. The overall diagnostic accuracy rate for epileptic seizures and nonepileptic events increased from 0.614 to 0.660 after observations of all three seizures (p < 0.001). The diagnostic sensitivity and specificity of epileptic seizures were 0.770 and 0.808, respectively, for the epileptologists. These values were significantly higher than those for the neurologists (0.660 and 0.699) and physicians (0.588 and 0.658). A wide range of diagnostic accuracy was found across the various seizures types. An accuracy rate of 0.895 for generalized tonic-clonic seizures was the highest, followed by 0.800 for dialeptic seizures and then 0.760 for automotor seizures. The accuracy rates for myoclonic seizures (0.530), hypermotor seizures (0.481), gelastic/dacrystic seizures (0.438), and PNESs (0.430) were poor. The reliability of semiological diagnosis of seizures is greatly affected by the seizure type as well as the doctor's experience. Although the overall reliability is limited, it can be improved by observing more seizures.

  5. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, like a failure mode, becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as sizing variables like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.

  6. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
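    Smoothed Heaviside functions of the kind compared in the paper can be written in several ways. These two variants (tanh-based, and a piecewise cubic that is exact outside the transition band) are common generic choices, not necessarily the paper's exact functions:

```python
import math

def heaviside_tanh(x, eps=0.1):
    """Smoothed Heaviside via tanh; eps controls the transition width."""
    return 0.5 * (1.0 + math.tanh(x / eps))

def heaviside_poly(x, eps=0.1):
    """Piecewise-cubic smoothed Heaviside, exactly 0/1 outside [-eps, eps]
    and C1-continuous at the endpoints."""
    if x <= -eps:
        return 0.0
    if x >= eps:
        return 1.0
    t = x / eps
    return 0.5 + 0.75 * t - 0.25 * t**3
```

In a topology description function setting, applying such a function to the level-set field yields a near-0/1 density; the choice of smoothing (and of eps) trades gradient quality against how sharply the material boundary is resolved.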

  7. A perspective on the reliability of MEMS-based components for telecommunications

    NASA Astrophysics Data System (ADS)

    McNulty, John C.

    2008-02-01

    Despite the initial skepticism of OEM companies regarding reliability, MEMS-based devices are increasingly common in optical networking. This presentation will discuss the use and reliability of MEMS in a variety of network applications, from tunable lasers and filters to variable optical attenuators and dynamic channel equalizers. The failure mechanisms of these devices will be addressed in terms of reliability physics, packaging methodologies, and process controls. Typical OEM requirements will also be presented, including testing beyond the scope of Telcordia qualification standards. The key conclusion is that, with sufficiently robust design and manufacturing controls, MEMS-based devices can meet or exceed the demanding reliability requirements for telecommunications components.

  8. Reliability-based failure analysis of brittle materials

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Ghosn, Louis J.

    1989-01-01

    The reliability of brittle materials under a generalized state of stress is analyzed using the Batdorf model. The model is modified to include the reduction in shear due to the effect of the compressive stress on the microscopic crack faces. The combined effect of both surface and volume flaws is included. Due to the nature of fracture of brittle materials under compressive loading, the component is modeled as a series system in order to establish bounds on the probability of failure. A computer program was written to determine the probability of failure employing data from a finite element analysis. The analysis showed that for tensile loading a single crack will be the cause of total failure but under compressive loading a series of microscopic cracks must join together to form a dominant crack.
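    Modeling the component as a series system, as above, bounds its failure probability between the perfectly correlated and independent cases. A minimal sketch of those first-order bounds:

```python
def series_failure_bounds(pfs):
    """First-order bounds on the failure probability of a series system
    given element failure probabilities pfs.

    Lower bound: max(pfs), reached when elements are perfectly correlated.
    Upper bound: 1 - prod(1 - pf_i), reached when elements are independent
    (and itself close to sum(pfs) for small probabilities).
    """
    lower = max(pfs)
    prod = 1.0
    for p in pfs:
        prod *= (1.0 - p)
    upper = 1.0 - prod
    return lower, upper
```

The true system probability of failure lies between these bounds regardless of the (usually unknown) correlation between flaw-driven failure events.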

  9. Assessing the reliability of Curriculum-Based Measurement: an application of Latent Growth Modeling.

    PubMed

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A

    2012-04-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the reliability of CBM reading-aloud data over repeated measurements. The analysis revealed that the reliability of CBM data was not a fixed property of the measure but changed with time. The study demonstrates the need to consider reliability in new ways with respect to the use of CBM data as repeated measures.

  10. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, along with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
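    For a hard constraint that is linear in the uncertain parameters, the worst case over a hyper-rectangular uncertainty set can be computed coordinate-wise. This sketch assumes such a linear constraint, a simplification of the norm-bounded models treated in the paper:

```python
def linear_constraint_worst_case(a, b, lo, hi):
    """Maximum of g(p) = a . p + b over the box lo <= p <= hi, taken
    coordinate-wise: each coordinate contributes its upper bound when its
    coefficient is positive and its lower bound otherwise. The hard
    constraint g(p) <= 0 holds for every p in the box iff this maximum
    is <= 0.
    """
    worst = b
    for ai, l, h in zip(a, lo, hi):
        worst += ai * (h if ai > 0 else l)
    return worst
```

This is the box-set analogue of checking a hard constraint: a single worst-case evaluation certifies satisfaction for all parameter realizations at once, with no sampling.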

  11. A damage mechanics based approach to structural deterioration and reliability

    SciTech Connect

    Bhattacharya, B.; Ellingwood, B.

    1998-02-01

    Structural deterioration often occurs without perceptible manifestation. Continuum damage mechanics defines structural damage in terms of the material microstructure, and relates the damage variable to the macroscopic strength or stiffness of the structure. This enables one to predict the state of damage prior to the initiation of a macroscopic flaw, and allows one to estimate the residual strength/service life of an existing structure. The accumulation of damage is a dissipative process that is governed by the laws of thermodynamics. Partial differential equations for damage growth in terms of the Helmholtz free energy are derived from fundamental thermodynamic conditions. Closed-form solutions to the equations are obtained under uniaxial loading for ductile deformation damage as a function of plastic strain, for creep damage as a function of time, and for fatigue damage as a function of the number of cycles. The proposed damage growth model is extended into the stochastic domain by considering fluctuations in the free energy, and closed-form solutions of the resulting stochastic differential equation are obtained in each of the three cases mentioned above. A reliability analysis of a ring-stiffened cylindrical steel shell subjected to corrosion, accidental pressure, and temperature is performed.

  12. Is School-Based Height and Weight Screening of Elementary Students Private and Reliable?

    ERIC Educational Resources Information Center

    Stoddard, Sarah A.; Kubik, Martha Y.; Skay, Carol

    2008-01-01

    The Institute of Medicine recommends school-based body mass index (BMI) screening as an obesity prevention strategy. While school nurses have provided height/weight screening for years, little has been published describing measurement reliability or process. This study evaluated the reliability of height/weight measures collected by school nurses…

  13. Reliability and Validity of the Evidence-Based Practice Confidence (EPIC) Scale

    ERIC Educational Resources Information Center

    Salbach, Nancy M.; Jaglal, Susan B.; Williams, Jack I.

    2013-01-01

    Introduction: The reliability, minimal detectable change (MDC), and construct validity of the evidence-based practice confidence (EPIC) scale were evaluated among physical therapists (PTs) in clinical practice. Methods: A longitudinal mail survey was conducted. Internal consistency and test-retest reliability were estimated using Cronbach's alpha…

  14. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    ERIC Educational Resources Information Center

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  15. Expected-Credibility-Based Job Scheduling for Reliable Volunteer Computing

    NASA Astrophysics Data System (ADS)

    Watanabe, Kan; Fukushi, Masaru; Horiguchi, Susumu

    This paper proposes an expected-credibility-based job scheduling method for volunteer computing (VC) systems with malicious participants who return erroneous results. Credibility-based voting is a promising approach to guaranteeing the computational correctness of VC systems. However, it relies on a simple round-robin job scheduling method that does not consider the jobs' order of execution, thereby resulting in numerous unnecessary job allocations and performance degradation of VC systems. To improve the performance of VC systems, the proposed job scheduling method dynamically selects a job to be executed prior to others based on two novel metrics: the expected credibility and the expected number of results for each job. Simulation of VCs shows that the proposed method can improve VC system performance by up to 11%; it always outperforms the original round-robin method irrespective of the values of unknown parameters such as the population and behavior of saboteurs.

  16. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  17. A Reliable Homemade Electrode Based on Glassy Polymeric Carbon

    ERIC Educational Resources Information Center

    Santos, Andre L.; Takeuchi, Regina M.; Oliviero, Herilton P.; Rodriguez, Marcello G.; Zimmerman, Robert L.

    2004-01-01

The production of a GPC-based material by submitting a cross-linked resin precursor to controlled thermal conditions is discussed. The precursor material is prepolymerized at 60 degrees Celsius in a mold and is carbonized in an inert atmosphere by slowly raising the temperature; the slow rise is performed to avoid changes in shape during the carbonization…

  18. On the Reliability of Vocational Workplace-Based Certifications

    ERIC Educational Resources Information Center

    Harth, H.; Hemker, B.T.

    2013-01-01

    The assessment of vocational workplace-based qualifications in England relies on human assessors (raters). These assessors observe naturally occurring, non-standardised evidence, unique to each learner and evaluate the learner as competent/not yet competent against content standards. Whilst these are considered difficult to measure, this study…

  19. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

Two methodologies for the reliability assessment of fault-tolerant digital computer-based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  20. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each random variable for these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity of each random variable is evaluated by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases, and that for the cohesion of the foundation soil (c2) decreases, with increasing variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods, and the approach was found to be cost-effective. If the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved; if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
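The Pf-by-Monte-Carlo step can be sketched for a single failure mode (sliding). The limit state function and every parameter value below are illustrative assumptions, not those of the paper:

```python
# Monte Carlo estimate of failure probability for one (sliding) failure
# mode of a gravity retaining wall. Limit state and parameters are
# invented for illustration, not taken from Guha Ray & Baidya.
import math, random

random.seed(1)

def sliding_margin(phi1_deg, gamma1, W=260.0, H=6.0, mu=0.45):
    """g = resisting force - driving force; g < 0 means sliding failure."""
    phi1 = math.radians(phi1_deg)
    ka = (1 - math.sin(phi1)) / (1 + math.sin(phi1))  # Rankine coefficient
    Pa = 0.5 * ka * gamma1 * H**2                     # active earth thrust
    return mu * W - Pa

N = 100_000
failures = 0
for _ in range(N):
    phi1 = random.gauss(30.0, 2.0)    # friction angle of backfill (deg)
    gamma1 = random.gauss(18.0, 0.9)  # unit weight of backfill (kN/m^3)
    if sliding_margin(phi1, gamma1) < 0:
        failures += 1
Pf = failures / N
```

The same loop, repeated per failure mode and combined with per-variable sensitivities, is what feeds the risk factor Rf in the paper's approach.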

  1. Reliable Location-Based Services from Radio Navigation Systems

    PubMed Central

    Qiu, Di; Boneh, Dan; Lo, Sherman; Enge, Per

    2010-01-01

    Loran is a radio-based navigation system originally designed for naval applications. We show that Loran-C’s high-power and high repeatable accuracy are fantastic for security applications. First, we show how to derive a precise location tag—with a sensitivity of about 20 meters—that is difficult to project to an exact location. A device can use our location tag to block or allow certain actions, without knowing its precise location. To ensure that our tag is reproducible we make use of fuzzy extractors, a mechanism originally designed for biometric authentication. We build a fuzzy extractor specifically designed for radio-type errors and give experimental evidence to show its effectiveness. Second, we show that our location tag is difficult to predict from a distance. For example, an observer cannot predict the location tag inside a guarded data center from a few hundreds of meters away. As an application, consider a location-aware disk drive that will only work inside the data center. An attacker who steals the device and is capable of spoofing Loran-C signals, still cannot make the device work since he does not know what location tag to spoof. We provide experimental data supporting our unpredictability claim. PMID:22163532

  2. Reliable location-based services from radio navigation systems.

    PubMed

    Qiu, Di; Boneh, Dan; Lo, Sherman; Enge, Per

    2010-01-01

    Loran is a radio-based navigation system originally designed for naval applications. We show that Loran-C's high-power and high repeatable accuracy are fantastic for security applications. First, we show how to derive a precise location tag--with a sensitivity of about 20 meters--that is difficult to project to an exact location. A device can use our location tag to block or allow certain actions, without knowing its precise location. To ensure that our tag is reproducible we make use of fuzzy extractors, a mechanism originally designed for biometric authentication. We build a fuzzy extractor specifically designed for radio-type errors and give experimental evidence to show its effectiveness. Second, we show that our location tag is difficult to predict from a distance. For example, an observer cannot predict the location tag inside a guarded data center from a few hundreds of meters away. As an application, consider a location-aware disk drive that will only work inside the data center. An attacker who steals the device and is capable of spoofing Loran-C signals, still cannot make the device work since he does not know what location tag to spoof. We provide experimental data supporting our unpredictability claim.
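The reproducibility requirement (nearby noisy readings must yield the same tag, while distant locations yield a different one) can be illustrated with a toy quantize-and-hash scheme. The real system uses a fuzzy extractor with helper data, and the feature values below are invented:

```python
# A toy location tag: noisy radio measurements are quantized so that
# readings within the noise radius usually map to the same tag, then
# hashed. This stands in (very loosely) for the fuzzy extractor of
# Qiu et al.; real Loran-C features and parameters differ.
import hashlib

def location_tag(measurements, cell=40.0):
    """Quantize each feature to a `cell`-sized bin and hash the bins."""
    bins = tuple(round(m / cell) for m in measurements)
    return hashlib.sha256(repr(bins).encode()).hexdigest()[:16]

reading_a = [1021.7, 348.2, 77.9]   # made-up radio-derived features
reading_b = [1024.1, 351.0, 76.5]   # same place, ~20 m of radio noise
reading_c = [1500.0, 348.2, 77.9]   # a different place

tag_a = location_tag(reading_a)
```

Plain quantization fails when a reading straddles a bin edge; fuzzy extractors solve exactly that with public helper data, which is why the paper builds one for radio-type errors.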

  3. Reliability of hypothalamic–pituitary–adrenal axis assessment methods for use in population-based studies

    PubMed Central

    Wand, Gary S.; Malhotra, Saurabh; Kamel, Ihab; Horton, Karen

    2013-01-01

Population-based studies have been hampered in exploring hypothalamic–pituitary–adrenal axis (HPA) activity as a potential explanatory link between stress-related and metabolic disorders due to their lack of incorporation of reliable measures of chronic cortisol exposure. The purpose of this review is to summarize current literature on the reliability of HPA axis measures and to discuss the feasibility of performing them in population-based studies. We identified articles through PubMed using search terms related to cortisol, HPA axis, adrenal imaging, and reliability. The diurnal salivary cortisol curve (generated from multiple salivary samples from awakening to midnight) and 11 p.m. salivary cortisol had the highest between-visit reliabilities (r = 0.63–0.84 and 0.78, respectively). The cortisol awakening response and dexamethasone-suppressed cortisol had the next highest between-visit reliabilities (r = 0.33–0.67 and 0.42–0.66, respectively). Based on our own data, the inter-reader reliability (rs) of adrenal gland volume from non-contrast CT was 0.67–0.71 for the left and 0.47–0.70 for the right adrenal glands. While a single 8 a.m. salivary cortisol is one of the easiest measures to perform, it had the lowest between-visit reliability (r = 0.18–0.47). Based on the current literature, sampling multiple salivary cortisol measures across the diurnal curve (with awakening cortisol), dexamethasone-suppressed cortisol, and adrenal gland volume are measures of HPA axis tone with similar between-visit reliabilities; these likely reflect chronic cortisol burden and are feasible to perform in population-based studies. PMID:21533585

  4. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, covering publications between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliability of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits was achieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity together with inter- and intra-rater reliability, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.

  5. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program concluding on the 23rd of August, 2012, at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE)-funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber-physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection; it operates by identifying deviations from profiles of nominal behavior and is thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detector's performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream, perturbing detector performance. Prior research has focused on understanding the impact of this…

  6. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    NASA Astrophysics Data System (ADS)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements that contribute to the stability of the tunnel structure are identified, in order to recognize the various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers are able to find the overall reliability of the tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to the field by investigating the reliability of the lining support system for the tunnel structure; reliability analysis for evaluating the tunnel support performance is thus the main idea used in this research. Decomposition approaches are used to produce the system block diagram and determine the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and final values of reliability are obtained for different design scenarios. Based on the idea of a linear correlation between safety factors and reliability parameters, isolated reliabilities are determined for the different structural components of the tunnel support system. To determine the individual safety factors, finite element modeling is employed for the different structural subsystems, and the results of the numerical analyses are obtained in…
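The decomposition step, turning the support system into a reliability block diagram and combining component reliabilities, can be sketched as follows. The component values and the series/parallel layout are illustrative, not taken from the case studies:

```python
# Reliability block diagram for an idealized tunnel support system:
# subsystems in series (all must survive), with redundant elements in
# parallel inside a subsystem. Component reliabilities are invented.

def parallel(*rel):
    """Redundant elements: the group fails only if every element fails."""
    p_fail = 1.0
    for r in rel:
        p_fail *= (1.0 - r)
    return 1.0 - p_fail

def series(*rel):
    """Chain of subsystems: survives only if every subsystem survives."""
    r_total = 1.0
    for r in rel:
        r_total *= r
    return r_total

# e.g. two redundant rock-bolt groups, then shotcrete, then steel ribs
bolts = parallel(0.95, 0.95)          # 0.9975
lining = series(bolts, 0.99, 0.98)    # whole-system reliability
p_failure = 1.0 - lining              # failure probability of the lining
```

The same two combinators, applied per the block diagram, give the system failure probability that the decomposition approach produces.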

  7. The B-747 flight control system maintenance and reliability data base for cost effectiveness tradeoff studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Primary and automatic flight controls are combined for a total flight control reliability and maintenance cost data base using information from two previous reports and additional cost data gathered from a major airline. A comparison of the current B-747 flight control system effects on reliability and operating cost with that of a B-747 designed for an active control wing load alleviation system is provided.

  8. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry

    PubMed Central

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Broda, Anja; Scholz, Markus

    2016-01-01

    Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable. PMID:27225483
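For two observers, the overall concordance correlation coefficient (OCCC) used as the agreement indicator reduces to Lin's concordance correlation coefficient, which can be computed directly; the rating data below are made up:

```python
# Lin's concordance correlation coefficient for paired ratings; for two
# raters the OCCC reduces to this form. Sample data are illustrative.

def ccc(x, y):
    """CCC = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

rater_a = [1.0, 2.0, 3.0, 4.0]
rater_b = [v + 1.0 for v in rater_a]   # perfectly correlated, offset by 1

perfect = ccc(rater_a, rater_a)
offset = ccc(rater_a, rater_b)         # < 1: CCC penalizes systematic offset
```

Unlike Pearson's r, the CCC penalizes a constant offset between methods, which is why the BS proxies in the study agreed with CA only after offset correction.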

  9. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention and interest in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analyzing the randomness in loads and their effects on structures. A novel approach that combines SHM systems with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system is presented in this paper. In this study, the reliability of the steel girder of the cable-stayed bridge is denoted by the failure probability directly, instead of by a reliability index as commonly used. Under the assumption that the probability distribution of the resistance is independent of the responses of the structure, a formulation of the failure probability is deduced. Then, as a main factor in the formulation, the probability density function (PDF) of the strain at sensor locations is evaluated from the monitoring data and verified. The Donghai Bridge is taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the start of operation of the SHM system are processed, and the reliability assessment results are discussed. Finally, the sensitivity and accuracy of the novel approach are discussed in comparison with the first-order reliability method (FORM).
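Under the stated independence assumption, the failure probability is the resistance CDF weighted by the strain PDF and integrated, Pf = ∫ P(R ≤ s) f_S(s) ds. A numerical sketch with assumed normal models (all parameter values are invented, not Donghai Bridge data):

```python
# Failure probability from a monitored-strain PDF, assuming resistance R
# independent of the strain response S: Pf = integral of F_R(s) f_S(s) ds.
# Normal models and all parameter values are illustrative assumptions.
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def norm_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def failure_probability(mu_S, sd_S, mu_R, sd_R, n=20000):
    """Trapezoidal integration of F_R(s) * f_S(s) over the strain range."""
    lo, hi = mu_S - 8 * sd_S, mu_S + 8 * sd_S
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        s = lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoidal end weights
        total += w * norm_cdf(s, mu_R, sd_R) * norm_pdf(s, mu_S, sd_S)
    return total * h

# monitored extreme strain (microstrain) vs. girder strain capacity
Pf = failure_probability(mu_S=420.0, sd_S=40.0, mu_R=800.0, sd_R=60.0)
```

For two normals the integral has the closed form Φ((μS − μR)/√(σS² + σR²)), which makes the numeric result easy to check; with a non-parametric strain PDF estimated from monitoring data, only the numeric route remains.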

  10. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    DOE PAGES

    Guthrie, Michael A.

    2013-01-01

A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
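For a linear limit state in independent normal variables, the Hasofer-Lind index has a closed form, and the first-order failure probability follows as Φ(−β). The strain-energy statistics below are illustrative, not from the report:

```python
# Hasofer-Lind reliability index for an energy-based limit state of the
# form g = U_cap - U_env (modal strain energy at failure minus peak modal
# strain energy in the shock environment). For independent normals and a
# linear g, beta has the closed form below; numbers are invented.
import math

def hasofer_lind_linear(mu_cap, sd_cap, mu_env, sd_env):
    """beta = (mu_cap - mu_env) / sqrt(sd_cap^2 + sd_env^2)."""
    return (mu_cap - mu_env) / math.hypot(sd_cap, sd_env)

def pf_from_beta(beta):
    """First-order failure probability Pf = Phi(-beta)."""
    return 0.5 * (1 + math.erf(-beta / math.sqrt(2)))

beta = hasofer_lind_linear(mu_cap=12.0, sd_cap=1.5, mu_env=6.0, sd_env=2.0)
Pf = pf_from_beta(beta)
```

In the general (nonlinear, correlated) case β is the minimum distance from the origin to the limit surface in standard normal space, but the linear form above is all the two-degree-of-freedom examples require.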

  11. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.

  12. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of the MLCCs' reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed in this paper. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
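The acceleration-function part of such a model is commonly taken in the Prokopowicz-Vaskas form, which scales life between voltage/temperature stress levels. The exponent and activation energy below are typical assumed values, not the fitted ones from this work:

```python
# Prokopowicz-Vaskas style acceleration function for capacitor life:
# t_use / t_test = (V_test/V_use)^n * exp(Ea/k * (1/T_use - 1/T_test)).
# Exponent n and activation energy Ea are assumed, not fitted values.
import math

K_BOLTZMANN = 8.617e-5  # eV/K

def acceleration_factor(V_use, T_use, V_test, T_test, n=3.0, Ea=1.1):
    """Ratio of life at use conditions to life at test conditions."""
    voltage_term = (V_test / V_use) ** n
    thermal_term = math.exp((Ea / K_BOLTZMANN) * (1.0 / T_use - 1.0 / T_test))
    return voltage_term * thermal_term

# life at a 125 C / twice-rated-voltage test, scaled to 85 C / rated voltage
AF = acceleration_factor(V_use=6.3, T_use=358.15, V_test=12.6, T_test=398.15)
```

An accelerated-life test measured at (V_test, T_test) is multiplied by AF to project use-condition life; the empirical N, d, grain-size, and chip-size function then adjusts that projection per capacitor construction.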

  13. The Turkish Version of Web-Based Learning Platform Evaluation Scale: Reliability and Validity Study

    ERIC Educational Resources Information Center

    Dag, Funda

    2016-01-01

    The purpose of this study is to determine the language equivalence and the validity and reliability of the Turkish version of the "Web-Based Learning Platform Evaluation Scale" ("Web Tabanli Ögrenme Ortami Degerlendirme Ölçegi" [WTÖODÖ]) used in the selection and evaluation of web-based learning environments. Within this scope,…

  14. Score Reliability of a Test Composed of Passage-Based Testlets: A Generalizability Theory Perspective.

    ERIC Educational Resources Information Center

    Lee, Yong-Won

    The purpose of this study was to investigate the impact of local item dependence (LID) in passage-based testlets on the test score reliability of an English as a Foreign Language (EFL) reading comprehension test from the perspective of generalizability (G) theory. Definitions and causes of LID in passage-based testlets are reviewed within the…

  15. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  16. The SUPERB Project: Reliability-based design guideline for submarine pipelines

    SciTech Connect

    Sotberg, T.; Bruschi, R.; Moerk, K.

    1996-12-31

This paper gives an overview of the research program SUPERB, the main objective of which is the development of a SUbmarine PipelinE Reliability Based Design Guideline with a comprehensive set of design recommendations and criteria for pipeline design. The motivation for this program is that project guidelines currently in force do not account for modern fabrication technology, the findings of recent research programs, or the capabilities of advanced engineering tools. The main structure of the Limit State Based Design (LSBD) Guideline is described, followed by an outline of the safety philosophy introduced to fit within this framework. Focus is on the development of a reliability-based design guideline as a rational tool to manage future offshore projects with an optimal balance between project safety and economy. The selection of appropriate limit state functions and the use of reliability tools to calibrate partial safety factors are also discussed.

  17. Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.

    PubMed

    Rudolph, Barbara A; Shah, Gulzar H; Love, Denise

    2006-01-01

    This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.
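A minimal example of the small-cell problem the guidelines address: counts below a threshold are suppressed before release. The threshold and table are illustrative, and real systems also apply complementary suppression so masked cells cannot be recovered from row totals:

```python
# Cell suppression, one of the statistical-disclosure controls used in
# Web-based data query systems: counts below a threshold are withheld.
# Threshold value and example table are illustrative only.

def suppress_small_cells(table, threshold=5, mask="*"):
    """Replace any count below `threshold` with a mask value."""
    return {
        region: {cause: (count if count >= threshold else mask)
                 for cause, count in row.items()}
        for region, row in table.items()
    }

deaths = {
    "County A": {"influenza": 42, "rabies": 1},
    "County B": {"influenza": 7,  "rabies": 0},
}
released = suppress_small_cells(deaths)
```

Suppression trades completeness for disclosure protection, which is exactly the reliability-versus-risk balance the article describes for preaggregated versus dynamic query designs.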

  18. A Robust and Reliability-Based Optimization Framework for Conceptual Aircraft Wing Design

    NASA Astrophysics Data System (ADS)

    Paiva, Ricardo Miguel

A robustness- and reliability-based multidisciplinary analysis and optimization framework for aircraft design is presented. Robust design optimization and reliability-based design optimization are merged into a unified formulation which streamlines the setup of optimization problems and aims at preventing foreseeable implementation issues in uncertainty-based design. Surrogate models are evaluated to circumvent the intensive computations resulting from using direct evaluation in nondeterministic optimization. Three types of models are implemented in the framework: quadratic interpolation, regression Kriging and artificial neural networks. Regression Kriging presents the best compromise between performance and accuracy in deterministic wing design problems. The performance of the simultaneous implementation of robustness and reliability is evaluated using simple analytic problems and more complex wing design problems, revealing that performance benefits can still be achieved while satisfying probabilistic constraints rather than the simpler (and less computationally intensive) robust constraints. The latter are proven to be unable to follow a reliability constraint as uncertainty in the input variables increases. The computational effort of the reliability analysis is further reduced through the implementation of a coordinate change in the respective optimization sub-problem. The computational tool developed is a stand-alone application with a user-friendly graphical user interface. The multidisciplinary analysis and design optimization tool includes modules for aerodynamic, structural, aeroelastic and cost analysis, which can be used either individually or coupled.

  19. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner made of composite and metallic materials. Design is formulated for an accepted level of risk or reliability. The design variables, weight and the constraints became functions of reliability. Uncertainties in the load, strength and the material properties, as well as the design variables, were modeled as random parameters with specified distributions, like normal, Weibull or Gumbel functions. The objective function and constraint, or a failure mode, became derived functions of the risk-level. Solution to the problem produced the optimum design with weight, variables and constraints as a function of the risk-level. Optimum weight versus reliability traced out an inverted-S shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  20. Composite Stress Rupture: A New Reliability Model Based on Strength Decay

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2012-01-01

A model is proposed to estimate reliability for stress rupture of composite overwrapped pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model is shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would otherwise fail early. The model also predicts significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
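A Monte Carlo sketch of the strength-decay idea, with an invented decay law whose large exponent concentrates the decay late in life; none of the functional forms or parameter values are Reeder's fitted model:

```python
# Strength-decay stress-rupture sketch: each vessel starts with a
# Weibull-distributed strength that decays slowly at first and rapidly
# late in life; it fails once strength drops below the applied stress.
# A proof load screens out weak vessels. All forms/values are invented.
import random

random.seed(7)

def initial_strength(scale=100.0, shape=20.0):
    return scale * random.weibullvariate(1.0, shape)

def strength_at(s0, t, t_ref=1.0e5, p=8.0):
    """Decay law with large exponent p: decay concentrated late in life."""
    return s0 * max(0.0, 1.0 - (t / t_ref) ** p) if t < t_ref else 0.0

def reliability(stress, proof, t, n=20000):
    """Fraction of proof-survivors still above `stress` at time t."""
    survive = screened = 0
    for _ in range(n):
        s0 = initial_strength()
        if s0 < proof:          # proof test removes weak vessels up front
            continue
        screened += 1
        if strength_at(s0, t) >= stress:
            survive += 1
    return survive / screened if screened else 0.0

R_mid = reliability(stress=60.0, proof=80.0, t=5.0e4)   # within safe life
R_late = reliability(stress=60.0, proof=80.0, t=9.5e4)  # deep into decay
```

The contrast between R_mid and R_late is the model's key prediction: proof screening plus slow early decay yields a long safe-life window, followed by rapid loss of reliability late in life.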

  1. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  2. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    PubMed

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-01-09

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice.
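For flavor, the belief-combination idea underlying evidential reasoning can be sketched with Dempster's rule on a two-element frame; the article's ER algorithm for CREAM is more elaborate, and the masses below (including the ignorance mass on "theta") are invented:

```python
# Dempster's rule on the frame {A, B} with an ignorance mass "theta".
# This is the basic Dempster-Shafer combination idea behind evidential
# reasoning; the ER algorithm used in the article is more elaborate.
def combine(m1, m2):
    k = m1["A"] * m2["B"] + m1["B"] * m2["A"]      # conflicting mass
    out = {
        "A": (m1["A"] * m2["A"] + m1["A"] * m2["theta"]
              + m1["theta"] * m2["A"]) / (1.0 - k),
        "B": (m1["B"] * m2["B"] + m1["B"] * m2["theta"]
              + m1["theta"] * m2["B"]) / (1.0 - k),
        "theta": m1["theta"] * m2["theta"] / (1.0 - k),
    }
    return out

# two invented pieces of evidence about one performance condition
m1 = {"A": 0.6, "B": 0.1, "theta": 0.3}
m2 = {"A": 0.5, "B": 0.2, "theta": 0.3}
print(combine(m1, m2))
```

Combining the two sources sharpens the belief in "A" above either source alone while shrinking the residual ignorance, which is the qualitative effect the ER-based CREAM exploits.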

  3. Reliability of home-based, motor function measure in hereditary neuromuscular diseases.

    PubMed

    Ruiz-Cortes, Xiomara; Ortiz-Corredor, Fernando; Mendoza-Pulido, Camilo

    2017-02-01

    Objective To evaluate the reliability of the motor function measure (MFM) scale in the assessment of disease severity and progression when administered at home and in the clinic, and to assess its correlation with the Paediatric Outcomes Data Collection Instrument (PODCI). Methods In this prospective study, two assessors rated children with hereditary neuromuscular diseases (HNMDs) using the MFM at the clinic and then 2 weeks later at the patients' home. The intraclass correlation coefficient (ICC) was calculated for the reliability of the MFM and its domains. The reliability of each item was assessed, and the correlation between the MFM and three domains of the PODCI was evaluated. Results A total of 48 children (5-17 years of age) were assessed in both locations, and the MFM scale demonstrated excellent inter-rater reliability (ICC, 0.98). Weighted kappa ranged from excellent to poor. Correlation of the home-based MFM with the PODCI domain 'basic mobility and transfers' was excellent, correlation with the 'upper extremity' domain was moderate, and there was no correlation with the 'happiness' domain. Conclusion The MFM is a reliable tool for assessing patients with HNMD when used in a home-based setting.

  4. Reliability assessment of long span bridges based on structural health monitoring: application to Yonghe Bridge

    NASA Astrophysics Data System (ADS)

    Li, Shunlong; Li, Hui; Ou, Jinping; Li, Hongwei

    2009-07-01

    This paper presents reliability estimation studies based on structural health monitoring data for long-span cable-stayed bridges. The data collected by a structural health monitoring system can be used to update the assumptions or probability models of random load effects, offering the potential for accurate reliability estimation. The reliability analysis is based on the estimated distributions of Dead, Live, Wind and Temperature Load effects. For components with FBG strain sensors, the Dead, Live and unit Temperature Load effects can be determined from the strain measurements. For components without FBG strain sensors, the Dead, unit Temperature and Wind Load effects can be evaluated by the finite element model, updated and calibrated by monitoring data. By applying measured truck loads and axle spacing data from the weigh-in-motion (WIM) system to the calibrated finite element model, the Live Load effects of components without FBG sensors can be generated. The stochastic process of Live Load effects can be described approximately by a Filtered Poisson Process, and the extreme value distribution of Live Load effects can be calculated by Filtered Poisson Process theory. Then the first-order reliability method (FORM) is employed to estimate the reliability index of the main components of the bridge (i.e. the stiffening girder).
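FORM reduces to a closed form for a linear limit state with independent normal variables, which is enough to show what the reliability index means. The sketch below is a toy girder check with assumed resistance and load-effect moments, not the Yonghe Bridge data:

```python
from math import sqrt
from statistics import NormalDist

# Minimal FORM illustration for a linear limit state g = R - S
# (resistance minus load effect), both normal and independent -- a toy
# stand-in for one girder check, with assumed (invented) moments.
mu_R, sigma_R = 500.0, 40.0   # resistance
mu_S, sigma_S = 300.0, 50.0   # combined load effect

beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)   # reliability index
p_f = NormalDist().cdf(-beta)                          # failure probability

print(f"beta = {beta:.2f}, Pf = {p_f:.2e}")
```

For nonlinear limit states or non-normal variables, FORM instead searches for the most probable point in standard normal space, but the index-to-probability mapping Pf = Φ(−β) is the same.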

  5. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    SciTech Connect

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-07-08

    Given the damage that past earthquakes have caused to offshore structures, which are vital to the oil and gas industries, it is important that their seismic design achieves very high reliability. Accepting Nonlinear Time History Analyses (NLTHA) as the most reliable seismic analysis method, this paper studies an offshore platform of jacket type with a height of 304 feet, a deck of 96 feet by 94 feet, and a weight of 290 million pounds. At first, some Push-Over Analyses (POA) were performed to identify the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA were performed using the three-component accelerograms of 100 earthquakes, covering a wide range of frequency content, and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. Using the results of NLTHA, the damage and rupture probabilities of the critical members were studied to assess the reliability of the jacket structure. Because different structural members of the jacket have different effects on the stability of the platform, an 'importance factor' was assigned to each critical member based on its location and orientation in the structure, and the reliability of the whole structure was then obtained by combining the reliabilities of the critical members, each weighted by its specific importance factor.

  6. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    NASA Astrophysics Data System (ADS)

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-07-01

    Given the damage that past earthquakes have caused to offshore structures, which are vital to the oil and gas industries, it is important that their seismic design achieves very high reliability. Accepting Nonlinear Time History Analyses (NLTHA) as the most reliable seismic analysis method, this paper studies an offshore platform of jacket type with a height of 304 feet, a deck of 96 feet by 94 feet, and a weight of 290 million pounds. At first, some Push-Over Analyses (POA) were performed to identify the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA were performed using the three-component accelerograms of 100 earthquakes, covering a wide range of frequency content, and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. Using the results of NLTHA, the damage and rupture probabilities of the critical members were studied to assess the reliability of the jacket structure. Because different structural members of the jacket have different effects on the stability of the platform, an "importance factor" was assigned to each critical member based on its location and orientation in the structure, and the reliability of the whole structure was then obtained by combining the reliabilities of the critical members, each weighted by its specific importance factor.

  7. An Evaluation method for C2 Cyber-Physical Systems Reliability Based on Deep Learning

    DTIC Science & Technology

    2014-06-01

    the reliability testing data of the system, we obtain the prior distribution of the reliability. By Bayes' theorem ...criticality cyber-physical systems[C]//Proc of ICDCS. Piscataway, NJ: IEEE, 2010:169-178. [17] Zimmer C, Bhat B, Muller F, et al. Time-based intrusion de

  8. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... reliability-based maintenance program may be defined as simple or complex. A simple component or system is one... role in controlling critical failures. Complex components or systems are ones whose functional failure... with increasing age unless there is a dominant failure mode. Therefore, age limits imposed on...

  9. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... reliability-based maintenance program may be defined as simple or complex. A simple component or system is one... role in controlling critical failures. Complex components or systems are ones whose functional failure... with increasing age unless there is a dominant failure mode. Therefore, age limits imposed on...

  10. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via an MST model) and non-adaptive designs were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  11. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the…

  12. Generalizability Theory Reliability of Written Expression Curriculum-Based Measurement in Universal Screening

    ERIC Educational Resources Information Center

    Keller-Margulis, Milena A.; Mercer, Sterett H.; Thomas, Erin L.

    2016-01-01

    The purpose of this study was to examine the reliability of written expression curriculum-based measurement (WE-CBM) in the context of universal screening from a generalizability theory framework. Students in second through fifth grade (n = 145) participated in the study. The sample included 54% female students, 49% White students, 23% African…

  13. Composite Reliability of a Workplace-Based Assessment Toolbox for Postgraduate Medical Education

    ERIC Educational Resources Information Center

    Moonen-van Loon, J. M. W.; Overeem, K.; Donkers, H. H. L. M.; van der Vleuten, C. P. M.; Driessen, E. W.

    2013-01-01

    In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and…

  14. Assessing I-Grid(TM) web-based monitoring for power quality and reliability benchmarking

    SciTech Connect

    Divan, Deepak; Brumsickle, William; Eto, Joseph

    2003-04-30

    This paper presents preliminary findings from DOE's pilot program. The results show how a web-based monitoring system can form the basis for aggregation of data and for correlation and benchmarking across broad geographical lines. A longer report describes additional findings from the pilot, including the impacts of power quality and reliability on customers' operations [Divan, Brumsickle, Eto 2003].

  15. Degradation mechanisms in high-power multi-mode InGaAs-AlGaAs strained quantum well lasers for high-reliability applications

    NASA Astrophysics Data System (ADS)

    Sin, Yongkun; Presser, Nathan; Brodie, Miles; Lingley, Zachary; Foran, Brendan; Moss, Steven C.

    2015-03-01

    Laser diode manufacturers perform accelerated multi-cell lifetests to estimate the lifetimes of lasers using an empirical model. Since state-of-the-art laser diodes typically require a long period of latency before they degrade, a significant amount of stress is applied to the lasers to generate failures in relatively short test durations. A drawback of this approach is the lack of mean-time-to-failure data under intermediate and low stress conditions, leading to uncertainty in model parameters (especially the optical power and current exponents) and potential overestimation of lifetimes at usage conditions. This approach is of particular concern for satellite communication systems, where high reliability is required of lasers over long durations in the space environment. A number of groups have studied reliability and degradation processes in GaAs-based lasers, but none of these studies has yielded a reliability model based on the physics of failure. The lack of such a model is also a concern for space applications, where a complete understanding of degradation mechanisms is necessary. Our present study addresses the aforementioned issues by performing long-term lifetests under low stress conditions, followed by failure mode analysis (FMA) and physics-of-failure investigation. We performed low-stress lifetests on both MBE- and MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode to study low-stress degradation mechanisms. Our lifetests have accumulated over 36,000 test hours, and FMA is performed on failures using our angle polishing technique followed by EL. This technique allows us to identify failure types by observing dark line defects through a window introduced in backside metal contacts. We also investigated degradation mechanisms in MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers using various FMA techniques. Since it is a challenge to control defect densities during the growth of laser structures, we chose to

  16. Research on Air Traffic Control Automatic System Software Reliability Based on Markov Chain

    NASA Astrophysics Data System (ADS)

    Wang, Xinglong; Liu, Weixiang

    Ensuring the safe separation of aircraft and the efficient flow of air traffic are the main tasks of an air traffic control automatic system. This paper puts forward a Markov model of an Air Traffic Control Automatic System (ATCAS), built from 36 months of collected ATCAS failure data. A Markov-chain method is used to predict the system states s1, s2 and s3 of the ATCAS, and the predicted reliability of the ATCAS is validated according to reliability theory. The experimental results show that the method is practicable and can serve as a basis for future research.
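The state-prediction step of such a Markov approach can be sketched with a toy three-state chain. The states s1 (normal), s2 (degraded) and s3 (failed) and the monthly transition matrix below are invented stand-ins, not the paper's data:

```python
# Toy three-state Markov sketch of the ATCAS prediction idea: states
# s1 (normal), s2 (degraded), s3 (failed) with an invented monthly
# transition matrix; the paper's actual matrix would come from the
# 36 months of failure data.
P = [[0.95, 0.04, 0.01],   # from s1
     [0.10, 0.85, 0.05],   # from s2
     [0.20, 0.00, 0.80]]   # from s3 (repair returns the system to s1)

def step(dist, P):
    # one month of evolution: new_j = sum_i dist_i * P[i][j]
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]     # start in the fully working state s1
for _ in range(36):
    dist = step(dist, P)

print(f"after 36 months: P(s1) = {dist[0]:.3f}, "
      f"availability = {dist[0] + dist[1]:.3f}")
```

After 36 steps the distribution has essentially converged to the chain's steady state, which is the long-run availability figure such a model predicts.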

  17. Measuring Fidelity and Adaptation: Reliability of an Instrument for School-Based Prevention Programs.

    PubMed

    Bishop, Dana C; Pankratz, Melinda M; Hansen, William B; Albritton, Jordan; Albritton, Lauren; Strack, Joann

    2014-06-01

    There is a need to standardize methods for assessing fidelity and adaptation. Such standardization would allow program implementation to be examined in a manner that will be useful for understanding the moderating role of fidelity in dissemination research. This article describes a method for collecting data about fidelity of implementation for school-based prevention programs, including measures of adherence, quality of delivery, dosage, participant engagement, and adaptation. We report about the reliability of these methods when applied by four observers who coded video recordings of teachers delivering All Stars, a middle school drug prevention program. Interrater agreement for scaled items was assessed for an instrument designed to evaluate program fidelity. Results indicated sound interrater reliability for items assessing adherence, dosage, quality of teaching, teacher understanding of concepts, and program adaptations. The interrater reliability for items assessing potential program effectiveness, classroom management, achievement of activity objectives, and adaptation valences was improved by dichotomizing the response options for these items. The item that assessed student engagement demonstrated only modest interrater reliability and was not improved through dichotomization. Several coder pairs were discordant on items that overall demonstrated good interrater reliability. Proposed modifications to the coding manual and protocol are discussed.
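A plain Cohen's kappa is the kind of chance-corrected agreement statistic behind interrater results like these (the article itself reports weighted and dichotomized variants); the two raters' category labels below are made up:

```python
from collections import Counter

# Cohen's kappa for two raters over categorical codes: observed
# agreement corrected for the agreement expected by chance.
def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2  # chance
    return (po - pe) / (1.0 - pe)

# invented dichotomized ratings of eight observed sessions
rater1 = ["high", "high", "low", "low", "high", "low", "high", "low"]
rater2 = ["high", "high", "low", "high", "high", "low", "high", "low"]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")
```

Dichotomizing response options, as the article did for several items, tends to raise kappa because it removes near-miss disagreements between adjacent scale points.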

  18. Sequential optimization with particle splitting-based reliability assessment for engineering design under uncertainties

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaotian; Pan, Rong; Sun, Qing

    2014-08-01

    The evaluation of probabilistic constraints plays an important role in reliability-based design optimization. Traditional simulation methods such as Monte Carlo simulation can provide highly accurate results, but they are often computationally intensive to implement. To improve the computational efficiency of the Monte Carlo method, this article proposes a particle splitting approach, a rare-event simulation technique that evaluates probabilistic constraints. The particle splitting-based reliability assessment is integrated into the iterative steps of design optimization. The proposed method provides an enhancement of subset simulation by increasing sample diversity and producing a stable solution. This method is further extended to address the problem with multiple probabilistic constraints. The performance of the particle splitting approach is compared with the most probable point based method and other approximation methods through examples.
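The multilevel idea behind splitting and subset simulation can be shown on a one-dimensional toy problem, where the conditional samples can be drawn exactly via an inverse CDF; real subset simulation uses MCMC for the conditional sampling in high dimensions. The levels and sample size below are our choices:

```python
import random
from statistics import NormalDist

# Splitting/subset-simulation sketch for the rare event {Z > 3}:
# P(Z > 3) = P(Z > 1) * P(Z > 2 | Z > 1) * P(Z > 3 | Z > 2).
# Each conditional is estimated from samples of a truncated normal,
# drawn exactly here via the inverse CDF (a luxury of this 1-D toy).
random.seed(1)
nd = NormalDist()
levels = [1.0, 2.0, 3.0]
N = 20000                       # samples per level

p, lo = 1.0, None
for b in levels:
    if lo is None:
        samples = [random.gauss(0.0, 1.0) for _ in range(N)]
    else:
        u0 = nd.cdf(lo)         # sample Z | Z > lo by inverse CDF
        samples = [nd.inv_cdf(u0 + (1.0 - u0) * random.random())
                   for _ in range(N)]
    p *= sum(s > b for s in samples) / N
    lo = b

exact = 1.0 - nd.cdf(3.0)
print(f"splitting estimate {p:.2e} vs exact {exact:.2e}")
```

Because each conditional probability is moderate (roughly 0.06-0.16 here), far fewer samples are needed than crude Monte Carlo would require for the same rare-event accuracy.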

  19. Reliability of high power diode laser systems based on single emitters

    NASA Astrophysics Data System (ADS)

    Leisher, Paul; Reynolds, Mitch; Brown, Aaron; Kennedy, Keith; Bao, Ling; Wang, Jun; Grimshaw, Mike; DeVito, Mark; Karlsen, Scott; Small, Jay; Ebert, Chris; Martinsen, Rob; Haden, Jim

    2011-03-01

    Diode laser modules based on arrays of single emitters offer a number of advantages over bar-based solutions including enhanced reliability, higher brightness, and lower cost per bright watt. This approach has enabled a rapid proliferation of commercially available high-brightness fiber-coupled diode laser modules. Incorporating ever-greater numbers of emitters within a single module offers a direct path for power scaling while simultaneously maintaining high brightness and minimizing overall cost. While reports of long lifetimes for single emitter diode laser technology are widespread, the complex relationship between the standalone chip reliability and package-induced failure modes, as well as the impact of built-in redundancy offered by multiple emitters, are not often discussed. In this work, we present our approach to the modeling of fiber-coupled laser systems based on single-emitter laser diodes.
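The built-in redundancy mentioned above can be illustrated with a simple k-of-n binomial model; the emitter count, spec threshold and the assumption of independent, identical emitter survival are all ours, not the authors':

```python
from math import comb

# Back-of-envelope redundancy sketch: a module with n single emitters
# still meets spec if at least k survive; emitter survival probability
# p is assumed independent and identical (invented numbers).
def module_reliability(n, k, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 19 emitters, spec met with any 17, each emitter 98% reliable
print(f"{module_reliability(19, 17, 0.98):.6f}")
```

Allowing even two emitters' worth of margin makes the module markedly more reliable than requiring every emitter to survive, which is the redundancy benefit the abstract alludes to.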

  20. Reliability of Therapist Self-Report on Treatment Targets and Focus in Family-Based Intervention

    PubMed Central

    Hogue, Aaron; Dauber, Sarah; Henderson, Craig E.; Liddle, Howard A.

    2013-01-01

    Reliable therapist-report methods appear to be an essential component of quality assurance procedures to support adoption of evidence-based practices in usual care, but studies have found weak correspondence between therapist and observer ratings of treatment techniques. This study examined therapist reliability and accuracy in rating intervention target (i.e., session participants) and focus (i.e., session content) in a manual-guided, family-based preventive intervention implemented with 50 inner-city adolescents at risk for substance use. A total of 106 sessions selected from three phases of treatment were rated via post-session self-report by the participating therapist and also via videotape by nonparticipant coders. Both groups estimated the amount of session time devoted to model-prescribed treatment targets (adolescent, parent, conjoint) and foci (family, school, peer, prosocial, drugs). Therapists demonstrated excellent reliability with coders for treatment targets and moderate to high reliability for treatment foci across the sample and within each phase. Also, therapists did not consistently overestimate their degree of activity with targets or foci. Implications of study findings for fidelity assessment in routine settings are discussed. PMID:24068479

  1. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, lognormal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used to describe a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current over the stress time; this is caused by the electromigration of oxygen vacancies (intrinsic defects).
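The two ingredients named in the abstract, a two-parameter Weibull life distribution and a voltage/temperature acceleration function, can be sketched together. The snippet uses the Prokopowicz-Vaskas-style power-law/Arrhenius form common for ceramic capacitors; the exponent n, activation energy Ea, test-condition scale life and the stress conditions are all invented for illustration, not NASA or measured values:

```python
from math import exp

# Hedged sketch: two-parameter Weibull reliability plus a
# Prokopowicz-Vaskas-style voltage/temperature acceleration factor.
# All numeric values are illustrative assumptions.
K_B = 8.617e-5            # Boltzmann constant, eV/K

def acceleration_factor(v_test, v_use, t_test, t_use, n=3.0, ea=1.1):
    # AF = (V_test/V_use)^n * exp(Ea/k * (1/T_use - 1/T_test))
    return (v_test / v_use) ** n * exp(ea / K_B * (1 / t_use - 1 / t_test))

def weibull_reliability(t, eta, beta):
    # R(t) = exp(-(t/eta)^beta)
    return exp(-((t / eta) ** beta))

af = acceleration_factor(v_test=200.0, v_use=50.0, t_test=398.0, t_use=358.0)
eta_use = 1000.0 * af      # assumed 1000 h scale life at test conditions
r_10yr = weibull_reliability(87600.0, eta_use, 2.0)   # 10 years = 87600 h
print(f"AF = {af:.0f}, R(10 years at use conditions) = {r_10yr:.4f}")
```

The acceleration factor is what lets a short, highly stressed test stand in for decades at use conditions; a different failure mode (catastrophic vs. slow degradation) would call for a different acceleration function, as the abstract notes.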

  2. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.

  3. Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism.

    PubMed

    Debats, Nienke B; Ernst, Marc O; Heuer, Herbert

    2017-04-01

    Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account for this phenomenon, thereby showing that the process behind it is similar to optimal integration of sensory information relating to single objects.
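The two numbered principles can be written down directly. A minimal inverse-variance (reliability-based) integration sketch, with made-up positions and variances for the hand and cursor estimates:

```python
# Inverse-variance cue combination: each cue is weighted by its
# reliability (1/variance), and the fused variance never exceeds
# either cue's variance. Numbers below are invented.
def integrate(x_hand, var_hand, x_cursor, var_cursor):
    w_hand = (1 / var_hand) / (1 / var_hand + 1 / var_cursor)
    w_cursor = 1.0 - w_hand
    x_hat = w_hand * x_hand + w_cursor * x_cursor
    var_hat = 1.0 / (1 / var_hand + 1 / var_cursor)
    return x_hat, var_hat

# a noisy hand estimate and a sharper cursor estimate
x_hat, var_hat = integrate(x_hand=0.0, var_hand=4.0,
                           x_cursor=10.0, var_cursor=1.0)
print(x_hat, var_hat)
```

With these numbers the fused estimate is pulled strongly toward the more reliable cursor cue, which is exactly the attraction pattern the experiments manipulate.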

  4. Reliability analysis of the solar array based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Jianing, Wu; Shaoze, Yan

    2011-07-01

    The solar array is an important device used in spacecraft, one that influences the quality of the spacecraft's in-orbit operation and even the success of its launch. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. The fault tree analysis (FTA) model is established according to the operating process of the mechanical system based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra, and the reliability of the solar array is calculated. The conclusion shows that the hinges are the most vital links between the solar arrays. By analyzing the structural importance (SI) of the hinge's FTA model, some fatal causes, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force, can be identified. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for the redesign of the solar array and for reliability growth planning.
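The Boolean top-event and importance calculations can be sketched on a toy fault tree. The tree, the basic-event probabilities, and the independence assumption below are ours for illustration, not the DFH-3 tree; importance is computed in the Birnbaum style rather than the paper's SI:

```python
from itertools import product

# Toy fault tree (NOT the paper's DFH-3 tree): the top event occurs if
# either hinge fails, or if the locking spring AND the seal both fail.
# Basic events are assumed independent, with invented probabilities.
def top(hinge1, hinge2, spring, seal):
    return hinge1 or hinge2 or (spring and seal)

P = {"hinge1": 0.01, "hinge2": 0.01, "spring": 0.05, "seal": 0.05}
NAMES = list(P)

def top_probability(fixed=None):
    # exact enumeration over the free basic events
    fixed = fixed or {}
    free = [k for k in NAMES if k not in fixed]
    prob = 0.0
    for states in product([False, True], repeat=len(free)):
        s = dict(fixed)
        s.update(zip(free, states))
        if top(**s):
            w = 1.0
            for k, v in zip(free, states):
                w *= P[k] if v else 1.0 - P[k]
            prob += w
    return prob

p_top = top_probability()
# Birnbaum importance: P(top | component failed) - P(top | component works)
importance = {k: top_probability({k: True}) - top_probability({k: False})
              for k in NAMES}
print(f"P(top) = {p_top:.5f}")
print("most important:", max(importance, key=importance.get))
```

In this toy tree each hinge sits directly under an OR gate, so its importance dwarfs that of the spring or seal, mirroring the paper's conclusion that the hinges are the most vital links.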

  5. Novel ring-based architecture for TWDM-PON with high reliability and flexible extensibility

    NASA Astrophysics Data System (ADS)

    Xiong, Yu; Sun, Peng; Li, Zhiqiang

    2017-02-01

    Time and wavelength division multiplexed passive optical network (TWDM-PON) was determined to be a primary solution for NG-PON2 by the Full Service Access Network (FSAN) group in 2012. Since then, TWDM-PON has been applied to a wider set of applications, including those that are sensitive to outages and require flexible expansion. Protection techniques offering reliability and flexibility should therefore be studied to address these needs. In this paper, we propose a novel ring-based architecture for TWDM-PON. The architecture provides a reliable ring protection scheme against a fiber fault occurring on the main ring (MR), sub-ring (SR) or last mile ring (LMR). In addition, we exploit the extended node (EN) to realize convenient and smooth network expansion for flexible extensibility. Thus, more remote nodes (RNs) and optical network units (ONUs) can access this architecture through the EN. Moreover, to further improve the reliability of the network, we design a 1:1 protection scheme against a fault in the connecting fiber between the RN and the EN. The results show that the proposed architecture has a recovery time of 17 ms under protection mode, and the reliability of the network is shown to be greatly improved compared to the network without protection. As the number of ONUs increases, the average cost per ONU is gradually reduced. Finally, simulations verify the feasibility of the architecture.

  6. Web-based collection of expert opinion on routine scalp EEG: software development and interrater reliability.

    PubMed

    Halford, Jonathan J; Pressly, William B; Benbadis, Selim R; Tatum, William O; Turner, Robert P; Arain, Amir; Pritchard, Paul B; Edwards, Jonathan C; Dean, Brian C

    2011-04-01

    Computerized detection of epileptiform transients (ETs), characterized by interictal spikes and sharp waves in the EEG, has been a research goal for the last 40 years. A reliable method for detecting ETs would assist physicians in interpretation and improve efficiency in reviewing long-term EEG recordings. Computer algorithms developed thus far for detecting ETs are not as reliable as human experts, primarily due to the large number of false-positive detections. Comparing the performance of different algorithms is difficult because each study uses individual EEG test datasets. In this article, we present EEGnet, a distributed web-based platform for the acquisition and analysis of large-scale training datasets for comparison of different EEG ET detection algorithms. This software allows EEG scorers to log in through the web, mark EEG segments of interest, and categorize segments of interest using a conventional clinical EEG user interface. This software platform was used by seven board-certified academic epileptologists to score 40 short 30-second EEG segments from 40 patients, half containing ETs and half containing artifacts and normal variants. The software performance was adequate. Interrater reliability for marking the location of paroxysmal activity was low. Interrater reliability of marking artifacts and ETs was high and moderate, respectively.

  7. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    SciTech Connect

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector. However, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability, the upgraded system will be highly redundant. Here the ASICs will be replaced with Kintex-7 FPGAs from Xilinx. This, in addition to the use of multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Due to the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent. The circuitry and firmware must therefore be designed for sufficiently high reliability using redundancy and radiation-tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies, along with their integration in the demonstrator, are presented in the context of high-reliability protection against hardware malfunction and radiation-induced errors.

  8. Generalizability theory reliability of written expression curriculum-based measurement in universal screening.

    PubMed

    Keller-Margulis, Milena A; Mercer, Sterett H; Thomas, Erin L

    2016-09-01

The purpose of this study was to examine the reliability of written expression curriculum-based measurement (WE-CBM) in the context of universal screening from a generalizability theory framework. Students in second through fifth grade (n = 145) participated in the study. The sample included 54% female students, 49% White students, 23% African American students, 17% Hispanic students, 8% Asian students, and 3% of students identified as 2 or more races. Of the sample, 8% were English Language Learners and 6% were students receiving special education. Three WE-CBM probes were administered for 7 min each at 3 time points across 1 year. Writing samples were scored for commonly used WE-CBM metrics (e.g., correct minus incorrect word sequences; CIWS). Results suggest that nearly half the variance in WE-CBM is related to unsystematic error and that conventional screening procedures (i.e., the use of one 3-min sample) do not yield scores with adequate reliability for relative or absolute decisions about student performance. In most grades, three 3-min writing samples (or 2 longer duration samples) were required for adequate reliability for relative decisions, and three 7-min writing samples would not yield adequate reliability for relative decisions about within-year student growth. Implications and recommendations are discussed.
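
    The tradeoff the abstract describes, where reliability for relative decisions improves as more writing samples are averaged, follows directly from the generalizability coefficient. A minimal sketch (the variance components and function name here are illustrative stand-ins, not values from the study):

    ```python
    def g_coefficient(var_person, var_rel_error, n_samples):
        """Generalizability coefficient for relative decisions when averaging
        scores over n_samples probes: var_p / (var_p + var_rel / n)."""
        return var_person / (var_person + var_rel_error / n_samples)

    # If half the single-sample variance is unsystematic error (as the study
    # found), one sample yields a coefficient of 0.5; averaging three samples
    # raises it to 0.75.
    single = g_coefficient(1.0, 1.0, 1)   # 0.5
    triple = g_coefficient(1.0, 1.0, 3)   # 0.75
    ```

    Averaging more samples shrinks only the error term, which is why the study's screening recommendation is phrased in numbers of samples rather than a different metric.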

  9. Reliability-Based Analysis and Design Methods for Reinforced Concrete Protective Structures

    DTIC Science & Technology

    1993-04-01

    of the factors that contributes to airblast and resistance prediction error is assumed to be lognormally distributed. Errors in the PCDM airblast...structural resistance prediction error model is also assumed to be composed of three multiplicative factors: (1) a correction factor for actual material...material properties can be used to develop structural resistance prediction error models and reliability-based capacity factors. Prediction error models

  10. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
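
    The first stage above, a Markov reliability model whose parameters reflect expert-system performance, can be sketched as a discrete-time chain with an absorbing failure state. The states and transition probabilities below are invented for illustration, not taken from the paper:

    ```python
    def step(dist, P):
        """One discrete-time step of a Markov chain: new_dist[j] = sum_i dist[i]*P[i][j]."""
        n = len(P)
        return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

    # States: 0 = nominal, 1 = degraded (fault being mitigated), 2 = failed (absorbing).
    P = [
        [0.98, 0.015, 0.005],
        [0.00, 0.95,  0.05],
        [0.00, 0.00,  1.00],
    ]
    dist = [1.0, 0.0, 0.0]
    for _ in range(100):
        dist = step(dist, P)
    reliability = 1.0 - dist[2]  # probability of not having been absorbed after 100 steps
    ```

    In the paper's framework, entries such as the degraded-to-failed probability would be derived from the expert system's key performance parameters rather than fixed constants.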

  11. A Reliability and Validity of an Instrument to Evaluate the School-Based Assessment System: A Pilot Study

    ERIC Educational Resources Information Center

    Ghazali, Nor Hasnida Md

    2016-01-01

    A valid, reliable and practical instrument is needed to evaluate the implementation of the school-based assessment (SBA) system. The aim of this study is to develop and assess the validity and reliability of an instrument to measure the perception of teachers towards the SBA implementation in schools. The instrument is developed based on a…

  12. 76 FR 40722 - Granite Reliable Power, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-11

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Granite Reliable Power, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding of Granite Reliable Power, LLC's application for market-based rate...

  13. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now in widespread use, especially in the automotive industry, since they combine the mechanical structure and the electronic control circuit on one board. Their frequent use motivates the elaboration of virtual prototyping tools that can be applied in design optimization with the introduction of technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices, which is based on the theory of reliability-based robust design optimization. This takes into consideration the performance of a micro-device and its reliability assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with the meta-modeling technique. The described procedure is illustrated with an example of the optimization carried out for a finite element model of a micro-mirror. The multi-physics approach allowed the introduction of several physical phenomena to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by sensitivity analysis to establish the design and uncertain domains. The genetic algorithms fulfilled the defined optimization task effectively. The best discovered individuals are characterized by a minimized value of the multi-criteria objective function while simultaneously satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced via a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.

  14. Reliability-based structural optimization using response surface approximations and probabilistic sufficiency factor

    NASA Astrophysics Data System (ADS)

    Qu, Xueyong

Uncertainties exist practically everywhere from structural design to manufacturing, product lifetime service, and maintenance. Uncertainties can be introduced by errors in modeling and simulation; by manufacturing imperfections (such as variability in material properties and structural geometric dimensions); and by variability in loading. Structural design by safety factors using nominal values without considering uncertainties may lead to designs that are either unsafe, or too conservative and thus not efficient. The focus of this dissertation is reliability-based design optimization (RBDO) of composite structures. Uncertainties are modeled by the probabilistic distributions of random variables. Structural reliability is evaluated in terms of the probability of failure. RBDO minimizes cost such as structural weight subject to reliability constraints. Since engineering structures usually have multiple failure modes, Monte Carlo simulation (MCS) was employed to calculate the system probability of failure. Response surface (RS) approximation techniques were used to solve the difficulties associated with MCS. The high computational cost of a large number of MCS samples was alleviated by analysis RS, and numerical noise in the results of MCS was filtered out by design RS. RBDO of composite laminates is investigated for use in hydrogen tanks in cryogenic environments. The major challenge is to reduce the large residual strains developed due to thermal mismatch between matrix and fibers while maintaining the load carrying capacity. RBDO is performed to provide laminate designs, quantify the effects of uncertainties on the optimum weight, and identify those parameters that have the largest influence on optimum design. Studies of weight and reliability tradeoffs indicate that the most cost-effective measure for reducing weight and increasing reliability is quality control. A probabilistic sufficiency factor (PSF) approach was developed to improve the computational
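
    The probability-of-failure estimate at the core of the MCS step can be illustrated with a toy limit state g = R - S (capacity minus load). The distributions, sample size, and function names here are arbitrary stand-ins for the dissertation's laminate models:

    ```python
    import random

    def prob_failure_mcs(limit_state, sample, n=100_000, seed=0):
        """Estimate P(g(X) <= 0) by crude Monte Carlo simulation."""
        rng = random.Random(seed)
        failures = sum(1 for _ in range(n) if limit_state(sample(rng)) <= 0)
        return failures / n

    # Toy problem: capacity R ~ N(10, 1), load S ~ N(7, 1); failure when R - S <= 0.
    def sample(rng):
        return (rng.gauss(10.0, 1.0), rng.gauss(7.0, 1.0))

    def g(x):
        r, s = x
        return r - s

    pf = prob_failure_mcs(g, sample, n=200_000)  # roughly 0.017 for this toy case
    ```

    The sampling noise in such estimates is exactly what the design response surfaces mentioned in the abstract are used to filter out.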

  15. Reliability and validity of the NeuroCognitive Performance Test, a web-based neuropsychological assessment

    PubMed Central

    Morrison, Glenn E.; Simone, Christa M.; Ng, Nicole F.; Hardy, Joseph L.

    2015-01-01

    The NeuroCognitive Performance Test (NCPT) is a brief, repeatable, web-based cognitive assessment platform that measures performance across several cognitive domains. The NCPT platform is modular and includes 18 subtests that can be arranged into customized batteries. Here we present normative data from a sample of 130,140 healthy volunteers for an NCPT battery consisting of 8 subtests. Participants took the NCPT remotely and without supervision. Factor structure and effects of age, education, and gender were evaluated with this normative dataset. Test-retest reliability was evaluated in a subset of participants who took the battery again an average of 78.8 days later. The eight NCPT subtests group into 4 putative cognitive domains, have adequate to good test-retest reliability, and are sensitive to expected age- and education-related cognitive effects. Concurrent validity to standard neuropsychological tests was demonstrated in 73 healthy volunteers. In an exploratory analysis the NCPT battery could differentiate those who self-reported Mild Cognitive Impairment or Alzheimer's disease from matched healthy controls. Overall these results demonstrate the reliability and validity of the NCPT battery as a measure of cognitive performance and support the feasibility of web-based, unsupervised testing, with potential utility in clinical and research settings. PMID:26579035

  17. Summary of Research on Reliability Criteria-Based Flight System Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Belcastro, Christine (Technical Monitor)

    2002-01-01

    This paper presents research on the reliability assessment of adaptive flight control systems. The topics include: 1) Overview of Project Focuses; 2) Reliability Analysis; and 3) Design for Reliability. This paper is presented in viewgraph form.

  18. Predictive models of safety based on audit findings: Part 1: Model development and reliability.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-03-01

    This consecutive study was aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework based on HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of errors categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2.

  19. A reliable and sensitive bead-based fluorescence assay for identification of nucleic acid sequences

    NASA Astrophysics Data System (ADS)

    Klamp, Tobias; Yahiatène, Idir; Lampe, André; Schüttpelz, Mark; Sauer, Markus

    2011-03-01

The sensitive and rapid detection of pathogenic DNA is of tremendous importance in the field of diagnostics. We demonstrate the ability of detecting and quantifying single- and double-stranded pathogenic DNA with picomolar sensitivity in a bead-based fluorescence assay. Selecting appropriate capturing and detection sequences enables rapid (2 h) and reliable DNA quantification. We show that synthetic sequences of S. pneumoniae and M. luteus can be quantified in very small sample volumes (20 μL) across a linear detection range over four orders of magnitude from 1 nM to 1 pM, using a miniaturized wide-field fluorescence microscope without amplification steps. The method offers single-molecule detection sensitivity without using complex setups and thus serves as a simple, robust, and reliable method for the sensitive detection of DNA and RNA sequences.

  20. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
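
    The quantification step the patent describes, assessing reliability from the minimal cut sets produced by the search algorithm, can be approximated for small link-failure probabilities by the standard rare-event bound: sum, over cut sets, the probability that every link in the cut fails. The ring network below is a made-up example, not one from the patent:

    ```python
    from itertools import combinations

    def unreliability_bound(min_cut_sets, p_fail):
        """First-order (rare-event) upper bound on all-terminal unreliability:
        sum over minimal cut sets of Prod(p_fail[link] for link in cut)."""
        total = 0.0
        for cut in min_cut_sets:
            prob = 1.0
            for link in cut:
                prob *= p_fail[link]
            total += prob
        return total

    # Toy 4-node ring a-b-c-d-a: every pair of links is a minimal cut set.
    links = ["ab", "bc", "cd", "da"]
    cuts = [set(c) for c in combinations(links, 2)]
    p = {link: 0.01 for link in links}
    q = unreliability_bound(cuts, p)  # 6 cut sets x (0.01)^2 = 6e-4
    ```

    The bound ignores overlap between cut-set events, which is why it only holds as an approximation when link failures are rare; exact quantification requires inclusion-exclusion or the segmentation technique described in the patent.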

  1. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    NASA Technical Reports Server (NTRS)

Venter, Gerhard; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.

  2. Differential Evolution Based Intelligent System State Search Method for Composite Power System Reliability Evaluation

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, Ashok; Kumarappan, N.

    2015-09-01

This paper presents a new approach for evaluating the reliability indices of a composite power system that adopts the binary differential evolution (BDE) algorithm in the search mechanism to select the system states. These states, also called dominant states, have large state probability and higher loss of load curtailment necessary to maintain real power balance. A chromosome of the BDE algorithm represents a system state. BDE is not applied in its traditional role of optimizing a non-linear objective function, but is used as a tool for exploring a larger number of dominant states by producing new chromosomes, mutant vectors and trial vectors based on the fitness function. The searched system states are used to evaluate annualized system and load point reliability indices. The proposed search methodology is applied to the RBTS and IEEE-RTS test systems and the results are compared with other approaches. This approach evaluates indices comparable to those of existing methods while analyzing fewer system states.

  3. Advanced System-Level Reliability Analysis and Prediction with Field Data Integration

    DTIC Science & Technology

    2011-09-01

    innovative life prediction methodologies that incorporate emerging probabilistic lifing techniques as well as advanced physics-of- failure...often based on simplifying assumptions and their predictions may suffer from different sources of uncertainty. For instance, one source of...system level, most modeling approaches focus on life prediction for single components and fail to account for the interdependencies that may result

  4. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how BME capacitor construction and microstructure affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 μm, which is much less than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures, characterized by avalanche breakdown, were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10^5 years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors. The number of stacked grains in a dielectric layer appears to play a significant role in determining BME capacitor reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25V. This may suggest that the number of grains per dielectric layer is more critical than the

  5. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    NASA Astrophysics Data System (ADS)

    Michaelides, Stylianos

    -down devices without the underfill, based on the thorough understanding of the failure modes. Also, practical design guidelines for material, geometry and process parameters for reliable flip-chip devices have been developed.

  6. WEAMR — A Weighted Energy Aware Multipath Reliable Routing Mechanism for Hotline-Based WSNs

    PubMed Central

    Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung

    2013-01-01

Reliable source-to-sink communication is the most important factor for an efficient routing protocol, especially in military, healthcare and disaster recovery applications. We present weighted energy aware multipath reliable routing (WEAMR), a novel energy-aware multipath routing protocol which utilizes hotline-assisted routing to meet such requirements for mission-critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on weighted cost calculation and intelligently selects the best possible paths for data transmissions. The path cost calculation considers the end-to-end number of hops, latency and the minimum energy node value in the path. In case of path failure, path recalculation is done efficiently with minimum latency and control packet overhead. Our evaluation shows that our proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver. PMID:23669714

  7. Methods for assessing the reliability of quality of life based on SF-36.

    PubMed

    Pan, Yi; Barnhart, Huiman X

    2016-12-30

The 36-Item Short Form Health Survey (SF-36) has been widely used to measure quality of life. Reliability has traditionally been assessed by the intraclass correlation coefficient (ICC), which is theoretically equivalent to Cronbach's alpha. However, it is a scaled assessment of reliability and does not indicate the extent of differences due to measurement error. In this paper, the total deviation index (TDI) is used to interpret the magnitude of measurement error for SF-36, and a new formula for computing TDI for the average item score is proposed. The interpretation based on TDI is simple and intuitive: it provides, with a high probability, the expected difference attributable to measurement error. We also show that a high value of ICC does not always correspond to a smaller magnitude of measurement error, which indicates that ICC can sometimes provide a false sense of high reliability. The methodology is illustrated with reported SF-36 data from the literature and with real data from the Arthritis Self-Management Program. Copyright © 2016 John Wiley & Sons, Ltd.
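
    The paper's central point, that a high ICC can coexist with a practically large measurement error, is easy to reproduce numerically. The sketch below assumes normally distributed test-retest differences (a common TDI assumption); the variance numbers are illustrative, not from the paper:

    ```python
    from statistics import NormalDist

    def icc(var_subject, var_error):
        """Intraclass correlation: between-subject variance over total variance."""
        return var_subject / (var_subject + var_error)

    def tdi(var_error, coverage=0.90, mean_diff=0.0):
        """Total deviation index: the bound capturing `coverage` of test-retest
        differences, assuming D ~ Normal(mean_diff, 2*var_error)."""
        var_diff = mean_diff ** 2 + 2.0 * var_error
        return NormalDist().inv_cdf((1.0 + coverage) / 2.0) * var_diff ** 0.5

    # A wide between-subject spread inflates ICC to ~0.96 even though 10% of
    # retests still differ by more than ~4.65 points due to measurement error.
    r = icc(100.0, 4.0)
    d = tdi(4.0)
    ```

    The same error variance against a narrow population (say var_subject = 10) would drop the ICC sharply while leaving the TDI unchanged, which is the false-sense-of-reliability effect the abstract describes.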

  8. Probabilistic and structural reliability analysis of laminated composite structures based on the IPACS code

    NASA Technical Reports Server (NTRS)

    Sobel, Larry; Buttitta, Claudio; Suarez, James

    1993-01-01

Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  9. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2016-03-01

In the recent past, a risk and reliability centered maintenance (RRCM) framework was introduced with a shift in methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper, the authors explain a novel methodology for quantifying risk and ranking critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the RPN values of items of a system, and the maintenance significant precipitating factors (MSPF) of items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, called the hazardous risk coefficient, covers anticipated hazards which may occur in the future; this risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence 'random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment shall be useful in optimizing financial losses and the timing of maintenance actions.

  10. A reliable transmission protocol for ZigBee-based wireless patient monitoring.

    PubMed

    Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung

    2012-01-01

Patient monitoring systems are gaining importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, existing systems usually use broadcast or multicast schemes to increase the reliability of signal transmission; however, both schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as a destination to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of the reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next-generation wireless wide area network technology, worldwide interoperability for microwave access, to achieve real-time patient monitoring.

  11. Reliability of trunk shape measurements based on 3-D surface reconstructions

    PubMed Central

    Cheriet, Farida; Danserau, Jean; Ronsky, Janet; Zernicke, Ronald F.; Labelle, Hubert

    2007-01-01

This study aimed to estimate the reliability of 3-D trunk surface measurements for the characterization of external asymmetry associated with scoliosis. Repeated trunk surface acquisitions using the Inspeck system (Inspeck Inc., Montreal, Canada), with two different postures A (anatomical position) and B ("clavicle" position), were obtained from patients attending a scoliosis clinic. For each acquisition, a 3-D model of the patient's trunk was built and a series of measurements was computed. For each measure and posture, intraclass correlation coefficients (ICC) were obtained using a bivariate analysis of variance, and the smallest detectable difference was calculated. For posture A, reliability was fair to excellent with ICC from 0.91 to 0.99 (0.85 to 0.99 for the lower bound of the 95% confidence interval). For posture B, the ICC was 0.85 to 0.98 (0.74 to 0.99 for the lower bound of the 95% confidence interval). The smallest statistically significant difference was 2.5° for the maximal back surface rotation and 1.5° for the maximal trunk rotation. Apparent global asymmetry and axial trunk rotation indices were relatively robust to changes in arm posture, both in terms of mean values and within-subject variations, and also showed good reliability. Computing measurements from cross-sectional analysis enabled a reduction in errors compared to measurements based on markers' positions. Although not yet sensitive enough to detect small changes for monitoring of natural curve progression, trunk surface analysis can help document the external asymmetry associated with different types of spinal curves as well as the cosmetic improvement obtained after surgical interventions. The anatomical posture is slightly more reliable as it allows better coverage of the trunk surface by the digitizing system. PMID:17701228

  12. Reliability of team-based self-monitoring in critical events: a pilot study

    PubMed Central

    2013-01-01

    Background Teamwork is a critical component during critical events. Assessment is mandatory for remediation and to target training programmes for observed performance gaps. Methods The primary purpose was to test the feasibility of team-based self-monitoring of crisis resource management with a validated teamwork assessment tool. A secondary purpose was to assess item-specific reliability and content validity in order to develop a modified context-optimised assessment tool. We conducted a prospective, single-centre study to assess team-based self-monitoring of teamwork after in-situ inter-professional simulated critical events by comparison with an assessment by observers. The Mayo High Performance Teamwork Scale (MHPTS) was used as the assessment tool with evaluation of internal consistency, item-specific consensus estimates for agreement between participating teams and observers, and content validity. Results 105 participants and 58 observers completed the MHPTS after a total of 16 simulated critical events over 8 months. Summative internal consistency of the MHPTS calculated as Cronbach’s alpha was acceptable with 0.712 for observers and 0.710 for participants. Overall consensus estimates for dichotomous data (agreement/non-agreement) was 0.62 (Cohen’s kappa; IQ-range 0.31-0.87). 6/16 items had excellent (kappa > 0.8) and 3/16 good reliability (kappa > 0.6). Short questions concerning easy to observe behaviours were more likely to be reliable. The MHPTS was modified using a threshold for good reliability of kappa > 0.6. The result is a 9 item self-assessment tool (TeamMonitor) with a calculated median kappa of 0.86 (IQ-range: 0.67-1.0) and good content validity. Conclusions Team-based self-monitoring with the MHPTS to assess team performance during simulated critical events is feasible. A context-based modification of the tool is achievable with good internal consistency and content validity. Further studies are needed to investigate if team-based
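
    The item-level agreement statistic reported above, Cohen's kappa for dichotomous (agreement/non-agreement) codes, reduces to a few lines. The two rating vectors below are invented for illustration:

    ```python
    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters using 0/1 codes:
        kappa = (p_observed - p_expected) / (1 - p_expected)."""
        assert len(rater_a) == len(rater_b)
        n = len(rater_a)
        p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        p_yes_a = sum(rater_a) / n
        p_yes_b = sum(rater_b) / n
        # Expected agreement if both raters coded independently at their base rates.
        p_exp = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
        return (p_obs - p_exp) / (1 - p_exp)

    a = [1, 1, 1, 0, 0, 1, 0, 1]  # e.g., observer codes (hypothetical)
    b = [1, 1, 0, 0, 0, 1, 0, 1]  # e.g., team self-assessment codes (hypothetical)
    k = cohens_kappa(a, b)  # 0.75
    ```

    A threshold of kappa > 0.6, as used to build the 9-item TeamMonitor, would retain an item with this level of agreement.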

  13. Validity and reliability of an IMU-based method to detect APAs prior to gait initiation.

    PubMed

    Mancini, Martina; Chiari, Lorenzo; Holmstrom, Lars; Salarian, Arash; Horak, Fay B

    2016-01-01

    Anticipatory postural adjustments (APAs) prior to gait initiation have been largely studied in traditional, laboratory settings using force plates under the feet to characterize the displacement of the center of pressure. However, clinical trials and clinical practice would benefit from a portable, inexpensive method for characterizing APAs. Therefore, the main objectives of this study were (1) to develop a novel, automatic IMU-based method to detect and characterize APAs during gait initiation and (2) to measure its test-retest reliability. Experiment I was carried out in the laboratory to determine the validity of the IMU-based method in 10 subjects with PD (OFF medication) and 12 control subjects. Experiment II was carried out in the clinic, to determine test-retest reliability of the IMU-based method in a different set of 17 early-to-moderate, treated subjects with PD (tested ON medication) and 17 age-matched control subjects. Results showed that gait initiation characteristics (both APAs and 1st step) detected with our novel method were significantly correlated to the characteristics calculated with a force plate and motion analysis system. The size of APAs measured with either inertial sensors or force plate was significantly smaller in subjects with PD than in control subjects (p<0.05). Test-retest reliability for the gait initiation characteristics measured with inertial sensors was moderate-to-excellent (0.56

  14. Validity and Reliability of an IMU-based Method to Detect APAs Prior to Gait Initiation

    PubMed Central

    Chiari, Lorenzo; Holmstrom, Lars; Salarian, Arash; Horak, Fay B.

    2015-01-01

    Anticipatory postural adjustments (APAs) prior to gait initiation have been largely studied in traditional, laboratory settings using force plates under the feet to characterize the displacement of the center of pressure. However, clinical trials and clinical practice would benefit from a portable, inexpensive method for characterizing APAs. Therefore, the main objectives of this study were: 1) to develop a novel, automatic IMU-based method to detect and characterize APAs during gait initiation and 2) to measure its test-retest reliability. Experiment I was carried out in the laboratory to determine the validity of the IMU-based method in ten subjects with PD (OFF medication) and 12 control subjects. Experiment II was carried out in the clinic, to determine test-retest reliability of the IMU-based method in a different set of 17 early-to-moderate, treated subjects with PD (tested ON medication) and 17 age-matched control subjects. Results showed that gait initiation characteristics (both APAs and 1st step) detected with our novel method were significantly correlated to the characteristics calculated with a force plate and motion analysis system. The size of APAs measured with either inertial sensors or force plate was significantly smaller in subjects with PD than in control subjects (p<0.05). Test-retest reliability for the gait initiation characteristics measured with inertial sensors was moderate-to-excellent (.56

  15. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang)

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) Time-dependent part is a statistical distribution; 2) Time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.
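    The decomposition described above can be written as R(t) = R0 · S(t), with R0 the time-independent initial reliability and S(t) the time-dependent statistical distribution. A minimal sketch, assuming a Weibull form for the time-dependent part (all parameter values are invented for illustration):

```python
from math import exp

def reliability(t, r0, eta, beta):
    """Overall reliability: initial reliability times Weibull survival."""
    return r0 * exp(-((t / eta) ** beta))

r0 = 0.999            # time-independent part: reliability at t = 0
eta, beta = 1e5, 3.6  # Weibull scale (hours) and shape, illustrative only

print(reliability(0.0, r0, eta, beta))  # at t = 0 this is exactly r0
```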

  16. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) Time-dependent part is a statistical distribution; 2) Time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.

  17. Effect of Clinically Discriminating, Evidence-Based Checklist Items on the Reliability of Scores from an Internal Medicine Residency OSCE

    ERIC Educational Resources Information Center

    Daniels, Vijay J.; Bordage, Georges; Gierl, Mark J.; Yudkowsky, Rachel

    2014-01-01

    Objective structured clinical examinations (OSCEs) are used worldwide for summative examinations but often lack acceptable reliability. Research has shown that reliability of scores increases if OSCE checklists for medical students include only clinically relevant items. Also, checklists are often missing evidence-based items that high-achieving…

  18. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue

    PubMed Central

    Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.

    2012-01-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed under the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979

  19. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue.

    PubMed

    Beaurepaire, P; Valdebenito, M A; Schuëller, G I; Jensen, H A

    2012-05-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed under the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress.

  20. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit for NetLogo by a different author, this paper exemplifies how model replication exercises provide unique opportunities for model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  1. Workplace-based assessment of communication skills: A pilot project addressing feasibility, acceptance and reliability.

    PubMed

    Weyers, Simone; Jemi, Iman; Karger, André; Raski, Bianca; Rotthoff, Thomas; Pentzek, Michael; Mortsiefer, Achim

    2016-01-01

    Background: Imparting communication skills has been given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether or not physicians view the method as feasible and how high acceptance is among students. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students held two consultations each with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated the student's performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using BGR. For qualitative and quantitative analysis, information on how physicians and students viewed feasibility and their levels of acceptance was collected in written form in a partially standardized manner. To test for reliability, the test-retest reliability was calculated for both of the overall evaluations given by each rater. The inter-rater reliability was determined for the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines. Its significant value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately. The degree of uniformity among the three raters in respect to the individual consultations is low. 
Discussion: Within the scope of this pilot project, only a small sample of physicians and

  2. Workplace-based assessment of communication skills: A pilot project addressing feasibility, acceptance and reliability

    PubMed Central

    Weyers, Simone; Jemi, Iman; Karger, André; Raski, Bianca; Rotthoff, Thomas; Pentzek, Michael; Mortsiefer, Achim

    2016-01-01

    Background: Imparting communication skills has been given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether or not physicians view the method as feasible and how high acceptance is among students. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students held two consultations each with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated the student’s performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using BGR. For qualitative and quantitative analysis, information on how physicians and students viewed feasibility and their levels of acceptance was collected in written form in a partially standardized manner. To test for reliability, the test-retest reliability was calculated for both of the overall evaluations given by each rater. The inter-rater reliability was determined for the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines. Its significant value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately. The degree of uniformity among the three raters in respect to the individual consultations is low. 
Discussion: Within the scope of this pilot project, only a small sample of physicians and

  3. Physics of Failure Analysis of Xilinx Flip Chip CCGA Packages: Effects of Mission Environments on Properties of LP2 Underfill and ATI Lid Adhesive Materials

    NASA Technical Reports Server (NTRS)

    Suh, Jong-ook

    2013-01-01

    The Xilinx Virtex 4QV and 5QV (V4 and V5) are next-generation field-programmable gate arrays (FPGAs) for space applications. However, there have been concerns within the space community regarding the non-hermeticity of V4/V5 packages; polymeric materials such as the underfill and lid adhesive will be directly exposed to the space environment. In this study, reliability concerns associated with the non-hermeticity of V4/V5 packages were investigated by studying properties and behavior of the underfill and the lid adhesive materials used in V4/V5 packages.

  4. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions

    PubMed Central

    Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-01-01

    Background The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users’ satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. Objective The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). Methods The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). At first, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Results Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=−.35, P<.001) and perceived stress (r=−.48, P<.001) demonstrated the construct validity of the scale. Conclusions The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user’s general

  5. The Accessibility, Usability, and Reliability of Chinese Web-Based Information on HIV/AIDS

    PubMed Central

    Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan

    2016-01-01

    Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: “HIV/AIDS”, “symptoms”, and “treatment”, and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score of the LIDA tool was 63.7%; the mean score of accessibility, usability, and reliability was 82.2%, 71.5%, and 27.3%, respectively. Of the top 15 sites according to the LIDA score, the mean DISCERN score was calculated at 43.1 (95% confidence intervals (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites; whereas commercial websites were more likely to be found in the first 20 links obtained from each search engine than the noncommercial websites. Conclusions: In general, the HIV/AIDS related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475

  6. Bifactor Modeling and the Estimation of Model-Based Reliability in the WAIS-IV.

    PubMed

    Gignac, Gilles E; Watkins, Marley W

    2013-09-01

    Previous confirmatory factor analytic research that has examined the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) has endorsed either higher order models or oblique factor models that tend to amalgamate both general factor and index factor sources of systematic variance. An alternative model that has not yet been examined for the WAIS-IV is the bifactor model. Bifactor models allow all subtests to load onto both the general factor and their respective index factor directly. Bifactor models are also particularly amenable to the estimation of model-based reliabilities for both global composite scores (ω_h) and subscale/index scores (ω_s). Based on the WAIS-IV normative sample correlation matrices, a bifactor model that did not include any index factor cross loadings or correlated residuals was found to be better fitting than the conventional higher order and oblique factor models. Although the ω_h estimate associated with the full scale intelligence quotient (FSIQ) scores was respectably high (.86), the ω_s estimates associated with the WAIS-IV index scores were very low (.13 to .47). The results are interpreted in the context of the benefits of a bifactor modeling approach. Additionally, in light of the very low levels of unique internal consistency reliabilities associated with the index scores, it is contended that clinical index score interpretations are probably not justifiable.
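    The model-based reliability ω_h mentioned above has a simple closed form given bifactor loadings: the squared sum of general-factor loadings divided by total composite variance. A sketch with invented loadings (not WAIS-IV estimates):

```python
def omega_h(general, groups, errors):
    """Share of total composite variance attributable to the general factor."""
    common_g = sum(general) ** 2                 # general-factor variance
    common_s = sum(sum(g) ** 2 for g in groups)  # index-factor variance
    return common_g / (common_g + common_s + sum(errors))

general  = [0.70, 0.60, 0.65, 0.55]      # general-factor loadings
specific = [0.30, 0.25, 0.20, 0.35]      # each subtest's index-factor loading
groups   = [specific[:2], specific[2:]]  # two index factors
errors   = [1 - g**2 - s**2 for g, s in zip(general, specific)]

print(round(omega_h(general, groups, errors), 2))  # → 0.7
```

    Replacing the numerator with one index factor's squared loading sum (and restricting to its subtests) gives the corresponding ω_s.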

  7. The Soft Rock Socketed Monopile with Creep Effects - A Reliability Approach based on Wavelet Neural Networks

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Tomanovic, Zvonko; Zivaljevic, Slobodan

    2016-09-01

    In the present study, a numerical model of a pile embedded in marl, described by a time-dependent constitutive model based on laboratory tests, is proposed. The solution extends the state of knowledge for a monopile loaded at its head by a horizontal force whose value varies randomly in time. The investigated reliability problem is defined as the union of failure events, each given by excessive maximum horizontal displacement of the pile head during a load period. Abaqus has been used for modeling of the presented task, with a two-layered viscoplastic model for the marl. The mechanical parameters of both parts of the model, plastic and rheological, were calibrated against the creep laboratory test results. An important aspect of the problem is the reliability analysis of a monopile in a complex environment under random load sequences, which helps in understanding the role of viscosity in the behaviour of structures founded on rock. Due to the lack of analytical solutions, the computations were performed with the response surface method in conjunction with a wavelet neural network, a method recommended for time-sequence processing and the description of nonlinear phenomena.

  8. TOPICAL REVIEW: Mechanistically based probability modelling, life prediction and reliability assessment

    NASA Astrophysics Data System (ADS)

    Wei, Robert P.; Harlow, D. Gary

    2005-01-01

    Life prediction and reliability assessment are essential components for the life-cycle engineering and management (LCEM) of modern engineered systems. These systems can range from microelectronic and bio-medical devices to large machinery and structures. To be effective, the underlying approach to LCEM must be transformed to embody mechanistically based probability modelling, vis-à-vis the more traditional experientially based statistical modelling, for predicting damage evolution and distribution. In this paper, the probability and statistical approaches are compared and differentiated. The process of model development on the basis of mechanistic understanding derived from critical experiments is illustrated through selected examples. The efficacy of this approach is illustrated through an example of the evolution and distribution of corrosion and corrosion fatigue damage in aluminium alloys in relation to aircraft that had been in long-term service.

  9. Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization

    SciTech Connect

    Shi, Xin Zhao, Xiangmo Hui, Fei Ma, Junyan Yang, Lan

    2014-10-06

    Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data is improved mainly through sufficient packet exchange, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach achieves better performance in terms of sync overhead and sync accuracy.
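    The linear weighted fusion described above can be sketched as an inverse-variance weighted average of the clock-deviation estimates, each weight derived from that estimate's sync-error variance (a common fusion choice; the numbers below are invented for illustration):

```python
def fuse(estimates, variances):
    """Inverse-variance weighted fusion of clock-deviation estimates."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

deviations = [12.0, 10.0, 11.0]  # clock-deviation estimates (microseconds)
variances  = [1.0, 4.0, 2.0]     # sync-error variance of each estimate

print(round(fuse(deviations, variances), 3))  # → 11.429
```

    The fused estimate has variance 1/Σ(1/σ²), never worse than the best single estimate, which is why fusion can improve accuracy without extra sync packets.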

  10. Reliable Identification of Vehicle-Boarding Actions Based on Fuzzy Inference System

    PubMed Central

    Ahn, DaeHan; Park, Homin; Hwang, Seokhyun; Park, Taejoon

    2017-01-01

    Existing smartphone-based solutions to prevent distracted driving suffer from inadequate system designs that only recognize simple and clean vehicle-boarding actions, thereby failing to meet the required level of accuracy in real-life environments. In this paper, exploiting unique sensory features consistently monitored from a broad range of complicated vehicle-boarding actions, we propose a reliable and accurate system based on fuzzy inference to classify the sides of vehicle entrance by leveraging built-in smartphone sensors only. The results of our comprehensive evaluation on three vehicle types with four participants demonstrate that the proposed system achieves 91.1%∼94.0% accuracy, outperforming other methods by 26.9%∼38.4% and maintains at least 87.8% accuracy regardless of smartphone positions and vehicle types. PMID:28208795

  11. Reliability of file-based retrospective ratings of psychopathy with the PCL-R.

    PubMed

    Grann, M; Långström, N; Tengström, A; Stålenheim, E G

    1998-06-01

    A rapidly emerging consensus recognizes Hare's Psychopathy Checklist-Revised (PCL-R; Hare, 1991) as the most valid and useful instrument to assess psychopathy (Fulero, 1995; Stone, 1995). We compared independent clinical PCL-R ratings of 40 forensic adult male criminal offenders to retrospective file-only ratings. File-based PCL-R ratings, in comparison to the clinical ratings, yielded categorical psychopathy diagnoses with a sensitivity of .57 and a specificity of .96. The intraclass correlation (ICC) of the total scores as estimated by ICC(2,1) was .88, and was markedly better on Factor 2, ICC(2,1) = .89, than on Factor 1, ICC(2,1) = .69. The findings support the belief that for research purposes, file-only PCL-R ratings based on Swedish forensic psychiatric investigation records can be made with good alternate-form reliability.

  12. Reliable Identification of Vehicle-Boarding Actions Based on Fuzzy Inference System.

    PubMed

    Ahn, DaeHan; Park, Homin; Hwang, Seokhyun; Park, Taejoon

    2017-02-09

    Existing smartphone-based solutions to prevent distracted driving suffer from inadequate system designs that only recognize simple and clean vehicle-boarding actions, thereby failing to meet the required level of accuracy in real-life environments. In this paper, exploiting unique sensory features consistently monitored from a broad range of complicated vehicle-boarding actions, we propose a reliable and accurate system based on fuzzy inference to classify the sides of vehicle entrance by leveraging built-in smartphone sensors only. The results of our comprehensive evaluation on three vehicle types with four participants demonstrate that the proposed system achieves 91.1%∼94.0% accuracy, outperforming other methods by 26.9%∼38.4%, and maintains at least 87.8% accuracy regardless of smartphone positions and vehicle types.

  13. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity.

    PubMed

    De Vera, Mary A; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-11-15

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005-2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity.

  14. A Reliability Test of a Complex System Based on Empirical Likelihood

    PubMed Central

    Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic of the complex system and extract the limit distribution of the test statistic. Therefore, we can obtain the confidence interval for reliability and make statistical inferences. The simulation studies also demonstrate the theorem results. PMID:27760130
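    For independent components, the reliability of a system described by minimal paths can be sketched with inclusion-exclusion; the paper's empirical-likelihood machinery handles the harder case where subsystem distributions are unknown. The path sets and component reliabilities below are invented for illustration:

```python
from itertools import combinations

def system_reliability(min_paths, comp_rel):
    """P(at least one minimal path has all its components working)."""
    total = 0.0
    for k in range(1, len(min_paths) + 1):
        for subset in combinations(min_paths, k):
            comps = set().union(*subset)  # components in this union of paths
            prob = 1.0
            for c in comps:
                prob *= comp_rel[c]
            total += (-1) ** (k + 1) * prob  # inclusion-exclusion sign
    return total

# Two minimal paths: series components {1, 2}, or component {3} alone.
paths = [{1, 2}, {3}]
rel = {1: 0.9, 2: 0.9, 3: 0.8}
print(round(system_reliability(paths, rel), 3))  # → 0.962
```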

  15. Novel bandgap-based under-voltage-lockout methods with high reliability

    NASA Astrophysics Data System (ADS)

    Yongrui, Zhao; Xinquan, Lai

    2013-10-01

    Highly reliable bandgap-based under-voltage-lockout (UVLO) methods are presented in this paper. The proposed under-voltage-state-to-signal conversion methods take full advantage of the high temperature stability of the bandgap reference, and enhanced low-voltage protection methods protect the core circuit from erroneous operation; moreover, a common-source-stage amplifier method is introduced to expand the output voltage range. All of these methods are verified in a UVLO circuit fabricated in a 0.5 μm standard BCD process technology. The experimental results show that the proposed bandgap method exhibits a good temperature coefficient of 20 ppm/°C, which ensures that the UVLO keeps a stable output until the under-voltage state changes. Moreover, at room temperature, the high threshold voltage VTH+ generated by the UVLO is 12.3 V with a maximum drift of ±80 mV, and the low threshold voltage VTH- is 9.5 V with a maximum drift of ±70 mV. The low-voltage protection method used in the circuit also brings high reliability when the supply voltage is very low.

  16. A reliability-based calibration study for upheaval buckling of pipelines

    SciTech Connect

    Moerk, K.J.; Bjoernsen, T.; Venaas, A.; Thorkildsen, F.

    1995-12-31

    When a pipeline is subjected to axial compressive loads, it will try to extend and release the compressive force. If the pipeline is restrained by a soil or rock cover, forces will be established between the pipeline and the soil. Upheaval buckling will occur when the force exerted by the pipe on the soil exceeds the vertical restraint. The pipeline moves upwards, leading to possibly unacceptable local plastic deformations or collapse, or making it vulnerable to fishing gear and other third-party activities. The present paper describes a reliability-based design procedure against upheaval buckling of rock- or soil-covered pipelines. The study is performed using state-of-the-art design methodologies, including an assessment of all known uncertainties related to the load and capacity, measurement, surveys, and confidence in the applied models. A response surface technique is applied within the level 3 reliability analysis. Target safety levels are discussed, and a case-specific calibration of partial safety factors in a consistent design equation against upheaval buckling is performed. Finally, a set of safety factors is proposed for both SLS and ULS requirements.

  17. On the Reliability of a Solitary Wave Based Transducer to Determine the Characteristics of Some Materials

    PubMed Central

    Deng, Wen; Nasrollahi, Amir; Rizzo, Piervincenzo; Li, Kaiyuan

    2015-01-01

    In the study presented in this article we investigated the feasibility and the reliability of a transducer design for the nondestructive evaluation (NDE) of the stiffness of structural materials. The NDE method is based on the propagation of highly nonlinear solitary waves (HNSWs) along a one-dimensional chain of spherical particles that is in contact with the material to be assessed. The chain is part of a built-in system designed and assembled to excite and detect HNSWs, and to exploit the dynamic interaction between the particles and the material to be inspected. This interaction influences the time-of-flight and the amplitude of the solitary pulses reflected at the transducer/material interface. The results of this study show that certain features of the waves are dependent on the modulus of elasticity of the material and that the built-in system is reliable. In the future the proposed NDE method may provide a cost-effective tool for the rapid assessment of materials’ modulus. PMID:26703617

  18. Reliability based calibration of partial safety factors for design of free pipeline spans

    SciTech Connect

    Ronold, K.O.; Nielsen, N.J.R.; Tura, F.; Bryndum, M.B.; Smed, P.F.

    1995-12-31

    This paper demonstrates how a structural reliability method can be applied as a rational means to analyze free spans of submarine pipelines with respect to failure in ultimate loading, and to establish partial safety factors for design of such free spans against this failure mode. It is important to note that the described procedure shall be considered as an illustration of a structural reliability methodology, and that the results do not represent a set of final design recommendations. A scope of design cases, consisting of a number of available site-specific pipeline spans, is established and is assumed representative for the future occurrence of submarine pipeline spans. Probabilistic models for the wave and current loading and its transfer to stresses in the pipe wall of a pipeline span are established together with a stochastic representation of the material resistance. The event of failure in ultimate loading is based on a limit state that is reached when the maximum stress over the design life of the pipeline exceeds the yield strength of the pipe material. The yielding limit state is considered an ultimate limit state (ULS).
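    The yielding limit state described above can be evaluated with a crude Monte Carlo sketch. The lognormal parameters below are invented for illustration only and are not from the paper:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical lognormal distributions (illustrative, not the paper's data):
# S = lifetime-maximum stress in the pipe wall, R = yield strength.
S = rng.lognormal(mean=np.log(300e6), sigma=0.15, size=n)  # Pa
R = rng.lognormal(mean=np.log(450e6), sigma=0.07, size=n)  # Pa

# Yielding limit state (ULS): failure when the maximum stress exceeds
# the yield strength.
pf = float(np.mean(S > R))
beta = -NormalDist().inv_cdf(pf)  # generalized reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```

    Calibration of partial safety factors then amounts to choosing factors so that designs satisfying the design equation achieve a target beta.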

  19. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
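    A minimal illustration of the multi-commodity, multi-state idea, using a made-up two-leg route and brute-force state enumeration instead of the paper's minimal-path plus Recursive Sum of Disjoint Products machinery:

```python
import itertools

# Hypothetical two-leg route (e.g. factory -> port -> customer); each
# selected carrier's capacity is multi-state: {capacity: probability}.
leg1 = {5: 0.1, 10: 0.3, 15: 0.6}   # units of freight
leg2 = {8: 0.2, 12: 0.8}
demand = 10

# Reliability = probability that every leg can carry the demand,
# found here by enumerating all capacity states.
reliability = sum(
    p1 * p2
    for (c1, p1), (c2, p2) in itertools.product(leg1.items(), leg2.items())
    if min(c1, c2) >= demand
)
print(round(reliability, 4))  # -> 0.72
```

    Enumeration is exponential in the number of routes, which is why the paper resorts to minimal paths and disjoint-product sums for realistic networks.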

  20. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity

    PubMed Central

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-01-01

    Frequently, patients who suffer injuries to a lower limb require forearm crutches in order to partially unload weight-bearing. These lesions cause pain when the lower limb bears load, and their progression should be monitored objectively to avoid significant errors in load accuracy and, consequently, complications and after-effects. A new, feasible tool is needed that allows us to control and improve the accuracy of the loads exerted on crutches during aided gait, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 Platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI Platform and the GCH System. A regression line to determine the predictive ability of the GCH System towards the AMTI Platform was found, which obtained a precision of 99.3%. A detailed statistical analysis is presented for all the measurements, and also segregated for several requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing the loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of load. PMID:27338396

  1. Reliability of Beta angle in assessing true anteroposterior apical base discrepancy in different growth patterns

    PubMed Central

    Sundareswaran, Shobha; Kumar, Vinay

    2015-01-01

    Introduction: Beta angle as a skeletal anteroposterior dysplasia indicator is known to be useful in evaluating normodivergent growth patterns. Hence, we compared and verified the accuracy of Beta angle in predicting sagittal jaw discrepancy among subjects with hyperdivergent, hypodivergent and normodivergent growth patterns. Materials and Methods: Lateral cephalometric radiographs of 179 patients belonging to skeletal Classes I, II, and III were further divided into normodivergent, hyperdivergent, and hypodivergent groups based on their vertical growth patterns. Sagittal dysplasia indicators - angle ANB, Wits appraisal, and Beta angle values were measured and tabulated. The perpendicular point of intersection on line CB (Condylion-Point B) in Beta angle was designated as ‘X’ and linear dimension XB was evaluated. Results: Statistically significant increase was observed in the mean values of Beta angle and XB distance in the vertical growth pattern groups of both skeletal Class I and Class II patients thus pushing them toward Class III and Class I, respectively. Conclusions: Beta angle is a reliable indicator of sagittal dysplasia in normal and horizontal patterns of growth. However, vertical growth patterns significantly increased Beta angle values, thus affecting their reliability as a sagittal discrepancy assessment tool. Hence, Beta angle may not be a valid tool for assessment of sagittal jaw discrepancy in patients exhibiting vertical growth patterns with skeletal Class I and Class II malocclusions. Nevertheless, Class III malocclusions having the highest Beta angle values were unaffected. PMID:25810649

  2. Reliability and Validity of a Wireless Microelectromechanicals Based System (Keimove™) for Measuring Vertical Jumping Performance

    PubMed Central

    Requena, Bernardo; García, Inmaculada; Requena, Francisco; Saez-Saez de Villarreal, Eduardo; Pääsuke, Mati

    2012-01-01

    The aim of this study was to determine the validity and reliability of a microelectromechanical systems (MEMS) based system (Keimove™) in measuring flight time and takeoff velocity during a counter-movement jump (CMJ). As criterion reference, data from a high-speed camera (HSC) and a force platform (FP) synchronized with a linear position transducer (LPT) were used. Thirty professional soccer players completely familiarized with the CMJ technique performed three CMJs. The second and third trials were used for further analysis. The Keimove™ system, the HSC and the FP synchronized with the LPT (FP+LPT) simultaneously measured the CMJ performance. During each repetition, the Keimove™ system registered flight time and velocity at takeoff. At the same time, and as criterion reference, both the HSC and the FP recorded the flight time while the FP+LPT registered the velocity at takeoff. Pearson correlation coefficients for the flight time were high (r = 0.99; p < 0.001) when the Keimove™ system was compared with the HSC or the FP+LPT, respectively. For the velocity at takeoff, the Pearson r between the Keimove™ system and the FP+LPT was lower, although significant at the 0.05 level. No significant differences in mean values were observed for flight time and velocity at takeoff between the three devices. Intraclass correlations and coefficients of variation between trials were similar and ranged from 0.92 to 0.97 and from 2.1 to 7.4, respectively. In conclusion, the Keimove™ system represents a valid and reliable instrument to measure velocity at takeoff and flight time during CMJ testing. Thus, this MEMS-based system offers a portable, cost-effective tool for the assessment of CMJ performance. Key points: The Keimove™ system is composed of specific software and a wireless MEMS-based device designed to be attached at the lumbar region of the athlete. The Keimove™ system is a mechanically valid and reliable instrument in measuring flight time and velocity at takeoff
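    The two quantities measured here are linked by simple ballistics: assuming a symmetric flight phase, flight time alone determines takeoff velocity and jump height. A short sketch (the 0.55 s flight time is an illustrative value, not study data):

```python
G = 9.81  # m/s^2

def takeoff_velocity(flight_time_s: float) -> float:
    # Symmetric flight: time up equals time down, so v0 = g * t / 2.
    return G * flight_time_s / 2.0

def jump_height(flight_time_s: float) -> float:
    # h = v0^2 / (2g) = g * t^2 / 8
    return G * flight_time_s ** 2 / 8.0

t = 0.55  # s, a plausible CMJ flight time
print(f"v0 = {takeoff_velocity(t):.2f} m/s, h = {jump_height(t):.3f} m")
```

    This is why a device that measures flight time accurately can also report takeoff velocity, and why errors in flight time propagate quadratically into height.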

  3. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
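    The aggregation step can be sketched as a recursion over a toy architecture description. The nested-tuple format below is invented for illustration and is not the patent's representation:

```python
def system_reliability(arch):
    """Aggregate low-level reliability models bottom-up.

    `arch` is either a number (a component's reliability) or a tuple
    ('series' | 'parallel', [children]) describing the interconnection,
    mirroring the idea of combining low-level models via a system
    architecture description."""
    if isinstance(arch, (int, float)):
        return float(arch)
    kind, children = arch
    rs = [system_reliability(c) for c in children]
    if kind == "series":
        out = 1.0
        for r in rs:
            out *= r               # series: all parts must work
        return out
    out = 1.0
    for r in rs:
        out *= (1.0 - r)           # parallel: fails only if all fail
    return 1.0 - out

arch = ("series", [0.99, ("parallel", [0.95, 0.95]), 0.999])
print(round(system_reliability(arch), 6))  # -> 0.986537
```

    Real reliability evaluation tools handle richer structures (k-out-of-n, repair, dependencies), but the bottom-up aggregation pattern is the same.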

  4. An Examination of Temporal Trends in Electricity Reliability Based on Reports from U.S. Electric Utilities

    SciTech Connect

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Larsen, Peter; Todd, Annika; Fisher, Emily

    2012-01-06

    Since the 1960s, the U.S. electric power system has experienced a major blackout about once every 10 years. Each has been a vivid reminder of the importance society places on the continuous availability of electricity and has led to calls for changes to enhance reliability. At the root of these calls are judgments about what reliability is worth and how much should be paid to ensure it. In principle, comprehensive information on the actual reliability of the electric power system and on how proposed changes would affect reliability ought to help inform these judgments. Yet, comprehensive, national-scale information on the reliability of the U.S. electric power system is lacking. This report helps to address this information gap by assessing trends in U.S. electricity reliability based on information reported by electric utilities on power interruptions experienced by their customers. Our research augments prior investigations, which focused only on power interruptions originating in the bulk power system, by considering interruptions originating both from the bulk power system and from within local distribution systems. Our research also accounts for differences among utility reliability reporting practices by employing statistical techniques that remove the influence of these differences on the trends that we identify. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. The questions analyzed include: 1. Are there trends in reported electricity reliability over time? 2. How are trends in reported electricity reliability affected by the installation or upgrade of an automated outage management system? 3. How are trends in reported electricity reliability affected by the use of IEEE Standard 1366-2003?

  5. Improved mechanical reliability of MEMS electret based vibration energy harvesters for automotive applications

    NASA Astrophysics Data System (ADS)

    Renaud, M.; Fujita, T.; Goedbloed, M.; de Nooijer, C.; van Schaijk, R.

    2014-11-01

    Current commercial wireless tire pressure monitoring systems (TPMS) require a battery as electrical power source. The battery limits the lifetime of the TPMS. This limit can be circumvented by replacing the battery by a vibration energy harvester. Autonomous wireless TPMS powered by MEMS electret based vibration energy harvester have been demonstrated. A remaining technical challenge to attain the grade of commercial product with these autonomous TPMS is the mechanical reliability of the MEMS harvester. It should survive the harsh conditions imposed by the tire environment, particularly in terms of mechanical shocks. As shown in this article, our first generation of harvesters has a shock resilience of 400 g, which is far from being sufficient for the targeted application. In order to improve this aspect, several types of shock absorbing structures are investigated. With the best proposed solution, the shock resilience of the harvesters is brought above 2500 g.

  6. Fast and reliable interrogation of USFBG sensors based on MG-Y laser discrete wavelength channels

    NASA Astrophysics Data System (ADS)

    Rohollahnejad, Jalal; Xia, Li; Cheng, Rui; Ran, Yanli; Su, Lei

    2017-01-01

    In this letter, we propose to use discrete wavelength channels of a single-chip MG-Y laser to interrogate an ultra-short fiber Bragg grating (USFBG) with a wide Gaussian spectrum. The broadband Gaussian spectrum of the USFBG is sampled by the wavelength channels of the MG-Y laser, from which the center of the spectrum can be estimated. The measurement inherits the important features of a common tunable-laser interrogation technique, namely its high flexibility and its natural insensitivity to intensity variations relative to common intensity-based approaches. While traditional tunable-laser methods require sweeping the whole spectrum to obtain the center wavelength, the proposed scheme needs only a few discrete wavelength channels, which leads to significant improvements in efficiency and measurement speed. This reliable and low-cost concept could offer a good foundation for future USFBG applications in large-scale distributed measurements, especially in time-domain multiplexing schemes.
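    Because the logarithm of a Gaussian spectrum is parabolic in wavelength, three discrete channels suffice to locate the center exactly. A sketch with invented channel values, not the paper's data:

```python
import math

def gaussian_center(l0, dl, intensities):
    """Estimate a Gaussian spectrum's center from three equally spaced
    wavelength channels (first channel l0, spacing dl): the vertex of
    the parabola through the three log-intensities is the peak."""
    y0, y1, y2 = (math.log(v) for v in intensities)
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)  # offset in units of dl
    return l0 + dl * (1 + delta)

# Synthetic Gaussian centered at 1550.3 nm with sigma = 0.5 nm,
# sampled at three hypothetical laser channels:
center, sigma = 1550.3, 0.5
chans = [1549.8, 1550.2, 1550.6]
I = [math.exp(-(c - center) ** 2 / (2 * sigma ** 2)) for c in chans]
print(round(gaussian_center(chans[0], 0.4, I), 3))  # -> 1550.3
```

    With noisy measurements one would fit the parabola to more than three channels by least squares, but the principle is unchanged.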

  7. Comparative analysis of different configurations of PLC-based safety systems from reliability point of view

    NASA Technical Reports Server (NTRS)

    Tapia, Moiez A.

    1993-01-01

    The study of a comparative analysis of distinct multiplex and fault-tolerant configurations for a PLC-based safety system from a reliability point of view is presented. It considers simplex, duplex, and fault-tolerant triple-redundancy configurations. In the duplex configuration, the standby unit has a failure rate k times that of the main unit, with k varying from 0 to 1. For distinct values of MTTR and MTTF of the main unit, the MTBF and availability of these configurations are calculated. The effect of duplexing only the PLC module, or only the sensors and actuators module, on the MTBF of the configuration is also presented. The results are summarized, and the merits and demerits of the various configurations under distinct environments are discussed.
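    A small sketch of how the standby factor k affects a duplex configuration, assuming exponential lifetimes, perfect switching, and no repair (assumptions of this sketch, not necessarily the paper's full model):

```python
def duplex_mttf(lam: float, k: float) -> float:
    """MTTF of a duplex pair: the main unit fails at rate lam, the
    standby at k*lam while waiting (k=0 cold, k=1 hot), and the
    survivor runs at rate lam after the first failure.

    Phase 1 ends at rate lam*(1+k); phase 2 at rate lam."""
    return 1.0 / (lam * (1.0 + k)) + 1.0 / lam

lam = 1e-4  # failures per hour (illustrative)
for k in (0.0, 0.5, 1.0):
    print(k, round(duplex_mttf(lam, k)))
```

    The spread from 2/lam (cold standby, k=0) down to 1.5/lam (hot standby, k=1) shows why the value of k matters when comparing configurations.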

  8. Reliability-Based Design of a Safety-Critical Automation System: A Case Study

    NASA Technical Reports Server (NTRS)

    Carroll, Carol W.; Dunn, W.; Doty, L.; Frank, M. V.; Hulet, M.; Alvarez, Teresa (Technical Monitor)

    1994-01-01

    In 1986, NASA funded a project to modernize the NASA Ames Research Center Unitary Plan Wind Tunnels, including the replacement of obsolescent controls with a modern, automated distributed control system (DCS). The project effort on this system included an independent safety analysis (ISA) of the automation system. The purpose of the ISA was to evaluate the completeness of the hazard analyses which had already been performed on the Modernization Project. The ISA approach followed a tailoring of the risk assessment approach widely used on existing nuclear power plants. The tailoring of the nuclear industry oriented risk assessment approach to the automation system and its role in reliability-based design of the automation system is the subject of this paper.

  9. Development of a reliable transmission-based laser sensor system for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Mashrur A.; Banerjee, Partha; Nehmetallah, Georges; Goodhue, Paul C.; Das, Arobindu; Atluri, Mahesh

    2004-10-01

    The transportation community has applied sensors for various traffic management purposes, such as traffic signal control, ramp metering, traveler information development, and incident detection, by collecting and processing real-time vehicle position and speed. The U.S. transportation community has not adopted any single newer traffic detector as the most accepted choice. The objective of this research is to develop an infrared sensor system in the laboratory that will provide improved estimates of vehicle speed compared to those available from current infrared sensors, to model the sensor's failure conditions and probabilities, and ultimately to refine the sensor to provide the most reliable data under various environmental conditions. This paper presents the initial development of the proposed sensor system. The system will be implemented on a highway segment to evaluate its risk of failure under various environmental conditions. A modified design will then be developed based on the field evaluations.

  10. Web-based phenotyping for Tourette Syndrome: Reliability of common co-morbid diagnoses

    PubMed Central

    Darrow, Sabrina M.; Illmann, Cornelia; Gauvin, Caitlin; Osiecki, Lisa; Egan, Crystelle A.; Greenberg, Erica; Eckfield, Monika; Hirschtritt, Matthew E.; Pauls, David L.; Batterson, James R.; Berlin, Cheston M.; Malaty, Irene A.; Woods, Douglas W.; Scharf, Jeremiah; Mathews, Carol

    2015-01-01

    Collecting phenotypic data necessary for genetic analyses of neuropsychiatric disorders is time consuming and costly. Development of web-based phenotype assessments would greatly improve the efficiency and cost-effectiveness of genetic research. However, evaluating the reliability of this approach compared to standard, in-depth clinical interviews is essential. The current study replicates and extends a preliminary report on the utility of a web-based screen for Tourette Syndrome (TS) and common comorbid diagnoses (obsessive compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD)). A subset of individuals who completed a web-based phenotyping assessment for a TS genetic study was invited to participate in semi-structured diagnostic clinical interviews. The data from these interviews were used to determine participants’ diagnostic status for TS, OCD, and ADHD using best estimate procedures, which then served as the gold standard to compare diagnoses assigned using web-based screen data. The results show high rates of agreement for TS. Kappas for OCD and ADHD diagnoses were also high and together demonstrate the utility of these self-report data in comparison with previous diagnoses from clinicians and dimensional assessment methods. PMID:26054936
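    Screen-versus-gold-standard agreement of this kind is typically reported as Cohen's kappa, which corrects raw agreement for chance. A minimal computation on made-up binary ratings (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters over the same cases: observed
    agreement corrected by the agreement expected from each rater's
    marginal label frequencies."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed
    pe = sum((a.count(l) / n) * (b.count(l) / n)        # chance
             for l in labels)
    return (po - pe) / (1 - pe)

# Hypothetical diagnoses: 1 = disorder present, 0 = absent.
screen = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]   # web-based screen
clinic = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]   # best-estimate interview
print(round(cohens_kappa(screen, clinic), 3))  # -> 0.8
```

    Here raw agreement is 90%, but half of that would be expected by chance given the marginals, yielding kappa = 0.8.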

  11. Interpretive Reliability of Six Computer-Based Test Interpretation Programs for the Minnesota Multiphasic Personality Inventory-2.

    PubMed

    Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E

    2016-04-01

    The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile.

  12. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.
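    The elementary correction that any unwrapper applies to a pair of samples can be shown in 1-D; the paper's contribution, ordering 2-D pixel pairs by a reliability function rather than following a continuous path, is deliberately omitted from this sketch:

```python
import math

def unwrap_1d(phases):
    """Add integer multiples of 2*pi so that consecutive samples differ
    by less than pi. This is the per-pair correction; the 2-D algorithm
    applies the same correction to pixel pairs sorted by reliability."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # wrap into (-pi, pi]
        out.append(out[-1] + d)
    return out

# A steadily increasing phase, wrapped into (-pi, pi]:
wrapped = [0.0, 2.0, -2.2, -0.2, 1.8]
print([round(x, 2) for x in unwrap_1d(wrapped)])
```

    Noise or undersampling makes the naive left-to-right path fail in 2-D, which is exactly the problem the reliability-sorted, noncontinuous-path ordering addresses.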

  13. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes

    PubMed Central

    Bohlin, Jon; Skjerve, Eystein; Ussery, David W

    2008-01-01

    Background The increasing number of sequenced prokaryotic genomes contains a wealth of genomic data that needs to be effectively analysed. A set of statistical tools exists for such analysis, but their strengths and weaknesses have not been fully explored. The statistical methods we are concerned with here are mainly used to examine similarities between archaeal and bacterial DNA from different genomes. These methods compare observed genomic frequencies of fixed-sized oligonucleotides with expected values, which can be determined by genomic nucleotide content, smaller oligonucleotide frequencies, or be based on specific statistical distributions. Advantages of these statistical methods include measurements of phylogenetic relationship with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore the reliability and best-suited applications for some popular methods, which include relative oligonucleotide frequencies (ROF), di- to hexanucleotide zeroth-order Markov methods (ZOM), and the second-order Markov chain method (MCM). Tests were performed on distant homology searches with large DNA sequences, detection of foreign/conserved DNA, and plasmid-host similarity comparisons. Additionally, the reliability of the methods was tested by comparing both real and random genomic DNA. Results Our findings show that the optimal method is context dependent. ROFs were best suited for distant homology searches, whilst the hexanucleotide ZOM and MCM measures were more reliable in terms of phylogeny. The dinucleotide ZOM method produced high correlation values when used to compare real genomes to an artificially constructed random genome with similar %GC, and should therefore be used with care. The tetranucleotide ZOM measure was a good measure to detect horizontally transferred regions, and when used to compare the phylogenetic relationships between plasmids and hosts
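    A toy version of the ZOM measure: the observed k-mer frequency divided by the frequency expected from mononucleotide content alone. The sequence is made up; the paper applies this genome-wide:

```python
from collections import Counter

def zom_ratios(seq, k=2):
    """Zeroth-order Markov (ZOM) over/under-representation: observed
    k-mer count divided by the count expected if bases were drawn
    independently with the genome's mononucleotide frequencies."""
    n = len(seq)
    mono = Counter(seq)
    kmers = Counter(seq[i:i + k] for i in range(n - k + 1))
    total = n - k + 1
    ratios = {}
    for kmer, count in kmers.items():
        expected = total
        for base in kmer:
            expected *= mono[base] / n
        ratios[kmer] = count / expected
    return ratios

r = zom_ratios("ATATATATGCGCGCGC")
print(round(r["AT"], 2), round(r["GC"], 2))  # -> 4.27 4.27
```

    In this toy sequence every base occurs equally often, yet AT and GC dinucleotides are heavily over-represented, which is exactly the kind of signal these genome-signature methods exploit.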

  14. Reliability-based aeroelastic optimization of a composite aircraft wing via fluid-structure interaction of high fidelity solvers

    NASA Astrophysics Data System (ADS)

    Nikbay, M.; Fakkusoglu, N.; Kuru, M. N.

    2010-06-01

    We consider reliability-based aeroelastic optimization of an AGARD 445.6 composite aircraft wing with stochastic parameters. Both commercial engineering software and an in-house reliability analysis code are employed in this high-fidelity computational framework. The finite-volume-based flow solver Fluent is used to solve the 3D Euler equations, while Gambit is the fluid domain mesh generator and Catia-V5-R16 is used as a parametric 3D solid modeler. Abaqus, a structural finite element solver, is used to compute the structural response of the aeroelastic system. The mesh-based parallel code coupling interface MPCCI-3.0.6 is used to exchange the pressure and displacement information between Fluent and Abaqus to perform a loosely coupled fluid-structure interaction by employing a staggered algorithm. To compute the probability of failure for the probabilistic constraints, one of the well-known MPP (Most Probable Point) based reliability analysis methods, FORM (First Order Reliability Method), is implemented in Matlab. This in-house developed Matlab code is embedded in the multidisciplinary optimization workflow, which is driven by Modefrontier. Modefrontier 4.1 is used for its gradient-based optimization algorithm NBI-NLPQLP, which is based on the sequential quadratic programming method. A Pareto optimal solution for the stochastic aeroelastic optimization is obtained for a specified reliability index, and the results are compared with those of deterministic aeroelastic optimization.
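    For a linear limit state with independent normal variables, FORM has a closed form, which makes the reliability index easy to sanity-check. The numbers below are illustrative; the paper's aeroelastic constraints are nonlinear and require the iterative MPP search:

```python
from statistics import NormalDist

# FORM for g = R - S with independent normal R (capacity) and S (load):
# beta = (muR - muS) / sqrt(sdR^2 + sdS^2), and pf = Phi(-beta).
mu_R, sd_R = 450.0, 30.0   # hypothetical capacity stats
mu_S, sd_S = 300.0, 45.0   # hypothetical load-effect stats

beta = (mu_R - mu_S) / (sd_R ** 2 + sd_S ** 2) ** 0.5
pf = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f}, pf = {pf:.2e}")
```

    A probabilistic constraint in the optimization is then stated as beta >= beta_target (equivalently pf <= pf_target), which is how the specified reliability index enters the Pareto search.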

  15. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    PubMed

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  16. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental result shows that the mass of each optimized spring is reduced by 16.2%, while the reliability increases to 99.9% from 94.5%. The experimental result verifies the correctness and feasibility of this reliability optimization designing method.
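    The time-varying crossover and mutation rates can be sketched with a toy real-coded GA on a stand-in objective (the sphere function, not the paper's spring-mass model); the linear rate schedules, population size, and operators are assumptions of this sketch:

```python
import random

random.seed(1)

def fitness(x):
    # Stand-in objective to minimize (sphere function); the paper
    # instead minimizes spring mass under a reliability constraint.
    return sum(v * v for v in x)

def adaptive_ga(dim=3, pop_size=30, gens=60):
    pop = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(pop_size)]
    for g in range(gens):
        # Adaptive schedule: rates change over time, as in the AGA,
        # to favor exploration early and exploitation late and so
        # reduce the risk of premature convergence.
        p_cx = 0.9 - 0.4 * g / gens
        p_mut = 0.3 * (1 - g / gens) + 0.01
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 if random.random() < p_cx else x
                     for x, y in zip(a, b)]   # blend crossover
            child = [v + random.gauss(0, 0.5) if random.random() < p_mut
                     else v for v in child]   # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = adaptive_ga()
print(round(fitness(best), 4))
```

    In the actual design problem the fitness would penalize constraint violations (stress, fatigue, reliability), but the adaptive-rate mechanism is the same.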

  17. Post-illumination pupil response after blue light: Reliability of optimized melanopsin-based phototransduction assessment.

    PubMed

    van der Meijden, Wisse P; te Lindert, Bart H W; Bijlenga, Denise; Coppens, Joris E; Gómez-Herrero, Germán; Bruijel, Jessica; Kooij, J J Sandra; Cajochen, Christian; Bourgin, Patrice; Van Someren, Eus J W

    2015-10-01

    ± 3.6 yr) we examined the potential confounding effects of dark adaptation, time of the day (morning vs. afternoon), body posture (upright vs. supine position), and 24-h environmental light history on the PIPR assessment. Mixed effect regression models were used to analyze these possible confounders. A supine position caused larger PIPR-mm (β = 0.29 mm, SE = 0.10, p = 0.01) and PIPR-% (β = 4.34%, SE = 1.69, p = 0.02), which was due to an increase in baseline dark pupil diameter; this finding is of relevance for studies requiring a supine posture, as in functional Magnetic Resonance Imaging, constant routine protocols, and bed-ridden patients. There were no effects of dark adaptation, time of day, and light history. In conclusion, the presented method provides a reliable and robust assessment of the PIPR to allow for studies on individual differences in melanopsin-based phototransduction and effects of interventions.

  18. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    PubMed Central

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    Purpose This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL’s ball-and-stick approach. Results Monte Carlo experiments demonstrated that JARD’s volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  19. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    DTIC Science & Technology

    2011-11-10

    cost PC, the inspection cost IC, and an expected variable cost EVC [2, 32]. These costs are a function of quality and reliability. The lifecycle...expected variable cost EVC is a function of the time-dependent reliability, which is used to estimate the expected present value of repairing and/or

  20. Instrumentation and Control Needs for Reliable Operation of Lunar Base Surface Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Turso, James; Chicatelli, Amy; Bajwa, Anupa

    2005-01-01

    needed to enable this critical functionality of autonomous operation. It will be imperative to consider instrumentation and control requirements in parallel with system configuration development, so as to identify control-related, as well as integrated system-related, problem areas early and avoid potentially expensive work-arounds. This paper presents an overview of the enabling technologies necessary for the development of reliable, autonomous lunar base nuclear power systems, with an emphasis on system architectures and off-the-shelf algorithms rather than hardware. Autonomy needs are presented in the context of a hypothetical lunar base nuclear power system. The scenarios and applications presented are hypothetical in nature, based on information from open-literature sources, and only intended to provoke thought and provide motivation for the use of autonomous, intelligent control and diagnostics.

  1. Is Teacher Assessment Reliable or Valid for High School Students under a Web-Based Portfolio Environment?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Wu, Bing-Hong

    2012-01-01

    This study explored the reliability and validity of teacher assessment under a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students performed portfolio creation, inspection, self- and peer-assessment using the Web-based…

  2. Is Learner Self-Assessment Reliable and Valid in a Web-Based Portfolio Environment for High School Students?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui

    2013-01-01

    This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…

  3. Use of viral promoters in mammalian cell-based bioassays: How reliable?

    PubMed Central

    Betrabet, Shrikant S; Choudhuri, Jyoti; Gill-Sharma, Manjit

    2004-01-01

    Cell-based bioassays have been suggested for screening of hormones and drug bioactivities. They are a plausible alternative to animal-based methods. The technique used is called a receptor/reporter system. The receptor/reporter system was initially developed as a research technique to understand gene function. Often reporter constructs containing viral promoters were used because they could be expressed with very 'high' magnitude in a variety of cell types in the laboratory. On the other hand, mammalian genes are expressed in a cell/tissue-specific manner, which makes them (i.e. cells/tissues) specialized for specific function in vivo. Therefore, if the receptor/reporter system is to be used as a cell-based screen for testing of hormones and drugs for human therapy, then the choice of cell line as well as the promoter in the reporter module is of prime importance so as to get a realistic measure of the bioactivities of 'test' compounds. We evaluated two conventionally used viral promoters and a natural mammalian promoter, regulated by the steroid hormone progesterone, in a cell-based receptor/reporter system. The promoters were spliced into vectors expressing the enzyme CAT (chloramphenicol acetyl transferase), which served as a reporter of their magnitudes and consistencies in controlling gene expression. They were introduced into the breast cell lines T47D and MCF-7, which served as a cell-based source of progesterone receptors. The yardstick of their reliability was the highest magnitude as well as consistency in CAT expression on induction by sequential doses of progesterone. All the promoters responded to induction by progesterone doses ranging from 10^-12 to 10^-6 molar by expressing the CAT enzyme, albeit with varying magnitudes and consistencies. The natural mammalian promoter showed the most coherence in magnitude as well as dose-dependent expression profile in both cell lines.
Our study casts doubts on use of viral promoters in a cell-based bioassay for measuring bioactivities of

  4. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  5. Shock reliability analysis and improvement of MEMS electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Renaud, M.; Fujita, T.; Goedbloed, M.; de Nooijer, C.; van Schaijk, R.

    2015-10-01

    Vibration energy harvesters can serve as a replacement solution to batteries for powering tire pressure monitoring systems (TPMS). Autonomous wireless TPMS powered by microelectromechanical system (MEMS) electret-based vibration energy harvester have been demonstrated. The mechanical reliability of the MEMS harvester still has to be assessed in order to bring the harvester to the requirements of the consumer market. It should survive the mechanical shocks occurring in the tire environment. A testing procedure to quantify the shock resilience of harvesters is described in this article. Our first generation of harvesters has a shock resilience of 400 g, which is far from being sufficient for the targeted application. In order to improve this aspect, the first important aspect is to understand the failure mechanism. Failure is found to occur in the form of fracture of the device’s springs. It results from impacts between the anchors of the springs when the harvester undergoes a shock. The shock resilience of the harvesters can be improved by redirecting these impacts to nonvital parts of the device. With this philosophy in mind, we design three types of shock absorbing structures and test their effect on the shock resilience of our MEMS harvesters. The solution leading to the best results consists of rigid silicon stoppers covered by a layer of Parylene. The shock resilience of the harvesters is brought above 2500 g. Results in the same range are also obtained with flexible silicon bumpers, which are simpler to manufacture.

  6. An Acetylcholinesterase-Based Chronoamperometric Biosensor for Fast and Reliable Assay of Nerve Agents

    PubMed Central

    Pohanka, Miroslav; Adam, Vojtech; Kizek, Rene

    2013-01-01

    The enzyme acetylcholinesterase (AChE) is an important part of the cholinergic nervous system, where it stops neurotransmission by hydrolysis of the neurotransmitter acetylcholine. It is sensitive to inhibition by organophosphate and carbamate insecticides, some Alzheimer disease drugs, secondary metabolites such as aflatoxins, and nerve agents used in chemical warfare. When immobilized on a sensor (physico-chemical transducer), it can be used for assay of these inhibitors. In the experiments described herein, an AChE-based electrochemical biosensor using screen-printed electrode systems was prepared. The biosensor was used for assay of nerve agents such as sarin, soman, tabun and VX. The limits of detection achieved in a measuring protocol lasting ten minutes were 7.41 × 10^-12 mol/L for sarin, 6.31 × 10^-12 mol/L for soman, 6.17 × 10^-11 mol/L for tabun, and 2.19 × 10^-11 mol/L for VX. The assay was reliable, with minor interferences caused by the organic solvents ethanol, methanol, isopropanol and acetonitrile. Isopropanol was chosen as a suitable medium for processing lipophilic samples. PMID:23999806

  7. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  8. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    SciTech Connect

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  9. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which the reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft critical system are computed. The results show that this method is convenient and workable, and applicable to the redundancy configuration and optimization of various designs upon appropriate modifications, giving it good practical value.

  10. A GC/MS-based metabolomic approach for reliable diagnosis of phenylketonuria.

    PubMed

    Xiong, Xiyue; Sheng, Xiaoqi; Liu, Dan; Zeng, Ting; Peng, Ying; Wang, Yichao

    2015-11-01

    Phenylacetic acid may be used as a reliable discriminator for the diagnosis of PKU. The low false positive rate (1-specificity, 0.064) can be eliminated or at least greatly reduced by simultaneously referring to other markers, especially phenylpyruvic acid, a unique marker in PKU. Additionally, this standard was obtained with high sensitivity and specificity in a less invasive manner for diagnosing PKU compared with the Phe/Tyr ratio. Therefore, we conclude that urinary metabolomic information based on the improved oximation-silylation method together with GC/MS may be reliable for the diagnosis and differential diagnosis of PKU.

  11. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  12. Stochastic Analysis of Waterhammer and Applications in Reliability-Based Structural Design for Hydro Turbine Penstocks

    SciTech Connect

    Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew

    2011-01-01

    The randomness of transient events, and the variability in factors which influence the magnitudes of resultant pressure fluctuations, ensure that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; the statistical characteristics and probability distributions of annual maximum waterhammer pressures are then simulated using the Monte Carlo method and verified by the analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, sustaining period and probability distribution) of hydraulic loads for both steady and transient states are discussed. Using an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. In particular, the normative load (usually the unfavorable value at the 95-percentage point) for steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for the waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.
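
    The Monte Carlo step described above can be sketched in a few lines. This is a minimal illustration only: the distributions, parameter values, and the number of annual transient events below are hypothetical, not the study's fitted values from the Chinese hydro plant data.

```python
import random
import statistics

random.seed(42)

def annual_max_pressure():
    """One synthetic year: static head plus the largest transient rise.

    Distribution families and parameters are purely illustrative.
    """
    static = random.gauss(100.0, 5.0)        # steady/static pressure head (m)
    n_events = random.randint(1, 10)         # transient events in the year
    rises = [random.weibullvariate(20.0, 2.0) for _ in range(n_events)]
    return static + max(rises)

samples = sorted(annual_max_pressure() for _ in range(20000))
p95 = samples[int(0.95 * len(samples))]      # unfavorable 95-percentage-point value
print(f"mean annual max = {statistics.mean(samples):.1f} m, "
      f"normative (95%) load = {p95:.1f} m")
```

    The sorted sample of simulated annual maxima plays the role of the empirical probability distribution from which the normative load is read off.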

  13. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys

    PubMed Central

    2015-01-01

    Background: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. Methods and Findings: We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. Conclusions: The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations. PMID:26131563
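
    The upward adjustment for screening accuracy mentioned above can be illustrated with the standard Rogan-Gladen correction, which recovers true prevalence from apparent (test-positive) prevalence given the screening test's sensitivity and specificity. The accuracy figures below are hypothetical; only the apparent prevalence is in the range reported in the abstract.

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Rogan-Gladen correction: estimate true prevalence from the
    apparent prevalence and the screening test's accuracy."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Apparent prevalence ~7.7% (mid-range of the surveys); accuracy hypothetical.
est = rogan_gladen(apparent=0.077, sensitivity=0.5, specificity=0.99)
print(f"adjusted prevalence ~ {est:.3f}")
```

    With an insensitive screen, the corrected figure is roughly double the apparent one, which is the direction of the adjustment the authors describe.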

  14. Determining Functional Reliability of Pyrotechnic Mechanical Devices

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Multhaup, Herbert A.

    1997-01-01

    This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing functional margin are analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost savings and improved understanding compared with go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
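
    The margin-based prediction described above can be illustrated with a small numerical sketch (all energy values are hypothetical): the functional margin is the delivered energy minus the required energy, and a normal-theory estimate converts the margin's mean and spread into a reliability figure from a small sample.

```python
import math
import statistics

# Hypothetical energy measurements (J) from a small test series
required = [1.10, 1.05, 1.20, 1.12, 1.08, 1.15]   # energy needed to function
delivered = [2.50, 2.42, 2.61, 2.55, 2.48, 2.58]  # energy from pyrotechnic source

def margin_stats(required, delivered):
    """Mean and std of the functional margin, assuming the two energies
    are independent and approximately normal."""
    mu = statistics.mean(delivered) - statistics.mean(required)
    sigma = math.sqrt(statistics.stdev(delivered) ** 2
                      + statistics.stdev(required) ** 2)
    return mu, sigma

mu, sigma = margin_stats(required, delivered)
z = mu / sigma  # margin in standard deviations (a "safety index")
# Normal-theory reliability estimate: P(margin > 0)
reliability = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
print(f"margin = {mu:.3f} J, z = {z:.1f}, reliability ~ {reliability:.6f}")
```

    A dozen or so measurements of each energy are enough to place the margin many standard deviations above zero, which is how a high reliability can be predicted from 20 or fewer units.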

  15. CardioGuard: A Brassiere-Based Reliable ECG Monitoring Sensor System for Supporting Daily Smartphone Healthcare Applications

    PubMed Central

    Kwon, Sungjun; Kim, Jeehoon; Kang, Seungwoo; Lee, Youngki; Baek, Hyunjae

    2014-01-01

    We propose CardioGuard, a brassiere-based reliable electrocardiogram (ECG) monitoring sensor system, for supporting daily smartphone healthcare applications. It is designed to satisfy two key requirements for user-unobtrusive daily ECG monitoring: reliability of ECG sensing and usability of the sensor. The system is validated through extensive evaluations. The evaluation results showed that the CardioGuard sensor reliably measures the ECG during 12 representative daily activities including diverse movement levels; 89.53% of QRS peaks were detected on average. The questionnaire-based user study with 15 participants showed that the CardioGuard sensor was comfortable and unobtrusive. Additionally, the signal-to-noise ratio test and the washing durability test were conducted to show the high-quality sensing of the proposed sensor and its physical durability in practical use, respectively. PMID:25405527

  16. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any chance of accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner; reliable and timely data dissemination is thus the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for this reliable and timely data dissemination service. In particular, vehicles in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network-coding-based strategy outperforms the erasure-coding- and repetition-based strategies.
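
    The intuition behind the coding comparison can be sketched with elementary probability, assuming independent per-packet delivery (the parameters below are hypothetical, not the paper's analytic model): with repetition coding every one of the k blocks must arrive at least once, whereas with erasure or network coding any k of the n coded packets suffice to decode.

```python
from math import comb

def p_repetition(k, n, p):
    """Repetition: each of k data blocks is sent n/k times; delivery
    succeeds only if at least one copy of *every* block arrives."""
    copies = n // k
    return (1.0 - (1.0 - p) ** copies) ** k

def p_coded(k, n, p):
    """Erasure/network coding: any k of the n coded packets suffice,
    so success is P(Binomial(n, p) >= k)."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))

k, n, p = 4, 12, 0.6   # 4 blocks, 12 transmissions, 60% per-packet delivery
rep, cod = p_repetition(k, n, p), p_coded(k, n, p)
print(f"repetition: {rep:.4f}, coded: {cod:.4f}")
```

    For the same transmission budget the coded strategy wins because it wastes no transmissions on blocks that have already arrived, mirroring the paper's conclusion for the data mulling scenario.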

  17. Assessing local instrument reliability and validity: a field-based example from northern Uganda.

    PubMed

    Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2009-08-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self-reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability (alpha = 0.84-0.87). Internal reliability was good for the anxiety (0.70), conduct problems (0.83), and pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except the conduct problem and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all three of the depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.

  18. A reliability study using computer-based analysis of finger joint space narrowing in rheumatoid arthritis patients.

    PubMed

    Hatano, Katsuya; Kamishima, Tamotsu; Sutherland, Kenneth; Kato, Masaru; Nakagawa, Ikuma; Ichikawa, Shota; Kawauchi, Keisuke; Saitou, Shota; Mukai, Masaya

    2017-02-01

    The joint space difference index (JSDI) is a newly developed radiographic index which can quantitatively assess joint space narrowing progression in rheumatoid arthritis (RA) patients by using an image subtraction method on a computer. The aim of this study was to investigate the reliability of this method when used by non-experts in RA image evaluation. Four non-experts assessed the JSDI for radiographic images of 510 metacarpophalangeal joints from 51 RA patients twice, with an interval of more than 2 weeks. Two rheumatologists and one radiologist as well as the four non-experts examined the joints by using the Sharp-van der Heijde Scoring (SHS) method. The radiologist and four non-experts repeated the scoring with an interval of more than 2 weeks. We calculated intra-/inter-observer reliability using the intraclass correlation coefficient (ICC) for JSDI and SHS scoring, respectively. The intra-/inter-observer reliabilities for the computer-based method were almost perfect (inter-observer ICC, 0.966-0.983; intra-observer ICC, 0.954-0.996). In contrast, intra-/inter-observer reliability for SHS by experts was moderate to almost perfect (inter-observer ICC, 0.556-0.849; intra-observer ICC, 0.589-0.839). The results suggest that our computer-based method has high reliability in detecting finger joint space narrowing progression in RA patients.
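
    ICCs like those reported above can be computed with a short self-contained sketch. The two-way random-effects, absolute-agreement, single-rater form ICC(2,1) is one common choice for inter-observer studies of this kind; whether the study used exactly this variant is an assumption, and the ratings below are hypothetical, not the study's data.

```python
def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one per subject, each row holding the
    ratings given by the k raters.
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                  # mean square, subjects
    msc = ss_cols / (k - 1)                  # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 6 joints scored by 2 observers
ratings = [[4, 4], [3, 3], [5, 4], [2, 2], [4, 5], [1, 1]]
icc = icc_2_1(ratings)
print(f"ICC(2,1) = {icc:.3f}")
```

    Near-identical ratings across observers drive the ICC toward 1, which is the "almost perfect" range quoted for the computer-based method.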

  19. Modeling the reliability of a class of fault-tolerant VLSI/WSI systems based on multiple-level redundancy

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Yuan; Upadhyaya, Shambhu J.

    1994-06-01

    A class of fault-tolerant Very Large Scale Integration (VLSI) and Wafer Scale Integration (WSI) schemes, called multiple-level redundancy, which incorporates both hierarchical and element-level redundancy, has been proposed for the design of high-yield and high-reliability large area array processors. The residual redundancy left unused after successfully reconfiguring and eliminating the manufacturing defects can be used to improve the operational reliability of a system. Since existing techniques for analyzing the effect of residual redundancy on reliability improvement are not applicable, we present a new hierarchical model to estimate the reliability of systems designed by our approach. Our model emphasizes the effect of support circuit (interconnection) failures on system reliability, leading to more accurate analysis. We discuss two area prediction models, one based on the regular WSI process and the other based on the advanced WSI process, to estimate the area-related parameters. This analysis gives insight into practical implementations of fault-tolerant schemes in VLSI/WSI technology. Results of a computer experiment conducted to validate our models are also discussed.
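
    A minimal numerical sketch of the two-level (element-level plus hierarchical) redundancy idea follows. The k-of-n thresholds and reliability values are hypothetical, and the support (interconnection) circuitry is modeled as a single series term, in the spirit of the emphasis the abstract places on support circuit failures.

```python
from math import comb

def k_of_n(r, k, n):
    """Reliability of a block that works when at least k of its n
    identical elements (each with reliability r) survive."""
    return sum(comb(n, i) * r ** i * (1 - r) ** (n - i)
               for i in range(k, n + 1))

def hierarchical(r_elem, r_support):
    """Two-level redundancy sketch: each module needs 3 of 4 elements,
    the array needs 7 of 8 modules, and the support circuitry is a
    single series factor."""
    r_module = k_of_n(r_elem, 3, 4)
    return r_support * k_of_n(r_module, 7, 8)

print(f"system reliability ~ {hierarchical(0.95, 0.99):.4f}")
```

    Redundancy at both levels lifts each block's reliability above that of a single element, while the series support term caps the achievable system reliability, which is why interconnect failures dominate the analysis.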

  20. Nanoparticle-based cancer treatment: can delivered dose and biological dose be reliably modeled and quantified?

    NASA Astrophysics Data System (ADS)

    Hoopes, P. Jack; Petryk, Alicia A.; Giustini, Andrew J.; Stigliano, Robert V.; D'Angelo, Robert N.; Tate, Jennifer A.; Cassim, Shiraz M.; Foreman, Allan; Bischof, John C.; Pearce, John A.; Ryan, Thomas

    2011-03-01

    Essential developments in the reliable and effective use of heat in medicine include: 1) the ability to model energy deposition and the resulting thermal distribution and tissue damage (Arrhenius models) over time in 3D, 2) the development of non-invasive thermometry and imaging for tissue damage monitoring, and 3) the development of clinically relevant algorithms for accurate prediction of the biological effect resulting from a delivered thermal dose in mammalian cells, tissues, and organs. The accuracy and usefulness of this information varies with the type of thermal treatment, the sensitivity and accuracy of tissue assessment, and the volume, shape, and heterogeneity of the tumor target and normal tissue. That said, without the development of an algorithm that has allowed the comparison and prediction of the effects of hyperthermia in a wide variety of tumor and normal tissues and settings (cumulative equivalent minutes, CEM), hyperthermia would never have achieved clinical relevance. A new hyperthermia technology, magnetic nanoparticle-based hyperthermia (mNPH), has distinct advantages over previous techniques: the ability to target the heat to individual cancer cells (with a nontoxic nanoparticle), and to excite the nanoparticles noninvasively with a noninjurious magnetic field, thus sparing associated normal cells and greatly improving the therapeutic ratio. As such, this modality has great potential as a primary and adjuvant cancer therapy. Although the targeted and safe nature of the noninvasive external activation (hysteretic heating) is a tremendous asset, the large number of therapy-based variables and the lack of an accurate and useful method for predicting, assessing and quantifying mNP dose and treatment effect is a major obstacle to moving the technology into routine clinical practice. 
Among other parameters, mNPH will require the accurate determination of specific nanoparticle heating capability, the total nanoparticle content and biodistribution in

  1. A Reliable and Inexpensive Method of Nucleic Acid Extraction for the PCR-Based Detection of Diverse Plant Pathogens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A reliable extraction method is described for the preparation of total nucleic acids from several plant genera for subsequent detection of plant pathogens by PCR-based techniques. By the combined use of a modified CTAB (cetyltrimethylammonium bromide) extraction protocol and a semi-automatic homogen...

  2. Content Validity and Inter-Rater Reliability of the Halliwick-Concept-Based Instrument "Swimming with Independent Measure"

    ERIC Educational Resources Information Center

    Srsen, Katja Groleger; Vidmar, Gaj; Pikl, Masa; Vrecar, Irena; Burja, Cirila; Krusec, Klavdija

    2012-01-01

    The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine…

  3. Tool for Assessing Responsibility-Based Education (TARE): Instrument Development, Content Validity, and Inter-Rater Reliability

    ERIC Educational Resources Information Center

    Wright, Paul M.; Craig, Mark W.

    2011-01-01

    Numerous scholars have stressed the importance of personal and social responsibility in physical activity settings; however, there is a lack of instrumentation to study the implementation of responsibility-based teaching strategies. The development, content validity, and initial inter-rater reliability testing of the Tool for Assessing…

  4. The Development of the Functional Literacy Experience Scale Based upon Ecological Theory (FLESBUET) and Validity-Reliability Study

    ERIC Educational Resources Information Center

    Özenç, Emine Gül; Dogan, M. Cihangir

    2014-01-01

    This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…

  5. Content validity and inter-rater reliability of the Halliwick-concept-based instrument 'Swimming with Independent Measure'.

    PubMed

    Sršen, Katja Groleger; Vidmar, Gaj; Pikl, Maša; Vrečar, Irena; Burja, Cirila; Krušec, Klavdija

    2012-06-01

    The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine its content validity and inter-rater reliability. Fifty-four healthy children, 3.5-11 years old, from a mainstream swimming program participated in a content validity study. They were evaluated with SWIM and the national evaluation system of swimming abilities (classifying children into seven categories). To study the inter-rater reliability of SWIM, we included 37 children and youth from a Halliwick swimming program, aged 7-22 years, who were evaluated by two Halliwick instructors independently. The average SWIM score differed between national evaluation system categories and followed the expected order (P<0.001), whereby a ceiling effect was observed in the higher categories. High inter-rater reliability was found for all 11 SWIM items. The lowest reliability was observed for item G (sagittal rotation), although the estimates were still above 0.9. As expected, the highest reliability was observed for the total score (intraclass correlation 0.996). The validity of SWIM with respect to the national evaluation system of swimming abilities is high until the point where a swimmer is well adapted to water and already able to learn some swimming techniques. The inter-rater reliability of SWIM is very high; thus, we believe that SWIM can be used in further research and practice to follow the progress of swimmers.

  6. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  7. Reliability-centered maintenance for ground-based large optical telescopes and radio antenna arrays

    NASA Astrophysics Data System (ADS)

    Marchiori, G.; Formentin, F.; Rampini, F.

    2014-07-01

    In recent years, EIE GROUP has been increasingly involved in projects for large optical telescopes and radio antenna arrays. In this frame, the paper describes a fundamental aspect of the Logistic Support Analysis (LSA) process: the application of the Reliability-Centered Maintenance (RCM) methodology to generate maintenance plans for ground-based large optical telescopes and radio antenna arrays. This helps maintenance engineers make sure that the telescopes continue to work properly, doing what their users require of them in their present operating conditions. The main objective of the RCM process is to establish the complete maintenance regime, with the safe minimum of required maintenance, carried out without any risk to personnel, telescope or subsystems. At the same time, a correct application of RCM increases cost effectiveness, telescope uptime and item availability, and provides a greater understanding of the level of risk that the organization is managing. In parallel, engineers must make a great effort from the initial phase of the project to obtain a telescope that requires easy maintenance activities and simple replacement of the major assemblies, taking special care over access design and item location, and over the implementation and design of special lifting equipment and handling devices for heavy items. This maintenance engineering framework is based on seven points, which lead to the main steps of the RCM program. The initial steps of the RCM process consist of: system selection and data collection (MTBF, MTTR, etc.), definition of system boundaries and operating context, telescope description with the use of functional block diagrams, and the running of a FMECA to address the dominant causes of equipment failure and to lay down the Critical Items List. In the second part of the process the RCM logic is applied, which helps to determine the appropriate maintenance tasks for each identified failure mode. Once

  8. Reliability and structural integrity

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1976-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
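A minimal numerical sketch of such a Bayesian update (the prior crack probability and the probability of detection below are illustrative assumptions, not values from the report):

```python
# Bayes' update of structural state after an inspection that finds no crack.
# All numbers are illustrative assumptions.

def posterior_crack_prob(prior, pod):
    """P(crack present | inspection found nothing), given the prior crack
    probability and the inspection's probability of detection (POD)."""
    miss = 1.0 - pod                      # P(no detection | crack present)
    return miss * prior / (miss * prior + (1.0 - prior))

prior = 0.05      # assumed prior probability that a crack is present
pod = 0.90        # assumed probability the inspection detects an existing crack

post = posterior_crack_prob(prior, pod)
print(f"crack probability after a clean inspection: {post:.4f}")
```

Between inspections, crack growth would push this posterior back up, which is exactly the interplay the model above tracks across repeated inspections.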

  9. Asymmetric programming: a highly reliable metadata allocation strategy for MLC NAND flash memory-based sensor systems.

    PubMed

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-10-10

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
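The MSB-first idea can be sketched with a toy allocator (the even/odd page pairing below is hypothetical; real MLC shared-page maps are device-specific):

```python
# Toy sketch of MSB-first metadata placement. The pairing rule is a
# hypothetical stand-in for a device's actual shared-page map.

PAGES_PER_BLOCK = 8

def is_msb(page):
    # Hypothetical pairing: even-indexed pages are MSB (more reliable),
    # odd-indexed pages are LSB.
    return page % 2 == 0

def allocate(kind, free_pages):
    """Pick a page for a write: metadata goes to an MSB page when one is free."""
    if kind == "metadata":
        for p in free_pages:
            if is_msb(p):
                free_pages.remove(p)
                return p
    # normal data, or no MSB page left: first free page
    return free_pages.pop(0)

free = list(range(PAGES_PER_BLOCK))
meta_page = allocate("metadata", free)   # lands on an MSB page
data_page = allocate("data", free)       # takes the next free page
print(meta_page, data_page)
```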

  10. Test–retest reliability of the prefrontal response to affective pictures based on functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Yuxia; Mao, Mengchai; Zhang, Zong; Zhou, Hui; Zhao, Yang; Duan, Lian; Kreplin, Ute; Xiao, Xiang; Zhu, Chaozhe

    2017-01-01

    Functional near-infrared spectroscopy (fNIRS) is being increasingly applied to affective and social neuroscience research; however, the reliability of this method is still unclear. This study aimed to evaluate the test-retest reliability of the fNIRS-based prefrontal response to emotional stimuli. Twenty-six participants viewed unpleasant and neutral pictures, and were simultaneously scanned by fNIRS in two sessions three weeks apart. The reproducibility of the prefrontal activation map was evaluated at three spatial scales (mapwise, clusterwise, and channelwise) at both the group and individual levels. The influence of the time interval was also explored and comparisons were made between longer (intersession) and shorter (intrasession) time intervals. The reliabilities of the activation map at the group level for the mapwise (up to 0.88, the highest value appeared in the intersession assessment) and clusterwise scales (up to 0.91, the highest appeared in the intrasession assessment) were acceptable, indicating that fNIRS may be a reliable tool for emotion studies, especially for a group analysis and under larger spatial scales. However, it should be noted that the individual-level and the channelwise fNIRS prefrontal responses were not sufficiently stable. Future studies should investigate which factors influence reliability, as well as the validity of fNIRS used in emotion studies.

  11. Estimating and comparing the reliability of a suite of workplace-based assessments: an obstetrics and gynaecology setting.

    PubMed

    Homer, Matt; Setna, Zeryab; Jha, Vikram; Higham, Jenny; Roberts, Trudie; Boursicot, Katherine

    2013-08-01

    This paper reports on a study that compares estimates of the reliability of a suite of workplace based assessment forms as employed to formatively assess the progress of trainee obstetricians and gynaecologists. The use of such forms of assessment is growing nationally and internationally in many specialties, but there is little research evidence on comparisons by procedure/competency and form-type across an entire specialty. Generalisability theory combined with a multilevel modelling approach is used to estimate variance components, G-coefficients and standard errors of measurement across 13 procedures and three form-types (mini-CEX, OSATS and CbD). The main finding is that there are wide variations in the estimates of reliability across the forms, and that therefore the guidance on assessment within the specialty does not always allow for enough forms per trainee to ensure that the levels of reliability of the process is adequate. There is, however, little evidence that reliability varies systematically by form-type. Methodologically, the problems of accurately estimating reliability in these contexts through the calculation of variance components and, crucially, their associated standard errors are considered. The importance of the use of appropriate methods in such calculations is emphasised, and the unavoidable limitations of research in naturalistic settings are discussed.
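A sketch of how a G-coefficient grows with the number of forms per trainee, which is the mechanism behind the paper's point about needing enough forms (the variance components below are made up for illustration):

```python
# Relative G coefficient for a persons-by-forms design: true-score variance
# over itself plus relative error variance, which shrinks as forms are added.
# Variance components here are illustrative, not the study's estimates.

def g_coefficient(var_trainee, var_interaction_error, n_forms):
    rel_error = var_interaction_error / n_forms
    return var_trainee / (var_trainee + rel_error)

var_p, var_pi = 0.40, 0.90   # assumed trainee and trainee-by-form components
for n in (1, 6, 12):
    print(n, round(g_coefficient(var_p, var_pi, n), 3))
```

Inverting the same formula gives the number of forms needed to hit a target reliability, which is how such guidance is typically derived.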

  12. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical controlled(CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure rate components rather than moderate failure rate components, which is not applicable in some conditions. Aiming at solving the problem of CNC lathes reliability allocating, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis(FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of direct combination of comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
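A generic weight-based allocation can be sketched as follows; note the transform here is a placeholder cubic stand-in, not the paper's actual transformed function:

```python
# Sketch of comprehensive reliability allocation: split a system-level
# failure-rate target among subsystems in proportion to transformed FMEA
# scores. The cubic transform and all numbers are placeholders.

def transform(score, amplitude=1.0):
    # placeholder cubic transform of a normalized FMEA score in [0, 1]
    return amplitude * score ** 3

def allocate(lam_system, fmea_scores):
    weights = [transform(s) for s in fmea_scores]
    total = sum(weights)
    return [lam_system * w / total for w in weights]

# hypothetical normalized scores for three subsystems
lam_targets = allocate(1e-4, [0.9, 0.6, 0.3])
print([f"{x:.2e}" for x in lam_targets])
```

A steeper transform concentrates the failure-rate budget on the most critical subsystems; the choice of amplitude plays the role of the tunable transform amplitude mentioned in the abstract.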

  13. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    PubMed Central

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473

  14. Development, construct validity and test-retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball.

    PubMed

    de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J

    2017-01-16

    The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice for reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test were different for classifications, playing standards and sex. Intraclass correlation coefficients (ICC) were calculated to quantify reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). Performance time of low (≤2.5) and high (≥3.0) classification players was borderline not significant with a moderate ES (P = 0.06, ES = 0.58). The reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described methodology of development is recommended for use in other sports to develop sport-specific tests.

  15. The Reliability and Validity of the Complex Task Performance Assessment: A Performance-Based Assessment of Executive Function

    PubMed Central

    Wolf, Timothy J.; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2015-01-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): An ecologically-valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was 0.991. The ICCs for all of the sub scores of the CTPA were also high (0.889-0.977). The CTPA total score was significantly correlated to Condition 4 of the DKEFS Color-Word Interference Test (ρ = −0.425), and the Wechsler Test of Adult Reading (ρ = −0.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = 0.007) and all sub scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function. PMID:25939359

  16. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2015-05-05

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated to Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425), and the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  17. Test-Retest Reliability of an Automated Infrared-Assisted Trunk Accelerometer-Based Gait Analysis System.

    PubMed

    Hsu, Chia-Yu; Tsai, Yuh-Show; Yau, Cheng-Shiang; Shie, Hung-Hai; Wu, Chu-Ming

    2016-07-23

    The aim of this study was to determine the test-retest reliability of an automated infrared-assisted, trunk accelerometer-based gait analysis system for measuring gait parameters of healthy subjects in a hospital. Thirty-five participants (28 of them females; age range, 23-79 years) performed a 5-m walk twice using an accelerometer-based gait analysis system with infrared assist. Measurements of spatiotemporal gait parameters (walking speed, step length, and cadence) and trunk control (gait symmetry, gait regularity, acceleration root mean square (RMS), and acceleration root mean square ratio (RMSR)) were recorded in two separate walking tests conducted 1 week apart. Relative and absolute test-retest reliability was determined by calculating the intra-class correlation coefficient (ICC3,1) and smallest detectable difference (SDD), respectively. The test-retest reliability was excellent for walking speed (ICC = 0.87, 95% confidence interval = 0.74-0.93, SDD = 13.4%), step length (ICC = 0.81, 95% confidence interval = 0.63-0.91, SDD = 12.2%), cadence (ICC = 0.81, 95% confidence interval = 0.63-0.91, SDD = 10.8%), and trunk control (step and stride regularity in anterior-posterior direction, acceleration RMS and acceleration RMSR in medial-lateral direction, and acceleration RMS and stride regularity in vertical direction). An automated infrared-assisted, trunk accelerometer-based gait analysis system is a reliable tool for measuring gait parameters in the hospital environment.
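From a reported ICC, an absolute smallest detectable difference follows from the standard SEM/SDD formulas; a sketch using the ICC of 0.87 reported for walking speed with an assumed between-subject SD (the SD value is illustrative):

```python
import math

# Standard error of measurement (SEM) and 95% smallest detectable
# difference (SDD) for a test-retest design. The SD below is an assumption.

def sem(sd, icc):
    return sd * math.sqrt(1.0 - icc)

def sdd(sd, icc):
    # 1.96 * sqrt(2) * SEM: 95% threshold for a real change between two tests
    return 1.96 * math.sqrt(2.0) * sem(sd, icc)

sd_speed = 0.15   # assumed between-subject SD of walking speed (m/s)
icc = 0.87        # ICC reported for walking speed
print(f"SEM = {sem(sd_speed, icc):.4f} m/s, SDD = {sdd(sd_speed, icc):.4f} m/s")
```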

  18. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
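A plain Monte Carlo version of such a fracture limit state, for intuition only (the distributions and numbers are illustrative, and this sketch omits the PFEM and second-moment machinery):

```python
import math
import random

# Monte Carlo estimate of fracture probability: failure when the stress
# intensity factor K = Y * sigma * sqrt(pi * a) exceeds the toughness K_IC.
# All distributions and parameters are illustrative assumptions.

random.seed(1)
N = 100_000
failures = 0
for _ in range(N):
    stress = random.gauss(200.0, 20.0)        # applied stress, MPa
    k_ic = random.gauss(60.0, 5.0)            # fracture toughness, MPa*sqrt(m)
    a = abs(random.gauss(0.01, 0.002))        # crack length, m
    k = 1.12 * stress * math.sqrt(math.pi * a)  # edge-crack geometry factor
    if k >= k_ic:
        failures += 1
pf = failures / N
print(f"estimated probability of fracture: {pf:.4f}")
```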

  19. Hospital-based fall program measurement and improvement in high reliability organizations.

    PubMed

    Quigley, Patricia A; White, Susan V

    2013-05-31

    Falls and fall injuries in hospitals are the most frequently reported adverse event among adults in the inpatient setting. Advancing measurement and improvement around falls prevention in the hospital is important as falls are a nurse sensitive measure and nurses play a key role in this component of patient care. A framework for applying the concepts of high reliability organizations to falls prevention programs is described, including discussion of the core characteristics of such a model and determining the impact at the patient, unit, and organizational level. This article showcases the components of a patient safety culture and the integration of these components with fall prevention, the role of nurses, and high reliability.

  20. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program.

    PubMed

    Sled, Elizabeth A; Sheehy, Lisa M; Felson, David T; Costigan, Patrick A; Lam, Miu; Cooke, T Derek V

    2011-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. (1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. (2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977-0.999 for computer analysis; 0.820-0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839-0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers.

  1. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-10-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  2. Integrated avionics reliability

    NASA Technical Reports Server (NTRS)

    Alikiotis, Dimitri

    1988-01-01

    The integrated avionics reliability task is an effort to build credible reliability and/or performability models for multisensor integrated navigation and flight control. The research was initiated by the reliability analysis of a multisensor navigation system consisting of the Global Positioning System (GPS), the Long Range Navigation system (Loran C), and an inertial measurement unit (IMU). Markov reliability models were developed based on system failure rates and mission time.
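A minimal Markov-style sketch for a duplex sensor configuration that works while at least one unit works (the failure rates are assumed, and the closed form below is the standard no-repair parallel-system solution, not the report's model):

```python
import math

# Reliability of two independent units in parallel (no repair): the
# closed-form solution of the corresponding 4-state Markov chain.
# Failure rates are illustrative assumptions, per hour.

lam_gps, lam_imu = 1e-4, 5e-5

def r_parallel(t):
    return (math.exp(-lam_gps * t) + math.exp(-lam_imu * t)
            - math.exp(-(lam_gps + lam_imu) * t))

for t in (100.0, 1000.0, 10000.0):
    print(t, round(r_parallel(t), 6))
```

For larger sensor suites with coverage and reconfiguration, the same approach generalizes to numerically integrating the Markov state equations rather than using a closed form.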

  3. Methodology for reliability based condition assessment. Application to concrete structures in nuclear plants

    SciTech Connect

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
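A plain (non-adaptive) Monte Carlo sketch of the same idea: strength degrades over time while the structure sees one extreme load event per year (the linear degradation law and all numbers are illustrative):

```python
import random

# Time-dependent reliability by Monte Carlo: each trial draws an initial
# strength and a degradation rate, then checks annual extreme loads against
# the degraded strength. All models and numbers are illustrative.

random.seed(7)
YEARS, N = 40, 20_000
failures = 0
for _ in range(N):
    r0 = random.gauss(100.0, 10.0)            # initial strength
    rate = abs(random.gauss(0.005, 0.002))    # fractional strength loss / year
    failed = False
    for year in range(1, YEARS + 1):
        strength = r0 * (1.0 - rate * year)   # linear degradation model
        load = random.gauss(50.0, 15.0)       # one extreme load event per year
        if load > strength:
            failed = True
            break
    failures += failed
pf = failures / N
print(f"{YEARS}-year failure probability ~ {pf:.4f}")
```

The report's adaptive simulation concentrates samples where they matter most; the brute-force loop above conveys only the structure of the time-dependent calculation.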

  4. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation

    PubMed Central

    ter Horst, Arjan C.; Koppen, Mathieu; Selen, Luc P. J.; Medendorp, W. Pieter

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement. PMID:26658990
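The reliability-proportional weighting described above is minimum-variance cue fusion, which can be sketched directly (the estimates and variances below are illustrative):

```python
# Statistically optimal combination of two cues: weights proportional to
# inverse variance (reliability). Numbers are illustrative.

def fuse(est_vis, var_vis, est_vest, var_vest):
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    w_vest = 1.0 - w_vis
    combined = w_vis * est_vis + w_vest * est_vest
    combined_var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return combined, combined_var, w_vis

# high-coherence visual cue: low variance, so it dominates the estimate
est, var, w = fuse(1.2, 0.04, 1.0, 0.16)
print(round(est, 3), round(var, 3), round(w, 2))
```

Lowering visual coherence raises `var_vis`, shifting weight toward the vestibular cue, which is the trial-to-trial reweighting the study tests for.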

  5. Temporal Stability of Strength-Based Assessments: Test-Retest Reliability of Student and Teacher Reports

    ERIC Educational Resources Information Center

    Romer, Natalie; Merrell, Kenneth W.

    2013-01-01

    This study focused on evaluating the temporal stability of self-reported and teacher-reported perceptions of students' social and emotional skills and assets. We used a test-retest reliability procedure over repeated administrations of the child, adolescent, and teacher versions of the "Social-Emotional Assets and Resilience Scales".…

  6. Optimum structural design based on reliability and proof-load testing

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J. N.

    1969-01-01

    Proof-load test eliminates structures with strength less than the proof load and improves the reliability value in analysis. It truncates the distribution function of strength at the proof load, thereby alleviating verification of a fitted distribution function at the lower tail portion where data are usually nonexistent.
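The truncation effect can be sketched directly: conditioning strength on survival of the proof load raises the computed reliability (normal strength and all numbers are assumptions, not the paper's model):

```python
import math

# Reliability against a service load, with and without the information
# that the structure already survived a proof load. Numbers illustrative.

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, sigma = 100.0, 10.0       # assumed strength distribution
proof, service = 90.0, 95.0   # proof load and (higher) service load

def surv(s):                  # P(strength > s)
    return 1.0 - phi((s - mu) / sigma)

r_plain = surv(service)                # no proof-load information
r_proof = surv(service) / surv(proof)  # strength known to exceed the proof load
print(round(r_plain, 4), round(r_proof, 4))
```

The ratio form is exactly the truncation at the proof load: the lower tail of the strength distribution, where data are scarce, no longer needs to be modeled.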

  7. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.

    PubMed

    ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.

  8. Checking the reliability of a linear-programming based approach towards detecting community structures in networks.

    PubMed

    Chen, W Y C; Dress, A W M; Yu, W Q

    2007-09-01

    Here, the reliability of a recent approach that uses parameterised linear programming to detect community structures in networks has been investigated. Using a one-parameter family of objective functions, a number of "perturbation experiments" document that our approach works rather well. A real-life network and a family of benchmark networks are also analysed.

  9. Moving to a Higher Level for PV Reliability through Comprehensive Standards Based on Solid Science (Presentation)

    SciTech Connect

    Kurtz, S.

    2014-11-01

    PV reliability is a challenging topic because of the desired long life of PV modules, the diversity of use environments and the pressure on companies to rapidly reduce their costs. This presentation describes the challenges, examples of failure mechanisms that we know or don't know how to test for, and how a scientific approach is being used to establish international standards.

  10. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

  11. RELIABILITY-BASED UNCERTAINTY ANALYSIS OF GROUNDWATER CONTAMINANT TRANSPORT AND REMEDIATION

    EPA Science Inventory

    This report presents a discussion of the application of the first- and second-order reliability methods (FORM and SORM, respectively) to ground-water transport and remediation, and to public health risk assessment. Using FORM and SORM allows the formal incorporation of parameter...
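For a linear limit state with independent normal variables, FORM reduces to a closed-form reliability index; a minimal sketch with illustrative numbers (real transport or risk models would need the full iterative FORM algorithm):

```python
import math

# First-order reliability method (FORM) for the linear limit state
# g = R - L with independent normal R (capacity) and L (demand).
# All parameters are illustrative.

def phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu_r, sd_r = 120.0, 12.0   # capacity
mu_l, sd_l = 80.0, 16.0    # demand

beta = (mu_r - mu_l) / math.sqrt(sd_r**2 + sd_l**2)   # reliability index
pf = phi(-beta)                                       # failure probability
print(f"beta = {beta:.3f}, Pf = {pf:.5f}")
```

SORM refines this estimate by accounting for the curvature of a nonlinear limit-state surface at the design point.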

  12. Predictions of Crystal Structure Based on Radius Ratio: How Reliable Are They?

    ERIC Educational Resources Information Center

    Nathan, Lawrence C.

    1985-01-01

    Discussion of crystalline solids in undergraduate curricula often includes the use of radius ratio rules as a method for predicting which type of crystal structure is likely to be adopted by a given ionic compound. Examines this topic, establishing more definitive guidelines for the use and reliability of the rules. (JN)

  13. A reliable and highly sensitive, digital PCR-based assay for early detection of citrus Huanglongbing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Huanglongbing (HLB) is caused by a phloem-limited bacterium, Ca. Liberibacter asiaticus (Las) in the United States. The bacterium is often present at a low concentration and unevenly distributed in the early stage of infection, making reliable and early diagnosis a challenge. We have developed a pro...

  14. Contributions to a reliable hydrogen sensor based on surface plasmon surface resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Morjan, Martin; Züchner, Harald; Cammann, Karl

    2009-06-01

    Hydrogen is being seen as a potentially inexhaustible, clean power supply. Direct hydrogen production and storage techniques that would eliminate carbon by-products and compete in cost are accelerated in R&D due to the recent sharp price increase of crude oil. But hydrogen is also linked with certain risks of use, namely the danger of explosions if mixed with air due to the very low energy needed for ignition and the possibility of diminishing the ozone layer by undetected leaks. To reduce those risks, efficient, sensitive and very early warning systems are needed. This paper contributes to this challenge by adopting the optical method of Surface-Plasmon-Resonance (SPR) spectroscopy for sensitive detection of hydrogen concentrations well below the lower explosion limit. The technique of SPR performed with fibre optics would in principle allow remote control without any electrical contacts in the potential explosion zone. A thin palladium metal layer has been studied as the sensing element. A simulation programme to find an optimum sensor design led to the conclusion that an Otto configuration is more advantageous under intended "real world" measurement conditions than a Kretschmann configuration. This could be experimentally verified. The very small air gap in the Otto configuration could be successfully replaced by a several-hundred-nm-thick intermediate layer of MgF2 or SiO2 to ease the fabrication of hydrogen sensor chips based on glass slide substrates. It could be demonstrated that, by separate detection of the TM- and TE-polarized light fractions, the TE-polarized beam could be used as a reference signal, since the TE part does not excite surface plasmons and thus is not influenced by the presence of hydrogen. Choosing the measured TM/TE intensity ratio as the analytical signal, a sensor chip made from a BK7 glass slide with a 425 nm thick intermediate layer of SiO2 and a sensing layer of 50 nm Pd on top allowed a drift-free, reliable and reversible

  15. Reliable Magnetic Resonance Imaging Based Grading System for Cervical Intervertebral Disc Degeneration

    PubMed Central

    Chen, Antonia F.; Kang, James D.; Lee, Joon Y.

    2016-01-01

    Study Design Observational. Purpose To develop a simple and comprehensive grading system for cervical discs that precisely, consistently and meaningfully presents radiologic and morphologic data. Overview of Literature The Thompson grading system is commonly used to classify the severity of degenerative lumbar discs on magnetic resonance imaging (MRI). Inherent differences in the morphological and physiological characteristics of cervical discs have hindered development of precise classification systems. Other grading systems have been developed for degenerating cervical discs, but their versatility and feasibility in the clinical setting is suboptimal. Methods MRIs of 46 human cervical discs were de-identified and displayed in PowerPoint format. Each slide depicted a single disc with a normal (grade 0) disc displayed in the top right corner for reference. The presentation was given to 25 physicians comprising attending spine surgeons, spine fellows, orthopaedic residents, and two attending musculoskeletal radiologists. The grading system included Grade 0 (normal height compared to C2–3, mid cleft still visible), grade 1 (dark disc, normal height), grade 2 (collapsed disc, few osteophytes), and grade 3 (collapsed disc, many osteophytes). The ease of use of the system was gauged in the participants and the interobserver reliability was calculated. Results The intraclass correlation coefficient for interobserver reliability was 0.87, and 0.94 for intraobserver reliability, indicating excellent reliability. Ninety-five percent and 85 percent of the clinicians judged the grading system to be clinically feasible and useful in daily practice, respectively. Conclusions The grading system is easy to use, has excellent reliability, and can be used for precise and consistent clinician communication. PMID:26949461

  16. RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.

    PubMed

    Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang

    2014-01-01

    In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules, ensuring the reliability and real-time performance of the arena system. Easily configurable interfaces on a desktop computer, implemented by Python scripts, are provided to transmit the visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool to investigate the quantitative relationship between visual inputs and induced flight behaviors and will also be helpful to visual-motor research in other related fields.

  17. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
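
    The Weibull-intensity (Crow-AMSAA) process described in this record has closed-form maximum-likelihood estimators for failure-terminated testing. A minimal sketch, not the paper's actual implementation; the function name and the failure times below are illustrative:

```python
import math

def crow_amsaa_mle(times):
    """MLE for the Weibull-intensity (Crow-AMSAA) process, failure-terminated
    case: the test ends at the n-th failure, so T equals the last failure time.

    Returns (beta, lambda, instantaneous MTBF at end of test).
    """
    n = len(times)
    T = times[-1]
    # Failure-terminated MLE: beta = n / sum(ln(T / t_i)) over the first n-1 failures
    beta = n / sum(math.log(T / t) for t in times[:-1])
    lam = n / T ** beta
    # Instantaneous failure intensity at T is lam * beta * T**(beta-1); MTBF is its inverse
    mtbf = 1.0 / (lam * beta * T ** (beta - 1))
    return beta, lam, mtbf

# Illustrative failure times (hours); beta < 1 indicates reliability growth
beta, lam, mtbf = crow_amsaa_mle([10.0, 50.0, 150.0, 400.0, 800.0])
```

    With these inverse-spaced failure times the estimated shape parameter falls below 1, the signature of a system whose failure intensity is decreasing as fixes are incorporated.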

  18. A human reliability based usability evaluation method for safety-critical software

    SciTech Connect

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.; Ragsdale, A.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with that of the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis to heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)
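
    The record above describes assigning probabilistic modifiers to individual heuristics to arrive at a usability error probability (UEP). As a purely hypothetical sketch of that general idea (the actual SPAR-H modifier values and nominal probabilities are not given in the abstract; `nominal` and the multipliers below are invented for illustration):

```python
def usability_error_probability(nominal, multipliers):
    """Hypothetical sketch: scale a nominal error probability by
    per-heuristic modifiers (SPAR-H style), capping the result at 1.0.
    The modifier values are illustrative, not the method's actual tables."""
    uep = nominal
    for m in multipliers:
        uep *= m
    return min(uep, 1.0)

# Two violated heuristics with invented modifiers of 2x and 5x
uep = usability_error_probability(0.01, [2.0, 5.0])
```

    A higher UEP would then flag the usability issue for a more immediate fix, which is the prioritisation role the abstract describes.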

  19. Validity and reliability of intra-stroke kayak velocity and acceleration using a GPS-based accelerometer.

    PubMed

    Janssen, Ina; Sachlikidis, Alexi

    2010-03-01

    The aim of this study was to assess the validity and reliability of the velocity and acceleration measured by kayak-mounted GPS-based accelerometer units compared to video-derived measurements, and the effect of satellite configuration on velocity. Four GPS-based accelerometer units with different accelerometer ranges (2 g or 6 g) were mounted on a kayak as the paddler performed 12 trials at three different stroke rates in each of three testing sessions (two in the morning vs. one in the afternoon). The velocity and acceleration derived from the accelerometers were compared with the velocity and acceleration derived from high-speed video footage (100 Hz). Validity was measured using Bland and Altman plots, R2, and the root of the mean of the squared difference (RMSe), while reliability was calculated using the coefficient of variation, R2, and repeated measures analysis of variance (ANOVA) tests. The GPS-based accelerometers under-reported kayak velocity by 0.14-0.19 m/s and acceleration by 1.67 m/s2 when compared to the video-derived measurements. The afternoon session reported the least difference, indicating a time-of-day effect on the measured velocity. This study highlights the need for sports utilising GPS-based accelerometers, such as minimaxX, for intra-stroke measurements to conduct sport-specific validity and reliability studies to ensure the accuracy of their data.
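
    The agreement statistics this record relies on (Bland-Altman bias with limits of agreement, and the RMS difference) are standard formulas; a minimal sketch with illustrative paired measurements, not the study's data:

```python
import math

def bland_altman_bias(ref, test):
    """Mean difference (bias) and 95% limits of agreement between paired
    measurements from a reference method and a test method."""
    diffs = [t - r for r, t in zip(ref, test)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def rmse(ref, test):
    """Root of the mean of the squared differences (RMSe) between paired measures."""
    return math.sqrt(sum((t - r) ** 2 for r, t in zip(ref, test)) / len(ref))

# Illustrative velocities (m/s): video reference vs. accelerometer
video = [5.0, 5.2, 5.4, 5.6]
accel = [4.9, 5.1, 5.2, 5.5]
bias, lo, hi = bland_altman_bias(video, accel)  # negative bias = under-reporting
```

    A negative bias here corresponds to the under-reporting pattern the abstract describes.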

  20. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    SciTech Connect

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring; Chris Johnson

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the unreliability of interactive software can jeopardize usability evaluation by producing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems and to minimize duplicate efforts in both communities.

  1. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    PubMed

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers.
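
    The practice-adjusted RCI described in this record follows a standard form: the retest-minus-baseline difference, corrected for the mean practice effect, divided by the standard error of the difference derived from test-retest reliability. A hedged sketch; the scores and coefficients below are illustrative, not the study's norms:

```python
import math

def reliable_change_index(x1, x2, sd1, r12, practice_effect=0.0):
    """Practice-adjusted RCI: (retest - baseline - mean practice effect)
    divided by the standard error of the difference, where
    SEdiff = sqrt(2) * SEM and SEM = SD * sqrt(1 - r).

    x1, x2: baseline and retest scores; sd1: baseline SD;
    r12: test-retest reliability coefficient."""
    sem = sd1 * math.sqrt(1.0 - r12)
    se_diff = math.sqrt(2.0) * sem
    return (x2 - x1 - practice_effect) / se_diff

# Illustrative: standard score 100 -> 112, SD 15, reliability 0.90,
# mean practice effect of 3 points
rci = reliable_change_index(100.0, 112.0, 15.0, 0.90, practice_effect=3.0)
```

    An |RCI| exceeding 1.96 is conventionally taken as reliable change at the 95% level; the note in the abstract that measures with reliability below 0.50 were excluded matters here, since a low r inflates SEM and makes reliable change nearly impossible to detect.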

  2. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs via adopting MG clustering thinking. In doing so, a multi-objective optimization problem is developed where power loss reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem, a hybrid algorithm named HS-GA, based on genetic and harmony search algorithms, is provided, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method.

  3. Reliability of a novel CBCT-based 3D classification system for maxillary canine impactions in orthodontics: the KPG index.

    PubMed

    Dalessandri, Domenico; Migliorati, Marco; Rubiano, Rachele; Visconti, Luca; Contardo, Luca; Di Lenarda, Roberto; Martin, Conchita

    2013-01-01

    The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG Index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for images evaluation.

  4. Reliability of a Novel CBCT-Based 3D Classification System for Maxillary Canine Impactions in Orthodontics: The KPG Index

    PubMed Central

    Visconti, Luca; Martin, Conchita

    2013-01-01

    The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG Index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for images evaluation. PMID:24235889

  5. On Reliable and Efficient Data Gathering Based Routing in Underwater Wireless Sensor Networks

    PubMed Central

    Liaqat, Tayyaba; Akbar, Mariam; Javaid, Nadeem; Qasim, Umar; Khan, Zahoor Ali; Javaid, Qaisar; Alghamdi, Turki Ali; Niaz, Iftikhar Azim

    2016-01-01

    This paper presents a cooperative routing scheme to improve data reliability. The proposed protocol achieves its objective, however, at the cost of surplus energy consumption. Thus, sink mobility is introduced to minimize the energy consumption cost of nodes, as the sink directly collects data from the network nodes at a minimized communication distance. We also present delay- and energy-optimized versions of our proposed RE-AEDG to further enhance its performance. Simulation results prove the effectiveness of our proposed RE-AEDG in terms of the selected performance metrics. PMID:27589750

  6. Test-Retest Reliability of an Automated Infrared-Assisted Trunk Accelerometer-Based Gait Analysis System

    PubMed Central

    Hsu, Chia-Yu; Tsai, Yuh-Show; Yau, Cheng-Shiang; Shie, Hung-Hai; Wu, Chu-Ming

    2016-01-01

    The aim of this study was to determine the test-retest reliability of an automated infrared-assisted, trunk accelerometer-based gait analysis system for measuring gait parameters of healthy subjects in a hospital. Thirty-five participants (28 of them females; age range, 23–79 years) performed a 5-m walk twice using an accelerometer-based gait analysis system with infrared assist. Measurements of spatiotemporal gait parameters (walking speed, step length, and cadence) and trunk control (gait symmetry, gait regularity, acceleration root mean square (RMS), and acceleration root mean square ratio (RMSR)) were recorded in two separate walking tests conducted 1 week apart. Relative and absolute test-retest reliability was determined by calculating the intra-class correlation coefficient (ICC(3,1)) and smallest detectable difference (SDD), respectively. The test-retest reliability was excellent for walking speed (ICC = 0.87, 95% confidence interval = 0.74–0.93, SDD = 13.4%), step length (ICC = 0.81, 95% confidence interval = 0.63–0.91, SDD = 12.2%), cadence (ICC = 0.81, 95% confidence interval = 0.63–0.91, SDD = 10.8%), and trunk control (step and stride regularity in anterior-posterior direction, acceleration RMS and acceleration RMSR in medial-lateral direction, and acceleration RMS and stride regularity in vertical direction). An automated infrared-assisted, trunk accelerometer-based gait analysis system is a reliable tool for measuring gait parameters in the hospital environment. PMID:27455281
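
    The ICC(3,1) and SDD reported in this record can be computed from a two-way ANOVA decomposition of the repeated scores; a minimal sketch under the usual single-measures consistency definition, with invented example scores:

```python
import math

def icc_3_1(trials):
    """ICC(3,1): two-way mixed model, single measures, consistency.
    `trials` is a list of per-subject rows, one score per session."""
    n, k = len(trials), len(trials[0])
    grand = sum(sum(row) for row in trials) / (n * k)
    row_means = [sum(row) / k for row in trials]
    col_means = [sum(row[j] for row in trials) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)      # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)      # between sessions
    ss_total = sum((x - grand) ** 2 for row in trials for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

def sdd(scores_sd, icc):
    """Smallest detectable difference at the 95% level: 1.96 * sqrt(2) * SEM,
    with SEM = SD * sqrt(1 - ICC)."""
    sem = scores_sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

# Invented test-retest scores for four subjects, two sessions
icc = icc_3_1([[1.0, 1.1], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2]])
```

    With nearly identical sessions the ICC comes out close to 1, and the SDD shrinks accordingly, which is why high ICCs in the abstract coincide with usable SDD percentages.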

  7. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-12-01

    The growing interest in technicians' workload research is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this intense worldwide competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability as a complement to the factory information obtained. The information used emerged from technicians' productivity and earned values, using the concept of a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.

  8. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality of service (QoS) properties are crucial for selecting the web services to take part in the composition. Transactional properties ensure the reliability of the composite web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve dominant web services. This overcomes the significant information loss that results when individual scores are reduced to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.
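
    Skyline computation as used in this record reduces to Pareto-dominance filtering over QoS vectors; a minimal sketch assuming every QoS dimension is to be maximised (the candidate scores below are illustrative):

```python
def skyline(points):
    """Return the non-dominated points (maximisation on every dimension).
    p dominates q if p >= q on all dimensions and p > q on at least one."""
    def dominates(p, q):
        return (all(a >= b for a, b in zip(p, q))
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (availability, reliability) scores for candidate services
candidates = [(0.9, 0.8), (0.7, 0.95), (0.6, 0.6), (0.9, 0.7)]
best = skyline(candidates)
```

    Unlike collapsing the dimensions into one weighted score, the skyline keeps every trade-off candidate, which is exactly the information-loss argument the abstract makes.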

  9. Resistive switching memories based on metal oxides: mechanisms, reliability and scaling

    NASA Astrophysics Data System (ADS)

    Ielmini, Daniele

    2016-06-01

    With the explosive growth of digital data in the era of the Internet of Things (IoT), fast and scalable memory technologies are being researched for data storage and data-driven computation. Among the emerging memories, resistive switching memory (RRAM) raises strong interest due to its high speed, high density as a result of its simple two-terminal structure, and low cost of fabrication. The scaling projection of RRAM, however, requires a detailed understanding of switching mechanisms and there are potential reliability concerns regarding small device sizes. This work provides an overview of the current understanding of bipolar-switching RRAM operation, reliability and scaling. After reviewing the phenomenological and microscopic descriptions of the switching processes, the stability of the low- and high-resistance states will be discussed in terms of conductance fluctuations and evolution in 1D filaments containing only a few atoms. The scaling potential of RRAM will finally be addressed by reviewing the recent breakthroughs in multilevel operation and 3D architecture, making RRAM a strong competitor among future high-density memory solutions.

  10. Multiplication factor for open ground storey buildings-a reliability based evaluation

    NASA Astrophysics Data System (ADS)

    Haran Pragalath, D. C.; Avadhoot, Bhosale; Robin, Davis P.; Pradip, Sarkar

    2016-06-01

    Open Ground Storey (OGS) framed buildings, where the ground storey is kept open without infill walls, mainly to facilitate parking, are increasingly common in urban areas. However, the vulnerability of this type of building has been exposed in past earthquakes. OGS buildings are conventionally designed by a bare-frame analysis that ignores the stiffness of the infill walls present in the upper storeys, but doing so underestimates the inter-storey drift (ISD) and thereby the force demand in the ground storey columns. Therefore, a multiplication factor (MF) is introduced in various international codes to estimate the design forces (bending moments and shear forces) in the ground storey columns. This study focuses on the seismic performance of typical OGS buildings designed by means of MFs. The probabilistic seismic demand models, fragility curves, reliability and cost indices for various frame models, including bare frames and fully infilled frames, are developed. It is found that the MF scheme suggested by the Israel code is better than those of other international codes in terms of reliability and cost.

  11. Coloured Letters and Numbers (CLaN): a reliable factor-analysis based synaesthesia questionnaire.

    PubMed

    Rothen, Nicolas; Tsakanikos, Elias; Meier, Beat; Ward, Jamie

    2013-09-01

    Synaesthesia is a heterogeneous phenomenon, even when considering one particular sub-type. The purpose of this study was to design a reliable and valid questionnaire for grapheme-colour synaesthesia that captures this heterogeneity. By the means of a large sample of 628 synaesthetes and a factor analysis, we created the Coloured Letters and Numbers (CLaN) questionnaire with 16 items loading on 4 different factors (i.e., localisation, automaticity/attention, deliberate use, and longitudinal changes). These factors were externally validated with tests which are widely used in the field of synaesthesia research. The questionnaire showed good test-retest reliability and construct validity (i.e., internally and externally). Our findings are discussed in the light of current theories and new ideas in synaesthesia research. More generally, the questionnaire is a useful tool which can be widely used in synaesthesia research to reveal the influence of individual differences on various performance measures and will be useful in generating new hypotheses.

  12. Reliability and Validity of a Novel Internet-Based Battery to Assess Mood and Cognitive Function in the Elderly

    PubMed Central

    Myers, Candice A.; Keller, Jeffrey N.; Allen, H. Raymond; Brouillette, Robert M.; Foil, Heather; Davis, Allison B.; Greenway, Frank L.; Johnson, William D.; Martin, Corby K.

    2016-01-01

    Dementia is a chronic condition in the elderly and depression is often a concurrent symptom. As populations continue to age, accessible and useful tools to screen for cognitive function and its associated symptoms in elderly populations are needed. The aim of this study was to test the reliability and validity of a new internet-based assessment battery for screening mood and cognitive function in an elderly population. Specifically, the Helping Hand Technology (HHT) assessments for depression (HHT-D) and global cognitive function (HHT-G) were evaluated in a sample of 57 elderly participants (22 male, 35 female) aged 59–85 years. The study sample was categorized into three groups: 1) dementia (n = 8; Mini-Mental State Exam (MMSE) score 10–24), 2) mild cognitive impairment (n = 24; MMSE score 25–28), and 3) control (n = 25; MMSE score 29–30). Test-retest reliability (Pearson correlation coefficient, r) and internal consistency reliability (Cronbach’s alpha, α) of the HHT-D and HHT-G were assessed. Validity of the HHT-D and HHT-G was tested via comparison (Pearson r) to commonly used pencil-and-paper based assessments: HHT-D versus the Geriatric Depression Scale (GDS) and HHT-G versus the MMSE. Good test-retest (r = 0.80; p < 0.0001) and acceptable internal consistency reliability (α = 0.73) of the HHT-D were established. Moderate support for the validity of the HHT-D was obtained (r = 0.60 between the HHT-D and GDS; p < 0.0001). Results indicated good test-retest (r = 0.87; p < 0.0001) and acceptable internal consistency reliability (α = 0.70) of the HHT-G. Validity of the HHT-G was supported (r = 0.71 between the HHT-G and MMSE; p < 0.0001). In summary, the HHT-D and HHT-G were found to be reliable and valid computerized assessments to screen for depression and cognitive status, respectively, in an elderly sample. PMID:27589529
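
    Cronbach's alpha, used in this record for internal consistency, has a simple closed form from the item variances and the variance of the total score; a minimal sketch with invented item scores:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score lists (one list per item, one score
    per subject): alpha = k/(k-1) * (1 - sum(item variances) / var(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(var(item) for item in items) / var(totals))

# Invented scores: 3 items, 4 subjects, strongly correlated items
alpha = cronbach_alpha([[2, 4, 6, 8], [3, 5, 7, 9], [1, 4, 5, 8]])
```

    Values of alpha around 0.70 or above, as reported for the HHT-D and HHT-G, are the conventional threshold for acceptable internal consistency.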

  13. A reliable low-cost wireless and wearable gait monitoring system based on a plastic optical fibre sensor

    NASA Astrophysics Data System (ADS)

    Bilro, L.; Oliveira, J. G.; Pinto, J. L.; Nogueira, R. N.

    2011-04-01

    A wearable and wireless system designed to evaluate quantitatively the human gait is presented. It allows knee sagittal motion monitoring over long distances and periods with a portable and low-cost package. It is based on the measurement of transmittance changes when a side-polished plastic optical fibre is bent. Four voluntary healthy subjects, on five different days, were tested in order to assess inter-day and inter-subject reliability. Results have shown that this technique is reliable, allows a one-time calibration and is suitable in the diagnosis and rehabilitation of knee injuries or for monitoring the performance of competitive athletes. Environmental testing was accomplished in order to study the influence of different temperatures and humidity conditions.

  14. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    This work is an early attempt that uses a physical-statistical modeling approach to study selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to the unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches for nano-scale devices. One of the most important nano-devices is the transistor, which is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier to reliable circuit design at the nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to adopting high-permittivity (high-k) dielectrics as an alternative to the widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bi-layer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data, in order to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in the dielectric that is a function of the time, location and size of previous defects. In the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
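
    The k-out-of-n framework mentioned in the second approach has a standard binomial reliability formula for identical, independent components; a sketch (the survival probability and component counts below are illustrative):

```python
from math import comb

def k_out_of_n_reliability(k, n, p):
    """Probability that at least k of n identical, independent components
    survive, each with survival probability p (binomial tail sum)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative: system works if at least 2 of 3 components survive, p = 0.9
r = k_out_of_n_reliability(2, 3, 0.9)
```

    In the dielectric-breakdown setting of the abstract, "component failure" corresponds to a soft breakdown event, and the system fails once n - k + 1 such events have accumulated.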

  15. MNOS stack for reliable, low optical loss, Cu based CMOS plasmonic devices.

    PubMed

    Emboras, Alexandros; Najar, Adel; Nambiar, Siddharth; Grosse, Philippe; Augendre, Emmanuel; Leroux, Charles; de Salvo, Barbara; de Lamaestre, Roch Espiau

    2012-06-18

We study the electro-optical properties of a Metal-Nitride-Oxide-Silicon (MNOS) stack for use in CMOS-compatible plasmonic active devices. We show that the insertion of an ultrathin stoichiometric Si3N4 layer in a MOS stack increases the electrical reliability of a copper-gate MNOS capacitor from 50 to 95% thanks to a diffusion-barrier effect, while preserving the low optical losses brought by the use of copper as the plasmon-supporting metal. An experimental investigation is undertaken at wafer scale using standard CMOS processes of the LETI foundry. Optical transmission measurements conducted in an MNOS channel-waveguide configuration coupled to standard silicon photonics circuitry confirm the very low optical losses (0.39 dB/μm), in good agreement with predictions using ellipsometric optical constants of Cu.

  16. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    PubMed Central

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects. PMID:23533352
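A common way to combine information entropy with a multi-index evaluation of this kind is the entropy-weight method: indexes that discriminate more strongly between schemes receive larger weights. The sketch below assumes a hypothetical score matrix and is not the paper's exact procedure.

```python
import math

def entropy_weights(matrix):
    """Entropy-weight method: normalize each index (column), compute its
    information entropy, and weight each index by (1 - entropy), so that
    indexes that discriminate more between schemes weigh more."""
    m = len(matrix)       # number of schemes (rows)
    n = len(matrix[0])    # number of indexes (columns)
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - e)
    s = sum(weights)
    return [w_j / s for w_j in weights]

# Hypothetical score matrix: 3 candidate schemes x 4 first-order indexes
# (cost, progress, quality, safety); values are illustrative only.
scores = [[0.8, 0.7, 0.9, 0.6],
          [0.6, 0.9, 0.7, 0.8],
          [0.9, 0.6, 0.8, 0.7]]
w = entropy_weights(scores)
# Synthesis score per scheme: entropy-weighted sum of its index scores.
synthesis = [sum(w_j * row[j] for j, w_j in enumerate(w)) for row in scores]
```

Schemes would then be sorted by `synthesis` to support the final decision.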

  17. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid-1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  18. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computing of building construction projects.

  19. Reliable Acquisition of RAM Dumps from Intel-Based Apple Mac Computers over FireWire

    NASA Astrophysics Data System (ADS)

    Gladyshev, Pavel; Almansoori, Afrah

    RAM content acquisition is an important step in live forensic analysis of computer systems. FireWire offers an attractive way to acquire RAM content of Apple Mac computers equipped with a FireWire connection. However, the existing techniques for doing so require substantial knowledge of the target computer configuration and cannot be used reliably on a previously unknown computer in a crime scene. This paper proposes a novel method for acquiring RAM content of Apple Mac computers over FireWire, which automatically discovers necessary information about the target computer and can be used in the crime scene setting. As an application of the developed method, the techniques for recovery of AOL Instant Messenger (AIM) conversation fragments from RAM dumps are also discussed in this paper.

  20. DEGRADATION SUSCEPTIBILITY METRICS AS THE BASES FOR BAYESIAN RELIABILITY MODELS OF AGING PASSIVE COMPONENTS AND LONG-TERM REACTOR RISK

    SciTech Connect

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.; Ford, Benjamin E.

    2011-07-17

Conventional probabilistic risk assessments (PRAs) are not well-suited to addressing long-term reactor operations. Since passive structures, systems and components are among those for which refurbishment or replacement can be least practical, they might be expected to contribute increasingly to risk in an aging plant. Yet, passives receive limited treatment in PRAs. Furthermore, PRAs produce only snapshots of risk based on the assumption of time-independent component failure rates. This assumption is unlikely to be valid in aging systems. The treatment of aging passive components in PRA does present challenges. First, the service data required to quantify component reliability models are sparse, and this problem is exacerbated by the greater data demands of age-dependent reliability models. A compounding factor is that there can be numerous potential degradation mechanisms associated with the materials, design, and operating environment of a given component. This deepens the data problem, since the risk-informed management of materials degradation and component aging will demand an understanding of the long-term risk significance of individual degradation mechanisms. In this paper we describe a Bayesian methodology that integrates the metrics of materials degradation susceptibility being developed under the Nuclear Regulatory Commission's Proactive Management of Materials Degradation Program with available plant service data to estimate age-dependent passive component reliabilities. Integration of these models into conventional PRA will provide a basis for materials degradation management informed by the predicted long-term operational risk.
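The general shape of such a Bayesian update under sparse service data can be sketched with a conjugate Gamma-Poisson model: a susceptibility-informed prior on the failure rate is updated with observed failures and exposure time. All numbers and the simple linear aging factor below are illustrative assumptions, not the paper's actual models.

```python
def posterior_failure_rate(prior_alpha, prior_beta, failures, exposure_years):
    """Conjugate Gamma-Poisson update: a Gamma(alpha, beta) prior on the
    component failure rate (per year), here imagined as informed by a
    degradation-susceptibility metric, updated with sparse service data."""
    alpha = prior_alpha + failures
    beta = prior_beta + exposure_years
    return alpha / beta, alpha, beta  # posterior mean rate and parameters

# Hypothetical numbers: a susceptibility metric maps to a prior mean of
# 0.02 failures/yr (alpha=2, beta=100); service data: 1 failure in 60 yr.
mean_rate, a, b = posterior_failure_rate(2.0, 100.0, failures=1,
                                         exposure_years=60.0)

# Illustrative age-dependent extension: scale the posterior mean rate by
# a simple (assumed) linear aging factor rather than a constant rate.
aging_factor = lambda t_years: 1.0 + 0.01 * t_years
rate_at_40yr = mean_rate * aging_factor(40.0)
```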

  1. Human reliability-based MC&A models for detecting insider theft.

    SciTech Connect

    Duran, Felicia Angelica; Wyss, Gregory Dane

    2010-06-01

    Material control and accounting (MC&A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC&A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC&A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC&A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.

  2. Reliability of vibration energy harvesters of metal-based PZT thin films

    NASA Astrophysics Data System (ADS)

    Tsujiura, Y.; Suwa, E.; Kurokawa, F.; Hida, H.; Kanno, I.

    2014-11-01

This paper describes the reliability of piezoelectric vibration energy harvesters (PVEHs) based on Pb(Zr,Ti)O3 (PZT) thin films on metal-foil cantilevers. The PZT thin films were directly deposited onto Pt-coated stainless-steel (SS430) cantilevers by rf-magnetron sputtering, and we observed the aging behavior of their power-generation characteristics under resonance vibration for three days. During the aging measurement, there was neither fatigue failure nor degradation of dielectric properties in our PVEHs (length: 13 mm, width: 5.0 mm, thickness: 104 μm), even under a large excitation acceleration of 25 m/s2. However, we observed clear degradation of the generated electric voltage depending on excitation acceleration. The decay rate of the output voltage was 5% from the start of the measurement at 25 m/s2. The transverse piezoelectric coefficient (e31,f) also degraded at almost the same decay rate as the output voltage; this indicates that the degradation of the output voltage was mainly caused by degradation of the piezoelectric properties. From the decay curves, the output powers are estimated to degrade by 7% at 15 m/s2 and 36% at 25 m/s2 if the PVEHs are excited continuously for 30 years.

  3. Tumor Heterogeneity: Mechanisms and Bases for a Reliable Application of Molecular Marker Design

    PubMed Central

    Diaz-Cano, Salvador J.

    2012-01-01

Tumor heterogeneity is a confusing finding in the assessment of neoplasms, potentially resulting in inaccurate diagnostic, prognostic, and predictive tests. This tumor heterogeneity is not always a random and unpredictable phenomenon, and knowledge of it helps in designing better tests. The biologic reasons for this intratumoral heterogeneity are therefore important for understanding both the natural history of neoplasms and the selection of test samples for reliable analysis. The main factors contributing to intratumoral heterogeneity, inducing gene abnormalities or modifying gene expression, include: the gradient of ischemia within neoplasms; the action of the tumor microenvironment (bidirectional interaction between tumor cells and stroma); mechanisms of intercellular transfer of genetic information (exosomes); and differential mechanisms of sequence-independent modification of genetic material and proteins. Intratumoral heterogeneity is at the origin of tumor progression, and it is also a byproduct of the selection process during progression. Any analysis of heterogeneity mechanisms must be integrated within the process of segregation of genetic changes in tumor cells during the clonal expansion and progression of neoplasms. The evaluation of these mechanisms must also consider the redundancy and pleiotropism of molecular pathways, for which appropriate surrogate markers would support the presence or absence of heterogeneous genetics and the main mechanisms responsible. This knowledge would constitute a solid scientific background for future therapeutic planning. PMID:22408433

  4. Reliability and Validity of Web-Based Portfolio Peer Assessment: A Case Study for a Senior High School's Students Taking Computer Course

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Tseng, Kuo-Hung; Chou, Pao-Nan; Chen, Yi-Hui

    2011-01-01

    This study examined the reliability and validity of Web-based portfolio peer assessment. Participants were 72 second-grade students from a senior high school taking a computer course. The results indicated that: 1) there was a lack of consistency across various student raters on a portfolio, or inter-rater reliability; 2) two-thirds of the raters…

  5. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  6. Probing Reliability of Transport Phenomena Based Heat Transfer and Fluid Flow Analysis in Autogeneous Fusion Welding Process

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2010-09-01

    The transport phenomena based heat transfer and fluid flow calculations in weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and viscosity in weld pool are some of these parameters, values of which are rarely known and difficult to assign a priori based on the scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model, which is integrated with a real number based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated with measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.

  7. Person Reliability

    ERIC Educational Resources Information Center

    Lumsden, James

    1977-01-01

    Person changes can be of three kinds: developmental trends, swells, and tremors. Person unreliability in the tremor sense (momentary fluctuations) can be estimated from person characteristic curves. Average person reliability for groups can be compared from item characteristic curves. (Author)

  8. Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals.

    PubMed

    Khezri, Mahdi; Firoozabadi, Mohammad; Sharafat, Ahmad Reza

    2015-11-01

    In this study, we proposed a new adaptive method for fusing multiple emotional modalities to improve the performance of the emotion recognition system. Three-channel forehead biosignals along with peripheral physiological measurements (blood volume pressure, skin conductance, and interbeat intervals) were utilized as emotional modalities. Six basic emotions, i.e., anger, sadness, fear, disgust, happiness, and surprise were elicited by displaying preselected video clips for each of the 25 participants in the experiment; the physiological signals were collected simultaneously. In our multimodal emotion recognition system, recorded signals with the formation of several classification units identified the emotions independently. Then the results were fused using the adaptive weighted linear model to produce the final result. Each classification unit is assigned a weight that is determined dynamically by considering the performance of the units during the testing phase and the training phase results. This dynamic weighting scheme enables the emotion recognition system to adapt itself to each new user. The results showed that the suggested method outperformed conventional fusion of the features and classification units using the majority voting method. In addition, a considerable improvement, compared to the systems that used the static weighting schemes for fusing classification units, was also shown. Using support vector machine (SVM) and k-nearest neighbors (KNN) classifiers, the overall classification accuracies of 84.7% and 80% were obtained in identifying the emotions, respectively. In addition, applying the forehead or physiological signals in the proposed scheme indicates that designing a reliable emotion recognition system is feasible without the need for additional emotional modalities.
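The adaptive weighted linear fusion step described above can be sketched as follows: each classification unit emits a score per emotion class, and the final decision maximizes the weighted sum. The scores and weights below are hypothetical, standing in for the per-unit performance-derived weights of the study.

```python
def fuse_decisions(unit_scores, unit_weights):
    """Weighted linear fusion: each classification unit outputs a score per
    emotion class; the final decision is the class that maximizes the
    weighted sum. Weights reflect each unit's observed performance."""
    classes = unit_scores[0].keys()
    fused = {c: sum(w * s[c] for w, s in zip(unit_weights, unit_scores))
             for c in classes}
    return max(fused, key=fused.get), fused

# Hypothetical scores from 3 units over 3 of the 6 basic emotions.
scores = [{'anger': 0.2, 'happiness': 0.7, 'fear': 0.1},
          {'anger': 0.5, 'happiness': 0.3, 'fear': 0.2},
          {'anger': 0.1, 'happiness': 0.6, 'fear': 0.3}]
# Assumed weights, e.g. derived from per-unit accuracy during testing
# and training; these would be updated dynamically per user.
weights = [0.5, 0.2, 0.3]
label, fused = fuse_decisions(scores, weights)
```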

  9. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: what new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods? Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects applicable to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software; it is currently applying and testing these approaches to software reliability measurement. These parametric models require much project data and may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when some subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.
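A standard example of the distribution-free approach mentioned above is the Kaplan-Meier product-limit estimator, which estimates reliability R(t) directly from failure and right-censored times without assuming a parametric distribution. The test data below are hypothetical.

```python
def kaplan_meier(failure_times, censored_times):
    """Distribution-free reliability: the Kaplan-Meier product-limit
    estimate of R(t) from failure times and right-censored times
    (e.g. test runs that ended without a failure)."""
    events = sorted(set(failure_times))
    all_times = sorted(failure_times + censored_times)
    curve, surv = [], 1.0
    for t in events:
        at_risk = sum(1 for x in all_times if x >= t)  # still under test at t
        d = failure_times.count(t)                     # failures at t
        surv *= (1.0 - d / at_risk)
        curve.append((t, surv))
    return curve

# Hypothetical failure times (hours) plus two runs censored while passing.
curve = kaplan_meier([10.0, 25.0, 25.0, 40.0], censored_times=[30.0, 50.0])
```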

  10. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability

    PubMed Central

    Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J.

    2016-01-01

Background: Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Methods: Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on the accuracy of work-rate tracking (RMSE, root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), the standard error of measurement (SEM), and the minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. Results: All criteria for feasibility were achieved. Mean V′O2peak was 106±9% of predicted V′O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V′O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold in 100%, respiratory compensation point in 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). Conclusions: RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to
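The test-retest statistics used here are related in a simple way: from a one-way random-effects ICC one obtains SEM = SD x sqrt(1 - ICC) and MDC95 = 1.96 x sqrt(2) x SEM. The sketch below uses hypothetical two-session data, not the study's measurements, and ICC(1,1) as one common formulation.

```python
import math
import statistics

def icc_sem_mdc(test, retest):
    """Test-retest agreement sketch: one-way random-effects ICC(1,1) from
    two sessions, then SEM = SD*sqrt(1-ICC) and MDC95 = 1.96*sqrt(2)*SEM."""
    k, n = 2, len(test)
    subject_means = [(a + b) / k for a, b in zip(test, retest)]
    grand = statistics.mean(test + retest)
    # Between-subjects and within-subjects mean squares (one-way ANOVA).
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for a, b, m in zip(test, retest, subject_means)) / n
    icc = (msb - msw) / (msb + (k - 1) * msw)
    sd = statistics.stdev(test + retest)
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return icc, sem, mdc95

# Hypothetical peak values for 5 subjects over two sessions.
icc, sem, mdc = icc_sem_mdc([41, 38, 45, 36, 43], [40, 39, 44, 37, 42])
```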

  11. Hardware based redundant multi-threading inside a GPU for improved reliability

    DOEpatents

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
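A software analogue of this hardware scheme can be sketched simply: run identical instances of a computation concurrently and verify their outputs against each other, flagging any disagreement as a possible transient fault. This is an illustration of the redundancy idea, not the patented GPU mechanism.

```python
from concurrent.futures import ThreadPoolExecutor

def redundant_run(fn, payload, copies=2):
    """Redundant multi-threading sketch: run the same computation in
    several threads on identical inputs and cross-check the outputs;
    a mismatch would signal a transient (soft) error."""
    with ThreadPoolExecutor(max_workers=copies) as pool:
        results = list(pool.map(fn, [payload] * copies))
    if any(r != results[0] for r in results[1:]):
        raise RuntimeError("redundant instances disagree: possible soft error")
    return results[0]

# Deterministic example computation, verified by two redundant instances.
checksum = redundant_run(lambda xs: sum(x * x for x in xs), [1, 2, 3, 4])
```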

  12. A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights

    PubMed Central

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E.; Warfield, Simon K.

    2014-01-01

    Pelvic floor dysfunction is very common in women after childbirth and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the structures of pelvic floor, manual segmentation of the pelvic floor is challenging and suffers from high inter and intra-rater variability of expert raters. Multiple template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target, and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity weighted voting schemes. This class of approach is a form of linear opinion pooling, and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. 
We evaluated our new algorithm in comparison
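The logarithmic opinion pooling step described above amounts to a normalized weighted geometric mean of the per-template label probabilities, in contrast to the weighted arithmetic mean of linear (voting) pooling. The labels, probabilities, and weights below are hypothetical stand-ins for the locally estimated reliability weights.

```python
import math

def log_opinion_pool(probabilities, weights):
    """Logarithmic opinion pooling: fuse per-template label probabilities
    via a normalized weighted geometric mean, so templates judged locally
    unreliable (small weight) contribute less than under linear pooling."""
    labels = probabilities[0].keys()
    pooled = {}
    for lab in labels:
        pooled[lab] = math.exp(sum(w * math.log(p[lab])
                                   for w, p in zip(weights, probabilities)))
    z = sum(pooled.values())          # renormalize to a distribution
    return {lab: v / z for lab, v in pooled.items()}

# Hypothetical: three aligned templates label one voxel, with local
# reliability weights assumed to come from an image-similarity measure.
probs = [{'muscle': 0.8, 'background': 0.2},
         {'muscle': 0.6, 'background': 0.4},
         {'muscle': 0.3, 'background': 0.7}]
fused = log_opinion_pool(probs, weights=[0.5, 0.3, 0.2])
```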

  13. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    SciTech Connect

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-15

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  14. The N of 1 in Arts-Based Research: Reliability and Validity

    ERIC Educational Resources Information Center

    Siegesmund, Richard

    2014-01-01

    N signifies the number of data samples in a study. Traditional research values numerous data samples as this reduces the variability created by extremes. Alternatively, arts-based research privileges the outlier, the N of 1. Oftentimes, what is unique and outside the norm is the focus. There are three approaches to the N of 1 in arts-based…

  15. Child and Adolescent Behaviorally Based Disorders: A Critical Review of Reliability and Validity

    ERIC Educational Resources Information Center

    Mallett, Christopher A.

    2014-01-01

    Objectives: The purpose of this study was to investigate the historical construction and empirical support of two child and adolescent behaviorally based mental health disorders: oppositional defiant and conduct disorders. Method: The study utilized a historiography methodology to review, from 1880 to 2012, these disorders' inclusion in…

  16. From Fulcher to PLEVALEX: Issues in Interface Design, Validity and Reliability in Internet Based Language Testing

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus

    2007-01-01

    Interface design and ergonomics, while already studied in much of educational theory, have not until recently been considered in language testing (Fulcher, 2003). In this paper, we revise the design principles of PLEVALEX, a fully operational prototype Internet based language testing platform. Our focus here is to show PLEVALEX's interfaces and…

  17. Alpha, Dimension-Free, and Model-Based Internal Consistency Reliability

    ERIC Educational Resources Information Center

    Bentler, Peter M.

    2009-01-01

    As pointed out by Sijtsma ("in press"), coefficient alpha is inappropriate as a single summary of the internal consistency of a composite score. Better estimators of internal consistency are available. In addition to those mentioned by Sijtsma, an old dimension-free coefficient and structural equation model based coefficients are…

  18. Reliability and Validity of Authentic Assessment in a Web Based Course

    ERIC Educational Resources Information Center

    Olfos, Raimundo; Zulantay, Hildaura

    2007-01-01

    Web-based courses are promising in that they are effective and have the possibility of their instructional design being improved over time. However, the assessments of said courses are criticized in terms of their validity. This paper is an exploratory case study regarding the validity of the assessment system used in a semi presential web-based…

  19. Reliability and Validity of a Computer-Based Knowledge Mapping System To Measure Content Understanding.

    ERIC Educational Resources Information Center

    Herl, H. E.; O'Neil, H. F., Jr.; Chung, G. K. W. K.; Schacter, J.

    1999-01-01

    Presents results from two computer-based knowledge-mapping studies developed by the National Center for Research on Evaluation, Standards, and Student Testing (CRESST): in one, middle and high school students constructed group maps while collaborating over a network, and in the second, students constructed individual maps while searching the Web.…

  20. Reliability study of Zr and Al incorporated Hf based high-k dielectric deposited by advanced processing

    NASA Astrophysics Data System (ADS)

    Bhuyian, Md Nasir Uddin

Hafnium-based high-κ dielectric materials have been successfully used in industry as a key replacement for SiO2-based gate dielectrics in order to continue CMOS device scaling to the 22-nm technology node. Further scaling according to the device roadmap requires the development of oxides with higher κ values in order to scale the equivalent oxide thickness (EOT) to 0.7 nm or below while achieving low defect densities. In addition, next-generation devices need to meet challenges like improved channel mobility, reduced gate leakage current, good control of threshold voltage, lower interface state density, and good reliability. In order to overcome these challenges, improvements in high-κ film properties and deposition methods are highly desirable. In this dissertation, a detailed study of Zr- and Al-incorporated HfO2-based high-κ dielectrics is conducted to investigate improvements in electrical characteristics and reliability. To meet the scaling requirements of the gate dielectric to sub-0.7 nm, Zr is added to HfO2 to form Hf1-xZrxO2 with x=0, 0.31, and 0.8, where the dielectric film is deposited under various intermediate processing conditions: (i) DADA, intermediate thermal annealing in a cyclical deposition process; (ii) DSDS, a similar cyclical process with exposure to SPA Ar plasma; and (iii) As-Dep, the dielectric deposited without any intermediate step. MOSCAPs are formed with a TiN metal gate, and the reliability of these devices is investigated by subjecting them to a constant voltage stress in the gate injection mode. Stress-induced flat-band voltage shift (ΔVFB), stress-induced leakage current (SILC), and stress-induced interface state degradation are observed. DSDS samples demonstrate the superior characteristics, whereas the worst degradation is observed for DADA samples. 
Time-dependent dielectric breakdown (TDDB) shows that DSDS Hf1-xZrxO2 (x=0.8) has the superior characteristics with reduced oxygen vacancy, which is affiliated to

  1. Test-Retest Reliability and Convergent Validity of a Computer Based Hand Function Test Protocol in People with Arthritis

    PubMed Central

    Srikesavan, Cynthia S.; Shay, Barbara; Szturm, Tony

    2015-01-01

Objectives: A computer-based hand function assessment tool has been developed to provide a standardized method for quantifying task performance during manipulation of common objects/tools/utensils with diverse physical properties and grip/grasp requirements for handling. The study objectives were to determine the test-retest reliability and convergent validity of the test protocol in people with arthritis. Methods: Three different object-manipulation tasks were evaluated twice in forty people with rheumatoid arthritis (RA) or hand osteoarthritis (HOA). Each object was instrumented with a motion sensor and moved in concert with a computer-generated visual target. Self-reported joint pain and stiffness levels were recorded before and after each task. Task performance was determined by comparing the object movement with the computer target motion and was correlated with grip strength, the nine-hole peg test, the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire, and the Health Assessment Questionnaire (HAQ) scores. Results: The test protocol showed moderate to high test-retest reliability of performance measures for the three manipulation tasks, with intraclass correlation coefficients (ICCs) ranging from 0.5 to 0.84, p<0.05. The strength of association between task performance measures and self-reported activity/participation composite scores was low to moderate (Spearman rho <0.7). Low correlations (Spearman rho <0.4) were observed between task performance measures and grip strength, and among the three objects' performance measures. A significant reduction in pain and joint stiffness (p<0.05) was observed after performing each task. Conclusion: The study presents initial evidence on the test-retest reliability and convergent validity of a computer-based hand function assessment protocol in people with rheumatoid arthritis or hand osteoarthritis. 
The novel tool objectively measures overall task performance during a variety of object manipulation tasks done by tracking a

  2. Reliable Alignment in Total Knee Arthroplasty by the Use of an iPod-Based Navigation System.

    PubMed

    Koenen, Paola; Schneider, Marco M; Fröhlich, Matthias; Driessen, Arne; Bouillon, Bertil; Bäthis, Holger

    2016-01-01

    Axial alignment is one of the main objectives in total knee arthroplasty (TKA). Computer-assisted surgery (CAS) is more accurate regarding limb alignment reconstruction compared to the conventional technique. The aim of this study was to analyse the precision of the innovative navigation system DASH® by Brainlab and to evaluate the reliability of intraoperatively acquired data. A retrospective analysis of 40 patients was performed, who underwent CAS TKA using the iPod-based navigation system DASH. Pre- and postoperative axial alignment were measured on standardized radiographs by two independent observers. These data were compared with the navigation data. Furthermore, interobserver reliability was measured. The duration of surgery was monitored. The mean difference between the preoperative mechanical axis by X-ray and the first intraoperatively measured limb axis by the navigation system was 2.4°. The postoperative X-rays showed a mean difference of 1.3° compared to the final navigation measurement. According to radiographic measurements, 88% of arthroplasties had a postoperative limb axis within ±3°. The mean additional time needed for navigation was 5 minutes. We could prove very good precision for the DASH system, which is comparable to established navigation devices with only negligible expenditure of time compared to conventional TKA.

  3. A Multi-Criteria Decision Analysis based methodology for quantitatively scoring the reliability and relevance of ecotoxicological data.

    PubMed

    Isigonis, Panagiotis; Ciffroy, Philippe; Zabeo, Alex; Semenzin, Elena; Critto, Andrea; Giove, Silvio; Marcomini, Antonio

    2015-12-15

    Ecotoxicological data are highly important for risk assessment processes and are used for deriving environmental quality criteria, which are enacted to assure the good quality of waters, soils or sediments and to achieve desirable environmental quality objectives. It is therefore important to evaluate the reliability of available data before they are used in these processes. A thorough analysis of currently available frameworks for the assessment of ecotoxicological data has led to the identification of significant flaws, but at the same time various opportunities for improvement. In this context, a new methodology, based on Multi-Criteria Decision Analysis (MCDA) techniques, has been developed with the aim of analysing the reliability and relevance of ecotoxicological data (which are produced through laboratory biotests for individual effects) in a transparent, quantitative way, through the use of expert knowledge, multiple criteria and fuzzy logic. The proposed methodology can be used for the production of weighted Species Sensitivity Weighted Distributions (SSWD), as a component of the ecological risk assessment of chemicals in aquatic systems. The MCDA aggregation methodology is described in detail and demonstrated through examples in the article, and the hierarchically structured framework used for the evaluation and classification of ecotoxicological data is briefly discussed. The methodology is demonstrated for the aquatic compartment but can easily be tailored to other environmental compartments (soil, air, sediments).

  4. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    NASA Astrophysics Data System (ADS)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    In many real-time fields a sustained high-speed data recording system is required. This paper proposes a high-speed, sustained data recording system based on a complex RAID 3+0 arrangement. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs) and a Main Controller Module (MCM). The ACM, implemented in an FPGA chip, splits the high-speed incoming data stream into several lower-speed streams and synchronously generates one parity-code stream; it can also inversely recover the original data stream while reading. The SCMs record the lower-speed streams from the ACM onto SCSI disk drives. In the SCM, dual-page buffer technology is adopted to implement the speed-matching function and satisfy the need for sustainable recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve system reliability and are compared with the Traditional Parity Group (TPG). With its high reliability, this recording system can be used conveniently in many areas of data recording, storage, playback and remote backup.
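The striping-with-parity idea described above (split one stream into lower-speed stripes plus one parity stripe, then rebuild a lost stripe on read) can be illustrated with byte-level XOR parity, as in RAID 3. This is a minimal sketch, not the paper's FPGA implementation; function names are illustrative.

```python
def stripe_with_parity(data: bytes, n_stripes: int):
    """Split a byte stream round-robin into n_stripes stripes plus one XOR parity stripe."""
    stripes = [bytearray() for _ in range(n_stripes)]
    parity = bytearray()
    # Pad so the stream divides evenly across the stripes
    pad = (-len(data)) % n_stripes
    data = data + b"\x00" * pad
    for i in range(0, len(data), n_stripes):
        chunk = data[i:i + n_stripes]
        p = 0
        for j, b in enumerate(chunk):
            stripes[j].append(b)  # byte j of each chunk goes to stripe j
            p ^= b                # running XOR gives the parity byte
        parity.append(p)
    return [bytes(s) for s in stripes], bytes(parity), pad

def recover_stripe(stripes, parity, lost_index):
    """Rebuild one missing stripe by XOR-ing the surviving stripes with the parity stripe."""
    rebuilt = bytearray()
    for k in range(len(parity)):
        p = parity[k]
        for j, s in enumerate(stripes):
            if j != lost_index:
                p ^= s[k]
        rebuilt.append(p)
    return bytes(rebuilt)
```

Interleaving the stripes byte by byte (and dropping the padding) reconstructs the original stream; XOR-ing the survivors with the parity stripe reconstructs any single lost stripe.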

  5. Facile and Reliable in Situ Polymerization of Poly(Ethyl Cyanoacrylate)-Based Polymer Electrolytes toward Flexible Lithium Batteries.

    PubMed

    Cui, Yanyan; Chai, Jingchao; Du, Huiping; Duan, Yulong; Xie, Guangwen; Liu, Zhihong; Cui, Guanglei

    2017-03-15

    Polycyanoacrylate is a very promising matrix for polymer electrolytes, possessing the advantages of strong binding and high electrochemical stability owing to its functional nitrile groups. Herein, a facile and reliable in situ polymerization strategy for poly(ethyl cyanoacrylate) (PECA) based gel polymer electrolytes (GPE), consisting of PECA and 4 M LiClO4 in carbonate solvents and prepared via highly efficient anionic polymerization, is introduced. The in situ polymerized PECA gel polymer electrolyte achieved an excellent ionic conductivity (2.7 × 10(-3) S cm(-1)) at room temperature and exhibited a considerable electrochemical stability window up to 4.8 V vs Li/Li(+). The LiFePO4/PECA-GPE/Li and LiNi1.5Mn0.5O4/PECA-GPE/Li batteries using this in-situ-polymerized GPE delivered stable charge/discharge profiles, considerable rate capability, and excellent cycling performance. These results demonstrate that this reliable in situ polymerization process is a very promising strategy for preparing high-performance polymer electrolytes for flexible thin-film batteries, micropower lithium batteries, and deformable lithium batteries for special purposes.

  6. Reliable Alignment in Total Knee Arthroplasty by the Use of an iPod-Based Navigation System

    PubMed Central

    Koenen, Paola; Schneider, Marco M.; Fröhlich, Matthias; Driessen, Arne; Bouillon, Bertil; Bäthis, Holger

    2016-01-01

    Axial alignment is one of the main objectives in total knee arthroplasty (TKA). Computer-assisted surgery (CAS) is more accurate regarding limb alignment reconstruction compared to the conventional technique. The aim of this study was to analyse the precision of the innovative navigation system DASH® by Brainlab and to evaluate the reliability of intraoperatively acquired data. A retrospective analysis of 40 patients was performed, who underwent CAS TKA using the iPod-based navigation system DASH. Pre- and postoperative axial alignment were measured on standardized radiographs by two independent observers. These data were compared with the navigation data. Furthermore, interobserver reliability was measured. The duration of surgery was monitored. The mean difference between the preoperative mechanical axis by X-ray and the first intraoperatively measured limb axis by the navigation system was 2.4°. The postoperative X-rays showed a mean difference of 1.3° compared to the final navigation measurement. According to radiographic measurements, 88% of arthroplasties had a postoperative limb axis within ±3°. The mean additional time needed for navigation was 5 minutes. We could prove very good precision for the DASH system, which is comparable to established navigation devices with only negligible expenditure of time compared to conventional TKA. PMID:27313898

  7. Reliable bearing fault diagnosis using Bayesian inference-based multi-class support vector machines.

    PubMed

    Islam, M M Manjurul; Kim, Jaeyoung; Khan, Sheraz A; Kim, Jong-Myon

    2017-02-01

    This letter presents a multi-fault diagnosis scheme for bearings using hybrid features extracted from their acoustic emissions and a Bayesian inference-based one-against-all support vector machine (Bayesian OAASVM) for multi-class classification. The standard OAASVM, a multi-class extension of the binary support vector machine, yields ambiguously labeled regions in the input space that degrade its classification performance. The proposed Bayesian OAASVM formulates the feature space as an appropriate Gaussian process prior, interprets the decision value of the OAASVM as a maximum a posteriori evidence function, and uses Bayesian inference to label unknown samples.

  8. Feasibility and reliability of classifying gross motor function among children with cerebral palsy using population-based record surveillance.

    PubMed

    Benedict, Ruth E; Patz, Jean; Maenner, Matthew J; Arneson, Carrie L; Yeargin-Allsopp, Marshalyn; Doernberg, Nancy S; Van Naarden Braun, Kim; Kirby, Russell S; Durkin, Maureen S

    2011-01-01

    For conditions with wide-ranging consequences, such as cerebral palsy (CP), population-based surveillance provides an estimate of the prevalence of case status but only the broadest understanding of the impact of the condition on children, families or society. Beyond case status, information regarding health, functional skills and participation is necessary to fully appreciate the consequences of the condition. The purpose of this study was to assess the feasibility and reliability of enhancing population-based surveillance by classifying gross motor function (GMF) from information available in medical records of children with CP. We assessed inter-rater reliability of two GMF classification methods, one the Gross Motor Function Classification System (GMFCS) and the other a 3-category classification of walking ability: (1) independently, (2) with handheld mobility device, or (3) limited or none. Two qualified clinicians independently reviewed abstracted evaluations from medical records of 8-year-old children residing in southeast Wisconsin, USA who were identified as having CP (n = 154) through the Centers for Disease Control and Prevention's Autism and Developmental Disabilities Monitoring Network. Ninety per cent (n = 138) of the children with CP had information in the record after age 4 years and 108 (70%) had adequate descriptions of gross motor skills to classify using the GMFCS. Agreement was achieved on 75.0% of the GMFCS ratings (simple kappa = 0.67, 95% confidence interval [95% CI 0.57, 0.78], weighted kappa = 0.83, [95% CI 0.77, 0.89]). Among case children for whom walking ability could be classified (n = 117), approximately half walked independently without devices and one-third had limited or no walking ability. Across walking ability categories, agreement was reached for 94% (simple kappa = 0.90, [95% CI 0.82, 0.96], weighted kappa = 0.94, [95% CI 0.89, 0.98]). 
Classifying GMF in the context of active records-based surveillance is feasible and reliable.
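The record above reports inter-rater agreement as simple and weighted kappa. A minimal sketch of both statistics for two raters over ordinal categories follows; the function name and linear weighting scheme are illustrative choices, not taken from the study's software.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b, categories=None, weighted=False):
    """Simple or linearly weighted Cohen's kappa for two raters."""
    if categories is None:
        categories = sorted(set(rater_a) | set(rater_b))
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(rater_a)

    # Agreement weight: 1 on the diagonal; linear weights fall off
    # with category distance when weighted=True.
    def w(i, j):
        if weighted and k > 1:
            return 1.0 - abs(i - j) / (k - 1)
        return float(i == j)

    pa, pb = Counter(rater_a), Counter(rater_b)
    # Observed (weighted) agreement over the paired ratings
    po = sum(w(idx[a], idx[b]) for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected (weighted) agreement from the marginal distributions
    pe = sum((pa[ci] / n) * (pb[cj] / n) * w(i, j)
             for i, ci in enumerate(categories)
             for j, cj in enumerate(categories))
    return (po - pe) / (1.0 - pe)
```

For two categories the linear-weighted kappa coincides with the simple kappa; with more ordinal categories (such as the five GMFCS levels) the weighted form penalizes near-miss disagreements less than distant ones.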

  9. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the minimum values of the statistics. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
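The approach above (fit a three-parameter Weibull by minimizing an EDF-based discrepancy statistic) can be sketched with the Kolmogorov-Smirnov statistic and a simple coordinate-descent stand-in for Powell's method. This is an illustrative sketch under those substitutions, not the paper's algorithm; all names are hypothetical.

```python
import math, random

def weibull_cdf(x, loc, scale, shape):
    """CDF of the three-parameter (location, scale, shape) Weibull distribution."""
    if x <= loc:
        return 0.0
    return 1.0 - math.exp(-((x - loc) / scale) ** shape)

def ks_statistic(data, loc, scale, shape):
    """Kolmogorov-Smirnov distance between the EDF of the data and the Weibull CDF."""
    data = sorted(data)
    n = len(data)
    d = 0.0
    for i, x in enumerate(data):
        f = weibull_cdf(x, loc, scale, shape)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def fit_weibull_ks(data, start=(0.0, 1.0, 1.0), rounds=200):
    """Minimize the KS statistic over (loc, scale, shape) by coordinate descent
    with step halving -- a crude stand-in for Powell's direction-set method."""
    params = list(start)
    step = [0.1, 0.1, 0.1]
    best = ks_statistic(data, *params)
    for _ in range(rounds):
        for j in range(3):
            for delta in (step[j], -step[j]):
                trial = params[:]
                trial[j] += delta
                if trial[1] <= 0 or trial[2] <= 0:  # scale and shape must stay positive
                    continue
                s = ks_statistic(data, *trial)
                if s < best:
                    best, params = s, trial
                    break
            else:
                step[j] *= 0.5  # no improvement along this coordinate: shrink the step
    return params, best
```

Swapping `ks_statistic` for an Anderson-Darling statistic changes only the objective function; the minimization loop is unchanged.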

  10. Reliability physics

    NASA Technical Reports Server (NTRS)

    Cuddihy, E. F.; Ross, R. G., Jr.

    1984-01-01

    Speakers whose topics relate to the reliability physics of solar arrays are listed and their topics briefly reviewed. Nine reports are reviewed ranging in subjects from studies of photothermal degradation in encapsulants and polymerizable ultraviolet stabilizers to interface bonding stability to electrochemical degradation of photovoltaic modules.

  11. Observer-based reliable stabilization of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances.

    PubMed

    Fan, Jinhua; Zhang, Youmin; Zheng, Zhiqiang

    2013-11-01

    A matrix inequality approach is proposed to reliably stabilize a class of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances. The system states are assumed immeasurable, and a classical observer is incorporated for observation to enable state-based feedback control. Both the stability and stabilization of the closed-loop system are discussed and the closed-loop domain of attraction is estimated by an ellipsoidal invariant set. The resultant stabilization conditions in the form of matrix inequalities enable simultaneous optimization of both the observer gain and the feedback controller gain, which is realized by converting the non-convex optimization problem to an unconstrained nonlinear programming problem. The effectiveness of proposed design techniques is demonstrated through a linearized model of F-18 HARV around an operating point.

  12. Performance and reliability of HfAlO x-based interpoly dielectrics for floating-gate Flash memory

    NASA Astrophysics Data System (ADS)

    Govoreanu, B.; Wellekens, D.; Haspeslagh, L.; Brunco, D. P.; De Vos, J.; Aguado, D. Ruiz; Blomme, P.; van der Zanden, K.; Van Houdt, J.

    2008-04-01

    This paper discusses the performance and reliability of aggressively scaled HfAlO x-based interpoly dielectric stacks in combination with high-workfunction metal gates for sub-45 nm non-volatile memory technologies. It is shown that an IPD stack with an EOT of less than 5 nm can provide a large program/erase (P/E) window while operating at moderate voltages, and has very good retention, with an extrapolated 10-year retention window of about 3 V at 150 °C. The impact of the process sequence and metal gate material is discussed. The viability of the material is considered in view of the demands of various Flash memory technologies, and directions for further improvement are discussed.

  13. Reliability of neuronal information conveyed by unreliable neuristor-based leaky integrate-and-fire neurons: a model study

    PubMed Central

    Lim, Hyungkwang; Kornijcuk, Vladimir; Seok, Jun Yeong; Kim, Seong Keun; Kim, Inho; Hwang, Cheol Seong; Jeong, Doo Seok

    2015-01-01

    We conducted simulations on the neuronal behavior of neuristor-based leaky integrate-and-fire (NLIF) neurons. The phase-plane analysis on the NLIF neuron highlights its spiking dynamics – determined by two nullclines conditional on the variables on the plane. Particular emphasis was placed on the operational noise arising from the variability of the threshold switching behavior in the neuron on each switching event. As a consequence, we found that the NLIF neuron exhibits a Poisson-like noise in spiking, delimiting the reliability of the information conveyed by individual NLIF neurons. To highlight neuronal information coding at a higher level, a population of noisy NLIF neurons was analyzed in regard to probability of successful information decoding given the Poisson-like noise of each neuron. The result demonstrates highly probable success in decoding in spite of large variability – due to the variability of the threshold switching behavior – of individual neurons. PMID:25966658
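The record above finds that a population of noisy NLIF neurons decodes reliably despite the unreliability of each neuron. One toy way to see how population size overcomes single-unit noise, assuming independent neurons and majority-vote decoding (an assumption of this sketch, not a claim about the paper's decoder), is a binomial calculation:

```python
from math import comb

def majority_decode_prob(p_single: float, n_neurons: int) -> float:
    """Probability that a majority of n independent neurons decodes correctly,
    given that each neuron alone succeeds with probability p_single (n odd)."""
    need = n_neurons // 2 + 1  # votes required for a majority
    return sum(comb(n_neurons, k) * p_single**k * (1 - p_single)**(n_neurons - k)
               for k in range(need, n_neurons + 1))
```

For any per-neuron reliability above chance, the population success probability exceeds the single-neuron value and grows with population size, which is the qualitative effect the record describes.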

  14. [Reliability of the individual age assessment at the time of death based on sternal rib end morphology in Balkan population].

    PubMed

    Donić, Danijela; Durić, Marija; Babić, Dragan; Popović, Dorde

    2005-06-01

    This paper analyzes the reliability of Iscan's sternal rib-end phase method for assessing individual age at the time of death in the Balkan population. The method is based on morphological age changes of the sternal rib ends. The tested sample consisted of 65 ribs from autopsy cases at the Institute for Forensic Medicine, University of Belgrade, during 1999-2002 (23 females and 42 males of various ages, ranging from 17 to 91 years), according to the forensic documents. Significant differences between the real chronological age of the individuals and the values established by Iscan's method were found, especially in the older categories (phases 6 and 7), in both males and females. The results of the discriminative analysis showed that the features of highest diagnostic relevance for the assessment of age in our population are the change in the depth of the articular fossa, the thickness of its walls, and the quality of the bone.

  15. Reliable gate stack and substrate parameter extraction based on C-V measurements for 14 nm node FDSOI technology

    NASA Astrophysics Data System (ADS)

    Mohamad, B.; Leroux, C.; Rideau, D.; Haond, M.; Reimbold, G.; Ghibaudo, G.

    2017-02-01

    Effective work function and equivalent oxide thickness are fundamental parameters for technology optimization. In this work, a comprehensive study is performed on a large set of FDSOI devices. The gate stack parameters are extracted by fitting experimental C-V characteristics to quantum simulations based on the self-consistent solution of the one-dimensional Poisson and Schrodinger equations. A reliable methodology for extracting the gate stack parameters is proposed and validated. This study distinguishes the process modules that directly impact the effective work function from those that only affect the device threshold voltage due to the device architecture. Moreover, the relative impacts of various process modules on channel thickness and gate oxide thickness are evidenced.

  16. Copper-based micro-channel cooler reliably operated using solutions of distilled-water and ethanol as a coolant

    NASA Astrophysics Data System (ADS)

    Chin, A. K.; Nelson, A.; Chin, R. H.; Bertaska, R.; Jacob, J. H.

    2015-03-01

    Copper-based micro-channel coolers (Cu-MCC) are the lowest thermal-resistance heat-sinks for high-power laser-diode (LD) bars. Presently, the resistivity, pH and oxygen content of the de-ionized water coolant must be actively controlled to minimize cooler failure by corrosion and electro-corrosion. Additionally, the water must be constantly exposed to ultraviolet radiation to limit the growth of micro-organisms that may clog the micro-channels. In this study, we report the reliable, care-free operation of LD bars attached to Cu-MCCs using a solution of distilled water and ethanol as the coolant. This coolant meets the storage requirements of Mil-Std 810G, e.g. exposure to a storage temperature as low as -51°C and no growth of micro-organisms during passive storage.

  17. Improving the reliability of road materials based on micronized sulfur composites

    NASA Astrophysics Data System (ADS)

    Abdrakhmanova, K. K.

    2015-01-01

    The work presents the results of a nano-structural modification of sulfur that prevents polymorphic transformations from degrading the properties of sulfur composites; the sulfur is present in a thermodynamically stable condition that precludes destruction in service. It has been established that the properties of sulfur-based composite materials can be significantly improved by modifying the sulfur and structuring the sulfur binder with nano-dispersed fiber particles and an ultra-dispersed filler. The paper shows that Tengiz sulfur can be modified by fragmentation, which structurally changes and stabilizes the structured sulfur through reinforcement by ultra-dispersed fiber particles, multiplying the phase contact area. Interaction between nano-dispersed chrysotile asbestos fibers and sulfur realizes the mechanical properties of the chrysotile asbestos tubes in the reinforced composite, provided that the tube surfaces are highly wetted by molten sulfur and there is high adhesion between the tubes and the matrix, which, in addition to sulfur, contains limestone microparticles. The ability of these materials to withstand severe operating conditions, including exposure to aggressive media and mechanical loads, makes the produced sulfur composites attractive to the road construction industry.

  18. Improved Membrane-Based Sensor Network for Reliable Gas Monitoring in the Subsurface

    PubMed Central

    Lazik, Detlef; Ebert, Sebastian

    2012-01-01

    A conceptually improved sensor network to monitor the partial pressure of CO2 in different soil horizons was designed. Consisting of five membrane-based linear sensors (line-sensors), each 10 m long, the set-up enables us to integrate over locally fluctuating CO2 concentrations (typically below 5%vol) up to the meter scale, gaining valuable concentration means with a repetition time of about 1 min. Preparatory tests in the laboratory yielded an unexpectedly high accuracy of better than 0.03%vol, compared with the previously published 0.08%vol. The statistical uncertainties (standard deviations) of the line-sensors and the reference sensor (nondispersive infrared CO2-sensor) were close to each other. Whereas the uncertainty of the reference increases with the measured value, the line-sensors show an inverse uncertainty trend, resulting in a comparatively enhanced accuracy for concentrations >1%vol. Furthermore, a method for in situ maintenance was developed, enabling verification of sensor quality and effective calibration without demounting the line-sensors from the soil, which would disturb the established structures and ongoing processes. PMID:23235447

  19. Effect of Inner Electrode on Reliability of (Zn,Mg)TiO3-Based Multilayer Ceramic Capacitor

    NASA Astrophysics Data System (ADS)

    Lee, Wen‑His; Su, Chi‑Yi; Lee, Ying Chieh; Yang, Jackey; Yang, Tong; PinLin, Shih

    2006-07-01

    In this study, different proportions of silver-palladium alloy acting as the inner electrode were adopted in a (Zn,Mg)TiO3-based multilayer ceramic capacitor (MLCC) sintered at 925 °C for 2 h to evaluate the effect of the inner electrode on reliability. The main results show that the lifetime is inversely proportional to the Ag content in the Pd/Ag inner electrode. Ag+ diffusion into the (Zn,Mg)TiO3-based MLCC during cofiring at 925 °C for 2 h and Ag+ migration at 140 °C under 200 V are both responsible for the short lifetime of the (Zn,Mg)TiO3-based MLCC, particularly the latter factor. A (Zn,Mg)TiO3-based MLCC with high Ag content in the inner electrode (Ag/Pd = 99/01) exhibits the shortest lifetime (13 h), and the effect of Ag+ migration is markedly enhanced when the activation energy of the (Zn,Mg)TiO3 dielectric is greatly lowered due to the excessive formation of oxygen vacancies and the semiconducting Zn2TiO4 phase when Ag+ substitutes for Zn2+ during co-firing.

  20. Reliability of information-based integration of EEG and fMRI data: a simulation study.

    PubMed

    Assecondi, Sara; Ostwald, Dirk; Bagshaw, Andrew P

    2015-02-01

    Most studies involving simultaneous electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) data rely on the first-order, affine-linear correlation of EEG and fMRI features within the framework of the general linear model. An alternative is the use of information-based measures such as mutual information and entropy, which can also detect higher-order correlations present in the data. The estimate of information-theoretic quantities might be influenced by several parameters, such as the numerosity of the sample, the amount of correlation between variables, and the discretization (or binning) strategy of choice. While these issues have been investigated for invasive neurophysiological data and a number of bias-correction estimates have been developed, there has been no attempt to systematically examine the accuracy of information estimates for the multivariate distributions arising in the context of EEG-fMRI recordings. This is especially important given the differences between electrophysiological and EEG-fMRI recordings. In this study, we drew random samples from simulated bivariate and trivariate distributions, mimicking the statistical properties of EEG-fMRI data. We compared the estimated information shared by simulated random variables with its numerical value and found that the interaction between the binning strategy and the estimation method influences the accuracy of the estimate. Conditional on the simulation assumptions, we found that the equipopulated binning strategy yields the best and most consistent results across distributions and bias correction methods. We also found that within bias correction techniques, the asymptotically debiased (TPMC), the jackknife debiased (JD), and the best upper bound (BUB) approach give similar results, and those are consistent across distributions.
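The study above compares binning strategies for information estimates and favors equipopulated bins. A minimal plug-in estimator of mutual information with equipopulated (quantile) binning can be sketched as follows; it omits the bias corrections (TPMC, JD, BUB) the study evaluates, and all names are illustrative.

```python
import math
from collections import Counter

def equipopulated_bins(values, n_bins):
    """Assign each value a bin index so that bins hold (nearly) equal counts."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    bins = [0] * len(values)
    for rank, i in enumerate(order):
        bins[i] = min(rank * n_bins // len(values), n_bins - 1)
    return bins

def mutual_information(x, y, n_bins=8):
    """Plug-in mutual information estimate (in bits) after equipopulated binning."""
    bx, by = equipopulated_bins(x, n_bins), equipopulated_bins(y, n_bins)
    n = len(x)
    pxy = Counter(zip(bx, by))        # joint histogram counts
    px, py = Counter(bx), Counter(by)  # marginal histogram counts
    # MI = sum p(i,j) * log2( p(i,j) / (p(i) p(j)) ), with p = count / n
    return sum((c / n) * math.log2(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())
```

Because the bins are defined by ranks, the marginals are close to uniform regardless of the underlying distributions, which is one reason this strategy behaves consistently across the simulated distributions in the study.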

  1. The rating reliability calculator

    PubMed Central

    Solomon, David J

    2004-01-01

    Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean of the number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program. PMID:15117416
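The Spearman-Brown prophecy formula mentioned in the record predicts the reliability of an average of n ratings from the reliability of a single rating, r_k = k r / (1 + (k - 1) r). A one-function sketch (the function name is illustrative, not from the described utility):

```python
def spearman_brown(single_rating_reliability: float, n_ratings: int) -> float:
    """Spearman-Brown prophecy formula: predicted reliability of the mean of
    n_ratings ratings, given the reliability of a single rating."""
    r = single_rating_reliability
    return n_ratings * r / (1 + (n_ratings - 1) * r)
```

For example, averaging three judges whose single-rater reliability is 0.5 yields a predicted reliability of 0.75, and the predicted value increases monotonically with the number of ratings averaged.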

  2. New Flexible Silicone-Based EEG Dry Sensor Material Compositions Exhibiting Improvements in Lifespan, Conductivity, and Reliability

    PubMed Central

    Yu, Yi-Hsin; Chen, Shih-Hsun; Chang, Che-Lun; Lin, Chin-Teng; Hairston, W. David; Mrozek, Randy A.

    2016-01-01

    This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve the performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors’ construction, including particle pull-out and ablation of the silver coating on the silica filler. The newly-developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions demonstrating improvements in the environmental stability. Fabricated flat (forehead) and acicular (hairy sites) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; some initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications. PMID:27809260

  3. Approach for the use of MSW settlement predictions in the assessment of landfill capacity based on reliability analysis.

    PubMed

    Sivakumar Babu, G L; Chouksey, Sandeep Kumar; Reddy, Krishna R

    2013-10-01

    In the analysis and design of municipal solid waste (MSW) landfills, there are many uncertainties associated with the properties of MSW during and after MSW placement. Several studies involving different laboratory and field tests have been performed to understand the complex behavior and properties of MSW, and based on these studies, different models have been proposed for the analysis of the time-dependent settlement response of MSW. For the analysis of MSW settlement, it is very important to account for the variability of model parameters that reflect different processes such as primary compression under loading, mechanical creep and biodegradation. In this paper, regression equations based on the response surface method (RSM) are used to represent the complex behavior of MSW using a newly developed constitutive model. An approach to assess landfill capacities and develop landfill closure plans based on predictions of landfill settlement is proposed. The variability associated with model parameters relating to primary compression, mechanical creep and biodegradation is used to examine their influence on MSW settlement within a reliability analysis framework, and the influence of the various parameters on the settlement of MSW is estimated through sensitivity analysis.

  4. A stochastic simulation-optimization approach for estimating highly reliable soil tension threshold values in sensor-based deficit irrigation

    NASA Astrophysics Data System (ADS)

    Kloss, S.; Schütze, N.; Walser, S.; Grundmann, J.

    2012-04-01

    In arid and semi-arid regions where water is scarce, farmers rely heavily on irrigation in order to grow crops and produce agricultural commodities. The variable and often severely limited water supply thereby poses a serious challenge for farmers and demands sophisticated irrigation strategies that allow efficient management of the available water resources. The general aim is to increase water productivity (WP), and one strategy to achieve this goal is controlled deficit irrigation (CDI). One way to realize CDI is to define soil-water-status-specific threshold values (either in soil tension or moisture) at which irrigation cycles are triggered. When utilizing CDI, irrigation control is of utmost importance, and yet thresholds are often chosen by trial and error and are thus unreliable. Hence, for CDI to be effective, systematic investigations for deriving reliable threshold values that account for different CDI strategies are needed. In this contribution, a method is presented that uses a simulation-based stochastic approach for estimating threshold values with high reliability. The approach consists of a weather generator offering statistical significance to site-specific climate series, an optimization algorithm that determines optimal threshold values under limited water supply, and a crop model for simulating plant growth and water consumption. The study focuses on threshold values of soil tension for different CDI strategies. The advantage of soil-tension-based threshold values over soil-moisture-based ones lies in their universal, soil-type-independent applicability. The investigated CDI strategies comprised schedules of constant threshold values, crop-development-stage-dependent threshold values, and different minimum irrigation intervals. For practical reasons, fixed irrigation schedules were tested as well. Additionally, a full irrigation schedule served as reference. The obtained threshold values were then tested in field

  5. Reproducibility, reliability and validity of population-based administrative health data for the assessment of cancer non-related comorbidities

    PubMed Central

    Fowler, Helen

    2017-01-01

    Background Patients with comorbidities do not receive optimal treatment for their cancer, leading to lower cancer survival. Information on individual comorbidities is not straightforward to derive from population-based administrative health datasets. We described the development of a reproducible algorithm to extract the individual Charlson index comorbidities from such data. We illustrated the algorithm with 1,789 laryngeal cancer patients diagnosed in England in 2013. We aimed to clearly set out and advocate the time-related assumptions specified in the algorithm by providing empirical evidence for them. Methods Comorbidities were assessed from hospital records in the ten years preceding cancer diagnosis and internal reliability of the hospital records was checked. Data were right-truncated 6 or 12 months prior to cancer diagnosis to avoid inclusion of potentially cancer-related comorbidities. We tested for collider bias using Cox regression. Results Our administrative data showed weak to moderate internal reliability to identify comorbidities (ICC ranging between 0.1 and 0.6) but a notably high external validity (86.3%). We showed a reverse protective effect of non-cancer related Chronic Obstructive Pulmonary Disease (COPD) when the effect is split into cancer and non-cancer related COPD (Age-adjusted HR: 0.95, 95% CI:0.7–1.28 for non-cancer related comorbidities). Furthermore, we showed that a window of 6 years before diagnosis is an optimal period for the assessment of comorbidities. Conclusion To formulate a robust approach for assessing common comorbidities, it is important that assumptions made are explicitly stated and empirically proven. We provide a transparent and consistent approach useful to researchers looking to assess comorbidities for cancer patients using administrative health data. PMID:28263996
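The lookback-and-truncation logic described here can be sketched as follows; the ICD-10 prefixes, record format and function name are hypothetical stand-ins for the validated code lists used in the actual algorithm:

```python
from datetime import date, timedelta

# Hypothetical ICD-10 prefixes for two Charlson conditions; real algorithms
# use much larger, validated code lists.
CODE_PREFIXES = {"copd": ("J44",), "diabetes": ("E10", "E11")}

def flag_comorbidities(records, diagnosis_date, lookback_years=10, truncation_months=6):
    """Flag comorbidities recorded in the lookback window, right-truncated
    before the cancer diagnosis to exclude potentially cancer-related codes."""
    window_end = diagnosis_date - timedelta(days=30 * truncation_months)
    window_start = diagnosis_date - timedelta(days=365 * lookback_years)
    found = set()
    for admission_date, icd_code in records:
        if window_start <= admission_date <= window_end:
            for condition, prefixes in CODE_PREFIXES.items():
                if icd_code.startswith(prefixes):
                    found.add(condition)
    return found

records = [(date(2010, 3, 1), "J440"),   # COPD, well before diagnosis
           (date(2012, 11, 1), "E119")]  # diabetes, inside the truncation window
print(flag_comorbidities(records, diagnosis_date=date(2013, 1, 15)))  # → {'copd'}
```

The diabetes code is dropped because it falls inside the 6-month truncation window, mirroring the paper's rationale for excluding potentially cancer-related comorbidities.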

  6. Development of a novel location-based assessment of sensory symptoms in cancer patients: preliminary reliability and validity assessment.

    PubMed

    Burkey, Adam R; Kanetsky, Peter A

    2009-05-01

    We report on the development of a novel location-based assessment of sensory symptoms in cancer (L-BASIC) instrument, and its initial estimates of reliability and validity. L-BASIC is structured so that patients provide a numeric score and an adjectival description for any sensory symptom, including both pain and neuropathic sensations, present in each of the 10 predefined body areas. Ninety-seven patients completed the baseline questionnaire; 39 completed the questionnaire on two occasions. A mean of 3.5 body parts was scored per patient. On average, 2.7 (of 11) descriptor categories were used per body part. There was good internal consistency (Cronbach's alpha=0.74) for a four-item scale that combined location-specific metrics. Temporal stability was adequate (kappa>0.50 and r>0.60 for categorical and continuous variables, respectively) among patients without observed or reported subjective change in clinical status between L-BASIC administrations. We compared our four-item scale against scores obtained from validated pain and quality-of-life (QOL) scales, and as expected, correlations were higher for pain-related items than for QOL-related items. We detected differences in L-BASIC responses among patients with cancer-related head or neck pain, chemotherapy-related neuropathy and breast cancer-related lymphedema. We conclude that L-BASIC provides internally consistent and temporally stable responses, while acknowledging that further refinement and testing of this novel instrument are necessary. We anticipate that future versions of L-BASIC will provide reliable and valid syndrome-specific measurement of defined clinical pain and symptom constructs in the cancer population, which may be of particular value in assessing treatment response in patients with such multiple complaints.

  7. Tutorial on use of intraclass correlation coefficients for assessing intertest reliability and its application in functional near-infrared spectroscopy-based brain imaging

    NASA Astrophysics Data System (ADS)

    Li, Lin; Zeng, Li; Lin, Zi-Jing; Cazzell, Mary; Liu, Hanli

    2015-05-01

Test-retest reliability of neuroimaging measurements is an important concern in the investigation of cognitive functions in the human brain. To date, intraclass correlation coefficients (ICCs), originally used in inter-rater reliability studies in behavioral sciences, have become commonly used metrics in reliability studies on neuroimaging and functional near-infrared spectroscopy (fNIRS). However, as there are six popular forms of ICC, a thorough understanding of them is needed to select, use, and interpret ICCs appropriately in a reliability study. We first offer a brief review and tutorial on the statistical rationale of ICCs, including their underlying analysis of variance models and technical definitions, in the context of assessing intertest reliability. Second, we provide general guidelines on the selection and interpretation of ICCs. Third, we illustrate the proposed approach by using an actual research study to assess intertest reliability of fNIRS-based, volumetric diffuse optical tomography of brain activities stimulated by a risk decision-making protocol. Last, special issues that may arise in reliability assessment using ICCs are discussed and solutions are suggested.
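As a concrete companion to the tutorial, one widely used form, ICC(2,1) (two-way random effects, absolute agreement, single measurement), can be computed directly from the ANOVA mean squares; this is a generic textbook formulation, not code from the study:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of subjects, each a list of k rater/session scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    rater_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in rater_means)   # between raters
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect test-retest agreement yields ICC = 1.0.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

A systematic session offset (every retest score shifted upward) lowers ICC(2,1) but not consistency-type forms such as ICC(3,1), which is the kind of distinction the tutorial's selection guidelines address.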

  8. A dye-assisted paper-based point-of-care assay for fast and reliable blood grouping.

    PubMed

    Zhang, Hong; Qiu, Xiaopei; Zou, Yurui; Ye, Yanyao; Qi, Chao; Zou, Lingyun; Yang, Xiang; Yang, Ke; Zhu, Yuanfeng; Yang, Yongjun; Zhou, Yang; Luo, Yang

    2017-03-15

    Fast and simultaneous forward and reverse blood grouping has long remained elusive. Forward blood grouping detects antigens on red blood cells, whereas reverse grouping identifies specific antibodies present in plasma. We developed a paper-based assay using immobilized antibodies and bromocresol green dye for rapid and reliable blood grouping, where dye-assisted color changes corresponding to distinct blood components provide a visual readout. ABO antigens and five major Rhesus antigens could be detected within 30 s, and simultaneous forward and reverse ABO blood grouping using small volumes (100 μl) of whole blood was achieved within 2 min through on-chip plasma separation without centrifugation. A machine-learning method was developed to classify the spectral plots corresponding to dye-based color changes, which enabled reproducible automatic grouping. Using optimized operating parameters, the dye-assisted paper assay exhibited comparable accuracy and reproducibility to the classical gel-card assays in grouping 3550 human blood samples. When translated to the assembly line and low-cost manufacturing, the proposed approach may be developed into a cost-effective and robust universal blood-grouping platform.

  9. Network reliability

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1985-01-01

    Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relative information is presented here in outline form.

  10. A GIS-based assessment of the suitability of SCIAMACHY satellite sensor measurements for estimating reliable CO concentrations in a low-latitude climate.

    PubMed

    Fagbeja, Mofoluso A; Hill, Jennifer L; Chatterton, Tim J; Longhurst, James W S

    2015-02-01

An assessment was conducted of the reliability of Scanning Imaging Absorption Spectrometer for Atmospheric Cartography (SCIAMACHY) satellite sensor measurements for interpolating tropospheric concentrations of carbon monoxide in the low-latitude climate of the Niger Delta region in Nigeria. Monthly SCIAMACHY carbon monoxide (CO) column measurements from January 2003 to December 2005 were interpolated using the ordinary kriging technique. The spatio-temporal variations observed in the reliability were attributed to proximity to the Atlantic Ocean, seasonal variations in the intensities of rainfall and relative humidity, the presence of dust particles from the Sahara desert, industrialization in Southwest Nigeria and biomass burning during the dry season in Northern Nigeria. Spatial reliabilities of 74 and 42% were observed for the inland and coastal areas, respectively. Temporally, average reliabilities of 61 and 55% occurred during the dry and wet seasons, respectively. Reliability in the inland and coastal areas was 72 and 38% during the wet season, and 75 and 46% during the dry season, respectively. Based on these results, the WFM-DOAS SCIAMACHY CO data product used for this study is relevant to the assessment of CO concentrations in low-latitude developing countries that cannot afford ground-based monitoring infrastructure. Although the SCIAMACHY sensor is no longer available, it provided cost-effective, reliable and accessible data that could support air quality assessment in developing countries.
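Ordinary kriging as used here can be sketched in a few lines; the linear variogram and toy sample points below are illustrative assumptions (the study would have fitted a variogram to the CO columns):

```python
from math import hypot

def solve(a, b):
    """Gaussian elimination with partial pivoting on copies of a, b."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def ordinary_kriging(samples, x0, y0, gamma=lambda h: h):
    """Ordinary kriging estimate at (x0, y0) from (x, y, value) samples,
    using a simple linear variogram gamma(h) = h for illustration."""
    n = len(samples)
    a = [[gamma(hypot(samples[i][0] - samples[j][0], samples[i][1] - samples[j][1]))
          for j in range(n)] + [1.0] for i in range(n)]
    a.append([1.0] * n + [0.0])  # unbiasedness constraint: weights sum to 1
    b = [gamma(hypot(sx - x0, sy - y0)) for sx, sy, _ in samples] + [1.0]
    w = solve(a, b)[:n]  # drop the Lagrange multiplier
    return sum(wi * z for wi, (_, _, z) in zip(w, samples))

samples = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 3.0)]
print(ordinary_kriging(samples, 0, 0))  # exact interpolator: reproduces 1.0
```

Kriging is an exact interpolator: predicting at a sample location returns the sample value, which is a convenient sanity check for any implementation.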

  11. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction.

    PubMed

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-05-15

Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Optimal decision-making modeling in watershed load reduction therefore faces the following challenges: (a) it is difficult to obtain absolutely "optimal" solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with a multi-objective evolutionary algorithm, yielding schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insight into reliability tradeoffs and can thus help them avoid ineffective decisions.
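Extracting the set of non-dominated schemes, the core of any Pareto front such as ROMO's, can be sketched generically; the objective values below are made up, and both objectives are cast as minimizations (cost, and unreliability = 1 − reliability):

```python
def dominates(a, b):
    """True if solution a dominates b: no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Objectives: (cost, 1 - reliability), both minimized; values illustrative.
schemes = [(10, 0.40), (12, 0.25), (15, 0.25), (20, 0.10)]
print(pareto_front(schemes))  # → [(10, 0.4), (12, 0.25), (20, 0.1)]
```

The scheme (15, 0.25) is dropped because (12, 0.25) achieves the same unreliability at lower cost; the surviving front is exactly the cost-versus-reliability tradeoff curve decision makers inspect.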

  12. An integrative C. elegans protein-protein interaction network with reliability assessment based on a probabilistic graphical model.

    PubMed

    Huang, Xiao-Tai; Zhu, Yuan; Chan, Leanne Lai Hang; Zhao, Zhongying; Yan, Hong

    2016-01-01

In Caenorhabditis elegans, a large number of protein-protein interactions (PPIs) have been identified by different experiments. However, a comprehensive weighted PPI network, which is essential for signaling pathway inference, is not yet available in this model organism. Therefore, we first construct an integrative PPI network in C. elegans with 12,951 interactions involving 5039 proteins from seven molecular interaction databases. Then, a reliability score based on a probabilistic graphical model (RSPGM) is proposed to assess PPIs. It assumes that the random number of interactions between two proteins comes from the Bernoulli distribution to avoid multi-links. The main parameter of the RSPGM score contains a few latent variables which can be considered as common properties of the two proteins. Validations on high-confidence yeast datasets show that RSPGM provides more accurate evaluation than other approaches, and the PPIs in the reconstructed PPI network have higher biological relevance than those in the original network in terms of gene ontology, gene expression, essentiality and the prediction of known protein complexes. Furthermore, this weighted integrative PPI network in C. elegans is also employed to infer the interaction path of the canonical Wnt/β-catenin pathway. Most genes on the inferred interaction path have been validated to be Wnt pathway components. Therefore, RSPGM is essential and effective for evaluating PPIs and inferring interaction paths. Finally, the PPI network with RSPGM scores can be queried and visualized on a user-interactive website, which is freely available at .

  13. Reliability study of AlN-driven microcantilevers based on interferometric measurements of their static and dynamic behaviours

    NASA Astrophysics Data System (ADS)

    Gorecki, Christophe; Krupa, Katarzyna; Józwicki, Romuald; Jozwik, Michal

    2010-08-01

Microelectromechanical systems (MEMS) are exposed to a variety of environmental conditions, making the prediction of operational reliability difficult. In this contribution, we investigate the environmental effects on the static and dynamic properties of piezoelectrically actuated MEMS microcantilevers in which aluminium nitride (AlN) is used as the actuation material. The environmental effects considered include thermal and humidity cycling, as well as harsh electrical loading performed under standard weather conditions. The investigated properties cover the static behaviour (i.e. initial deflection, out-of-plane displacement vs. constant voltage) and dynamic behaviour (i.e. first resonance frequency, vibration amplitude at the first resonance mode) of the AlN-based microcantilevers. The metrology tool is a Twyman-Green interferometer operating in both stroboscopic and time-average interferometry modes. The initial deflection and the frequency changes of the first resonance mode of the microcantilevers are monitored during accelerated thermal aging, humidity tests, harsh electrical loading and fatigue tests. Finally, resonant fatigue tests, accelerated by application of a high voltage, are performed to evaluate the lifetime of the microcantilevers. For constant voltages higher than 15 V, delamination of the top electrode of the AlN transducer is observed.

  14. Reliability analysis of charge plasma based double material gate oxide (DMGO) SiGe-on-insulator (SGOI) MOSFET

    NASA Astrophysics Data System (ADS)

    Pradhan, K. P.; Sahu, P. K.; Singh, D.; Artola, L.; Mohapatra, S. K.

    2015-09-01

A novel device named the charge plasma based dopingless double material gate oxide (DMGO) silicon-germanium-on-insulator (SGOI) double gate (DG) MOSFET is proposed for the first time. The fundamental objective of this work is to modify the channel potential, electric field and electron velocity in order to improve leakage current, transconductance (gm) and transconductance generation factor (TGF). Using 2-D simulation, we show that the DMGO-SGOI MOSFET exhibits higher electron velocity at the source side and lower electric field at the drain side as compared to the ultra-thin body (UTB) DG MOSFET. The DMGO-SGOI MOSFET also demonstrates a significant improvement in gm and TGF in comparison to the UTB-DG MOSFET. This work further establishes the existence of a biasing point, i.e., the zero temperature coefficient (ZTC) bias point, at which the device parameters become independent of temperature. The impact of operating temperature (T) on the aforementioned performance metrics is also subjected to extensive analysis. This further validates the reliability of the charge plasma DMGO SGOI MOSFET and its application opportunities in designing analog/RF circuits over a wide temperature range.

  15. A Microstructure-Based Time-Dependent Crack Growth Model for Life and Reliability Prediction of Turbopropulsion Systems

    NASA Astrophysics Data System (ADS)

    Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.

    2014-01-01

The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔK_th or K_th) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
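The "simple summation of crack increments over a mission" can be sketched as below; the Paris-type cyclic term, the power-law dwell term, and all constants are illustrative assumptions, not the calibrated IN 718 models:

```python
def mission_crack_growth(a0, segments, c_cyc=1e-11, m=3.0, c_time=1e-9, q=2.0):
    """Sum cycle-dependent (Paris-type) and time-dependent crack increments
    over mission segments of (delta_k [MPa*sqrt(m)], cycles, k_max, dwell_s).
    All constants are illustrative, not calibrated material values."""
    a = a0
    for delta_k, cycles, k_max, dwell in segments:
        a += c_cyc * delta_k ** m * cycles  # fatigue (cycle-dependent) contribution
        a += c_time * k_max ** q * dwell    # dwell/TDCG (time-dependent) contribution
    return a

segments = [(30.0, 1000, 35.0, 0.0),   # takeoff/landing cycles, no dwell
            (15.0, 0, 25.0, 3600.0)]   # cruise dwell at sustained K
print(mission_crack_growth(0.001, segments))  # crack size in m after the mission
```

The point of the summation form is that each mission segment contributes independently, so cyclic and dwell damage can be accumulated in one pass without coupling terms.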

  16. Recent progress in high performance and reliable n-type transition metal oxide-based thin film transistors

    NASA Astrophysics Data System (ADS)

    Kwon, Jang Yeon; Kyeong Jeong, Jae

    2015-02-01

    This review gives an overview of the recent progress in vacuum-based n-type transition metal oxide (TMO) thin film transistors (TFTs). Several excellent review papers regarding metal oxide TFTs in terms of fundamental electron structure, device process and reliability have been published. In particular, the required field-effect mobility of TMO TFTs has been increasing rapidly to meet the demands of the ultra-high-resolution, large panel size and three dimensional visual effects as a megatrend of flat panel displays, such as liquid crystal displays, organic light emitting diodes and flexible displays. In this regard, the effects of the TMO composition on the performance of the resulting oxide TFTs has been reviewed, and classified into binary, ternary and quaternary composition systems. In addition, the new strategic approaches including zinc oxynitride materials, double channel structures, and composite structures have been proposed recently, and were not covered in detail in previous review papers. Special attention is given to the advanced device architecture of TMO TFTs, such as back-channel-etch and self-aligned coplanar structure, which is a key technology because of their advantages including low cost fabrication, high driving speed and unwanted visual artifact-free high quality imaging. The integration process and related issues, such as etching, post treatment, low ohmic contact and Cu interconnection, required for realizing these advanced architectures are also discussed.

  17. Empirically based comparisons of the reliability and validity of common quantification approaches for eyeblink startle potentiation in humans.

    PubMed

    Bradford, Daniel E; Starr, Mark J; Shackman, Alexander J; Curtin, John J

    2015-12-01

    Startle potentiation is a well-validated translational measure of negative affect. Startle potentiation is widely used in clinical and affective science, and there are multiple approaches for its quantification. The three most commonly used approaches quantify startle potentiation as the increase in startle response from a neutral to threat condition based on (1) raw potentiation, (2) standardized potentiation, or (3) percent-change potentiation. These three quantification approaches may yield qualitatively different conclusions about effects of independent variables (IVs) on affect when within- or between-group differences exist for startle response in the neutral condition. Accordingly, we directly compared these quantification approaches in a shock-threat task using four IVs known to influence startle response in the no-threat condition: probe intensity, time (i.e., habituation), alcohol administration, and individual differences in general startle reactivity measured at baseline. We confirmed the expected effects of time, alcohol, and general startle reactivity on affect using self-reported fear/anxiety as a criterion. The percent-change approach displayed apparent artifact across all four IVs, which raises substantial concerns about its validity. Both raw and standardized potentiation approaches were stable across probe intensity and time, which supports their validity. However, only raw potentiation displayed effects that were consistent with a priori specifications and/or the self-report criterion for the effects of alcohol and general startle reactivity. Supplemental analyses of reliability and validity for each approach provided additional evidence in support of raw potentiation.

  18. Curriculum-based measurement of oral reading: A preliminary investigation of confidence interval overlap to detect reliable growth.

    PubMed

    Van Norman, Ethan R

    2016-09-01

Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, the decision-making frameworks originally developed for CBM-R progress monitoring data were not intended for such high-stakes assessments. Numerous documented issues with trend line estimation undermine the validity of using slope estimates to infer progress. One proposed recommendation is to use confidence interval overlap as a means of judging reliable growth. This project explored the degree to which confidence interval overlap was related to true growth magnitude using simulation methodology. True and observed CBM-R scores were generated across 7 durations of data collection (range 6-18 weeks), 3 levels of dataset quality or residual variance (5, 10, and 15 words read correct per minute) and 2 types of data collection schedules. Descriptive and inferential analyses were conducted to explore interactions between overlap status, progress monitoring scenarios, and true growth magnitude. A small but statistically significant interaction was observed between overlap status, duration, and dataset quality, b = -0.004, t(20992) = -7.96, p < .001. In general, confidence interval overlap does not appear to meaningfully account for variance in true growth across many progress monitoring conditions. Implications for research and practice are discussed, and limitations and directions for future research are addressed.
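The confidence-interval-overlap judgment studied here can be sketched for two weekly CBM-R series; the hardcoded t critical value and the synthetic scores are assumptions for illustration (a full implementation would compute the critical value from the t distribution):

```python
from math import sqrt

T_CRIT_975 = {8: 2.306}  # two-tailed 95% t critical values by df; extend as needed

def slope_ci(xs, ys):
    """OLS slope of ys on xs with a 95% confidence interval."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se = sqrt(sse / (n - 2)) / sqrt(sxx)
    t = T_CRIT_975[n - 2]
    return b - t * se, b + t * se

def intervals_overlap(ci1, ci2):
    return ci1[0] <= ci2[1] and ci2[0] <= ci1[1]

weeks = list(range(10))
noise = [0.5 if w % 2 == 0 else -0.5 for w in weeks]  # deterministic "measurement error"
phase1 = [10 + 1.0 * w + e for w, e in zip(weeks, noise)]  # ~1 word/week growth
phase2 = [20 + 3.0 * w + e for w, e in zip(weeks, noise)]  # ~3 words/week growth
print(intervals_overlap(slope_ci(weeks, phase1), slope_ci(weeks, phase2)))  # → False
```

With clearly different true slopes and low residual variance the intervals separate; the article's simulation shows that under realistic residual variance this overlap verdict tracks true growth only weakly.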

  19. Modifications to the AOAC use-dilution test for quaternary ammonium compound-based disinfectants that significantly improve method reliability.

    PubMed

    Arlea, Crystal; King, Sharon; Bennie, Barbara; Kemp, Kere; Mertz, Erin; Staub, Richard

    2008-01-01

    The AOAC use-dilution test (UDT) for bactericidal disinfectant efficacy (Method 964.02) has often been criticized for its extreme variability in test results, particularly for quaternary ammonium compound (QAC)-based disinfectants against Pseudomonas aeruginosa. While efforts are under way to develop a new and better test method for hospital disinfectant products that is globally acceptable, U.S. manufacturers and formulators of QAC products must continue in the interim to measure their product performance against the current UDT method. Therefore, continued variability in the UDT places an unnecessary and unfair burden on U.S. QAC product manufacturers to ensure that their products perform against an, at best, unreliable test method. This article reports on evaluations that were conducted to attempt to identify key sources of UDT method variability and to find ways to mitigate their impact on test outcomes for the method. The results of testing across 4 laboratories, involving over 6015 carriers, determined that operator error was a key factor in test variability. This variability was found to be significantly minimized by the inclusion of a simple culture dilution step. The findings from this study suggest possible refinements to the current AOAC UDT method that would serve to improve the overall ruggedness and reliability of the method and to optimize recovery of cells from the carrier surface, thereby further improving the accuracy and reproducibility of counts and test outcomes until such time as a replacement method is implemented.

  20. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation

    PubMed Central

    Capoferri, Luigi; Verkade-Vreeker, Marlies C. A.; Buitenhuis, Danny; Commandeur, Jan N. M.; Pastor, Manuel; Vermeulen, Nico P. E.; Geerke, Daan P.

    2015-01-01

Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of which isoforms are prone to interact with a given substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory, to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with a SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units). PMID:26551865

  1. Physics-Based Stress Corrosion Cracking Component Reliability Model cast in an R7-Compatible Cumulative Damage Framework

    SciTech Connect

    Unwin, Stephen D.; Lowry, Peter P.; Layton, Robert F.; Toloczko, Mychailo B.; Johnson, Kenneth I.; Sanborn, Scott E.

    2011-07-01

This is a working report drafted under the Risk-Informed Safety Margin Characterization pathway of the Light Water Reactor Sustainability Program, describing statistical models of passive component reliability.

  2. Trunk-acceleration based assessment of gait parameters in older persons: a comparison of reliability and validity of four inverted pendulum based estimations.

    PubMed

    Zijlstra, Agnes; Zijlstra, Wiebren

    2013-09-01

    Inverted pendulum (IP) models of human walking allow for wearable motion-sensor based estimations of spatio-temporal gait parameters during unconstrained walking in daily-life conditions. At present it is unclear to what extent different IP based estimations yield different results, and reliability and validity have not been investigated in older persons without a specific medical condition. The aim of this study was to compare reliability and validity of four different IP based estimations of mean step length in independent-living older persons. Participants were assessed twice and walked at different speeds while wearing a tri-axial accelerometer at the lower back. For all step-length estimators, test-retest intra-class correlations approached or were above 0.90. Intra-class correlations with reference step length were above 0.92 with a mean error of 0.0 cm when (1) multiplying the estimated center-of-mass displacement during a step by an individual correction factor in a simple IP model, or (2) adding an individual constant for bipedal stance displacement to the estimated displacement during single stance in a 2-phase IP model. When applying generic corrections or constants in all subjects (i.e. multiplication by 1.25, or adding 75% of foot length), correlations were above 0.75 with a mean error of respectively 2.0 and 1.2 cm. Although the results indicate that an individual adjustment of the IP models provides better estimations of mean step length, the ease of a generic adjustment can be favored when merely evaluating intra-individual differences. Further studies should determine the validity of these IP based estimations for assessing gait in daily life.
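The simple-IP estimation with a generic correction can be sketched from the pendulum geometry: the centre of mass moves on an arc of radius equal to leg length l, so a vertical excursion h implies a horizontal displacement of 2·sqrt(2lh − h²). The 1.25 factor follows the abstract's generic variant; the input values below are made up:

```python
from math import sqrt

def ip_step_length(leg_length_m, com_vertical_excursion_m, correction=1.25):
    """Step length from inverted-pendulum geometry, scaled by a correction
    factor (generic value 1.25 per the abstract; individually calibrated
    factors gave better accuracy in the study)."""
    l, h = leg_length_m, com_vertical_excursion_m
    return correction * 2.0 * sqrt(2.0 * l * h - h ** 2)

# Illustrative inputs: 1.0 m pendulum length, 5 cm vertical CoM excursion.
print(round(ip_step_length(1.0, 0.05), 4))  # → 0.7806 (m)
```

Replacing the generic factor with an individually calibrated one is the study's first estimator; the geometric core stays the same.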

  3. Spatiotemporal variation of watershed health propensity through reliability-resilience-vulnerability based drought index (case study: Shazand Watershed in Iran).

    PubMed

    Sadeghi, Seyed Hamidreza; Hazbavi, Zeinab

    2017-06-01

Quantitative response of watershed health to climate variability is of critical importance for watershed managers. However, existing studies have seldom considered the impact of climate variability on watershed health. The present study therefore aimed to analyze the temporal and spatial variability of reliability (Rel), resilience (Res) and vulnerability (Vul) indicators in the node years 1986, 1998, 2008 and 2014 in connection with the Standardized Precipitation Index (SPI) for 24 sub-watersheds in the Shazand Watershed of Markazi Province in Iran. The analysis was based on rainfall variability as one of the main climatic drivers. To achieve the study purposes, the monthly rainfall time series of eight rain gauge stations distributed across the watershed or neighboring areas were analyzed and the corresponding SPIs and Rel-Res-Vul indicators were calculated. Ultimately, the spatial variation of the SPI-oriented Rel-Res-Vul index was mapped for the study watershed using a Geographic Information System (GIS). The average and standard deviation of the SPI-Rel-Res-Vul index for the study years 1986, 1998, 2008 and 2014 were 0.240±0.025, 0.290±0.036, 0.077±0.028 and 0.241±0.081, respectively. Overall, the results demonstrated spatiotemporal variation of the SPI-Rel-Res-Vul watershed health index in the study area. Accordingly, all the sub-watersheds of the Shazand Watershed fell into unhealthy and very unhealthy conditions in all the study years. For 1986 and 1998 all the sub-watersheds were assessed as unhealthy; conditions declined to very unhealthy in 2008, and by 2014 some 75% of the watershed had returned to unhealthy status while the rest remained under very unhealthy conditions.
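Hashimoto-style Rel-Res-Vul indicators for a monthly SPI-like series can be sketched as follows; the threshold convention and toy series are illustrative assumptions, not the study's exact definitions:

```python
def rel_res_vul(series, threshold=0.0):
    """Reliability, resilience and vulnerability of a monthly index series.
    A month is satisfactory when its value is at or above the threshold."""
    ok = [v >= threshold for v in series]
    rel = sum(ok) / len(series)  # fraction of satisfactory months
    fail_months = [i for i, good in enumerate(ok) if not good]
    # Resilience: probability a failure month is followed by recovery.
    recoveries = sum(1 for i in fail_months if i + 1 < len(ok) and ok[i + 1])
    fail_with_next = sum(1 for i in fail_months if i + 1 < len(ok))
    res = recoveries / fail_with_next if fail_with_next else 1.0
    # Vulnerability: mean deficit below the threshold during failure months.
    vul = (sum(threshold - series[i] for i in fail_months) / len(fail_months)
           if fail_months else 0.0)
    return rel, res, vul

# Toy SPI-like series: 8 months, two drought spells of differing severity.
print(rel_res_vul([1, 1, -1, -1, 1, 1, -2, 1]))
```

High reliability with low resilience flags a watershed that fails rarely but recovers slowly, the kind of distinction the composite SPI-Rel-Res-Vul index is built to capture.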

  4. Factor Structure and Reliability of the Revised Conflict Tactics Scales' (CTS2) 10-Factor Model in a Community-Based Female Sample

    ERIC Educational Resources Information Center

    Yun, Sung Hyun

    2011-01-01

    The present study investigated the factor structure and reliability of the revised Conflict Tactics Scales' (CTS2) 10-factor model in a community-based female sample (N = 261). The underlying factor structure of the 10-factor model was tested by the confirmatory multiple group factor analysis, which demonstrated complex factor cross-loadings…

  5. The Reliability of Workplace-Based Assessment in Postgraduate Medical Education and Training: A National Evaluation in General Practice in the United Kingdom

    ERIC Educational Resources Information Center

    Murphy, Douglas J.; Bruce, David A.; Mercer, Stewart W.; Eva, Kevin W.

    2009-01-01

    To investigate the reliability and feasibility of six potential workplace-based assessment methods in general practice training: criterion audit, multi-source feedback from clinical and non-clinical colleagues, patient feedback (the CARE Measure), referral letters, significant event analysis, and video analysis of consultations. Performance of GP…

  6. Reliable, Efficient and Cost-Effective Electric Power Converter for Small Wind Turbines Based on AC-link Technology

    SciTech Connect

    Darren Hammell; Mark Holveck; DOE Project Officer - Keith Bennett

    2006-08-01

    Grid-tied inverter power electronics have been an Achilles heel of the small wind industry, providing opportunity for new technologies to provide lower costs, greater efficiency, and improved reliability. The small wind turbine market is also moving towards the 50-100kW size range. The unique AC-link power conversion technology provides efficiency, reliability, and power quality advantages over existing technologies, and Princeton Power will adapt prototype designs used for industrial asynchronous motor control to a 50kW small wind turbine design.

  7. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
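As a simplified illustration of importance-sampling-based reliability estimation (not the paper's full adaptive AIS scheme), a failure probability P[g(X) < 0] can be estimated by sampling from a density shifted toward the failure region and reweighting each failure sample by a likelihood ratio; the limit state and shift point below are illustrative assumptions:

```python
# Simplified importance-sampling estimate of a failure probability
# P[g(X) < 0] for a standard normal X. The limit state g(x) = 3 - x and the
# sampling density N(3, 1) are illustrative, not the paper's adaptive scheme.
import math
import random

def norm_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def failure_prob_is(g, shift, n=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)  # sample near the failure region
        if g(x) < 0:               # failure indicator
            total += norm_pdf(x) / norm_pdf(x, mu=shift)  # likelihood-ratio weight
    return total / n

# failure when x > 3; the exact probability is 1 - Phi(3), about 1.35e-3
p = failure_prob_is(lambda x: 3.0 - x, shift=3.0)
```

Centering the sampling density near the failure boundary concentrates samples where failures occur; this is the variance-reduction idea that the AIS method refines adaptively.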

  8. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  9. Estimating the Reliability of a Test Battery Composite or a Test Score Based on Weighted Item Scoring

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2004-01-01

    In some settings, the validity of a battery composite or a test score is enhanced by weighting some parts or items more heavily than others in the total score. This article describes methods of estimating the total score reliability coefficient when differential weights are used with items or parts.

  10. Reliability and Validity of Inferences about Teachers Based on Student Scores. William H. Angoff Memorial Lecture Series

    ERIC Educational Resources Information Center

    Haertel, Edward H.

    2013-01-01

    Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive…

  11. Development of a reliable and highly sensitive, digital PCR-based assay for early detection of HLB

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Huanglongbing (HLB) is caused by a phloem-limited bacterium, Ca. Liberibacter asiaticus (Las) in the United States. The bacterium often is present at a low concentration and unevenly distributed in the early stage of infection, making reliable and early diagnosis a serious challenge. Conventional d...

  12. A Study on Combination of Reliability-based Automatic Repeat reQuest with Error Potential-based Error Correction for Improving P300 Speller Performance

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiromu; Yoshikawa, Tomohiro; Furuhashi, Takeshi

    Brain-computer interfaces (BCIs) are systems that translate one's thoughts into commands to restore control and communication to severely paralyzed people, and are also appealing to healthy people. The P300 speller, one of the most renowned BCIs for communication, allows users to select letters by thought alone. However, due to the low signal-to-noise ratio of the P300, signal averaging is often performed, which improves spelling accuracy but degrades spelling speed. The authors have proposed reliability-based automatic repeat request (RB-ARQ) to mitigate this problem. RB-ARQ can be enhanced when combined with error correction based on the error-related potentials (ErrPs) that occur on erroneous feedback. Thus, this study aims to reveal the characteristics of ErrPs in the P300 speller paradigm, and to combine RB-ARQ with ErrP-based error correction to further improve performance. The results show that the ErrPs observed in the current study resemble previously reported ErrPs observed in a cursor control task using a BCI, and that the performance of the P300 speller could be improved by 35 percent on average.

  13. Test-retest reliability of fMRI-based graph theoretical properties during working memory, emotion processing, and resting state.

    PubMed

    Cao, Hengyi; Plichta, Michael M; Schäfer, Axel; Haddad, Leila; Grimm, Oliver; Schneider, Michael; Esslinger, Christine; Kirsch, Peter; Meyer-Lindenberg, Andreas; Tost, Heike

    2014-01-01

    The investigation of the brain connectome with functional magnetic resonance imaging (fMRI) and graph theory analyses has recently gained much popularity, but little is known about the robustness of these properties, in particular those derived from active fMRI tasks. Here, we studied the test-retest reliability of brain graphs calculated from 26 healthy participants with three established fMRI experiments (n-back working memory, emotional face-matching, resting state) and two parcellation schemes for node definition (AAL atlas, functional atlas proposed by Power et al.). We compared the intra-class correlation coefficients (ICCs) of five different data processing strategies and demonstrated a superior reliability of task-regression methods with condition-specific regressors. The between-task comparison revealed significantly higher ICCs for resting state relative to the active tasks, and a superiority of the n-back task relative to the face-matching task for global and local network properties. While the mean ICCs were typically lower for the active tasks, overall fair to good reliabilities were detected for global and local connectivity properties and, for the n-back task with both atlases, for small-worldness. For all three tasks and atlases, low mean ICCs were seen for the local network properties. However, node-specific good reliabilities were detected for node degree in regions known to be critical for the challenged functions (resting state: default-mode network nodes; n-back: fronto-parietal nodes; face-matching: limbic nodes). Between-atlas comparison demonstrated significantly higher reliabilities for the functional parcellations for global and local network properties. Our findings can inform the choice of processing strategies, brain atlases and outcome properties for fMRI studies using active tasks, graph theory methods, and within-subject designs, in particular future pharmaco-fMRI studies.

  14. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    PubMed

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a masters or doctoral degree.

  15. Intra-observer reliability for measuring first and second toe and metatarsal protrusion distance using palpation-based tests: a test-retest study

    PubMed Central

    2014-01-01

    Background Measurement of first and second metatarsal and toe protrusion is frequently used to explain foot problems using x-rays, osteological measurements or palpation-based tests. Length differences could be related to the appearance of problems in the foot. A test-retest design was conducted in order to establish the intra-rater reliability of three palpation-based tests. Methods 202 feet of physical therapy students and teachers of the CEU San Pablo University of Madrid, 39 men and 62 women, were measured using three different tests. Data were analysed using SPSS version 15.0. Mean, SD and 95% CI were calculated for each variable. A normal distribution of quantitative data was assessed using the Kolmogorov-Smirnov test. The test-retest intra-rater reliability was assessed using an Intraclass Correlation Coefficient (ICC). The Standard Error Mean (SEM) and the Minimal Detectable Change (MDC) were also obtained. Results All the ICC values showed a high degree of reliability (Test 1 = 0.97, Test 2 = 0.86 and Test 3 = 0.88) as did the SEM (Test 1 = 0.07, Test 2 = 0.10 and Test 3 = 0.11) and the MDC (Test 1 = 0.21, Test 2 = 0.30 and Test 3 = 0.31). Conclusions Reliability of measuring first and second metatarsal and toe protrusion using the three palpation-based tests showed a high degree of reliability. PMID:25729437
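The SEM and MDC statistics reported above follow from the ICC via the common formulas SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A minimal sketch (the SD value is an illustrative assumption, not a figure from the study):

```python
# Deriving SEM (standard error of measurement) and MDC (minimal detectable
# change) from an ICC and a sample SD via the common formulas
# SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM.
# The SD and ICC values below are illustrative assumptions.
import math

def sem_and_mdc(sd, icc, z=1.96):
    sem = sd * math.sqrt(1.0 - icc)   # standard error of measurement
    mdc = z * math.sqrt(2.0) * sem    # minimal detectable change, 95% level
    return sem, mdc

sem, mdc = sem_and_mdc(sd=0.40, icc=0.97)
```

The sqrt(2) factor appears because the MDC concerns the difference between two measurements, each carrying its own measurement error.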

  16. Integrating field methodology and web-based data collection to assess the reliability of the Alcohol Use Disorders Identification Test (AUDIT).

    PubMed

    Celio, Mark A; Vetter-O'Hagen, Courtney S; Lisman, Stephen A; Johansen, Gerard E; Spear, Linda P

    2011-12-01

    Field methodologies offer a unique opportunity to collect ecologically valid data on alcohol use and its associated problems within natural drinking environments. However, limitations in follow-up data collection methods have left unanswered questions regarding the psychometric properties of field-based measures. The aim of the current study is to evaluate the reliability of self-report data collected in a naturally occurring environment - as indexed by the Alcohol Use Disorders Identification Test (AUDIT) - compared to self-report data obtained through an innovative web-based follow-up procedure. Individuals recruited outside of bars (N=170; mean age=21; range 18-32) provided a BAC sample and completed a self-administered survey packet that included the AUDIT. BAC feedback was provided anonymously through a dedicated web page. Upon sign in, follow-up participants (n=89; 52%) were again asked to complete the AUDIT before receiving their BAC feedback. Reliability analyses demonstrated that AUDIT scores - both continuous and dichotomized at the standard cut-point - were stable across field- and web-based administrations. These results suggest that self-report data obtained from acutely intoxicated individuals in naturally occurring environments are reliable when compared to web-based data obtained after a brief follow-up interval. Furthermore, the results demonstrate the feasibility, utility, and potential of integrating field methods and web-based data collection procedures.

  17. Optimizing the preventive maintenance scheduling by genetic algorithm based on cost and reliability in National Iranian Drilling Company

    NASA Astrophysics Data System (ADS)

    Javanmard, Habibollah; Koraeizadeh, Abd al-Wahhab

    2016-06-01

    The present research aims at predicting the activities required for preventive maintenance in terms of optimal equipment cost and reliability. The research sample includes all offshore drilling equipment of the FATH 59 Derrick Site affiliated with the National Iranian Drilling Company. Methodologically, the research uses a field approach and, in terms of its objectives, is classified as applied research. Some of the data are extracted from documents available in the equipment and maintenance department of the FATH 59 Derrick site, and the other needed data result from experts' estimates processed through a genetic algorithm. The research result is provided as a prediction of downtimes, costs, and reliability over a predetermined time interval. The findings of the method are applicable to all manufacturing and non-manufacturing equipment.

  18. Test-retest reliability of a battery of field-based health-related fitness measures for adolescents.

    PubMed

    Lubans, David R; Morgan, Philip; Callister, Robin; Plotnikoff, Ronald C; Eather, Narelle; Riley, Nicholas; Smith, Chris J

    2011-04-01

    The main aim of this study was to determine the test-retest reliability of existing tests of health-related fitness. Participants (mean age 14.8 years, s = 0.4) were 42 boys and 26 girls who completed the study assessments on two occasions separated by one week. The following tests were conducted: bioelectrical impedance analysis (BIA) to calculate percent body fat, leg dynamometer, 90° push-up, 7-stage sit-up, and wall squat tests. Intra-class correlation (ICC), paired samples t-tests, and typical error expressed as a coefficient of variation were calculated. The mean percent body fat intra-class correlation coefficient was similar for boys (ICC = 0.95) and girls (ICC = 0.93), but the mean coefficient of variation was considerably higher for boys than girls (22.2% vs. 12.2%). The boys' coefficients of variation for the tests of muscular fitness ranged from 9.0% for the leg dynamometer test to 26.5% for the timed wall squat test. The girls' coefficients of variation ranged from 17.1% for the sit-up test to 21.4% for the push-up test. Although the BIA machine produced reliable estimates of percent body fat, the tests of muscular fitness resulted in high systematic error, suggesting that these measures may require an extensive familiarization phase before the results can be considered reliable.
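The typical error expressed as a coefficient of variation, as used above, is commonly computed from paired trials as the SD of the between-trial differences divided by sqrt(2), then expressed as a percentage of the grand mean. A minimal sketch with illustrative data:

```python
# Typical error and coefficient of variation from two trials of the same test:
# TE = SD(differences) / sqrt(2), CV = 100 * TE / grand mean.
# The scores below are illustrative, not data from the study.
import math
import statistics

def typical_error_cv(trial1, trial2):
    diffs = [b - a for a, b in zip(trial1, trial2)]
    te = statistics.stdev(diffs) / math.sqrt(2)  # typical error
    grand_mean = statistics.mean(trial1 + trial2)
    return te, 100.0 * te / grand_mean           # CV as a percentage

te, cv = typical_error_cv([20, 25, 18, 30], [22, 24, 19, 33])
```

A high CV, as seen for the muscular fitness tests, signals large trial-to-trial variation even when the ICC (which reflects rank consistency) looks acceptable.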

  19. How to measure ecosystem stability? An evaluation of the reliability of stability metrics based on remote sensing time series across the major global ecosystems.

    PubMed

    De Keersmaecker, Wanda; Lhermitte, Stef; Honnay, Olivier; Farifteh, Jamshid; Somers, Ben; Coppin, Pol

    2014-07-01

    Increasing frequency of extreme climate events is likely to impose increased stress on ecosystems and to jeopardize the services that ecosystems provide. Therefore, it is of major importance to assess the effects of extreme climate events on the temporal stability (i.e., the resistance, the resilience, and the variance) of ecosystem properties. Most time series of ecosystem properties are, however, affected by varying data characteristics, uncertainties, and noise, which complicate the comparison of ecosystem stability metrics (ESMs) between locations. Therefore, there is a strong need for a more comprehensive understanding of the reliability of stability metrics and how they can be used to compare ecosystem stability globally. The objective of this study was to evaluate the performance of temporal ESMs based on time series of the Moderate Resolution Imaging Spectroradiometer derived Normalized Difference Vegetation Index of 15 global land-cover types. We provide a framework (i) to assess the reliability of ESMs as a function of data characteristics, uncertainties, and noise and (ii) to integrate reliability estimates in future global ecosystem stability studies against climate disturbances. The performance of our framework was tested through (i) a global ecosystem comparison and (ii) a comparison of ecosystem stability in response to the 2003 drought. The results show the influence of data quality on the accuracy of ecosystem stability metrics. White noise, biased noise, and trends have a stronger effect on the accuracy of stability metrics than the length of the time series, temporal resolution, or amount of missing values. Moreover, we demonstrate the importance of integrating reliability estimates to interpret stability metrics within confidence limits. Based on these confidence limits, other studies dealing with specific ecosystem types or locations can be put into context, and a more reliable assessment of ecosystem stability against environmental disturbances can be achieved.

  20. Electrical and reliability characteristics of Mn-doped nano BaTiO3-based ceramics for ultrathin multilayer ceramic capacitor application

    NASA Astrophysics Data System (ADS)

    Gong, Huiling; Wang, Xiaohui; Zhang, Shaopeng; Tian, Zhibin; Li, Longtu

    2012-12-01

    Nano BaTiO3-based dielectric ceramics were prepared by a chemical coating approach; they are promising for ultrathin multilayer ceramic capacitor (MLCC) applications. The effects of Mn doping on the microstructures and dielectric properties of the ceramics were investigated. Degradation testing and impedance spectroscopy were employed to study the resistance degradation and the conduction mechanism of Mn-doped nano-BaTiO3 ceramic samples. It was found that the reliability characteristics depended greatly on the Mn doping content. Moreover, BaTiO3 ceramic with nanoscale grain size is more sensitive to the Mn doping content than that with sub-micron grains. The addition of 0.3 mol. % Mn is beneficial for improving the reliability of the nano BaTiO3-based ceramics, which is an important parameter for MLCC applications. However, further increasing the doping amount deteriorates the performance of the ceramic samples.

  1. Reliability-based econometrics of aerospace structural systems: Design criteria and test options. Ph.D. Thesis - Georgia Inst. of Tech.

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1974-01-01

    The design criteria and test options for aerospace structural reliability were investigated. A decision methodology was developed for selecting a combination of structural tests and structural design factors. The decision method involves the use of Bayesian statistics and statistical decision theory. Procedures are discussed for obtaining and updating data-based probabilistic strength distributions for aerospace structures when test information is available and for obtaining subjective distributions when data are not available. The techniques used in developing the distributions are explained.

  2. Reliability of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (a multipath strategy), which is important for reliability but significantly increases WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs that considers the battery level as a key factor. Moreover, this model is based on the routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of power consumption on the reliability of WSNs. PMID:25157553
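A hedged sketch of the general idea (not the paper's actual model): scale each node's reliability by its remaining battery fraction, take a path's reliability as the product over its nodes, and treat multipath routing as parallel redundancy:

```python
# Sketch (not the paper's model): node reliability scaled by remaining battery
# fraction, path reliability as the product over its nodes, and multipath
# routing treated as parallel redundancy. All numbers are illustrative.

def path_reliability(nodes):
    # nodes: list of (base_reliability, battery_fraction) pairs
    r = 1.0
    for base, battery in nodes:
        r *= base * battery
    return r

def multipath_reliability(paths):
    unreliability = 1.0
    for p in paths:
        unreliability *= 1.0 - path_reliability(p)
    return 1.0 - unreliability

# two disjoint two-hop paths between source and sink
r = multipath_reliability([
    [(0.99, 0.9), (0.99, 0.8)],
    [(0.95, 1.0), (0.97, 0.7)],
])
```

The sketch makes the trade-off visible: adding a path raises delivery reliability, but every extra transmission drains batteries and so lowers future per-node reliability.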

  3. Reliability in CMOS IC processing

    NASA Technical Reports Server (NTRS)

    Shreeve, R.; Ferrier, S.; Hall, D.; Wang, J.

    1990-01-01

    Critical CMOS IC processing reliability monitors are defined in this paper. These monitors are divided into three categories: process qualifications, ongoing production workcell monitors, and ongoing reliability monitors. The key measures in each of these categories are identified and prioritized based on their importance.

  4. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

  5. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.

  6. Reliability/redundancy trade-off evaluation for multiplexed architectures used to implement quantum dot based computing

    SciTech Connect

    Bhaduri, D.; Shukla, S. K.; Graham, P. S.; Gokhale, M.

    2004-01-01

    With the advent of nanocomputing, researchers have proposed Quantum Dot Cellular Automata (QCA) as one of the implementation technologies. The majority gate is one of the fundamental gates implementable with QCAs. Moreover, majority gates play an important role in defect-tolerant circuit implementations for nanotechnologies due to their use in redundancy mechanisms such as TMR, CTMR, etc. Therefore, providing a reliable implementation of majority logic using some redundancy mechanism is extremely important. This problem was addressed by von Neumann in 1956, in the form of 'majority multiplexing', and since then several analytical probabilistic models have been proposed to analyze majority multiplexing circuits. However, such analytical approaches are combinatorially challenging and error-prone. Also, the previous analyses did not distinguish between permanent faults at the gates and transient faults due to noisy interconnects or noise effects on gates. In this paper, we provide explicit fault models for transient and permanent errors at the gates and noise effects at the interconnects. We model majority multiplexing in a probabilistic system description language, and use probabilistic model checking to analyze the effects of our fault models on the different reliability/redundancy trade-offs for majority multiplexing configurations. We also draw parallels with another fundamental logic-gate multiplexing technique, namely NAND multiplexing. Tools and methodologies for analyzing redundant architectures that use majority gates will help logic designers quickly evaluate the amount of redundancy needed to achieve a given level of reliability. VLSI designs at the nanoscale will utilize implementation fabrics prone to faults of a permanent and transient nature, and the interconnects will be extensively affected by noise, hence the need for tools that can capture probabilistically quantified fault models and provide quick evaluation of the trade-offs.
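For reference, the textbook reliability relation behind TMR-style majority voting (with three independent modules of reliability R and a perfect voter, the system works when at least two modules do) can be sketched as:

```python
# Textbook TMR reliability: three independent modules of reliability R with a
# perfect majority voter succeed when at least two modules succeed:
# R_tmr = 3R^2 - 2R^3. Background context only, not the paper's PRISM-style model.

def tmr_reliability(r):
    return 3 * r**2 - 2 * r**3

r_tmr = tmr_reliability(0.9)  # redundancy helps here because 0.9 > 0.5
```

Below R = 0.5 the voter makes things worse, which is one reason multiplexing analyses must track fault rates at both the gates and the interconnects.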

  7. Rational Hydrogenation for Enhanced Mobility and High Reliability on ZnO-based Thin Film Transistors: From Simulation to Experiment.

    PubMed

    Xu, Lei; Chen, Qian; Liao, Lei; Liu, Xingqiang; Chang, Ting-Chang; Chang, Kuan-Chang; Tsai, Tsung-Ming; Jiang, Changzhong; Wang, Jinlan; Li, Jinchai

    2016-03-02

    Hydrogenation is one of the effective methods for improving the performance of ZnO thin film transistors (TFTs), which originates from the fact that hydrogen (H) acts as a defect passivator and a shallow n-type dopant in ZnO materials. However, passivation accompanied by excessive H doping of the channel region of a ZnO TFT is undesirable because high carrier density leads to negative threshold voltages. Herein, we report that Mg/H codoping can overcome the trade-off between performance and reliability in ZnO TFTs. Theoretical calculation suggests that the incorporation of Mg in hydrogenated ZnO decreases the formation energy of interstitial H and increases the formation energy of O-vacancies (VO). The experimental results demonstrate that diluted Mg in hydrogenated ZnO TFTs is sufficient to boost mobility from 10 to 32.2 cm(2)/(V s) at a low carrier density (∼2.0 × 10(18) cm(-3)), which can be attributed to a decreased electron effective mass caused by surface band bending. All results verified that Mg/H codoping can significantly passivate the VO to improve device reliability and enhance mobility. This finding thus points the way to high-performance metal oxide TFTs for low-cost, large-volume, flexible electronics.

  8. Improving the communication reliability of body sensor networks based on the IEEE 802.15.4 protocol.

    PubMed

    Gomes, Diogo; Afonso, José A

    2014-03-01

    Body sensor networks (BSNs) enable continuous monitoring of patients anywhere, with minimum constraints to daily life activities. Although the IEEE 802.15.4 and ZigBee(®) (ZigBee Alliance, San Ramon, CA) standards were mainly developed for use in wireless sensors network (WSN) applications, they are also widely used in BSN applications because of device characteristics such as low power, low cost, and small form factor. However, compared with WSNs, BSNs present some very distinctive characteristics in terms of traffic and mobility patterns, heterogeneity of the nodes, and quality of service requirements. This article evaluates the suitability of the carrier sense multiple access-collision avoidance protocol, used by the IEEE 802.15.4 and ZigBee standards, for data-intensive BSN applications, through the execution of experimental tests in different evaluation scenarios, in order to take into account the effects of contention, clock drift, and hidden nodes on the communication reliability. Results show that the delivery ratio may decrease substantially during transitory periods, which can last for several minutes, to a minimum of 90% with retransmissions and 13% without retransmissions. This article also proposes and evaluates the performance of the BSN contention avoidance mechanism, which was designed to solve the identified reliability problems. This mechanism was able to restore the delivery ratio to 100% even in the scenario without retransmissions.

  9. Reliability and Validity of a Novel Internet-Based Battery to Assess Mood and Cognitive Function in the Elderly.

    PubMed

    Myers, Candice A; Keller, Jeffrey N; Allen, H Raymond; Brouillette, Robert M; Foil, Heather; Davis, Allison B; Greenway, Frank L; Johnson, William D; Martin, Corby K

    2016-10-18

    Dementia is a chronic condition in the elderly and depression is often a concurrent symptom. As populations continue to age, accessible and useful tools to screen for cognitive function and its associated symptoms in elderly populations are needed. The aim of this study was to test the reliability and validity of a new internet-based assessment battery for screening mood and cognitive function in an elderly population. Specifically, the Helping Hand Technology (HHT) assessments for depression (HHT-D) and global cognitive function (HHT-G) were evaluated in a sample of 57 elderly participants (22 male, 35 female) aged 59-85 years. The study sample was categorized into three groups: 1) dementia (n = 8; Mini-Mental State Exam (MMSE) score 10-24), 2) mild cognitive impairment (n = 24; MMSE score 25-28), and 3) control (n = 25; MMSE score 29-30). Test-retest reliability (Pearson correlation coefficient, r) and internal consistency reliability (Cronbach's alpha, α) of the HHT-D and HHT-G were assessed. Validity of the HHT-D and HHT-G was tested via comparison (Pearson r) to commonly used pencil-and-paper based assessments: HHT-D versus the Geriatric Depression Scale (GDS) and HHT-G versus the MMSE. Good test-retest (r = 0.80; p < 0.0001) and acceptable internal consistency reliability (α= 0.73) of the HHT-D were established. Moderate support for the validity of the HHT-D was obtained (r = 0.60 between the HHT-D and GDS; p < 0.0001). Results indicated good test-retest (r = 0.87; p < 0.0001) and acceptable internal consistency reliability (α= 0.70) of the HHT-G. Validity of the HHT-G was supported (r = 0.71 between the HHT-G and MMSE; p < 0.0001). In summary, the HHT-D and HHT-G were found to be reliable and valid computerized assessments to screen for depression and cognitive status, respectively, in an elderly sample.
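The two reliability statistics used above are standard; internal consistency (Cronbach's alpha), for example, can be computed from an item-by-respondent score matrix as follows (the data are illustrative, not from the study):

```python
# Cronbach's alpha from per-item score lists (one inner list per item, one
# position per respondent). The 3-item, 4-respondent matrix is illustrative.
import statistics

def cronbach_alpha(items):
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 5], [1, 3, 3, 5]])
```

Values around 0.7, as reported for the HHT-D and HHT-G, are conventionally read as acceptable internal consistency for screening instruments.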

  10. Assessing the Quality of Mobile Exercise Apps Based on the American College of Sports Medicine Guidelines: A Reliable and Valid Scoring Instrument

    PubMed Central

    Bian, Jiang; Leavitt, Trevor; Vincent, Heather K; Vander Zalm, Lindsey; Teurlings, Tyler L; Smith, Megan D

    2017-01-01

    Background Regular physical activity can not only help with weight management, but also lower cardiovascular risks, cancer rates, and chronic disease burden. Yet, only approximately 20% of Americans currently meet the physical activity guidelines recommended by the US Department of Health and Human Services. With the rapid development of mobile technologies, mobile apps have the potential to improve participation rates in exercise programs, particularly if they are evidence-based and are of sufficient content quality. Objective The goal of this study was to develop and test an instrument designed to score the content quality of exercise program apps with respect to the exercise guidelines set forth by the American College of Sports Medicine (ACSM). Methods We conducted two focus groups (N=14) to elicit input for developing a preliminary 27-item scoring instrument based on the ACSM exercise prescription guidelines. Three reviewers who were not sports medicine experts independently scored 28 exercise program apps using the instrument. Inter- and intra-rater reliability was assessed among the 3 reviewers. An expert reviewer, a Fellow of the ACSM, also scored the 28 apps to create criterion scores. Criterion validity was assessed by comparing nonexpert reviewers’ scores to the criterion scores. Results Overall, inter- and intra-rater reliability was high, with most coefficients being greater than .7. Inter-rater reliability coefficients ranged from .59 to .99, and intra-rater reliability coefficients ranged from .47 to 1.00. All reliability coefficients were statistically significant. Criterion validity was found to be excellent, with the weighted kappa statistics ranging from .67 to .99, indicating substantial agreement between the scores of expert and nonexpert reviewers. Finally, all apps scored poorly against the ACSM exercise prescription guidelines. None of the apps received a score greater than 35, out of a maximum possible score of 70.

  11. Inter-rater reliability between nurses for a new paediatric triage system based primarily on vital parameters: the Paediatric Triage Instrument (PETI)

    PubMed Central

    Karjala, Jaana; Eriksson, Staffan

    2017-01-01

    Introduction The major paediatric triage systems are primarily based on flow charts involving signs and symptoms for orientation and subjective estimates of the patient's condition. In contrast, the 4-level Paediatric Triage Instrument (PETI) is primarily based on vital parameters and was developed exclusively for paediatric triage in patients with medical symptoms. The aim of this study was to assess the inter-rater reliability of this triage system in children when used by nurses. Methods A design was employed in which triage was performed simultaneously and independently by a research nurse and an emergency department (ED) nurse using the PETI. All patients aged ≤12 years who presented at the ED with a medical symptom were considered eligible for participation. Results The 89 participants had a median age of 2 years and were triaged by 28 different nurses. The inter-rater reliability between nurses calculated with the quadratic-weighted κ was 0.78 (95% CI 0.67 to 0.89); the linear-weighted κ was 0.67 (95% CI 0.56 to 0.80) and the unweighted κ was 0.59 (95% CI 0.44 to 0.73). For the patients aged <1, 1–3 and >3 years, the quadratic-weighted κ values were 0.67 (95% CI 0.39 to 0.94), 0.86 (95% CI 0.75 to 0.97) and 0.73 (95% CI 0.49 to 0.97), respectively. The median triage duration was 6 min. Conclusions The PETI exhibited substantial reliability when used in children aged ≤12 years and almost perfect reliability among children aged 1–3 years. Moreover, rapid application of the PETI was demonstrated. This study has some limitations, including sample size and generalisability, but the PETI exhibited promise regarding reliability, and the next step could be either a larger reliability study or a validation study. PMID:28235966
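The unweighted, linear-weighted, and quadratic-weighted κ values above differ only in the penalty assigned to each size of disagreement. A minimal sketch of weighted Cohen's kappa with hypothetical 4-level triage ratings from two raters (illustrative only, not the study's data):

```python
def weighted_kappa(r1, r2, levels, weights="quadratic"):
    """Cohen's kappa with unweighted, linear, or quadratic disagreement weights."""
    n, k = len(r1), len(levels)
    idx = {lv: i for i, lv in enumerate(levels)}
    obs = [[0] * k for _ in range(k)]              # observed contingency table
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1
    row = [sum(r) for r in obs]                    # marginal totals
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):                                   # disagreement weight in [0, 1]
        d = abs(i - j)
        if weights == "unweighted":
            return float(d > 0)
        if weights == "linear":
            return d / (k - 1)
        return (d / (k - 1)) ** 2                  # quadratic

    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * row[i] * col[j] / n
                   for i in range(k) for j in range(k))
    return 1 - observed / expected

nurse_a = [1, 2, 2, 3, 4, 2, 3, 1, 2, 4]           # hypothetical triage levels
nurse_b = [1, 2, 3, 3, 4, 2, 2, 1, 2, 4]
for scheme in ("quadratic", "linear", "unweighted"):
    print(scheme, round(weighted_kappa(nurse_a, nurse_b, [1, 2, 3, 4], scheme), 3))
```

As in the study's results, quadratic weights give the highest κ here because every disagreement is between adjacent levels, which quadratic weighting penalizes least.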

  12. Daily life dialogue assessment in psychiatric care-face validity and inter-rater reliability of a tool based on the International Classification of Functioning, Disability and Health.

    PubMed

    Johansson, Catrin; Åström, Sture; Kauffeldt, Anders; Carlström, Eric

    2013-12-01

    This article describes the development of an assessment tool based on the International Classification of Functioning, Disability and Health (ICF) adapted to a psychiatric nursing context, in which both the patient and the nurse assess the patient's ability to participate in various spheres of life. The aim was to test psychometric properties, focusing on face validity and inter-rater reliability. Three Swedish expert groups participated. Analysis of inter-rater reliability was conducted through simulated patient cases. The resulting unweighted kappa of 0.38, linear-weighted kappa of 0.65 and quadratic-weighted kappa of 0.73 were considered acceptable for simulated patient cases.

  13. A structured pictorial questionnaire to assess DSM-III-R-based diagnoses in children (6-11 years): development, validity, and reliability.

    PubMed

    Valla, J P; Bergeron, L; Bérubé, H; Gaudet, N; St-Georges, M

    1994-08-01

    This paper presents a structured pictorial instrument, the Dominic questionnaire, to assess mental disorders in 6- to 11-year-old children. Ninety-nine drawings represent situations corresponding to DSM-III-R-based ADHD, CD, ODD, MDD, SAD, OAD, and SPh. However, the cognitive limitations of 6- to 11-year-old children do not allow for time-related measurement. The instrument takes 15-20 min to administer. Reliability and validity of the Dominic questionnaire were studied in Parent DISC-2 positive and negative outpatient and general population samples and against clinical judgement. The pictorial approach provides acceptable test-retest reliability, and the instrument makes standardized assessment possible for children as young as 6 years of age.

  14. Accuracy and reliability of GPS devices for measurement of sports-specific movement patterns related to cricket, tennis, and field-based team sports.

    PubMed

    Vickery, William M; Dascombe, Ben J; Baker, John D; Higham, Dean G; Spratford, Wayne A; Duffield, Rob

    2014-06-01

    The aim of this study was to determine the accuracy and reliability of 5, 10, and 15 Hz global positioning system (GPS) devices. Two male subjects (mean ± SD; age, 25.5 ± 0.7 years; height, 1.75 ± 0.01 m; body mass, 74 ± 5.7 kg) completed 10 repetitions of drills replicating movements typical of tennis, cricket, and field-based (football) sports. All movements were completed wearing two 5 and 10 Hz MinimaxX devices and two 15 Hz GPS-Sports devices in a specially designed harness. Criterion movement data for distance and speed were provided by a 22-camera VICON system sampling at 100 Hz. Accuracy was determined using 1-way analysis of variance with Tukey's post hoc tests. Interunit reliability was determined using intraclass correlation (ICC), and typical error was estimated as coefficient of variation (CV). Overall, the majority of distance and speed measures from the 5, 10, and 15 Hz GPS devices were not significantly different (p > 0.05) from the VICON data. Additionally, no improvements in the accuracy or reliability of the GPS devices were observed with an increase in sampling rate. However, the CVs for the 5 and 15 Hz devices for distance and speed measures ranged between 3 and 33%, with increasing variability evident in higher speed zones. The majority of ICC measures possessed a low level of interunit reliability (r = -0.35 to 0.39). Based on these results, practitioners using these devices should be aware that measurements of distance and speed may be consistently underestimated, regardless of the movements performed.
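Typical error expressed as a coefficient of variation (CV), as used above, is simply the between-trial standard deviation as a percentage of the mean. A minimal sketch with hypothetical repeated distance readings from one device (not the study's data):

```python
def coefficient_of_variation(xs):
    """CV% = sample standard deviation expressed as a percentage of the mean."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return 100 * sd / m

# Hypothetical repeated GPS distance readings (m) for the same 100 m drill.
distances = [98.2, 101.5, 97.8, 103.0, 99.5]
print(round(coefficient_of_variation(distances), 2))  # CV in percent
```

A CV in the 3-33% range, as reported for the 5 and 15 Hz units, would correspond to far noisier repeated readings than this example.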

  15. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  16. Methods for reliability and uncertainty assessment and for applicability evaluations of classification- and regression-based QSARs.

    PubMed

    Eriksson, Lennart; Jaworska, Joanna; Worth, Andrew P; Cronin, Mark T D; McDowell, Robert M; Gramatica, Paola

    2003-08-01

    This article provides an overview of methods for reliability assessment of quantitative structure-activity relationship (QSAR) models in the context of regulatory acceptance of human health and environmental QSARs. Useful diagnostic tools and data analytical approaches are highlighted and exemplified. Particular emphasis is given to the question of how to define the applicability borders of a QSAR and how to estimate parameter and prediction uncertainty. The article ends with a discussion regarding QSAR acceptability criteria. This discussion contains a list of recommended acceptability criteria, and we give reference values for important QSAR performance statistics. Finally, we emphasize that rigorous and independent validation of QSARs is an essential step toward their regulatory acceptance and implementation.

  17. Methods for reliability and uncertainty assessment and for applicability evaluations of classification- and regression-based QSARs.

    PubMed Central

    Eriksson, Lennart; Jaworska, Joanna; Worth, Andrew P; Cronin, Mark T D; McDowell, Robert M; Gramatica, Paola

    2003-01-01

    This article provides an overview of methods for reliability assessment of quantitative structure-activity relationship (QSAR) models in the context of regulatory acceptance of human health and environmental QSARs. Useful diagnostic tools and data analytical approaches are highlighted and exemplified. Particular emphasis is given to the question of how to define the applicability borders of a QSAR and how to estimate parameter and prediction uncertainty. The article ends with a discussion regarding QSAR acceptability criteria. This discussion contains a list of recommended acceptability criteria, and we give reference values for important QSAR performance statistics. Finally, we emphasize that rigorous and independent validation of QSARs is an essential step toward their regulatory acceptance and implementation. PMID:12896860

  18. Design of a reliable PUF circuit based on R-2R ladder digital-to-analog convertor

    NASA Astrophysics Data System (ADS)

    Pengjun, Wang; Xuelong, Zhang; Yuejun, Zhang; Jianrui, Li

    2015-07-01

    A novel physical unclonable function (PUF) circuit is proposed, which relies on the non-linear characteristics of the analog voltage generated by an R-2R ladder DAC. After amplifying the deviation signal, the robustness of the DAC-PUF circuit increases significantly. The DAC-PUF circuit is designed in TSMC 65 nm CMOS technology and the layout occupies 86.06 × 63.56 μm². Monte Carlo simulation results show that the reliability of the DAC-PUF circuit is above 98% over a comprehensive range of environmental variations, such as temperature and supply voltage. Project supported by the National Natural Science Foundation of China (Nos. 61474068, 61404076, 61274132), the Zhejiang Provincial Natural Science Foundation of China (No. LQ14F040001), and the K. C. Wong Magna Fund in Ningbo University, China.
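PUF reliability figures like the 98% above are conventionally computed as 100 × (1 − mean intra-chip fractional Hamming distance) between a reference response and responses regenerated under varied temperature and supply voltage. A sketch with hypothetical bit strings (the paper's actual responses are not given):

```python
def puf_reliability(reference, regenerated):
    """Percent reliability: 100 * (1 - average fractional Hamming distance)."""
    n_bits = len(reference)
    frac_hd = [sum(a != b for a, b in zip(reference, resp)) / n_bits
               for resp in regenerated]
    return 100 * (1 - sum(frac_hd) / len(frac_hd))

# Hypothetical 16-bit responses: one exact repeat, two with one flipped bit each.
ref = "1011001110001011"
runs = ["1011001110001011", "1011001110001010", "1111001110001011"]
print(round(puf_reliability(ref, runs), 2))
```

Real evaluations would use far wider responses and many environmental corners, but the metric is the same.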

  19. Objectives, priorities, reliable knowledge, and science-based management of Missouri River interior least terns and piping plovers

    USGS Publications Warehouse

    Sherfy, Mark; Anteau, Michael; Shaffer, Terry; Sovada, Marsha; Stucker, Jennifer

    2011-01-01

    Supporting recovery of federally listed interior least tern (Sternula antillarum athalassos; tern) and piping plover (Charadrius melodus; plover) populations is a desirable goal in management of the Missouri River ecosystem. Many tools are implemented in support of this goal, including habitat management, annual monitoring, directed research, and threat mitigation. Similarly, many types of data can be used to make management decisions, evaluate system responses, and prioritize research and monitoring. The ecological importance of Missouri River recovery and the conservation status of terns and plovers place a premium on efficient and effective resource use. Efficiency is improved when a single data source informs multiple high-priority decisions, whereas effectiveness is improved when decisions are informed by reliable knowledge. Seldom will a single study design be optimal for addressing all data needs, making prioritization of needs essential. Data collection motivated by well-articulated objectives and priorities has many advantages over studies in which questions and priorities are determined retrospectively. Research and monitoring for terns and plovers have generated a wealth of data that can be interpreted in a variety of ways. The validity and strength of conclusions from analyses of these data is dependent on compatibility between the study design and the question being asked. We consider issues related to collection and interpretation of biological data, and discuss their utility for enhancing the role of science in management of Missouri River terns and plovers. A team of USGS scientists at Northern Prairie Wildlife Research Center has been conducting tern and plover research on the Missouri River since 2005. The team has had many discussions about the importance of setting objectives, identifying priorities, and obtaining reliable information to answer pertinent questions about tern and plover management on this river system. The objectives of this

  20. Reliability testing and analysis of safing and arming devices for army fuzes

    NASA Astrophysics Data System (ADS)

    Zunino, James L., III; Skelton, Donald R.; Robinson, Charles

    2008-02-01

    To address the lack of micro-electro-mechanical systems (MEMS) reliability data, as well as the lack of a standardized methodology for assessing reliability, the Metallic Materials Technology Branch at Picatinny Arsenal has initiated a MEMS Reliability Assessment Program. This lack of data has been identified as a barrier to the utilization of MEMS in DOD systems. Of particular concern are the impacts of long-term storage and environmental exposure on the reliability of devices. Specific objectives of the Metallic Materials Technology Branch (MMTB) program include: • Identify MEMS devices to be utilized in weapon systems. • Determine the relevant categories of MEMS materials, technologies and designs in these applications. • Assess the operational environments in which military MEMS devices may be utilized. • Analyze the compatibility of MEMS devices with energetic and other hazardous materials. • Identify physics of failure, failure mechanisms and failure rates. • Develop accelerated test protocols for assessing the reliability of MEMS according to the categories. • Develop reliability models for these devices. • Conduct testing and modeling on representative devices of interest to Army and DoD. • Develop a methodology and capability for conducting independent assessments of reliability that cannot be obtained from private industry. In support of this effort, some testing has been performed on prototype mechanical Safety and Arming (S&A) devices for the 25-mm XM25 Air Burst Weapon. The objective is to test the S&A as a representative device for the identification of potential failure modes and effects for devices of the same class. Information derived from this testing will be used to develop standardized test protocols, formulate reliability models, establish design criteria, and identify critical parameters in support of the S&A development effort. To date, Environmental Stress Screening (ESS) tests have been performed on samples of the device

  1. Reliability and validity of a semi-structured DSM-based diagnostic interview module for the assessment of Attention Deficit Hyperactivity Disorder in adult psychiatric outpatients.

    PubMed

    Gorlin, Eugenia I; Dalrymple, Kristy; Chelminski, Iwona; Zimmerman, Mark

    2016-08-30

    Despite growing recognition that the symptoms and functional impairments of Attention Deficit/Hyperactivity Disorder (ADHD) persist into adulthood, only a few psychometrically sound diagnostic measures have been developed for the assessment of ADHD in adults, and none have been validated for use in a broad treatment-seeking psychiatric sample. The current study presents the reliability and validity of a semi-structured DSM-based diagnostic interview module for ADHD, which was administered to 1194 adults presenting to an outpatient psychiatric practice. The module showed excellent internal consistency and interrater reliability, good convergent and discriminant validity (as indexed by relatively high correlations with self-report measures of ADHD and ADHD-related constructs and little or no correlation with other, non-ADHD symptom domains), and good construct validity (as indexed by significantly higher rates of psychosocial impairment and self-reported family history of ADHD in individuals who meet criteria for an ADHD diagnosis). This instrument is thus a reliable and valid diagnostic tool for the detection of ADHD in adults presenting for psychiatric evaluation and treatment.

  2. The Development of DNA Based Methods for the Reliable and Efficient Identification of Nicotiana tabacum in Tobacco and Its Derived Products

    PubMed Central

    Fan, Wei; Li, Rong; Li, Sifan; Ping, Wenli; Li, Shujun; Naumova, Alexandra; Peelen, Tamara; Yuan, Zheng; Zhang, Dabing

    2016-01-01

    Reliable methods to detect the presence of tobacco components in tobacco products are needed to effectively control smuggling and to classify tariffs and excise in the tobacco industry. In this study, two sensitive and specific DNA-based methods, a quantitative real-time PCR (qPCR) assay and a loop-mediated isothermal amplification (LAMP) assay, were developed for the reliable and efficient detection of the presence of tobacco (Nicotiana tabacum) in various tobacco samples and commodities. Both assays targeted the same sequence of the uridine 5′-monophosphate synthase (UMPS) gene, and their specificities and sensitivities were determined with various plant materials. Both the qPCR and LAMP methods were reliable and accurate in the rapid detection of tobacco components in various practical samples, including customs samples, reconstituted tobacco samples, and locally purchased cigarettes, showing high potential for application in tobacco identification, particularly in special cases where the morphology or chemical composition of the tobacco has been disrupted. Therefore, combining both methods would facilitate not only the control of tobacco smuggling but also tariff classification and excise collection. PMID:27635142

  3. Electronic Versus Paper-Based Assessment of Health-Related Quality of Life Specific to HIV Disease: Reliability Study of the PROQOL-HIV Questionnaire

    PubMed Central

    Lalanne, Christophe; Goujard, Cécile; Herrmann, Susan; Cheung-Lung, Christian; Brosseau, Jean-Paul; Schwartz, Yannick; Chassany, Olivier

    2014-01-01

    Background Electronic patient-reported outcomes (PRO) provide quick and usually reliable assessments of patients’ health-related quality of life (HRQL). Objective An electronic version of the Patient-Reported Outcomes Quality of Life-human immunodeficiency virus (PROQOL-HIV) questionnaire was developed, and its face validity and reliability were assessed using standard psychometric methods. Methods A sample of 80 French outpatients (66% male, 52/79; mean age 46.7 years, SD 10.9) were recruited. Paper-based and electronic questionnaires were completed in a randomized crossover design (2-7 day interval). Biomedical data were collected. Questionnaire version and order effects were tested on full-scale scores in a 2-way ANOVA with patients as random effects. Test-retest reliability was evaluated using Pearson and intraclass correlation coefficients (ICC, with 95% confidence interval) for each dimension. Usability testing was carried out from patients’ survey reports, specifically, general satisfaction, ease of completion, quality and clarity of user interface, and motivation to participate in follow-up PROQOL-HIV electronic assessments. Results Questionnaire version and administration order effects (N=59 complete cases) were not significant at the 5% level, and no interaction was found between these 2 factors (P=.94). Reliability indexes were acceptable, with Pearson correlations greater than .7 and ICCs ranging from .708 to .939; scores were not statistically different between the two versions. A total of 63 (79%) complete patients’ survey reports were available, and 55% of patients (30/55) reported being satisfied and interested in electronic assessment of their HRQL in clinical follow-up. Individual ratings of PROQOL-HIV user interface (85%-100% of positive responses) confirmed user interface clarity and usability. Conclusions The electronic PROQOL-HIV introduces minor modifications to the original paper-based version, following International Society for
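Test-retest ICCs like those reported above (two administrations per patient) can be computed from a two-way ANOVA decomposition; ICC(2,1), the two-way random-effects, absolute-agreement, single-measures form, is one common choice. A minimal sketch with hypothetical paper/electronic score pairs (not the study's data):

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data: one row per subject, one column per administration."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between versions
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical [paper, electronic] dimension scores per patient.
pairs = [[4, 5], [5, 5], [7, 6], [8, 8], [3, 4], [6, 7]]
print(round(icc_2_1(pairs), 3))
```

Values above roughly 0.7, like the 0.708-0.939 range reported, are usually read as acceptable test-retest agreement for group-level comparisons.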

  4. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. Design of the fasteners involves variables related to fabrication, manufacturing, the behavior of the fastener and joining-part materials, the structural geometry of the joining components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve reliability and verification testing.

  5. Reliable Multihop Broadcast Protocol with a Low-Overhead Link Quality Assessment for ITS Based on VANETs in Highway Scenarios

    PubMed Central

    Galaviz-Mosqueda, Alejandro; Villarreal-Reyes, Salvador; Galeana-Zapién, Hiram; Rubio-Loyola, Javier; Covarrubias-Rosales, David H.

    2014-01-01

    Vehicular ad hoc networks (VANETs) have been identified as a key technology to enable intelligent transport systems (ITS), which aim to radically improve the safety, comfort, and greenness of vehicles on the road. However, in order to fully exploit the potential of VANETs, several issues must be addressed. Because of the highly dynamic nature of VANETs and the impairments of the wireless channel, one key issue arising when working with VANETs is the multihop dissemination of broadcast packets for safety and infotainment applications. In this paper a reliable low-overhead multihop broadcast (RLMB) protocol is proposed to address the well-known broadcast storm problem. The proposed RLMB takes advantage of the hello messages exchanged between vehicles and processes this information to intelligently select a relay set and reduce redundant broadcasts. Additionally, to reduce the dependency on the hello message rate, RLMB uses a point-to-zone link evaluation approach. RLMB performance is compared with one of the leading multihop broadcast protocols to date. Performance metrics show that our RLMB solution outperforms the leading protocol in terms of important metrics such as packet dissemination ratio, overhead, and delay. PMID:25133224

  6. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    SciTech Connect

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie; Mandelli, Diego; Smith, Curtis Lee

    2015-09-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  7. Cost and Reliability Improvement for CIGS-Based PV on Flexible Substrate: May 24, 2006 -- July 31, 2010

    SciTech Connect

    Wiedeman, S.

    2011-05-01

    Global Solar Energy rapidly advances the cost and performance of commercial thin-film CIGS products using roll-to-roll processing on steel foil substrate in compact, low-cost deposition equipment with in-situ sensors for real-time intelligent process control. Substantial increases in power module efficiency, which now exceeds 13%, are evident at GSE factories in two countries with a combined capacity greater than 75 MW. During 2009, the average efficiency of cell strings (3780 cm²) was increased from 7% to over 11%, with champion results exceeding 13%. Continued testing of module reliability in rigid product has reaffirmed extended life expectancy for the standard glass product and has qualified additional lower-cost methods and materials. Expected lifetime for PV in flexible packages continues to increase as failure mechanisms are elucidated and resolved through better methods and materials. Cost reduction has been achieved through better materials utilization and enhanced vendor and material qualification and selection. The largest cost gains have come as a result of higher cell conversion efficiency and yields, higher processing rates, greater automation, and improved control in all process steps. These improvements are integral to this thin-film PV partnership program, and all were realized with the 'Gen2' manufacturing plants, processes and equipment.

  8. Reliable multihop broadcast protocol with a low-overhead link quality assessment for ITS based on VANETs in highway scenarios.

    PubMed

    Galaviz-Mosqueda, Alejandro; Villarreal-Reyes, Salvador; Galeana-Zapién, Hiram; Rubio-Loyola, Javier; Covarrubias-Rosales, David H

    2014-01-01

    Vehicular ad hoc networks (VANETs) have been identified as a key technology to enable intelligent transport systems (ITS), which aim to radically improve the safety, comfort, and greenness of vehicles on the road. However, in order to fully exploit the potential of VANETs, several issues must be addressed. Because of the highly dynamic nature of VANETs and the impairments of the wireless channel, one key issue arising when working with VANETs is the multihop dissemination of broadcast packets for safety and infotainment applications. In this paper a reliable low-overhead multihop broadcast (RLMB) protocol is proposed to address the well-known broadcast storm problem. The proposed RLMB takes advantage of the hello messages exchanged between vehicles and processes this information to intelligently select a relay set and reduce redundant broadcasts. Additionally, to reduce the dependency on the hello message rate, RLMB uses a point-to-zone link evaluation approach. RLMB performance is compared with one of the leading multihop broadcast protocols to date. Performance metrics show that our RLMB solution outperforms the leading protocol in terms of important metrics such as packet dissemination ratio, overhead, and delay.

  9. An Internet-based symptom questionnaire that is reliable, valid, and available to psychiatrists, neurologists, and psychologists.

    PubMed

    Gualtieri, C Thomas

    2007-10-03

    The Neuropsych Questionnaire (NPQ) addresses 2 important clinical issues: how to screen patients for a wide range of neuropsychiatric disorders quickly and efficiently, and how to acquire independent verification of a patient's complaints. The NPQ is available over the Internet in adult and pediatric versions. The adult version of the NPQ consists of 207 simple questions about common symptoms of neuropsychiatric disorders. The NPQ scores patient and/or observer responses in terms of 20 symptom clusters: inattention, hyperactivity-impulsivity, learning problems, memory, anxiety, panic, agoraphobia, obsessions and compulsions, social anxiety, depression, mood instability, mania, aggression, psychosis, somatization, fatigue, sleep, suicide, pain, and substance abuse. The NPQ is reliable (patients tested twice, patient-observer pairs, 2 observers) and discriminates patients with different diagnoses. Scores generated by the NPQ correlate reasonably well with commonly used rating scales, and the test is sensitive to the effects of treatment. The NPQ is suitable for initial patient evaluations, and a short form is appropriate for follow-up assessment. The availability of a comprehensive computerized symptom checklist can help to make the day-to-day practice of psychiatry, neurology, and neuropsychology more objective.

  10. Using a Web-Based Approach to Assess Test-Retest Reliability of the "Hypertension Self-Care Profile" Tool in an Asian Population: A Validation Study.

    PubMed

    Koh, Yi Ling Eileen; Lua, Yi Hui Adela; Hong, Liyue; Bong, Huey Shin Shirley; Yeo, Ling Sui Jocelyn; Tsang, Li Ping Marianne; Ong, Kai Zhi; Wong, Sook Wai Samantha; Tan, Ngiap Chuan

    2016-03-01

    Essential hypertension often requires affected patients to self-manage their condition most of the time. Besides seeking regular medical review of their life-long condition to detect vascular complications, patients have to maintain healthy lifestyles between physician consultations via diet and physical activity, and to take their medications as prescribed. Their self-management ability is influenced by their self-efficacy capacity, which can be assessed using questionnaire-based tools. The "Hypertension Self-Care Profile" (HTN-SCP) is 1 such questionnaire, assessing self-efficacy in the domains of "behavior," "motivation," and "self-efficacy." This study aims to determine the test-retest reliability of the HTN-SCP in an English-literate Asian population using a web-based approach. Multiethnic Asian patients, aged 40 years and older, with essential hypertension were recruited from a typical public primary care clinic in Singapore. The investigators guided the patients to fill in the web-based 60-item HTN-SCP in English using a tablet or smartphone on the first visit, and the patients completed the instrument again 2 weeks later in the retest. Internal consistency and test-retest reliability were evaluated using Cronbach's Alpha and intraclass correlation coefficients (ICC), respectively. The t test was used to determine the relationship between the overall HTN-SCP scores of the patients and their self-reported self-management activities. A total of 160 patients completed the HTN-SCP during the initial test, from which 71 test-retest responses were completed. No floor or ceiling effect was found for the scores for the 3 subscales. Cronbach's Alpha coefficients were 0.857, 0.948, and 0.931 for the "behavior," "motivation," and "self-efficacy" domains respectively, indicating high internal consistency. The item-total correlation ranges for the 3 scales were 0.105 to 0.656 for Behavior, 0.401 to 0.808 for Motivation, and 0.349 to 0.789 for Self-efficacy. The corresponding
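
    The internal-consistency statistic reported above (Cronbach's Alpha) reduces to a short formula over an item-score matrix. A minimal sketch; the respondent data below are purely illustrative, not from the study:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data: 6 respondents x 4 Likert items (hypothetical values)
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 3, 4, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 3))
```

Values above roughly 0.8, as reported for the three HTN-SCP subscales, are conventionally read as high internal consistency.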

  11. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  12. Adequacy of Asymptotic Normal Theory in Estimating Reliability for Mastery Tests Based on the Beta-Binomial Model.

    ERIC Educational Resources Information Center

    Huynh, Huynh

    1981-01-01

    Simulated data based on five test score distributions indicate that a slight modification of the asymptotic normal theory for the estimation of the p and kappa indices in mastery testing will provide results which are in close agreement with those based on small samples from the beta-binomial distribution. (Author/BW)

  13. PSCC: Sensitive and Reliable Population-Scale Copy Number Variation Detection Method Based on Low Coverage Sequencing

    PubMed Central

    Vogel, Ida; Choy, Kwong Wai; Chen, Fang; Christensen, Rikke; Zhang, Chunlei; Ge, Huijuan; Jiang, Haojun; Yu, Chang; Huang, Fang; Wang, Wei; Jiang, Hui; Zhang, Xiuqing

    2014-01-01

    Background Copy number variations (CNVs) represent an important type of genetic variation that deeply impact phenotypic polymorphisms and human diseases. The advent of high-throughput sequencing technologies provides an opportunity to revolutionize the discovery of CNVs and to explore their relationship with diseases. However, most of the existing methods depend on sequencing depth and show instability with low sequence coverage. In this study, using low coverage whole-genome sequencing (LCS) we have developed an effective population-scale CNV calling (PSCC) method. Methodology/Principal Findings In our novel method, two-step correction was used to remove biases caused by local GC content and complex genomic characteristics. We chose a binary segmentation method to locate CNV segments and designed combined statistics tests to ensure the stable performance of the false positive control. The simulation data showed that our PSCC method could achieve 99.7%/100% and 98.6%/100% sensitivity and specificity for over 300 kb CNV calling in the condition of LCS (∼2×) and ultra LCS (∼0.2×), respectively. Finally, we applied this novel method to analyze 34 clinical samples with an average of 2× LCS. In the final results, all the 31 pathogenic CNVs identified by aCGH were successfully detected. In addition, the performance comparison revealed that our method had significant advantages over existing methods using ultra LCS. Conclusions/Significance Our study showed that PSCC can sensitively and reliably detect CNVs using low coverage or even ultra-low coverage data through population-scale sequencing. PMID:24465483

  14. Dual-Fuel Combustion Turbine Provides Reliable Power to U.S. Navy Submarine Base New London in Groton, Connecticut

    SciTech Connect

    Halverson, Mark A.

    2002-01-01

    In keeping with a long-standing tradition of running Base utilities as a business, the U.S. Navy Submarine Base New London installed a dual-fuel combustion turbine with a heat recovery boiler. The 5-megawatt (MW) gas- and oil-fired combustion turbine sits within the Lower Base area, just off the shores of the Thames River. The U.S. Navy owns, operates, and maintains the combined heat and power (CHP) plant, which provides power to the Navy's nuclear submarines when they are in port and to the Navy's training facilities at the Submarine Base. Heat recovered from the turbine is used to produce steam for use in Base housing, medical facilities, and laundries. In FY00, the Navy estimates that it will save over $500,000 per year as a result of the combined heat and power unit.

  15. Can There Be Reliability without "Reliability?"

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2004-01-01

    An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as…

  16. Reliability Validation and Improvement Framework

    DTIC Science & Technology

    2012-11-01

    discover and remove bugs using various test coverage metrics to determine test sufficiency. Failure-probability density function based on code metrics ... Coverage Metrics Traditional reliability engineering has focused on fault density and reliability growth as key metrics. These are statistical ... abs_all.jsp?arnumber=781027 [Kwiatkowska 2010] Kwiatkowska, M., Norman, G., & Parker, D. "Advances and Challenges of Probabilistic Model Checking

  17. Inter-operator Reliability of Magnetic Resonance Image-Based Computational Fluid Dynamics Prediction of Cerebrospinal Fluid Motion in the Cervical Spine.

    PubMed

    Martin, Bryn A; Yiallourou, Theresia I; Pahlavian, Soroush Heidari; Thyagaraj, Suraj; Bunck, Alexander C; Loth, Francis; Sheffer, Daniel B; Kröger, Jan Robert; Stergiopulos, Nikolaos

    2016-05-01

    For the first time, inter-operator dependence of MRI-based computational fluid dynamics (CFD) modeling of cerebrospinal fluid (CSF) in the cervical spinal subarachnoid space (SSS) is evaluated. In vivo MRI flow measurements and anatomy MRI images were obtained at the cervico-medullary junction of a healthy subject and a Chiari I malformation patient. 3D anatomies of the SSS were reconstructed by manual segmentation by four independent operators for both cases. CFD results were compared at nine axial locations along the SSS in terms of hydrodynamic and geometric parameters. Intraclass correlation (ICC) assessed the inter-operator agreement for each parameter over the axial locations, and the coefficient of variation (CV) compared the percentage variation of each parameter between the operators. Greater operator dependence was found for the patient (0.19 < ICC < 0.99) near the craniovertebral junction compared to the healthy subject (ICC > 0.78). For the healthy subject, hydraulic diameter and Womersley number had the least variance (CV = ~2%). For the patient, peak diastolic velocity and Reynolds number had the smallest variance (CV = ~3%). These results show a high degree of inter-operator reliability for MRI-based CFD simulations of CSF flow in the cervical spine for healthy subjects and a lower degree of reliability for patients with Type I Chiari malformation.
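
    The coefficient of variation used above to compare operators is simply the standard deviation expressed as a percentage of the mean. A minimal sketch; the four operator measurements below are hypothetical, not values from the paper:

```python
import statistics

def coefficient_of_variation(values):
    """Percent variation of a parameter across operators: 100 * sd / mean."""
    return 100.0 * statistics.stdev(values) / statistics.fmean(values)

# Hypothetical hydraulic diameters (mm) from four independent segmentations
diameters = [13.1, 13.3, 12.9, 13.2]
print(round(coefficient_of_variation(diameters), 2))  # → 1.3
```

A CV near 2%, as reported for the healthy subject, indicates that the four operators' segmentations yield nearly interchangeable hydrodynamic parameters.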

  18. How to work towards more reliable residual water estimations? State of the art in Switzerland and ideas for using physical based approaches

    NASA Astrophysics Data System (ADS)

    Floriancic, Marius; Margreth, Michael; Naef, Felix

    2016-04-01

    Reliable low flow estimations are important for many ecologic and economic reasons. In Switzerland the basis for defining residual flow is Q347 (Q95), the discharge exceeded on 347 days per year. To improve estimations, we need further knowledge of the dominant processes of storage and drainage during low flow periods. We present new approaches to define Q347 based on physical properties of the contributing slopes and catchment parts. We used dominant runoff process maps, representing the storage and drainage capacity of soils, to predict discharge during dry periods. We found that recession depends on these processes, but during low flow periods and times of water scarcity different mechanisms sustain discharge and streamflow. During an extended field campaign in the dry summer of 2015, we surveyed the drainage behavior of different landscape elements in the Swiss midlands and found major differences in their contribution to discharge. The contributing storages have small volumes but long residence times, influenced mainly by the pore volume distribution and flow paths in fractured rocks and bedrock. We found that steep areas formed of sandstones are more likely to sustain higher flows than flat alluvial valleys or areas with thick moraine deposits, where infiltration takes place more frequently. The gathered knowledge helps in assessing catchment-scale low flow issues and supports more reliable estimations of water availability during dry periods. Furthermore, the presented approach may help detect areas more or less vulnerable to extended dry periods, an important ecologic and economic issue, especially under changing climatic conditions.
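
    The Q347 (Q95) statistic defined above can be read off a flow-duration curve: sort a year of daily discharges and take the value exceeded on 347 of 365 days. A minimal sketch; the synthetic log-normal flow series is purely illustrative:

```python
import numpy as np

def q347(daily_flows):
    """Discharge exceeded on 347 of 365 days (the Q95 low-flow statistic)."""
    flows = np.sort(np.asarray(daily_flows, dtype=float))[::-1]  # descending
    rank = int(round(347 / 365 * len(flows))) - 1  # 347th-largest value
    return flows[rank]

# Hypothetical year of daily discharges (m³/s)
rng = np.random.default_rng(0)
flows = rng.lognormal(mean=1.0, sigma=0.5, size=365)
print(round(float(q347(flows)), 3))
```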

  19. Reliability assessment of GaAs- and InP-based diode lasers for high-energy single-pulse operation

    NASA Astrophysics Data System (ADS)

    Maiorov, M.; Damm, D.; Trofimov, I.; Zeidel, V.; Sellers, R.

    2009-08-01

    With the maturing of high-power diode laser technology, studies of laser-assisted ignition of a variety of substances are becoming an increasingly popular research topic. Its range of applications is wide - from fusing in the defense, construction and exploration industries to ignition in future combustion engines. Recent advances in InP-based technology have expanded the wavelength range that can be covered by multi-watt GaAs- and InP-based diode lasers to about 0.8 to 2 μm. With such a wide range, the wattage is no longer the sole defining factor for efficient ignition. Ignition-related studies should include the interaction of radiation of various wavelengths with matter and the reliability of devices based on different material systems. In this paper, we focus on the reliability of pulsed laser diodes for use in ignition applications. We discuss the existing data on the catastrophic optical damage (COD) of the mirrors of GaAs-based laser diodes and propose a non-destructive test method to predict the COD level of a particular device. This allows pre-characterization of the devices intended for fusing to eliminate failures during single-pulse operation in the field. We also tested InP-based devices and demonstrated that their maximum power is not limited by COD. Currently, devices with >10W output power are available in both GaAs- and InP-based material systems, which dramatically expands the potential use of laser diodes in ignition systems.

  20. In-plant reliability data base for nuclear plant components: a feasibility study on human error information

    SciTech Connect

    Borkowski, R.J.; Fragola, J.R.; Schurman, D.L.; Johnson, J.W.

    1984-03-01

    This report documents the procedure and final results of a feasibility study which examined the usefulness of nuclear plant maintenance work requests in the IPRDS as tools for understanding human error and its influence on component failure and repair. Developed in this study were (1) a set of criteria for judging the quality of a plant maintenance record set for studying human error; (2) a scheme for identifying human errors in the maintenance records; and (3) two taxonomies (engineering-based and psychology-based) for categorizing and coding human error-related events.

  1. Assessment of home-based behavior modification programs for autistic children: reliability and validity of the behavioral summarized evaluation.

    PubMed

    Oneal, Brent J; Reeb, Roger N; Korte, John R; Butter, Eliot J

    2006-01-01

    Since the publication of Lovaas' (1987) impressive findings, there has been a proliferation of home-based behavior modification programs for autistic children. Parents and other paraprofessionals often play key roles in the implementation and monitoring of these programs. The Behavioral Summarized Evaluation (BSE) was developed for professionals and paraprofessionals to use in assessing the severity of autistic symptoms over the course of treatment. This paper examined the psychometric properties of the BSE (inter-item consistency, factorial composition, convergent validity, and sensitivity to parents' perceptions of symptom change over time) when used by parents of autistic youngsters undergoing home-based intervention. Recommendations for future research are presented.

  2. Curriculum-Based Measurement of Oral Reading: A Preliminary Investigation of Confidence Interval Overlap to Detect Reliable Growth

    ERIC Educational Resources Information Center

    Van Norman, Ethan R.

    2016-01-01

    Curriculum-based measurement of oral reading (CBM-R) progress monitoring data is used to measure student response to instruction. Federal legislation permits educators to use CBM-R progress monitoring data as a basis for determining the presence of specific learning disabilities. However, decision making frameworks originally developed for CBM-R…

  3. Alternative Methods to Curriculum-Based Measurement for Written Expression: Implications for Reliability and Validity of the Scores

    ERIC Educational Resources Information Center

    Merrigan, Teresa E.

    2012-01-01

    The purpose of the current study was to evaluate the psychometric properties of alternative approaches to administering and scoring curriculum-based measurement for written expression. Specifically, three response durations (3, 5, and 7 minutes) and six score types (total words written, words spelled correctly, percent of words spelled correctly,…

  4. On the Reliability and Validity of Human and LSA-Based Evaluations of Complex Student-Authored Texts

    ERIC Educational Resources Information Center

    Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit

    2012-01-01

    This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…

  5. Assuring Electronics Reliability: What Could and Should Be Done Differently

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    The following “ten commandments” for the predicted and quantified reliability of aerospace electronic and photonic products are addressed and discussed: 1) The best product is the best compromise between the needs for reliability, cost effectiveness and time-to-market; 2) Reliability cannot be low, need not be higher than necessary, but has to be adequate for a particular product; 3) When reliability is imperative, the ability to quantify it is a must, especially if optimization is considered; 4) One cannot design a product with quantified, optimized and assured reliability by limiting the effort to highly accelerated life testing (HALT), which does not quantify reliability; 5) Reliability is conceived at the design stage and should be taken care of, first of all, at this stage, when a “genetically healthy” product should be created; reliability evaluations and assurances cannot be delayed until the product is fabricated and shipped to the customer, i.e., cannot be left to the prognostics-and-health-monitoring/managing (PHM) stage; it is too late at this stage to change the design or the materials for improved reliability; that is why, when reliability is imperative, users re-qualify parts to assess their lifetime and use redundancy to build a highly reliable system out of insufficiently reliable components; 6) Design, fabrication, qualification and PHM efforts should consider and be specific for particular products and their most likely actual or at least anticipated application(s); 7) Probabilistic design for reliability (PDfR) is an effective means for improving the state of the art in the field: nothing is perfect, and the difference between an unreliable product and a robust one is “merely” the probability of failure (PoF); 8) Highly cost-effective and highly focused failure-oriented accelerated testing (FOAT) geared to a particular pre-determined reliability model and aimed at understanding the physics of failure anticipated by this model is an
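
    The probability of failure (PoF) that item 7 treats as the measure of robustness is commonly quantified with a two-parameter Weibull model. A minimal sketch; the parameter values are hypothetical, not from the text:

```python
import math

def weibull_pof(t, eta, beta):
    """Probability of failure by time t under a two-parameter Weibull model:
    F(t) = 1 - exp(-(t / eta)**beta); eta = characteristic life, beta = shape."""
    return 1.0 - math.exp(-((t / eta) ** beta))

# Hypothetical wear-out case: characteristic life 10,000 h, shape 2.0
print(round(weibull_pof(5000, eta=10000, beta=2.0), 4))  # → 0.2212
```

In the PDfR view, design targets are then stated as an acceptable F(t) over the mission time rather than as pass/fail test outcomes.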

  6. Reliability of Predicting Early Hospital Readmission After Discharge For An Acute Coronary Syndrome using Claims-Based Data

    PubMed Central

    McManus, David D.; Saczynski, Jane S.; Lessard, Darleen; Waring, Molly E.; Allison, Jeroan; Parish, David C.; Goldberg, Robert J.; Ash, Arlene; Kiefe, Catarina I.

    2015-01-01

    Early rehospitalization after discharge for an acute coronary syndrome (ACS), including acute myocardial infarction (AMI), is generally considered undesirable. The Centers for Medicare and Medicaid Services (CMS) base hospital financial incentives on risk-adjusted readmission rates following AMI, using claims data in its adjustment models. Little is known about the contribution to readmission risk of factors not captured by claims. For 804 consecutive patients over 65 years old discharged in 2011–13 from 6 hospitals in Massachusetts and Georgia after an ACS, we compared a CMS-like readmission prediction model with an enhanced model incorporating additional clinical, psychosocial, and sociodemographic characteristics, after principal components analysis. Mean age was 73 years, 38% were women, 25% college educated, 32% had a prior AMI; all-cause re-hospitalization occurred within 30 days for 13%. In the enhanced model, prior coronary intervention [Odds Ratio=2.05 95% Confidence Interval (1.34, 3.16)], chronic kidney disease [1.89 (1.15, 3.10)], low health literacy [1.75 (1.14, 2.69)], lower serum sodium levels, and current non-smoker status were positively associated with readmission. The discriminative ability of the enhanced vs. the claims-based model was higher without evidence of over-fitting. For example, for patients in the highest deciles of readmission likelihood, observed readmissions occurred in 24% for the claims-based model and 33% for the enhanced model. In conclusion, readmission may be influenced by measurable factors not in CMS’ claims-based models and not controllable by hospitals. Incorporating additional factors into risk-adjusted readmission models may improve their accuracy and validity for use as indicators of hospital quality. PMID:26718235

  7. Reliability science and patient safety.

    PubMed

    Luria, Joseph W; Muething, Stephen E; Schoettker, Pamela J; Kotagal, Uma R

    2006-12-01

    Reliability is failure-free operation over time--the measurable capability of a process, procedure, or service to perform its intended function. Reliability science has the potential to help health care organizations reduce defects in care, increase the consistency with which care is delivered, and improve patient outcomes. Based on its principles, the Institute for Healthcare Improvement has developed a three-step model to prevent failures, mitigate the failures that occur, and redesign systems to reduce failures. Lessons may also be learned from complex organizations that have already adopted the principles of reliability science and operate with high rates of reliability. They share a preoccupation with failure, reluctance to simplify interpretations, sensitivity to operations, commitment to resilience, and underspecification of structures.
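
    Reading reliability as failure-free operation implies that, in a multi-step care process, the step reliabilities multiply (assuming independent steps). A small illustrative sketch; the step count and success rates are hypothetical:

```python
def process_reliability(step_success_rates):
    """Failure-free rate of a sequential process: every step must succeed,
    so (assuming independence) the step reliabilities multiply."""
    r = 1.0
    for p in step_success_rates:
        r *= p
    return r

# Five care steps, each performed correctly 95% of the time
print(round(process_reliability([0.95] * 5), 3))  # → 0.774
```

Even modestly unreliable steps compound quickly, which is one motivation for models that prevent and mitigate failures rather than relying on per-step vigilance.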

  8. Extending the use of the Web-based HIV Testing Belief Inventory to students attending historically Black colleges and universities: an examination of reliability and validity.

    PubMed

    Hou, Su-I

    2009-02-01

    This study sought to extend the use of a Web-based HIV Testing Belief Inventory (wHITBI), developed and validated in a majority White university in the southeastern United States, to students attending historically Black colleges and universities (HBCUs). The 19-item wHITBI was reviewed by experts to qualitatively assess its construct validity, clarity, relevancy, and comprehensiveness to HBCU students. Participants were recruited from 15 HBCUs (valid N = 372). Mean age was 20.5 years (SD = 2.4), 80% were females, 92% were heterosexual-oriented, and 58% had prior HIV test(s). Reliability coefficients revealed satisfactory internal consistencies (Cronbach's alphas: .58 to .85). Confirmatory factor analysis showed that items were loaded consistently with the four constructs: perceived benefits, concerns of HIV risk, stigma, and testing availability/accessibility. Data indicated good model fits (RMSEA = .06; CFI = .93; IFI = .93; RMS = .07), with all items loaded significantly. Findings showed that the psychometrics of wHITBI appears to maintain its integrity in this sample with satisfactory reliability coefficients and validities.

  9. Development of Stronger and More Reliable Cast Austenitic Stainless Steels (H-Series) Based on Scientific Design Methodology

    SciTech Connect

    Muralidharan, G.; Sikka, V.K.; Pankiw, R.I.

    2006-04-15

    The goal of this program was to increase the high-temperature strength of the H-Series of cast austenitic stainless steels by 50% and the upper use temperature by 86 to 140°F (30 to 60°C). Meeting this goal is expected to result in energy savings of 38 trillion Btu/year by 2020 and energy cost savings of $185 million/year. The higher strength H-Series of cast stainless steels (HK and HP type) have applications for the production of ethylene in the chemical industry, for radiant burner tubes and transfer rolls for secondary processing of steel in the steel industry, and for many applications in the heat-treating industry. The project was led by Duraloy Technologies, Inc. with research participation by the Oak Ridge National Laboratory (ORNL) and industrial participation by a diverse group of companies. Energy Industries of Ohio (EIO) was also a partner in this project. Each team partner had well-defined roles. Duraloy Technologies led the team by identifying the base alloys that were to be improved from this research. Duraloy Technologies also provided an extensive creep data base on current alloys, provided creep-tested specimens of certain commercial alloys, and carried out centrifugal casting and component fabrication of newly designed alloys. Nucor Steel was the first partner company that installed the radiant burner tube assembly in their heat-treating furnace. Other steel companies participated in project review meetings and are currently working with Duraloy Technologies to obtain components of the new alloys. EIO is promoting the enhanced performance of the newly designed alloys to Ohio-based companies. The Timken Company is one of the Ohio companies being promoted by EIO. The project management and coordination plan is shown in Fig. 1.1. A related project at University of Texas-Arlington (UT-A) is described in Development of Semi-Stochastic Algorithm for Optimizing Alloy Composition of High-Temperature Austenitic Stainless Steels (H-Series) for Desired

  10. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
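
    The fault-tree approach mentioned above derives system reliability from component reliabilities through series (all must work) and parallel/redundant (at least one must work) combinations. A minimal sketch for a fictitious converter; all component values and names are hypothetical:

```python
def series(*rs):
    """All components must work: reliabilities multiply."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Redundant components: the system fails only if all of them fail."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical converter: redundant gate drivers (0.99 each) in series
# with a DC-link capacitor (0.995) and a controller (0.98)
print(round(series(parallel(0.99, 0.99), 0.995, 0.98), 4))  # → 0.975
```

Such a baseline model makes it easy to test where added redundancy or a higher-grade component buys the most system reliability per dollar.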

  11. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  12. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding.

  13. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the users portion of the FPGA design flow. It is assumed that the manufacturer prior to hand-off to the user tests FPGA internal components. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  14. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG), a program which produces reliability models from block diagrams for ASSIST, the interface for the reliability evaluation tool SURE is described. An account is given of motivation for RMG and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  15. Viking Lander reliability program

    NASA Technical Reports Server (NTRS)

    Pilny, M. J.

    1978-01-01

    The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.

  16. Theory of reliable systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1975-01-01

    An attempt was made to refine the current notion of system reliability by identifying and investigating attributes of a system which are important to reliability considerations. Techniques which facilitate analysis of system reliability are included. Special attention was given to fault tolerance, diagnosability, and reconfigurability characteristics of systems.

  17. Predicting software reliability

    NASA Technical Reports Server (NTRS)

    Littlewood, B.

    1989-01-01

    A detailed look is given to software reliability techniques. A conceptual model of the failure process is examined, and some software reliability growth models are discussed. Problems for which no current solutions exist are addressed, emphasizing the very difficult problem of safety-critical systems for which the reliability requirements can be enormously demanding.
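
    One widely used example of the reliability growth models discussed is the Goel-Okumoto non-homogeneous Poisson process model. A minimal sketch; the fitted parameter values below are hypothetical:

```python
import math

def expected_failures(t, a, b):
    """Goel-Okumoto growth model: m(t) = a * (1 - exp(-b*t)),
    a = expected total number of faults, b = per-unit-time detection rate."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure rate: the derivative m'(t) = a*b*exp(-b*t)."""
    return a * b * math.exp(-b * t)

# Hypothetical fit: 100 latent faults, detection rate 0.05 per test-week
print(round(expected_failures(20, a=100, b=0.05), 1))  # → 63.2
```

For safety-critical systems the difficulty noted above is that demonstrating very small residual failure intensities from such fits requires infeasibly long testing.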

  18. Reliability of a new method for evaluating femoral stem positioning after total hip arthroplasty based on stereoradiographic 3D reconstruction.

    PubMed

    Guenoun, Benjamin; El Hajj, Firass; Biau, David; Anract, Philippe; Courpied, Jean-Pierre

    2015-01-01

    The goal of this study was to validate a new method for determining femoral stem positioning based on 3D models derived from the EOS biplanar system. Independent observers measured stem anteversion and femoral offset using CT scans and the EOS system on 28 femoral stems implanted in composite femurs. In parallel, the same parameters were measured on biplanar lower limb radiographs acquired from 30 patients who had undergone total hip arthroplasty. CT scanner and biplanar X-ray measurements on composite femurs were highly correlated: 0.94 for femoral offset (P < 0.01), 0.98 for stem anteversion (P < 0.01). The inter- and intra-observer reproducibility when measuring composite bones was excellent with both imaging modalities, as was the reproducibility when measuring femoral stem positioning in patients with the biplanar X-ray system.

  19. Multisite longitudinal reliability of tract-based spatial statistics in diffusion tensor imaging of healthy elderly subjects.

    PubMed

    Jovicich, Jorge; Marizzoni, Moira; Bosch, Beatriz; Bartrés-Faz, David; Arnold, Jennifer; Benninghoff, Jens; Wiltfang, Jens; Roccatagliata, Luca; Picco, Agnese; Nobili, Flavio; Blin, Oliver; Bombois, Stephanie; Lopes, Renaud; Bordet, Régis; Chanoine, Valérie; Ranjeva, Jean-Philippe; Didic, Mira; Gros-Dagnac, Hélène; Payoux, Pierre; Zoccatelli, Giada; Alessandrini, Franco; Beltramello, Alberto; Bargalló, Núria; Ferretti, Antonio; Caulo, Massimo; Aiello, Marco; Ragucci, Monica; Soricelli, Andrea; Salvadori, Nicola; Tarducci, Roberto; Floridi, Piero; Tsolaki, Magda; Constantinidis, Manos; Drevelegas, Antonios; Rossini, Paolo Maria; Marra, Camillo; Otto, Josephin; Reiss-Zimmermann, Martin; Hoffmann, Karl-Titus; Galluzzi, Samantha; Frisoni, Giovanni B

    2014-11-01

    Large-scale longitudinal neuroimaging studies with diffusion imaging techniques are necessary to test and validate models of white matter neurophysiological processes that change in time, both in healthy and diseased brains. The predictive power of such longitudinal models will always be limited by the reproducibility of repeated measures acquired during different sessions. At present, there is limited quantitative knowledge about the across-session reproducibility of standard diffusion metrics in 3T multi-centric studies on subjects in stable conditions, in particular when using tract based spatial statistics and with elderly people. In this study we implemented a multi-site brain diffusion protocol in 10 clinical 3T MRI sites distributed across 4 countries in Europe (Italy, Germany, France and Greece) using vendor provided sequences from Siemens (Allegra, Trio Tim, Verio, Skyra, Biograph mMR), Philips (Achieva) and GE (HDxt) scanners. We acquired DTI data (2 × 2 × 2 mm(3), b = 700 s/mm(2), 5 b0 and 30 diffusion weighted volumes) of a group of healthy stable elderly subjects (5 subjects per site) in two separate sessions at least a week apart. For each subject and session four scalar diffusion metrics were considered: fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD) and axial (AD) diffusivity. The diffusion metrics from multiple subjects and sessions at each site were aligned to their common white matter skeleton using tract-based spatial statistics. The reproducibility at each MRI site was examined by looking at group averages of absolute changes relative to the mean (%) on various parameters: i) reproducibility of the signal-to-noise ratio (SNR) of the b0 images in centrum semiovale, ii) full brain test-retest differences of the diffusion metric maps on the white matter skeleton, iii) reproducibility of the diffusion metrics on atlas-based white matter ROIs on the white matter skeleton. Despite the differences of MRI scanner

  20. EGNOS-Based Multi-Sensor Accurate and Reliable Navigation in Search-and-Rescue Missions with UAVs

    NASA Astrophysics Data System (ADS)

    Molina, P.; Colomina, I.; Vitoria, T.; Silva, P. F.; Stebler, Y.; Skaloud, J.; Kornus, W.; Prades, R.

    2011-09-01

    This paper will introduce and describe the goals, concept and overall approach of the European 7th Framework Programme project named CLOSE-SEARCH, which stands for 'Accurate and safe EGNOS-SoL Navigation for UAV-based low-cost SAR operations'. The goal of CLOSE-SEARCH is to integrate, in a helicopter-type unmanned aerial vehicle, a thermal imaging sensor and a multi-sensor navigation system (based on the use of a Barometric Altimeter (BA), a Magnetometer (MAGN), a Redundant Inertial Navigation System (RINS) and an EGNOS-enabled GNSS receiver) with an Autonomous Integrity Monitoring (AIM) capability, to support the search component of Search-And-Rescue operations in remote, difficult-to-access areas and/or in time-critical situations. The proposed integration will result in a hardware and software prototype that will demonstrate an end-to-end functionality, that is, to fly in patterns over a region of interest (possibly inaccessible) during day or night and also under adverse weather conditions, and to locate disaster survivors or lost people there through the detection of body heat. This paper will identify the technical challenges of the proposed approach, from navigating with a BA/MAGN/RINS/GNSS-EGNOS-based integrated system to the interpretation of thermal images for person identification. Moreover, the AIM approach will be described together with the proposed integrity requirements. Finally, this paper will show some results obtained in the project during the first test campaign, performed in November 2010. On that day, a prototype was flown in three different missions to assess its high-level performance and to observe some fundamental mission parameters such as the optimal flying height and flying speed to enable body recognition. The second test campaign is scheduled for the end of 2011.

  1. Towards long lasting zirconia-based composites for dental implants: Transformation induced plasticity and its consequence on ceramic reliability.

    PubMed

    Reveron, Helen; Fornabaio, Marta; Palmero, Paola; Fürderer, Tobias; Adolfsson, Erik; Lughi, Vanni; Bonifacio, Alois; Sergo, Valter; Montanaro, Laura; Chevalier, Jérôme

    2017-01-15

    Zirconia-based composites were developed through an innovative processing route able to tune compositional and microstructural features very precisely. Fully-dense ceria-stabilized zirconia ceramics (84 vol% Ce-TZP) containing equiaxed alumina (8 vol% Al2O3) and elongated strontium hexa-aluminate (8 vol% SrAl12O19) second phases were obtained by conventional sintering. This work deals with the effect of the zirconia stabilization degree (CeO2 in the range 10.0-11.5 mol%) on the transformability and mechanical properties of Ce-TZP-Al2O3-SrAl12O19 materials. Vickers hardness, biaxial flexural strength and Single-edge V-notched beam tests revealed a strong influence of ceria content on the mechanical properties. Composites with 11.0 mol% CeO2 or above exhibited the classical behaviour of brittle ceramics, with no apparent plasticity and very low strain to failure. On the contrary, composites with 10.5 mol% CeO2 or less showed large transformation-induced plasticity and almost no dispersion in strength data. Materials with 10.5 mol% of ceria showed the highest values in terms of biaxial bending strength (up to 1.1 GPa) and fracture toughness (>10 MPa√m). In these ceramics, as zirconia transformation precedes failure, the Weibull modulus was exceptionally high and reached a value of 60, which is in the range typically reported for metals. The results achieved demonstrate the high potential of using these new strong, tough and stable zirconia-based composites in structural biomedical applications.
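
    The Weibull modulus of 60 quoted above is the shape parameter of a Weibull fit to the strength data. As an illustration only (not the authors' procedure), the standard estimate comes from a linear regression on the linearized Weibull CDF with median-rank probability estimates; a minimal Python sketch:

```python
import math

def weibull_modulus(strengths):
    """Estimate the Weibull modulus m from a list of fracture strengths
    by least-squares fit of ln(ln(1/(1-F))) against ln(sigma), using the
    median-rank estimator F_i = (i - 0.3) / (n + 0.4)."""
    s = sorted(strengths)
    n = len(s)
    xs, ys = [], []
    for i, sigma in enumerate(s, start=1):
        f = (i - 0.3) / (n + 0.4)
        xs.append(math.log(sigma))
        ys.append(math.log(-math.log(1.0 - f)))
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx  # the slope of the fit is the Weibull modulus
```

    A narrow strength distribution, as reported for the 10.5 mol% CeO2 composites, produces a steep regression line and hence a high modulus.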

  2. Load Control System Reliability

    SciTech Connect

    Trudnowski, Daniel

    2015-04-03

    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech in April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc.; and MSE. Research focused on two areas: real-time power-system load control methodologies, and power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research include: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences, and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  3. Reliable Entanglement Verification

    NASA Astrophysics Data System (ADS)

    Arrazola, Juan; Gittsovich, Oleg; Donohue, John; Lavoie, Jonathan; Resch, Kevin; Lütkenhaus, Norbert

    2013-05-01

    Entanglement plays a central role in quantum protocols. It is therefore important to be able to verify the presence of entanglement in physical systems from experimental data. In the evaluation of these data, the proper treatment of statistical effects requires special attention, as one can never claim to have verified the presence of entanglement with certainty. Recently, increased attention has been paid to the development of proper frameworks to pose and to answer these types of questions. In this work, we apply recent results by Christandl and Renner on reliable quantum state tomography to construct a reliable entanglement verification procedure based on the concept of confidence regions. The statements made do not require the specification of a prior distribution nor the assumption of an independent and identically distributed (i.i.d.) source of states. Moreover, we develop efficient numerical tools that are necessary to employ this approach in practice, rendering the procedure ready to be employed in current experiments. We demonstrate this fact by analyzing the data of an experiment in which entangled two-photon states were generated and whose entanglement is verified with the use of an accessible nonlinear witness.

  4. Reliability of isoelectrofocusing for the detection of Hb S, Hb C, and HB D in a pioneering population-based program of newborn screening in Brazil.

    PubMed

    Paixão, M C; Cunha Ferraz, M H; Januário, J N; Viana, M B; Lima, J M

    2001-08-01

    Out of 128,326 newborns in the first 6-month period of a population-based screening program in Minas Gerais, Brazil, a second sample was obtained at the age of 6 months from 4,635 carriers of Hbs AS, AC, and AD who had been detected by isoelectrofocusing. Discordance in results occurred in only 27 cases (0.6%): in seven there was a history of hemotransfusion; errors during pipetting or transcription of results occurred in seven cases; it was difficult to differentiate between Hbs S and D in eight patients; and the causes were not elucidated in five patients. The incidence of Hbs FS and FSC for the total population was 1:2,800 and 1:3,450, respectively. Isoelectrofocusing is a very reliable method for distinguishing AS, AC, or AD carriers from patients presenting with variant hemoglobin and beta(+)-thalassemia combinations, and may be widely used in massive newborn screening programs.

  5. Using a Web-Based Approach to Assess Test–Retest Reliability of the “Hypertension Self-Care Profile” Tool in an Asian Population

    PubMed Central

    Koh, Yi Ling Eileen; Lua, Yi Hui Adela; Hong, Liyue; Bong, Huey Shin Shirley; Yeo, Ling Sui Jocelyn; Tsang, Li Ping Marianne; Ong, Kai Zhi; Wong, Sook Wai Samantha; Tan, Ngiap Chuan

    2016-01-01

    Essential hypertension often requires affected patients to self-manage their condition most of the time. Besides seeking regular medical review of their life-long condition to detect vascular complications, patients have to maintain healthy lifestyles in between physician consultations via diet and physical activity, and to take their medications according to their prescriptions. Their self-management ability is influenced by their self-efficacy capacity, which can be assessed using questionnaire-based tools. The “Hypertension Self-Care Profile” (HTN-SCP) is one such questionnaire, assessing self-efficacy in the domains of “behavior,” “motivation,” and “self-efficacy.” This study aims to determine the test–retest reliability of the HTN-SCP in an English-literate Asian population using a web-based approach. Multiethnic Asian patients, aged 40 years and older, with essential hypertension were recruited from a typical public primary care clinic in Singapore. The investigators guided the patients in filling in the web-based 60-item HTN-SCP in English using a tablet or smartphone on the first visit, and the patients completed the instrument again 2 weeks later for the retest. Internal consistency and test–retest reliability were evaluated using Cronbach's Alpha and intraclass correlation coefficients (ICC), respectively. The t test was used to determine the relationship between the overall HTN-SCP scores of the patients and their self-reported self-management activities. A total of 160 patients completed the HTN-SCP during the initial test, from which 71 test–retest responses were completed. No floor or ceiling effect was found for the scores for the 3 subscales. Cronbach's Alpha coefficients were 0.857, 0.948, and 0.931 for the “behavior,” “motivation,” and “self-efficacy” domains respectively, indicating high internal consistency. The item-total correlation ranges for the 3 scales were from 0.105 to 0.656 for Behavior, 0.401 to 0.808 for Motivation, 0.349 to 0
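
    Cronbach's Alpha, used above to gauge internal consistency, can be computed directly from an item-score matrix. A stdlib-only sketch (the item scores below are hypothetical, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha: items is a list of item-score columns,
    one equal-length list of respondent scores per questionnaire item."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[j] for col in items) for j in range(n)]
    return (k / (k - 1)) * (1.0 - sum_item_vars / var(totals))

# two perfectly consistent hypothetical items -> alpha = 1.0
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

    Values near 1, like the 0.857-0.948 reported above, indicate that the items within a domain move together across respondents.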

  6. A Reliability-Based Particle Filter for Humanoid Robot Self-Localization in RoboCup Standard Platform League

    PubMed Central

    Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.

    2013-01-01

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098

  7. Reliability of the agar based method to assess the production of degradative enzymes in clinical isolates of Candida albicans.

    PubMed

    Arantes, Paula Tamião; Sanitá, Paula Volpato; Santezi, Carolina; Barbeiro, Camila de Oliveira; Reina, Bárbara Donadon; Vergani, Carlos Eduardo; Dovigo, Lívia Nordi

    2016-03-01

    The aim of this study was to establish a reproducible protocol using the methodology of hyaline zones around the colonies on specific agar plates for phospholipase and proteinase production. This was an in vitro double-blind experiment, in which the dependent variables were the enzymatic activity measurements (Pz) for the production of phospholipase (Pz-ph) and the production of secreted aspartyl proteinases (Pz-sap). Three independent variables gave rise to different measurement protocols. All measurements were carried out at two different moments by four examiners (E1, E2, E3, and E4). The minimum sample size was 30 Candida albicans clinical isolates. Specific agar plates for phospholipase and SAP production were prepared according to the literature. The intra- and inter-examiner reproducibility for each protocol was estimated using the Intraclass Correlation Coefficient (ICC) and its confidence interval (95% CI). Based on the results obtained for both phospholipase and SAPs, there appears to be no consensus on the protocol chosen by each particular examiner. Measuring the colonies in triplicate may be the main factor associated with the increase in measurement accuracy and should therefore take precedence over measuring only one colony. When only one examiner is responsible for taking measurements, a standard protocol should be put in place and the statistical calibration of this researcher should be done prior to data collection. However, if two or more researchers are involved in the assessment of agar plates, our results suggest that protocols using software to undertake plate reading are preferred.
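
    The ICC reported above has several variants; the abstract does not say which was used, so purely as an illustration here is ICC(2,1) (two-way random effects, absolute agreement, single measurement) computed from the ANOVA mean squares:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: one row per subject, each row a list of k rater scores."""
    n = len(ratings)           # subjects (e.g. isolates)
    k = len(ratings[0])        # raters (e.g. examiners)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_tot = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_tot - ss_rows - ss_cols                      # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + (k / n) * (msc - mse))
```

    Because the absolute-agreement form penalizes a constant offset between raters, a systematically generous examiner lowers this ICC even when rankings agree perfectly.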

  8. Ligation-mediated PCR, a fast and reliable technique for insertion sequence-based typing of Xanthomonas citri pv. citri.

    PubMed

    Ngoc, Lan Bui Thi; Vernière, Christian; Belasque, José Júnior; Vital, Karine; Boutry, Sébastien; Gagnevin, Lionel; Pruvost, Olivier

    2008-11-01

    Asiatic citrus canker, caused by Xanthomonas citri pv. citri, is a major disease threatening citrus crops throughout the world. The most common methods for strain differentiation of this pathogen are repetitive element sequence-based PCR (rep-PCR) and pulsed field gel electrophoresis (PFGE), using rare-cutting restriction enzyme analysis. We developed a ligation-mediated PCR targeting three insertion sequences (IS-LM-PCR) present as several copies in the genome of the fully sequenced strain 306 of X. citri pv. citri. This technique amplifies DNA fragments between an insertion sequence element and an MspI restriction site. The analysis of strains can be conducted within 24 h, starting from very small amounts of bacterial DNA, which makes IS-LM-PCR much less labor-intensive than PFGE. We used IS-LM-PCR to analyze a collection of 66 strains of X. citri pv. citri from around the world. The overall reproducibility of IS-LM-PCR reached 98% in this data set and its discriminatory power was markedly superior to that of rep-PCR. We suggest that IS-LM-PCR could be used for the global surveillance of non-epidemiologically related strains of X. citri pv. citri.

  9. Reliable characteristics and stabilization of on-membrane SOI MOSFET-based components heated up to 335 °C

    NASA Astrophysics Data System (ADS)

    Amor, S.; André, N.; Gérard, P.; Ali, S. Z.; Udrea, F.; Tounsi, F.; Mezghani, B.; Francis, L. A.; Flandre, D.

    2017-01-01

    In this work we investigate the characteristics and critical operating temperatures of on-membrane embedded MOSFETs from an experimental and analytical point of view. This study supports the feasibility of integrating electronic circuitry in the close vicinity of micro-heaters and hot-operation transducers. A series of calibrations and measurements has been performed to examine the behavior of transistors, inverters and diodes, actuated at high temperature, on a membrane equipped with an on-chip integrated micro-heater. The studied n- and p-channel body-tied partially-depleted MOSFETs and CMOS inverter are embedded in a 5 μm-thick membrane fabricated by back-side MEMS micromachining using SOI technology. It has been noted that a pre-stabilization step after the harsh post-CMOS processing, through an in situ high-temperature annealing using the micro-heater, is mandatory in order to stabilize the MOSFET characteristics. The electrical characteristics and performance of the on-membrane MOS components are discussed when heated up to 335 °C. This study supports the possibility of extending the potential of the micro-hotplate concept, under certain conditions, by embedding more electronic functionalities on the interface of on-membrane-based sensors, leading to better sensing and actuation performance and a reduction in total area, particularly for environmental or industrial applications.

  10. Reliability of Sleep Measures from Four Personal Health Monitoring Devices Compared to Research-Based Actigraphy and Polysomnography

    PubMed Central

    Mantua, Janna; Gravel, Nickolas; Spencer, Rebecca M. C.

    2016-01-01

    Polysomnography (PSG) is the “gold standard” for monitoring sleep. Alternatives to PSG are of interest for clinical, research, and personal use. Wrist-worn actigraph devices have been utilized in research settings for measures of sleep for over two decades. Whether sleep measures from commercially available devices are similarly valid is unknown. We sought to determine the validity of five wearable devices: Basis Health Tracker, Misfit Shine, Fitbit Flex, Withings Pulse O2, and a research-based actigraph, Actiwatch Spectrum. We used Wilcoxon Signed Rank tests to assess differences between devices relative to PSG and correlational analysis to assess the strength of the relationship. Data loss was greatest for Fitbit and Misfit. For all devices, we found no difference and strong correlation of total sleep time with PSG. Sleep efficiency differed from PSG for Withings, Misfit, Fitbit, and Basis, while Actiwatch mean values did not differ from that of PSG. Only mean values of sleep efficiency (time asleep/time in bed) from Actiwatch correlated with PSG, yet this correlation was weak. Light sleep time differed from PSG (nREM1 + nREM2) for all devices. Measures of Deep sleep time did not differ from PSG (SWS + REM) for Basis. These results reveal the current strengths and limitations in sleep estimates produced by personal health monitoring devices and point to a need for future development. PMID:27164110

  11. Reliability of Sleep Measures from Four Personal Health Monitoring Devices Compared to Research-Based Actigraphy and Polysomnography.

    PubMed

    Mantua, Janna; Gravel, Nickolas; Spencer, Rebecca M C

    2016-05-05

    Polysomnography (PSG) is the "gold standard" for monitoring sleep. Alternatives to PSG are of interest for clinical, research, and personal use. Wrist-worn actigraph devices have been utilized in research settings for measures of sleep for over two decades. Whether sleep measures from commercially available devices are similarly valid is unknown. We sought to determine the validity of five wearable devices: Basis Health Tracker, Misfit Shine, Fitbit Flex, Withings Pulse O2, and a research-based actigraph, Actiwatch Spectrum. We used Wilcoxon Signed Rank tests to assess differences between devices relative to PSG and correlational analysis to assess the strength of the relationship. Data loss was greatest for Fitbit and Misfit. For all devices, we found no difference and strong correlation of total sleep time with PSG. Sleep efficiency differed from PSG for Withings, Misfit, Fitbit, and Basis, while Actiwatch mean values did not differ from that of PSG. Only mean values of sleep efficiency (time asleep/time in bed) from Actiwatch correlated with PSG, yet this correlation was weak. Light sleep time differed from PSG (nREM1 + nREM2) for all devices. Measures of Deep sleep time did not differ from PSG (SWS + REM) for Basis. These results reveal the current strengths and limitations in sleep estimates produced by personal health monitoring devices and point to a need for future development.
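
    The validation above pairs a Wilcoxon signed-rank test (checking for systematic bias against PSG) with a correlation analysis (strength of the relationship). A stdlib-only sketch of both statistics, applied to hypothetical total-sleep-time values in minutes (not the study's measurements):

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank statistic W: the smaller of the positive and
    negative signed-rank sums; ties get average ranks, zero diffs dropped."""
    d = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(d):
        j = i
        while j + 1 < len(d) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for t in range(i, j + 1):          # average rank over the tie group
            ranks[order[t]] = (i + j) / 2 + 1
        i = j + 1
    w_pos = sum(r for r, x in zip(ranks, d) if x > 0)
    w_neg = sum(r for r, x in zip(ranks, d) if x < 0)
    return min(w_pos, w_neg)

def pearson_r(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

psg    = [412, 395, 448, 430, 377, 401, 465, 420]  # hypothetical PSG minutes
device = [418, 390, 455, 426, 380, 408, 470, 415]  # hypothetical device minutes
w, r = wilcoxon_w(psg, device), pearson_r(psg, device)
```

    A small W relative to its null expectation flags a systematic offset, while r close to 1 indicates the device tracks PSG well; the abstract's key point is that both checks must pass.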

  12. RFA-based 589-nm guide star lasers for ESO VLT: a paradigm shift in performance, operational simplicity, reliability, and maintenance

    NASA Astrophysics Data System (ADS)

    Friedenauer, Axel; Karpov, Vladimir; Wei, Daoping; Hager, Manfred; Ernstberger, Bernhard; Clements, Wallace R. L.; Kaenders, Wilhelm G.

    2012-07-01

    Large telescopes equipped with adaptive optics require 20-25 W CW 589-nm sources with emission linewidths of ~5 MHz. These Guide Star (GS) lasers should also be highly reliable and simple to operate and maintain for many years at the top of a mountain facility. Under contract from ESO, industrial partners TOPTICA and MPBC are nearing completion of the development of GS lasers for the ESO VLT, with delivery of the first of four units scheduled for December 2012. We report on the design and performance of the fully-engineered Pre-Production Unit (PPU), including system reliability/availability analysis, the successfully-concluded qualification testing, long-term component- and system-level tests, and long-term maintenance and support planning. The chosen approach is based on ESO's patented narrow-band Raman Fiber Amplifier (RFA) technology. A master oscillator signal from a linearly-polarized TOPTICA 20-mW, 1178-nm CW diode laser, with stabilized emission frequency and controllable linewidth up to a few MHz, is amplified in an MPBC polarization-maintaining (PM) RFA pumped by a high-power 1120-nm PM fiber laser. With efficient stimulated Brillouin scattering suppression, an unprecedented 40 W of narrow-band RFA output has been obtained. This is then mode-matched into a resonant-cavity doubler with a free spectral range matching the sodium D2a to D2b separation, allowing simultaneous generation of an additional frequency component (D2b line) to re-pump the sodium atom electronic population. With this technique, the return flux can be increased without having to resort to electro-optical modulators and without the risk of introducing optical wave-front distortions. The demonstrated output powers with doubling efficiencies >80% at 589 nm easily exceed the 20 W design goal and require less than 700 W of electrical power. In summary, the fiber-based guide star lasers provide excellent beam quality and are modular, turn-key, maintenance-free, reliable, efficient, and ruggedized.

  13. Nonstoichiometric acid-base reaction as reliable synthetic route to highly stable CH3NH3PbI3 perovskite film

    NASA Astrophysics Data System (ADS)

    Long, Mingzhu; Zhang, Tiankai; Chai, Yang; Ng, Chun-Fai; Mak, Thomas C. W.; Xu, Jianbin; Yan, Keyou

    2016-11-01

    Perovskite solar cells have received worldwide interest due to rapidly improving efficiency, but the poor stability of the perovskite component hampers device fabrication under normal conditions. Herein, we develop a reliable nonstoichiometric acid-base reaction route to stable perovskite films by intermediate chemistry and technology. A perovskite thin film prepared by the nonstoichiometric acid-base reaction route is stable for two months with negligible PbI2 impurity under ~65% humidity, whereas other perovskites prepared by traditional methods degrade distinctly after 2 weeks. Route optimization involves the reaction of PbI2 with excess HI to generate HPbI3, which subsequently undergoes reaction with excess CH3NH2 to deliver CH3NH3PbI3 thin films. High quality of the intermediate HPbI3 and CH3NH2 abundance are two factors critical to stable CH3NH3PbI3 perovskite. Excess volatile acid/base not only affords full conversion in the nonstoichiometric acid-base reaction route but also permits its facile removal for stoichiometric purification, resulting in an average efficiency of 16.1% in forward/reverse scans.

  14. An automated process for building reliable and optimal in vitro/in vivo correlation models based on Monte Carlo simulations.

    PubMed

    Sutton, Steven C; Hu, Mingxiu

    2006-05-05

    Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated process over the traditional approach. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the AIC generally selected the best model. We believe that the approach we propose may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
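
    The screening step ranks candidate release models with criteria such as AIC. A toy version of that comparison, fitting two one-parameter dissolution models to hypothetical fraction-released data by grid search and ranking them with the least-squares form of AIC (names and data are illustrative, not the paper's program):

```python
import math

def aic_ls(rss, n, n_params):
    """AIC for a least-squares fit with Gaussian errors:
    n*ln(RSS/n) + 2*(n_params + 1), the +1 counting the error variance."""
    return n * math.log(rss / n) + 2 * (n_params + 1)

def fit_rate(model, ts, ys, grid):
    """Grid-search the single rate constant minimizing the RSS."""
    best_k, best_rss = None, float("inf")
    for k in grid:
        rss = sum((y - model(t, k)) ** 2 for t, y in zip(ts, ys))
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k, best_rss

# hypothetical dissolution profile: fraction released vs. time (hours)
ts = [0.5, 1, 2, 4, 8]
ys = [0.22, 0.39, 0.63, 0.86, 0.98]

first_order = lambda t, k: 1 - math.exp(-k * t)   # Weibull with shape = 1
zero_order  = lambda t, k: min(k * t, 1.0)        # linear release, capped

grid = [i / 1000 for i in range(1, 2001)]
k1, rss1 = fit_rate(first_order, ts, ys, grid)
k2, rss2 = fit_rate(zero_order, ts, ys, grid)
aic1, aic2 = aic_ls(rss1, len(ts), 1), aic_ls(rss2, len(ts), 1)  # lower wins
```

    For this profile the first-order model wins decisively, mirroring the paper's finding that the linear model was rarely preferred.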

  15. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    Reliability is a measure of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process composed of three sensors, each with its own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to sharing the mass abusively between elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR and MLRE reveal very good performance on all experiments (more than an 80% correct classification rate). The experiment was performed with large variations of the reliability coefficients for each sensor (from 0 to 1), and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators reveal good robustness, but MLR proves to be uniformly dominant across all the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
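
    Among the operators compared, MLR weights each sensor's evidence by its reliability. A common way to realize this, sketched here under our own assumptions rather than as the paper's exact operator, is to discount each sensor's mass vector toward the uniform distribution by its reliability before summing log-likelihoods:

```python
import math

def fuse_mlr(sensor_probs, reliabilities):
    """Reliability-weighted ML fusion (MLR-style sketch): each sensor's
    probability vector is discounted toward uniform by its reliability r,
    then per-hypothesis log-likelihoods are summed across sensors."""
    k = len(sensor_probs[0])            # number of hypotheses
    scores = [0.0] * k
    for probs, r in zip(sensor_probs, reliabilities):
        for h in range(k):
            p = r * probs[h] + (1 - r) / k   # discounted mass, never zero
            scores[h] += math.log(p)
    return max(range(k), key=lambda h: scores[h])

# a reliable sensor voting for hypothesis 0 outweighs an unreliable
# sensor voting for hypothesis 1
choice = fuse_mlr([[0.9, 0.1], [0.1, 0.9]], [0.9, 0.1])  # -> 0
```

    The discounting makes a sensor with r = 0 contribute nothing beyond uniform mass, which is exactly the robustness property the comparison above probes.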

  16. Discrete Reliability Projection

    DTIC Science & Technology

    2014-12-01

    Defense Handbook MIL-HDBK-189C, 2011; Hall, J. B., Methodology for Evaluating Reliability Growth Programs of Discrete Systems, Ph.D. thesis, University of Maryland. Hall's projection statistic (Eq. 2.13) takes the form [p̂_{k,i}] · [1 − (1 − θ̆_k) · N/(k · T)]^{k−m}, where m is the number of observed failure modes and d*_i estimates d_i (either based
    Mode  Failures N_i  FEF d*_i
    1     1             0.95
    2     1             0.70
    3     1             0.90
    4     1             0.90
    5     4             0.95
    6     2             0.70
    7     1             0.80
    Using equations 2.1 and 2.2 we can calculate the failure

  17. Test-retest reliability of wavelet- and Fourier-based EMG (instantaneous) median frequencies in the evaluation of back and hip muscle fatigue during isometric back extensions.

    PubMed

    Coorevits, Pascal; Danneels, Lieven; Cambier, Dirk; Ramon, Herman; Druyts, Hans; Karlsson, J Stefan; De Moor, Georges; Vanderstraeten, Guy

    2008-10-01

    The present study aimed at assessing the test-retest reliability of wavelet- and Fourier-derived (instantaneous) median frequencies of surface electromyographic (EMG) measurements of back and hip muscles during isometric back extensions. Twenty healthy subjects (10 males and 10 females) performed a modified Biering-Sørensen test on two separate days, with a 1-week interval between the two tests. Surface EMG measurements were performed bilaterally from the latissimus dorsi, the thoracic and lumbar parts of the longissimus thoracis, the thoracic and lumbar parts of the iliocostalis lumborum, the multifidus, the gluteus maximus and the biceps femoris. In addition, three-dimensional kinematic data were recorded from the subjects' lumbar vertebrae. The (instantaneous) median frequencies were calculated from the EMG signals using continuous wavelet (IMDF) and short-time Fourier (MDF) transforms. Linear regressions performed on the IMDF and MDF data as a function of time yielded slopes (IMDF(slope) and MDF(slope)) and intercepts (IMDF(init) and MDF(init)) of the regression lines. Test-retest reliability was assessed on the normalized slope and intercept parameters by means of intraclass correlation coefficients (ICC) and standard errors of measurement expressed as percentages of the mean values (% SEM). The results for the IMDF(slope) and MDF(slope) parameters indicated ICCs for back and hip muscles between .443 and .727 for IMDF(slope), values between .273 and .734 for MDF(slope), % SEM between 7.6% and 58.9% for IMDF(slope) and % SEM between 8.2% and 25.3% for MDF(slope), respectively. The ICCs for the IMDF(init) and MDF(init) parameters varied between .376 and .907 for IMDF(init) and between .383 and .883 for MDF(init), and % SEM ranged from 2.7% to 6.3% for IMDF(init) and from 2.6% to 4.7% for MDF(init), respectively. These results indicate that both wavelet- and Fourier-based (instantaneous) median frequency parameters generally are reliable in the analysis of back and
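
    The median frequency underlying both MDF and IMDF is the frequency that splits the EMG power spectrum into halves of equal power; fatigue appears as a negative slope of this value over time. A stdlib-only illustration of the basic computation with a plain DFT (the study itself used short-time Fourier and continuous wavelet transforms):

```python
import cmath
import math

def median_frequency(x, fs):
    """Median frequency of a signal: the frequency below which half of the
    total one-sided spectral power lies (naive O(n^2) DFT, DC excluded)."""
    n = len(x)
    power = []
    for k in range(1, n // 2):
        xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(xk) ** 2)
    half_power = sum(power) / 2
    cum = 0.0
    for k, p in enumerate(power, start=1):
        cum += p
        if cum >= half_power:
            return k * fs / n
    return (n // 2 - 1) * fs / n  # fallback, not reached for nonzero signals

# a pure 50 Hz tone sampled at 500 Hz has median frequency 50 Hz
tone = [math.sin(2 * math.pi * 50 * t / 500) for t in range(500)]
mf = median_frequency(tone, 500)
```

    Repeating this over successive time windows and regressing the values on time yields the slope parameters whose reliability the study assesses.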

  18. The Reliability of College Grades

    ERIC Educational Resources Information Center

    Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.

    2015-01-01

    Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…

  19. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1990-01-01

In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.
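The u-plot recalibration idea can be sketched under simplifying assumptions: each one-step-ahead predictive CDF is evaluated at the failure time actually observed next; if the model were perfectly calibrated these values would look uniform, and their empirical CDF supplies the correction. The step-function empirical CDF below stands in for the smoothed version used in practice, and the CDFs are passed in as plain Python callables.

```python
import numpy as np

def u_values(predicted_cdfs, observed):
    """u_i = F_i(t_i): each predictive CDF evaluated at the
    inter-failure time that was actually observed next."""
    return np.array([F(t) for F, t in zip(predicted_cdfs, observed)])

def u_plot(u):
    """Empirical CDF of the u_i (the 'u-plot'); its deviation from
    the diagonal estimates the mapping from predicted probability
    to true probability."""
    u_sorted = np.sort(u)
    ecdf = np.arange(1, len(u) + 1) / len(u)
    return u_sorted, ecdf

def recalibrate(F_next, u):
    """Recalibrated prediction F*(t) = G(F(t)), where G is the
    (step-function) empirical CDF from the u-plot."""
    u_sorted = np.sort(u)
    def F_star(t):
        p = F_next(t)
        return np.searchsorted(u_sorted, p, side="right") / len(u_sorted)
    return F_star
```

A systematically optimistic model produces u values piled toward one end; composing the next raw prediction with G pulls it back toward calibration, which is the "completely general" correction the abstract describes.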

  20. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

  1. Performance of GaN-on-Si-based vertical light-emitting diodes using silicon nitride electrodes with conducting filaments: correlation between filament density and device reliability.

    PubMed

    Kim, Kyeong Heon; Kim, Su Jin; Lee, Tae Ho; Lee, Byeong Ryong; Kim, Tae Geun

    2016-08-08

Transparent conductive electrodes with good conductivity and optical transmittance are an essential element of highly efficient light-emitting diodes. However, conventional indium tin oxide and its alternative transparent conductive electrodes suffer from a trade-off between electrical conductivity and optical transmittance, limiting their practical applications. Here, we present silicon nitride transparent conductive electrodes with conducting filaments embedded using the electrical breakdown process and investigate how the density of the conducting filaments formed in the transparent conductive electrode affects the device performance of gallium nitride-based vertical light-emitting diodes. Three gallium nitride-on-silicon-based vertical light-emitting diodes using silicon nitride transparent conductive electrodes with high, medium, and low conducting filament densities were prepared, along with a reference vertical light-emitting diode using metal electrodes, to determine the optimal density of the conducting filaments in the proposed silicon nitride transparent conductive electrodes. In this comparison, the vertical light-emitting diode with a medium conducting filament density exhibited the lowest optical loss, direct ohmic behavior, and the best current injection and distribution over the entire n-type gallium nitride surface, leading to highly reliable light-emitting diode performance.

  2. Reliability model for planetary gear

    NASA Technical Reports Server (NTRS)

    Savage, M.; Paridon, C. A.; Coy, J. J.

    1982-01-01

A reliability model is presented for planetary gear trains in which the ring gear is fixed, the sun gear is the input, and the planet arm is the output. The input and output shafts are coaxial and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. This type of gear train is commonly used in main rotor transmissions for helicopters and in other applications which require high reductions in speed. The reliability model is based on the Weibull distribution of the individual reliabilities of the transmission components. The transmission's basic dynamic capacity is defined as the input torque which may be applied for one million input rotations of the sun gear. Load and life are related by a power law. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities.
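A minimal sketch of the two ingredients named here, a Weibull series-system reliability model and a load-life power law. All component parameters below are made-up illustrations, not the transmission data from the report.

```python
import numpy as np

def weibull_reliability(t, theta, beta):
    """Two-parameter Weibull survival function."""
    return np.exp(-(t / theta) ** beta)

def system_reliability(t, components):
    """Series system: the gear train survives only if every
    component (sun gear, planets, bearings, ring) survives.
    components is a list of (theta, beta) pairs."""
    r = 1.0
    for theta, beta in components:
        r *= weibull_reliability(t, theta, beta)
    return r

def life_at_load(capacity, torque, p):
    """Load-life power law: life in millions of input rotations at
    torque T, given basic dynamic capacity C (the torque giving a
    life of one million rotations) and load-life exponent p."""
    return (capacity / torque) ** p
```

With a hypothetical exponent p = 3, halving the applied torque multiplies the predicted life by eight, which is the kind of capacity/life trade-off the abstract's power law encodes.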

  3. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake: Protocol for a Systematic Review and Meta-Analysis

    PubMed Central

    Huang, Liping; Crino, Michelle; Wu, Jason HY; Woodward, Mark; Land, Mary-Anne; McLean, Rachael; Webster, Jacqui; Enkhtungalag, Batsaikhan; Nowson, Caryl A; Elliott, Paul; Cogswell, Mary; Toft, Ulla; Mill, Jose G; Furlanetto, Tania W; Ilich, Jasminka Z; Hong, Yet Hoi; Cohall, Damian; Luzardo, Leonella; Noboa, Oscar; Holm, Ellen; Gerbes, Alexander L; Senousy, Bahaa; Pinar Kara, Sonat; Brewster, Lizzy M; Ueshima, Hirotsugu; Subramanian, Srinivas; Teo, Boon Wee; Allen, Norrina; Choudhury, Sohel Reza; Polonia, Jorge; Yasuda, Yoshinari; Campbell, Norm RC; Neal, Bruce

    2016-01-01

    Background Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. Objective The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one other and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, health status, and timing of spot urine collection will be explored. The capacity of spot urine samples to measure change in salt intake over time will also be determined. Finally, we aim to develop a novel equation (or equations) that performs better than existing equations to estimate mean population salt intake. Methods A systematic review and meta-analysis of individual participant data will be conducted. A search has been conducted to identify human studies that report salt (or sodium) excretion based upon 24-hour urine samples and spot urine samples. There were no restrictions on language, study sample size, or characteristics of the study population. MEDLINE via OvidSP (1946-present), Premedline via OvidSP, EMBASE, Global Health via OvidSP (1910-present), and the Cochrane Library were searched, and two reviewers identified eligible studies. The authors of these studies will be invited to contribute data according to a standard format. 
Individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical characteristics; (2) assess the reliability of using spot urine samples to measure population changes in salt intake over time; and (3) develop a novel equation that performs

  4. Reliability, Recursion, and Risk.

    ERIC Educational Resources Information Center

    Henriksen, Melvin, Ed.; Wagon, Stan, Ed.

    1991-01-01

The discrete mathematics topics of trees and computational complexity are implemented in a simple reliability program which illustrates the process advantages of the PASCAL programming language. The discussion focuses on the impact that reliability research can provide in assessment of the risks found in complex technological ventures. (Author/JJK)

  5. Monte Carlo Reliability Analysis.

    DTIC Science & Technology

    1987-10-01

…Lewis and Z. Tu, "Monte Carlo Reliability Modeling by Inhomogeneous Markov Processes," Reliab. Engr. 16, 277-296 (1986). (4) E. Cinlar, Introduction to Stochastic Processes, Prentice-Hall, Englewood Cliffs, NJ, 1975. (5) R. E. Barlow and F. Proschan, Statistical Theory of Reliability and Life…

  6. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
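The system-cost/outage-cost integration described here can be sketched as a one-line optimization: pick the reserve margin that minimizes carrying cost plus customers' outage cost (expected unserved energy times a value of lost load). The EUE curve, capacity cost, and value of lost load below are invented placeholders, not figures from the report.

```python
def optimal_reserve(candidates, eue, capacity_cost_per_mw, voll):
    """Choose the reserve margin (MW) minimizing total cost:
    carrying cost of reserve capacity plus outage cost, where
    eue(r) is expected unserved energy (MWh/yr) at reserve r and
    voll is the value of lost load ($/MWh)."""
    def cost(r):
        return capacity_cost_per_mw * r + voll * eue(r)
    return min(candidates, key=cost)
```

With a hypothetical EUE curve that halves for every 50 MW of added reserve, the optimum sits where the marginal cost of one more megawatt equals the outage cost it avoids, which is the balance the report's method formalizes.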

  7. Hawaii Electric System Reliability

    SciTech Connect

    Loose, Verne William; Silva Monroy, Cesar Augusto

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  8. Reliable measurement of 3D foot bone angles based on the frame-of-reference derived from a sole of the foot

    NASA Astrophysics Data System (ADS)

    Kim, Taeho; Lee, Dong Yeon; Park, Jinah

    2016-03-01

Clinical management of foot pathology requires accurate and robust measurement of the anatomical angles. In order to measure a 3D angle, recent approaches have adopted a landmark-based local coordinate system to establish bone angles used in orthopedics. These measurement methods mainly assess the relative angle between bones using a representative axis derived from the morphological feature of the bone; the results can therefore be affected by bone deformities. In this study, we propose a method of deriving a global frame-of-reference that acquires a consistent direction of the foot by extracting the undersurface of the foot from the CT image data. The two lowest positions of the foot skin are identified from the surface to define the base plane, and the direction from the hallux to the fourth toe is added to construct the global coordinate system. We performed the experiment on 10 volumes of foot CT images of healthy subjects to verify that the proposed method provides reliable measurements. We measured 3D angles for talus-calcaneus and talus-navicular using facing articular surfaces of paired bones. Each angle was reported as three projection angles, based both on the coordinate system defined by the proposed global frame-of-reference and on the CT image planes (sagittal, frontal, and transverse). The results show that angles quantified using the proposed method had a considerably smaller standard deviation (SD) than angles based on the conventional projection planes, and were comparable with the angles obtained from local coordinate systems of the bones. Since our method is independent of the individual local shape of a bone, unlike the measurement method using the local coordinate system, it is suitable for inter-subject comparison studies.
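The frame-of-reference construction can be sketched with a Gram-Schmidt step: take the sole-plane normal as the vertical axis, orthogonalize the hallux-to-fourth-toe direction against it, and project bone axes onto the resulting anatomical planes. The axis naming and plane conventions below are assumptions for illustration, not necessarily the paper's.

```python
import numpy as np

def foot_frame(normal, toe_direction):
    """Orthonormal frame from the sole base-plane normal (global
    'up') and the hallux-to-fourth-toe direction; Gram-Schmidt
    makes the axes mutually perpendicular. Rows are the axes."""
    z = normal / np.linalg.norm(normal)
    y = toe_direction - np.dot(toe_direction, z) * z
    y = y / np.linalg.norm(y)
    x = np.cross(y, z)
    return np.vstack([x, y, z])

def projected_angle(v1, v2, frame, plane):
    """Angle (degrees) between two bone axes projected onto one
    anatomical plane of the frame; e.g. the transverse plane is
    obtained by dropping the vertical (z) component."""
    drop = {"sagittal": 0, "frontal": 1, "transverse": 2}[plane]
    a = np.delete(frame @ v1, drop)
    b = np.delete(frame @ v2, drop)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the frame depends only on the skin surface, not on any bone's shape, the same projection planes apply across subjects, which is the property the authors exploit for inter-subject comparison.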

  9. Disappointing reliability of pulsatility indices to identify candidates for magnetic resonance imaging screening in population-based studies assessing prevalence of cerebral small vessel disease

    PubMed Central

    Del Brutto, Oscar H.; Mera, Robertino M.; Andrade, María de la Luz; Castillo, Pablo R.; Zambrano, Mauricio; Nader, Juan A.

    2015-01-01

    Background: Diagnosis of cerebral small vessel disease (SVD) is a challenge in remote areas where magnetic resonance imaging (MRI) is not available. Hospital-based studies in high-risk or stroke patients have found an association between the pulsatility index (PI) of intracranial arteries – as derived from transcranial Doppler (TCD) – and white matter hyperintensities (WMH) of presumed vascular origin. We aimed to assess the reliability of cerebral pulsatility indices to identify candidates for MRI screening in population-based studies assessing prevalence of SVD. Methods: A representative sample of stroke-free Atahualpa residents aged ≥65 years investigated with MRI underwent TCD. Using generalized linear models, we evaluated whether the PI of major intracranial arteries correlate with WMH (used as a proxy of diffuse SVD), after adjusting for demographics and cardiovascular risk factors. Results: Out of 70 participants (mean age 70.6 ± 4.6 years, 57% women), 28 (40%) had moderate-to-severe WMH. In multivariate models, there were no differences across categories of WMH in the mean PI of middle cerebral arteries (1.10 ± 0.16 vs. 1.22 ± 0.24, β: 0.065, 95% confidence interval (CI): −0.084–0.177, P = 0.474) or vertebrobasilar arteries (1.11 ± 0.16 vs. 1.29 ± 0.27, β: 0.066, 95% CI: −0.0024–0.156, P = 0.146). Conclusions: Cerebral PI should not be used to identify candidates for MRI screening in population-based studies assessing the burden of SVD. PMID:26167015

  10. Chapter 9: Reliability

    SciTech Connect

    Algora, Carlos; Espinet-Gonzalez, Pilar; Vazquez, Manuel; Bosco, Nick; Miller, David; Kurtz, Sarah; Rubio, Francisca; McConnell,Robert

    2016-04-15

This chapter describes the accumulated knowledge on CPV reliability, covering its fundamentals and qualification. It explains the reliability of solar cells, modules (including optics), and plants, and discusses the relevant statistical distributions, namely exponential, normal, and Weibull. For solar cell reliability, it covers issues in accelerated aging tests of CPV solar cells, types of failure, and failures in real-time operation. The chapter explores accelerated life tests, namely qualitative life tests (mainly HALT) and quantitative accelerated life tests (QALT). It examines other well-proven PV cells and/or semiconductor devices that share similar semiconductor materials, manufacturing techniques, or operating conditions, namely III-V space solar cells and light-emitting diodes (LEDs). It addresses each of the identified reliability issues and presents the current state-of-the-art knowledge for their testing and evaluation. Finally, the chapter summarizes the CPV qualification and reliability standards.

  11. Panel-based next generation sequencing as a reliable and efficient technique to detect mutations in unselected patients with retinal dystrophies

    PubMed Central

    Glöckle, Nicola; Kohl, Susanne; Mohr, Julia; Scheurenbrand, Tim; Sprecher, Andrea; Weisschuh, Nicole; Bernd, Antje; Rudolph, Günther; Schubach, Max; Poloschek, Charlotte; Zrenner, Eberhart; Biskup, Saskia; Berger, Wolfgang; Wissinger, Bernd; Neidhardt, John

    2014-01-01

    Hereditary retinal dystrophies (RD) constitute a group of blinding diseases that are characterized by clinical variability and pronounced genetic heterogeneity. The different forms of RD can be caused by mutations in >100 genes, including >1600 exons. Consequently, next generation sequencing (NGS) technologies are among the most promising approaches to identify mutations in RD. So far, NGS is not routinely used in gene diagnostics. We developed a diagnostic NGS pipeline to identify mutations in 170 genetically and clinically unselected RD patients. NGS was applied to 105 RD-associated genes. Underrepresented regions were examined by Sanger sequencing. The NGS approach was successfully established using cases with known sequence alterations. Depending on the initial clinical diagnosis, we identified likely causative mutations in 55% of retinitis pigmentosa and 80% of Bardet–Biedl or Usher syndrome cases. Seventy-one novel mutations in 40 genes were newly associated with RD. The genes USH2A, EYS, ABCA4, and RHO were more frequently affected than others. Occasionally, cases carried mutations in more than one RD-associated gene. In addition, we found possible dominant de-novo mutations in cases with sporadic RD, which implies consequences for counseling of patients and families. NGS-based mutation analyses are reliable and cost-efficient approaches in gene diagnostics of genetically heterogeneous diseases like RD. PMID:23591405

  12. Reliability Considerations for the Operation of Large Accelerator User Facilities

    SciTech Connect

    Willeke, F. J.

    2016-01-29

The lecture provides an overview of considerations relevant to achieving highly reliable operation of accelerator-based user facilities. The article starts with an overview of the statistical reliability formalism, followed by high-reliability design considerations with examples. Finally, the article closes with operational aspects of high reliability, such as preventive maintenance and spares inventory.

  13. Reliability assurance for regulation of advanced reactors

    SciTech Connect

    Fullwood, R.; Lofaro, R.; Samanta, P.

    1991-12-31

Advanced nuclear power plants must achieve higher levels of safety than the first generation of plants. Demonstrating that they do presents new challenges to reliability and risk assessment methods in the analysis of designs employing passive and semi-passive protection. Reliability assurance of advanced reactor systems is important both for determining the safety of the design and for determining plant operability. Safety is the primary concern, but operability is considered indicative of good and safe operation. This paper discusses several concerns for reliability assurance of advanced designs, encompassing reliability determination, the level of detail required in advanced reactor submittals, data for reliability assurance, systems interactions and common-cause effects, passive component reliability, PRA-based configuration control systems, and inspection, training, maintenance, and test requirements. Suggested approaches are provided for addressing each of these topics.

  14. Reliability Analysis Model

    NASA Technical Reports Server (NTRS)

    1970-01-01

    RAM program determines probability of success for one or more given objectives in any complex system. Program includes failure mode and effects, criticality and reliability analyses, and some aspects of operations, safety, flight technology, systems design engineering, and configuration analyses.

  15. On the Reliability of Categorically Scored Examinations

    ERIC Educational Resources Information Center

    Kupermintz, Haggai

    2004-01-01

    A decision-theoretic approach to the question of reliability in categorically scored examinations is explored. The concepts of true scores and errors are discussed as they deviate from conventional psychometric definitions and measurement error in categorical scores is cast in terms of misclassifications. A reliability measure based on…

  16. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.
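The cross-discipline idea, shared random inputs feeding failure modes from different physics, can be illustrated with a crude Monte Carlo stand-in for the fast probability integration used in NESSUS. All distributions and limits below are invented for illustration.

```python
import numpy as np

def system_reliability_mc(n=100_000, seed=0):
    """Monte Carlo estimate of series-system reliability with one
    structural and one heat-transfer failure mode driven by the
    same random load (hypothetical numbers throughout)."""
    rng = np.random.default_rng(seed)
    load = rng.normal(100.0, 10.0, n)        # shared mechanical load
    strength = rng.normal(150.0, 10.0, n)    # structural capacity
    coolant = rng.normal(20.0, 2.0, n)       # inlet temperature, degC
    temp = coolant + 0.8 * load              # simple thermal response
    structural_ok = strength > load
    thermal_ok = temp < 115.0                # thermal limit state
    return np.mean(structural_ok & thermal_ok)
```

Because both limit states see the same `load` samples, the failure modes are correlated, exactly the coupling a system-level multidisciplinary analysis has to capture and a per-discipline analysis would miss.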

  17. Metrology automation reliability

    NASA Astrophysics Data System (ADS)

    Chain, Elizabeth E.

    1996-09-01

At Motorola's MOS-12 facility automated measurements on 200-mm diameter wafers proceed in a hands-off 'load-and-go' mode requiring only wafer loading, measurement recipe loading, and a 'run' command for processing. Upon completion of all sample measurements, the data is uploaded to the factory's data collection software system via a SECS II interface, eliminating the requirement of manual data entry. The scope of in-line measurement automation has been extended to the entire metrology scheme from job file generation to measurement and data collection. Data analysis and comparison to part specification limits is also carried out automatically. Successful integration of automated metrology into the factory measurement system requires that automated functions, such as autofocus and pattern recognition algorithms, display a high degree of reliability. In the 24-hour factory reliability data can be collected automatically on every part measured. This reliability data is then uploaded to the factory data collection software system at the same time as the measurement data. Analysis of the metrology reliability data permits improvements to be made as needed, and provides an accurate accounting of automation reliability. This reliability data has so far been collected for the CD-SEM (critical dimension scanning electron microscope) metrology tool, and examples are presented. This analysis method can be applied to such automated in-line measurements as CD, overlay, particle and film thickness measurements.

  18. Test-Retest Reliability and Concurrent Validity of a Single Tri-Axial Accelerometer-Based Gait Analysis in Older Adults with Normal Cognition

    PubMed Central

    Byun, Seonjeong; Han, Ji Won; Kim, Tae Hui; Kim, Ki Woong

    2016-01-01

Objective We investigated the concurrent validity and test-retest reliability of spatio-temporal gait parameters measured with a single tri-axial accelerometer (TAA), determined the optimal number of steps required for obtaining acceptable levels of reliability, and compared the validity and reliability of the estimated gait parameters across the three reference axes of the TAA. Methods A total of 82 cognitively normal elderly participants walked around a 40-m long round walkway twice wearing a TAA at their center of body mass. Gait parameters such as cadence, gait velocity, step time, step length, step time variability, and step time asymmetry were estimated from the low pass-filtered signal of the TAA. The test-retest reliability and concurrent validity with the GAITRite® system were evaluated for the estimated gait parameters. Results Gait parameters using signals from the vertical axis showed excellent reliability for all gait parameters; the intraclass correlation coefficient (ICC) was 0.79–0.90. A minimum of 26 steps and 14 steps were needed to achieve excellent reliability in step time variability and step time asymmetry, respectively. A strong level of agreement was seen for the basic gait parameters between the TAA and GAITRite® (ICC = 0.91–0.96). Conclusions The measurement of gait parameters of elderly individuals with normal cognition using a TAA placed on the body’s center of mass was reliable and showed superiority over the GAITRite® with regard to gait variability and asymmetry. The TAA system was a valid tool for measuring basic gait parameters. Considering its wearability and low price, the TAA system may be a promising alternative to the pressure sensor walkway system for measuring gait parameters. PMID:27427965
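The test-retest statistic used here, an intraclass correlation for subjects measured on repeated sessions, can be sketched directly from the two-way ANOVA sums of squares. ICC(2,1) (two-way random effects, absolute agreement, single measurement) is assumed below; the abstract does not state which ICC form the authors used.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement,
    single measurement; data is an (n subjects x k sessions) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)      # per-subject means
    col_means = data.mean(axis=0)      # per-session means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect session-to-session agreement gives an ICC of 1; session noise or a systematic session shift pulls it down, which is why absolute-agreement forms are the stricter choice for test-retest designs like this one.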

  19. A Large Web-Based Observer Reliability Study of Early Ischaemic Signs on Computed Tomography. The Acute Cerebral CT Evaluation of Stroke Study (ACCESS)

    PubMed Central

    Wardlaw, Joanna M.; von Kummer, Rüdiger; Farrall, Andrew J.; Chappell, Francesca M.; Hill, Michael; Perry, David

    2010-01-01

    Background Early signs of ischaemic stroke on computerised tomography (CT) scanning are subtle but CT is the most widely available diagnostic test for stroke. Scoring methods that code for the extent of brain ischaemia may improve stroke diagnosis and quantification of the impact of ischaemia. Methodology and Principal Findings We showed CT scans from patients with acute ischaemic stroke (n = 32, with different patient characteristics and ischaemia signs) to doctors in stroke-related specialties world-wide over the web. CT scans were shown twice, randomly and blindly. Observers entered their scan readings, including early ischaemic signs by three scoring methods, into the web database. We compared observers' scorings to a reference standard neuroradiologist using area under receiver operator characteristic curve (AUC) analysis, Cronbach's alpha and logistic regression to determine the effect of scales, patient, scan and observer variables on detection of early ischaemic changes. Amongst 258 readers representing 33 nationalities and six specialties, the AUCs comparing readers with the reference standard detection of ischaemic signs were similar for all scales and both occasions. Being a neuroradiologist, slower scan reading, more pronounced ischaemic signs and later time to CT all improved detection of early ischaemic signs and agreement on the rating scales. Scan quality, stroke severity and number of years of training did not affect agreement. Conclusions Large-scale observer reliability studies are possible using web-based tools and inform routine practice. Slower scan reading and use of CT infarct rating scales improve detection of acute ischaemic signs and should be encouraged to improve stroke diagnosis. PMID:21209901
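The reader-versus-reference comparison rests on the rank interpretation of AUC: the probability that a randomly chosen scan with the sign (per the reference standard) receives a higher rating than a randomly chosen scan without it, with ties counting half. A sketch of that computation follows; the study's actual ROC analysis may have differed in detail.

```python
import numpy as np

def auc(reference, ratings):
    """Area under the ROC curve via the rank (Mann-Whitney)
    formulation: reference is 0/1 per the reference standard,
    ratings are the observer's ordinal scores."""
    ratings = np.asarray(ratings, dtype=float)
    labels = np.asarray(reference)
    pos = ratings[labels == 1]
    neg = ratings[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An observer who always ranks sign-positive scans above sign-negative ones scores 1.0; one who rates at chance scores about 0.5, which is the scale on which the 258 readers were compared with the reference neuroradiologist.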

  20. Fluorescence Resonance Energy Transfer-Based DNA Tetrahedron Nanotweezer for Highly Reliable Detection of Tumor-Related mRNA in Living Cells.

    PubMed

    He, Lei; Lu, Dan-Qing; Liang, Hao; Xie, Sitao; Luo, Can; Hu, Miaomiao; Xu, Liujun; Zhang, Xiaobing; Tan, Weihong

    2017-03-30

Accurate detection and imaging of tumor-related mRNA in living cells hold great promise for early cancer detection. However, most probes currently designed to image intracellular mRNA are subject to intrinsic interferences arising from complex biological matrices, resulting in inevitable false-positive signals. To circumvent this problem, an intracellular DNA nanoprobe, termed DNA tetrahedron nanotweezer (DTNT), was developed to reliably image tumor-related mRNA in living cells based on the FRET (fluorescence resonance energy transfer) "off" to "on" signal readout mode. DTNT was self-assembled from four single-stranded DNAs. In the absence of target mRNA, the respectively labeled donor and acceptor fluorophores are separated, inducing low FRET efficiency. In the presence of target mRNA, DTNT alters its structure from the open to the closed state, bringing the two fluorophores into close proximity for high FRET efficiency. The DTNT exhibited high cellular permeability, fast response, and excellent biocompatibility. Moreover, intracellular imaging experiments showed that DTNT could effectively distinguish cancer cells from normal cells and, moreover, distinguish among changes of mRNA expression levels in living cells. Like other ratiometric probes, the DTNT nanoprobe is minimally affected by probe concentration, distribution, and laser power. More importantly, as a result of the FRET "off" to "on" signal readout mode, the DTNT nanoprobe almost entirely avoids false-positive signals due to intrinsic interferences, such as nuclease digestion, protein binding, and thermodynamic fluctuations in complex biological matrices. This design blueprint can be applied to the development of powerful DNA nanomachines for biomedical research and clinical early diagnosis.
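The "off"-to-"on" readout follows from the steep distance dependence of FRET efficiency, E = 1/(1 + (r/R0)^6). A sketch with illustrative distances; the DTNT's actual donor-acceptor separations and Förster radius are not given in the abstract.

```python
def fret_efficiency(r_nm, r0_nm):
    """Förster equation: energy-transfer efficiency as a function
    of donor-acceptor distance r and Förster radius R0 (the
    distance at which efficiency is exactly 50%)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

With a hypothetical R0 of 6 nm, a closed-state separation of ~2 nm gives near-unity efficiency while an open-state separation of ~12 nm gives almost none, so the open-to-closed structural change converts target binding into a large ratiometric signal jump.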

  1. Auto-OBSD: Automatic parameter selection for reliable Oscillatory Behavior-based Signal Decomposition with an application to bearing fault signature extraction

    NASA Astrophysics Data System (ADS)

    Huang, Huan; Baddour, Natalie; Liang, Ming

    2017-03-01

Bearing signals are often contaminated by in-band interferences and random noise. Oscillatory Behavior-based Signal Decomposition (OBSD) is a new technique which decomposes a signal according to its oscillatory behavior, rather than frequency or scale. Because bearing fault-induced signals are transients of low oscillatory behavior, the OBSD can effectively extract bearing fault signatures from a blurred signal. However, the quality of the result relies heavily on the selection of method-related parameters. Such parameters are often subjectively selected, and a systematic approach has not been reported in the literature. As such, this paper proposes a systematic approach to the automatic selection of OBSD parameters for reliable extraction of bearing fault signatures. The OBSD utilizes the idea of Morphological Component Analysis (MCA), which optimally projects the original signal onto low-oscillatory wavelets and high-oscillatory wavelets established via the Tunable Q-factor Wavelet Transform (TQWT). In this paper, the effects of the selection of each parameter on the performance of the OBSD for bearing fault signature extraction are investigated. It is found that some method-related parameters can be fixed at certain values due to the nature of bearing fault-induced impulses. To adaptively tune the remaining parameters, index-guided parameter selection algorithms are proposed. A Convergence Index (CI) is proposed, and a CI-guided self-tuning algorithm is developed to tune the convergence-related parameters, namely, the penalty factor and the number of iterations. Furthermore, a Smoothness Index (SI) is employed to measure the effectiveness of the extracted low oscillatory component (i.e., the bearing fault signature). It is shown that a minimum SI implies an optimal result with respect to the adjustment of relevant parameters. Thus, two SI-guided automatic parameter selection algorithms are also developed to specify two other parameters, i.e., Q-factor of high-oscillatory wavelets and
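One way to make the SI-guided selection concrete: a common definition of a smoothness index is the ratio of the geometric mean to the arithmetic mean of the signal envelope, which drops when sharp fault impulses dominate. The sketch below uses synthetic signals and is illustrative only, not the paper's implementation:

```python
import numpy as np

def smoothness_index(envelope: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of a (positive) envelope.
    Close to 1 for a smooth, featureless envelope; well below 1 when
    sharp impulses (fault signatures) dominate."""
    env = np.abs(envelope) + 1e-12           # guard against log(0)
    return float(np.exp(np.mean(np.log(env))) / np.mean(env))

rng = np.random.default_rng(0)
smooth = np.ones(1000) + 0.01 * rng.standard_normal(1000)   # featureless envelope
impulsive = np.ones(1000)
impulsive[::100] = 50.0                                     # periodic fault impulses

si_smooth = smoothness_index(smooth)        # close to 1
si_impulsive = smoothness_index(impulsive)  # well below 1
```

An SI-guided search would retain the parameter value (e.g. a candidate Q-factor) whose extracted component minimizes this index, consistent with the paper's "minimum SI implies an optimal result" criterion.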

  2. Reproducibility and Reliability of Quantitative and Weighted T1 and T2∗ Mapping for Myelin-Based Cortical Parcellation at 7 Tesla

    PubMed Central

    Haast, Roy A. M.; Ivanov, Dimo; Formisano, Elia; Uludaǧ, Kâmil

    2016-01-01

    Different magnetic resonance (MR) parameters, such as R1 (=1/T1) or T2∗, have been used to visualize non-invasively the myelin distribution across the cortical sheet. Myelin contrast is consistently enhanced in the primary sensory and some higher order cortical areas (such as MT or the cingulate cortex), which renders it suitable for subject-specific anatomical cortical parcellation. However, no systematic comparison has been performed between the previously proposed MR parameters, i.e., the longitudinal and transversal relaxation values (or their ratios), for myelin mapping at 7 Tesla. In addition, usually these MR parameters are acquired in a non-quantitative manner (“weighted” parameters). Here, we evaluated the differences in ‘parcellability,’ contrast-to-noise ratio (CNR) and inter- and intra-subject variability and reproducibility, respectively, between high-resolution cortical surface maps based on these weighted MR parameters and their quantitative counterparts in ten healthy subjects. All parameters were obtained in a similar acquisition time and possible transmit- or receive-biases were removed during post-processing. It was found that CNR per unit time and parcellability were lower for the transversal compared to the longitudinal relaxation parameters. Further, quantitative R1 was characterized by the lowest inter- and intra-subject coefficient of variation (5.53 and 1.63%, respectively), making R1 a better parameter to map the myelin distribution compared to the other parameters. Moreover, quantitative MRI approaches offer the advantage of absolute rather than relative characterization of the underlying biochemical composition of the tissue, allowing more reliable comparison within subjects and between healthy subjects and patients. Finally, we explored two parcellation methods (thresholding the MR parameter values vs. surface gradients of these values) to determine areal borders based on the cortical surface pattern. It is shown that both

  3. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  4. Novel method for constructing a large-scale design space in lubrication process by using Bayesian estimation based on the reliability of a scale-up rule.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-01-01

A reliable large-scale design space was constructed by integrating the reliability of a scale-up rule into Bayesian estimation without enforcing a large-scale design of experiments (DoE). A small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, the design space, and their reliability for the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. A constant Froude number was applied as a scale-up rule. Experiments were conducted at four different small scales with the same Froude number and blending time in order to determine the discrepancies in the response variables between the scales, as an indicator of the reliability of the scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on the large scale by Bayesian estimation using the large-scale results and the reliability of the scale-up rule. Large-scale experiments performed under three additional sets of conditions showed that the corrected design space was more reliable than the small-scale design space, even when there was some discrepancy in pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
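The correction step can be pictured as a conjugate normal update: the small-scale response surface supplies the prior, the few large-scale runs supply the data, and the scale-up rule's reliability sets the prior variance. All numbers below are illustrative assumptions, not the paper's:

```python
import numpy as np

def bayes_correct(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update: combine a small-scale prediction (prior)
    with large-scale observations, weighting each by precision (1/variance)."""
    obs = np.asarray(obs, dtype=float)
    prec_prior = 1.0 / prior_var            # higher when the scale-up rule is reliable
    prec_obs = obs.size / obs_var
    post_var = 1.0 / (prec_prior + prec_obs)
    post_mean = post_var * (prec_prior * prior_mean + prec_obs * obs.mean())
    return post_mean, post_var

# Illustrative numbers: the small-scale surface predicts hardness 80 N; the
# observed scale-up discrepancy suggests a prior variance of 25; three
# large-scale runs average 74 N with a measurement variance of 4.
m, v = bayes_correct(80.0, 25.0, [73.0, 74.0, 75.0], obs_var=4.0)
```

The posterior mean lands near the large-scale data, but a more reliable scale-up rule (smaller prior variance) would pull it back toward the small-scale prediction.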

  5. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience in this important field. The papers presented here reflect this effort.

  6. Photovoltaic module reliability workshop

    NASA Astrophysics Data System (ADS)

    Mrig, L.

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience in this important field. The papers presented here reflect this effort.

  7. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration Orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
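A reliability block diagram reduces to products over series paths and complements over redundant (parallel) paths. A minimal sketch with hypothetical component reliabilities, not the report's hardware data:

```python
from math import prod

def series(rels):
    """Series blocks: all components must work, R = prod(R_i)."""
    return prod(rels)

def parallel(rels):
    """Redundant blocks: fails only if all fail, R = 1 - prod(1 - R_i)."""
    return 1.0 - prod(1.0 - r for r in rels)

# Hypothetical string: a triply redundant flight computer in series with a
# navigation unit and an actuator (all reliabilities are illustrative).
r_system = series([parallel([0.95, 0.95, 0.95]), 0.99, 0.98])
```

Note how triple redundancy lifts a 0.95 component to 0.999875, after which the single-string series elements dominate the system figure.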

  8. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided, as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends on the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  9. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.; Keller, J.; Glinsky, C.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  10. Reliability Engineering Handbook

    DTIC Science & Technology

    1964-06-01

Figure 6-4. TWT Reliability Function, Showing the 90% Confidence Interval ... the lower one-sided 90% confidence limit on θ is (.704)(530) = 373 hours, or 90% confidence that θ lies between these two bounds ... 6-2-2 Measurement of Reliability (Application of Confidence Limits)
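The fragment's lower one-sided confidence limit follows the standard exponential-lifetime construction, θ_L = 2rθ̂ / χ²(conf, 2r) for a failure-terminated test with r failures. With r = 10 (inferred here, since 2r/χ² = 20/28.412 ≈ 0.704 matches the quoted factor at 20 degrees of freedom), the handbook's numbers are reproduced:

```python
def mtbf_lower_limit(theta_hat: float, r: int, chi2_quantile: float) -> float:
    """Lower one-sided confidence limit on exponential MTBF theta for a
    failure-terminated test: theta_L = 2 * r * theta_hat / chi2_{conf, 2r}."""
    return 2 * r * theta_hat / chi2_quantile

CHI2_90_20DF = 28.412   # 90th percentile of chi-square with 20 dof (standard tables)
theta_l = mtbf_lower_limit(530.0, 10, CHI2_90_20DF)   # ~373 h, matching the fragment
```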

  11. Adaptation of the Boundary Violations Scale Developed Based on Structural Family Therapy to the Turkish Context: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Avci, Rasit; Çolakkadioglu, Oguzhan; Öz, Aysegül Sükran; Akbas, Turan

    2015-01-01

    The purpose of this study was to adapt "The Boundary Violations Scale" (Madden et al., 2002), which was created to measure the intergenerational boundary violations in families from the perspective of children, to Turkish and to test the validity and reliability of the Turkish version of this instrument. This instrument was developed…

  12. Feasibility, validity and reliability of the plank isometric hold as a field-based assessment of torso muscular endurance for children 8-12 years of age.

    PubMed

    Boyer, Charles; Tremblay, Mark; Saunders, Travis J; McFarlane, Allison; Borghese, Michael; Lloyd, Meghann; Longmuir, Pat

    2013-08-01

This project examined the feasibility, validity, and reliability of the plank isometric hold for children 8-12 years of age. 1502 children (52.5% female) performed partial curl-up and/or plank protocols to assess plank feasibility (n = 823, 52.1% girls), validity (n = 641, 54.1% girls) and reliability (n = 111, 47.8% girls). 12% (n = 52/431) of children could not perform a partial curl-up, but virtually all children (n = 1066/1084) could attain a nonzero score for the plank. Plank performance without time limit was influenced by small effects with age (β = 6.86; p < .001, η² = 0.03), flexibility (β = 0.79; p < .001, η² = 0.03), and medium effects with cardiovascular endurance (β = 1.07; p < .001, η² = 0.08), and waist circumference (β = -0.92; p < .001, η² = 0.06). Interrater (ICC = 0.62; CI = 0.50, 0.75), intra-rater (ICC = 0.83; CI = 0.73, 0.90) and test-retest (ICC = 0.63; CI = 0.46, 0.75) reliability were acceptable for the plank without time limit. These data suggest the plank without time limit is a feasible, valid and reliable assessment of torso muscular endurance for children 8-12 years of age.

  13. Reliability techniques in the petroleum industry

    NASA Technical Reports Server (NTRS)

    Williams, H. L.

    1971-01-01

    Quantitative reliability evaluation methods used in the Apollo Spacecraft Program are translated into petroleum industry requirements with emphasis on offsetting reliability demonstration costs and limited production runs. Described are the qualitative disciplines applicable, the definitions and criteria that accompany the disciplines, and the generic application of these disciplines to the chemical industry. The disciplines are then translated into proposed definitions and criteria for the industry, into a base-line reliability plan that includes these disciplines, and into application notes to aid in adapting the base-line plan to a specific operation.

  14. Gearbox Reliability Collaborative Bearing Calibration

    SciTech Connect

    van Dam, J.

    2011-10-01

NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root causes of low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750 kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

  15. LONG-TERM RELIABILITY OF AL2O3 AND PARYLENE C BILAYER ENCAPSULATED UTAH ELECTRODE ARRAY BASED NEURAL INTERFACES FOR CHRONIC IMPLANTATION

    PubMed Central

    Xie, Xianzong; Rieth, Loren; Williams, Layne; Negi, Sandeep; Bhandari, Rajmohan; Caldwell, Ryan; Sharma, Rohit; Tathireddy, Prashant; Solzbacher, Florian

    2014-01-01

Objective: We focus on improving the long-term stability and functionality of neural interfaces for chronic implantation by using bilayer encapsulation. Approach: We evaluated the long-term reliability of Utah electrode array (UEA) based neural interfaces encapsulated by 52 nm of atomic layer deposited (ALD) Al2O3 and 6 μm of Parylene C bilayer, and compared these to devices with the baseline Parylene-only encapsulation. Three variants of arrays including wired, wireless, and active UEAs were used to evaluate this bilayer encapsulation scheme, and were immersed in phosphate buffered saline (PBS) at 57 °C for accelerated lifetime testing. Main results: The median tip impedance of the bilayer encapsulated wired UEAs increased from 60 kΩ to 160 kΩ during the 960 days of equivalent soak testing at 37 °C, the opposite trend as typically observed for Parylene encapsulated devices. The loss of the iridium oxide tip metallization and etching of the silicon tip in PBS solution contributed to the increase of impedance. The lifetime of fully integrated wireless UEAs was also tested using accelerated lifetime measurement techniques. The bilayer coated devices had stable power-up frequencies at ~910 MHz and constant RF signal strength of -50 dBm during up to 1044 days (still under testing) of equivalent soaking time at 37 °C. This is a significant improvement over the lifetime of ~100 days achieved with Parylene-only encapsulation at 37 °C. The preliminary samples of bilayer coated active UEAs with a flip-chip bonded ASIC chip had a steady current draw of ~3 mA during 228 days of soak testing at 37 °C. An increase in current draw has been consistently correlated to device failures, so is a sensitive metric for their lifetime. Significance: The trends of increasing electrode impedance of wired devices and performance stability of wireless and active devices support the significantly greater encapsulation performance of this bilayer encapsulation compared with Parylene
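Soak testing at 57 °C is reported above as equivalent days at 37 °C. A common rule of thumb for such conversions (an assumption here; the authors' exact acceleration model is not stated in the abstract) doubles the aging rate every 10 °C:

```python
def acceleration_factor(t_acc_c: float, t_ref_c: float, q10: float = 2.0) -> float:
    """Rule-of-thumb thermal acceleration: the aging rate doubles (q10 = 2)
    for every 10 degC rise, so AF = q10 ** ((T_acc - T_ref) / 10)."""
    return q10 ** ((t_acc_c - t_ref_c) / 10.0)

af = acceleration_factor(57.0, 37.0)   # 20 degC hotter -> AF = 2^2 = 4
equiv_days_37 = 261 * af               # 261 real days at 57 degC ~ 1044 days at 37 degC
```

Under this assumed rule, the 1044 equivalent days quoted above correspond to roughly 261 calendar days in the 57 °C bath.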

  16. Eco-friendly ionic liquid based ultrasonic assisted selective extraction coupled with a simple liquid chromatography for the reliable determination of acrylamide in food samples.

    PubMed

    Albishri, Hassan M; El-Hady, Deia Abd

    2014-01-01

Acrylamide in food has drawn worldwide attention since 2002 due to its neurotoxic and carcinogenic effects. This attention has brought out the dual polar and non-polar character of acrylamide, which enables it to dissolve in the aqueous blood medium or penetrate the non-polar plasma membrane. In the current work, a simple HPLC/UV system was used to reveal that the penetration of acrylamide into the non-polar phase was stronger than its dissolution in the polar phase. The presence of phosphate salts in the polar phase reduced the acrylamide interaction with the non-polar phase. Furthermore, an eco-friendly and costless coupling of the HPLC/UV with ionic liquid based ultrasonic assisted extraction (ILUAE) was developed to determine the acrylamide content in food samples. ILUAE was proposed for the efficient extraction of acrylamide from bread and potato chips samples. The extracts were obtained by soaking potato chips and bread samples in 1.5 mol L⁻¹ 1-butyl-3-methylimidazolium bromide (BMIMBr) for 30.0 and 60.0 min, respectively, with subsequent chromatographic separation within 12.0 min using a Luna C18 column and a 100% water mobile phase at 0.5 mL min⁻¹, a 25 °C column temperature, and detection at 250 nm. The extraction and analysis of acrylamide could be achieved within 2 h. The mean extraction efficiency of acrylamide showed adequate repeatability, with a relative standard deviation (RSD) of 4.5%. The limit of detection and limit of quantitation were 25.0 and 80.0 ng mL⁻¹, respectively. The accuracy of the proposed method was tested by recovery in seven food samples, giving values between 90.6% and 109.8%. Therefore, the methodology was successfully validated by official guidelines, indicating its reliability for the analysis of real samples and its usefulness for its intended purpose. Moreover, it serves as a simple, eco-friendly and costless alternative to hitherto reported methods.

  17. Assuring Software Reliability

    DTIC Science & Technology

    2014-08-01

resources.sei.cmu.edu/asset_files/WhitePaper/2009_019_001_29066.pdf [Boydston 2009] Boydston, A. & Lewis, W. Qualification and Reliability of ... Woody, Carol. Survivability Analysis Framework (CMU/SEI-2010-TN-013). Software Engineering Institute, Carnegie Mellon University, 2010. http

  18. Sequential Reliability Tests.

    ERIC Educational Resources Information Center

    Eiting, Mindert H.

    1991-01-01

    A method is proposed for sequential evaluation of reliability of psychometric instruments. Sample size is unfixed; a test statistic is computed after each person is sampled and a decision is made in each stage of the sampling process. Results from a series of Monte-Carlo experiments establish the method's efficiency. (SLD)
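The general sequential scheme described above (compute a statistic after each sampled person, stop as soon as a decision boundary is crossed) can be illustrated with Wald's sequential probability ratio test on Bernoulli data; this is a generic sketch of sequential testing, not the paper's reliability-specific statistic:

```python
from math import log

def sprt(observations, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli outcomes.
    After each observation, accumulate the log-likelihood ratio and stop
    as soon as it crosses either decision boundary."""
    a = log(beta / (1 - alpha))        # lower boundary: accept H0 (p = p0)
    b = log((1 - beta) / alpha)        # upper boundary: accept H1 (p = p1)
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "continue", len(observations)

decision, n_used = sprt([1, 1, 1, 1, 1, 1, 1, 1])   # a run of successes
```

The sample size is unfixed by design: the run above terminates after seven observations, while more ambiguous data would keep sampling.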

  19. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

  1. Reliable solar cookers

    SciTech Connect

    Magney, G.K.

    1992-12-31

    The author describes the activities of SERVE, a Christian relief and development agency, to introduce solar ovens to the Afghan refugees in Pakistan. It has provided 5,000 solar cookers since 1984. The experience has demonstrated the potential of the technology and the need for a durable and reliable product. Common complaints about the cookers are discussed and the ideal cooker is described.

  2. Reliability Design Handbook

    DTIC Science & Technology

    1976-03-01

prediction, failure modes and effects analysis (FMEA), and reliability growth techniques represent those prediction and design evaluation methods that ... MIL-HDBK-217, Bayesian techniques, probabilistic design, FMEA, and reliability growth ... devices suffer thermal aging; oxidation and other chemical reactions are enhanced; viscosity reduction and evaporation of lubricants are problems

  3. Storage Reliability of Missile Materiel Program. Missile Materiel Reliability Prediction Handbook - Parts Count Prediction

    DTIC Science & Technology

    1978-02-01

STORAGE RELIABILITY OF MISSILE MATERIEL PROGRAM. MISSILE MATERIEL RELIABILITY PREDICTION HANDBOOK - PARTS COUNT PREDICTION. LC-78-1, February 1978. Prepared by: Dennis F... data for predicting the reliability of missile systems based on a "parts count" approach. The handbook is a result of a program whose objective is the... has been extracted from existing reliability prediction sources. For more information, contact: U.S. Army Missile R&D Command, ATTN
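The "parts count" approach sums generic failure rates over part categories, weighted by quantity (and, in handbook practice, a quality factor). A minimal sketch with placeholder rates; the values below are illustrative, not the handbook's tables:

```python
# Parts-count reliability prediction: the system failure rate is the sum of
# (quantity * generic failure rate * quality factor) over part categories.
# All failure rates below are illustrative placeholders, not handbook values.

PARTS = [
    # (name, quantity, failures per 1e6 h, quality factor)
    ("resistor",   120, 0.002, 1.0),
    ("capacitor",   60, 0.008, 1.0),
    ("transistor",  25, 0.050, 2.0),
    ("connector",    8, 0.100, 1.5),
]

lambda_system = sum(n * lam * pi_q for _, n, lam, pi_q in PARTS)  # per 1e6 h
mtbf_hours = 1e6 / lambda_system
```

The method trades accuracy for speed: it needs only a parts list, which is why it suits early storage-reliability estimates like those in this handbook.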

  4. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  5. A short food-group-based dietary questionnaire is reliable and valid for assessing toddlers' dietary risk in relatively advantaged samples.

    PubMed

    Bell, Lucinda K; Golley, Rebecca K; Magarey, Anthea M

    2014-08-28

    Identifying toddlers at dietary risk is crucial for determining who requires intervention to improve dietary patterns and reduce health consequences. The objectives of the present study were to develop a simple tool that assesses toddlers' dietary risk and investigate its reliability and validity. The nineteen-item Toddler Dietary Questionnaire (TDQ) is informed by dietary patterns observed in Australian children aged 14 (n 552) and 24 (n 493) months and the Australian dietary guidelines. It assesses the intake of 'core' food groups (e.g. fruit, vegetables and dairy products) and 'non-core' food groups (e.g. high-fat, high-sugar and/or high-salt foods and sweetened beverages) over the previous 7 d, which is then scored against a dietary risk criterion (0-100; higher score = higher risk). Parents of toddlers aged 12-36 months (Socio-Economic Index for Areas decile range 5-9) were asked to complete the TDQ for their child (n 111) on two occasions, 3·2 (SD 1·8) weeks apart, to assess test-retest reliability. They were also asked to complete a validated FFQ from which the risk score was calculated and compared with the TDQ-derived risk score (relative validity). Mean scores were highly correlated and not significantly different for reliability (intra-class correlation = 0·90, TDQ1 30·2 (SD 8·6) v. TDQ2 30·9 (SD 8·9); P= 0·14) and validity (r 0·83, average TDQ ((TDQ1+TDQ2)/2) 30·5 (SD 8·4) v. FFQ 31·4 (SD 8·1); P= 0·05). All the participants were classified into the same (reliability 75 %; validity 79 %) or adjacent (reliability 25 %; validity 21 %) risk category (low (0-24), moderate (25-49), high (50-74) and very high (75-100)). Overall, the TDQ is a valid and reliable screening tool for identifying at-risk toddlers in relatively advantaged samples.

  6. Validity and intra-observer reliability of three-dimensional scanning compared to conventional anthropometry for children and adolescents from a population-based cohort study.

    PubMed

    Glock, Fabian; Vogel, Mandy; Naumann, Stephanie; Kuehnapfel, Andreas; Scholz, Markus; Hiemisch, Andreas; Kirsten, Toralf; Rieger, Kristin; Koerner, Antje; Loeffler, Markus; Kiess, Wieland

    2017-01-04

Background: Conventional anthropometric measurements are time consuming and require well trained medical staff. To use three-dimensional whole body laser scanning in daily clinical work, validity and reliability have to be confirmed. Methods: We compared a whole body laser scanner to conventional anthropometry in a group of 473 children and adolescents from the Leipzig Research Centre for Civilization Diseases (LIFE-Child). Concordance correlation coefficients (CCC) were calculated separately for sex, weight and age to assess validity. Overall CCC (OCCC) were used to analyze intra-observer reliability. Results: Body height and the circumferences of waist, hip, upper arm and calf had an "excellent" (CCC ≥ 0.9), neck and thigh circumference a "good" (CCC ≥ 0.7) and head circumference a "low" (CCC < 0.5) degree of concordance over the complete study population. We observed dependencies of validity on sex, weight and age. Intra-observer reliability of both techniques is "excellent" (OCCC ≥ 0.9). Conclusion: Scanning is faster, requires less intensive staff training and provides more information. It can be used in an epidemiologic setting with children and adolescents, but some measurements should be considered with caution due to reduced agreement with conventional anthropometry. Pediatric Research (2017); doi:10.1038/pr.2016.274.
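Lin's concordance correlation coefficient, used above to grade agreement, penalizes both scatter and systematic offset between two measurement techniques. A self-contained sketch with made-up height measurements, not study data:

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient:
    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    Equals 1 only for perfect agreement; both random scatter and a
    systematic bias between methods reduce it."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    syy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sxx + syy + (mx - my) ** 2)

# Scanner heights vs stadiometer heights (cm) -- made-up numbers.
scan = [150.2, 161.1, 139.8, 171.0, 155.4]
manual = [150.0, 160.8, 140.1, 170.5, 155.6]
ccc = concordance_cc(scan, manual)   # close to 1 -> "excellent" concordance
```

Unlike Pearson's r, the bias term (mean_x - mean_y)² means a scanner that reads consistently 2 cm high would be penalized even if the two methods correlated perfectly.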

  7. Investigation of reliability method formulations in Dakota/UQ.

    SciTech Connect

    Renaud, John E.; Perez, Victor M.; Wojtkiewicz, Steven F., Jr.; Agarwal, H.; Eldred, Michael Scott

    2004-07-01

    Reliability methods are probabilistic algorithms for quantifying the effect of simulation input uncertainties on response metrics of interest. In particular, they compute approximate response function distribution statistics (probability, reliability and response levels) based on specified input random variable probability distributions. In this paper, a number of algorithmic variations are explored for both the forward reliability analysis of computing probabilities for specified response levels (the reliability index approach (RIA)) and the inverse reliability analysis of computing response levels for specified probabilities (the performance measure approach (PMA)). These variations include limit state linearizations, probability integrations, warm starting and optimization algorithm selections. The resulting RIA/PMA reliability algorithms for uncertainty quantification are then employed within bi-level and sequential reliability-based design optimization approaches. Relative performance of these uncertainty quantification and reliability-based design optimization algorithms are presented for a number of computational experiments performed using the DAKOTA/UQ software.
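In one dimension with a normally distributed response, the forward (RIA) and inverse (PMA) analyses reduce to a cdf and its inverse, which makes their round-trip relationship easy to see. The mean, spread, and response level below are illustrative, not DAKOTA/UQ inputs:

```python
from statistics import NormalDist

# Illustrative normally distributed response metric.
resp = NormalDist(mu=100.0, sigma=10.0)
level = 120.0

# RIA: probability of exceeding a specified response level.
beta = (level - resp.mean) / resp.stdev   # reliability index
p_exceed = 1.0 - resp.cdf(level)          # ~0.0228 for beta = 2

# PMA: response level for a specified exceedance probability (the inverse problem).
level_back = resp.inv_cdf(1.0 - p_exceed)  # recovers 120.0
```

The algorithmic variations studied in the paper (linearizations, warm starts, optimizer choices) matter because real limit states are nonlinear and multi-dimensional, where neither direction has this closed form.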

  8. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following sessions: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME), Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  9. [Reliability of cancer as the underlying cause of death according to the Mortality Information System and Population-Based Cancer Registry in Goiânia, Goiás State, Brazil].

    PubMed

    Oliveira, Patricia Pereira Vasconcelos de; Silva, Gulnar Azevedo e; Curado, Maria Paula; Malta, Deborah Carvalho; Moura, Lenildo de

    2014-02-01

    This study assessed the reliability of cancer as the underlying cause of death using probabilistic linkage between the Mortality Information System and Population-Based Cancer Registry (PBCR) in Goiânia, Goiás State, Brazil, from 2000 to 2005. RecLink III was used for probabilistic linkage, and reliability was assessed by Cohen's kappa and prevalence-adjusted and bias-adjusted kappa (PABAK). In the probabilistic linkage, 2,874 individuals were identified for the reliability analysis. Cohen's kappa ranged from 0.336 to 0.846 and PABAK from 0.810 to 0.990 for 14 neoplasm groups defined in the study. For reliability of the 35 leading cancers, 12 (34.3%) presented kappa values under 0.600 and PABAK over 0.981. Among the neoplasms common to both sexes, crude agreement ranged from 0.672 to 0.790 and adjusted agreement from 0.894 to 0.961. Sixty-seven percent of cases classified by the Mortality Information System as "cancer of ill-defined sites" were reclassified according to the PBCR. This study was useful for the classification of cancer mortality estimates in areas covered by the PBCR.
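
The two agreement statistics used in the study can be sketched for a 2×2 agreement table (two raters, two categories). The counts in the example below are invented for illustration, not the study's data; the divergence between a modest kappa and a much higher PABAK mirrors the pattern reported above.

```python
# Cohen's kappa and PABAK from a 2x2 agreement table, where a and d are the
# agreeing counts and b and c are the disagreeing counts.
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                     # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

def pabak(a, b, c, d):
    """PABAK depends only on observed agreement: 2*po - 1."""
    po = (a + d) / (a + b + c + d)
    return 2 * po - 1

# Skewed prevalence: kappa ~ 0.318 but PABAK = 0.70 for the same table.
k = cohens_kappa(80, 5, 10, 5)
p = pabak(80, 5, 10, 5)
```

Because PABAK removes the influence of prevalence and rater bias, it can be substantially higher than kappa when one category dominates, which is why the study reports both.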

  10. Reliability and Validity of a New Test of Change-of-Direction Speed for Field-Based Sports: the Change-of-Direction and Acceleration Test (CODAT).

    PubMed

    Lockie, Robert G; Schultz, Adrian B; Callaghan, Samuel J; Jeffriess, Matthew D; Berry, Simon P

    2013-01-01

    Field sport coaches must use reliable and valid tests to assess change-of-direction speed in their athletes. Few tests feature linear sprinting with acute change-of-direction maneuvers. The Change-of-Direction and Acceleration Test (CODAT) was designed to assess field sport change-of-direction speed, and includes a linear 5-meter (m) sprint, 45° and 90° cuts, 3-m sprints to the left and right, and a linear 10-m sprint. This study analyzed the reliability and validity of this test, through comparisons to 20-m sprint (0-5, 0-10, 0-20 m intervals) and Illinois agility run (IAR) performance. Eighteen Australian footballers (age = 23.83 ± 7.04 yrs; height = 1.79 ± 0.06 m; mass = 85.36 ± 13.21 kg) were recruited. Following familiarization, subjects completed the 20-m sprint, CODAT, and IAR in 2 sessions, 48 hours apart. Intra-class correlation coefficients (ICC) assessed relative reliability. Absolute reliability was analyzed through paired samples t-tests (p ≤ 0.05) determining between-session differences. Typical error (TE), coefficient of variation (CV), and differences between the TE and smallest worthwhile change (SWC), also assessed absolute reliability and test usefulness. For the validity analysis, Pearson's correlations (p ≤ 0.05) analyzed between-test relationships. Results showed no between-session differences for any test (p = 0.19-0.86). CODAT time averaged ~6 s, and the ICC and CV equaled 0.84 and 3.0%, respectively. The homogeneous sample of Australian footballers meant that the CODAT's TE (0.19 s) exceeded the usual 0.2 x standard deviation (SD) SWC (0.10 s). However, the CODAT is capable of detecting moderate performance changes (SWC calculated as 0.5 x SD = 0.25 s). There was a near perfect correlation between the CODAT and IAR (r = 0.92), and very large correlations with the 20-m sprint (r = 0.75-0.76), suggesting that the CODAT was a valid change-of-direction speed test. Due to movement specificity, the CODAT has value for field sport
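
The absolute-reliability statistics named above can be sketched under the definitions commonly used in this literature: TE is the SD of the between-session difference scores divided by √2, CV expresses TE as a percentage of the mean, and SWC is a multiple of the between-subject SD (0.2 for small effects, 0.5 for moderate ones). The trial times below are invented for illustration, not the study's data.

```python
# Typical error (TE), coefficient of variation (CV), and smallest
# worthwhile change (SWC) from two sessions of test scores.
import math
from statistics import stdev, mean

def typical_error(session1, session2):
    diffs = [b - a for a, b in zip(session1, session2)]
    return stdev(diffs) / math.sqrt(2)

def coefficient_of_variation(session1, session2):
    te = typical_error(session1, session2)
    return 100 * te / mean(session1 + session2)

def smallest_worthwhile_change(scores, effect=0.2):
    # 0.2 x SD detects small changes; 0.5 x SD, moderate changes
    return effect * stdev(scores)

s1 = [6.0, 6.2, 5.8, 6.4]  # hypothetical CODAT times (s), session 1
s2 = [6.1, 6.1, 5.9, 6.3]  # session 2
```

A test is considered "useful" for detecting a change of a given size when its TE is smaller than the corresponding SWC, which is exactly the comparison the abstract makes for the 0.2 × SD and 0.5 × SD thresholds.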

  12. Human Reliability Program Workshop

    SciTech Connect

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and personnel in critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  13. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols, the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols come to play the same role in distributed operating systems of the future that message passing mechanisms play in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation, or how to balance latency, communication, and storage costs, is not an easy question.
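
The ordering guarantees mentioned above can be illustrated with a toy sequencer-based totally ordered broadcast: one process assigns a global sequence number to each message, and every receiver buffers out-of-order arrivals and delivers strictly in sequence. This is a sketch of the general idea only, not of any specific protocol from the paper, and it ignores failure handling entirely.

```python
# Toy total-order broadcast: a central sequencer stamps messages, and each
# receiver delivers them in stamp order regardless of arrival order.
class Sequencer:
    def __init__(self):
        self.next_seq = 0
    def stamp(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return (seq, msg)

class Receiver:
    def __init__(self):
        self.expected = 0
        self.buffer = {}      # messages that arrived early, keyed by seq
        self.delivered = []
    def receive(self, stamped):
        seq, msg = stamped
        self.buffer[seq] = msg
        while self.expected in self.buffer:  # deliver in sequence order
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

seq = Sequencer()
a, b, c = (seq.stamp(m) for m in ["a", "b", "c"])
r = Receiver()
for stamped in (c, a, b):     # arrivals out of order
    r.receive(stamped)
# r.delivered is ["a", "b", "c"]: every receiver sees the same order
```

The cost trade-off the abstract alludes to is visible even here: the sequencer adds latency and is a single point of failure, while the receive buffer trades storage for ordering.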

  14. Compact, Reliable EEPROM Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2010-01-01

    A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault
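
The idea of layered protection against inadvertent writes can be sketched in software, even though the actual controller is implemented without software: a write is honored only when an explicit arm command, a matching unlock key, and an address-range check all pass. The command names and key value below are invented for this sketch and are not taken from the actual controller design.

```python
# Toy model of multi-layer write protection: every layer must pass before
# a write is committed, and a successful write disarms the controller so
# each write requires a fresh arm/unlock cycle.
class EepromController:
    UNLOCK_KEY = 0xA5  # hypothetical unlock constant

    def __init__(self, size=256):
        self.mem = [0xFF] * size  # erased EEPROM reads as 0xFF
        self.armed = False
        self.unlocked = False

    def arm(self):
        self.armed = True  # layer 1: explicit arm command

    def unlock(self, key):
        self.unlocked = (key == self.UNLOCK_KEY)  # layer 2: key match

    def write(self, addr, value):
        # layer 3: address bounds check; refuse unless all layers pass
        if self.armed and self.unlocked and 0 <= addr < len(self.mem):
            self.mem[addr] = value
            self.armed = self.unlocked = False  # one write per arm cycle
            return True
        return False
```

A single stray command cannot corrupt memory in this model; an inadvertent write requires the improbable coincidence of three independent errors, which is the essential argument for layered protection.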

  15. Designing reliability into accelerators

    NASA Astrophysics Data System (ADS)

    Hutton, A.

    1992-07-01

    Future accelerators will have to provide a high degree of reliability. Quality must be designed in right from the beginning and must remain a central theme throughout the project. The problem is similar to the problems facing US industry today, and examples of the successful application of quality engineering will be given. Different aspects of an accelerator project will be addressed: Concept, Design, Motivation, Management Techniques, and Fault Diagnosis. The importance of creating and maintaining a coherent team will be stressed.

  16. Reliability and testing

    NASA Technical Reports Server (NTRS)

    Auer, Werner

    1996-01-01

    Reliability and its interdependence with testing are important topics for the development and manufacturing of successful products. This generally accepted fact is not only a technical statement, but must also be seen in the light of 'Human Factors.' While the background for this paper is the experience gained with electromechanical/electronic space products, including control and system considerations, it is believed that the content could also be of interest for other fields.

  17. Laser System Reliability

    DTIC Science & Technology

    1977-03-01

    NEALE, CAPT. RANDALL D.; GODFREY, CAPT. JOHN E.; ACTON, MR. DAVE B.; LEMMING (ASD) ... SECTION III RELIABILITY PREDICTION ... (Data Exchange Program) failure rate data bank. In addition, some data have been obtained from Hughes, Rocketdyne, Garrett, and the AFWL's APT Failure ... Central Ave, Suite 306, Albuq, NM 87108; R/M Systems, Inc (Dr. K. Blemel), 10801 Lomas Blvd NE, Albuquerque, NM 87112; Rocketdyne Div, Rockwell

  18. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  19. Decision theory in structural reliability

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.; Hawk, J. D.

    1975-01-01

    Some fundamentals of reliability analysis as applicable to aerospace structures are reviewed, and the concept of a test option is introduced. A decision methodology, based on statistical decision theory, is developed for determining the most cost-effective design factor and method of testing for a given structural assembly. The method is applied to several Saturn V and Space Shuttle structural assemblies as examples. It is observed that the cost and weight features of the design have a significant effect on the optimum decision.
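
The expected-cost trade-off described above can be sketched in miniature: for each candidate design factor, the total expected cost is a production/weight cost that grows with the factor plus the probability of structural failure (which shrinks with the factor) times the cost of a failure. All numbers and the failure-probability model below are illustrative, not from the paper.

```python
# Toy decision-theoretic choice of design factor: pick the factor that
# minimizes expected cost = direct cost + P(failure) * cost of failure.
import math

def expected_cost(factor, unit_cost=10.0, failure_cost=1e6):
    p_fail = math.exp(-4.0 * factor)  # toy failure-probability model
    return unit_cost * factor + failure_cost * p_fail

candidates = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
best = min(candidates, key=expected_cost)  # -> 3.5 for these numbers
```

The same framework extends to the test option discussed in the paper: testing adds its own cost but reduces uncertainty in the failure probability, so the decision is again a comparison of expected costs across alternatives.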

  20. Apollo experience report: Reliability and quality assurance

    NASA Technical Reports Server (NTRS)

    Sperber, K. P.

    1973-01-01

    The reliability of the Apollo spacecraft resulted from the application of proven reliability and quality techniques and from sound management, engineering, and manufacturing practices. Continual assessment of these techniques and practices was made during the program, and, when deficiencies were detected, adjustments were made and the deficiencies were effectively corrected. The most significant practices, deficiencies, adjustments, and experiences during the Apollo Program are described in this report. These experiences can be helpful in establishing an effective base on which to structure an efficient reliability and quality assurance effort for future space-flight programs.