Science.gov

Sample records for reliability physics-of-failure based

  1. Prediction of reliability on thermoelectric module through accelerated life test and Physics-of-failure

    NASA Astrophysics Data System (ADS)

    Choi, Hyoung-Seuk; Seo, Won-Seon; Choi, Duck-Kyun

    2011-09-01

    Thermoelectric cooling modules (TEMs) are electric devices subject to mechanical stress caused by the temperature gradient within them. This makes the structure of a TEM vulnerable from a reliability standpoint, yet the reliability of TEMs has not been widely studied. Recently, as the use of thermoelectric cooling devices has grown, so has the need for life prediction and life improvement. In this paper, we investigate the life distribution and shape parameter of the TEM through an accelerated life test (ALT), and we discuss how to extend TEM life through physics-of-failure analysis. The ALT results show that the thermoelectric cooling module follows a Weibull distribution with a shape parameter of 3.6. The acceleration model is Coffin-Manson with a material constant of 1.8.
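
    As a rough illustration of how the reported parameters would be used, the sketch below combines the Weibull shape parameter (3.6) and Coffin-Manson exponent (1.8) from the abstract with invented test conditions; the temperature swings, characteristic life, and field cycle count are placeholders, not values from the paper:

    ```python
    import math

    # From the abstract: Weibull shape parameter and Coffin-Manson constant.
    BETA = 3.6        # Weibull shape parameter found in the ALT
    M_CM = 1.8        # Coffin-Manson material constant

    # Hypothetical cycling conditions (not from the paper).
    dT_test, dT_use = 100.0, 40.0   # accelerated vs. field temperature swing, K
    eta_test = 2000.0               # assumed characteristic life under ALT, cycles

    # Coffin-Manson acceleration factor: AF = (dT_test / dT_use) ** m
    af = (dT_test / dT_use) ** M_CM
    eta_use = eta_test * af         # extrapolated field characteristic life

    def weibull_reliability(t, eta, beta):
        """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
        return math.exp(-((t / eta) ** beta))

    print(f"acceleration factor: {af:.2f}")
    print(f"field reliability at 5000 cycles: "
          f"{weibull_reliability(5000, eta_use, BETA):.3f}")
    ```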

  2. Agent autonomy approach to probabilistic physics-of-failure modeling of complex dynamic systems with interacting failure mechanisms

    NASA Astrophysics Data System (ADS)

    Gromek, Katherine Emily

    A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and the modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate, or completely redefine their role in the analysis. This property, together with the ability to model interacting failure mechanisms of the system elements, makes the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
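
    The key mechanism in this abstract, agents that self-activate, deactivate, or redefine their role as the system state evolves, might be sketched as below. This is a toy illustration under invented assumptions: the class name, activation rule, wear rates, and damage coupling are not from the thesis.

    ```python
    import random

    class FailureMechanismAgent:
        """An autonomous agent wrapping one physics-of-failure model."""
        def __init__(self, name, wear_rate, threshold=1.0):
            self.name, self.wear_rate, self.threshold = name, wear_rate, threshold
            self.damage, self.active = 0.0, False

        def step(self, system_state):
            # Autonomy: the agent activates itself once its triggering
            # condition appears in the shared system state.
            self.active = self.active or system_state["temperature"] > 75.0
            if self.active:
                # Interacting mechanisms: damage accrues faster when other
                # agents have already weakened the component.
                coupling = 1.0 + system_state["total_damage"]
                self.damage += self.wear_rate * coupling * random.random()
            return self.damage >= self.threshold  # True => component failed

    agents = [FailureMechanismAgent("fatigue", 0.01),
              FailureMechanismAgent("corrosion", 0.005)]
    state = {"temperature": 80.0, "total_damage": 0.0}
    for cycle in range(10_000):
        if any(a.step(state) for a in agents):
            print(f"first component failure at cycle {cycle}")
            break
        state["total_damage"] = sum(a.damage for a in agents)
    ```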

  3. Methodology for Physics and Engineering of Reliable Products

    NASA Technical Reports Server (NTRS)

    Cornford, Steven L.; Gibbel, Mark

    1996-01-01

    Physics-of-failure approaches have gained widespread acceptance within the electronics reliability community. These methodologies involve identifying root-cause failure mechanisms, developing associated models, and utilizing these models to improve time to market, lower development and build costs, and achieve higher reliability. The methodology outlined herein sets forth a process, based on the integration of both physics and engineering principles, for achieving the same goals.

  4. Predicting remaining life by fusing the physics of failure modeling with diagnostics

    NASA Astrophysics Data System (ADS)

    Kacprzynski, G. J.; Sarlashkar, A.; Roemer, M. J.; Hess, A.; Hardman, B.

    2004-03-01

    Technology that enables failure prediction of critical machine components (prognostics) has the potential to significantly reduce maintenance costs and increase availability and safety. This article summarizes a research effort funded through the U.S. Defense Advanced Research Projects Agency and Naval Air System Command aimed at enhancing prognostic accuracy through more advanced physics-of-failure modeling and intelligent utilization of relevant diagnostic information. H-60 helicopter gear is used as a case study to introduce both stochastic sub-zone crack initiation and three-dimensional fracture mechanics lifing models along with adaptive model updating techniques for tuning key failure mode variables at a local material/damage site based on fused vibration features. The overall prognostic scheme is aimed at minimizing inherent modeling and operational uncertainties via sensed system measurements that evolve as damage progresses.
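
    The fusion of a physics-of-failure lifing model with diagnostics can be illustrated with a standard Paris-law crack-growth model whose coefficient is nudged toward agreement with a (simulated) vibration-derived crack estimate. The Paris law itself is standard; the numbers and the damped update rule are invented for illustration and are far cruder than the paper's stochastic sub-zone and three-dimensional fracture models:

    ```python
    import math

    # Standard Paris law: da/dN = C * (dK)^m, with dK = Y * stress * sqrt(pi * a).
    # All numbers are illustrative, not from the paper.
    C, m, Y, stress = 1e-12, 3.0, 1.0, 200.0

    def grow(a, cycles, c):
        """Integrate crack growth cycle by cycle (coarse, for illustration)."""
        for _ in range(cycles):
            dK = Y * stress * math.sqrt(math.pi * a)
            a += c * dK ** m
        return a

    a = 1.0e-3   # initial crack size, m
    for block in range(5):
        a_model = grow(a, 10_000, C)
        # Stand-in for a crack estimate fused from vibration features:
        a_measured = a_model * 1.1
        # Adapt the lifing model: damped correction of the Paris coefficient
        # so the model prediction and the diagnostic evidence converge.
        C *= (a_measured / a_model) ** 0.5
        a = a_measured
        print(f"block {block}: a = {a * 1e3:.3f} mm, C = {C:.3e}")
    ```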

  5. Reliability-based design optimization using efficient global reliability analysis.

    SciTech Connect

    Bichon, Barron J.; Mahadevan, Sankaran; Eldred, Michael Scott

    2010-05-01

    Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.
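
    The core idea, running the reliability analysis on a cheap approximation of an expensive implicit performance function, can be sketched as below. EGRA itself builds an adaptively refined Gaussian process and concentrates samples near the limit state; the one-shot quadratic fit and the test function here are simplifications chosen for brevity:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_g(x):
        """Stand-in for an implicit performance function (e.g. a large FE
        model). g <= 0 denotes failure."""
        return 4.0 - x[..., 0] ** 2 - 0.5 * x[..., 1]

    # 1) Sample the expensive model at a handful of design-of-experiments points.
    X = rng.uniform(-3, 3, size=(30, 2))
    y = expensive_g(X)

    # 2) Fit a cheap quadratic surrogate by least squares (exact here, since
    #    the test function happens to be quadratic).
    def features(x):
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

    coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

    # 3) Monte Carlo on the surrogate: cheap enough for millions of samples.
    samples = rng.normal(0.0, 1.0, size=(1_000_000, 2))
    pf = np.mean(features(samples) @ coef <= 0.0)
    print(f"estimated probability of failure: {pf:.4f}")
    ```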

  6. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation-based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single-discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel, efficient, and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in the unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lower computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed.

  7. Reliability-based pricing of electricity service

    SciTech Connect

    Hagazy, Y.A.

    1993-01-01

    This research has two objectives: (a) to develop a price structure that unbundles electricity service by reliability level, and (b) to analyze the implications of such a structure for economic welfare, system operation, load management, and energy conservation. The authors developed a pricing mechanism for electricity service that combines priority (reliability differentiation) pricing with real-time Ramsey-type pricing. The electric utility is assumed to be a single welfare-maximizing firm able to set and communicate prices instantly. At times of supply shortage, the utility has direct control over customer loads and follows a rationing method among customers willing to accept power interruptions. Customers are therefore given the choice either to be served with a high-reliability "firm" service, or to be subject to interruption. To encourage customers to make rational reliability choices, a payment/compensation mechanism was integrated into the welfare-maximization model. To account for the uncertainties associated with the operation of electric power systems, a stochastic production cost simulation is also integrated with the model. The stochastic production cost simulation yields estimates of the expected production cost, marginal costs, and system reliability level at total demand. The authors examine the welfare gains and the energy and reserve savings possible under different pricing schemes. The results show that reliability-based pricing yields higher economic efficiency and greater energy and power savings than both spot and Ramsey pricing when imperfect system reliability is considered. The implication of this research is therefore that reliability-based pricing provides a feasible alternative for electric utilities to power purchases and/or new capacity investment.

  8. Value-based reliability transmission planning

    SciTech Connect

    Dalton, J.G. III; Garrison, D.L.; Fallon, C.M.

    1996-08-01

    This paper presents a new value-based reliability planning (VBRP) process proposed for planning Duke Power Company's (DPC) regional transmission system. All transmission-served customers are fed from DPC's regional transmission system, which consists of a 44-kV predominantly radial system and a 100-kV predominantly non-radial system. In the past, any single contingency that could occur during system peak conditions and cause a thermal overload required the overloaded facility to be upgraded, regardless of the costs or the likelihood of the overload occurring. The new VBRP process is based on transmission system reliability evaluation and includes the following important elements: (1) a ten-year historical data base describing the probabilities of forced outages for lines and transformers; (2) a five-year average load duration curve describing the probability of an overload should a contingency occur; (3) a customer outage cost data base; and (4) probabilistic techniques. The new process attempts to balance the costs of improving service reliability with the benefits or value that these improvements bring to the customers. The objective is to provide the customers their required level of reliability while minimizing the total cost of their electric service.
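
    For a single contingency, the balance the paper describes reduces to comparing an upgrade's cost against the expected customer outage cost it avoids. A minimal sketch with placeholder numbers (none of these are DPC data):

    ```python
    # Illustrative value-based check of one overload contingency.
    p_outage_per_year = 0.20          # forced-outage probability of the line
    p_overload_given_outage = 0.15    # from the load-duration curve
    unserved_mwh_per_event = 120.0    # expected energy curtailed per event
    customer_cost_per_mwh = 8_000.0   # customer outage cost, $/MWh
    years, upgrade_cost = 20, 1_500_000.0

    expected_annual_cost = (p_outage_per_year * p_overload_given_outage
                            * unserved_mwh_per_event * customer_cost_per_mwh)
    print(f"expected outage cost over {years} y: "
          f"${expected_annual_cost * years:,.0f}")
    print("upgrade justified" if expected_annual_cost * years > upgrade_cost
          else "defer upgrade")
    ```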

  9. Reliability based fatigue design and maintenance procedures

    NASA Technical Reports Server (NTRS)

    Hanagud, S.

    1977-01-01

    A stochastic model has been developed to describe the fatigue process by assuming a varying hazard rate. This stochastic model can be used to obtain the probability of a crack of a certain length at a given location after a certain number of cycles or a certain time. Quantitative estimation of the developed model is also discussed. Application of the model to develop a procedure for reliability-based, cost-effective, fail-safe structural design is presented. This design procedure includes the reliability improvement due to inspection and repair. Methods of obtaining optimum inspection and maintenance schemes are treated.

  10. A Reliability-Based Track Fusion Algorithm

    PubMed Central

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    Common track fusion algorithms in multi-sensor systems have several defects, such as a serious imbalance between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve information quality, outliers in the local tracks are eliminated first. The reliability of each local track is then calculated, and the local tracks with high reliability are chosen for state estimation fusion. In contrast to existing methods, TFR reduces high fusion errors at the inflection points of system tracks and obtains high accuracy at less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments. PMID:25950174
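
    A minimal sketch in the spirit of TFR: score each local track's reliability, drop the least reliable tracks, and fuse the rest by reliability-weighted averaging. The reliability score below (inverse distance to a windowed cross-sensor median) is a stand-in for the paper's definition, and all data are simulated:

    ```python
    import numpy as np

    def fuse_tracks(local_tracks, window=5, keep=0.5):
        """local_tracks: list of (T, d) arrays of per-sensor state estimates."""
        fused, T = [], len(local_tracks[0])
        for t in range(T):
            states = np.array([trk[t] for trk in local_tracks])
            # Crude reliability: sensors whose estimates sit close to the
            # cross-sensor median over a sliding window score higher.
            lo = max(0, t - window)
            med = np.median([trk[lo:t + 1].mean(axis=0)
                             for trk in local_tracks], axis=0)
            rel = 1.0 / (np.linalg.norm(states - med, axis=1) + 1e-9)
            # Keep the most reliable tracks; fuse by reliability-weighted mean.
            order = np.argsort(rel)[::-1][: max(1, int(keep * len(rel)))]
            w = rel[order] / rel[order].sum()
            fused.append((w[:, None] * states[order]).sum(axis=0))
        return np.array(fused)

    # Three noisy sensors observing a constant-velocity target in 2-D.
    rng = np.random.default_rng(1)
    truth = np.stack([np.linspace(0, 99, 100),
                      np.linspace(0, 49.5, 100)], axis=1)
    tracks = [truth + rng.normal(0, s, truth.shape) for s in (0.5, 1.0, 5.0)]
    fused = fuse_tracks(tracks)
    print("fusion RMSE:", np.sqrt(((fused - truth) ** 2).mean()))
    ```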

  11. A reliability-based track fusion algorithm.

    PubMed

    Xu, Li; Pan, Liqiang; Jin, Shuilin; Liu, Haibo; Yin, Guisheng

    2015-01-01

    Common track fusion algorithms in multi-sensor systems have several defects, such as a serious imbalance between accuracy and computational cost, identical treatment of all sensor information regardless of quality, and high fusion errors at inflection points. To address these defects, a track fusion algorithm based on reliability (TFR) is presented for multi-sensor, multi-target environments. To improve information quality, outliers in the local tracks are eliminated first. The reliability of each local track is then calculated, and the local tracks with high reliability are chosen for state estimation fusion. In contrast to existing methods, TFR reduces high fusion errors at the inflection points of system tracks and obtains high accuracy at less computational cost. Simulation results verify the effectiveness and superiority of the algorithm in dense sensor environments.

  12. Reliability Evaluation of Next Generation Inverter: Cooperative Research and Development Final Report, CRADA Number CRD-12-478

    SciTech Connect

    Paret, Paul

    2016-10-01

    The National Renewable Energy Laboratory (NREL) will conduct thermal and reliability modeling on three sets of power modules for the development of a next generation inverter for electric traction drive vehicles. These modules will be chosen by General Motors (GM) to represent three distinct technological approaches to inverter power module packaging. Likely failure mechanisms will be identified in each package and a physics-of-failure-based reliability assessment will be conducted.

  13. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models built from real error data collected on a multiprocessor system are described. Model development, from the raw error data to the estimation of cumulative reward, is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation, in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.
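
    The paper's central observation, holding times that are not simple exponentials, is easy to reproduce in simulation. Below, Weibull-distributed holding times (assumed shape and scale, not the measured IBM 3081 data) are compared against the exponential with the same mean that a Markov model would impose:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Holding times in an operational state: Weibull with decreasing hazard
    # (shape < 1), qualitatively like non-exponential measured data.
    shape, scale = 0.7, 100.0
    samples = scale * rng.weibull(shape, size=100_000)
    mean = samples.mean()

    for t in (10, 100, 500):
        empirical = (samples > t).mean()          # semi-Markov (actual) law
        markov = np.exp(-t / mean)                # exponential, same mean
        print(f"P(holding > {t:>3}): semi-Markov {empirical:.3f}  "
              f"vs Markov approx {markov:.3f}")
    ```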

  14. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
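
    In miniature, the report's workflow, a response-surface approximation of buckling strength, a reliability index, and sensitivities of that index to each random variable's mean and standard deviation, might look like this. The limit state, distributions, and coefficients are invented stand-ins, not the report's finite element model:

    ```python
    import numpy as np

    # Invented stand-ins: x = [E1 (fiber modulus), t (wall thickness), P (load)].
    MEANS = np.array([20.0e6, 0.08, 3.0e4])
    STDS  = np.array([1.6e6, 0.002, 4.5e3])

    def g(x):
        # Response-surface stand-in for the buckling limit state:
        # capacity(E1, t) minus applied load P; g < 0 denotes buckling failure.
        capacity = 0.09 * x[:, 0] * x[:, 1] ** 1.5
        return capacity - x[:, 2]

    def beta(means, stds, n=200_000, seed=3):
        rng = np.random.default_rng(seed)   # fixed seed: common random numbers
        gx = g(rng.normal(means, stds, size=(n, len(means))))
        return gx.mean() / gx.std()         # mean-value reliability index

    b0 = beta(MEANS, STDS)
    print(f"reliability index beta = {b0:.2f}")
    # Finite-difference sensitivity of beta to each mean and standard deviation.
    for i, name in enumerate(["E1", "t", "P"]):
        dm = MEANS.copy(); dm[i] *= 1.01
        ds = STDS.copy();  ds[i] *= 1.01
        print(f"{name}: d_beta(mean +1%) = {beta(dm, STDS) - b0:+.3f}, "
              f"d_beta(std +1%) = {beta(MEANS, ds) - b0:+.3f}")
    ```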

  15. A reliability measure of protein-protein interactions and a reliability measure-based search engine.

    PubMed

    Park, Byungkyu; Han, Kyungsook

    2010-02-01

    Many methods developed for estimating the reliability of protein-protein interactions are based on the topology of protein-protein interaction networks. This paper describes a new reliability measure for protein-protein interactions, which does not rely on the topology of protein interaction networks, but instead expresses biological information on functional roles, sub-cellular localisations and protein classes as a scoring schema. The new measure is useful for filtering many spurious interactions, as well as for estimating the reliability of protein interaction data. In particular, the reliability measure can be used to search for protein-protein interactions with a desired reliability in databases. The reliability-based search engine is available at http://yeast.hpid.org. We believe this is the first search engine for interacting proteins made available to the public. The search engine and the reliability measure of protein interactions should provide useful information for determining which proteins to focus on.

  16. Refining network reconstruction based on functional reliability.

    PubMed

    Zhang, Yunjun; Ouyang, Qi; Geng, Zhi

    2014-07-21

    Reliable functioning is crucial for the survival and development of the genetic regulatory networks in living cells and organisms. This functional reliability is an important feature of the networks and reflects the structural features that have been embedded in the regulatory networks by evolution. In this paper, we integrate this reliability into network reconstruction. We introduce the concept of dependency probability to measure the dependency of functional reliability on network edges. We also propose a method to estimate the dependency probability and select edges with high contributions to functional reliability. We use two real examples, the regulatory network of the cell cycle of the budding yeast and that of the fission yeast, to demonstrate that the proposed method improves network reconstruction. In addition, the dependency probability is robust in calculation and can be easily implemented in practice.

  17. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  18. System Reliability for LED-Based Products

    SciTech Connect

    Davis, J Lynn; Mills, Karmann; Lamvik, Michael; Yaga, Robert; Shepherd, Sarah D; Bittle, James; Baldasaro, Nick; Solano, Eric; Bobashev, Georgiy; Johnson, Cortina; Evans, Amy

    2014-04-07

    Results from accelerated life tests (ALT) on mass-produced, commercially available 6” downlights are reported, along with results from commercial LEDs. The downlights capture many of the design features found in modern luminaires. In general, a systems perspective is required to understand the reliability of these devices, since LED failure is rare; components such as drivers, lenses, and reflectors are more likely to impact luminaire reliability than the LEDs.
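
    The "systems perspective" is series-system reliability: the luminaire works only if every subsystem works, so a short-lived driver dominates even when the LEDs themselves rarely fail. A sketch with assumed constant failure rates (illustrative values, not measurements from the study):

    ```python
    import math

    # Illustrative constant failure rates (failures per hour); not measured data.
    rates = {"LED array": 0.05e-6, "driver": 1.0e-6,
             "lens/optics": 0.2e-6, "reflector/housing": 0.1e-6}

    hours = 50_000.0
    component_R = {k: math.exp(-lam * hours) for k, lam in rates.items()}
    system_R = math.prod(component_R.values())   # series system: R = prod(R_i)

    for k, r in component_R.items():
        print(f"{k:18s} R({hours:.0f} h) = {r:.4f}")
    print(f"{'luminaire':18s} R({hours:.0f} h) = {system_R:.4f}")
    ```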

  19. Reliability-based covariance control design

    SciTech Connect

    Field, R.V. Jr.; Bergman, L.A.

    1997-03-01

    An extension to classical covariance control methods, introduced by Skelton and co-workers, is proposed specifically for application to the control of civil engineering structures subjected to random dynamic excitations. The covariance structure of the system is developed directly from specification of its reliability via the assumption of independent (Poisson) outcrossings of its stationary response process from a polyhedral safe region. This leads to a set of state covariance controllers, each of which guarantees that the closed-loop system will possess the specified level of reliability. An example civil engineering structure is considered.

  1. Optimum structural design based on reliability analysis

    NASA Technical Reports Server (NTRS)

    Heer, E.; Shinozuka, M.; Yang, J. N.

    1970-01-01

    Proof-load testing improves statistical confidence in the estimate of reliability, and numerical examples indicate a definite advantage of the proof-load approach in terms of savings in structural weight. The cost of establishing the statistical distribution of strength of the structural material is also introduced into the cost formulation.
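
    The proof-load effect can be shown directly: surviving a proof load truncates the lower tail of the strength distribution, which raises the conditional reliability. A minimal Monte Carlo sketch with assumed normal strength and load distributions (all numbers invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 1_000_000
    strength = rng.normal(100.0, 10.0, n)   # assumed strength distribution
    load = rng.normal(70.0, 15.0, n)        # assumed service-load distribution

    r_prior = (strength > load).mean()
    # Surviving a proof load of 90 eliminates the weak lower tail.
    survived = strength > 90.0
    r_post = (strength[survived] > load[survived]).mean()
    print(f"reliability before proof test:         {r_prior:.4f}")
    print(f"reliability given proof-test survival: {r_post:.4f}")
    ```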

  2. Reliability of digital reactor protection system based on extenics.

    PubMed

    Zhao, Jing; He, Ya-Nan; Gu, Peng-Fei; Chen, Wei-Hua; Gao, Feng

    2016-01-01

    After the Fukushima nuclear accident, the safety of nuclear power plants (NPPs) has drawn widespread concern. The reliability of the reactor protection system (RPS) is directly related to the safety of NPPs; however, it is difficult to evaluate the reliability of a digital RPS accurately. Methods based on probability estimation carry uncertainties and cannot reflect the reliability status of the RPS dynamically or support maintenance and troubleshooting. In this paper, a quantitative reliability analysis method based on extenics is proposed for the (safety-critical) digital RPS, by which the relationship between the reliability and the response time of the RPS is constructed. As an example, the reliability of the RPS for a CPR1000 NPP is modeled and analyzed by the proposed method. The results show that the proposed method can estimate RPS reliability effectively and provide support for the maintenance and troubleshooting of digital RPS.

  3. Optimal reliability-based planning of experiments for POD curves

    SciTech Connect

    Soerensen, J.D.; Faber, M.H.; Kroon, I.B.

    1995-12-31

    Optimal planning of crack detection tests is considered. The tests are used to update the information on the reliability of inspection techniques, modeled by probability-of-detection (POD) curves. It is shown how cost-optimal and reliability-based test plans can be obtained using First Order Reliability Methods in combination with life-cycle cost-optimal inspection and maintenance planning. The methodology is based on preposterior analysis from Bayesian decision theory. An illustrative example is shown.

  4. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  5. A Reliability-Based Method to Sensor Data Fusion

    PubMed Central

    Zhuang, Miaoyan; Xie, Chunhe

    2017-01-01

    Multi-sensor data fusion technology based on Dempster–Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method. PMID:28678179

  6. A Reliability-Based Method to Sensor Data Fusion.

    PubMed

    Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe

    2017-07-05

    Multi-sensor data fusion technology based on Dempster-Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method.

  7. Reliability Evaluation Based on Different Distributions of Random Load

    PubMed Central

    Gao, Peng; Xie, Liyang

    2013-01-01

    The reliability models of the components under the nonstationary random load are developed in this paper. Through the definition of the distribution of the random load, it can be seen that the conventional load-strength interference model is suitable for the calculation of the static reliability of the components, which does not reflect the dynamic change in the reliability and cannot be used to evaluate the dynamic reliability. Therefore, by developing an approach to converting the nonstationary random load into a random load whose pdf is the same at each moment the random load applies, the reliability model based on the longitudinal distribution is derived. Moreover, through the definition of the transverse standard load and the transverse standard load coefficient, the reliability model based on the transverse distribution is derived. When the occurrence of the random load follows a Poisson process, the dynamic reliability models considering strength degradation are derived. These models take the correlation between the random load and the strength into consideration. The results show that the dispersion of the initial strength and that of the transverse standard load coefficient have great influences on the reliability and the hazard rate of the components. PMID:24223504
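
    The static-versus-dynamic distinction drawn in the abstract can be sketched by Monte Carlo: loads arrive as a Poisson process and strength degrades over time, so reliability depends on the time horizon rather than on a single load-strength comparison. All distributions, rates, and the linear degradation law below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def dynamic_reliability(t, n=20_000, rate=2.0):
        """P(no load application up to time t exceeds the degrading strength).
        Loads arrive as a Poisson process; strength decays linearly in time."""
        survived = np.ones(n, dtype=bool)
        s0 = rng.normal(100.0, 8.0, n)                  # initial strength
        k = rng.poisson(rate * t, n)                    # number of load events
        for i in range(n):
            times = np.sort(rng.uniform(0.0, t, k[i]))  # load arrival times
            loads = rng.normal(60.0, 12.0, k[i])        # i.i.d. load magnitudes
            strength = s0[i] - 0.5 * times              # linear degradation
            survived[i] = np.all(loads < strength)
        return survived.mean()

    for t in (1, 5, 10, 20):
        print(f"R(t={t:>2}) = {dynamic_reliability(t):.4f}")
    ```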

  8. Reliability modeling of fault-tolerant computer based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1987-01-01

    Digital fault-tolerant computer-based systems have become commonplace in military and commercial avionics. These systems hold the promise of increased availability, reliability, and maintainability over conventional analog-based systems through the application of replicated digital computers arranged in fault-tolerant configurations. Three tightly coupled factors of paramount importance, ultimately determining the viability of these systems, are reliability, safety, and profitability. Reliability, the major driver, affects virtually every aspect of design, packaging, and field operations, and eventually produces profit for commercial applications or increased national security. However, the utilization of digital computer systems makes the task of producing credible reliability assessments a formidable one for the reliability engineer. The root of the problem lies in the digital computer's unique adaptability to changing requirements, its computational power, and its ability to test itself efficiently. Addressed here are the nuances of modeling the reliability of systems with large state sizes, in the Markov sense, which result from replicated redundant hardware, and the modeling of factors which can reduce reliability without concomitant depletion of hardware. Advanced fault-handling models are described, and methods of acquiring and measuring parameters for these models are delineated.

  9. Reliability-based Assessment of Stability of Slopes

    NASA Astrophysics Data System (ADS)

    Hsein Juang, C.; Zhang, J.; Gong, W.

    2015-09-01

    Multiple sources of uncertainties often exist in the evaluation of slope stability. When assessing the stability of slopes in the face of uncertainties, it is desirable, and sometimes necessary, to adopt reliability-based approaches that consider these uncertainties explicitly. This paper focuses on the practical procedures developed recently for the reliability-based assessment of slope stability. The statistical characterization of model uncertainty and parameter uncertainty is first described, followed by an evaluation of the failure probability of a slope corresponding to a single slip surface, and the system failure probability. The availability of site-specific information then makes it possible to update the reliability of the slope through Bayes' theorem. Furthermore, how to perform reliability-based design when the statistics of random variables cannot be determined accurately is also discussed. Finally, case studies are presented to illustrate the benefit of performing reliability-based design and the procedure for conducting reliability-based robust design when the statistics of the random variables are incomplete.

  10. Reliability-based lifetime maintenance of aging highway bridges

    NASA Astrophysics Data System (ADS)

    Enright, Michael P.; Frangopol, Dan M.

    2000-06-01

    As the nation's infrastructure continues to age, the cost of maintaining it at an acceptable safety level continues to increase. In the United States, about one of every three bridges is rated structurally deficient and/or functionally obsolete. It will require about $80 billion to eliminate the current backlog of bridge deficiencies and maintain repair levels. Unfortunately, the financial resources allocated for these activities fall extremely short of the demand. Although several existing and emerging NDT techniques are available to gather inspection data, current maintenance planning decisions for deficient bridges are based on data from subjective condition assessments and do not consider the reliability of bridge components and systems. Recently, reliability-based optimum maintenance planning strategies have been developed. They can be used to predict inspection and repair times to achieve minimum life-cycle cost of deteriorating structural systems. In this study, a reliability-based methodology which takes into account loading randomness and history, and randomness in strength and degradation resulting from aggressive environmental factors, is used to predict the time-dependent reliability of aging highway bridges. A methodology for incorporating inspection data into reliability predictions is also presented. Finally, optimal lifetime maintenance strategies are identified, in which optimal inspection/repair times are found based on minimum expected life-cycle cost under prescribed reliability constraints. The influence of the discount rate on optimum solutions is evaluated.

  11. Reliability evaluation of microgrid considering incentive-based demand response

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Cheng; Zhang, Yong-Jun

    2017-07-01

    Incentive-based demand response (IBDR) can guide customers to adjust their electricity-use behaviour and actively curtail load. Meanwhile, distributed generation (DG) and energy storage systems (ESS) can provide time for the implementation of IBDR. This paper focuses on the reliability evaluation of a microgrid considering IBDR. First, the mechanism of IBDR and its impact on power supply reliability are analysed. Second, an IBDR dispatch model considering the customer's comprehensive assessment and a customer response model are developed. Third, a reliability evaluation method considering IBDR, based on Monte Carlo simulation, is proposed. Finally, the validity of the above models and method is studied through numerical tests on the modified RBTS Bus6 test system. Simulation results demonstrate that IBDR can improve the reliability of a microgrid.

  12. Developing safety performance functions incorporating reliability-based risk measures.

    PubMed

    Ibrahim, Shewkar El-Bassiouni; Sayed, Tarek

    2011-11-01

    Current geometric design guides provide deterministic standards where the safety margin of the design output is generally unknown and there is little knowledge of the safety implications of deviating from these standards. Several studies have advocated probabilistic geometric design where reliability analysis can be used to account for the uncertainty in the design parameters and to provide a risk measure of the implication of deviation from design standards. However, there is currently no link between measures of design reliability and the quantification of safety using collision frequency. The analysis presented in this paper attempts to bridge this gap by incorporating a reliability-based quantitative risk measure such as the probability of non-compliance (P(nc)) in safety performance functions (SPFs). Establishing this link will allow admitting reliability-based design into traditional benefit-cost analysis and should lead to a wider application of the reliability technique in road design. The present application is concerned with the design of horizontal curves, where the limit state function is defined in terms of the available (supply) and stopping (demand) sight distances. A comprehensive collision and geometric design database of two-lane rural highways is used to investigate the effect of the probability of non-compliance on safety. The reliability analysis was carried out using the First Order Reliability Method (FORM). Two Negative Binomial (NB) SPFs were developed to compare models with and without the reliability-based risk measures. It was found that models incorporating the P(nc) provided a better fit to the data set than the traditional (without risk) NB SPFs for total, injury and fatality (I+F) and property damage only (PDO) collisions.
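
    The risk measure at the heart of this paper, the probability of non-compliance, comes from a limit state g = (available sight distance) - (required stopping sight distance). The paper evaluates it with FORM; a Monte Carlo version with invented input distributions looks like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 1_000_000

    # Invented input distributions for a horizontal curve (not from the paper).
    speed = rng.normal(90.0, 8.0, n) / 3.6        # operating speed, m/s
    t_prc = rng.lognormal(np.log(1.5), 0.3, n)    # perception-reaction time, s
    decel = rng.normal(3.4, 0.5, n)               # deceleration, m/s^2
    supply = rng.normal(160.0, 10.0, n)           # available sight distance, m

    # Stopping sight distance demand: reaction distance + braking distance.
    demand = speed * t_prc + speed ** 2 / (2.0 * decel)
    p_nc = np.mean(demand > supply)               # probability of non-compliance
    print(f"P(nc) = {p_nc:.4f}")
    ```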

  13. Reliability-Based Decision Fusion in Multimodal Biometric Verification Systems

    NASA Astrophysics Data System (ADS)

    Kryszczuk, Krzysztof; Richiardi, Jonas; Prodanov, Plamen; Drygajlo, Andrzej

    2007-12-01

    We present a methodology for reliability estimation in the multimodal biometric verification scenario. Reliability estimation has been shown to be an efficient and accurate way of predicting and correcting erroneous classification decisions in both unimodal (speech, face, online signature) and multimodal (speech and face) systems. While the initial research results indicate the high potential of the proposed methodology, the performance of reliability estimation in a multimodal setting has not been sufficiently studied or evaluated. In this paper, we demonstrate the advantages of using the unimodal reliability information in order to perform an efficient biometric fusion of two modalities. We further show the presented method to be superior to state-of-the-art multimodal decision-level fusion schemes. The experimental evaluation presented in this paper is based on the popular benchmarking bimodal BANCA database.

  14. Fatigue reliability based optimal design of planar compliant micropositioning stages

    NASA Astrophysics Data System (ADS)

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  15. Fatigue reliability based optimal design of planar compliant micropositioning stages.

    PubMed

    Wang, Qiliang; Zhang, Xianmin

    2015-10-01

    Conventional compliant micropositioning stages are usually developed based on static strength and deterministic methods, which may lead to either unsafe or excessive designs. This paper presents a fatigue reliability analysis and optimal design of a three-degree-of-freedom (3 DOF) flexure-based micropositioning stage. Kinematic, modal, static, and fatigue stress modelling of the stage were conducted using the finite element method. The maximum equivalent fatigue stress in the hinges was derived using sequential quadratic programming. The fatigue strength of the hinges was obtained by considering various influencing factors. On this basis, the fatigue reliability of the hinges was analysed using the stress-strength interference method. Fatigue-reliability-based optimal design of the stage was then conducted using the genetic algorithm and MATLAB. To make fatigue life testing easier, a 1 DOF stage was then optimized and manufactured. Experimental results demonstrate the validity of the approach.

  16. Multi-mode reliability-based design of horizontal curves.

    PubMed

    Essa, Mohamed; Sayed, Tarek; Hussein, Mohamed

    2016-08-01

    Recently, reliability analysis has been advocated as an effective approach to account for uncertainty in the geometric design process and to evaluate the risk associated with a particular design. In this approach, a risk measure (e.g. probability of noncompliance) is calculated to represent the probability that a specific design would not meet standard requirements. The majority of previous applications of reliability analysis in geometric design focused on evaluating the probability of noncompliance for only one mode of noncompliance such as insufficient sight distance. However, in many design situations, more than one mode of noncompliance may be present (e.g. insufficient sight distance and vehicle skidding at horizontal curves). In these situations, utilizing a multi-mode reliability approach that considers more than one failure (noncompliance) mode is required. The main objective of this paper is to demonstrate the application of multi-mode (system) reliability analysis to the design of horizontal curves. The process is demonstrated by a case study of Sea-to-Sky Highway located between Vancouver and Whistler, in southern British Columbia, Canada. Two noncompliance modes were considered: insufficient sight distance and vehicle skidding. The results show the importance of accounting for several noncompliance modes in the reliability model. The system reliability concept could be used in future studies to calibrate the design of various design elements in order to achieve consistent safety levels based on all possible modes of noncompliance.
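
    System (multi-mode) non-compliance is the probability of the union of the individual noncompliance events, which per-mode analyses miss when the modes share random inputs such as speed. A Monte Carlo sketch with invented distributions for the two modes named in the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 1_000_000

    # Shared randomness (speed) correlates the two non-compliance modes.
    speed = rng.normal(25.0, 2.5, n)        # operating speed, m/s (invented)
    supply = rng.normal(150.0, 10.0, n)     # available sight distance, m
    friction = rng.normal(0.35, 0.05, n)    # available side friction factor
    radius = 300.0                          # curve radius, m

    g_sight = supply - (speed * 1.5 + speed ** 2 / 6.8)   # mode 1: sight distance
    g_skid = friction - speed ** 2 / (9.81 * radius)      # mode 2: skidding

    p1, p2 = np.mean(g_sight < 0), np.mean(g_skid < 0)
    p_sys = np.mean((g_sight < 0) | (g_skid < 0))         # series-system P(nc)
    print(f"P1={p1:.4f}  P2={p2:.4f}  "
          f"bounds=[{max(p1, p2):.4f}, {p1 + p2:.4f}]  system={p_sys:.4f}")
    ```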

  17. Indel Reliability in Indel-Based Phylogenetic Inference

    PubMed Central

    Ashkenazy, Haim; Cohen, Ofir; Pupko, Tal; Huchon, Dorothée

    2014-01-01

    It is often assumed that it is unlikely that the same insertion or deletion (indel) event occurred at the same position in two independent evolutionary lineages, and thus, indel-based inference of phylogeny should be less subject to homoplasy compared with standard inference which is based on substitution events. Indeed, indels were successfully used to solve debated evolutionary relationships among various taxonomical groups. However, indels are never directly observed but rather inferred from the alignment and thus indel-based inference may be sensitive to alignment errors. It is hypothesized that phylogenetic reconstruction would be more accurate if it relied only on a subset of reliable indels instead of the entire indel data. Here, we developed a method to quantify the reliability of indel characters by measuring how often they appear in a set of alternative multiple sequence alignments. Our approach is based on the assumption that indels that are consistently present in most alternative alignments are more reliable compared with indels that appear only in a small subset of these alignments. Using simulated and empirical data, we studied the impact of filtering and weighting indels by their reliability scores on the accuracy of indel-based phylogenetic reconstruction. The new method is available as a web-server at http://guidance.tau.ac.il/RELINDEL/. PMID:25409663

  18. Reliability, Compliance, and Security in Web-Based Course Assessments

    ERIC Educational Resources Information Center

    Bonham, Scott

    2008-01-01

    Pre- and postcourse assessment has become a very important tool for education research in physics and other areas. The web offers an attractive alternative to in-class paper administration, but concerns about web-based administration include reliability due to changes in medium, student compliance rates, and test security, both question leakage…

  19. Reliability-Based Design Optimization Using Buffered Failure Probability

    DTIC Science & Technology

    2010-06-01

    …missile. One component of the missile's launcher is an optical system. Suppose that two different optical systems, 1 and 2, are available for…

  1. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  2. Intelligent computer based reliability assessment of multichip modules

    NASA Astrophysics Data System (ADS)

    Grosse, Ian R.; Katragadda, Prasanna; Bhattacharya, Sandeepan; Kulkarni, Sarang

    1994-04-01

    To deliver reliable multichip modules (MCMs) in the face of rapidly changing technology, computer-based tools are needed for predicting the thermal and mechanical behavior of various MCM package designs and selecting the most promising design in terms of performance, robustness, and reliability. The design tool must be able to address new design technologies, manufacturing processes, novel materials, application criteria, and thermal environmental conditions. Reliability is one of the most important factors for determining design quality and hence must be a central consideration in the design of MCM packages. Clearly, design engineers need computer-based simulation tools for rapid and efficient electrical, thermal, and mechanical modeling and optimization of advanced devices. For three-dimensional thermal and mechanical simulation of advanced devices, the finite element method (FEM) is increasingly becoming the numerical method of choice. FEM is a versatile and sophisticated numerical technique for solving the partial differential equations that describe the physical behavior of complex designs. AUTOTHERM(TM) is an MCM design tool developed by Mentor Graphics for Motorola, Inc. This tool performs thermal analysis of MCM packages using finite element analysis techniques. The tool uses the philosophy of object-oriented representation of components and simplified specification of boundary conditions for the thermal analysis, so that the user need not be an expert in finite element techniques. Different package types can be assessed and environmental conditions can be modeled. It also includes a detailed reliability module which allows the user to choose a desired failure mechanism (model). Current tools, however, perform thermal and/or stress analysis without addressing the robustness and optimality of MCM designs, and their reliability prediction techniques are based on closed-form analytical models that can often fail to predict the cycles to failure (N

  3. Diagnostic reliability of MMPI-2 computer-based test interpretations.

    PubMed

    Pant, Hina; McCabe, Brian J; Deskovitz, Mark A; Weed, Nathan C; Williams, John E

    2014-09-01

    Reflecting the common use of the MMPI-2 to provide diagnostic considerations, computer-based test interpretations (CBTIs) also typically offer diagnostic suggestions. However, these diagnostic suggestions can sometimes be shown to vary widely across different CBTI programs even for identical MMPI-2 profiles. The present study evaluated the diagnostic reliability of 6 commercially available CBTIs using a 20-item Q-sort task developed for this study. Four raters each sorted diagnostic classifications based on these 6 CBTI reports for 20 MMPI-2 profiles. Two questions were addressed. First, do users of CBTIs understand the diagnostic information contained within the reports similarly? Overall, diagnostic sorts of the CBTIs showed moderate inter-interpreter diagnostic reliability (mean r = .56), with sorts for the 1/2/3 profile showing the highest inter-interpreter diagnostic reliability (mean r = .67). Second, do different CBTIs programs vary with respect to diagnostic suggestions? It was found that diagnostic sorts of the CBTIs had a mean inter-CBTI diagnostic reliability of r = .56, indicating moderate but not strong agreement across CBTIs in terms of diagnostic suggestions. The strongest inter-CBTI diagnostic agreement was found for sorts of the 1/2/3 profile CBTIs (mean r = .71). Limitations and future directions are discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    Conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure are a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the

  5. Reliability-based optimization under random vibration environment

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1981-01-01

    A methodology of formulating the optimum design problem for structural systems with random parameters and subjected to random vibration as a mathematical programming problem is presented. The proposed method is applied to the optimum design of a cantilever beam with a tip mass and a truss structure supporting a water tank. The excitations are assumed to be Gaussian processes and the geometric and material properties are taken to be normally distributed random variables. The probabilistic constraints are specified for individual failure modes since it is easier to specify the reliability level for each failure mode keeping in view the consequences of failure in that particular mode. The time parameter appearing in the random vibration based constraints is eliminated by replacing the probabilities of failure by suitable upper bounds. The numerical results demonstrate the feasibility and effectiveness of applying the reliability-based design concepts to structures with random parameters and operating in random vibration environment.

  6. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory.

    PubMed

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-18

    Sensor data fusion plays an important role in fault diagnosis. Dempster-Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence is highly conflicting, it may yield a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods.
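
    A minimal sketch of the combination step: the BBAs are first averaged with reliability weights, and the average is then combined with itself (n - 1) times by Dempster's rule. The weights are given directly here, whereas the paper derives them from evidence distance and belief entropy; the frame and mass values are invented:

    ```python
    from functools import reduce

    def dempster(m1, m2):
        """Dempster's rule of combination for BBAs with frozenset focal elements."""
        combined, conflict = {}, 0.0
        for f1, v1 in m1.items():
            for f2, v2 in m2.items():
                inter = f1 & f2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        return {f: v / (1.0 - conflict) for f, v in combined.items()}

    def weighted_average(bbas, weights):
        """Reliability-weighted average of a list of BBAs (weights sum to 1)."""
        avg = {}
        for m, w in zip(bbas, weights):
            for f, v in m.items():
                avg[f] = avg.get(f, 0.0) + w * v
        return avg

    A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
    # Three sensor reports; sensor 3 conflicts with the other two.
    bbas = [{A: 0.7, B: 0.1, AB: 0.2},
            {A: 0.6, B: 0.2, AB: 0.2},
            {B: 0.9, A: 0.1}]
    weights = [0.45, 0.45, 0.10]   # assumed sensor reliabilities

    avg = weighted_average(bbas, weights)
    fused = reduce(dempster, [avg] * len(bbas))   # n - 1 combinations
    print({''.join(sorted(f)): round(v, 3) for f, v in fused.items()})
    ```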

  7. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, in situations where the evidence is highly conflicting, it may yield a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method performs better in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to existing methods. PMID:26797611

  8. Reliability based design including future tests and multiagent approaches

    NASA Astrophysics Data System (ADS)

    Villanueva, Diane

    The initial stages of reliability-based design optimization involve formulating objective functions and constraints and building a model to estimate the reliability of the design under quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: (1) the trade-off between the effect of a test with post-test redesign on reliability and its cost, and (2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. A methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors, together with redesign rules, to simulate alternative future test and redesign outcomes and form a probabilistic estimate of the reliability and cost for a given design (a sketch follows). It was further explored how modeling a future test and redesign gives a company an opportunity to balance development cost against performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require searching multiple candidate regions of the design space, expending most of the computation needed to define multiple alternate designs; thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space. The efficiency of this method
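    A minimal Monte Carlo sketch of the future-test-and-redesign idea, with assumed Gaussian error models and an assumed threshold-based redesign rule (none of the numbers come from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed (illustrative) error model: the computed safety factor is 1.5; the
# computational error and the test measurement error are both Gaussian.
s_calc = 1.5
e_calc = rng.normal(0.0, 0.10, n)       # true factor = s_calc * (1 + e_calc)
e_test = rng.normal(0.0, 0.03, n)       # the test observes the true factor with noise
s_true = s_calc * (1.0 + e_calc)
s_meas = s_true * (1.0 + e_test)

# Assumed redesign rule: if the measured factor falls below a threshold,
# resize so the measured factor is restored to the nominal value.
threshold = 1.40
redesign = s_meas < threshold
s_after = s_true * np.where(redesign, s_calc / s_meas, 1.0)

print(f"P(redesign)  = {redesign.mean():.3f}")
print(f"reliability: {(s_true >= 1.0).mean():.4f} -> {(s_after >= 1.0).mean():.4f}")
# Sweeping `threshold` trades the expected redesign (and test) cost against
# the reliability gain, which is the balance the methodology quantifies.
```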

  9. A Research Roadmap for Computation-Based Human Reliability Analysis

    SciTech Connect

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it (i) interfaces with a dynamic computation engine that includes a full-scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing the modeling uncertainty found in current plant risk models.

  10. Reliability-based analysis and design optimization for durability

    NASA Astrophysics Data System (ADS)

    Choi, Kyung K.; Youn, Byeng D.; Tang, Jun; Hardee, Edward

    2005-05-01

    In Army mechanical systems, fatigue caused by external and inertial transient loads over the service life often leads to structural failure due to accumulated damage. Structural durability analysis, which predicts the fatigue life of mechanical components subjected to dynamic stresses and strains, is a compute-intensive multidisciplinary simulation process, since it requires the integration of several computer-aided engineering tools and considerable data communication and computation. Uncertainties in geometric dimensions due to manufacturing tolerances make the fatigue life of a mechanical component nondeterministic. Because uncertainty propagation to structural fatigue under transient dynamic loading is both numerically complicated and extremely computationally expensive, it is a challenging task to develop a structural durability-based design optimization process with reliability analysis that ascertains whether the optimal design is reliable. The objective of this paper is to demonstrate an integrated CAD-based computer-aided engineering process that effectively carries out design optimization for structural durability, yielding a durable and cost-effectively manufacturable product. The paper shows preliminary results of reliability-based durability design optimization for the Army Stryker A-Arm.

  11. Efficient reliability-based design of mooring systems

    SciTech Connect

    Larsen, K.

    1996-12-31

    Uncertainties both in the environmentally induced load effects and in the strength of mooring line components make rational design of mooring systems a complex task. The methods of structural reliability, which take these uncertainties into account, have been applied in an efficient probabilistic analysis procedure for the tension overload limit state of mooring lines. This paper outlines the philosophy and methodology of this procedure, followed by numerical examples of a turret-moored ship. Both base-case annual failure probabilities and results from a number of sensitivity analyses are presented. It is demonstrated that the reliability-based design procedure can be used effectively to quantify the safety against failure due to tension overload of moorings. The results of the case studies indicate that the largest uncertainties are associated with the distribution parameters of the chain link and steel wire rope segment tension capacity, and with the modelling of the environment. The modelling of spreading angles between waves, wind and current versus collinearity, and of double-peaked versus single-peaked wave spectrum models, are key parameters in the reliability assessment.

  12. Limit states and reliability-based pipeline design. Final report

    SciTech Connect

    Zimmerman, T.J.E.; Chen, Q.; Pandey, M.D.

    1997-06-01

    This report provides the results of a study to develop limit states design (LSD) procedures for pipelines. Limit states design, also known as load and resistance factor design (LRFD), provides a unified approach to dealing with all relevant failure modes and combinations of concern. It explicitly accounts for the uncertainties that naturally occur in the determination of the loads acting on a pipeline and in the resistance of the pipe to failure. The load and resistance factors used are based on reliability considerations; however, the designer is not faced with carrying out probabilistic calculations. That work is done during development and periodic updating of the LSD document. This report provides background information concerning limit states and reliability-based design (Section 2), gives the limit states design procedures that were developed (Section 3), and presents results of the reliability analyses that were undertaken to partially calibrate the LSD method (Section 4). An appendix contains LSD design examples to demonstrate use of the method. Section 3, Limit States Design, has been written in the format of a recommended practice and structured so that, in future, it can easily be converted to a limit states design code format. Throughout the report, figures and tables are given at the end of each section, with the exception of Section 3, where, to facilitate understanding of the LSD method, they have been included with the text.

  13. Reliability-Based Control Design for Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.

    2005-01-01

    This paper presents a robust control design methodology for systems with probabilistic parametric uncertainty. Control design is carried out by solving a reliability-based multi-objective optimization problem where the probability of violating design requirements is minimized. Simultaneously, failure domains are optimally enlarged to enable global improvements in the closed-loop performance. To enable an efficient numerical implementation, a hybrid approach for estimating reliability metrics is developed. This approach, which integrates deterministic sampling and asymptotic approximations, greatly reduces the numerical burden associated with complex probabilistic computations without compromising the accuracy of the results. Examples using output-feedback and full-state feedback with state estimation are used to demonstrate the ideas proposed.

  14. Probabilistic confidence for decisions based on uncertain reliability estimates

    NASA Astrophysics Data System (ADS)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  15. Improved reliability analysis method based on the failure assessment diagram

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhang, Zheng; Zhong, Qunpeng

    2012-07-01

    With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements, and material properties considered in the structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram (a sketch follows). Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that the two methods give consistent sensitivities of input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of the PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
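    The visualization step can be sketched as follows: scatter random assessment points on the FAD, estimate their 2D kernel density, and count the fraction falling outside the failure assessment line. The R6 Option-1 curve and all distribution parameters below are assumptions for illustration, not values from the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fad_limit(lr):
    """Failure assessment line; the R6 Option-1 curve is assumed here."""
    return (1.0 - 0.14 * lr**2) * (0.3 + 0.7 * np.exp(-0.65 * lr**6))

rng = np.random.default_rng(1)
n = 20_000

# Illustrative random assessment points: load ratio Lr and toughness ratio Kr,
# lognormally distributed around a nominal assessment point.
lr = rng.lognormal(np.log(0.55), 0.15, n)
kr = rng.lognormal(np.log(0.60), 0.20, n)

failed = kr > fad_limit(lr)              # points falling outside the FAD line
print(f"estimated failure probability = {failed.mean():.4f}")

# 2D kernel density estimate of the assessment-point cloud; its contours can
# be drawn on the FAD to show where the probability mass actually sits.
kde = gaussian_kde(np.vstack([lr, kr]))
grid_lr, grid_kr = np.mgrid[0.2:1.2:80j, 0.2:1.2:80j]
density = kde(np.vstack([grid_lr.ravel(), grid_kr.ravel()])).reshape(grid_lr.shape)
print(f"peak point density on the grid = {density.max():.3f}")
```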

  16. Study of vertical breakwater reliability based on copulas

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Li, Jingjing; Li, Xue; Wei, Yong

    2016-04-01

    The reliability of a vertical breakwater is calculated using direct integration methods based on joint density functions. The horizontal and uplifting wave forces on the vertical breakwater are well fitted by the lognormal and the Gumbel distributions, respectively. The joint distribution of the horizontal and uplifting wave forces is analyzed using different probabilistic models, including the bivariate logistic Gumbel distribution, the bivariate lognormal distribution, and three bivariate Archimedean copula functions constructed with different marginal distributions. Fully nested copulas are used to construct multivariate distributions that take related variables into account. Several goodness-of-fit tests are carried out to determine the best bivariate copula model for wave forces on a vertical breakwater. We show that a bivariate model constructed from the Frank copula, with Gumbel and lognormal marginal distributions for the uplifting pressure and the horizontal wave force respectively, gives the best reliability analysis (a sampling sketch follows). The results show that the failure probability of the vertical breakwater calculated by the multivariate density function is comparable to that obtained by the Joint Committee on Structural Safety methods. As copulas are suitable for constructing bivariate or multivariate joint distributions, they have great potential in reliability analysis for other coastal structures.
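    A minimal sketch of that kind of model: sample from a Frank copula by conditional inversion, map the uniforms through assumed lognormal and Gumbel marginals, and estimate a failure probability for an illustrative limit state (all parameter values are invented for the example):

```python
import numpy as np
from scipy.stats import lognorm, gumbel_r

rng = np.random.default_rng(2)
n, theta = 200_000, 4.0   # theta > 0: positive dependence in the Frank copula

# Sample (u, v) from the Frank copula by conditional inversion:
# v = -log(1 + w*(e^-theta - 1) / (w + (1-w)*e^(-theta*u))) / theta.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
D = np.expm1(-theta)                       # e^{-theta} - 1
b = w * D / (w + (1.0 - w) * np.exp(-theta * u))
v = -np.log1p(b) / theta

# Illustrative marginals: horizontal wave force lognormal, uplift Gumbel.
H = lognorm(s=0.25, scale=900.0).ppf(u)    # kN
U = gumbel_r(loc=400.0, scale=80.0).ppf(v) # kN

# Simple invented limit state: sliding resistance R exceeded by the demand.
R = 2000.0
pf = np.mean(H + 0.8 * U > R)
print(f"P_f (Frank copula, theta={theta}) = {pf:.2e}")
```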

  17. Reliability Quantification of Advanced Stirling Convertor (ASC) Components

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward

    2010-01-01

    The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints, so reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines, such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that rely on failure rates derived from similar equipment or on expert judgment.

  18. Establishing maintenance intervals based on measurement reliability of engineering endpoints.

    PubMed

    James, P J

    2000-01-01

    Methods developed by the metrological community and principles used by the research community were integrated to provide a basis for a periodic maintenance interval analysis system. Engineering endpoints are used as measurement attributes on which to base two primary quality indicators: accuracy and reliability. Also key to establishing appropriate maintenance intervals is the ability to recognize two primary failure modes: random failure and time-related failure. The primary objective of the maintenance program is to avert predictable and preventable device failure, and understanding time-related failures enables service personnel to set intervals accordingly.

  19. SMT: a reliability based interactive DTI tractography algorithm.

    PubMed

    Yoldemir, Burak; Acar, Burak; Firat, Zeynep; Kiliçkesmez, Özgür

    2012-10-01

    Tractography refers to the in vivo reconstruction of fiber bundles, e.g., in the brain, via the analysis of anisotropic diffusion patterns measured by diffusion weighted magnetic resonance imaging (DWI). The data provide a probabilistic model of local diffusion which has been shown to correlate with the underlying fibrous structure under certain assumptions. Deterministic tractography suffers from uncertainties at kissing and crossing fibers, at different levels depending on the diffusion model employed (e.g., DTI, HARDI), yet it is easy to interpret and use in the clinic. In this study, a novel generic algorithm, split and merge tractography (SMT), is proposed that provides a real-time, interactive, reliability-ranked assessment of potential pathways, communicating the true information content of the data without sacrificing the usability of tractography. Specifically, SMT takes a precomputed set of tracts and the diffusion data (e.g., DTI, HARDI) as its input, generates a set of short (reliable) tracts by splitting at unreliable points, and forms quasi-random clusters of short tracts by means of which the space of short-tract clusters, representing complete tracts, is sampled. A histogram of the clusters thus formed is built in an efficient way and used for real-time, interactive assessment of pathways. The current implementation uses DTI and fourth-order Runge-Kutta integration based streamline tractography as its input. The method is qualitatively assessed on phantom and real DTI data. Phantom experiments demonstrated that SMT is capable of highlighting problematic regions and suggesting pathways that are completely overlooked by the input streamline tractography. Real-data experiment results correlate well with known anatomy and demonstrate that the reliability ranking can efficiently suppress erroneous tracts interactively. The method is compared to a recent method that pursues a similar approach, yet in a global optimization based framework. The

  20. Multi-objective reliability-based optimization with stochastic metamodels.

    PubMed

    Coelho, Rajan Filomeno; Bouillard, Philippe

    2011-01-01

    This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications in a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm-II). Then, in the perspective of applying the method to large-scale structural engineering problems--for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation--the second part of the study is concerned with the need to reduce the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing a representation of the responses by a limited set of coefficients. Then, a metamodel is carried out by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
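    The two-step metamodel can be sketched compactly. Below, a degree-2 polynomial chaos expansion in a standard normal variable is fitted by least squares at a few design points, and the PCE coefficients are then interpolated over the design variable; RBF interpolation is used here as a simple stand-in for the kriging step, and the response function is invented for illustration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)

def response(x, xi):
    """Illustrative stochastic response: design variable x, standard-normal xi."""
    return (x - 1.5) ** 2 + (0.2 + 0.1 * x) * xi + 0.05 * xi**2

# Step 1: at each sampled design point, fit a degree-2 polynomial chaos
# expansion g(x, xi) ~ sum_k c_k(x) He_k(xi) by least squares.
deg = 2
x_design = np.linspace(0.0, 3.0, 7)
coeffs = []
for x in x_design:
    xi = rng.normal(size=200)
    Psi = hermevander(xi, deg)                 # [He_0, He_1, He_2] basis matrix
    c, *_ = np.linalg.lstsq(Psi, response(x, xi), rcond=None)
    coeffs.append(c)

# Step 2: interpolate the PCE coefficients over the design variable
# (RBF interpolation used here as a simple stand-in for kriging).
interp = RBFInterpolator(x_design[:, None], np.array(coeffs))

# The metamodel now gives cheap random responses at any design point.
x_new = 1.1
c_new = interp([[x_new]])[0]
xi = rng.normal(size=100_000)
g = hermevander(xi, deg) @ c_new
print(f"metamodel at x={x_new}: mean={g.mean():.3f}, std={g.std():.3f}")
```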

  1. Multisite reliability of MR-based functional connectivity.

    PubMed

    Noble, Stephanie; Scheinost, Dustin; Finn, Emily S; Shen, Xilin; Papademetris, Xenophon; McEwen, Sarah C; Bearden, Carrie E; Addington, Jean; Goodyear, Bradley; Cadenhead, Kristin S; Mirzakhanian, Heline; Cornblatt, Barbara A; Olvet, Doreen M; Mathalon, Daniel H; McGlashan, Thomas H; Perkins, Diana O; Belger, Aysenil; Seidman, Larry J; Thermenos, Heidi; Tsuang, Ming T; van Erp, Theo G M; Walker, Elaine F; Hamann, Stephan; Woods, Scott W; Cannon, Tyrone D; Constable, R Todd

    2017-02-01

    Recent years have witnessed an increasing number of multisite MRI functional connectivity (fcMRI) studies. While multisite studies provide an efficient way to accelerate data collection and increase sample sizes, especially for rare clinical populations, any effects of site or MRI scanner could ultimately limit power and weaken results. Little data exists on the stability of functional connectivity measurements across sites and sessions. In this study, we assess the influence of site and session on resting state functional connectivity measurements in a healthy cohort of traveling subjects (8 subjects scanned twice at each of 8 sites) scanned as part of the North American Prodrome Longitudinal Study (NAPLS). Reliability was investigated in three types of connectivity analyses: (1) seed-based connectivity with posterior cingulate cortex (PCC), right motor cortex (RMC), and left thalamus (LT) as seeds; (2) the intrinsic connectivity distribution (ICD), a voxel-wise connectivity measure; and (3) matrix connectivity, a whole-brain, atlas-based approach to assessing connectivity between nodes. Contributions to variability in connectivity due to subject, site, and day-of-scan were quantified and used to assess between-session (test-retest) reliability in accordance with Generalizability Theory. Overall, no major site, scanner manufacturer, or day-of-scan effects were found for the univariate connectivity analyses; instead, subject effects dominated relative to the other measured factors. However, summaries of voxel-wise connectivity were found to be sensitive to site and scanner manufacturer effects. For all connectivity measures, although subject variance was three times the site variance, the residual represented 60-80% of the variance, indicating that connectivity differed greatly from scan to scan independent of any of the measured factors (i.e., subject, site, and day-of-scan). Thus, for a single 5min scan, reliability across connectivity measures was poor (ICC=0

  2. Stochastic structural and reliability based optimization of tuned mass damper

    NASA Astrophysics Data System (ADS)

    Mrabet, E.; Guedri, M.; Ichchou, M. N.; Ghanmi, S.

    2015-08-01

    The purpose of the current work is to present and discuss a technique for optimizing the parameters of a vibration absorber in the presence of uncertain bounded structural parameters. The technique used in the optimization is an interval extension based on a Taylor expansion of the objective function. The technique permits the transformation of the problem, initially non-deterministic, into two independent deterministic sub-problems (see the sketch below). Two optimization strategies are considered: Stochastic Structural Optimization (SSO) and Reliability Based Optimization (RBO). It has been demonstrated on two different structures that the technique is valid for the SSO problem, even for high levels of uncertainty, and that it is less suitable for the RBO problem, especially at high levels of uncertainty.
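    A minimal sketch of the interval-extension idea: a first-order Taylor expansion around the interval midpoint turns the uncertain objective into lower- and upper-bound functions, each of which is minimized as an ordinary deterministic problem. The objective and interval below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative TMD-style objective: response measure as a function of the
# absorber tuning variable x and an uncertain structural parameter p.
def f(x, p):
    return (x - p) ** 2 + 0.1 * x

p_c, dp = 1.0, 0.2          # interval parameter: p in [p_c - dp, p_c + dp]

def f_bounds(x, h=1e-6):
    """First-order interval extension of f around the interval midpoint."""
    mid = f(x, p_c)
    dfdp = (f(x, p_c + h) - f(x, p_c - h)) / (2 * h)
    radius = abs(dfdp) * dp
    return mid - radius, mid + radius

# Two independent deterministic sub-problems: minimise the upper bound
# (robust design) and, for reference, the lower bound.
res_upper = minimize_scalar(lambda x: f_bounds(x)[1], bounds=(0.0, 2.0), method="bounded")
res_lower = minimize_scalar(lambda x: f_bounds(x)[0], bounds=(0.0, 2.0), method="bounded")
print(f"robust (upper-bound) optimum: x = {res_upper.x:.4f}, f_upper = {res_upper.fun:.4f}")
print(f"lower-bound optimum:          x = {res_lower.x:.4f}, f_lower = {res_lower.fun:.4f}")
```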

  3. Reliability-based robust design optimization of vehicle components, Part I: Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2015-06-01

    The reliability-based design optimization, the reliability sensitivity analysis and robust design method are employed to present a practical and effective approach for reliability-based robust design optimization of vehicle components. A procedure for reliability-based robust design optimization of vehicle components is proposed. Application of the method is illustrated by reliability-based robust design optimization of axle and spring. Numerical results have shown that the proposed method can be trusted to perform reliability-based robust design optimization of vehicle components.

  4. Quantifying neurotransmission reliability through metrics-based information analysis.

    PubMed

    Brasselet, Romain; Johansson, Roland S; Arleo, Angelo

    2011-04-01

    We set forth an information-theoretical measure to quantify neurotransmission reliability while taking into full account the metrical properties of the spike train space. This parametric information analysis relies on similarity measures induced by the metrical relations between neural responses as spikes flow in. Thus, in order to assess the entropy, the conditional entropy, and the overall information transfer, this method does not require any a priori decoding algorithm to partition the space into equivalence classes. It therefore allows the optimal parameters of a class of distances to be determined with respect to information transmission. To validate the proposed information-theoretical approach, we study precise temporal decoding of human somatosensory signals recorded using microneurography experiments. For this analysis, we employ a similarity measure based on the Victor-Purpura spike train metrics. We show that with appropriate parameters of this distance, the relative spike times of the mechanoreceptors' responses convey enough information to perform optimal discrimination--defined as maximum metrical information and zero conditional entropy--of 81 distinct stimuli within 40 ms of the first afferent spike. The proposed information-theoretical measure proves to be a suitable generalization of Shannon mutual information in order to consider the metrics of temporal codes explicitly. It allows neurotransmission reliability to be assessed in the presence of large spike train spaces (e.g., neural population codes) with high temporal precision.
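    The Victor-Purpura metric referenced above has a compact dynamic-programming form, shown below with invented spike trains; scanning the cost parameter q is what lets the optimal temporal precision be identified:

```python
import numpy as np

def victor_purpura(a, b, q):
    """Victor-Purpura spike-train distance: minimum cost of transforming
    train a into train b, where inserting or deleting a spike costs 1 and
    shifting a spike by dt costs q*|dt| (q in 1/s sets temporal precision)."""
    n, m = len(a), len(b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # delete all spikes of a
    D[0, :] = np.arange(m + 1)          # insert all spikes of b
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return D[n, m]

t1 = [0.010, 0.025, 0.041]   # spike times in seconds (invented)
t2 = [0.012, 0.040]
for q in (0.0, 50.0, 1000.0):
    print(f"q = {q:6.1f}/s  D = {victor_purpura(t1, t2, q):.3f}")
# q = 0 reduces to a rate code (|n_a - n_b|); large q approaches coincidence
# detection, so scanning q identifies the temporal precision that maximises
# the metrical information about the stimulus set.
```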

  5. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... STANDARDS Pt. 238, App. E Appendix E to Part 238—General Principles of Reliability-Based Maintenance... minimum total cost, including maintenance costs and the costs of residual failures. (b) Reliability-based... result of the undetected failure of a hidden function. (c) In a reliability-based maintenance...

  6. A Research of Weapon System Storage Reliability Simulation Method Based on Fuzzy Theory

    NASA Astrophysics Data System (ADS)

    Shi, Yonggang; Wu, Xuguang; Chen, Haijian; Xu, Tingxue

    To address the storage reliability analysis problem for new, complex weapon equipment systems, this paper investigates fuzzy fault tree analysis and fuzzy system storage reliability simulation, treats the weapon system as a fuzzy system, and studies its storage reliability based on fuzzy theory, thereby providing a storage reliability research method for new, complex weapon equipment systems. As an example, the fuzzy fault tree of one type of missile control instrument is built up from function analysis, and the fuzzy system storage reliability simulation method is used to analyze the storage reliability index of the control instrument.
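    A minimal sketch of fuzzy fault tree arithmetic: triangular fuzzy failure probabilities are propagated through AND/OR gates by interval arithmetic on alpha-cuts. The tree structure and all numbers are invented for illustration:

```python
import numpy as np

def tri_cut(tfn, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return a + alpha * (m - a), b - alpha * (b - m)

def gate_and(cuts):
    # Both gate formulas are monotone increasing in the input probabilities,
    # so interval endpoints map directly to endpoints.
    return np.prod([c[0] for c in cuts]), np.prod([c[1] for c in cuts])

def gate_or(cuts):
    return (1 - np.prod([1 - c[0] for c in cuts]),
            1 - np.prod([1 - c[1] for c in cuts]))

# Illustrative triangular fuzzy storage-failure probabilities of three
# basic events of a control-instrument fault tree.
e1, e2, e3 = (0.01, 0.02, 0.04), (0.005, 0.01, 0.02), (0.02, 0.03, 0.05)

# Top event: (e1 AND e2) OR e3 (assumed structure for the sketch).
for alpha in (0.0, 0.5, 1.0):
    c12 = gate_and([tri_cut(e1, alpha), tri_cut(e2, alpha)])
    top = gate_or([c12, tri_cut(e3, alpha)])
    print(f"alpha={alpha:.1f}: top-event probability in [{top[0]:.5f}, {top[1]:.5f}]")
```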

  7. RELIABILITY BASED DESIGN OF FIXED FOUNDATION WIND TURBINES

    SciTech Connect

    Nichols, R.

    2013-10-14

    Recent analyses of offshore wind turbine foundations using both applicable API and IEC standards show that the total load demand from wind and waves is greatest in wave-driven storms. Further, analysis of overturning moment (OTM) loads reveals that impact forces exerted by breaking waves are the largest contributor to OTM in big storms at wind speeds above the operating range of 25 m/s. Currently, no codes or standards for offshore wind power generators have been adopted by the Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE) for use on the Outer Continental Shelf (OCS). Current design methods based on allowable stress design (ASD) lump the uncertainty in the variation of loads transferred to the foundation, and in the geotechnical capacity of the soil and rock to support those loads, into a factor of safety. Sources of uncertainty include spatial and temporal variation of engineering properties, reliability of property measurements, applicability and sufficiency of sampling and testing methods, modeling errors, and variability of estimated load predictions. In ASD these sources of variability are generally given qualitative rather than quantitative consideration. The IEC 61400-3 design standard for offshore wind turbines is based on ASD methods. Load and resistance factor design (LRFD) methods are being increasingly used in the design of structures. Uncertainties such as those listed above can be included quantitatively in the LRFD process, in which load factors and resistance factors are statistically based. This type of analysis recognizes that there is always some probability of failure and enables that probability to be quantified. This paper presents an integrated approach consisting of field observations and numerical simulation to establish the distribution of loads from breaking waves to support the LRFD of fixed offshore foundations.

  8. Simulation and Non-Simulation Based Human Reliability Analysis Approaches

    SciTech Connect

    Boring, Ronald Laurids; Shirley, Rachel Elizabeth; Joe, Jeffrey Clark; Mandelli, Diego

    2014-12-01

    Part of the U.S. Department of Energy’s Light Water Reactor Sustainability (LWRS) Program, the Risk-Informed Safety Margin Characterization (RISMC) Pathway develops approaches to estimating and managing safety margins. RISMC simulations pair deterministic plant physics models with probabilistic risk models. As human interactions are an essential element of plant risk, it is necessary to integrate human actions into the RISMC risk model. In this report, we review simulation-based and non-simulation-based human reliability assessment (HRA) methods. Chapter 2 surveys non-simulation-based HRA methods. Conventional HRA methods target static Probabilistic Risk Assessments for Level 1 events. These methods would require significant modification for use in dynamic simulation of Level 2 and Level 3 events. Chapter 3 is a review of human performance models. A variety of methods and models simulate dynamic human performance; however, most of these human performance models were developed outside the risk domain and have not been used for HRA. The exception is the ADS-IDAC model, which can be thought of as a virtual operator program. This model is resource-intensive but provides a detailed model of every operator action in a given scenario, along with models of numerous factors that can influence operator performance. Finally, Chapter 4 reviews the treatment of timing of operator actions in HRA methods. This chapter is an example of one of the critical gaps between existing HRA methods and the needs of dynamic HRA. This report summarizes the foundational information needed to develop a feasible approach to modeling human interactions in the RISMC simulations.

  9. Simulating SpaceWire-Based Reliable and Timely Protocols

    NASA Astrophysics Data System (ADS)

    Jameuz, D.

    2009-05-01

    SpaceWire links are now widely used in spacecraft and beyond, but implementations of SpaceWire-based communications are as numerous as the projects using SpaceWire because of the lack of a standard message-passing protocol for SpaceWire. This is particularly problematic for the implementation of SpaceWire networks. In order to respond to the increasing pressure in this direction, we developed consolidated ideas about the classes of Quality of Service (QoS) that are required for any message-passing protocol for SpaceWire to be useful and adopted by users and developers. To validate these ideas, ESA is currently funding a number of prototyping activities and will soon issue an open tender for a simulation activity. The latter project will result in a simulator that allows validating both new SpaceWire-based protocols and any specific instance of a SpaceWire-based network (using validated technology) considered for a given operational space sub-system to be developed. This paper presents the classes of Quality of Service proposed to allow for reliability and timeliness, as well as the simulator that will enable validating these QoS classes. We recall the general properties of digital communication protocols and select a set of these properties relevant to SpaceWire-based embedded systems in general, and to space applications in particular, in the frame of the 'Single Fault' hypothesis. This leads to the definition of four classes of Quality of Service: two non-real-time (minimum service and transport-like service) and two real-time (FTRT service and Acknowledged Real-Time service). We then provide some hints for the implementation of these QoS classes in SpaceWire networks: while the non-real-time classes can clearly be implemented within the frame of the current SpaceWire standard, it appears that implementing the real-time classes requires significant additional capability to be embedded in some SpaceWire routers and nodes. Last, we present

  10. Reliability-based optimum inspection and maintenance procedures. [for engines

    NASA Technical Reports Server (NTRS)

    Nanagud, S.; Uppaluri, B.

    1975-01-01

    The development of reliability-based optimum inspection and maintenance schedules for engines requires an understanding of the fatigue behavior of the engines. Critical areas of the engine structure prone to fatigue damage are usually identified beforehand or after the fleet has been put into operation. In these areas, fatigue cracks initiate after several flight hours, and these cracks grow in length until failure takes place when they attain critical lengths. Crack initiation time and growth rate are considered to be random variables. Usually, the inspection (fatigue) or test data from similar engines are used as prior distributions. The existing state of the art is to ignore the different lengths of cracks observed at various inspections and to consider only the fact that a crack existed (or did not exist) at the time of inspection. In this paper, a procedure has been developed to obtain the probability of finding a crack of a given size at a certain time if the probability distributions for crack initiation and rates of growth are known (a Monte Carlo sketch follows). Application of the developed stochastic models to devise optimum procedures for inspection and maintenance is also discussed.
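    The probability in question can be approximated directly by Monte Carlo once the two prior distributions are chosen. A minimal sketch with assumed lognormal priors (all parameter values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000

# Assumed prior distributions (illustrative): crack initiation time in flight
# hours and crack growth rate in mm per flight hour.
t_init = rng.lognormal(mean=np.log(4000.0), sigma=0.35, size=n)
rate = rng.lognormal(mean=np.log(0.002), sigma=0.50, size=n)

def p_crack_at_least(a, t):
    """Probability that a crack of length >= a (mm) exists at time t (hours):
    the crack has initiated and has grown past a."""
    length = np.clip(t - t_init, 0.0, None) * rate
    return (length >= a).mean()

for t in (3000, 5000, 8000):
    print(f"t = {t:5d} h: P(a >= 1 mm) = {p_crack_at_least(1.0, t):.4f}")
# Evaluating this probability on a grid of candidate inspection times, against
# the detectable crack size of the inspection method, yields the schedule.
```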

  11. Reliable freestanding position-based routing in highway scenarios.

    PubMed

    Galaviz-Mosqueda, Gabriel A; Aquino-Santos, Raúl; Villarreal-Reyes, Salvador; Rivera-Rodríguez, Raúl; Villaseñor-González, Luis; Edwards, Arthur

    2012-10-24

    Vehicular Ad Hoc Networks (VANETs) are considered by car manufacturers and the research community as the enabling technology to radically improve the safety, efficiency and comfort of everyday driving. However, before VANET technology can fulfill all its expected potential, several difficulties must be addressed. One key issue arising when working with VANETs is the complexity of the networking protocols compared to those used by traditional infrastructure networks. Therefore, proper design of the routing strategy becomes a main issue for the effective deployment of VANETs. In this paper, a reliable freestanding position-based routing algorithm (FPBR) for highway scenarios is proposed. For this scenario, several important issues such as the high mobility of vehicles and the propagation conditions may affect the performance of the routing strategy. These constraints have only been partially addressed in previous proposals. In contrast, the design approach used for developing FPBR considered the constraints imposed by a highway scenario and implements mechanisms to overcome them. FPBR performance is compared to one of the leading protocols for highway scenarios. Performance metrics show that FPBR yields similar results when considering free-space propagation conditions, and outperforms the leading protocol when considering a realistic highway path loss model.

  14. A Vision for Spaceflight Reliability: NASA's Objectives Based Strategy

    NASA Technical Reports Server (NTRS)

    Groen, Frank; Evans, John; Hall, Tony

    2015-01-01

    In defining the direction for a new Reliability and Maintainability (R&M) standard, OSMA has extracted the essential objectives that our programs need to undertake a reliable mission. These objectives have been structured to lead mission planning through the construction of an objective hierarchy, which defines the critical approaches for achieving high reliability and maintainability. Creating a hierarchy as a basis for assurance implementation is a proven approach; yet it also holds the opportunity to enable new directions as NASA moves forward in tackling the challenges of space exploration.

  15. Integrated circuit reliability. Citations from the NTIS data base

    NASA Astrophysics Data System (ADS)

    Reed, W. E.

    1980-06-01

    The bibliography presents research pertinent to design, reliability prediction, failure and malfunction, processing techniques, and radiation damage. This updated bibliography contains 193 abstracts, 17 of which are new entries to the previous edition.

  16. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2009-01-01

    A stochastic design optimization (SDO) methodology has been developed to design components of an airframe structure that can be made of metallic and composite materials. The design is obtained as a function of the risk level, or reliability, p. The design method treats uncertainties in load, strength, and material properties as distribution functions, defined with mean values and standard deviations. A design constraint or failure mode is specified as a function of reliability p, and the solution of the stochastic optimization yields the weight of the structure as a function of p. Optimum weight versus reliability p traces out an inverted-S-shaped graph (illustrated in the sketch below). The center of the inverted S corresponds to a 50 percent (p = 0.5) probability of success. A heavy design with weight approaching infinity would be required for a near-zero rate of failure, corresponding to reliability p approaching unity (p = 1). Weight can be reduced to a small value for the most failure-prone design, with a reliability that approaches zero (p = 0). Reliability can be chosen differently for different components of an airframe structure; for example, the landing gear can be designed for very high reliability, whereas the requirement can be relaxed somewhat for a raked wingtip. The SDO capability is obtained by combining three codes: (1) the MSC/Nastran code as the deterministic analysis tool, (2) the fast probabilistic integrator (the FPI module of the NESSUS software) as the probabilistic calculator, and (3) NASA Glenn Research Center's optimization testbed CometBoards as the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated with an academic example and a real-life raked wingtip structure of the Boeing 767-400 extended range airliner made of metallic and composite materials.
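    The inverted-S relationship can be reproduced with a one-variable toy problem: size a tension member so that the probability that strength exceeds stress equals p, and watch the required area (a weight proxy) grow without bound as p approaches 1. All numbers are invented for illustration:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

# Illustrative tension member: normal load (N) and normal strength (MPa);
# weight is proportional to the cross-sectional area A (mm^2).
muP, sdP = 100e3, 15e3        # N
muS, sdS = 300.0, 30.0        # MPa

def reliability(A):
    """P(strength - load/A > 0) for normal strength and load."""
    mu_g = muS - muP / A
    sd_g = np.hypot(sdS, sdP / A)
    return norm.cdf(mu_g / sd_g)

def area_for(p):
    """Smallest area achieving reliability p (root of reliability(A) = p)."""
    return brentq(lambda A: reliability(A) - p, 200.0, 1e6)

for p in (0.01, 0.5, 0.9, 0.999, 0.999999):
    print(f"p = {p:>9}: required area = {area_for(p):10.1f} mm^2")
# Area (hence weight) versus p traces the inverted S: nearly flat around
# p = 0.5 and growing without bound as p -> 1.
```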

  17. Physical-Mechanisms Based Reliability Analysis For Emerging Technologies

    DTIC Science & Technology

    2017-05-05

    Space and defense systems require the highest levels of functional performance. This research investigates the physical mechanisms impacting reliability in emerging technologies of interest to U.S. military and space applications, with the goal of developing a quantitative understanding of their impact on reliability.

  18. Reliability-Based Life Assessment of Stirling Convertor Heater Head

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Halford, Gary R.; Korovaichuk, Igor

    2004-01-01

    Onboard radioisotope power systems being developed and planned for NASA's deep-space missions require reliable design lifetimes of up to 14 yr. The structurally critical heater head of the high-efficiency Stirling power convertor has undergone extensive computational analysis of operating temperatures, stresses, and creep resistance of the thin-walled Inconel 718 bill of material. A preliminary assessment of the effect of uncertainties in the material behavior was also performed. Creep failure resistance of the thin-walled heater head could show variation due to small deviations in the manufactured thickness and in uncertainties in operating temperature and pressure. Durability prediction and reliability of the heater head are affected by these deviations from nominal design conditions. Therefore, it is important to include the effects of these uncertainties in predicting the probability of survival of the heater head under mission loads. Furthermore, it may be possible for the heater head to experience rare incidences of small temperature excursions of short duration. These rare incidences would affect the creep strain rate and, therefore, the life. This paper addresses the effects of such rare incidences on the reliability. In addition, the sensitivities of variables affecting the reliability are quantified, and guidelines developed to improve the reliability are outlined. Heater head reliability is being quantified with data from NASA Glenn Research Center's accelerated benchmark testing program.

  19. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
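    The MPP search itself is most often done with the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration; below is a minimal sketch on an invented limit state in standard normal space (a generic FORM illustration, not the paper's RIA/PMA derivation):

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, grad, u0, tol=1e-8, itmax=100):
    """Hasofer-Lind/Rackwitz-Fiessler search for the most probable point (MPP)
    in standard normal space; returns the MPP and the reliability index."""
    u = np.asarray(u0, float)
    for _ in range(itmax):
        gu, dg = g(u), grad(u)
        u_new = (dg @ u - gu) * dg / (dg @ dg)   # project onto linearised g = 0
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)

# Illustrative limit state in standard normal space (invented for the sketch).
g = lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2
grad = lambda u: np.array([-1.0, -u[1]])

mpp, beta = form_hlrf(g, grad, u0=[0.0, 0.5])
print(f"MPP = {np.round(mpp, 4)}, beta = {beta:.4f}, P_f ~ {norm.cdf(-beta):.3e}")
# RIA constrains beta directly; PMA runs the inverse search, locating the
# worst performance on the sphere ||u|| = beta_target.
```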

  1. Reliability Generalization of Curriculum-Based Measurement Reading Aloud: A Meta-Analytic Review

    ERIC Educational Resources Information Center

    Yeo, Seungsoo

    2011-01-01

    The purpose of this study was to employ the meta-analytic method of Reliability Generalization to investigate the magnitude and variability of reliability estimates obtained across studies using Curriculum-Based Measurement reading aloud. Twenty-eight studies that met the inclusion criteria were used to calculate the overall mean reliability of…

  2. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Maintenance Programs E Appendix E to Part 238 Transportation Other Regulations Relating to Transportation... STANDARDS Pt. 238, App. E Appendix E to Part 238—General Principles of Reliability-Based Maintenance... reliability beyond the design reliability. (e) When a maintenance program is developed, it includes tasks that...

  4. Reliability-based condition assessment of steel containment and liners

    SciTech Connect

    Ellingwood, B.; Bhattacharya, B.; Zheng, R.

    1996-11-01

    Steel containments and liners in nuclear power plants may be exposed to aggressive environments that may cause their strength and stiffness to decrease during the plant service life. Among the factors recognized as having the potential to cause structural deterioration are uniform, pitting or crevice corrosion; fatigue, including crack initiation and propagation to fracture; elevated temperature; and irradiation. The evaluation of steel containments and liners for continued service must provide assurance that they are able to withstand future extreme loads during the service period with a level of reliability that is sufficient for public safety. Rational methodologies to provide such assurances can be developed using modern structural reliability analysis principles that take uncertainties in loading, strength, and degradation resulting from environmental factors into account. The research described in this report is in support of the Steel Containments and Liners Program being conducted for the US Nuclear Regulatory Commission by the Oak Ridge National Laboratory. The research demonstrates the feasibility of using reliability analysis as a tool for performing condition assessments and service life predictions of steel containments and liners. Mathematical models that describe time-dependent changes in steel due to aggressive environmental factors are identified, and statistical data supporting the use of these models in time-dependent reliability analysis are summarized. The analysis of steel containment fragility is described, and simple illustrations of the impact on reliability of structural degradation are provided. The role of nondestructive evaluation in time-dependent reliability analysis, both in terms of defect detection and sizing, is examined. A Markov model provides a tool for accounting for time-dependent changes in damage condition of a structural component or system. 151 refs.
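    The Markov idea mentioned at the end can be sketched in a few lines: a transition matrix over discrete damage conditions is propagated through the service life, with the failed state absorbing. The states and annual transition probabilities below are invented for illustration:

```python
import numpy as np

# Four damage conditions for a containment/liner component: intact, minor,
# severe, failed (absorbing). Illustrative annual transition probabilities.
P = np.array([
    [0.97, 0.03, 0.00, 0.00],
    [0.00, 0.93, 0.06, 0.01],
    [0.00, 0.00, 0.90, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])   # new component, certainly intact
for year in (10, 20, 40):
    dist = state @ np.linalg.matrix_power(P, year)
    print(f"year {year:2d}: P(failed) = {dist[-1]:.4f}, condition = {np.round(dist, 3)}")
# Inspection outcomes (e.g., NDE detecting 'minor' damage) update `state`,
# after which the same chain projects the remaining service life.
```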

  5. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    SciTech Connect

    Bloomfield, H.S.

    1991-06-01

    The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.
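    The redundancy trade-off at the heart of the study reduces to k-of-n reliability arithmetic. A minimal sketch with invented unit reliabilities and masses (not the study's 800-kWe numbers):

```python
from math import comb

def k_of_n(r_unit, n, k):
    """Probability that at least k of n identical, independent units survive."""
    return sum(comb(n, j) * r_unit**j * (1 - r_unit)**(n - j) for j in range(k, n + 1))

# Illustrative options for one powerplant rating: unit reliability r over the
# mission and unit mass in t (assumed numbers, purely for the sketch).
r = 0.90
options = {
    "1 full unit (no redundancy)":   (k_of_n(r, 1, 1), 1 * 10.0),
    "2 full units (full redundancy)": (k_of_n(r, 2, 1), 2 * 10.0),
    "4 part units (3-of-4 partial)":  (k_of_n(r, 4, 3), 4 * 4.0),
}
for name, (R, mass) in options.items():
    print(f"{name:33s} R = {R:.4f}, mass = {mass:.0f} t")
# Partial redundancy with smaller units can meet a reliability criterion at a
# smaller mass penalty than duplicating the full powerplant.
```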

  7. Differences scores: regression-based reliable difference and the regression-based confidence interval.

    PubMed

    Charter, Richard A

    2009-04-01

    Over 50 years ago, Payne and Jones (1957) developed what has been labeled the traditional reliable difference formula, which continues to be useful as a significance test for the difference between two test scores. The traditional reliable difference is based on the standard error of measurement (SEM) and has been updated to a confidence interval approach. As an alternative, this article presents the regression-based reliable difference, which is based on the standard error of estimate (SEE) and estimated true scores. This new approach should be attractive to clinicians who prefer the idea of scores regressing toward the mean. The new approach is also presented in confidence interval form, with an interpretation that can be viewed as a statement of all hypotheses that are tenable and consistent with the observed data, an interpretation that has the backing of several authorities. Two well-known conceptualizations of true score confidence intervals are the traditional and the regression-based. Clinicians favoring the regression-based conceptualization are thus no longer restricted to the traditional model when testing score differences using confidence intervals.
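    A sketch contrasting the two intervals, using one common formulation (SEM = SD*sqrt(1-r) for the traditional interval; SEE = SD*sqrt(r*(1-r)) with estimated true scores for the regression-based one); treat the exact expressions as assumptions of this sketch rather than as the article's own derivation:

```python
import math

def traditional_ci(x1, x2, sd, r1, r2, z=1.96):
    """Payne-Jones style reliable difference: confidence interval on the
    observed score difference using standard errors of measurement."""
    se_diff = math.sqrt(sd**2 * (1 - r1) + sd**2 * (1 - r2))
    d = x1 - x2
    return d - z * se_diff, d + z * se_diff

def regression_based_ci(x1, x2, mean, sd, r1, r2, z=1.96):
    """Regression-based variant (assumed formulation): difference of
    estimated true scores with standard errors of estimate."""
    t1 = mean + r1 * (x1 - mean)          # estimated true scores regress
    t2 = mean + r2 * (x2 - mean)          # toward the mean
    see_diff = math.sqrt(sd**2 * r1 * (1 - r1) + sd**2 * r2 * (1 - r2))
    d = t1 - t2
    return d - z * see_diff, d + z * see_diff

# Two IQ-metric subtest scores (mean 100, SD 15) with reliabilities .90/.85.
print(traditional_ci(112, 100, 15, 0.90, 0.85))
print(regression_based_ci(112, 100, 100, 15, 0.90, 0.85))
# If an interval excludes zero, the score difference is deemed significant.
```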

  8. Neural Networks Based Approach to Enhance Space Hardware Reliability

    NASA Technical Reports Server (NTRS)

    Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.

    2011-01-01

    This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.

  9. Architecture-Based Reliability Analysis of Web Services

    ERIC Educational Resources Information Center

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  11. Neural Networks Based Approach to Enhance Space Hardware Reliability

    NASA Technical Reports Server (NTRS)

    Zebulum, Ricardo S.; Thakoor, Anilkumar; Lu, Thomas; Franco, Lauro; Lin, Tsung Han; McClure, S. S.

    2011-01-01

    This paper demonstrates the use of Neural Networks as a device modeling tool to increase the reliability analysis accuracy of circuits targeted for space applications. The paper tackles a number of case studies of relevance to the design of Flight hardware. The results show that the proposed technique generates more accurate models than the ones regularly used to model circuits.

  12. Reliability and Validity of Curriculum-Based Informal Reading Inventories.

    ERIC Educational Resources Information Center

    Fuchs, Lynn; And Others

    A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…

  13. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank-one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
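
    A minimal sketch of the rank-one (Broyden) gradient update the abstract describes, assuming a scalar limit-state function of a vector of variables; the function and variable names are illustrative, not the paper's:

    ```python
    import numpy as np

    def broyden_gradient_update(grad_old, x_old, x_new, g_old, g_new):
        """Rank-one (Broyden) update of the gradient of a scalar limit-state
        function g, avoiding a fresh finite-difference evaluation.
        grad_old : current gradient estimate at x_old (n-vector)
        g_old, g_new : g evaluated at x_old and x_new."""
        s = x_new - x_old               # step taken in the variables
        y = g_new - g_old               # observed change in g along the step
        denom = s @ s
        if denom == 0.0:
            return grad_old
        # Correct the old gradient so it reproduces the observed secant change
        return grad_old + ((y - grad_old @ s) / denom) * s

    # Example: seed with one finite-difference gradient, then update cheaply
    g = lambda x: 3.0 - x.sum()              # toy limit state
    x0, x1 = np.array([0.0, 0.0]), np.array([0.5, 0.2])
    grad0 = np.array([-1.0, -1.0])           # exact here; normally finite-difference
    grad1 = broyden_gradient_update(grad0, x0, x1, g(x0), g(x1))
    print(grad1)                             # stays (-1, -1) for a linear g
    ```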

  14. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
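
    The MPP search itself is typically an optimization in standard normal space; a common choice (offered here as a generic illustration, not necessarily the paper's algorithm) is the Hasofer-Lind/Rackwitz-Fiessler iteration:

    ```python
    import numpy as np

    def hlrf_mpp(g, grad, n, tol=1e-6, max_iter=50):
        """Hasofer-Lind / Rackwitz-Fiessler iteration for the most probable
        point (MPP) of a limit state g(u) = 0 in standard normal space.
        g, grad : callables returning g(u) and its gradient; n : dimension."""
        u = np.zeros(n)
        for _ in range(max_iter):
            gu, gr = g(u), grad(u)
            # Project onto the linearized limit state g(u) + grad.(u_new - u) = 0
            u_new = ((gr @ u - gu) / (gr @ gr)) * gr
            if np.linalg.norm(u_new - u) < tol:
                u = u_new
                break
            u = u_new
        beta = np.linalg.norm(u)    # reliability index = distance to the origin
        return u, beta

    # Example: linear limit state g(u) = 3 - u1 - u2  ->  beta = 3/sqrt(2)
    g = lambda u: 3.0 - u[0] - u[1]
    grad = lambda u: np.array([-1.0, -1.0])
    u_star, beta = hlrf_mpp(g, grad, 2)
    print(beta)   # ~2.1213; under FORM, Pf ~ Phi(-beta)
    ```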

  15. Analyzing reliability of seizure diagnosis based on semiology.

    PubMed

    Jin, Bo; Wu, Han; Xu, Jiahui; Yan, Jianwei; Ding, Yao; Wang, Z Irene; Guo, Yi; Wang, Zhongjin; Shen, Chunhong; Chen, Zhong; Ding, Meiping; Wang, Shuang

    2014-12-01

    This study aimed to determine the accuracy of seizure diagnosis by semiological analysis and to assess the factors that affect diagnostic reliability. A total of 150 video clips of seizures from 50 patients (each with three seizures of the same type) were observed by eight epileptologists, 12 neurologists, and 20 physicians (internists). The videos included 37 series of epileptic seizures, eight series of physiologic nonepileptic events (PNEEs), and five series of psychogenic nonepileptic seizures (PNESs). After observing each video, the doctors chose the diagnosis of epileptic seizures or nonepileptic events for the patient; if the latter was chosen, they further chose between PNESs and PNEEs. The overall diagnostic accuracy rate for epileptic seizures and nonepileptic events increased from 0.614 to 0.660 after observation of all three seizures (p < 0.001). The diagnostic sensitivity and specificity for epileptic seizures were 0.770 and 0.808, respectively, for the epileptologists. These values were significantly higher than those for the neurologists (0.660 and 0.699) and the physicians (0.588 and 0.658). A wide range of diagnostic accuracy was found across the various seizure types. An accuracy rate of 0.895 for generalized tonic-clonic seizures was the highest, followed by 0.800 for dialeptic seizures and then 0.760 for automotor seizures. The accuracy rates for myoclonic seizures (0.530), hypermotor seizures (0.481), gelastic/dacrystic seizures (0.438), and PNESs (0.430) were poor. The reliability of semiological diagnosis of seizures is greatly affected by the seizure type as well as the doctor's experience. Although the overall reliability is limited, it can be improved by observing more seizures.

  16. Reliability-Based Design Optimization of a Composite Airframe Component

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.

    2011-01-01

    A stochastic optimization methodology (SDO) has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as sizing variables, like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.

  17. A simple reliability-based topology optimization approach for continuum structures using a topology description function

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wen, Guilin; Zhi Zuo, Hao; Qing, Qixiang

    2016-07-01

    The structural configuration obtained by deterministic topology optimization may represent a low reliability level and lead to a high failure rate. Therefore, it is necessary to take reliability into account for topology optimization. By integrating reliability analysis into topology optimization problems, a simple reliability-based topology optimization (RBTO) methodology for continuum structures is investigated in this article. The two-layer nesting involved in RBTO, which is time consuming, is decoupled by the use of a particular optimization procedure. A topology description function approach (TOTDF) and a first order reliability method are employed for topology optimization and reliability calculation, respectively. The problem of the non-smoothness inherent in TOTDF is dealt with using two different smoothed Heaviside functions and the corresponding topologies are compared. Numerical examples demonstrate the validity and efficiency of the proposed improved method. In-depth discussions are also presented on the influence of different structural reliability indices on the final layout.
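
    The record mentions two smoothed Heaviside functions without giving their forms; the two variants below (a C1 cubic blend and a tanh blend) are common regularizations in the level-set/TDF literature and are offered only as an illustration of the idea, not as the paper's exact choices:

    ```python
    import numpy as np

    def heaviside_poly(phi, eps):
        """Polynomial-smoothed Heaviside: 0 for phi < -eps, 1 for phi > eps,
        and a C1 cubic blend in between."""
        h = 0.5 + 3.0 * phi / (4.0 * eps) - phi**3 / (4.0 * eps**3)
        return np.where(phi < -eps, 0.0, np.where(phi > eps, 1.0, h))

    def heaviside_tanh(phi, eps):
        """tanh-smoothed Heaviside, an alternative regularization whose
        sharpness is controlled by eps."""
        return 0.5 * (1.0 + np.tanh(phi / eps))

    phi = np.linspace(-1.0, 1.0, 5)
    print(heaviside_poly(phi, 0.5))   # material indicator from the TDF value
    print(heaviside_tanh(phi, 0.5))
    ```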

  18. A perspective on the reliability of MEMS-based components for telecommunications

    NASA Astrophysics Data System (ADS)

    McNulty, John C.

    2008-02-01

    Despite the initial skepticism of OEM companies regarding reliability, MEMS-based devices are increasingly common in optical networking. This presentation will discuss the use and reliability of MEMS in a variety of network applications, from tunable lasers and filters to variable optical attenuators and dynamic channel equalizers. The failure mechanisms of these devices will be addressed in terms of reliability physics, packaging methodologies, and process controls. Typical OEM requirements will also be presented, including testing beyond the scope of Telcordia qualification standards. The key conclusion is that, with sufficiently robust design and manufacturing controls, MEMS-based devices can meet or exceed the demanding reliability requirements for telecommunications components.

  19. Reliability-based failure analysis of brittle materials

    NASA Technical Reports Server (NTRS)

    Powers, Lynn M.; Ghosn, Louis J.

    1989-01-01

    The reliability of brittle materials under a generalized state of stress is analyzed using the Batdorf model. The model is modified to include the reduction in shear due to the effect of the compressive stress on the microscopic crack faces. The combined effect of both surface and volume flaws is included. Due to the nature of fracture of brittle materials under compressive loading, the component is modeled as a series system in order to establish bounds on the probability of failure. A computer program was written to determine the probability of failure employing data from a finite element analysis. The analysis showed that for tensile loading a single crack will be the cause of total failure but under compressive loading a series of microscopic cracks must join together to form a dominant crack.
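
    Modeling the component as a series system, as described above, yields simple first-order bounds on the failure probability. A sketch under the usual assumptions (fully correlated elements for the lower bound, independent elements for the upper):

    ```python
    import numpy as np

    def series_system_bounds(p_elem):
        """First-order bounds on the failure probability of a series system
        given element failure probabilities p_elem.
        Lower bound: fully correlated elements (max p_i).
        Upper bound: independent elements, 1 - prod(1 - p_i)."""
        p = np.asarray(p_elem, dtype=float)
        lower = p.max()
        upper = 1.0 - np.prod(1.0 - p)
        return lower, upper

    print(series_system_bounds([1e-4, 5e-4, 2e-4]))  # (5.0e-4, ~8.0e-4)
    ```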

  20. Assessing the reliability of Curriculum-Based Measurement: an application of Latent Growth Modeling.

    PubMed

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A

    2012-04-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating the reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining the reliability of CBM reading-aloud data over repeated measurements. The analysis revealed that the reliability of CBM data was not a fixed property of the measure, but changed over time. The study demonstrates the need to consider reliability in new ways with respect to the use of CBM data as repeated measures.

  1. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty, where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.

  2. A damage mechanics based approach to structural deterioration and reliability

    SciTech Connect

    Bhattcharya, B.; Ellingwood, B.

    1998-02-01

    Structural deterioration often occurs without perceptible manifestation. Continuum damage mechanics defines structural damage in terms of the material microstructure, and relates the damage variable to the macroscopic strength or stiffness of the structure. This enables one to predict the state of damage prior to the initiation of a macroscopic flaw, and allows one to estimate the residual strength/service life of an existing structure. The accumulation of damage is a dissipative process that is governed by the laws of thermodynamics. Partial differential equations for damage growth in terms of the Helmholtz free energy are derived from fundamental thermodynamic conditions. Closed-form solutions to the equations are obtained under uniaxial loading for ductile deformation damage as a function of plastic strain, for creep damage as a function of time, and for fatigue damage as a function of the number of cycles. The proposed damage growth model is extended into the stochastic domain by considering fluctuations in the free energy, and closed-form solutions of the resulting stochastic differential equation are obtained in each of the three cases mentioned above. A reliability analysis of a ring-stiffened cylindrical steel shell subjected to corrosion, accidental pressure, and temperature is performed.

  3. A GIS-based software for lifeline reliability analysis under seismic hazard

    NASA Astrophysics Data System (ADS)

    Sevtap Selcuk-Kestel, A.; Sebnem Duzgun, H.; Oduncuoglu, Lutfi

    2012-05-01

    Lifelines are vital networks, and it is important that those networks remain functional after major natural disasters such as earthquakes. Assessing the reliability of lifelines requires spatial analysis of the networks with respect to a given earthquake hazard map. In this paper, a GIS-based software tool for the spatial assessment of lifeline reliability, developed using the GeoTools environment, is presented. The software imports seismic hazard and lifeline network layers and then creates a gridded network structure. Finally, it adopts a network reliability algorithm to calculate the upper and lower bounds for the system reliability of the lifeline under seismic hazard. The software enables the user to visualize the reliability values in graphical form, as well as in a thematic lifeline reliability map with colors indicating the reliability level of each link and of the overall network. It also provides functions for saving the analysis results in shapefile format. The software was tested and validated on an application taken from the literature, a part of the water distribution system of Bursa in Turkey. The developed GIS-based software module that creates reliability maps of lifelines under seismic hazard is user friendly, modifiable, fast in execution, illustrative, and validated against existing studies in the literature.

  4. Reliability and Validity of the Evidence-Based Practice Confidence (EPIC) Scale

    ERIC Educational Resources Information Center

    Salbach, Nancy M.; Jaglal, Susan B.; Williams, Jack I.

    2013-01-01

    Introduction: The reliability, minimal detectable change (MDC), and construct validity of the evidence-based practice confidence (EPIC) scale were evaluated among physical therapists (PTs) in clinical practice. Methods: A longitudinal mail survey was conducted. Internal consistency and test-retest reliability were estimated using Cronbach's alpha…

  5. Is School-Based Height and Weight Screening of Elementary Students Private and Reliable?

    ERIC Educational Resources Information Center

    Stoddard, Sarah A.; Kubik, Martha Y.; Skay, Carol

    2008-01-01

    The Institute of Medicine recommends school-based body mass index (BMI) screening as an obesity prevention strategy. While school nurses have provided height/weight screening for years, little has been published describing measurement reliability or process. This study evaluated the reliability of height/weight measures collected by school nurses…

  7. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    ERIC Educational Resources Information Center

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  9. Expected-Credibility-Based Job Scheduling for Reliable Volunteer Computing

    NASA Astrophysics Data System (ADS)

    Watanabe, Kan; Fukushi, Masaru; Horiguchi, Susumu

    This paper presents a proposal of an expected-credibility-based job scheduling method for volunteer computing (VC) systems with malicious participants who return erroneous results. Credibility-based voting is a promising approach to guaranteeing the computational correctness of VC systems. However, it relies on a simple round-robin job scheduling method that does not consider the jobs' order of execution, thereby resulting in numerous unnecessary job allocations and performance degradation of VC systems. To improve the performance of VC systems, the proposed job scheduling method dynamically selects a job to be executed prior to others based on two novel metrics: the expected credibility and the expected number of results for each job. Simulation of VCs shows that the proposed method can improve VC system performance by up to 11%; it always outperforms the original round-robin method irrespective of the value of unknown parameters such as the population and behavior of saboteurs.

  10. Reliability of pedigree-based and genomic evaluations in selected populations.

    PubMed

    Gorjanc, Gregor; Bijma, Piter; Hickey, John M

    2015-08-14

    Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population additive genetic variance (base PEV reliability), while the potential for response to selection is commonly measured by the squared correlation between the EBV and breeding values (BV) on selection candidates (reliability of selection). While these two measures are equivalent for unselected populations, they are not equivalent for selected populations. The aim of this study was to quantify the effect of selection on these two measures of reliability and to show how this affects comparison of breeding programs using pedigree-based or genomic evaluations. Two scenarios with random and best linear unbiased prediction (BLUP) selection were simulated, where the EBV of selection candidates were estimated using only pedigree, pedigree and phenotype, genome-wide marker genotypes and phenotype, or only genome-wide marker genotypes. The base PEV reliabilities of these EBV were compared to the corresponding reliabilities of selection. Realized genetic selection intensity was evaluated to quantify the potential of selection on the different types of EBV and, thus, to validate differences in reliabilities. Finally, the contribution of different underlying processes to changes in additive genetic variance and reliabilities was quantified. The simulations showed that, for selected populations, the base PEV reliability substantially overestimates the reliability of selection of EBV that are mainly based on old information from the parental generation, as is the case with pedigree-based prediction. Selection on such EBV gave very low realized genetic selection intensities, confirming the overestimation and importance of genotyping both male and female selection candidates. The two measures of
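
    Both measures named in the abstract have simple computational forms: the base PEV reliability relates the prediction error variance of an EBV to the base-population additive genetic variance, and the reliability of selection is the squared EBV-BV correlation, computable in simulation where true breeding values are known. A sketch with illustrative names:

    ```python
    import numpy as np

    def pev_reliability(pev, sigma2_a):
        """Base PEV reliability: 1 - PEV / additive genetic variance."""
        return 1.0 - pev / sigma2_a

    def reliability_of_selection(ebv, bv):
        """Squared correlation between EBV and true breeding values
        on the selection candidates (known in simulation)."""
        return np.corrcoef(ebv, bv)[0, 1] ** 2

    print(pev_reliability(pev=0.3, sigma2_a=1.0))           # 0.7
    rng = np.random.default_rng(0)
    bv = rng.normal(size=500)
    ebv = 0.8 * bv + rng.normal(scale=0.5, size=500)        # noisy estimates
    print(reliability_of_selection(ebv, bv))                # < base PEV value
    ```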

  11. Reliability Modeling Development and Its Applications for Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    This presentation includes a summary of NEPP-funded deliverables for the Base-Metal Electrodes (BMEs) capacitor task, development of a general reliability model for BME capacitors, and a summary and future work.

  12. On the Reliability of Vocational Workplace-Based Certifications

    ERIC Educational Resources Information Center

    Harth, H.; Hemker, B.T.

    2013-01-01

    The assessment of vocational workplace-based qualifications in England relies on human assessors (raters). These assessors observe naturally occurring, non-standardised evidence, unique to each learner and evaluate the learner as competent/not yet competent against content standards. Whilst these are considered difficult to measure, this study…

  13. A Reliable Homemade Electrode Based on Glassy Polymeric Carbon

    ERIC Educational Resources Information Center

    Santos, Andre L.; Takeuchi, Regina M.; Oliviero, Herilton P.; Rodriguez, Marcello G.; Zimmerman, Robert L.

    2004-01-01

    The production of a GPC-based material by submitting a cross-linked resin precursor to control thermal conditions is discussed. The precursor material is prepolymerized at 60-degree Celsius in a mold and is carbonized in inert atmosphere by slowly raising the temperature, the rise is performed to avoid change in the shape of the carbonization…

  16. Advanced reliability modeling of fault-tolerant computer-based systems

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.

    1982-01-01

    Two methodologies for the reliability assessment of fault tolerant digital computer based systems are discussed. The computer-aided reliability estimation 3 (CARE 3) and gate logic software simulation (GLOSS) are assessment technologies that were developed to mitigate a serious weakness in the design and evaluation process of ultrareliable digital systems. The weak link is based on the unavailability of a sufficiently powerful modeling technique for comparing the stochastic attributes of one system against others. Some of the more interesting attributes are reliability, system survival, safety, and mission success.

  17. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  18. Reliable binary cell-fate decisions based on oscillations

    NASA Astrophysics Data System (ADS)

    Pfeuty, B.; Kaneko, K.

    2014-02-01

    Biological systems often have to perform binary decisions under highly dynamic and noisy environments, such as during cell-fate determination. These decisions can be implemented by two main bifurcation mechanisms based on the transitions from either monostability or oscillation to bistability. We compare these two mechanisms by using stochastic models with time-varying fields and by establishing asymptotic formulas for the choice probabilities. Different scaling laws for decision sensitivity with respect to noise strength and signal timescale are obtained, supporting a role for oscillatory dynamics in performing noise-robust and temporally tunable binary decision-making. This result provides a rationale for recent experimental evidence showing that oscillatory expression of proteins often precedes binary cell-fate decisions.

  19. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity of each random variable is assessed by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (phi_1) increases, and that for the cohesion of the foundation soil (c_2) decreases, with an increase in the variation of phi_1, while R_f for the unit weights of both soils (gamma_1 and gamma_2) and for the friction angle of the foundation soil (phi_2) remains almost constant under variation of the soil properties. The results compare well with some of the existing deterministic and probabilistic methods, and the approach is found to be cost-effective. It is seen that if the variation of phi_1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.

  20. Reliable Location-Based Services from Radio Navigation Systems

    PubMed Central

    Qiu, Di; Boneh, Dan; Lo, Sherman; Enge, Per

    2010-01-01

    Loran is a radio-based navigation system originally designed for naval applications. We show that Loran-C’s high-power and high repeatable accuracy are fantastic for security applications. First, we show how to derive a precise location tag—with a sensitivity of about 20 meters—that is difficult to project to an exact location. A device can use our location tag to block or allow certain actions, without knowing its precise location. To ensure that our tag is reproducible we make use of fuzzy extractors, a mechanism originally designed for biometric authentication. We build a fuzzy extractor specifically designed for radio-type errors and give experimental evidence to show its effectiveness. Second, we show that our location tag is difficult to predict from a distance. For example, an observer cannot predict the location tag inside a guarded data center from a few hundreds of meters away. As an application, consider a location-aware disk drive that will only work inside the data center. An attacker who steals the device and is capable of spoofing Loran-C signals, still cannot make the device work since he does not know what location tag to spoof. We provide experimental data supporting our unpredictability claim. PMID:22163532

  2. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education.

    PubMed

    Moonen-van Loon, J M W; Overeem, K; Donkers, H H L M; van der Vleuten, C P M; Driessen, E W

    2013-12-01

    In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and certification as a specialist are based on the composite results of different WBAs collected in a portfolio, to our knowledge, the reliability of such a WBA toolbox has never been investigated. Using generalisability theory, we analysed the separate and composite reliability of three WBA tools [mini-Clinical Evaluation Exercise (mini-CEX), direct observation of procedural skills (DOPS), and multisource feedback (MSF)] included in a resident portfolio. G-studies and D-studies of 12,779 WBAs from a total of 953 residents showed that a reliability coefficient of 0.80 was obtained for eight mini-CEXs, nine DOPS, and nine MSF rounds, whilst the same reliability was found for seven mini-CEXs, eight DOPS, and one MSF when combined in a portfolio. At the end of the first year of residency a portfolio with five mini-CEXs, six DOPS, and one MSF afforded reliable judgement. The results support the conclusion that several WBA tools combined in a portfolio can be a feasible and reliable method for high-stakes judgements.

  3. Reliability estimation for cutting tools based on logistic regression model using vibration signals

    NASA Astrophysics Data System (ADS)

    Chen, Baojia; Chen, Xuefeng; Li, Bing; He, Zhengjia; Cao, Hongrui; Cai, Gaigai

    2011-10-01

    As an important part of a CNC machine, the reliability of cutting tools influences the overall manufacturing effectiveness and the stability of the equipment. The present study proposes a novel reliability estimation approach for cutting tools based on a logistic regression model using vibration signals. Operating condition information of the CNC machine is incorporated into the reliability analysis to reflect the time-varying characteristics of the product. The proposed approach is superior to other degradation estimation methods in that it does not necessitate any assumption about degradation paths or the probability density functions of condition parameters. The three steps of the new reliability estimation approach for cutting tools are as follows. First, on-line vibration signals of the cutting tools are measured during the manufacturing process. Second, wavelet packet (WP) transform is employed to decompose the original signals, and correlation analysis is employed to find the feature frequency bands that indicate tool wear. Third, correlation analysis is also used to select the salient feature parameters, which comprise feature band energy, energy entropy, and time-domain features. Finally, reliability estimation is carried out based on the logistic regression model. The approach has been validated on an NC lathe. Under different failure thresholds, the reliability and failure times of the cutting tools are all estimated accurately. The positive results show the plausibility and effectiveness of the proposed approach, which can facilitate machine performance and reliability estimation.
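
    The workflow above (vibration-derived features in, failure probability out) can be sketched with an off-the-shelf logistic regression; the features and training data below are synthetic stand-ins for the paper's wavelet-packet features, not its actual data:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training set: rows are inspections of a tool; columns are
    # condition features extracted from vibration signals (e.g. wavelet-packet
    # band energy, energy entropy, an RMS time-domain feature); y = 1 at failure.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ np.array([1.2, 0.8, 0.5]) + rng.normal(size=200) > 0.5).astype(int)

    model = LogisticRegression().fit(X, y)

    # Reliability at a newly observed condition state = P(survival)
    x_now = np.array([[0.2, -0.1, 0.4]])
    reliability = 1.0 - model.predict_proba(x_now)[0, 1]
    print(f"estimated reliability: {reliability:.3f}")
    ```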

  4. Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang

    2017-07-01

    In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure-influence-degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influence degrees of the system components are assessed using the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center, and it shows the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component presents a positive correlation with its failure influence degree, which provides a theoretical basis for reliability allocation of the machine center system.
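
    A sketch of the PageRank step on a failure-propagation digraph, using plain power iteration; the 4-component adjacency matrix is hypothetical and the exact weighting used in the paper is not given in this record:

    ```python
    import numpy as np

    def pagerank(adj, d=0.85, tol=1e-9, max_iter=200):
        """PageRank by power iteration on a directed adjacency matrix.
        adj[i, j] = 1 if a failure of component i propagates to component j."""
        n = adj.shape[0]
        out = adj.sum(axis=1)
        # Row-normalize; dangling nodes (no outgoing edges) spread uniformly.
        M = np.where(out[:, None] > 0,
                     adj / np.maximum(out, 1)[:, None],
                     1.0 / n).T            # transpose -> column-stochastic
        r = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            r_new = (1 - d) / n + d * (M @ r)
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new
        return r

    # Hypothetical 4-component propagation graph
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]], dtype=float)
    print(pagerank(A))   # larger score = more strongly affected by cascades
    ```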

  5. Reliability of hypothalamic–pituitary–adrenal axis assessment methods for use in population-based studies

    PubMed Central

    Wand, Gary S.; Malhotra, Saurabh; Kamel, Ihab; Horton, Karen

    2013-01-01

    Population-based studies have been hampered in exploring hypothalamic–pituitary–adrenal axis (HPA) activity as a potential explanatory link between stress-related and metabolic disorders due to their lack of incorporation of reliable measures of chronic cortisol exposure. The purpose of this review is to summarize current literature on the reliability of HPA axis measures and to discuss the feasibility of performing them in population-based studies. We identified articles through PubMed using search terms related to cortisol, HPA axis, adrenal imaging, and reliability. The diurnal salivary cortisol curve (generated from multiple salivary samples from awakening to midnight) and 11 p.m. salivary cortisol had the highest between-visit reliabilities (r = 0.63–0.84 and 0.78, respectively). The cortisol awakening response and dexamethasone-suppressed cortisol had the next highest between-visit reliabilities (r = 0.33–0.67 and 0.42–0.66, respectively). Based on our own data, the inter-reader reliability (rs) of adrenal gland volume from non-contrast CT was 0.67–0.71 for the left and 0.47–0.70 for the right adrenal glands. While a single 8 a.m. salivary cortisol is one of the easiest measures to perform, it had the lowest between-visit reliability (R = 0.18–0.47). Based on the current literature, use of sampling multiple salivary cortisol measures across the diurnal curve (with awakening cortisol), dexamethasone-suppressed cortisol, and adrenal gland volume are measures of HPA axis tone with similar between-visit reliabilities which likely reflect chronic cortisol burden and are feasible to perform in population-based studies. PMID:21533585

  6. Towards Reliable Evaluation of Anomaly-Based Intrusion Detection Performance

    NASA Technical Reports Server (NTRS)

    Viswanathan, Arun

    2012-01-01

    This report describes the results of research into the effects of environment-induced noise on the evaluation process for anomaly detectors in the cyber security domain. This research was conducted during a 10-week summer internship program from the 19th of August, 2012 to the 23rd of August, 2012 at the Jet Propulsion Laboratory in Pasadena, California. The research performed lies within the larger context of the Los Angeles Department of Water and Power (LADWP) Smart Grid cyber security project, a Department of Energy (DoE) funded effort involving the Jet Propulsion Laboratory, California Institute of Technology and the University of Southern California/ Information Sciences Institute. The results of the present effort constitute an important contribution towards building more rigorous evaluation paradigms for anomaly-based intrusion detectors in complex cyber physical systems such as the Smart Grid. Anomaly detection is a key strategy for cyber intrusion detection and operates by identifying deviations from profiles of nominal behavior and are thus conceptually appealing for detecting "novel" attacks. Evaluating the performance of such a detector requires assessing: (a) how well it captures the model of nominal behavior, and (b) how well it detects attacks (deviations from normality). Current evaluation methods produce results that give insufficient insight into the operation of a detector, inevitably resulting in a significantly poor characterization of a detectors performance. In this work, we first describe a preliminary taxonomy of key evaluation constructs that are necessary for establishing rigor in the evaluation regime of an anomaly detector. We then focus on clarifying the impact of the operational environment on the manifestation of attacks in monitored data. We show how dynamic and evolving environments can introduce high variability into the data stream perturbing detector performance. Prior research has focused on understanding the impact of this

  8. Validity and reliability of Internet-based physiotherapy assessment for musculoskeletal disorders: a systematic review.

    PubMed

    Mani, Suresh; Sharma, Shobha; Omar, Baharudin; Paungmali, Aatit; Joseph, Leonard

    2017-04-01

    Purpose The purpose of this review is to systematically explore and summarise the validity and reliability of telerehabilitation (TR)-based physiotherapy assessment for musculoskeletal disorders. Method A comprehensive systematic literature review was conducted using a number of electronic databases: PubMed, EMBASE, PsycINFO, Cochrane Library and CINAHL, published between January 2000 and May 2015. Studies that examined the validity and the inter- and intra-rater reliabilities of TR-based physiotherapy assessment for musculoskeletal conditions were included. Two independent reviewers used the Quality Appraisal Tool for studies of diagnostic Reliability (QAREL) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool to assess the methodological quality of the reliability and validity studies, respectively. Results A total of 898 hits were retrieved, of which 11 articles meeting the inclusion criteria were reviewed. Nine studies explored concurrent validity and inter- and intra-rater reliabilities, while two studies examined only concurrent validity. The reviewed studies were moderate to good in methodological quality. Physiotherapy assessments such as pain, swelling, range of motion, muscle strength, balance, gait and functional assessment demonstrated good concurrent validity. However, the reported concurrent validity of lumbar spine posture, special orthopaedic tests, neurodynamic tests and scar assessments ranged from low to moderate. Conclusion TR-based physiotherapy assessment was technically feasible, with overall good concurrent validity and excellent reliability, except for lumbar spine posture, orthopaedic special tests, neurodynamic tests and scar assessment.

  9. Reliability and accuracy of dermatologists' clinic-based and digital image consultations.

    PubMed

    Whited, J D; Hall, R P; Simel, D L; Foy, M E; Stechuchak, K M; Drugge, R J; Grichnik, J M; Myers, S A; Horner, R D

    1999-11-01

    Telemedicine technology holds great promise for dermatologic health care delivery. However, the clinical outcomes of digital image consultations (teledermatology) must be compared with traditional clinic-based consultations. Our purpose was to assess and compare the reliability and accuracy of dermatologists' diagnoses and management recommendations for clinic-based and digital image consultations. One hundred sixty-eight lesions found among 129 patients were independently examined by 2 clinic-based dermatologists and 3 different digital image dermatologist consultants. The reliability and accuracy of the examiners' diagnoses and the reliability of their management recommendations were compared. Proportion agreement among clinic-based examiners for their single most likely diagnosis was 0.54 (95% confidence interval [CI], 0.46-0.61) and was 0.92 (95% CI, 0.88-0.96) when ratings included differential diagnoses. Digital image consultants provided diagnoses that were comparably reliable to the clinic-based examiners. Agreement on management recommendations was variable. Digital image and clinic-based consultants displayed similar diagnostic accuracy. Digital image consultations result in reliable and accurate diagnostic outcomes when compared with traditional clinic-based consultations.

  10. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    NASA Astrophysics Data System (ADS)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements that contribute to the stability of the tunnel structure are identified in order to characterize various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers can determine the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to this field, and reliability analysis for evaluating tunnel support performance is the main idea used in this research. Decomposition approaches are used for producing the system block diagram and determining the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for the different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for the different structural subsystems, and the results of numerical analyses are obtained in

  11. The B-747 flight control system maintenance and reliability data base for cost effectiveness tradeoff studies

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Primary and automatic flight controls are combined for a total flight control reliability and maintenance cost data base using information from two previous reports and additional cost data gathered from a major airline. A comparison of the current B-747 flight control system effects on reliability and operating cost with that of a B-747 designed for an active control wing load alleviation system is provided.

  12. Reliable contact fabrication on nanostructured Bi2Te3-based thermoelectric materials.

    PubMed

    Feng, Shien-Ping; Chang, Ya-Huei; Yang, Jian; Poudel, Bed; Yu, Bo; Ren, Zhifeng; Chen, Gang

    2013-05-14

    A cost-effective and reliable Ni-Au contact on nanostructured Bi2Te3-based alloys for a solar thermoelectric generator (STEG) is reported. The use of MPS SAMs creates a strong covalent binding and more nucleation sites with even distribution for electroplating contact electrodes on nanostructured thermoelectric materials. A reliable high-performance flat-panel STEG can be obtained by using this new method.

  13. Reliability of 3D laser-based anthropometry and comparison with classical anthropometry

    PubMed Central

    Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Broda, Anja; Scholz, Markus

    2016-01-01

    Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable. PMID:27225483
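
    The OCCC generalizes Lin's concordance correlation coefficient to multiple raters; the two-rater case below conveys the agreement idea (penalizing both scatter and systematic offset, unlike a plain correlation). The data values are made up:

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient for paired measurements
        of the same quantities: 2*cov / (var_x + var_y + (mean_x - mean_y)^2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()                 # population (1/n) variances
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    # Two raters measuring body height (cm) on the same 5 subjects
    a = [172.1, 165.3, 180.0, 158.7, 169.9]
    b = [172.4, 165.0, 179.5, 159.2, 170.3]
    print(concordance_ccc(a, b))   # close to 1 = near-perfect agreement
    ```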

  14. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.

  15. Cost effectiveness and reliability improvement capabilities of CHECWORKS and SWIPS based inspection and maintenance plans

    SciTech Connect

    Singh, S.; Borodotsky, A.

    1996-11-01

    CHECWORKS and SWIPS are computer programs that undertake degradation analysis and generate prioritized lists for scheduling inspections. A simplified analysis presented in this paper demonstrates that application of such computer-aided degradation analysis and prioritized-list-based I and M plans can achieve full system reliability along with high cost effectiveness. For purposes of developing insight and a basic understanding, a relative cost-effectiveness and system reliability improvement analysis is conducted on a small service water system for five types of inspection and maintenance plans. The following plans are considered: a full inspection plan, a random sampling plan, an experience-based sampling plan, a computer-aided degradation analysis based plan, and a superman-based plan. The results obtained show that the computer-aided inspection plan is the most effective of the feasible plans. For a realistic system, such a plan can pay for the analysis cost and generate additional cost savings while maintaining full system reliability.

  16. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    DOE PAGES

    Guthrie, Michael A.

    2013-01-01

    A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize the structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two degree of freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
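
    For a linear limit state with independent normal capacity and demand (here, modal strain energy at failure versus peak modal strain energy), the Hasofer-Lind index has a closed form and can be checked against Monte Carlo, in the spirit of the comparison reported above. All numbers are illustrative:

    ```python
    import numpy as np
    from scipy.stats import norm

    # Energy-margin limit state g = E_fail - E_peak with independent normal
    # capacity and demand (hypothetical statistics).
    mu_c, sd_c = 10.0, 1.5      # capacity: strain energy at failure
    mu_d, sd_d = 6.0, 2.0       # demand: peak modal strain energy

    beta = (mu_c - mu_d) / np.hypot(sd_c, sd_d)   # Hasofer-Lind index
    pf_form = norm.cdf(-beta)

    # Monte Carlo check
    rng = np.random.default_rng(1)
    n = 1_000_000
    g = rng.normal(mu_c, sd_c, n) - rng.normal(mu_d, sd_d, n)
    pf_mc = (g < 0).mean()
    print(beta, pf_form, pf_mc)   # the two Pf estimates should agree closely
    ```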

  17. Reliability of a field based 2D:4D measurement technique in children.

    PubMed

    Ranson, R M; Taylor, S R; Stratton, G

    2013-08-01

    There is limited literature on the relationship between the second-to-fourth finger digit ratio (2D:4D) and health- and skill-related fitness in children. To examine this relationship it is important to establish a reliable method of assessing 2D:4D for use with large groups of children. The aim of the study was to examine the reliability of a field-based 2D:4D measure in children. METHODS/RESEARCH DESIGN: Fifty 8-11 year olds had the 2D:4D of the right hand measured using a Perspex table top, a digital camera, and Adobe Photoshop software. Intra-observer and inter-observer reliabilities of 2D:4D (and of 2D and 4D) were assessed on the same day, and intra-observer reliability was measured between days. Limits of agreement (LoA), the coefficient of variation (CV) and Pearson's correlation coefficient were used for statistical analysis. High correlation coefficients (r=0.95-0.99) and low CVs (0.4-1.2%) were reported for intra- and inter-observer reliabilities on the same day and between days. LoA revealed negligible systematic bias, with random error ranging from 0.02 to 0.12. These findings suggest that 2D:4D (and 2D and 4D) assessment in children using digital photography provides a reliable measure of 2D:4D that can be used during field-based testing.
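
    A sketch of the two repeatability statistics reported above, under common formulations (the study's exact CV formula is not given in this record); the repeat measurements are invented:

    ```python
    import numpy as np

    def limits_of_agreement(x1, x2):
        """Bland-Altman 95% limits of agreement between paired measurements:
        mean difference (bias) +/- 1.96 SD of the differences."""
        d = np.asarray(x1, float) - np.asarray(x2, float)
        bias, sd = d.mean(), d.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

    def coefficient_of_variation(x1, x2):
        """Within-pair CV (%), one common repeatability summary."""
        x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
        within_sd = np.abs(x1 - x2) / np.sqrt(2)
        means = (x1 + x2) / 2
        return 100 * np.mean(within_sd / means)

    # Repeat 2D:4D measurements (hypothetical) by the same observer
    r1 = np.array([0.952, 0.968, 0.941, 0.975, 0.960])
    r2 = np.array([0.950, 0.971, 0.944, 0.973, 0.958])
    print(limits_of_agreement(r1, r2))
    print(coefficient_of_variation(r1, r2))
    ```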

  18. Inter-rater reliability of an observation-based ergonomics assessment checklist for office workers.

    PubMed

    Pereira, Michelle Jessica; Straker, Leon Melville; Comans, Tracy Anne; Johnston, Venerina

    2016-12-01

    To establish the inter-rater reliability of an observation-based ergonomics assessment checklist for computer workers, a 37-item (38-item if a laptop was part of the workstation) comprehensive observational ergonomics assessment checklist, comparable to government guidelines and up to date with empirical evidence, was developed. Two trained practitioners assessed full-time office workers performing their usual computer-based work and evaluated the suitability of the workstations used. The practitioners assessed each participant consecutively; the order of assessors was randomised, and the second assessor was blinded to the findings of the first. Unadjusted kappa coefficients between the raters were obtained for the overall checklist and for subsections formed from question items relevant to specific workstation equipment. Twenty-seven office workers were recruited. Inter-rater reliability between the two trained practitioners was moderate to good for all except one checklist component. Practitioner Summary: This reliable ergonomics assessment checklist for computer workers was designed using accessible government guidelines and supplemented with up-to-date evidence. Employers in Queensland (Australia) can fulfil legislative requirements by using this reliable checklist to identify and subsequently address potential risk factors for work-related injury and so provide a safe working environment.
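
    A minimal computation of the unadjusted kappa coefficient used above, for two raters' categorical scores on the same items (made-up ratings):

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Unadjusted Cohen's kappa: chance-corrected agreement between
        two raters' categorical scores."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        cats = np.union1d(r1, r2)
        po = np.mean(r1 == r2)                                    # observed
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance
        return (po - pe) / (1 - pe)

    # Two practitioners' yes/no ratings on ten checklist items
    a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]
    print(cohens_kappa(a, b))   # 0.6 here; 1.0 = perfect, 0 = chance level
    ```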

  19. Bridge reliability assessment based on the PDF of long-term monitored extreme strains

    NASA Astrophysics Data System (ADS)

    Jiao, Meiju; Sun, Limin

    2011-04-01

    Structural health monitoring (SHM) systems can provide valuable information for the evaluation of bridge performance. With the development and implementation of SHM technology in recent years, the mining and use of monitoring data have received increasing attention in civil engineering. Based on the principles of probability and statistics, a reliability approach provides a rational basis for analysing the randomness in loads and their effects on structures. A novel approach combining SHM systems with reliability methods to evaluate the reliability of a cable-stayed bridge instrumented with an SHM system was presented in this paper. In this study, the reliability of the steel girder of the cable-stayed bridge was expressed directly as a failure probability rather than as a reliability index, as is more common. Under the assumption that the probability distribution of the resistance is independent of the structural responses, a formulation of the failure probability was deduced. Then, as the main factor in the formulation, the probability density function (PDF) of the strain at sensor locations was evaluated from the monitoring data and verified. The Donghai Bridge was then taken as an example application of the proposed approach. In the case study, four years of monitoring data collected since the SHM system began operation were processed, and the reliability assessment results were discussed. Finally, the sensitivity and accuracy of the novel approach were discussed in comparison with FORM.
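
    With the monitored strain treated as the demand S, with PDF f_S estimated from the SHM data, and an independent resistance R, the failure probability takes the classical form P_f = P(R < S) = ∫ F_R(s) f_S(s) ds. A minimal sketch with assumed distributions (the record does not give the actual models or parameters):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Assumed models: demand PDF fitted to monitored extreme strains (normal),
# resistance lognormal -- illustrative parameters only.
demand = stats.norm(loc=600e-6, scale=60e-6)       # annual extreme strain
resistance = stats.lognorm(s=0.08, scale=900e-6)   # strain capacity

# P_f = P(R < S) = integral of F_R(s) * f_S(s) ds over the demand support
pf, _ = quad(lambda s: resistance.cdf(s) * demand.pdf(s), 0.0, 2e-3)
beta = -stats.norm.ppf(pf)                         # equivalent reliability index
print(f"P_f = {pf:.3e}  (beta = {beta:.2f})")
```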

  20. A General Reliability Model for Ni-BaTiO3-Based Multilayer Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with Ni electrodes and BaTiO3 dielectric material for potential space project applications requires an in-depth understanding of their reliability. A general reliability model for Ni-BaTiO3 MLCCs is developed and discussed. The model consists of three parts: a statistical distribution; an acceleration function that describes how a capacitor's reliability life responds to external stresses; and an empirical function that defines the contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size, and capacitor chip size A. Application examples are also discussed based on the proposed reliability model for Ni-BaTiO3 MLCCs.
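
    The record does not state the acceleration function it uses; a form commonly assumed for Ni-BaTiO3 MLCCs is a power law in voltage combined with an Arrhenius term in temperature (the Prokopowicz-Vaskas form). A sketch under that assumption, with illustrative constants:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(v_test, v_use, t_test, t_use, n=3.0, ea=1.3):
    """Power-law (voltage) x Arrhenius (temperature) acceleration factor.
    n and ea (eV) are assumed illustrative values, not fitted constants."""
    volt = (v_test / v_use) ** n
    temp = np.exp((ea / K_B) * (1.0 / t_use - 1.0 / t_test))
    return volt * temp

# Example: life test at 4x the use voltage and 125 C vs use at 85 C
af = acceleration_factor(v_test=100.0, v_use=25.0, t_test=398.15, t_use=358.15)
print(f"acceleration factor ~ {af:.0f}")
```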

  2. Reliability of diagnostic methods based on low-frequency noise analysis

    SciTech Connect

    Gorlov, M. I.; Smirnov, D. Yu. Koz'yakov, N. N.

    2009-12-15

    Various methods for partitioning an integrated circuit (IC) batch by reliability were considered using noise parameters. The existing methods for screening semiconductor products using low-frequency (LF) noise were tested on transistors, as well as on both digital and analog ICs, and showed good results. Selection criteria for semiconductor products were determined based on the statistics of a representative sample; however, their reliability was not estimated. The reliability of the diagnostic methods was determined by calculating the correlation coefficient between the measured LF noise parameters and the results of reference reliability tests. For the experiment, KR142EN5A ICs fabricated in bipolar technology were selected; these are three-pin voltage regulators with a fixed 5 V output that are used in many radio-electronic devices.

  3. The Turkish Version of Web-Based Learning Platform Evaluation Scale: Reliability and Validity Study

    ERIC Educational Resources Information Center

    Dag, Funda

    2016-01-01

    The purpose of this study is to determine the language equivalence and the validity and reliability of the Turkish version of the "Web-Based Learning Platform Evaluation Scale" ("Web Tabanli Ögrenme Ortami Degerlendirme Ölçegi" [WTÖODÖ]) used in the selection and evaluation of web-based learning environments. Within this scope,…

  4. Score Reliability of a Test Composed of Passage-Based Testlets: A Generalizability Theory Perspective.

    ERIC Educational Resources Information Center

    Lee, Yong-Won

    The purpose of this study was to investigate the impact of local item dependence (LID) in passage-based testlets on the test score reliability of an English as a Foreign Language (EFL) reading comprehension test from the perspective of generalizability (G) theory. Definitions and causes of LID in passage-based testlets are reviewed within the…

  6. Non-probabilistic reliability method and reliability-based optimal LQR design for vibration control of structures with uncertain-but-bounded parameters

    NASA Astrophysics Data System (ADS)

    Guo, Shu-Xiang; Li, Ying

    2013-12-01

    Uncertainty is inherent and unavoidable in almost all engineering systems. It is of essential significance to deal with uncertainties by means of a reliability approach and to achieve a reasonable balance between reliability against uncertainties and system performance in the control design of uncertain systems. Nevertheless, reliability methods which can be used directly for the analysis and synthesis of active control of structures in the presence of uncertainties remain to be developed, especially for non-probabilistic uncertainty. In the present paper, the issue of vibration control of uncertain structures using the linear quadratic regulator (LQR) approach is studied from the viewpoint of reliability. An efficient non-probabilistic robust reliability method for LQR-based static output feedback robust control of uncertain structures is presented by treating bounded uncertain parameters as interval variables. The optimal vibration controller design for uncertain structures is carried out by solving a robust reliability-based optimization problem with the objective of minimizing the quadratic performance index. The controller obtained may possess optimum performance under the condition that the controlled structure is robustly reliable with respect to admissible uncertainties. The proposed method provides an essential basis for achieving a balance between robustness and performance in the controller design of uncertain structures. The presented formulations are in the framework of linear matrix inequalities and can be carried out conveniently. Two numerical examples are provided to illustrate the effectiveness and feasibility of the present method.
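
    The robust-reliability layer described above wraps a standard LQR core. A minimal deterministic LQR sketch for a nominal single-degree-of-freedom structure (nominal parameters only; the paper's interval-uncertainty LMI formulation is not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal structure: x'' + 2*zeta*wn*x' + wn^2*x = u (assumed parameters)
wn, zeta = 2.0 * np.pi, 0.02
A = np.array([[0.0, 1.0],
              [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])

Q = np.diag([wn**2, 1.0])             # state weights (illustrative)
R = np.array([[1e-2]])                # control effort weight

P = solve_continuous_are(A, B, Q, R)  # Riccati solution
K = np.linalg.solve(R, B.T @ P)       # LQR gain, u = -K x
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```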

  7. The SUPERB Project: Reliability-based design guideline for submarine pipelines

    SciTech Connect

    Sotberg, T.; Bruschi, R.; Moerk, K.

    1996-12-31

    This paper gives an overview of the research program SUPERB, the main objective of which is the development of a SUbmarine PipelinE Reliability Based Design Guideline with a comprehensive set of design recommendations and criteria for pipeline design. The motivation for this program is that project guidelines currently in force do not account for modern fabrication technology, the findings of recent research programs, or the capabilities of advanced engineering tools. The main structure of the Limit State Based Design (LSBD) Guideline is described, followed by an outline of the safety philosophy introduced to fit within this framework. The focus is on the development of a reliability-based design guideline as a rational tool to manage future offshore projects with an optimal balance between project safety and economy. The selection of appropriate limit state functions and the use of reliability tools to calibrate partial safety factors are also discussed.

  8. Reliability-based structural optimization: A proposed analytical-experimental study

    NASA Technical Reports Server (NTRS)

    Stroud, W. Jefferson; Nikolaidis, Efstratios

    1993-01-01

    An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.

  9. Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.

    PubMed

    Rudolph, Barbara A; Shah, Gulzar H; Love, Denise

    2006-01-01

    This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.

  10. A Robust and Reliability-Based Optimization Framework for Conceptual Aircraft Wing Design

    NASA Astrophysics Data System (ADS)

    Paiva, Ricardo Miguel

    A robustness- and reliability-based multidisciplinary analysis and optimization framework for aircraft design is presented. Robust design optimization and reliability-based design optimization are merged into a unified formulation which streamlines the setup of optimization problems and aims at preventing foreseeable implementation issues in uncertainty-based design. Surrogate models are evaluated to circumvent the intensive computations resulting from direct evaluation in nondeterministic optimization. Three types of models are implemented in the framework: quadratic interpolation, regression Kriging, and artificial neural networks. Regression Kriging presents the best compromise between performance and accuracy in deterministic wing design problems. The performance of the simultaneous implementation of robustness and reliability is evaluated using simple analytic problems and more complex wing design problems, revealing that performance benefits can still be achieved while satisfying probabilistic constraints rather than the simpler (and less computationally intensive) robust constraints. The latter are proven to be unable to follow a reliability constraint as uncertainty in the input variables increases. The computational effort of the reliability analysis is further reduced through the implementation of a coordinate change in the respective optimization sub-problem. The computational tool developed is a stand-alone application with a user-friendly graphical user interface. The multidisciplinary analysis and design optimization tool includes modules for aerodynamics, structural, aeroelastic, and cost analysis, which can be used either individually or coupled.

  11. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  12. Longitudinal reliability of tract-based spatial statistics in diffusion tensor imaging.

    PubMed

    Madhyastha, Tara; Mérillat, Susan; Hirsiger, Sarah; Bezzola, Ladina; Liem, Franziskus; Grabowski, Thomas; Jäncke, Lutz

    2014-09-01

    Relatively little is known about the reliability of longitudinal diffusion tensor imaging (DTI) measurements, despite growing interest in using DTI to track changes in white matter structure. The purpose of this study is to quantify within- and between-session scan-rescan reliability of DTI-derived measures that are commonly used to describe the characteristics of neural white matter in the context of neural plasticity research. DTI data were acquired from 16 cognitively healthy older adults (mean age 68.4). We used the Tract-Based Spatial Statistics (TBSS) approach implemented in FSL, evaluating how different DTI preprocessing choices affect reliability indices. Test-retest reliability, quantified as the ICC averaged across the voxels of the TBSS skeleton, ranged from 0.524 to 0.798 depending on the specific DTI-derived measure and the applied preprocessing steps. The two main preprocessing steps that we found to improve TBSS reliability were (a) the use of a common individual template and (b) smoothing the DTI data using a 1-voxel median filter. Overall, our data indicate that small choices in the preprocessing pipeline have a significant effect on test-retest reliability, and therefore influence the power to detect change within a longitudinal study. Furthermore, differences in the data processing pipeline limit the comparability of results across studies. Copyright © 2014 Wiley Periodicals, Inc.
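
    The per-voxel statistic averaged over the TBSS skeleton is an intraclass correlation; the abstract does not state which ICC variant was used, so the sketch below assumes the two-way random-effects, absolute-agreement ICC(2,1) of Shrout and Fleiss, applied to synthetic two-session data for a single "voxel":

```python
import numpy as np

def icc_2_1(y):
    """Two-way random-effects, absolute-agreement ICC(2,1) (Shrout & Fleiss).
    y: (n_subjects, k_sessions) array of repeated measurements."""
    n, k = y.shape
    grand = y.mean()
    row_m = y.mean(axis=1)                  # per-subject means
    col_m = y.mean(axis=0)                  # per-session means
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)
    sse = ((y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Synthetic test-retest FA values for 16 subjects, 2 sessions (not study data)
rng = np.random.default_rng(2)
subject = rng.normal(0.45, 0.04, (16, 1))
y = subject + rng.normal(0, 0.015, (16, 2))
print(f"ICC(2,1) = {icc_2_1(y):.3f}")
```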

  13. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner, made of composite and metallic materials. The design is formulated for an accepted level of risk, or reliability. The design variables, the weight, and the constraints become functions of reliability. Uncertainties in the load, strength, and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as normal, Weibull, or Gumbel. The objective function and constraints, or failure modes, became derived functions of the risk level. Solution of the problem produced the optimum design, with weight, variables, and constraints as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design could be selected depending on the level of risk acceptable to a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.

  14. Composite Stress Rupture: A New Reliability Model Based on Strength Decay

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2012-01-01

    A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model is shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.

  15. An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.

    PubMed

    Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes

    2017-01-09

    This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.

  16. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for the reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated, using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent, and computationally efficient method for reliability analysis in engineering practice based on finite element modeling.

  17. Space station software reliability analysis based on failures observed during testing at the multisystem integration facility

    NASA Technical Reports Server (NTRS)

    Tamayo, Tak Chai

    1987-01-01

    Quality of software is not only vital to the successful operation of the space station, it is also an important factor in establishing testing requirements, the time needed for software verification and integration, and launch schedules for the space station. Defense of management decisions can be greatly strengthened by combining engineering judgments with statistical analysis. Unlike hardware, software exhibits no wearout and has costly redundancies, making traditional statistical analysis unsuitable for evaluating the reliability of software. A statistical model was developed to represent the number and types of failures that occur during software testing and verification. From this model, quantitative measures of software reliability based on failure history during testing are derived. Criteria to terminate testing based on reliability objectives, and methods to estimate the expected number of fixes required, are also presented.
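
    The abstract does not name the statistical model; a common choice for failures observed during testing is the Goel-Okumoto NHPP with mean value function m(t) = a(1 - e^{-bt}), whose parameters can be fit by maximum likelihood and then used to estimate the expected number of remaining failures. A sketch under that assumption, with hypothetical failure times:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical failure times (hours) observed during a test phase of T hours
times = np.array([12, 31, 55, 70, 108, 151, 200, 240, 310, 410, 540, 700.0])
T = 800.0

def neg_loglik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    # Goel-Okumoto NHPP: intensity a*b*exp(-b*t), mean value a*(1 - exp(-b*T))
    return -(np.sum(np.log(a * b) - b * times) - a * (1.0 - np.exp(-b * T)))

res = minimize(neg_loglik, x0=[20.0, 0.005], method="Nelder-Mead")
a_hat, b_hat = res.x
remaining = a_hat * np.exp(-b_hat * T)   # expected failures still latent at T
print(f"a={a_hat:.1f}, b={b_hat:.5f}, expected remaining failures ~ {remaining:.1f}")
```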

  18. The reliability of clinical judgments and criteria associated with mechanisms-based classifications of pain in patients with low back pain disorders: a preliminary reliability study

    PubMed Central

    Smart, Keith M; Curley, Antoinette; Blake, Catherine; Staines, Anthony; Doody, Catherine

    2010-01-01

    Mechanisms-based classifications of pain have been advocated for their potential to aid understanding of clinical presentations of pain and improve clinical outcomes. However, the reliability of mechanisms-based classifications of pain and the clinical criteria upon which such classifications are based are not known. The purpose of this investigation was to assess the inter- and intra-examiner reliability of clinical judgments associated with: (i) mechanisms-based classifications of pain; and (ii) the identification and interpretation of individual symptoms and signs from a Delphi-derived expert consensus list of clinical criteria associated with mechanisms-based classifications of pain in patients with low back (±leg) pain disorders. The inter- and intra-examiner reliability of an examination protocol performed by two physiotherapists on two separate cohorts of 40 patients was assessed. Data were analysed using kappa and percentage of agreement values. Inter- and intra-examiner agreement associated with clinicians’ mechanisms-based classifications of low back (±leg) pain was ‘substantial’ (kappa = 0.77; 95% confidence interval (CI): 0.57–0.96; % agreement = 87.5) and ‘almost perfect’ (kappa = 0.96; 95% CI: 0.92–1.00; % agreement = 92.5), respectively. Sixty-eight and 95% of items on the clinical criteria checklist demonstrated clinically acceptable (kappa ⩾ 0.61 or % agreement ⩾ 80%) inter- and intra-examiner reliability, respectively. The results of this study provide preliminary evidence supporting the reliability of clinical judgments associated with mechanisms-based classifications of pain in patients with low back (±leg) pain disorders. The reliability of mechanisms-based classifications of pain should be investigated using larger samples of patients and multiple independent examiners. PMID:21655393

  19. The Seismic Reliability of Offshore Structures Based on Nonlinear Time History Analyses

    SciTech Connect

    Hosseini, Mahmood; Karimiyani, Somayyeh; Ghafooripour, Amin; Jabbarzadeh, Mohammad Javad

    2008-07-08

    Regarding past earthquake damage to offshore structures, which are vital structures in the oil and gas industries, it is important that their seismic design is performed with very high reliability. Accepting Nonlinear Time History Analyses (NLTHA) as the most reliable seismic analysis method, in this paper an offshore platform of jacket type with a height of 304 feet, having a deck of 96 feet by 94 feet and weighing 290 million pounds, has been studied. At first, some Push-Over Analyses (POA) were performed to identify the more critical members of the jacket, based on the range of their plastic deformations. Then NLTHA were performed using the 3-component accelerograms of 100 earthquakes, covering a wide range of frequency content, and normalized to three Peak Ground Acceleration (PGA) levels of 0.3 g, 0.65 g, and 1.0 g. Using the results of NLTHA, the damage and rupture probabilities of the critical members were studied to assess the reliability of the jacket structure. Since different structural members of the jacket have different effects on the stability of the platform, an 'importance factor' was considered for each critical member based on its location and orientation in the structure, and the reliability of the whole structure was then obtained by combining the reliabilities of the critical members, each with its specific importance factor.

  20. Reliability assessment of long span bridges based on structural health monitoring: application to Yonghe Bridge

    NASA Astrophysics Data System (ADS)

    Li, Shunlong; Li, Hui; Ou, Jinping; Li, Hongwei

    2009-07-01

    This paper presents reliability estimation studies based on structural health monitoring data for long-span cable-stayed bridges. The data collected by a structural health monitoring system can be used to update the assumptions or probability models of random load effects, offering the potential for accurate reliability estimation. The reliability analysis is based on the estimated distributions of Dead, Live, Wind, and Temperature Load effects. For components with FBG strain sensors, the Dead, Live, and unit Temperature Load effects can be determined from the strain measurements. For components without FBG strain sensors, the Dead, unit Temperature, and Wind Load effects can be evaluated with the finite element model, updated and calibrated with monitoring data. By applying measured truck loads and axle-spacing data from the weigh-in-motion (WIM) system to the calibrated finite element model, the Live Load effects of components without FBG sensors can be generated. The stochastic process of Live Load effects can be described approximately by a Filtered Poisson Process, and the extreme value distribution of Live Load effects can be calculated by Filtered Poisson Process theory. The first-order reliability method (FORM) is then employed to estimate the reliability index of the main components of the bridge (i.e., the stiffening girder).

  1. Reliability of home-based, motor function measure in hereditary neuromuscular diseases.

    PubMed

    Ruiz-Cortes, Xiomara; Ortiz-Corredor, Fernando; Mendoza-Pulido, Camilo

    2017-02-01

    Objective: To evaluate the reliability of the motor function measure (MFM) scale in the assessment of disease severity and progression when administered at home and in the clinic, and to assess its correlation with the Paediatric Outcomes Data Collection Instrument (PODCI). Methods: In this prospective study, two assessors rated children with hereditary neuromuscular diseases (HNMDs) using the MFM at the clinic and then 2 weeks later at the patients' home. The intraclass correlation coefficient (ICC) was calculated for the reliability of the MFM and its domains. The reliability of each item was assessed, and the correlation between the MFM and three domains of the PODCI was evaluated. Results: A total of 48 children (5-17 years of age) were assessed in both locations, and the MFM scale demonstrated excellent inter-rater reliability (ICC, 0.98). Weighted kappa ranged from excellent to poor. Correlation of the home-based MFM with the PODCI domain 'basic mobility and transfers' was excellent, with the 'upper extremity' domain moderate, and there was no correlation with the 'happiness' domain. Conclusion: The MFM is a reliable tool for assessing patients with HNMD when used in a home-based setting.

  3. Composite Reliability of a Workplace-Based Assessment Toolbox for Postgraduate Medical Education

    ERIC Educational Resources Information Center

    Moonen-van Loon, J. M. W.; Overeem, K.; Donkers, H. H. L. M.; van der Vleuten, C. P. M.; Driessen, E. W.

    2013-01-01

    In recent years, postgraduate assessment programmes around the world have embraced workplace-based assessment (WBA) and its related tools. Despite their widespread use, results of studies on the validity and reliability of these tools have been variable. Although in many countries decisions about residents' continuation of training and…

  5. Generalizability Theory Reliability of Written Expression Curriculum-Based Measurement in Universal Screening

    ERIC Educational Resources Information Center

    Keller-Margulis, Milena A.; Mercer, Sterett H.; Thomas, Erin L.

    2016-01-01

    The purpose of this study was to examine the reliability of written expression curriculum-based measurement (WE-CBM) in the context of universal screening from a generalizability theory framework. Students in second through fifth grade (n = 145) participated in the study. The sample included 54% female students, 49% White students, 23% African…

  6. An Evaluation method for C2 Cyber-Physical Systems Reliability Based on Deep Learning

    DTIC Science & Technology

    2014-06-01

    Based on the reliability testing data of the system, the prior distribution of the reliability is obtained, and the posterior distribution then follows by Bayes' theorem.

  7. Body-Image Perceptions: Reliability of a BMI-Based Silhouette Matching Test

    ERIC Educational Resources Information Center

    Peterson, Michael; Ellenberg, Deborah; Crossan, Sarah

    2003-01-01

    Objective: To assess the reliability of a BMI-based Silhouette Matching Test (BMI-SMT). Methods: The perceptions of ideal and current body images of 215 ninth through twelfth graders were assessed at 5 different schools within a mid-Atlantic state public school system. Results: Findings provided quantifiable data and discriminating measurements…

  8. 49 CFR Appendix E to Part 238 - General Principles of Reliability-Based Maintenance Programs

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... minimum total cost, including maintenance costs and the costs of residual failures. (b) Reliability-based... as the direct cost of repair; (3) Non-operational consequences, which involve only the direct cost of... instrumentation or other design features. The feasibility and cost effectiveness of scheduled maintenance depend...

  9. Empirical vs. Expected IRT-Based Reliability Estimation in Computerized Multistage Testing (MST)

    ERIC Educational Resources Information Center

    Zhang, Yanwei; Breithaupt, Krista; Tessema, Aster; Chuah, David

    2006-01-01

    Two IRT-based procedures to estimate test reliability for a certification exam that used both adaptive (via a MST model) and non-adaptive design were considered in this study. Both procedures rely on calibrated item parameters to estimate error variance. In terms of score variance, one procedure (Method 1) uses the empirical ability distribution…

  11. Two Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates

    ERIC Educational Resources Information Center

    Raju, Nambury S.; Oshima, T.C.

    2005-01-01

    Two new prophecy formulas for estimating item response theory (IRT)-based reliability of a shortened or lengthened test are proposed. Some of the relationships between the two formulas, one of which is identical to the well-known Spearman-Brown prophecy formula, are examined and illustrated. The major assumptions underlying these formulas are…

  12. Assessing the Reliability of Curriculum-Based Measurement: An Application of Latent Growth Modeling

    ERIC Educational Resources Information Center

    Yeo, Seungsoo; Kim, Dong-Il; Branum-Martin, Lee; Wayman, Miya Miura; Espin, Christine A.

    2012-01-01

    The purpose of this study was to demonstrate the use of Latent Growth Modeling (LGM) as a method for estimating reliability of Curriculum-Based Measurement (CBM) progress-monitoring data. The LGM approach permits the error associated with each measure to differ at each time point, thus providing an alternative method for examining of the…

  13. Assessing I-Grid(TM) web-based monitoring for power quality and reliability benchmarking

    SciTech Connect

    Divan, Deepak; Brumsickle, William; Eto, Joseph

    2003-04-30

    This paper presents preliminary findings from DOE's pilot program. The results show how a web-based monitoring system can form the basis for data aggregation, correlation, and benchmarking across broad geographical lines. A longer report describes additional findings from the pilot, including the impacts of power quality and reliability on customers' operations [Divan, Brumsickle, Eto 2003].

  14. Dissipativity-Based Reliable Control for Fuzzy Markov Jump Systems With Actuator Faults.

    PubMed

    Tao, Jie; Lu, Renquan; Shi, Peng; Su, Hongye; Wu, Zheng-Guang

    2017-09-01

    This paper is concerned with the problem of reliable dissipative control for Takagi-Sugeno fuzzy systems with Markov jumping parameters. Considering the influence of actuator faults, a sufficient condition is developed to ensure that the resultant closed-loop system is stochastically stable and strictly (Q, S, R)-dissipative, based on a relaxed approach in which mode-dependent and fuzzy-basis-dependent Lyapunov functions are employed. A reliable dissipative controller for fuzzy Markov jump systems is then designed, with a sufficient condition proposed for the existence of a controller with guaranteed stability and dissipativity. The effectiveness and potential of the design method are verified by two simulation examples.

  15. Research on Air Traffic Control Automatic System Software Reliability Based on Markov Chain

    NASA Astrophysics Data System (ADS)

    Wang, Xinglong; Liu, Weixiang

    Ensuring the safe spacing of aircraft and the high efficiency of air traffic are the main tasks of an air traffic control automatic system. A Markov model of an Air Traffic Control Automatic System (ATCAS) is put forward in this paper, built from 36 months of ATCAS failure data. A Markov-chain method to predict the states s1, s2, s3 of ATCAS is presented, which predicts and validates the reliability of ATCAS according to reliability theory. The experimental results show that the method can be used in future research and proved to be practicable.
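
    A minimal sketch of this kind of Markov-chain reliability prediction: estimate a row-stochastic transition matrix from monthly state-transition counts, then propagate the state distribution forward (the state meanings and counts are hypothetical; the abstract does not provide the data):

```python
import numpy as np

# Hypothetical monthly transition counts between three system states
# s1 = fully operational, s2 = degraded, s3 = failed (assumed meanings)
counts = np.array([[30, 3, 1],
                   [4, 8, 2],
                   [2, 1, 3]], dtype=float)

P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

state = np.array([1.0, 0.0, 0.0])                # start fully operational
for month in (1, 6, 12):
    dist = state @ np.linalg.matrix_power(P, month)
    print(f"month {month:2d}: P(s1)={dist[0]:.3f}  P(s2)={dist[1]:.3f}  P(s3)={dist[2]:.3f}")
```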

  16. Can I leave the theatre? A key to more reliable workplace-based assessment.

    PubMed

    Weller, J M; Misur, M; Nicolson, S; Morris, J; Ure, S; Crossley, J; Jolly, B

    2014-06-01

    The value of workplace-based assessments such as the mini-clinical evaluation exercise (mini-CEX), and clinicians' confidence and engagement in the process, has been constrained by low reliability and limited capacity to identify underperforming trainees. We proposed that changing the way supervisors make judgements about trainees would improve score reliability and identification of underperformers. Anaesthetists regularly make decisions about the level of trainee independence with a case, based on how closely they need to supervise them. We therefore used this as the basis for a new scoring system. We analysed 338 mini-CEXs where supervisors scored trainees using the conventional system, and also scored trainee independence, based on the need for direct, or more distant, supervision. As supervisory requirements depend on case difficulty, we then compared the actual trainee independence score and the expected trainee independence score obtained externally. Compared with the conventional scoring system used in previous studies, reliability was very substantially improved using a system based on a trainee's level of independence with a case. Reliability improved further when this score was corrected for case difficulty. Furthermore, the new scoring system overcame the previously identified problem of assessor leniency and identified a number of trainees performing below expectations. Supervisors' judgements on trainee independence with a case, based on the need for direct or more distant supervision, can generate reliable scores of trainee ability without the need for an onerous number of assessments, identify trainees performing below expectations, and track trainee progress towards independent specialist practice. © The Author [2014]. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Reliability assessment of a hospital quality measure based on rates of adverse outcomes on nursing units.

    PubMed

    Staggs, Vincent S

    2015-12-31

    The purpose of this study was to develop methods for assessing the reliability of scores on a widely disseminated hospital quality measure based on nursing unit fall rates. Poisson regression interactive multilevel modeling was adapted to account for clustering of units within hospitals. Three signal-noise reliability measures were computed. Squared correlations between the hospital score and true hospital fall rate averaged 0.52 ± 0.18 for total falls (0.68 ± 0.18 for injurious falls). Reliabilities on the other two measures averaged at least 0.70 but varied widely across hospitals. Parametric bootstrap data reflecting within-unit noise in falls were generated to evaluate percentile-ranked hospital scores as estimators of true hospital fall rate ranks. Spearman correlations between bootstrap hospital scores and true fall rates averaged 0.81 ± 0.01 (0.79 ± 0.01). Bias was negligible, but ranked hospital scores were imprecise, varying across bootstrap samples with average SD 11.8 (14.9) percentiles. Across bootstrap samples, hospital-measure scores fell in the same decile as the true fall rate in about 30% of cases. Findings underscore the importance of thoroughly assessing reliability of quality measurements before deciding how they will be used. Both the hospital measure and the reliability methods described can be adapted to other contexts involving clustered rates of adverse patient outcomes. © The Author(s) 2015.
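
    The parametric-bootstrap evaluation described above can be sketched compactly: simulate Poisson counts around each hospital's true rate, percentile-rank the resulting scores, and examine rank spread across replicates (all rates and exposures below are hypothetical):

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

rng = np.random.default_rng(3)
n_hosp, n_boot = 200, 500
true_rate = rng.gamma(shape=8.0, scale=0.5, size=n_hosp)  # falls per 1000 patient-days
exposure = rng.uniform(2.0, 20.0, n_hosp)                 # 1000s of patient-days

ranks = np.empty((n_boot, n_hosp))
for b in range(n_boot):
    counts = rng.poisson(true_rate * exposure)            # within-unit Poisson noise
    ranks[b] = 100.0 * rankdata(counts / exposure) / n_hosp  # percentile-ranked score

rho = np.mean([spearmanr(ranks[b], true_rate)[0] for b in range(n_boot)])
print(f"mean Spearman(score, true rate) = {rho:.2f}")
print(f"average SD of a hospital's percentile rank = {ranks.std(axis=0).mean():.1f}")
```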

  18. Validity and reliability of a new edge-based computerized method for identification of cephalometric landmarks.

    PubMed

    Kazandjian, Serge; Kiliaridis, Stavros; Mavropoulos, Anestis

    2006-07-01

    To evaluate the validity and inter- and intraexaminer reliability when on-screen landmarks are digitized manually or when these are computer-assisted by means of a new cephalometric software feature. Twenty radiographs were digitized four times by two experienced orthodontists using a manual method and an edge-based algorithm that helps landmark identification by detecting the edges of anatomical structures. The computer-assisted method did not agree with manual digitization in 7 of 13 landmarks and 5 of 10 variables. With a tolerance of 0.5 mm or degrees, the two methods did not agree in cephalometric variables. Intraoperator reliability was improved for B point (x-axis), and Menton (x- and y-axis). It got worse for point A (y-axis). Interoperator reliability was improved for B point (x- and y-axis), Soft Labrale Inferior (x- and y-axis), Soft Pogonion (x-axis), and Menton (y-axis). It decreased for point A (y-axis). Intra- and interoperator reliability got better for only one cephalometric variable under study (SNB). The edge-locking feature seems to be a promising tool for increasing the reliability of on-screen cephalometric analysis. There seem to be difficulties in locating the appropriate edges when artifacts or soft tissue edges are located near the targeted landmark. The existence of very small, but systematic differences between the two digitization methods manifests the need for further improvement.

  19. Measuring Fidelity and Adaptation: Reliability of an Instrument for School-Based Prevention Programs.

    PubMed

    Bishop, Dana C; Pankratz, Melinda M; Hansen, William B; Albritton, Jordan; Albritton, Lauren; Strack, Joann

    2014-06-01

    There is a need to standardize methods for assessing fidelity and adaptation. Such standardization would allow program implementation to be examined in a manner that will be useful for understanding the moderating role of fidelity in dissemination research. This article describes a method for collecting data about fidelity of implementation for school-based prevention programs, including measures of adherence, quality of delivery, dosage, participant engagement, and adaptation. We report on the reliability of these methods when applied by four observers who coded video recordings of teachers delivering All Stars, a middle school drug prevention program. Interrater agreement for scaled items was assessed for an instrument designed to evaluate program fidelity. Results indicated sound interrater reliability for items assessing adherence, dosage, quality of teaching, teacher understanding of concepts, and program adaptations. The interrater reliability for items assessing potential program effectiveness, classroom management, achievement of activity objectives, and adaptation valences was improved by dichotomizing the response options for these items. The item that assessed student engagement demonstrated only modest interrater reliability and was not improved through dichotomization. Several coder pairs were discordant on items that overall demonstrated good interrater reliability. Proposed modifications to the coding manual and protocol are discussed.

  20. A rainwater harvesting system reliability model based on nonparametric stochastic rainfall generator

    NASA Astrophysics Data System (ADS)

    Basinger, Matt; Montalto, Franco; Lall, Upmanu

    2010-10-01

    The reliability with which harvested rainwater can be used for flushing toilets, irrigating gardens, and topping off air-conditioners serving multifamily residential buildings in New York City is assessed using a new rainwater harvesting (RWH) system reliability model. Although demonstrated with a specific case study, the model is portable because it is based on a nonparametric rainfall generation procedure utilizing a bootstrapped Markov chain. Precipitation occurrence is simulated using transition probabilities derived for each day of the year based on the historical probability of wet and dry day state changes. Precipitation amounts are selected from a matrix of historical values within a moving 15-day window centered on the target day. RWH system reliability is determined for user-specified catchment area and tank volume ranges using precipitation ensembles generated with the described stochastic procedure. The reliability with which NYC backyard gardens can be irrigated and air-conditioning units supplied with water harvested from local roofs exceeds 80% and 90%, respectively, for the entire range of catchment areas and tank volumes considered in the analysis. For RWH systems installed on the most commonly occurring rooftop catchment areas found in NYC (51-75 m²), toilet flushing demand can be met with 7-40% reliability, with the lower end of the range representing buildings with high-flow toilets and no storage elements, and the upper end representing buildings that feature low-flow fixtures and storage tanks of up to 5 m³. When the reliability curves developed are used to size RWH systems to flush the low-flow toilets of all multifamily buildings found in a typical residential neighborhood in the Bronx, rooftop runoff inputs to the sewer system are reduced by approximately 28% over an average rainfall year, and potable water demand is reduced by approximately 53%.
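
    The two-part generator described above (a Markov chain for wet/dry occurrence plus bootstrap resampling of amounts from a 15-day window) is straightforward to sketch. The historical record below is synthetic, and the details are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# hist[yr, day] = daily rainfall depth (mm); synthetic stand-in for a gauge record
hist = np.maximum(rng.gamma(0.4, 12, (30, 365)) - 4.0, 0.0)
wet = hist > 0

def wet_prob(day, prev_wet):
    """P(wet today | yesterday's state), estimated for this calendar day."""
    mask = wet[:, day - 1] == prev_wet
    return wet[mask, day].mean() if mask.any() else wet[:, day].mean()

def sample_amount(day):
    """Bootstrap a wet-day amount from a 15-day window centered on `day`."""
    window = np.arange(day - 7, day + 8) % 365
    pool = hist[:, window][wet[:, window]]
    return rng.choice(pool)

def simulate_year():
    series = np.zeros(365)
    state = rng.random() < wet[:, 0].mean()   # initialize from day-0 climatology
    if state:
        series[0] = sample_amount(0)
    for d in range(1, 365):
        state = rng.random() < wet_prob(d, state)
        if state:
            series[d] = sample_amount(d)
    return series

ens = np.array([simulate_year() for _ in range(100)])   # precipitation ensemble
print(f"mean annual rainfall: {ens.sum(axis=1).mean():.0f} mm")
```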

  1. Degradation mechanisms in high-power multi-mode InGaAs-AlGaAs strained quantum well lasers for high-reliability applications

    NASA Astrophysics Data System (ADS)

    Sin, Yongkun; Presser, Nathan; Brodie, Miles; Lingley, Zachary; Foran, Brendan; Moss, Steven C.

    2015-03-01

    Laser diode manufacturers perform accelerated multi-cell lifetests to estimate the lifetimes of lasers using an empirical model. Since state-of-the-art laser diodes typically require a long period of latency before they degrade, a significant amount of stress is applied to the lasers to generate failures in relatively short test durations. A drawback of this approach is the lack of mean-time-to-failure data under intermediate and low stress conditions, leading to uncertainty in model parameters (especially the optical power and current exponents) and potential overestimation of lifetimes at usage conditions. This approach is a concern especially for satellite communication systems, where high reliability is required of lasers over long durations in the space environment. A number of groups have studied reliability and degradation processes in GaAs-based lasers, but none of these studies have yielded a reliability model based on the physics of failure. The lack of such a model is also a concern for space applications, where a complete understanding of degradation mechanisms is necessary. Our present study addresses the aforementioned issues by performing long-term lifetests under low stress conditions followed by failure mode analysis (FMA) and physics-of-failure investigation. We performed low-stress lifetests on both MBE- and MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers under ACC (automatic current control) mode to study low-stress degradation mechanisms. Our lifetests have accumulated over 36,000 test hours, and FMA is performed on failures using our angle polishing technique followed by EL. This technique allows us to identify failure types by observing dark line defects through a window introduced in backside metal contacts. We also investigated degradation mechanisms in MOCVD-grown broad-area InGaAs-AlGaAs strained QW lasers using various FMA techniques. Since it is a challenge to control defect densities during the growth of laser structures, we chose to

  2. Sequential optimization with particle splitting-based reliability assessment for engineering design under uncertainties

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiaotian; Pan, Rong; Sun, Qing

    2014-08-01

    The evaluation of probabilistic constraints plays an important role in reliability-based design optimization. Traditional simulation methods such as Monte Carlo simulation can provide highly accurate results, but they are often computationally intensive to implement. To improve the computational efficiency of the Monte Carlo method, this article proposes a particle splitting approach, a rare-event simulation technique that evaluates probabilistic constraints. The particle splitting-based reliability assessment is integrated into the iterative steps of design optimization. The proposed method provides an enhancement of subset simulation by increasing sample diversity and producing a stable solution. This method is further extended to address the problem with multiple probabilistic constraints. The performance of the particle splitting approach is compared with the most probable point based method and other approximation methods through examples.
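
    A compact sketch of particle splitting in the subset-simulation style, on a toy limit state in standard normal space with a known answer; the conditional-sampling proposal and all parameters are illustrative choices, not the article's specific enhancement:

```python
import numpy as np

rng = np.random.default_rng(5)

def g(u):
    """Toy limit state in standard normal space; failure when g <= 0."""
    return 5.0 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

def subset_simulation(g, dim=2, n=2000, p0=0.1, max_levels=10):
    """Particle-splitting (subset simulation) estimate of P(g <= 0)."""
    u = rng.standard_normal((n, dim))
    pf, n_seed = 1.0, int(p0 * n)
    for _ in range(max_levels):
        y = g(u)
        order = np.argsort(y)
        thresh = y[order[n_seed - 1]]        # p0-quantile of the performance
        if thresh <= 0.0:                    # reached the failure domain
            return pf * np.mean(y <= 0.0)
        pf *= p0
        seeds = u[order[:n_seed]]            # particles that survive splitting
        # Grow each seed into a chain of length 1/p0 with a preconditioned
        # Crank-Nicolson proposal (preserves N(0, I)); reject moves that leave
        # the current subset {g <= thresh}.
        chains, cur = [seeds], seeds
        for _ in range(int(1 / p0) - 1):
            cand = 0.8 * cur + np.sqrt(1 - 0.8**2) * rng.standard_normal(cur.shape)
            ok = g(cand) <= thresh
            cur = np.where(ok[:, None], cand, cur)
            chains.append(cur)
        u = np.vstack(chains)
    return pf

print(f"P_f ~ {subset_simulation(g):.2e}  (exact Phi(-5) = 2.87e-07)")
```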

  3. Reliability of high power diode laser systems based on single emitters

    NASA Astrophysics Data System (ADS)

    Leisher, Paul; Reynolds, Mitch; Brown, Aaron; Kennedy, Keith; Bao, Ling; Wang, Jun; Grimshaw, Mike; DeVito, Mark; Karlsen, Scott; Small, Jay; Ebert, Chris; Martinsen, Rob; Haden, Jim

    2011-03-01

    Diode laser modules based on arrays of single emitters offer a number of advantages over bar-based solutions including enhanced reliability, higher brightness, and lower cost per bright watt. This approach has enabled a rapid proliferation of commercially available high-brightness fiber-coupled diode laser modules. Incorporating ever-greater numbers of emitters within a single module offers a direct path for power scaling while simultaneously maintaining high brightness and minimizing overall cost. While reports of long lifetimes for single emitter diode laser technology are widespread, the complex relationship between the standalone chip reliability and package-induced failure modes, as well as the impact of built-in redundancy offered by multiple emitters, are not often discussed. In this work, we present our approach to the modeling of fiber-coupled laser systems based on single-emitter laser diodes.

  4. The weakest t-norm based intuitionistic fuzzy fault-tree analysis to evaluate system reliability.

    PubMed

    Kumar, Mohit; Yadav, Shiv Prasad

    2012-07-01

    In this paper, a new approach to intuitionistic fuzzy fault-tree analysis is proposed to evaluate system reliability and to find the most critical system component affecting system reliability. Weakest-t-norm-based intuitionistic fuzzy fault-tree analysis is presented to calculate the fault intervals of system components by integrating experts' knowledge and experience, expressed as possibilities of failure of the bottom events. It applies fault-tree analysis, α-cuts of intuitionistic fuzzy sets, and T(ω) (the weakest t-norm) based arithmetic operations on triangular intuitionistic fuzzy sets to obtain the fault interval and reliability interval of the system. This paper also modifies Tanaka et al.'s fuzzy fault-tree definition. For numerical verification, a malfunction of the weapon system "automatic gun" is presented as a numerical example. The result of the proposed method is compared with existing reliability analysis approaches. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
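
    A simplified sketch of the interval arithmetic underlying such an analysis, using crisp α-cut intervals with standard probabilistic AND/OR gates rather than the full T(ω) intuitionistic machinery (the tree structure and numbers are hypothetical):

```python
import numpy as np

def interval(lo, hi):
    return np.array([lo, hi])

def comp(p):
    """Complement of an interval possibility: 1 - [lo, hi] = [1-hi, 1-lo]."""
    return (1.0 - p)[::-1]

def and_gate(*ps):
    """AND gate: product of bottom-event possibilities (monotone increasing,
    so interval endpoints multiply directly)."""
    out = interval(1.0, 1.0)
    for p in ps:
        out = out * p
    return out

def or_gate(*ps):
    """OR gate: 1 - prod(1 - p_i), evaluated endpoint-wise via complements."""
    out = interval(1.0, 1.0)
    for p in ps:
        out = out * comp(p)
    return comp(out)

# Bottom-event fault intervals (illustrative alpha-cut intervals, not the paper's data)
pump = interval(0.02, 0.05)
valve = interval(0.01, 0.03)
sensor = interval(0.04, 0.08)

top = or_gate(and_gate(pump, valve), sensor)   # hypothetical tree structure
print("top-event fault interval:", top)
print("reliability interval:", comp(top))
```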

  5. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    NASA Astrophysics Data System (ADS)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.

  6. Reliability of Therapist Self-Report on Treatment Targets and Focus in Family-Based Intervention

    PubMed Central

    Hogue, Aaron; Dauber, Sarah; Henderson, Craig E.; Liddle, Howard A.

    2013-01-01

    Reliable therapist-report methods appear to be an essential component of quality assurance procedures to support adoption of evidence-based practices in usual care, but studies have found weak correspondence between therapist and observer ratings of treatment techniques. This study examined therapist reliability and accuracy in rating intervention target (i.e., session participants) and focus (i.e., session content) in a manual-guided, family-based preventive intervention implemented with 50 inner-city adolescents at risk for substance use. A total of 106 sessions selected from three phases of treatment were rated via post-session self-report by the participating therapist and also via videotape by nonparticipant coders. Both groups estimated the amount of session time devoted to model-prescribed treatment targets (adolescent, parent, conjoint) and foci (family, school, peer, prosocial, drugs). Therapists demonstrated excellent reliability with coders for treatment targets and moderate to high reliability for treatment foci across the sample and within each phase. Also, therapists did not consistently overestimate their degree of activity with targets or foci. Implications of study findings for fidelity assessment in routine settings are discussed. PMID:24068479

  7. Test-retest reliability of MRI-based disk position diagnosis of the temporomandibular joint.

    PubMed

    Nagamatsu-Sakaguchi, Chiyomi; Maekawa, Kenji; Ono, Tsuyoshi; Yanagi, Yoshinobu; Minakuchi, Hajime; Miyawaki, Shouichi; Asaumi, Junichi; Takano-Yamamoto, Teruko; Clark, Glenn T; Kuboki, Takuo

    2012-02-01

    This study evaluated the test-retest reliability of determining the temporomandibular joint (TMJ) disk position, diagnosed using magnetic resonance imaging (MRI). These assessments were made as a baseline measurement for a prospective cohort study examining risk factors for the precipitation and progression of temporomandibular disorders. Fifteen subjects (mean age, 24.2 ± 0.94 years; male/female = 8/7) were recruited from the students of Okayama University Dental School. Sagittal MR TMJ images were taken with a 1.5-T MR scanner (Magnetom Vision, Siemens) in the closed and maximally open positions, twice, at an interval of about 1 week (6-11 days). The images were displayed at 200% magnification on a computer screen with a commercially available image software package (OSIRIS, UIN/HCUG). Three calibrated examiners diagnosed the disk positions using standardized criteria. The disk position of each joint was classified as normal, anterior disk displacement with or without reduction, or other. The first and second disk position diagnoses were compared, and the test-retest reliability was calculated using the kappa index. The second disk position diagnosis was consistent with the first in 27 out of 30 joints, and the kappa value for the test-retest reliability between the first and second diagnoses was 0.812. These results indicate that the test-retest reliability of MRI-based diagnosis of TMJ disk position at an interval of about 1 week is substantially high, even though the diagnoses were not completely consistent.

  8. Refining a Web-based goal assessment interview: item reduction based on reliability and predictive validity.

    PubMed

    Schwartz, Carolyn E; Li, Jei; Rapkin, Bruce D

    2016-09-01

    Goals are an important basis for patients' cognitive appraisal processes underlying quality-of-life (QOL) assessment because they are the foundation of one's frame of reference. We sought to identify the best of six goal delineation items, and the relevant themes, for two new versions of the QOL Appraisal Profile: an interview tool using a subset of the best open-ended goal delineation items, and a shorter close-ended version for use in survey research. This is a secondary analysis of longitudinal data (n = 1126) from participants in the North American Research Committee on Multiple Sclerosis (MS) registry. The open-ended data were coded by at least two trained coders with moderately high inter-rater agreement. There were 31 themes reflecting goal content, including health, interpersonal, independence, mental health, and financial themes. Descriptive statistics identified the most prevalent themes. Reliability analysis (alpha, item-total correlations) and hierarchical linear modeling identified the best goal items. Based on these qualitative and quantitative analyses, Solve (item 2) is the best single item because it is a clear anchor for about a third of the goal themes and explains the most variance in outcomes and demographic characteristics, suggesting that it taps into and reveals diversity in the sample. The next best items are Accomplish and Maintain (items 1 and 4), which are useful in tapping into and revealing diversity among people reporting cognitive deficits (Accomplish) and demographic factors (both Accomplish and Maintain). The goal delineation items identified as the best performers in this study will be used to develop a shorter open-ended version of the QOL Appraisal Profile and an entirely close-ended version for use in more standard survey research settings. These tools will enable coaching of patients in medical decision making as well as investigations of appraisal and response shift in QOL research.

  9. A Reliability Model for Ni-BaTiO3-Based (BME) Ceramic Capacitors

    NASA Technical Reports Server (NTRS)

    Liu, Donhang

    2014-01-01

    The evaluation of multilayer ceramic capacitors (MLCCs) with base-metal electrodes (BMEs) for potential NASA space project applications requires an in-depth understanding of their reliability. The reliability of an MLCC is defined as the ability of the dielectric material to retain its insulating properties under stated environmental and operational conditions for a specified period of time t. In this presentation, a general mathematical expression of a reliability model for a BME MLCC is developed and discussed. The reliability model consists of three parts: (1) a statistical distribution that describes the individual variation of properties in a test group of samples (Weibull, lognormal, normal, etc.); (2) an acceleration function that describes how a capacitor's reliability responds to external stresses such as applied voltage and temperature (all units in the test group should follow the same acceleration function if they share the same failure mode, independent of individual units); and (3) the effect and contribution of the structural and constructional characteristics of a multilayer capacitor device, such as the number of dielectric layers N, dielectric thickness d, average grain size r, and capacitor chip size S. In general, a two-parameter Weibull statistical distribution model is used to describe a BME capacitor's reliability as a function of time. The acceleration function that relates a capacitor's reliability to external stresses depends on the failure mode. Two failure modes have been identified in BME MLCCs: catastrophic and slow degradation. A catastrophic failure is characterized by a time-accelerating increase in leakage current that is mainly due to existing processing defects (voids, cracks, delamination, etc.), i.e., extrinsic defects. A slow degradation failure is characterized by a near-linear increase in leakage current against the stress time, caused by the electromigration of oxygen vacancies (intrinsic defects).
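
    As a rough illustration of parts (1) and (2) of such a model, the sketch below combines a two-parameter Weibull survival function with a Prokopowicz-Vaskas-style voltage/temperature acceleration factor, a commonly used empirical choice for MLCCs. The exponent n, the activation energy Ea, and all test conditions are illustrative assumptions, not values fitted in the presentation.

```python
import math

K_BOLTZMANN = 8.617e-5  # eV/K

def weibull_reliability(t, eta, beta):
    """Two-parameter Weibull survival function R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def acceleration_factor(v_test, v_use, t_test_K, t_use_K, n=3.0, ea=1.1):
    """Time-to-failure ratio t_use/t_test for combined voltage/temperature stress."""
    return ((v_test / v_use) ** n *
            math.exp((ea / K_BOLTZMANN) * (1.0 / t_use_K - 1.0 / t_test_K)))

# Extrapolate a HALT characteristic life (500 h, hypothetical) to use conditions:
af = acceleration_factor(v_test=200.0, v_use=50.0, t_test_K=428.0, t_use_K=298.0)
eta_use = 500.0 * af
print(weibull_reliability(t=8760.0, eta=eta_use, beta=2.0))  # R after one year
```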

  10. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
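
    For context, the FORM kernel referred to above can be sketched in a few lines: the Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration locates the design point of a limit state g(u) in standard normal space, and the reliability index is its distance from the origin. The toy limit state below is a stand-in, not the report's buckling response surface.

```python
import math
import numpy as np

def hlrf(g, n_dim, tol=1e-8, max_iter=100, h=1e-6):
    """HL-RF iteration for the design point of g(u) = 0 in standard normal space."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        # Central-difference gradient of the limit state at the current point.
        grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(n_dim)])
        u_new = ((grad @ u - g(u)) / (grad @ grad)) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u  # (reliability index, design point)

beta, u_star = hlrf(lambda u: 3.0 - u[0] - u[1], n_dim=2)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Pf = Phi(-beta)
print(beta, pf)  # beta ~ 2.121, Pf ~ 1.7e-2
```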

  11. Perceptual attraction in tool use: evidence for a reliability-based weighting mechanism.

    PubMed

    Debats, Nienke B; Ernst, Marc O; Heuer, Herbert

    2017-04-01

    Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information about single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool is weighted according to its relative reliability (i.e., inverse variance), and 2) the unisensory reliabilities sum in the integrated estimate. We assessed whether perceptual attraction is consistent with the predictions of the optimal multisensory integration model. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The variances of the biased position judgments were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY: Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account of this phenomenon, showing that the process behind it is similar to optimal integration of sensory information relating to single objects.
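
    A minimal sketch of the optimal-integration prediction being tested: each position estimate is weighted by its relative reliability (inverse variance), and the reliabilities add in the fused estimate. All numbers are hypothetical.

```python
def integrate(m1, var1, m2, var2):
    """Optimal (inverse-variance weighted) fusion of two noisy estimates."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)  # reliability-based weight
    w2 = 1 - w1
    fused_mean = w1 * m1 + w2 * m2
    fused_var = 1 / (1 / var1 + 1 / var2)    # reliabilities (inverse variances) add
    return fused_mean, fused_var

# Hand (proprioception, noisy) vs. cursor (vision, precise): the fused
# estimate is pulled toward the more reliable cursor signal.
print(integrate(m1=10.0, var1=4.0, m2=12.0, var2=1.0))
```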

  12. Validity and reliability of autofluorescence-based quantification method of dental plaque.

    PubMed

    Han, Sun-Young; Kim, Bo-Ra; Ko, Hae-Youn; Kwon, Ho-Keun; Kim, Baek-Il

    2015-12-01

    The aim of this study was to evaluate the validity and reliability of an autofluorescence-based plaque quantification (APQ) method. The facial surfaces of 600 sound anterior teeth of 50 subjects were examined. The subjects received a dental plaque examination using the Turesky-modified Quigley-Hein plaque index (QHI) and the Silness & Löe plaque index (SLI). Autofluorescence images were taken before the plaque examination with Quantitative Light-induced Fluorescence-Digital, and a plaque percent index (PPI) was calculated. The correlation between the two existing plaque indices and the PPI of the APQ method was evaluated to determine which threshold of plaque redness on the tooth (ΔR) yields the highest correlation. Area under the ROC curve (AUC) analysis and intra- and inter-examiner reliability tests were performed. The PPI at ΔR20 showed a moderate correlation with the two existing plaque indices (rho = 0.48 for QHI, 0.51 for SLI). The methodology fell into the fair validity category and had excellent reliability; it also showed the potential to detect heavy plaque with fair validity. In summary, the APQ method demonstrated excellent reliability and fair validity compared with the two conventional indices. The plaque quantification described has the potential to be used in the clinical evaluation of oral hygiene procedures. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Reliable and redundant FPGA based read-out design in the ATLAS TileCal Demonstrator

    SciTech Connect

    Akerstedt, Henrik; Muschter, Steffen; Drake, Gary; Anderson, Kelby; Bohm, Christian; Oreglia, Mark; Tang, Fukun

    2015-10-01

    The Tile Calorimeter at ATLAS [1] is a hadron calorimeter based on steel plates and scintillating tiles read out by PMTs. The current read-out system uses standard ADCs and custom ASICs to digitize and temporarily store the data on the detector; however, only a subset of the data is actually read out to the counting room. The on-detector electronics will be replaced around 2023. To achieve the required reliability, the upgraded system will be highly redundant. The ASICs will be replaced with Kintex-7 FPGAs from Xilinx, which, together with multiple 10 Gbps optical read-out links, will allow a full read-out of all detector data. Because of the higher radiation levels expected when the beam luminosity is increased, opportunities for repairs will be less frequent, so the circuitry and firmware must be designed for sufficiently high reliability using redundancy and radiation-tolerant components. Within a year, a hybrid demonstrator including the new read-out system will be installed in one slice of the ATLAS Tile Calorimeter. This will allow the proposed upgrade to be thoroughly evaluated well before the planned 2023 deployment in all slices, especially with regard to long-term reliability. Different firmware strategies, along with their integration in the demonstrator, are presented in the context of high-reliability protection against hardware malfunction and radiation-induced errors.

  14. Reliability analysis of the solar array based on Fault Tree Analysis

    NASA Astrophysics Data System (ADS)

    Jianing, Wu; Shaoze, Yan

    2011-07-01

    The solar array is an important device used in spacecraft, influencing the quality of in-orbit operation and even the launch. This paper analyzes the reliability of the mechanical system and identifies the most vital subsystem of the solar array. A fault tree analysis (FTA) model is established according to the operating process of the mechanical system, based on the DFH-3 satellite; the logical expression of the top event is obtained by Boolean algebra and the reliability of the solar array is calculated. The analysis shows that the hinges are the most vital links in the solar array. By analyzing the structure importance (SI) of the hinge's FTA model, several fatal causes can be identified, including faults of the seal, insufficient torque of the locking spring, temperature in space, and friction force. Damage is the initial stage of a fault, so limiting damage is significant for preventing faults. Furthermore, recommendations for improving reliability through damage limitation are discussed, which can be used for redesigning the solar array and for reliability growth planning.

  15. Novel ring-based architecture for TWDM-PON with high reliability and flexible extensibility

    NASA Astrophysics Data System (ADS)

    Xiong, Yu; Sun, Peng; Li, Zhiqiang

    2017-02-01

    Time- and wavelength-division multiplexed passive optical network (TWDM-PON) was selected as the primary solution for NG-PON2 by the Full Service Access Network (FSAN) group in 2012. Since then, TWDM-PON has been applied to a wider set of applications, including those that are outage sensitive and require flexible expansion, so protection techniques offering both reliability and flexibility need to be studied. In this paper, we propose a novel ring-based architecture for TWDM-PON. The architecture provides a reliable ring protection scheme against a fiber fault occurring on the main ring (MR), sub-ring (SR), or last-mile ring (LMR). In addition, we exploit an extended node (EN) to realize convenient and smooth network expansion for flexible extensibility: more remote nodes (RNs) and optical network units (ONUs) can access the architecture through the EN. Moreover, to further improve network reliability, we design a 1:1 protection scheme against a fault on the fiber connecting an RN and the EN. The results show that the proposed architecture has a recovery time of 17 ms in protection mode, and the reliability of the network is greatly improved compared to a network without protection. As the number of ONUs increases, the average cost per ONU gradually decreases. Finally, simulations verify the feasibility of the architecture.

  16. Generalizability theory reliability of written expression curriculum-based measurement in universal screening.

    PubMed

    Keller-Margulis, Milena A; Mercer, Sterett H; Thomas, Erin L

    2016-09-01

    The purpose of this study was to examine the reliability of written expression curriculum-based measurement (WE-CBM) in the context of universal screening from a generalizability theory framework. Students in second through fifth grade (n = 145) participated in the study. The sample included 54% female students, 49% White students, 23% African American students, 17% Hispanic students, 8% Asian students, and 3% students identified as two or more races. Of the sample, 8% were English Language Learners and 6% were students receiving special education. Three WE-CBM probes were administered for 7 min each at 3 time points across 1 year. Writing samples were scored for commonly used WE-CBM metrics (e.g., correct minus incorrect word sequences; CIWS). Results suggest that nearly half the variance in WE-CBM is related to unsystematic error and that conventional screening procedures (i.e., the use of one 3-min sample) do not yield scores with adequate reliability for relative or absolute decisions about student performance. In most grades, three 3-min writing samples (or two longer-duration samples) were required for adequate reliability for relative decisions, and three 7-min writing samples would not yield adequate reliability for relative decisions about within-year student growth. Implications and recommendations are discussed.
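
    The decision-study logic behind these sample-size conclusions can be sketched as below: the relative generalizability coefficient grows as error variance is averaged over more writing samples. The variance components used are illustrative, not the study's estimates.

```python
# D-study sketch: relative reliability (G coefficient) as a function of the
# number of writing samples averaged, G = s2_person / (s2_person + s2_error/n).

def g_coefficient(var_person, var_rel_error, n_samples):
    return var_person / (var_person + var_rel_error / n_samples)

# With roughly half the variance being unsystematic error (as the study
# reports), a single sample falls well short of conventional reliability
# benchmarks; averaging more samples raises G, mirroring the study's point.
for n in (1, 2, 3):
    print(n, round(g_coefficient(var_person=0.55, var_rel_error=0.45, n_samples=n), 3))
```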

  17. Web-based collection of expert opinion on routine scalp EEG: software development and interrater reliability.

    PubMed

    Halford, Jonathan J; Pressly, William B; Benbadis, Selim R; Tatum, William O; Turner, Robert P; Arain, Amir; Pritchard, Paul B; Edwards, Jonathan C; Dean, Brian C

    2011-04-01

    Computerized detection of epileptiform transients (ETs), characterized by interictal spikes and sharp waves in the EEG, has been a research goal for the last 40 years. A reliable method for detecting ETs would assist physicians in interpretation and improve efficiency in reviewing long-term EEG recordings. Computer algorithms developed thus far for detecting ETs are not as reliable as human experts, primarily due to the large number of false-positive detections. Comparing the performance of different algorithms is difficult because each study uses its own EEG test dataset. In this article, we present EEGnet, a distributed web-based platform for the acquisition and analysis of large-scale training datasets for comparison of different EEG ET detection algorithms. The software allows EEG scorers to log in through the web, mark EEG segments of interest, and categorize them using a conventional clinical EEG user interface. The platform was used by seven board-certified academic epileptologists to score 40 short 30-second EEG segments from 40 patients, half containing ETs and half containing artifacts and normal variants. The software performance was adequate. Interrater reliability for marking the location of paroxysmal activity was low, while interrater reliability for marking artifacts and ETs was high and moderate, respectively.

  18. [Reliability theory based on quality risk network analysis for Chinese medicine injection].

    PubMed

    Li, Zheng; Kang, Li-Yuan; Fan, Xiao-Hui

    2014-08-01

    A new risk analysis method based on reliability theory is introduced in this paper for the quality risk management of Chinese medicine injection manufacturing plants. Risk events, including both cause and effect events, are represented as nodes in a Bayesian network, which transfers the risk analysis results of failure mode and effect analysis (FMEA) onto a Bayesian network platform. With its structure and parameters determined, the network can be used to evaluate system reliability quantitatively with probabilistic analytical approaches. Using network analysis tools such as GeNie and AgenaRisk, the nodes that are most critical to system reliability can be found. The importance of each node to the system can be quantitatively evaluated by calculating the effect of the node on the overall risk, and a minimization plan can be determined accordingly to reduce these influences and improve system reliability. Using the Shengmai injection manufacturing plant of SZYY Ltd as a use case, we analyzed the quality risk with both static FMEA analysis and dynamic Bayesian network analysis. The potential risk factors for the quality of Shengmai injection manufacturing were identified with the network analysis platform, and quality assurance actions were defined to reduce the risk and improve product quality.
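
    As a toy illustration of the FMEA-to-Bayesian-network translation, the sketch below evaluates a two-cause network by direct enumeration and scores the criticality of one cause by its effect on the overall defect risk. Real analyses would use tools such as GeNie or AgenaRisk; all probabilities here are invented.

```python
# Tiny hand-evaluated Bayesian network: two independent cause nodes feed one
# effect node through a conditional probability table (CPT).

p_a, p_b = 0.02, 0.05                                         # P(cause present)
cpt = {(0, 0): 0.001, (1, 0): 0.30, (0, 1): 0.20, (1, 1): 0.85}  # P(defect | A, B)

# Marginal defect probability by summing over all cause states.
p_defect = sum(cpt[(a, b)]
               * (p_a if a else 1 - p_a)
               * (p_b if b else 1 - p_b)
               for a in (0, 1) for b in (0, 1))
print(p_defect)

# "Criticality" of cause A: change in defect risk when A is forced on vs. off.
p_given_a1 = sum(cpt[(1, b)] * (p_b if b else 1 - p_b) for b in (0, 1))
p_given_a0 = sum(cpt[(0, b)] * (p_b if b else 1 - p_b) for b in (0, 1))
print(p_given_a1 - p_given_a0)
```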

  19. Interval Reliability Assessment of Power System under Epistemic Uncertainty Based on Belief UGF Method

    NASA Astrophysics Data System (ADS)

    Wang, Bolun; Wang, Yong; Ding, Ying; Li, Ming; Zhang, Cong

    2017-05-01

    Reliability assessment plays an important role in power systems. When the component failure probabilities are interval valued, common methods fail to produce reasonable interval reliability assessments of power systems. In this paper, a novel approach based on the belief universal generating function (BUGF) is proposed to calculate the reliability indexes of power systems. Instead of giving a single-valued assessment result, a belief function and a plausibility function are exploited to calculate the lower and upper bounds of the loss of load probability (LOLP), the loss of load expectation (LOLE), the expected unsupplied load (EUL), and the expected unsupplied energy (EUE) within the UGF framework. The proposed approach tracks the correlation of the original data well and keeps it to the end of the calculation. When BUGF and other common methods are used to calculate the interval LOLP, LOLE, EUL, and EUE of the IEEE-RTS 79 system, BUGF yields narrower and more accurate interval reliability indexes for the power generation system, which illustrates the effectiveness of the proposed approach.
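
    A much-simplified sketch of the interval-valued UGF idea: each unit maps capacity to a probability interval, composition sums capacities and multiplies interval bounds, and the LOLP interval is the mass on states below the load. This naive interval arithmetic ignores the belief/plausibility structure that BUGF adds; all unit data are hypothetical.

```python
from itertools import product

def compose(u1, u2):
    """Compose two units: capacities add, probability-interval bounds multiply."""
    out = {}
    for (c1, (l1, h1)), (c2, (l2, h2)) in product(u1.items(), u2.items()):
        c = c1 + c2
        lo, hi = out.get(c, (0.0, 0.0))
        out[c] = (lo + l1 * l2, hi + h1 * h2)
    return out

gen1 = {0: (0.04, 0.06), 50: (0.94, 0.96)}   # hypothetical two-state units:
gen2 = {0: (0.09, 0.11), 30: (0.89, 0.91)}   # capacity -> [p_lo, p_hi]
system = compose(gen1, gen2)

load = 60
lolp_lo = sum(l for c, (l, h) in system.items() if c < load)
lolp_hi = sum(h for c, (l, h) in system.items() if c < load)
print((lolp_lo, lolp_hi))  # interval-valued loss-of-load probability
```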

  20. Reliability-Based Analysis and Design Methods for Reinforced Concrete Protective Structures

    DTIC Science & Technology

    1993-04-01

    of the factors that contributes to airblast and resistance prediction error is assumed to be lognormally distributed. Errors in the PCDM airblast...structural resistance prediction error model is also assumed to be composed of three multiplicative factors: (1) a correction factor for actual material...material properties can be used to develop structural resistance prediction error models and reliability-based capacity factors. Prediction error models

  1. High-Confidence Compositional Reliability Assessment of SOA-Based Systems Using Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Challagulla, Venkata U. B.; Bastani, Farokh B.; Yen, I.-Ling

    Service-oriented architecture (SOA) techniques are being increasingly used for developing critical applications, especially network-centric systems. While the SOA paradigm provides flexibility and agility to better respond to changing business requirements, the task of assessing the reliability of SOA-based systems is quite challenging. Deriving high-confidence reliability estimates for mission-critical systems can require enormous cost and time. SOA systems/applications are built using either atomic or composite services as building blocks, and these services are generally assumed to be realized through reuse and logical composition of components. One approach for assessing the reliability of SOA-based systems is to apply AI reasoning techniques to dynamically collected failure data for each service and its components, used as one source of evidence together with results from random testing. The Memory-Based Reasoning technique and Bayesian Belief Networks are identified as the reasoning tools best suited to guide the prediction analysis. A framework built on this approach identifies the least-tested and “high usage” input subdomains of the service(s) and performs the necessary remedial actions depending on the predicted results.

  2. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters of other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.

  3. A Reliability and Validity of an Instrument to Evaluate the School-Based Assessment System: A Pilot Study

    ERIC Educational Resources Information Center

    Ghazali, Nor Hasnida Md

    2016-01-01

    A valid, reliable and practical instrument is needed to evaluate the implementation of the school-based assessment (SBA) system. The aim of this study is to develop and assess the validity and reliability of an instrument to measure the perception of teachers towards the SBA implementation in schools. The instrument is developed based on a…

  4. 76 FR 40722 - Granite Reliable Power, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-11

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Granite Reliable Power, LLC; Supplemental Notice That Initial Market-Based... above-referenced proceeding of Granite Reliable Power, LLC's application for market-based rate...

  5. Reliability- and performance-based robust design optimization of MEMS structures considering technological uncertainties

    NASA Astrophysics Data System (ADS)

    Martowicz, Adam; Uhl, Tadeusz

    2012-10-01

    The paper discusses the applicability of a reliability- and performance-based multi-criteria robust design optimization technique for micro-electromechanical systems, considering their technological uncertainties. Micro-devices are now commonly applied, especially in the automotive industry, since they combine a mechanical structure and an electronic control circuit on one board. Their frequent use motivates the development of virtual prototyping tools that can be applied in design optimization while accounting for technological uncertainties and reliability. The authors present a procedure for the optimization of micro-devices based on the theory of reliability-based robust design optimization, which considers both the performance of a micro-device and its reliability, assessed by means of uncertainty analysis. The procedure assumes that, for each checked design configuration, the assessment of uncertainty propagation is performed with a meta-modeling technique. The procedure is illustrated with the optimization of a finite element model of a micro-mirror. The multi-physics approach allowed several physical phenomena to be introduced to correctly model the electrostatic actuation and the squeezing effect present between electrodes. The optimization was preceded by a sensitivity analysis to establish the design and uncertainty domains. Genetic algorithms fulfilled the defined optimization task effectively: the best individuals found are characterized by a minimized value of the multi-criteria objective function while satisfying the constraint on material strength. The restriction on the maximum equivalent stresses was introduced through a conditionally formulated objective function with a penalty component. The results were successfully verified with a global uniform search through the input design domain.

  6. A genetic algorithm approach for assessing soil liquefaction potential based on reliability method

    NASA Astrophysics Data System (ADS)

    Bagheripour, M. H.; Shooshpasha, I.; Afzalirad, M.

    2012-02-01

    Deterministic approaches are unable to account for variations in soil strength properties and earthquake loads, or for sources of error in evaluations of liquefaction potential in sandy soils, which makes them questionable compared with reliability-based concepts. Furthermore, deterministic approaches are incapable of precisely relating the probability of liquefaction to the factor of safety (FS). Therefore, probabilistic approaches, and especially reliability analysis, are considered, since a complementary solution is needed to reach better engineering decisions. In this study, the Advanced First-Order Second-Moment (AFOSM) technique, associated with a genetic algorithm (GA) and its corresponding optimization techniques, is used to calculate the reliability index and the probability of liquefaction. The use of a GA provides a reliable mechanism suitable for computer programming and fast convergence. A new relation is developed by which the liquefaction potential can be directly calculated from the estimated probability of liquefaction (P_L), the cyclic stress ratio (CSR), and normalized standard penetration test (SPT) blow counts, with a mean error of less than 10% relative to the observational data. The validity of the proposed concept is examined by comparing the results obtained from the new relation with those predicted by other investigators. A further advantage of the proposed relation is that it relates P_L and FS, and hence allows decision making based on liquefaction risk while retaining the use of deterministic approaches. This could be beneficial to geotechnical engineers who use the common FS methods for liquefaction evaluation. As an application, the city of Babolsar, located on the southern coast of the Caspian Sea, is investigated for liquefaction potential. The investigation is based primarily on in situ tests in which the results of SPT are analysed.

  7. Reliability-based structural optimization using response surface approximations and probabilistic sufficiency factor

    NASA Astrophysics Data System (ADS)

    Qu, Xueyong

    Uncertainties exist practically everywhere, from structural design to manufacturing, product lifetime service, and maintenance. Uncertainties can be introduced by errors in modeling and simulation; by manufacturing imperfections (such as variability in material properties and structural geometric dimensions); and by variability in loading. Structural design by safety factors using nominal values without considering uncertainties may lead to designs that are either unsafe, or too conservative and thus not efficient. The focus of this dissertation is reliability-based design optimization (RBDO) of composite structures. Uncertainties are modeled by the probabilistic distributions of random variables. Structural reliability is evaluated in terms of the probability of failure, and RBDO minimizes cost, such as structural weight, subject to reliability constraints. Since engineering structures usually have multiple failure modes, Monte Carlo simulation (MCS) was employed to calculate the system probability of failure. Response surface (RS) approximation techniques were used to address the difficulties associated with MCS: the high computational cost of a large number of MCS samples was alleviated by the analysis RS, and numerical noise in the MCS results was filtered out by the design RS. RBDO of composite laminates is investigated for use in hydrogen tanks in cryogenic environments. The major challenge is to reduce the large residual strains that develop due to thermal mismatch between matrix and fibers while maintaining the load-carrying capacity. RBDO is performed to provide laminate designs, quantify the effects of uncertainties on the optimum weight, and identify the parameters that have the largest influence on the optimum design. Studies of weight and reliability tradeoffs indicate that the most cost-effective measure for reducing weight and increasing reliability is quality control. A probabilistic sufficiency factor (PSF) approach was developed to improve the computational efficiency.
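
    The RS-plus-MCS pattern described above can be sketched as follows: fit a second-order response surface to a handful of limit-state evaluations, then run a large, cheap Monte Carlo on the surrogate. The limit state here is an invented stand-in for the expensive structural analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Stand-in for an expensive limit state; failure when g < 0."""
    return 5.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

def feats(x):
    """Second-order polynomial features: 1, x1, x2, x1^2, x2^2, x1*x2."""
    return np.column_stack([np.ones(len(x)), x, x ** 2, x[:, 0] * x[:, 1]])

# Fit the response surface on a small design of experiments (30 "expensive" runs).
X = rng.normal(size=(30, 2))
coef, *_ = np.linalg.lstsq(feats(X), g(X), rcond=None)

# Cheap Monte Carlo on the surrogate: one million samples in milliseconds.
S = rng.normal(size=(1_000_000, 2))
pf = np.mean(feats(S) @ coef < 0.0)
print(pf)  # estimated probability of failure
```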

  8. Reliability and validity of the NeuroCognitive Performance Test, a web-based neuropsychological assessment.

    PubMed

    Morrison, Glenn E; Simone, Christa M; Ng, Nicole F; Hardy, Joseph L

    2015-01-01

    The NeuroCognitive Performance Test (NCPT) is a brief, repeatable, web-based cognitive assessment platform that measures performance across several cognitive domains. The NCPT platform is modular and includes 18 subtests that can be arranged into customized batteries. Here we present normative data from a sample of 130,140 healthy volunteers for an NCPT battery consisting of 8 subtests. Participants took the NCPT remotely and without supervision. Factor structure and effects of age, education, and gender were evaluated with this normative dataset. Test-retest reliability was evaluated in a subset of participants who took the battery again an average of 78.8 days later. The eight NCPT subtests group into 4 putative cognitive domains, have adequate to good test-retest reliability, and are sensitive to expected age- and education-related cognitive effects. Concurrent validity to standard neuropsychological tests was demonstrated in 73 healthy volunteers. In an exploratory analysis the NCPT battery could differentiate those who self-reported Mild Cognitive Impairment or Alzheimer's disease from matched healthy controls. Overall these results demonstrate the reliability and validity of the NCPT battery as a measure of cognitive performance and support the feasibility of web-based, unsupervised testing, with potential utility in clinical and research settings.

  9. Reliability and validity of the NeuroCognitive Performance Test, a web-based neuropsychological assessment

    PubMed Central

    Morrison, Glenn E.; Simone, Christa M.; Ng, Nicole F.; Hardy, Joseph L.

    2015-01-01

    The NeuroCognitive Performance Test (NCPT) is a brief, repeatable, web-based cognitive assessment platform that measures performance across several cognitive domains. The NCPT platform is modular and includes 18 subtests that can be arranged into customized batteries. Here we present normative data from a sample of 130,140 healthy volunteers for an NCPT battery consisting of 8 subtests. Participants took the NCPT remotely and without supervision. Factor structure and effects of age, education, and gender were evaluated with this normative dataset. Test-retest reliability was evaluated in a subset of participants who took the battery again an average of 78.8 days later. The eight NCPT subtests group into 4 putative cognitive domains, have adequate to good test-retest reliability, and are sensitive to expected age- and education-related cognitive effects. Concurrent validity to standard neuropsychological tests was demonstrated in 73 healthy volunteers. In an exploratory analysis the NCPT battery could differentiate those who self-reported Mild Cognitive Impairment or Alzheimer's disease from matched healthy controls. Overall these results demonstrate the reliability and validity of the NCPT battery as a measure of cognitive performance and support the feasibility of web-based, unsupervised testing, with potential utility in clinical and research settings. PMID:26579035

  10. Summary of Research on Reliability Criteria-Based Flight System Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Belcastro, Christine (Technical Monitor)

    2002-01-01

    This paper presents research on the reliability assessment of adaptive flight control systems. The topics include: 1) Overview of Project Focuses; 2) Reliability Analysis; and 3) Design for Reliability. This paper is presented in viewgraph form.

  11. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram; the efficiency of the search algorithm stems in part from its consideration of only link failures. The method also includes a novel quantification scheme that likewise reduces the computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized because combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations, using a technique of assuming node failures to occur on only one side of a break in the network and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
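
    For a correctness reference on small graphs, all-terminal reliability can be computed by brute-force enumeration of link up/down states, as sketched below; the patented cut-set approach exists precisely to avoid this exponential enumeration. The topology and link availabilities are hypothetical.

```python
from itertools import product

def connected(n_nodes, up_edges):
    """Union-find connectivity test over the links that are up."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for u, v in up_edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n_nodes)}) == 1

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]  # hypothetical 4-node topology
p_up = [0.99, 0.95, 0.99, 0.90]           # per-link availabilities

# Sum the probability of every link state in which the network stays connected.
rel = 0.0
for state in product([0, 1], repeat=len(edges)):
    prob = 1.0
    for s, p in zip(state, p_up):
        prob *= p if s else (1.0 - p)
    if connected(4, [e for e, s in zip(edges, state) if s]):
        rel += prob
print(rel)  # all-terminal reliability
```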

  12. Accounting for Proof Test Data in a Reliability Based Design Optimization Framework

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Scotti, Stephen J.

    2012-01-01

    This paper investigates the use of proof (or acceptance) test data during the reliability based design optimization of structural components. It is assumed that every component will be proof tested and that the component will only enter into service if it passes the proof test. The goal is to reduce the component weight, while maintaining high reliability, by exploiting the proof test results during the design process. The proposed procedure results in the simultaneous design of the structural component and the proof test itself and provides the designer with direct control over the probability of failing the proof test. The procedure is illustrated using two analytical example problems and the results indicate that significant weight savings are possible when exploiting the proof test results during the design process.

  13. Differential Evolution Based Intelligent System State Search Method for Composite Power System Reliability Evaluation

    NASA Astrophysics Data System (ADS)

    Bakkiyaraj, Ashok; Kumarappan, N.

    2015-09-01

    This paper presents a new approach for evaluating the reliability indices of a composite power system that adopts a binary differential evolution (BDE) algorithm in the search mechanism used to select system states. These states, also called dominant states, have large state probabilities and the higher load curtailments necessary to maintain the real power balance. A chromosome of the BDE algorithm represents a system state. BDE is not applied in its traditional role of optimizing a nonlinear objective function; instead, it is used as a tool for exploring a larger number of dominant states by producing new chromosomes, mutant vectors, and trial vectors based on the fitness function. The searched system states are used to evaluate annualized system and load-point reliability indices. The proposed search methodology is applied to the RBTS and IEEE-RTS test systems and the results are compared with other approaches. The approach evaluates indices similar to those of existing methods while analyzing fewer system states.

  14. Predictive models of safety based on audit findings: Part 1: Model development and reliability.

    PubMed

    Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor

    2013-03-01

    This two-part study aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work because both audits and safety outcomes are currently prescribed and regulated there. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for the different tiers of error categories. This showed sufficient reliability to proceed with predictive validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
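
    The agreement statistic used here is Cohen's kappa; a minimal implementation for two raters is sketched below, with hypothetical error-category labels.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical error-category assignments by two raters:
rater1 = ["skill", "rule", "skill", "knowledge", "rule", "skill"]
rater2 = ["skill", "rule", "rule", "knowledge", "rule", "skill"]
print(cohens_kappa(rater1, rater2))  # ~0.74
```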

  15. A reliable and sensitive bead-based fluorescence assay for identification of nucleic acid sequences

    NASA Astrophysics Data System (ADS)

    Klamp, Tobias; Yahiatène, Idir; Lampe, André; Schüttpelz, Mark; Sauer, Markus

    2011-03-01

    The sensitive and rapid detection of pathogenic DNA is of tremendous importance in the field of diagnostics. We demonstrate the detection and quantification of single- and double-stranded pathogenic DNA with picomolar sensitivity in a bead-based fluorescence assay. Selecting appropriate capture and detection sequences enables rapid (2 h) and reliable DNA quantification. We show that synthetic sequences of S. pneumoniae and M. luteus can be quantified in very small sample volumes (20 μL) across a linear detection range spanning four orders of magnitude, from 1 nM to 1 pM, using a miniaturized wide-field fluorescence microscope and without amplification steps. The method offers single-molecule detection sensitivity without complex setups and thus serves as a simple, robust, and reliable method for the sensitive detection of DNA and RNA sequences.

  16. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction for components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e., mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most distinctive: a step to efficiently characterize and quantify the system-driven local parameter space, and a subsequent step that uses this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information about the responses, these models combine statistical information about the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter, and even subsets of parameters representing entire contributing analyses, can now be rank ordered.

  17. Analyzing Reliability and Performance Trade-Offs of HLS-Based Designs in SRAM-Based FPGAs Under Soft Errors

    NASA Astrophysics Data System (ADS)

    Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.

    2017-02-01

    The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs that can serve the needs of safety-critical applications requiring both high performance and high reliability. However, a reliability evaluation of HLS-based designs under soft errors had not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.

  18. Reliability Evaluation of Base-Metal-Electrode Multilayer Ceramic Capacitors for Potential Space Applications

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang); Sampson, Michael J.

    2011-01-01

    Base-metal-electrode (BME) ceramic capacitors are being investigated for possible use in high-reliability space-level applications. This paper focuses on how the construction and microstructure of BME capacitors affect their lifetime and reliability. Examination of the construction and microstructure of commercial off-the-shelf (COTS) BME capacitors reveals great variance in dielectric layer thickness, even among BME capacitors with the same rated voltage. Compared to PME (precious-metal-electrode) capacitors, BME capacitors exhibit a denser and more uniform microstructure, with an average grain size between 0.3 and 0.5 μm, which is much smaller than that of most PME capacitors. BME capacitors can be fabricated with more internal electrode layers and thinner dielectric layers than PME capacitors because they have a fine-grained microstructure and do not shrink much during ceramic sintering. This makes it possible for BME capacitors to achieve a very high capacitance volumetric efficiency. The reliability of BME and PME capacitors was investigated using highly accelerated life testing (HALT). Most BME capacitors were found to fail with an early avalanche breakdown, followed by a regular dielectric wearout failure during the HALT test. When most of the early failures characterized by avalanche breakdown were removed, BME capacitors exhibited a minimum mean time-to-failure (MTTF) of more than 10^5 years at room temperature and rated voltage. Dielectric thickness was found to be a critical parameter for the reliability of BME capacitors, and the number of stacked grains in a dielectric layer appears to play a significant role in determining that reliability. Although dielectric layer thickness varies for a given rated voltage in BME capacitors, the number of stacked grains is relatively consistent, typically around 12 for a number of BME capacitors with a rated voltage of 25 V. This may suggest that the number of grains per dielectric layer is more critical to reliability than the dielectric thickness itself.

  19. Physics-based process modeling, reliability prediction, and design guidelines for flip-chip devices

    NASA Astrophysics Data System (ADS)

    Michaelides, Stylianos

    -down devices without the underfill, based on a thorough understanding of the failure modes. Practical design guidelines for material, geometry, and process parameters for reliable flip-chip devices have also been developed.

  20. sLORETA allows reliable distributed source reconstruction based on subdural strip and grid recordings.

    PubMed

    Dümpelmann, Matthias; Ball, Tonio; Schulze-Bonhage, Andreas

    2012-05-01

    Source localization based on invasive recordings with subdural strip and grid electrodes is a topic of increasing interest. This simulation study addresses the question of which factors are relevant for reliable distributed source reconstruction based on sLORETA. The MRI and electrode positions of a patient undergoing invasive presurgical epilepsy diagnostics formed the basis of the sLORETA simulations. A boundary element head model derived from the MRI was used for the simulation of electrical potentials and for source reconstruction. Focal dipolar sources distributed on a regular three-dimensional lattice, as well as spatiotemporally distributed patches, served as inputs for the simulation. In addition to the distance between original and reconstructed source maxima, the activation volume of the reconstruction and the correlation of time courses between the original and reconstructed sources were investigated. The simulations were supplemented by localization of the patient's spike activity. For noise-free simulated data, sLORETA achieved results with zero localization error. Added noise reduced the percentage of reliable source localizations (localization error ≤15 mm) to 67.8%. Only for source positions close to the electrode contacts did the activation volume correctly represent focal generators. Time courses of original and reconstructed sources were significantly correlated, and the case-study results showed accurate localization. sLORETA is a distributed source model that can be applied for reliable grid- and strip-based source localization; for distant source positions, overestimation of the extent of the generator has to be taken into account. sLORETA-based source reconstruction has the potential to improve the localization of distributed generators in presurgical epilepsy diagnostics and cognitive neuroscience. Copyright © 2011 Wiley-Liss, Inc.

  1. The validity and interrater reliability of video-based posture observation during asymmetric lifting tasks.

    PubMed

    Xu, Xu; Chang, Chien-Chi; Faber, Gert S; Kingma, Idsart; Dennerlein, Jack T

    2011-08-01

    The objective was to evaluate the validity and interrater reliability of a video-based posture observation method for the major body segment angles during asymmetric lifting tasks. Observational methods have been widely used as an awkward-posture assessment tool for ergonomics studies. Previous research proposed a video-based posture observation method with estimation of major segment angles during lifting tasks. However, it was limited to symmetric lifting tasks. The current study extended this method to asymmetric lifting tasks and investigated the validity and the interrater reliability. Various asymmetric lifting tasks were performed in a laboratory while a side-view video camera recorded the lift, and the body segment angles were measured directly by a motion tracking system. For this study, 10 raters estimated seven major segment angles using a customized program that played back the video recording, thus allowing users to enter segment angles. The validity of estimated segment angles was evaluated in relation to measured segment angles. Interrater reliability was assessed among the raters. For all the segment angles except trunk lateral bending, the estimated segment angles were strongly correlated with the measured segment angles (r > .8), and the intraclass correlation coefficient was greater than 0.75. The proposed observational method was able to provide a robust estimation of major segment angles for asymmetric lifting tasks based on side-view video clips. The estimated segment angles were consistent among raters. This method can be used for assessing posture during asymmetric lifting tasks. It also supports developing a video-based rapid joint loading estimation method.

  2. WEAMR — A Weighted Energy Aware Multipath Reliable Routing Mechanism for Hotline-Based WSNs

    PubMed Central

    Tufail, Ali; Qamar, Arslan; Khan, Adil Mehmood; Baig, Waleed Akram; Kim, Ki-Hyung

    2013-01-01

    Reliable source-to-sink communication is the most important requirement for an efficient routing protocol, especially in military, healthcare, and disaster recovery applications. We present weighted energy-aware multipath reliable routing (WEAMR), a novel energy-aware multipath routing protocol that utilizes hotline-assisted routing to meet such requirements for mission-critical applications. The protocol reduces the average number of hops from source to destination and provides unmatched reliability compared to well-known reactive ad hoc protocols, i.e., AODV and AOMDV. Our protocol makes efficient use of network paths based on a weighted cost calculation and intelligently selects the best possible paths for data transmission. The path cost calculation considers the end-to-end number of hops, latency, and the minimum node energy value in the path. In case of path failure, path recalculation is done efficiently with minimal latency and control packet overhead. Our evaluation shows that the proposal provides better end-to-end delivery with less routing overhead and a higher packet delivery success ratio compared to AODV and AOMDV. The use of multipath routing also increases the overall lifetime of the WSN by using optimum-energy available paths between sender and receiver. PMID:23669714

  3. Methods for assessing the reliability of quality of life based on SF-36.

    PubMed

    Pan, Yi; Barnhart, Huiman X

    2016-12-30

    The 36-Item Short Form Health Survey (SF-36) has been widely used to measure quality of life. Reliability has traditionally been assessed by the intraclass correlation coefficient (ICC), which is theoretically equivalent to Cronbach's alpha. However, the ICC is a scaled assessment of reliability and does not indicate the extent of the differences due to measurement error. In this paper, the total deviation index (TDI) is used to interpret the magnitude of measurement error for the SF-36, and a new formula for computing the TDI for the average item score is proposed. The interpretation based on the TDI is simple and intuitive: it provides, with a high probability, the expected difference that is due to measurement error. We also show that a high ICC does not always correspond to a small magnitude of measurement error, which indicates that the ICC can sometimes provide a false sense of high reliability. The methodology is illustrated with reported SF-36 data from the literature and with real data from the Arthritis Self-Management Program. Copyright © 2016 John Wiley & Sons, Ltd.
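
    To make the ICC/TDI contrast concrete, the sketch below computes a variance-components ICC and a nonparametric TDI (the p-th percentile of absolute test-retest differences) from paired scores. The data and the simple two-measurement decomposition are our illustrative assumptions, not the paper's new formula.

```python
import numpy as np

def icc_and_tdi(x1, x2, p=0.90):
    """ICC from variance components and nonparametric TDI for paired scores."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    d = x1 - x2
    within_var = np.var(d, ddof=1) / 2.0                 # error variance per score
    means = (x1 + x2) / 2.0
    between_var = np.var(means, ddof=1) - within_var / 2.0  # subject variance
    icc = between_var / (between_var + within_var)
    tdi = np.percentile(np.abs(d), 100 * p)  # difference not exceeded with prob. p
    return icc, tdi

# Hypothetical test-retest scores: a high ICC can coexist with a TDI large
# enough to matter clinically, which is the paper's central caution.
x1 = [70, 55, 80, 62, 91, 45, 66, 73]
x2 = [68, 58, 79, 60, 88, 47, 69, 71]
print(icc_and_tdi(x1, x2))
```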

  4. Probabilistic and structural reliability analysis of laminated composite structures based on the IPACS code

    NASA Technical Reports Server (NTRS)

    Sobel, Larry; Buttitta, Claudio; Suarez, James

    1993-01-01

    Probabilistic predictions based on the Integrated Probabilistic Assessment of Composite Structures (IPACS) code are presented for the material and structural response of unnotched and notched, IM6/3501-6 Gr/Ep laminates. Comparisons of predicted and measured modulus and strength distributions are given for unnotched unidirectional, cross-ply, and quasi-isotropic laminates. The predicted modulus distributions were found to correlate well with the test results for all three unnotched laminates. Correlations of strength distributions for the unnotched laminates are judged good for the unidirectional laminate and fair for the cross-ply laminate, whereas the strength correlation for the quasi-isotropic laminate is deficient because IPACS did not yet have a progressive failure capability. The paper also presents probabilistic and structural reliability analysis predictions for the strain concentration factor (SCF) for an open-hole, quasi-isotropic laminate subjected to longitudinal tension. A special procedure was developed to adapt IPACS for the structural reliability analysis. The reliability results show the importance of identifying the most significant random variables upon which the SCF depends, and of having accurate scatter values for these variables.

  6. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
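
    A minimal sketch of the underlying idea, assuming a first-passage probability P(max_t X_t > b) for a linear SDE dX = -aX dt + s dW: simulate under a shifted drift and correct with the Girsanov likelihood ratio. The constant shift u below is a crude stand-in for the FORM-style optimal control discussed in the abstract, and all parameter values are invented.

```python
# Girsanov-transformation importance sampling for a rare first-passage event.
# Plain Monte Carlo (u = 0) typically returns 0 here, motivating the shift.
import numpy as np

rng = np.random.default_rng(1)
a, s, b = 1.0, 1.0, 3.0            # mean reversion, noise level, threshold
T, n, paths = 1.0, 200, 20000
dt = T / n

def estimate(u):
    """IS estimate of P(max X > b) under added drift u (u = 0: plain MC)."""
    x = np.zeros(paths)
    log_lr = np.zeros(paths)       # log Girsanov likelihood ratio dP/dQ
    hit = np.zeros(paths, dtype=bool)
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), paths)   # Brownian increment under Q
        x += (-a * x + s * u) * dt + s * dw        # dynamics under shifted law
        log_lr += -u * dw - 0.5 * u**2 * dt        # accumulate dP/dQ correction
        hit |= x > b
    return np.mean(hit * np.exp(log_lr))

print("plain MC   :", estimate(0.0))
print("Girsanov IS:", estimate(3.0))
```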

  7. Reliability of trunk shape measurements based on 3-D surface reconstructions

    PubMed Central

    Cheriet, Farida; Dansereau, Jean; Ronsky, Janet; Zernicke, Ronald F.; Labelle, Hubert

    2007-01-01

    This study aimed to estimate the reliability of 3-D trunk surface measurements for the characterization of external asymmetry associated with scoliosis. Repeated trunk surface acquisitions using the Inspeck system (Inspeck Inc., Montreal, Canada), with two different postures, A (anatomical position) and B (“clavicle” position), were obtained from patients attending a scoliosis clinic. For each acquisition, a 3-D model of the patient’s trunk was built and a series of measurements was computed. For each measure and posture, intraclass correlation coefficients (ICC) were obtained using a bivariate analysis of variance, and the smallest detectable difference was calculated. For posture A, reliability was fair to excellent with ICC from 0.91 to 0.99 (0.85 to 0.99 for the lower bound of the 95% confidence interval). For posture B, the ICC was 0.85 to 0.98 (0.74 to 0.99 for the lower bound of the 95% confidence interval). The smallest statistically significant differences were 2.5° for the maximal back surface rotation and 1.5° for the maximal trunk rotation. Apparent global asymmetry and axial trunk rotation indices were relatively robust to changes in arm posture, both in terms of mean values and within-subject variations, and also showed good reliability. Computing measurements from cross-sectional analysis enabled a reduction in errors compared to measurements based on markers’ positions. Although not yet sensitive enough to detect small changes for monitoring of natural curve progression, trunk surface analysis can help to document the external asymmetry associated with different types of spinal curves as well as the cosmetic improvement obtained after surgical interventions. The anatomical posture is slightly more reliable as it allows better coverage of the trunk surface by the digitizing system. PMID:17701228
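
    The smallest detectable difference reported in such reliability studies is commonly derived from the ICC via the standard error of measurement (SEM = SD·sqrt(1-ICC), SDD = 1.96·sqrt(2)·SEM). A short sketch with illustrative numbers, not the study's data:

```python
# Link between ICC and the smallest detectable difference (SDD).
import math

def smallest_detectable_difference(sd, icc):
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    return 1.96 * math.sqrt(2.0) * sem     # 95% threshold for a real change

print(f"{smallest_detectable_difference(sd=4.0, icc=0.95):.2f} degrees")
```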

  8. Interrater Reliability of the Power Mobility Road Test in the Virtual Reality-Based Simulator-2.

    PubMed

    Kamaraj, Deepan C; Dicianno, Brad E; Mahajan, Harshal P; Buhari, Alhaji M; Cooper, Rory A

    2016-07-01

    To assess interrater reliability of the Power Mobility Road Test (PMRT) when administered through the Virtual Reality-based SIMulator-version 2 (VRSIM-2). Within-subjects repeated-measures design. Participants interacted with VRSIM-2 through 2 display options (desktop monitor vs immersive virtual reality screens) using 2 control interfaces (roller system vs conventional movement-sensing joystick), providing 4 different driving scenarios (driving conditions 1-4). Participants performed 3 virtual driving sessions for each of the 2 display screens and 1 session through a real-world driving course (driving condition 5). The virtual PMRT was conducted in a simulated indoor office space, and an equivalent course was charted in an open space for the real-world assessment. After every change in driving condition, participants completed a self-reported workload assessment questionnaire, the Task Load Index, developed by the National Aeronautics and Space Administration. A convenience sample of electric-powered wheelchair (EPW) athletes (N=21) recruited at the 31st National Veterans Wheelchair Games. Not applicable. Total composite PMRT score. The PMRT had high interrater reliability (intraclass correlation coefficient [ICC]>.75) between the 2 raters in all 5 driving conditions. Post hoc analyses revealed that the reliability analyses had >80% power to detect high ICCs in driving conditions 1 and 4. The PMRT has high interrater reliability in conditions 1 and 4 and could be used to assess EPW driving performance virtually in VRSIM-2. However, further psychometric assessment is necessary to assess the feasibility of administering the PMRT using the different interfaces of VRSIM-2. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  9. Quantified Risk Ranking Model for Condition-Based Risk and Reliability Centered Maintenance

    NASA Astrophysics Data System (ADS)

    Chattopadhyaya, Pradip Kumar; Basu, Sushil Kumar; Majumdar, Manik Chandra

    2017-06-01

    In the recent past, the risk and reliability centered maintenance (RRCM) framework was introduced with a shift in the methodological focus from reliability and probabilities (expected values) to reliability, uncertainty and risk. In this paper, the authors explain a novel methodology for quantifying risk and ranking the critical items to prioritize maintenance actions on the basis of condition-based risk and reliability centered maintenance (CBRRCM). The critical items are identified through criticality analysis of the risk priority number (RPN) values of the items of a system, and the maintenance significant precipitating factors (MSPF) of the items are evaluated. The criticality of risk is assessed using three risk coefficients. The likelihood risk coefficient treats the probability as a fuzzy number. The abstract risk coefficient deduces risk influenced by uncertainty and sensitivity, besides other factors. The third risk coefficient, the hazardous risk coefficient, covers hazards anticipated to occur in the future; its risk is deduced from criteria of consequences on safety, environment, maintenance and economic risks, with corresponding costs for the consequences. The characteristic values of all three risk coefficients are obtained with a particular test. With a few more tests on the system, the values may change significantly within the controlling range of each coefficient; hence 'random number simulation' is used to obtain one distinctive value for each coefficient. The risk coefficients are statistically added to obtain the final risk coefficient of each critical item, and the final rankings of the critical items are then estimated. The prioritization in ranking of critical items using the developed mathematical model for risk assessment should be useful in optimizing financial losses and the timing of maintenance actions.
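
    A toy sketch of the 'random number simulation' step: each risk coefficient is sampled within its controlling range, reduced to one distinctive value, and the three coefficients are added to rank items. The ranges, item names and uniform sampling are assumptions for illustration.

```python
# Reduce each risk coefficient to one distinctive value by simulation,
# then add the three coefficients per item and rank.
import numpy as np

rng = np.random.default_rng(42)

def distinctive_value(low, high, n=10000):
    """One representative value per coefficient from repeated sampling."""
    return rng.uniform(low, high, n).mean()

items = {
    # item: (likelihood range, abstract range, hazardous range)
    "bearing": ((0.5, 0.8), (0.2, 0.4), (0.3, 0.6)),
    "gearbox": ((0.3, 0.6), (0.4, 0.7), (0.2, 0.5)),
    "coupling": ((0.1, 0.3), (0.1, 0.2), (0.1, 0.4)),
}
final = {k: sum(distinctive_value(*r) for r in ranges)
         for k, ranges in items.items()}
for item, risk in sorted(final.items(), key=lambda kv: -kv[1]):
    print(f"{item:10s} final risk coefficient = {risk:.2f}")
```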

  10. A reliable transmission protocol for ZigBee-based wireless patient monitoring.

    PubMed

    Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung

    2012-01-01

    Patient monitoring systems are gaining importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, existing systems usually use broadcast or multicast schemes to increase the reliability of signal transmission; however, both schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as the destination, to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of this reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next-generation wireless wide area network technology, worldwide interoperability for microwave access (WiMAX), to achieve real-time patient monitoring.
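
    The anycast selection the abstract describes amounts to picking the group member with the smallest hop distance from the sensing node. A minimal sketch on a hypothetical topology:

```python
# Choose the nearest receiver in an anycast group by hop count.
import heapq

def hop_counts(adj, src):
    """Dijkstra with unit edge weights (plain BFS would do equally well)."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            if d + 1 < dist.get(v, float("inf")):
                dist[v] = d + 1
                heapq.heappush(heap, (d + 1, v))
    return dist

adj = {"patient": ["r1"], "r1": ["patient", "r2", "sinkA"],
       "r2": ["r1", "sinkB"], "sinkA": ["r1"], "sinkB": ["r2"]}
anycast_group = ["sinkA", "sinkB"]
d = hop_counts(adj, "patient")
print(min(anycast_group, key=lambda s: d.get(s, float("inf"))))  # -> sinkA
```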

  12. Reliability of team-based self-monitoring in critical events: a pilot study

    PubMed Central

    2013-01-01

    Background Teamwork is a critical component during critical events. Assessment is mandatory for remediation and to target training programmes for observed performance gaps. Methods The primary purpose was to test the feasibility of team-based self-monitoring of crisis resource management with a validated teamwork assessment tool. A secondary purpose was to assess item-specific reliability and content validity in order to develop a modified context-optimised assessment tool. We conducted a prospective, single-centre study to assess team-based self-monitoring of teamwork after in-situ inter-professional simulated critical events by comparison with an assessment by observers. The Mayo High Performance Teamwork Scale (MHPTS) was used as the assessment tool, with evaluation of internal consistency, item-specific consensus estimates for agreement between participating teams and observers, and content validity. Results 105 participants and 58 observers completed the MHPTS after a total of 16 simulated critical events over 8 months. Summative internal consistency of the MHPTS, calculated as Cronbach’s alpha, was acceptable with 0.712 for observers and 0.710 for participants. The overall consensus estimate for dichotomous data (agreement/non-agreement) was 0.62 (Cohen’s kappa; IQ range 0.31-0.87). 6/16 items had excellent (kappa > 0.8) and 3/16 good (kappa > 0.6) reliability. Short questions concerning easy-to-observe behaviours were more likely to be reliable. The MHPTS was modified using a threshold for good reliability of kappa > 0.6. The result is a 9-item self-assessment tool (TeamMonitor) with a calculated median kappa of 0.86 (IQ range: 0.67-1.0) and good content validity. Conclusions Team-based self-monitoring with the MHPTS to assess team performance during simulated critical events is feasible. A context-based modification of the tool is achievable with good internal consistency and content validity. Further studies are needed to investigate if team-based
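
    The item-reduction step (retain items with kappa > 0.6) can be sketched as below: a per-item Cohen's kappa is computed between team self-ratings and observer ratings on dichotomous data. The data here are synthetic, not the study's.

```python
# Per-item Cohen's kappa filtering on synthetic agreement data.
import numpy as np

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)    # chance agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(7)
n_events, n_items = 16, 16
observer = rng.integers(0, 2, (n_events, n_items))
# teams agree with observers ~85% of the time in this synthetic setup
team = np.where(rng.random((n_events, n_items)) < 0.85, observer, 1 - observer)

kept = [i for i in range(n_items)
        if cohens_kappa(team[:, i], observer[:, i]) > 0.6]
print("items retained:", kept)
```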

  13. Advanced System-Level Reliability Analysis and Prediction with Field Data Integration

    DTIC Science & Technology

    2011-09-01

    innovative life prediction methodologies that incorporate emerging probabilistic lifing techniques as well as advanced physics-of-failure...often based on simplifying assumptions and their predictions may suffer from different sources of uncertainty. For instance, one source of...system level, most modeling approaches focus on life prediction for single components and fail to account for the interdependencies that may result

  14. Validity and Reliability of an IMU-based Method to Detect APAs Prior to Gait Initiation

    PubMed Central

    Chiari, Lorenzo; Holmstrom, Lars; Salarian, Arash; Horak, Fay B.

    2015-01-01

    Anticipatory postural adjustments (APAs) prior to gait initiation have been largely studied in traditional laboratory settings using force plates under the feet to characterize the displacement of the center of pressure. However, clinical trials and clinical practice would benefit from a portable, inexpensive method for characterizing APAs. Therefore, the main objectives of this study were: 1) to develop a novel, automatic IMU-based method to detect and characterize APAs during gait initiation and 2) to measure its test-retest reliability. Experiment I was carried out in the laboratory to determine the validity of the IMU-based method in ten subjects with PD (OFF medication) and 12 control subjects. Experiment II was carried out in the clinic, to determine test-retest reliability of the IMU-based method in a different set of 17 early-to-moderate, treated subjects with PD (tested ON medication) and 17 age-matched control subjects. Results showed that gait initiation characteristics (both APAs and 1st step) detected with our novel method were significantly correlated to the characteristics calculated with a force plate and motion analysis system. The size of APAs measured with either inertial sensors or force plate was significantly smaller in subjects with PD than in control subjects (p<0.05). Test-retest reliability for the gait initiation characteristics measured with inertial sensors was moderate-to-excellent (.56

  15. Validity and reliability of an IMU-based method to detect APAs prior to gait initiation.

    PubMed

    Mancini, Martina; Chiari, Lorenzo; Holmstrom, Lars; Salarian, Arash; Horak, Fay B

    2016-01-01

    Anticipatory postural adjustments (APAs) prior to gait initiation have been largely studied in traditional laboratory settings using force plates under the feet to characterize the displacement of the center of pressure. However, clinical trials and clinical practice would benefit from a portable, inexpensive method for characterizing APAs. Therefore, the main objectives of this study were (1) to develop a novel, automatic IMU-based method to detect and characterize APAs during gait initiation and (2) to measure its test-retest reliability. Experiment I was carried out in the laboratory to determine the validity of the IMU-based method in 10 subjects with PD (OFF medication) and 12 control subjects. Experiment II was carried out in the clinic, to determine test-retest reliability of the IMU-based method in a different set of 17 early-to-moderate, treated subjects with PD (tested ON medication) and 17 age-matched control subjects. Results showed that gait initiation characteristics (both APAs and 1st step) detected with our novel method were significantly correlated to the characteristics calculated with a force plate and motion analysis system. The size of APAs measured with either inertial sensors or force plate was significantly smaller in subjects with PD than in control subjects (p<0.05). Test-retest reliability for the gait initiation characteristics measured with inertial sensors was moderate-to-excellent (0.56

  16. How to Characterize the Reliability of Ceramic Capacitors with Base-Metal Electrodes (BMEs)

    NASA Technical Reports Server (NTRS)

    Liu, David (Donhang)

    2015-01-01

    The reliability of an MLCC device is the product of a time-dependent part and a time-independent part: 1) the time-dependent part is a statistical distribution; 2) the time-independent part is the reliability at t=0, the initial reliability. Initial reliability depends only on how a BME MLCC is designed and processed. Similar to the way the minimum dielectric thickness ensured the long-term reliability of a PME MLCC, the initial reliability also ensures the long-term reliability of a BME MLCC. This presentation shows new discoveries regarding commonalities and differences between PME and BME capacitor technologies.
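
    A hedged reading of this decomposition is R(t) = R0 · S(t): a time-independent initial reliability multiplied by a time-dependent survival term, taken here to be Weibull purely for illustration (the R0, eta and beta values below are invented):

```python
# R(t) = R0 * S(t) with an assumed Weibull time-dependent part.
import math

def mlcc_reliability(t_hours, r0=0.999, eta=1e5, beta=2.5):
    """R(t) = R0 * exp(-(t/eta)**beta); eta, beta are illustrative."""
    return r0 * math.exp(-((t_hours / eta) ** beta))

for t in (1e3, 1e4, 1e5):
    print(f"t = {t:8.0f} h -> R = {mlcc_reliability(t):.4f}")
```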

  18. Effect of Clinically Discriminating, Evidence-Based Checklist Items on the Reliability of Scores from an Internal Medicine Residency OSCE

    ERIC Educational Resources Information Center

    Daniels, Vijay J.; Bordage, Georges; Gierl, Mark J.; Yudkowsky, Rachel

    2014-01-01

    Objective structured clinical examinations (OSCEs) are used worldwide for summative examinations but often lack acceptable reliability. Research has shown that reliability of scores increases if OSCE checklists for medical students include only clinically relevant items. Also, checklists are often missing evidence-based items that high-achieving…

  20. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue

    PubMed Central

    Beaurepaire, P.; Valdebenito, M.A.; Schuëller, G.I.; Jensen, H.A.

    2012-01-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress. PMID:23564979

  1. Reliability-based optimization of maintenance scheduling of mechanical components under fatigue.

    PubMed

    Beaurepaire, P; Valdebenito, M A; Schuëller, G I; Jensen, H A

    2012-05-01

    This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress.

  2. Assessing Elementary Lesions in Gout by Ultrasound: Results of an OMERACT Patient-based Agreement and Reliability Exercise.

    PubMed

    Terslev, Lene; Gutierrez, Marwin; Christensen, Robin; Balint, Peter V; Bruyn, George A; Delle Sedie, Andrea; Filippucci, Emilio; Garrido, Jesus; Hammer, Hilde B; Iagnocco, Annamaria; Kane, David; Kaeley, Gurjit S; Keen, Helen; Mandl, Peter; Naredo, Esperanza; Pineda, Carlos; Schicke, Bernd; Thiele, Ralf; D'Agostino, Maria Antonietta; Schmidt, Wolfgang A

    2015-11-01

    To test the reliability of the consensus-based ultrasound (US) definitions of elementary gout lesions in patients. Eight patients with microscopically proven gout were evaluated by 16 sonographers for signs of double contour (DC), aggregates, erosions, and tophi in the first metatarsophalangeal joint and the knee bilaterally. The patients were examined twice using B-mode US to test agreement and inter- and intraobserver reliability of the elementary components. The prevalence of the lesions was DC 52.8%, tophus 61.1%, aggregates 29.8%, and erosions 32.4%. The intraobserver reliability was good for all lesions except DC, where it was moderate. The best reliability per lesion was seen for tophus (κ 0.73, 95% CI 0.61-0.85) and the lowest for DC (κ 0.53, 95% CI 0.38-0.67). The interobserver reliability was good for tophus and erosions, but fair to moderate for aggregates and DC, respectively. The best reliability was seen for erosions (κ 0.74, 95% CI 0.65-0.81) and the lowest for aggregates (κ 0.21, 95% CI 0.04-0.37). This is the first step in testing the consensus-based US definitions of elementary lesions in patients with gout. High intraobserver reliability was found for all elementary lesions when applying the definitions in patients, while interobserver reliability was moderate to low. Further studies are needed to improve the interobserver reliability, particularly for DC and aggregates.

  3. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system that was originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, replication helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  4. Workplace-based assessment of communication skills: A pilot project addressing feasibility, acceptance and reliability.

    PubMed

    Weyers, Simone; Jemi, Iman; Karger, André; Raski, Bianca; Rotthoff, Thomas; Pentzek, Michael; Mortsiefer, Achim

    2016-01-01

    Background: Imparting communication skills has been given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether or not physicians view the method as feasible and how high acceptance is among students. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students held two consultations each with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated the student's performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using BGR. For qualitative and quantitative analysis, information on how physicians and students viewed feasibility and their levels of acceptance was collected in written form in a partially standardized manner. To test for reliability, the test-retest reliability was calculated for both of the overall evaluations given by each rater. The inter-rater reliability was determined for the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines. Its significant value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately. The degree of uniformity among the three raters in respect to the individual consultations is low. Discussion: Within the scope of this pilot project, only a small sample of physicians and

  5. Workplace-based assessment of communication skills: A pilot project addressing feasibility, acceptance and reliability

    PubMed Central

    Weyers, Simone; Jemi, Iman; Karger, André; Raski, Bianca; Rotthoff, Thomas; Pentzek, Michael; Mortsiefer, Achim

    2016-01-01

    Background: Imparting communication skills has been given great importance in medical curricula. In addition to standardized assessments, students should communicate with real patients in actual clinical situations during workplace-based assessments and receive structured feedback on their performance. The aim of this project was to pilot a formative testing method for workplace-based assessment. Our investigation centered in particular on whether or not physicians view the method as feasible and how high acceptance is among students. In addition, we assessed the reliability of the method. Method: As part of the project, 16 students held two consultations each with chronically ill patients at the medical practice where they were completing GP training. These consultations were video-recorded. The trained mentoring physician rated the student’s performance and provided feedback immediately following the consultations using the Berlin Global Rating scale (BGR). Two impartial, trained raters also evaluated the videos using BGR. For qualitative and quantitative analysis, information on how physicians and students viewed feasibility and their levels of acceptance was collected in written form in a partially standardized manner. To test for reliability, the test-retest reliability was calculated for both of the overall evaluations given by each rater. The inter-rater reliability was determined for the three evaluations of each individual consultation. Results: The formative assessment method was rated positively by both physicians and students. It is relatively easy to integrate into daily routines. Its significant value lies in the personal, structured and recurring feedback. The two overall scores for each patient consultation given by the two impartial raters correlate moderately. The degree of uniformity among the three raters in respect to the individual consultations is low. Discussion: Within the scope of this pilot project, only a small sample of physicians and

  6. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions.

    PubMed

    Boß, Leif; Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-08-31

    The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users' satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). At first, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=-.35, P<.001) and perceived stress (r=-.48, P<.001) demonstrated the construct validity of the scale. The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user's general satisfaction with Web-based interventions for depression and

  7. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions

    PubMed Central

    Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-01-01

    Background The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users’ satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. Objective The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). Methods The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). At first, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Results Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=−.35, P<.001) and perceived stress (r=−.48, P<.001) demonstrated the construct validity of the scale. Conclusions The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user’s general

  8. Bifactor Modeling and the Estimation of Model-Based Reliability in the WAIS-IV.

    PubMed

    Gignac, Gilles E; Watkins, Marley W

    2013-09-01

    Previous confirmatory factor analytic research that has examined the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) has endorsed either higher order models or oblique factor models that tend to amalgamate both general factor and index factor sources of systematic variance. An alternative model that has not yet been examined for the WAIS-IV is the bifactor model. Bifactor models allow all subtests to load onto both the general factor and their respective index factor directly. Bifactor models are also particularly amenable to the estimation of model-based reliabilities for both global composite scores (ωh) and subscale/index scores (ωs). Based on the WAIS-IV normative sample correlation matrices, a bifactor model that did not include any index factor cross loadings or correlated residuals was found to be better fitting than the conventional higher order and oblique factor models. Although the ωh estimate associated with the full scale intelligence quotient (FSIQ) scores was respectably high (.86), the ωs estimates associated with the WAIS-IV index scores were very low (.13 to .47). The results are interpreted in the context of the benefits of a bifactor modeling approach. Additionally, in light of the very low levels of unique internal consistency reliability associated with the index scores, it is contended that clinical index score interpretations are probably not justifiable.
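
    The model-based reliabilities named in the abstract follow directly from bifactor loadings: omega-type coefficients divide squared summed loadings by the total composite variance. The sketch below uses made-up loadings for one four-subtest index, not WAIS-IV estimates, and reproduces the qualitative pattern of a respectable total reliability alongside a low unique index reliability.

```python
# Omega coefficients from (invented) standardized bifactor loadings.
import numpy as np

g = np.array([0.70, 0.60, 0.65, 0.60])   # general-factor loadings (one index)
s = np.array([0.30, 0.35, 0.25, 0.40])   # index-factor loadings, same subtests
theta = 1 - g**2 - s**2                  # unique variances (standardized)

total_var = g.sum()**2 + s.sum()**2 + theta.sum()
omega_total = (g.sum()**2 + s.sum()**2) / total_var  # all systematic variance
omega_s = s.sum()**2 / total_var                     # unique index variance only
print(f"omega total = {omega_total:.2f}, omega_s = {omega_s:.2f}")
# -> respectable total reliability, but low unique index reliability
```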

  9. The Accessibility, Usability, and Reliability of Chinese Web-Based Information on HIV/AIDS

    PubMed Central

    Niu, Lu; Luo, Dan; Liu, Ying; Xiao, Shuiyuan

    2016-01-01

    Objective: The present study was designed to assess the quality of Chinese-language Internet-based information on HIV/AIDS. Methods: We entered the following search terms, in Chinese, into Baidu and Sogou: “HIV/AIDS”, “symptoms”, and “treatment”, and evaluated the first 50 hits of each query using the Minervation validation instrument (LIDA tool) and DISCERN instrument. Results: Of the 900 hits identified, 85 websites were included in this study. The overall score of the LIDA tool was 63.7%; the mean score of accessibility, usability, and reliability was 82.2%, 71.5%, and 27.3%, respectively. Of the top 15 sites according to the LIDA score, the mean DISCERN score was calculated at 43.1 (95% confidence intervals (CI) = 37.7–49.5). Noncommercial websites showed higher DISCERN scores than commercial websites; whereas commercial websites were more likely to be found in the first 20 links obtained from each search engine than the noncommercial websites. Conclusions: In general, the HIV/AIDS related Chinese-language websites have poor reliability, although their accessibility and usability are fair. In addition, the treatment information presented on Chinese-language websites is far from sufficient. There is an imperative need for professionals and specialized institutes to improve the comprehensiveness of web-based information related to HIV/AIDS. PMID:27556475

  10. The Soft Rock Socketed Monopile with Creep Effects - A Reliability Approach based on Wavelet Neural Networks

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Tomanovic, Zvonko; Zivaljevic, Slobodan

    2016-09-01

    In the present study, a numerical model of a pile embedded in marl, described by a time-dependent constitutive model based on laboratory tests, is proposed. The solution complements the state of knowledge for a monopile loaded by a horizontal force at its head, with the force values treated as randomly varying in time. The investigated reliability problem is defined by the union of failure events, each given by excessive maximal horizontal displacement of the pile head within one loading period. Abaqus has been used for modeling the task, with a two-layered viscoplastic model for the marl. The mechanical parameters for both parts of the model, plastic and rheological, were calibrated against the creep laboratory test results. An important aspect of the problem is the reliability analysis of a monopile in a complex environment under random sequences of loads, which helps in understanding the role of viscosity in the behavior of rock foundations. Due to the lack of analytical solutions, the computations were performed with the response surface method in conjunction with a wavelet neural network, a method recommended for processing time sequences and describing nonlinear phenomena.

  11. Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization

    SciTech Connect

    Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan

    2014-10-06

    Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years, and many protocols have been put forward from the standpoint of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from statistical data can be improved mainly by exchanging more packets, which greatly consumes the limited power resources. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with collaborative sensing of clock timestamps, and the fusion weight is defined by the covariance of the sync errors for the different clock deviations. Extensive simulation results show that the proposed approach achieves better performance in terms of sync overhead and sync accuracy.
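
    The fusion step described here is, in essence, inverse-variance weighting of several clock-deviation estimates. A minimal sketch with illustrative values:

```python
# Inverse-variance weighted fusion of clock-offset estimates.
import numpy as np

offsets = np.array([12.4, 11.8, 12.9])   # clock offset estimates (microseconds)
err_var = np.array([0.50, 0.20, 0.80])   # sync-error variance of each estimate

w = 1.0 / err_var
w /= w.sum()                             # normalize weights to sum to 1
fused = float(w @ offsets)
fused_var = 1.0 / np.sum(1.0 / err_var)  # variance of the fused estimate
print(f"fused offset = {fused:.2f} us, variance = {fused_var:.3f}")
```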

  12. TOPICAL REVIEW: Mechanistically based probability modelling, life prediction and reliability assessment

    NASA Astrophysics Data System (ADS)

    Wei, Robert P.; Harlow, D. Gary

    2005-01-01

    Life prediction and reliability assessment are essential components for the life-cycle engineering and management (LCEM) of modern engineered systems. These systems can range from microelectronic and bio-medical devices to large machinery and structures. To be effective, the underlying approach to LCEM must be transformed to embody mechanistically based probability modelling, vis-à-vis the more traditional experientially based statistical modelling, for predicting damage evolution and distribution. In this paper, the probability and statistical approaches are compared and differentiated. The process of model development on the basis of mechanistic understanding derived from critical experiments is illustrated through selected examples. The efficacy of this approach is illustrated through an example of the evolution and distribution of corrosion and corrosion fatigue damage in aluminium alloys in relation to aircraft that had been in long-term service.

  13. Reliable Identification of Vehicle-Boarding Actions Based on Fuzzy Inference System

    PubMed Central

    Ahn, DaeHan; Park, Homin; Hwang, Seokhyun; Park, Taejoon

    2017-01-01

    Existing smartphone-based solutions to prevent distracted driving suffer from inadequate system designs that only recognize simple and clean vehicle-boarding actions, thereby failing to meet the required level of accuracy in real-life environments. In this paper, exploiting unique sensory features consistently monitored from a broad range of complicated vehicle-boarding actions, we propose a reliable and accurate system based on fuzzy inference to classify the sides of vehicle entrance by leveraging built-in smartphone sensors only. The results of our comprehensive evaluation on three vehicle types with four participants demonstrate that the proposed system achieves 91.1%∼94.0% accuracy, outperforming other methods by 26.9%∼38.4% and maintains at least 87.8% accuracy regardless of smartphone positions and vehicle types. PMID:28208795
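
    As a loose illustration of fuzzy inference for this kind of classification, the toy sketch below applies triangular membership functions to a single lateral-acceleration feature and defuzzifies by weighted average. The feature, ranges and rules are invented and far simpler than the paper's system.

```python
# Toy fuzzy classifier for boarding side from one invented feature.
def tri(x, a, b, c):
    """Triangular membership with peak at b, support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def boarding_side(lateral_acc):
    # rule 1: negative lateral sway -> passenger-side entry (toy assumption)
    passenger = tri(lateral_acc, -3.0, -1.5, 0.0)
    # rule 2: positive lateral sway -> driver-side entry (toy assumption)
    driver = tri(lateral_acc, 0.0, 1.5, 3.0)
    if passenger == driver == 0.0:
        return "undecided"
    score = (driver - passenger) / (driver + passenger)  # weighted average
    return "driver side" if score > 0 else "passenger side"

print(boarding_side(-1.2))   # -> passenger side (under these toy rules)
```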

  14. Reliability of file-based retrospective ratings of psychopathy with the PCL-R.

    PubMed

    Grann, M; Långström, N; Tengström, A; Stålenheim, E G

    1998-06-01

    A rapidly emerging consensus recognizes Hare's Psychopathy Checklist-Revised (PCL-R; Hare, 1991) as the most valid and useful instrument to assess psychopathy (Fulero, 1995; Stone, 1995). We compared independent clinical PCL-R ratings of 40 forensic adult male criminal offenders to retrospective file-only ratings. File-based PCL-R ratings, in comparison to the clinical ratings, yielded categorical psychopathy diagnoses with a sensitivity of .57 and a specificity of .96. The intraclass correlation (ICC) of the total scores as estimated by ICC(2,1) was .88, and was markedly better on Factor 2, ICC(2,1) = .89, than on Factor 1, ICC(2,1) = .69. The findings support the belief that for research purposes, file-only PCL-R ratings based on Swedish forensic psychiatric investigation records can be made with good alternate-form reliability.

  15. Reliable Identification of Vehicle-Boarding Actions Based on Fuzzy Inference System.

    PubMed

    Ahn, DaeHan; Park, Homin; Hwang, Seokhyun; Park, Taejoon

    2017-02-09

    Existing smartphone-based solutions to prevent distracted driving suffer from inadequate system designs that only recognize simple and clean vehicle-boarding actions, thereby failing to meet the required level of accuracy in real-life environments. In this paper, exploiting unique sensory features consistently monitored from a broad range of complicated vehicle-boarding actions, we propose a reliable and accurate system based on fuzzy inference to classify the sides of vehicle entrance by leveraging built-in smartphone sensors only. The results of our comprehensive evaluation on three vehicle types with four participants demonstrate that the proposed system achieves 91.1%∼94.0% accuracy, outperforming other methods by 26.9%∼38.4%, and maintains at least 87.8% accuracy regardless of smartphone positions and vehicle types.

  16. Reliability and validity of an internet-based questionnaire measuring lifetime physical activity.

    PubMed

    De Vera, Mary A; Ratzlaff, Charles; Doerfling, Paul; Kopec, Jacek

    2010-11-15

    Lifetime exposure to physical activity is an important construct for evaluating associations between physical activity and disease outcomes, given the long induction periods in many chronic diseases. The authors' objective in this study was to evaluate the measurement properties of the Lifetime Physical Activity Questionnaire (L-PAQ), a novel Internet-based, self-administered instrument measuring lifetime physical activity, among Canadian men and women in 2005-2006. Reliability was examined using a test-retest study. Validity was examined in a 2-part study consisting of 1) comparisons with previously validated instruments measuring similar constructs, the Lifetime Total Physical Activity Questionnaire (LT-PAQ) and the Chasan-Taber Physical Activity Questionnaire (CT-PAQ), and 2) a priori hypothesis tests of constructs measured by the L-PAQ. The L-PAQ demonstrated good reliability, with intraclass correlation coefficients ranging from 0.67 (household activity) to 0.89 (sports/recreation). Comparison between the L-PAQ and the LT-PAQ resulted in Spearman correlation coefficients ranging from 0.41 (total activity) to 0.71 (household activity); comparison between the L-PAQ and the CT-PAQ yielded coefficients of 0.58 (sports/recreation), 0.56 (household activity), and 0.50 (total activity). L-PAQ validity was further supported by observed relations between the L-PAQ and sociodemographic variables, consistent with a priori hypotheses. Overall, the L-PAQ is a useful instrument for assessing multiple domains of lifetime physical activity with acceptable reliability and validity.

  17. A Reliability Test of a Complex System Based on Empirical Likelihood

    PubMed Central

    Zhang, Jun; Hui, Yongchang

    2016-01-01

    To analyze the reliability of a complex system described by minimal paths, an empirical likelihood method is proposed to solve the reliability test problem when the subsystem distributions are unknown. Furthermore, we provide a reliability test statistic of the complex system and extract the limit distribution of the test statistic. Therefore, we can obtain the confidence interval for reliability and make statistical inferences. The simulation studies also demonstrate the theorem results. PMID:27760130

  18. Reliability of smartphone-based teleradiology for evaluating thoracolumbar spine fractures.

    PubMed

    Stahl, Ido; Dreyfuss, Daniel; Ofir, Dror; Merom, Lior; Raichel, Michael; Hous, Nir; Norman, Doron; Haddad, Elias

    2017-02-01

    Timely interpretation of computed tomography (CT) scans is of paramount importance in diagnosing and managing spinal column fractures, which can be devastating. Out-of-hospital, on-call spine surgeons are often asked to evaluate CT scans of patients who have sustained trauma to the thoracolumbar spine to make diagnosis and to determine the appropriate course of urgent treatment. Capturing radiographic scans and video clips from computer screens and sending them as instant messages have become common means of communication between physicians, aiding in triaging and transfer decision-making in orthopedic and neurosurgical emergencies. The present study aimed to compare the reliability of interpreting CT scans viewed by orthopedic surgeons in two ways for diagnosing, classifying, and treatment planning for thoracolumbar spine fractures: (1) captured as video clips from standard workstation-based picture archiving and communication system (PACS) and sent via a smartphone-based instant messaging application for viewing on a smartphone; and (2) viewed directly on a PACS. Reliability and agreement study. Thirty adults with thoracolumbar spine fractures who had been consecutively admitted to the Division of Orthopedic Surgery of a Level I trauma center during 2014. Intraobserver agreement. CT scans were captured by use of an iPhone 6 smartphone from a computer screen displaying PACS. Then by use of the WhatsApp instant messaging application, video clips of the scans were sent to the personal smartphones of five spine surgeons. These evaluators were asked to diagnose, classify, and determine the course of treatment for each case. Evaluation of the cases was repeated 4 weeks later, this time using the standard method of workstation-based PACS. Intraobserver agreement was interpreted based on the value of Cohen's kappa statistic. The study did not receive any outside funding. Intraobserver agreement for determining fracture level was near perfect (κ=0.94). Intraobserver

  19. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
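
    The paper's exact algorithm integrates a genetic algorithm, minimal paths and the Recursive Sum of Disjoint Products; as a lighter stand-in, the sketch below estimates the multi-commodity reliability of one fixed carrier selection by Monte Carlo over random multi-state route capacities. Routes, demands and capacity distributions are invented.

```python
# Monte Carlo estimate of P(total delivered capacity >= total demand)
# for one fixed carrier selection on a toy two-route network.
import random

random.seed(3)
# each route: list of (capacity, probability) states for the selected carrier
routes = {
    "r1": [(0, 0.05), (5, 0.25), (10, 0.70)],
    "r2": [(0, 0.10), (5, 0.40), (10, 0.50)],
}
demand = {"monitors_24in": 8, "monitors_27in": 6}   # units to deliver in total

def draw(states):
    """Sample one capacity state from its discrete distribution."""
    u, acc = random.random(), 0.0
    for cap, p in states:
        acc += p
        if u <= acc:
            return cap
    return states[-1][0]

def satisfied():
    total_cap = sum(draw(s) for s in routes.values())
    return total_cap >= sum(demand.values())

n = 100000
rel = sum(satisfied() for _ in range(n)) / n
print(f"estimated multi-commodity reliability = {rel:.3f}")
```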

  20. Reliability based calibration of partial safety factors for design of free pipeline spans

    SciTech Connect

    Ronold, K.O.; Nielsen, N.J.R.; Tura, F.; Bryndum, M.B.; Smed, P.F.

    1995-12-31

    This paper demonstrates how a structural reliability method can be applied as a rational means to analyze free spans of submarine pipelines with respect to failure in ultimate loading, and to establish partial safety factors for the design of such free spans against this failure mode. It is important to note that the described procedure should be considered an illustration of a structural reliability methodology, and that the results do not represent a set of final design recommendations. A scope of design cases, consisting of a number of available site-specific pipeline spans, is established and is assumed representative of the future occurrence of submarine pipeline spans. Probabilistic models for the wave and current loading and its transfer to stresses in the pipe wall of a pipeline span are established, together with a stochastic representation of the material resistance. The event of failure in ultimate loading is based on a limit state which is reached when the maximum stress over the design life of the pipeline exceeds the yield strength of the pipe material. The yielding limit state is considered an ultimate limit state (ULS).
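
    The yielding limit state described here (failure when the lifetime-maximum stress exceeds the yield strength) can be sketched with a direct Monte Carlo estimate; the distributions and the partial factors in the deterministic check below are illustrative assumptions, not calibrated values from the study.

```python
# Direct Monte Carlo on the yielding limit state: P(max stress > yield).
import numpy as np

rng = np.random.default_rng(11)
n = 200_000
s_max = rng.gumbel(loc=280.0, scale=15.0, size=n)  # lifetime max stress (MPa)
f_y = rng.normal(loc=360.0, scale=20.0, size=n)    # yield strength (MPa)
pf = np.mean(s_max > f_y)                          # probability of failure
print(f"estimated failure probability = {pf:.2e}")

# Deterministic check with assumed partial factors on characteristic values:
gamma_s, gamma_m = 1.1, 1.05
print("design check passes:", gamma_s * 280.0 <= 360.0 / gamma_m)
```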

  1. A reliability-based calibration study for upheaval buckling of pipelines

    SciTech Connect

    Moerk, K.J.; Bjoernsen, T.; Venaas, A.; Thorkildsen, F.

    1995-12-31

    When a pipeline is subjected to axial compressive loads, it will try to extend and release the compressive force. If the pipeline is restrained by a soil or rock cover, forces will develop between the pipeline and the soil. Upheaval buckling will occur when the force exerted by the pipe on the soil exceeds the vertical restraint. The pipeline then moves upwards, possibly leading to unacceptable local plastic deformation or collapse, or making the pipeline vulnerable to fishing gear and other third-party activities. The present paper describes a reliability-based design procedure against upheaval buckling of rock- or soil-covered pipelines. The study is performed using state-of-the-art design methodologies, including an assessment of all known uncertainties related to the load and capacity, measurements, surveys and confidence in the applied models. A response surface technique is applied within the level 3 reliability analysis. Target safety levels are discussed, and a case-specific calibration of partial safety factors in a consistent design equation against upheaval buckling is performed. Finally, a set of safety factors is proposed for both SLS and ULS requirements.

  2. Novel bandgap-based under-voltage-lockout methods with high reliability

    NASA Astrophysics Data System (ADS)

    Yongrui, Zhao; Xinquan, Lai

    2013-10-01

    Highly reliable bandgap-based under-voltage-lockout (UVLO) methods are presented in this paper. The proposed methods for converting the under-voltage state into a signal take full advantage of the bandgap's high temperature stability, and enhanced low-voltage protection methods protect the core circuit from erroneous operation; moreover, a common-source amplifier stage is introduced to expand the output voltage range. All of these methods are verified in a UVLO circuit fabricated in a 0.5 μm standard BCD process technology. The experimental results show that the proposed bandgap method exhibits a good temperature coefficient of 20 ppm/°C, which ensures that the UVLO keeps a stable output until the under-voltage state changes. Moreover, at room temperature, the high threshold voltage VTH+ generated by the UVLO is 12.3 V with a maximum drift of ±80 mV, and the low threshold voltage VTH- is 9.5 V with a maximum drift of ±70 mV. The low-voltage protection method used in the circuit also provides high reliability when the supply voltage is very low.

  3. Reliability of Beta angle in assessing true anteroposterior apical base discrepancy in different growth patterns.

    PubMed

    Sundareswaran, Shobha; Kumar, Vinay

    2015-01-01

    The Beta angle as a skeletal anteroposterior dysplasia indicator is known to be useful in evaluating normodivergent growth patterns. Hence, we compared and verified the accuracy of the Beta angle in predicting sagittal jaw discrepancy among subjects with hyperdivergent, hypodivergent and normodivergent growth patterns. Lateral cephalometric radiographs of 179 patients belonging to skeletal Classes I, II, and III were further divided into normodivergent, hyperdivergent, and hypodivergent groups based on their vertical growth patterns. Sagittal dysplasia indicators (angle ANB, Wits appraisal, and the Beta angle) were measured and tabulated. The perpendicular point of intersection on line CB (Condylion-Point B) in the Beta angle was designated as 'X' and the linear dimension XB was evaluated. A statistically significant increase was observed in the mean values of the Beta angle and the XB distance in the vertical growth pattern groups of both skeletal Class I and Class II patients, thus pushing them toward Class III and Class I, respectively. The Beta angle is a reliable indicator of sagittal dysplasia in normal and horizontal growth patterns. However, vertical growth patterns significantly increased Beta angle values, thus affecting their reliability as a sagittal discrepancy assessment tool. Hence, the Beta angle may not be a valid tool for the assessment of sagittal jaw discrepancy in patients exhibiting vertical growth patterns with skeletal Class I and Class II malocclusions. Nevertheless, Class III malocclusions, having the highest Beta angle values, were unaffected.

  4. Reliability analysis of laser ultrasonics for train axle diagnostics based on model assisted POD curves

    NASA Astrophysics Data System (ADS)

    Malik, M. S.; Cavuto, A.; Martarelli, M.; Pandarese, G.; Revel, G. M.

    2014-05-01

    High-speed train axles are integrated for a lifetime, and conducting in-service inspection with high accuracy is time- and resource-consuming. Laser ultrasonics, a non-contact measuring method that is effective also for hard-to-reach areas, is a proposed solution, and has recently proved effective using a Laser Doppler Vibrometer (LDV) or air-coupled probes in reception. A reliability analysis of laser ultrasonics for this specific application is performed here. The research is mainly based on a numerical study of the effect of high-energy laser pulses on the surface of a steel axle and of the behavior of the ultrasonic waves in detecting possible defects. The Probability of Detection (POD) concept is used as an estimate of the reliability of the inspection method. In particular, Model Assisted Probability of Detection (MAPOD), a modified form of POD in which models are used to infer results for a decisive statistical estimate of the POD curve, is adopted here. This paper implements this approach by taking the inputs from limited experiments conducted on a high-speed train axle using laser ultrasonics (source: pulsed Nd:YAG; reception: high-frequency LDV) to calibrate a multiphysics FE model, and by using the calibrated model to generate data samples statistically representative of damaged train axles. The simulated flaws are in accordance with the real defects present on the axle. A set of flaws of different depths has been modeled in order to assess the laser ultrasonics POD for this specific application.
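
    A common way to turn such model-generated samples into a POD curve is a hit/miss regression on log flaw size; the sketch below fits a logistic POD model to hypothetical depth/detection pairs (invented numbers, standing in for the calibrated FE outputs) and reads off a90, the smallest depth detected with 90% probability:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical hit/miss outcomes for simulated flaw depths (mm).
      depth = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.6, 2.0, 2.5, 3.0])
      hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

      # Classic hit/miss POD model: logistic link on log flaw size.
      X = np.log(depth).reshape(-1, 1)
      model = LogisticRegression(C=1e6).fit(X, hit)  # ~unpenalized fit

      a = np.linspace(0.2, 3.0, 300)
      pod = model.predict_proba(np.log(a).reshape(-1, 1))[:, 1]
      a90 = a[pod >= 0.90][0]  # smallest depth with POD >= 90 %
      print(f"a90 = {a90:.2f} mm")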

  5. On the Reliability of a Solitary Wave Based Transducer to Determine the Characteristics of Some Materials

    PubMed Central

    Deng, Wen; Nasrollahi, Amir; Rizzo, Piervincenzo; Li, Kaiyuan

    2015-01-01

    In the study presented in this article we investigated the feasibility and the reliability of a transducer design for the nondestructive evaluation (NDE) of the stiffness of structural materials. The NDE method is based on the propagation of highly nonlinear solitary waves (HNSWs) along a one-dimensional chain of spherical particles that is in contact with the material to be assessed. The chain is part of a built-in system designed and assembled to excite and detect HNSWs, and to exploit the dynamic interaction between the particles and the material to be inspected. This interaction influences the time-of-flight and the amplitude of the solitary pulses reflected at the transducer/material interface. The results of this study show that certain features of the waves are dependent on the modulus of elasticity of the material and that the built-in system is reliable. In the future the proposed NDE method may provide a cost-effective tool for the rapid assessment of materials’ modulus. PMID:26703617

  6. A Compact Forearm Crutch Based on Force Sensors for Aided Gait: Reliability and Validity

    PubMed Central

    Chamorro-Moriana, Gema; Sevillano, José Luis; Ridao-Fernández, Carmen

    2016-01-01

    Frequently, patients who suffer injuries to a lower limb require forearm crutches in order to partially unload weight-bearing. These lesions cause pain when the lower limb is loaded, so unloading should be controlled objectively to avoid significant errors in accuracy and, consequently, complications and after-effects in the lesions. The design of a new and feasible tool that allows us to control and improve the accuracy of the loads exerted on crutches during aided gait is therefore necessary, so as to unburden the lower limbs. In this paper, we describe such a system based on a force sensor, which we have named the GCH System 2.0. Furthermore, we determine the validity and reliability of measurements obtained using this tool via a comparison with the validated AMTI (Advanced Mechanical Technology, Inc., Watertown, MA, USA) OR6-7-2000 platform. An intra-class correlation coefficient demonstrated excellent agreement between the AMTI platform and the GCH System. A regression line to determine the predictive ability of the GCH System towards the AMTI platform was found, which obtained a precision of 99.3%. A detailed statistical analysis is presented for all the measurements, also segregated by the requested loads on the crutches (10%, 25% and 50% of body weight). Our results show that our system, designed for assessing the loads exerted by patients on forearm crutches during assisted gait, provides valid and reliable measurements of load. PMID:27338396
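
    For context, an intra-class correlation of the kind used to compare the two devices can be computed directly; the sketch below implements the two-way random, absolute-agreement, single-measures form ICC(2,1) and applies it to hypothetical paired load readings (the real study used the measured GCH/AMTI pairs):

      import numpy as np

      def icc_2_1(ratings):
          """ICC(2,1): two-way random effects, absolute agreement,
          single measures. ratings: (n_subjects, k_raters) array."""
          Y = np.asarray(ratings, dtype=float)
          n, k = Y.shape
          grand = Y.mean()
          ss_rows = k * np.sum((Y.mean(axis=1) - grand) ** 2)
          ss_cols = n * np.sum((Y.mean(axis=0) - grand) ** 2)
          ss_err = np.sum((Y - grand) ** 2) - ss_rows - ss_cols
          ms_r = ss_rows / (n - 1)
          ms_c = ss_cols / (k - 1)
          ms_e = ss_err / ((n - 1) * (k - 1))
          return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e
                                  + k * (ms_c - ms_e) / n)

      # Hypothetical paired loads (N): GCH System vs. AMTI platform.
      loads = np.array([[102, 100], [251, 249], [480, 476], [152, 150]])
      print(icc_2_1(loads))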

  7. Reliability of Beta angle in assessing true anteroposterior apical base discrepancy in different growth patterns

    PubMed Central

    Sundareswaran, Shobha; Kumar, Vinay

    2015-01-01

    Introduction: Beta angle as a skeletal anteroposterior dysplasia indicator is known to be useful in evaluating normodivergent growth patterns. Hence, we compared and verified the accuracy of the Beta angle in predicting sagittal jaw discrepancy among subjects with hyperdivergent, hypodivergent and normodivergent growth patterns. Materials and Methods: Lateral cephalometric radiographs of 179 patients belonging to skeletal Classes I, II, and III were further divided into normodivergent, hyperdivergent, and hypodivergent groups based on their vertical growth patterns. Sagittal dysplasia indicators - angle ANB, Wits appraisal, and Beta angle values - were measured and tabulated. The perpendicular point of intersection on line CB (Condylion-Point B) in the Beta angle was designated as ‘X’ and the linear dimension XB was evaluated. Results: A statistically significant increase was observed in the mean values of Beta angle and XB distance in the vertical growth pattern groups of both skeletal Class I and Class II patients, pushing them toward Class III and Class I, respectively. Conclusions: The Beta angle is a reliable indicator of sagittal dysplasia in normal and horizontal patterns of growth. However, vertical growth patterns significantly increased Beta angle values, thus affecting their reliability as a sagittal discrepancy assessment tool. Hence, the Beta angle may not be a valid tool for assessment of sagittal jaw discrepancy in patients exhibiting vertical growth patterns with skeletal Class I and Class II malocclusions. Nevertheless, Class III malocclusions, which had the highest Beta angle values, were unaffected. PMID:25810649

  8. Physics of Failure Analysis of Xilinx Flip Chip CCGA Packages: Effects of Mission Environments on Properties of LP2 Underfill and ATI Lid Adhesive Materials

    NASA Technical Reports Server (NTRS)

    Suh, Jong-ook

    2013-01-01

    The Xilinx Virtex 4QV and 5QV (V4 and V5) are next-generation field-programmable gate arrays (FPGAs) for space applications. However, there have been concerns within the space community regarding the non-hermeticity of V4/V5 packages; polymeric materials such as the underfill and lid adhesive will be directly exposed to the space environment. In this study, reliability concerns associated with the non-hermeticity of V4/V5 packages were investigated by studying the properties and behavior of the underfill and lid adhesive materials used in V4/V5 packages.

  9. Reliability and Validity of a Wireless Microelectromechanicals Based System (Keimove™) for Measuring Vertical Jumping Performance

    PubMed Central

    Requena, Bernardo; García, Inmaculada; Requena, Francisco; Saez-Saez de Villarreal, Eduardo; Pääsuke, Mati

    2012-01-01

    The aim of this study was to determine the validity and reliability of a microelectromechanical systems (MEMS)-based system (Keimove™) in measuring flight time and takeoff velocity during a counter-movement jump (CMJ). As criterion references, data from a high-speed camera (HSC) and a force platform (FP) synchronized with a linear position transducer (LPT) were used. Thirty professional soccer players completely familiarized with the CMJ technique performed three CMJs. The second and third trials were used for further analysis. The Keimove™ system, the HSC and the FP synchronized with the LPT (FP+LPT) simultaneously measured the CMJ performance. During each repetition, the Keimove™ system registered flight time and velocity at takeoff. At the same time and as criterion reference, both the HSC and the FP recorded the flight time while the LPT+FP registered the velocity at takeoff. Pearson correlation coefficients for the flight time were high (r = 0.99; p < 0.001) when the Keimove™ system was compared with the HSC or the FP+LPT, respectively. For the velocity at takeoff variable, the Pearson r between the Keimove™ system and the FP+LPT was lower although significant at the 0.05 level. No significant differences in mean values were observed for flight times and velocity at takeoff between the three devices. Intraclass correlations and coefficients of variation between trials were similar and ranged between 0.92-0.97 and 2.1-7.4, respectively. In conclusion, the Keimove™ system represents a valid and reliable instrument to measure velocity at takeoff and flight time during CMJ testing. Thus, this MEMS-based system will offer a portable, cost-effective tool for the assessment of CMJ performance. Key points The Keimove™ system is composed of specific software and a wireless MEMS-based device designed to be attached at the lumbar region of the athlete. The Keimove™ system is a mechanically valid and reliable instrument in measuring flight time and velocity at takeoff

  10. An Examination of Temporal Trends in Electricity Reliability Based on Reports from U.S. Electric Utilities

    SciTech Connect

    Eto, Joseph H.; LaCommare, Kristina Hamachi; Larsen, Peter; Todd, Annika; Fisher, Emily

    2012-01-06

    Since the 1960s, the U.S. electric power system has experienced a major blackout about once every 10 years. Each has been a vivid reminder of the importance society places on the continuous availability of electricity and has led to calls for changes to enhance reliability. At the root of these calls are judgments about what reliability is worth and how much should be paid to ensure it. In principle, comprehensive information on the actual reliability of the electric power system and on how proposed changes would affect reliability ought to help inform these judgments. Yet, comprehensive, national-scale information on the reliability of the U.S. electric power system is lacking. This report helps to address this information gap by assessing trends in U.S. electricity reliability based on information reported by electric utilities on power interruptions experienced by their customers. Our research augments prior investigations, which focused only on power interruptions originating in the bulk power system, by considering interruptions originating both from the bulk power system and from within local distribution systems. Our research also accounts for differences among utility reliability reporting practices by employing statistical techniques that remove the influence of these differences on the trends that we identify. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. The questions analyzed include: 1. Are there trends in reported electricity reliability over time? 2. How are trends in reported electricity reliability affected by the installation or upgrade of an automated outage management system? 3. How are trends in reported electricity reliability affected by the use of IEEE Standard 1366-2003?

  11. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    NASA Astrophysics Data System (ADS)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of its huge advantages for storage, RAID technology has been widely used. However, a drawback of this technology is that a hard disk attached behind a RAID card cannot be queried directly by the operating system. Reading the disk's self-monitoring information and log data has therefore been a problem, while these data are necessary for hard disk reliability testing. Traditional approaches can read this information only for SATA hard disks, not for SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application program interface (API) to communicate with the RAID card and analyze the returned data, thereby obtaining the information necessary to assess the SAS hard disk.
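
    The paper works against the proprietary LSI API directly; as a rough illustration of the same idea with off-the-shelf tooling, smartmontools can tunnel SMART queries through a MegaRAID controller. The sketch below (device path and disk index are placeholders) shells out to smartctl with its MegaRAID pass-through option:

      import subprocess

      def sas_disk_health(device="/dev/sda", disk_id=0):
          """Query SMART/log pages of a SAS disk hidden behind an
          LSI/Broadcom MegaRAID card via smartctl's pass-through.
          Assumes smartmontools is installed; arguments are examples."""
          result = subprocess.run(
              ["smartctl", "-a", "-d", f"megaraid,{disk_id}", device],
              capture_output=True, text=True, check=False)
          return result.stdout

      print(sas_disk_health())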

  12. Fast and reliable interrogation of USFBG sensors based on MG-Y laser discrete wavelength channels

    NASA Astrophysics Data System (ADS)

    Rohollahnejad, Jalal; Xia, Li; Cheng, Rui; Ran, Yanli; Su, Lei

    2017-01-01

    In this letter, we propose to use discrete wavelength channels of a single-chip MG-Y laser to interrogate an ultra-short fiber Bragg grating (USFBG) with a wide Gaussian spectrum. The broadband Gaussian spectrum of the USFBG is sampled at the wavelength channels of the MG-Y laser, from which the center of the spectrum can be determined. The measurement inherits the important features of common tunable laser interrogation, namely its high flexibility and natural insensitivity to intensity variations relative to common intensity-based approaches. Whereas traditional tunable laser methods require sweeping the whole spectrum to obtain its center wavelength, the proposed scheme needs only a few discrete laser wavelength channels to be acquired, which leads to significant improvements in efficiency and measurement speed. This reliable and low-cost concept could offer a good foundation for future USFBG applications in large-scale distributed measurements, especially in time-domain multiplexing schemes.
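
    Because the logarithm of a Gaussian is a parabola, a handful of channel samples suffice to locate the spectrum's center by a least-squares vertex fit; a minimal sketch with invented channel wavelengths and an assumed noiseless Gaussian reflection spectrum:

      import numpy as np

      # Five hypothetical MG-Y wavelength channels (nm) sampling a
      # Gaussian USFBG reflection spectrum centered at 1550.07 nm.
      lam = np.array([1549.6, 1549.8, 1550.0, 1550.2, 1550.4])
      center_true, sigma = 1550.07, 0.25
      I = np.exp(-(lam - center_true) ** 2 / (2 * sigma ** 2))

      # ln(I) of a Gaussian is a parabola; its vertex is the center.
      a, b, c = np.polyfit(lam, np.log(I), 2)
      center_est = -b / (2 * a)
      print(center_est)  # ~1550.07 nm recovered from five channels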

  13. Comparative analysis of different configurations of PLC-based safety systems from reliability point of view

    NASA Technical Reports Server (NTRS)

    Tapia, Moiez A.

    1993-01-01

    A comparative analysis of distinct multiplex and fault-tolerant configurations for a PLC-based safety system, from a reliability point of view, is presented. It considers simplex, duplex and fault-tolerant triple-redundancy configurations. The standby unit in a duplex configuration has a failure rate that is k times the failure rate of the main unit, with k varying from 0 to 1. For distinct values of MTTR and MTTF of the main unit, the MTBF and availability of these configurations are calculated. The effect of duplexing only the PLC module, or only the sensors and actuators module, on the MTBF of the configuration is also presented. The results are summarized, and the merits and demerits of the various configurations under distinct environments are discussed.
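
    The textbook no-repair approximations behind such a comparison are easy to reproduce; the sketch below (hypothetical failure rate and repair time) contrasts simplex, warm-standby duplex with the abstract's derating factor k, and 2-out-of-3 triple redundancy:

      # Unit failure rate (per hour) and repair time are assumed values.
      lam = 1.0e-4
      mttr = 8.0

      mttf_simplex = 1.0 / lam
      avail_simplex = mttf_simplex / (mttf_simplex + mttr)

      def mttf_duplex(k):
          """Warm standby, perfect switching, no repair: first failure
          of either unit at rate lam*(1+k), survivor then runs alone."""
          return 1.0 / (lam * (1.0 + k)) + 1.0 / lam

      mttf_tmr = 1.0 / (3 * lam) + 1.0 / (2 * lam)  # 2-of-3 majority

      for k in (0.0, 0.5, 1.0):
          print(f"k={k:.1f}: duplex MTTF = {mttf_duplex(k):,.0f} h")
      print(f"TMR MTTF = {mttf_tmr:,.0f} h, "
            f"simplex availability = {avail_simplex:.5f}")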

  14. Improved mechanical reliability of MEMS electret based vibration energy harvesters for automotive applications

    NASA Astrophysics Data System (ADS)

    Renaud, M.; Fujita, T.; Goedbloed, M.; de Nooijer, C.; van Schaijk, R.

    2014-11-01

    Current commercial wireless tire pressure monitoring systems (TPMS) require a battery as the electrical power source. The battery limits the lifetime of the TPMS. This limit can be circumvented by replacing the battery with a vibration energy harvester. Autonomous wireless TPMS powered by a MEMS electret-based vibration energy harvester have been demonstrated. A remaining technical challenge for these autonomous TPMS to attain commercial-product grade is the mechanical reliability of the MEMS harvester. It should survive the harsh conditions imposed by the tire environment, particularly in terms of mechanical shocks. As shown in this article, our first generation of harvesters has a shock resilience of 400 g, which is far from sufficient for the targeted application. In order to improve this aspect, several types of shock-absorbing structures are investigated. With the best proposed solution, the shock resilience of the harvesters is brought above 2500 g.

  15. Reliability-Based Design of a Safety-Critical Automation System: A Case Study

    NASA Technical Reports Server (NTRS)

    Carroll, Carol W.; Dunn, W.; Doty, L.; Frank, M. V.; Hulet, M.; Alvarez, Teresa (Technical Monitor)

    1994-01-01

    In 1986, NASA funded a project to modernize the NASA Ames Research Center Unitary Plan Wind Tunnels, including the replacement of obsolescent controls with a modern, automated distributed control system (DCS). The project effort on this system included an independent safety analysis (ISA) of the automation system. The purpose of the ISA was to evaluate the completeness of the hazard analyses which had already been performed on the Modernization Project. The ISA approach followed a tailoring of the risk assessment approach widely used on existing nuclear power plants. The tailoring of the nuclear industry oriented risk assessment approach to the automation system and its role in reliability-based design of the automation system is the subject of this paper.

  16. Development of a reliable transmission-based laser sensor system for intelligent transportation systems

    NASA Astrophysics Data System (ADS)

    Chowdhury, Mashrur A.; Banerjee, Partha; Nehmetallah, Georges; Goodhue, Paul C.; Das, Arobindu; Atluri, Mahesh

    2004-10-01

    The transportation community has applied sensors for various traffic management purposes, such as traffic signal control, ramp metering, traveler information development, and incident detection, by collecting and processing real-time vehicle position and speed. The U.S. transportation community has not adopted any single newer traffic detector as the most accepted choice. The objective of this research is to develop an infrared sensor system in the laboratory that will provide improved estimates of vehicle speed compared to those available from current infrared sensors, to model the sensor's failure conditions and probabilities, and ultimately to refine the sensor to provide the most reliable data under various environmental conditions. This paper presents the initial development of the proposed sensor system. This system will be implemented on a highway segment to evaluate its risks of failure under various environmental conditions. A modified design will then be developed based on the field evaluations.

  17. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
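
    The aggregation idea can be illustrated with a toy architecture description; the sketch below (component names, rates and the nested series/parallel structure are all hypothetical, not the patented generator) recursively combines exponential low-level models:

      import math

      rates = {"cpu": 2e-5, "bus": 5e-6, "sensor": 1e-4}  # failures/h

      def r(name, t):
          """Low-level model: exponential component reliability."""
          return math.exp(-rates[name] * t)

      def system_reliability(arch, t):
          """Aggregate nested ('series'|'parallel', [parts]) structures."""
          kind, parts = arch
          vals = [r(p, t) if isinstance(p, str)
                  else system_reliability(p, t) for p in parts]
          if kind == "series":          # every part must survive
              out = 1.0
              for v in vals:
                  out *= v
              return out
          q = 1.0                       # parallel: any one part suffices
          for v in vals:
              q *= 1.0 - v
          return 1.0 - q

      arch = ("series", ["cpu", "bus", ("parallel", ["sensor", "sensor"])])
      print(system_reliability(arch, t=10_000))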

  18. Web-based phenotyping for Tourette Syndrome: Reliability of common co-morbid diagnoses

    PubMed Central

    Darrow, Sabrina M.; Illmann, Cornelia; Gauvin, Caitlin; Osiecki, Lisa; Egan, Crystelle A.; Greenberg, Erica; Eckfield, Monika; Hirschtritt, Matthew E.; Pauls, David L.; Batterson, James R.; Berlin, Cheston M.; Malaty, Irene A.; Woods, Douglas W.; Scharf, Jeremiah; Mathews, Carol

    2015-01-01

    Collecting phenotypic data necessary for genetic analyses of neuropsychiatric disorders is time consuming and costly. Development of web-based phenotype assessments would greatly improve the efficiency and cost-effectiveness of genetic research. However, evaluating the reliability of this approach compared to standard, in-depth clinical interviews is essential. The current study replicates and extends a preliminary report on the utility of a web-based screen for Tourette Syndrome (TS) and common comorbid diagnoses (obsessive compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD)). A subset of individuals who completed a web-based phenotyping assessment for a TS genetic study was invited to participate in semi-structured diagnostic clinical interviews. The data from these interviews were used to determine participants’ diagnostic status for TS, OCD, and ADHD using best estimate procedures, which then served as the gold standard against which diagnoses assigned from the web-based screen data were compared. The results show high rates of agreement for TS. Kappas for OCD and ADHD diagnoses were also high and together demonstrate the utility of this self-report data in comparison with previous diagnoses from clinicians and dimensional assessment methods. PMID:26054936
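
    The agreement statistic behind such comparisons is Cohen's kappa; a minimal sketch with invented binary diagnoses (1 = disorder present) for the web screen versus the best-estimate clinical interview:

      import numpy as np

      def cohens_kappa(a, b):
          """Chance-corrected agreement between two binary raters."""
          a, b = np.asarray(a), np.asarray(b)
          po = np.mean(a == b)                          # observed agreement
          pe = (np.mean(a) * np.mean(b)                 # chance: both yes
                + (1 - np.mean(a)) * (1 - np.mean(b)))  # chance: both no
          return (po - pe) / (1 - pe)

      web = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
      clinic = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
      print(cohens_kappa(web, clinic))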

  19. Web-based phenotyping for Tourette Syndrome: Reliability of common co-morbid diagnoses.

    PubMed

    Darrow, Sabrina M; Illmann, Cornelia; Gauvin, Caitlin; Osiecki, Lisa; Egan, Crystelle A; Greenberg, Erica; Eckfield, Monika; Hirschtritt, Matthew E; Pauls, David L; Batterson, James R; Berlin, Cheston M; Malaty, Irene A; Woods, Douglas W; Scharf, Jeremiah M; Mathews, Carol A

    2015-08-30

    Collecting phenotypic data necessary for genetic analyses of neuropsychiatric disorders is time consuming and costly. Development of web-based phenotype assessments would greatly improve the efficiency and cost-effectiveness of genetic research. However, evaluating the reliability of this approach compared to standard, in-depth clinical interviews is essential. The current study replicates and extends a preliminary report on the utility of a web-based screen for Tourette Syndrome (TS) and common comorbid diagnoses (obsessive compulsive disorder (OCD) and attention deficit/hyperactivity disorder (ADHD)). A subset of individuals who completed a web-based phenotyping assessment for a TS genetic study was invited to participate in semi-structured diagnostic clinical interviews. The data from these interviews were used to determine participants' diagnostic status for TS, OCD, and ADHD using best estimate procedures, which then served as the gold standard against which diagnoses assigned from the web-based screen data were compared. The results show high rates of agreement for TS. Kappas for OCD and ADHD diagnoses were also high and together demonstrate the utility of this self-report data in comparison with previous diagnoses from clinicians and dimensional assessment methods. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Robustness and Reliability of Synergy-Based Myocontrol of a Multiple Degree of Freedom Robotic Arm.

    PubMed

    Lunardini, Francesca; Casellato, Claudia; d'Avella, Andrea; Sanger, Terence D; Pedrocchi, Alessandra

    2016-09-01

    In this study, we test the feasibility of the synergy-based approach for application in the realistic and clinically oriented framework of multi-degree-of-freedom (DOF) robotic control. We developed, and tested online in ten able-bodied subjects, a semi-supervised method to achieve simultaneous, continuous control of two DOFs of a robotic arm, using muscle synergies extracted from upper limb muscles while performing flexion-extension movements of the elbow and shoulder joints in the horizontal plane. To validate the efficacy of the synergy-based approach in extracting reliable control signals, compared to the simple muscle-pair method typically used in commercial applications, we evaluated the repeatability of the algorithm over days, the effect of the arm dynamics on the control performance, and the robustness of the control scheme to the presence of co-contraction between pairs of antagonist muscles. Results showed that, without the need for a daily calibration, all subjects were able to intuitively and easily control the synergy-based myoelectric interface in different scenarios, using both dynamic and isometric muscle contractions. The proposed control scheme was shown to be robust to co-contraction between antagonist muscles, providing better performance compared to the traditional muscle-pair approach. The current study is a first step toward user-friendly application of synergy-based myocontrol of assistive robotic devices.
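
    Muscle synergies of this kind are typically extracted by nonnegative matrix factorization of the EMG envelopes; a minimal sketch under that assumption (random data standing in for recorded envelopes, two synergies for the two controlled DOFs):

      import numpy as np
      from sklearn.decomposition import NMF

      # Hypothetical EMG envelope matrix: 8 muscles x 1000 samples,
      # nonnegative as NMF requires.
      rng = np.random.default_rng(1)
      emg = np.abs(rng.standard_normal((8, 1000)))

      # emg ~= W @ H: W holds per-synergy muscle weights, H the synergy
      # activations that would drive the two DOFs of the robot.
      model = NMF(n_components=2, init="nndsvda", max_iter=500)
      W = model.fit_transform(emg)  # (8 muscles, 2 synergies)
      H = model.components_         # (2 synergies, 1000 samples)
      print(W.shape, H.shape)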

  1. Interpretive Reliability of Six Computer-Based Test Interpretation Programs for the Minnesota Multiphasic Personality Inventory-2.

    PubMed

    Deskovitz, Mark A; Weed, Nathan C; McLaughlan, Joseph K; Williams, John E

    2016-04-01

    The reliability of six Minnesota Multiphasic Personality Inventory-Second edition (MMPI-2) computer-based test interpretation (CBTI) programs was evaluated across a set of 20 commonly appearing MMPI-2 profile codetypes in clinical settings. Evaluation of CBTI reliability comprised examination of (a) interrater reliability, the degree to which raters arrive at similar inferences based on the same CBTI profile and (b) interprogram reliability, the level of agreement across different CBTI systems. Profile inferences drawn by four raters were operationalized using q-sort methodology. Results revealed no significant differences overall with regard to interrater and interprogram reliability. Some specific CBTI/profile combinations (e.g., the CBTI by Automated Assessment Associates on a within normal limits profile) and specific profiles (e.g., the 4/9 profile displayed greater interprogram reliability than the 2/4 profile) were interpreted with variable consensus (α range = .21-.95). In practice, users should consider that certain MMPI-2 profiles are interpreted more or less consensually and that some CBTIs show variable reliability depending on the profile.

  2. A structure-based software reliability allocation using fuzzy analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Chatterjee, Subhashis; Singh, Jeetendra B.; Roy, Arunava

    2015-02-01

    During the design phase of a software system, it is often necessary to evaluate its reliability. At this stage of development, one crucial question arises: 'how to achieve a target reliability of the software?' Reliability allocation methods can be used to set reliability goals for individual components. In this paper, a software reliability allocation model is proposed that incorporates the user's view of the various functions of the software. The proposed reliability allocation method attempts to answer the question 'how reliable should the system components be?' The proposed model will be useful for determining reliability goals at the planning and design phase of a software project, hence making reliability a singular measure for performance evaluation. The proposed model requires a systematic translation of user requirements and preferences into the technical design and reliability of the software. To accomplish this task, a system hierarchy has been established which combines the user's view of the system with those of the software manager and the programmer. The fuzzy analytic hierarchy process (FAHP) is used to derive the required model parameters from the hierarchy. A sensitivity analysis is also carried out in this paper. Finally, an example is given to illustrate the effectiveness and feasibility of the proposed method.
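
    Stripped of the fuzzy arithmetic, the AHP step reduces to deriving priority weights from a pairwise comparison matrix and apportioning the system reliability target; the sketch below uses the crisp geometric-mean method on a hypothetical 3-function judgment matrix (the paper's FAHP would use fuzzy judgments instead):

      import numpy as np

      # Hypothetical pairwise importance of three software functions.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
      w = gm / gm.sum()                          # priority weights

      # Series-system allocation: ln R_i = w_i * ln R_sys, so component
      # targets multiply back to the system target and higher-weight
      # functions absorb a larger share of the failure budget.
      R_sys = 0.99
      alloc = R_sys ** w
      print(w, alloc, alloc.prod())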

  3. Gradient-based reliability maps for ACM-based segmentation of hippocampus.

    PubMed

    Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos

    2014-04-01

    Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models incorporating prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. On top of that, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation faults. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information, regional and whole brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.

  4. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.
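
    The heart of the method is the per-pixel reliability value computed from wrapped second differences of the phase; a compact sketch of that measure (interior pixels only, following the horizontal/vertical/diagonal form described in the paper), assuming a NumPy array of wrapped phase:

      import numpy as np

      def reliability_map(p):
          """1/D reliability from wrapped second differences."""
          w = lambda d: (d + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
          c = p[1:-1, 1:-1]
          H = w(p[1:-1, :-2] - c) - w(c - p[1:-1, 2:])
          V = w(p[:-2, 1:-1] - c) - w(c - p[2:, 1:-1])
          D1 = w(p[:-2, :-2] - c) - w(c - p[2:, 2:])
          D2 = w(p[:-2, 2:] - c) - w(c - p[2:, :-2])
          D = np.sqrt(H**2 + V**2 + D1**2 + D2**2)
          return 1.0 / (D + 1e-12)   # smooth-phase pixels rank first

      phase = np.cumsum(np.random.rand(64, 64), axis=1)  # toy true phase
      wrapped = np.angle(np.exp(1j * phase))
      rel = reliability_map(wrapped)  # unwrap most reliable pixels first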

  5. The Diagnostic Validity and Reliability of an Internet-Based Clinical Assessment Program for Mental Disorders

    PubMed Central

    Klein, Britt; Meyer, Denny; Austin, David William; Abbott, Jo-Anne M

    2015-01-01

    Background Internet-based assessment has the potential to assist with the diagnosis of mental health disorders and overcome the barriers associated with traditional services (eg, cost, stigma, distance). Further to existing online screening programs available, there is an opportunity to deliver more comprehensive and accurate diagnostic tools to supplement the assessment and treatment of mental health disorders. Objective The aim was to evaluate the diagnostic criterion validity and test-retest reliability of the electronic Psychological Assessment System (e-PASS), an online, self-report, multidisorder, clinical assessment and referral system. Methods Participants were 616 adults residing in Australia, recruited online, and representing prospective e-PASS users. Following e-PASS completion, 158 participants underwent a telephone-administered structured clinical interview and 39 participants repeated the e-PASS within 25 days of initial completion. Results With structured clinical interview results serving as the gold standard, diagnostic agreement with the e-PASS varied considerably from fair (eg, generalized anxiety disorder: κ=.37) to strong (eg, panic disorder: κ=.62). Although the e-PASS’ sensitivity also varied (0.43-0.86) the specificity was generally high (0.68-1.00). The e-PASS sensitivity generally improved when reducing the e-PASS threshold to a subclinical result. Test-retest reliability ranged from moderate (eg, specific phobia: κ=.54) to substantial (eg, bulimia nervosa: κ=.87). Conclusions The e-PASS produces reliable diagnostic results and performs generally well in excluding mental disorders, although at the expense of sensitivity. For screening purposes, the e-PASS subclinical result generally appears better than a clinical result as a diagnostic indicator. Further development and evaluation is needed to support the use of online diagnostic assessment programs for mental disorders. Trial Registration Australian and New Zealand Clinical Trials

  6. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes

    PubMed Central

    Bohlin, Jon; Skjerve, Eystein; Ussery, David W

    2008-01-01

    Background The increasing number of sequenced prokaryotic genomes contains a wealth of genomic data that needs to be effectively analysed. A set of statistical tools exists for such analysis, but their strengths and weaknesses have not been fully explored. The statistical methods we are concerned with here are mainly used to examine similarities between archaeal and bacterial DNA from different genomes. These methods compare observed genomic frequencies of fixed-sized oligonucleotides with expected values, which can be determined by genomic nucleotide content, smaller oligonucleotide frequencies, or be based on specific statistical distributions. Advantages of these statistical methods include measurements of phylogenetic relationship with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore the reliability and best-suited applications of some popular methods, which include relative oligonucleotide frequencies (ROF), di- to hexanucleotide zeroth-order Markov methods (ZOM) and the second-order Markov chain method (MCM). Tests were performed on distant homology searches with large DNA sequences, detection of foreign/conserved DNA, and plasmid-host similarity comparisons. Additionally, the reliability of the methods was tested by comparing both real and random genomic DNA. Results Our findings show that the optimal method is context dependent. ROFs were best suited for distant homology searches, whilst the hexanucleotide ZOM and MCM measures were more reliable in terms of phylogeny. The dinucleotide ZOM method produced high correlation values when used to compare real genomes to an artificially constructed random genome with similar %GC, and should therefore be used with care. The tetranucleotide ZOM measure was a good measure to detect horizontally transferred regions, and when used to compare the phylogenetic relationships between plasmids and hosts
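
    As a flavor of the simplest of these measures, relative oligonucleotide frequencies can be compared directly between two sequences; the sketch below computes tetranucleotide frequency vectors and an ROF-style mean absolute difference (toy sequences, purely illustrative):

      from itertools import product
      import numpy as np

      def oligo_freqs(seq, k=4):
          """Observed k-mer frequency vector over the 4**k DNA words."""
          words = ["".join(p) for p in product("ACGT", repeat=k)]
          counts = dict.fromkeys(words, 0)
          for i in range(len(seq) - k + 1):
              word = seq[i:i + k]
              if word in counts:
                  counts[word] += 1
          v = np.array([counts[w] for w in words], dtype=float)
          return v / v.sum()

      a = "ACGTACGTGGCCTTAAACGTGGGTTTACGATCGATCG" * 20
      b = "TTTTACGCGGCGGCGTATATACCCGGGAAATTTCCCG" * 20
      print(np.mean(np.abs(oligo_freqs(a) - oligo_freqs(b))))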

  7. Reliability and applications of statistical methods based on oligonucleotide frequencies in bacterial and archaeal genomes.

    PubMed

    Bohlin, Jon; Skjerve, Eystein; Ussery, David W

    2008-02-28

    The increasing number of sequenced prokaryotic genomes contains a wealth of genomic data that needs to be effectively analysed. A set of statistical tools exists for such analysis, but their strengths and weaknesses have not been fully explored. The statistical methods we are concerned with here are mainly used to examine similarities between archaeal and bacterial DNA from different genomes. These methods compare observed genomic frequencies of fixed-sized oligonucleotides with expected values, which can be determined by genomic nucleotide content, smaller oligonucleotide frequencies, or be based on specific statistical distributions. Advantages of these statistical methods include measurements of phylogenetic relationship with relatively small pieces of DNA sampled from almost anywhere within genomes, detection of foreign/conserved DNA, and homology searches. Our aim was to explore the reliability and best-suited applications of some popular methods, which include relative oligonucleotide frequencies (ROF), di- to hexanucleotide zeroth-order Markov methods (ZOM) and the second-order Markov chain method (MCM). Tests were performed on distant homology searches with large DNA sequences, detection of foreign/conserved DNA, and plasmid-host similarity comparisons. Additionally, the reliability of the methods was tested by comparing both real and random genomic DNA. Our findings show that the optimal method is context dependent. ROFs were best suited for distant homology searches, whilst the hexanucleotide ZOM and MCM measures were more reliable in terms of phylogeny. The dinucleotide ZOM method produced high correlation values when used to compare real genomes to an artificially constructed random genome with similar %GC, and should therefore be used with care. The tetranucleotide ZOM measure was a good measure to detect horizontally transferred regions, and when used to compare the phylogenetic relationships between plasmids and hosts, significant correlation (R2

  8. Reliability and validity of procedure-based assessments in otolaryngology training.

    PubMed

    Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S

    2015-06-01

    To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. Individual domain score, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. There were 3,152 otolaryngology PBAs from 46 otolaryngology trainees analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs : +0.681 and +0.324, respectively). ST had higher cS and pS than CT (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; P < .001). cS and pS increased from CT1 to ST8 showing construct validity (rs : +0.348 and +0.354, respectively; P < .001). The technical skill domain had the highest utilization (98% of PBAs) and was the best predictor of cS and pS (rs : +0.96 and +0.66, respectively). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. NA. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  9. Test-Retest Reliability of Web-Based Retrospective Self-Report of Tobacco Exposure and Risk

    PubMed Central

    Brigham, Janet; Lessov-Schlaggar, Christina N; Javitz, Harold S; Krasnow, Ruth E; McElroy, Mary; Swan, Gary E

    2009-01-01

    Background Retrospectively collected data about the development and maintenance of behaviors that impact health are a valuable source of information. Establishing the reliability of retrospective measures is a necessary step in determining the utility of that methodology and in studying behaviors in the context of risk and protective factors. Objective The goal of this study was to examine the reliability of self-report of a specific health-affecting behavior, tobacco use, and its associated risk and protective factors as examined with a Web-based questionnaire. Methods Core tobacco use and risk behavior questions in the Lifetime Tobacco Use Questionnaire—a closed, invitation-only, password-controlled, Web-based instrument—were administered at a 2-month test-retest interval to a convenience sample of 1229 respondents aged 18 to 78 years. Tobacco use items, which covered cigarettes, cigars, smokeless tobacco, and pipe tobacco, included frequency of use, amount used, first use, and a pack-years calculation. Risk-related questions included family history of tobacco use, secondhand smoke exposure, alcohol use, and religiosity. Results Analyses of test-retest reliability indicated modest (.30 to .49), moderate (.50 to .69), or high (.70 to 1.00) reliability across nearly all questions, with minimal reliability differences in analyses by sex, age, and income grouping. Most measures of tobacco use history showed moderate to high reliability, particularly for age of first use, age of first weekly and first daily smoking, and age at first or only quit attempt. Some measures of family tobacco use history, secondhand smoke exposure, alcohol use, and religiosity also had high test-retest reliability. Reliability was modest for subjective response to first use. Conclusions The findings reflect the stability of retrospective recall of tobacco use and risk factor self-report responses in a Web-questionnaire context. Questions that are designed and tested with psychometric

  10. Test-retest reliability of web-based retrospective self-report of tobacco exposure and risk.

    PubMed

    Brigham, Janet; Lessov-Schlaggar, Christina N; Javitz, Harold S; Krasnow, Ruth E; McElroy, Mary; Swan, Gary E

    2009-08-11

    Retrospectively collected data about the development and maintenance of behaviors that impact health are a valuable source of information. Establishing the reliability of retrospective measures is a necessary step in determining the utility of that methodology and in studying behaviors in the context of risk and protective factors. The goal of this study was to examine the reliability of self-report of a specific health-affecting behavior, tobacco use, and its associated risk and protective factors as examined with a Web-based questionnaire. Core tobacco use and risk behavior questions in the Lifetime Tobacco Use Questionnaire-a closed, invitation-only, password-controlled, Web-based instrument-were administered at a 2-month test-retest interval to a convenience sample of 1229 respondents aged 18 to 78 years. Tobacco use items, which covered cigarettes, cigars, smokeless tobacco, and pipe tobacco, included frequency of use, amount used, first use, and a pack-years calculation. Risk-related questions included family history of tobacco use, secondhand smoke exposure, alcohol use, and religiosity. Analyses of test-retest reliability indicated modest (.30 to .49), moderate (.50 to .69), or high (.70 to 1.00) reliability across nearly all questions, with minimal reliability differences in analyses by sex, age, and income grouping. Most measures of tobacco use history showed moderate to high reliability, particularly for age of first use, age of first weekly and first daily smoking, and age at first or only quit attempt. Some measures of family tobacco use history, secondhand smoke exposure, alcohol use, and religiosity also had high test-retest reliability. Reliability was modest for subjective response to first use. The findings reflect the stability of retrospective recall of tobacco use and risk factor self-report responses in a Web-questionnaire context. Questions that are designed and tested with psychometric scrutiny can yield reliable results in a Web setting.

  11. Reliability-based aeroelastic optimization of a composite aircraft wing via fluid-structure interaction of high fidelity solvers

    NASA Astrophysics Data System (ADS)

    Nikbay, M.; Fakkusoglu, N.; Kuru, M. N.

    2010-06-01

    We consider reliability-based aeroelastic optimization of an AGARD 445.6 composite aircraft wing with stochastic parameters. Both commercial engineering software and an in-house reliability analysis code are employed in this high-fidelity computational framework. The finite-volume-based flow solver Fluent is used to solve the 3D Euler equations, Gambit is the fluid-domain mesh generator, and Catia-V5-R16 is used as a parametric 3D solid modeler. Abaqus, a structural finite element solver, is used to compute the structural response of the aeroelastic system. The mesh-based parallel code coupling interface MPCCI-3.0.6 is used to exchange pressure and displacement information between Fluent and Abaqus to perform a loosely coupled fluid-structure interaction employing a staggered algorithm. To compute the probability of failure for the probabilistic constraints, one of the well-known MPP (Most Probable Point) based reliability analysis methods, FORM (First Order Reliability Method), is implemented in Matlab. This in-house Matlab code is embedded in the multidisciplinary optimization workflow, which is driven by Modefrontier. Modefrontier 4.1 is used for its gradient-based optimization algorithm NBI-NLPQLP, which is based on the sequential quadratic programming method. A Pareto-optimal solution for the stochastic aeroelastic optimization is obtained for a specified reliability index, and the results are compared with those of a deterministic aeroelastic optimization.
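
    FORM itself is compact enough to sketch: the HL-RF iteration searches for the most probable failure point in standard-normal space and returns the reliability index beta. The toy limit state below stands in for the coupled Fluent/Abaqus response, which in the paper is what g(u) actually evaluates:

      import numpy as np

      def g(u):  # toy smooth limit state: capacity minus demand
          return 3.0 - u[0] - 0.5 * u[1] ** 2

      def grad_g(u, h=1e-6):  # central finite differences
          return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                           for e in np.eye(2)])

      u = np.zeros(2)
      for _ in range(50):                        # HL-RF fixed point
          gr = grad_g(u)
          u = (gr @ u - g(u)) * gr / (gr @ gr)   # project onto g ~ 0
      beta = np.linalg.norm(u)                   # reliability index
      print(beta)                                # Pf ~ Phi(-beta)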

  12. High-Reliability Principles Must Be Tied to Value-Based Outcomes.

    PubMed

    Wasden, Mitchell L

    2017-01-01

    WellStar Health System and the Medical University of South Carolina (MUSC), highlighted in this issue's feature articles, are two organizations seeking to drive high reliability by educating leaders and incorporating high-reliability principles into their quality improvement (QI) efforts. The organizations have taken slightly different approaches to executing on high reliability, yet both are encouraged by the apparent success of high-reliability principles in other industries. The high-reliability framework is often applied to healthcare despite the limitations of comparing healthcare organizations to nuclear reactors, commercial airlines, and aircraft carriers. Notably, these industries were classified as "highly reliable" after the fact, meaning their employees and leadership already had existing routines and qualities that researchers would describe, after direct observation, as a shared distinguishing feature. Thus, in contrast to Lean, Six Sigma, and other QI movements that have been applied in healthcare, industries such as nuclear power and aviation came with tools and quantitative processes already embedded. The high-reliability framework is qualitative, while the actual hard, quantitative tools and processes differ by industry. This fact makes adopting high-reliability principles difficult because the tools must be created and scaled for each industry. Healthcare is still in the early stages of building these tools to support the high-reliability framework, and the articles by Saunders and Brennan (at WellStar) and Cawley and Scheurer (at MUSC) describe early attempts to provide insights into launching high-reliability principles and tools in healthcare. High reliability is a theoretical construct that is difficult to implement without a concrete framework for execution. The five characteristics that frame a high-reliability organization (HRO), as outlined by Cawley and Scheurer, are (1) preoccupation with failure, (2) reluctance to simplify

  13. An asymmetric PCR-based, reliable and rapid single-tube native DNA engineering strategy

    PubMed Central

    2012-01-01

    Background Widely used restriction-dependent cloning methods are labour-intensive and time-consuming, while several types of ligase-independent cloning approaches have inherent limitations. A rapid and reliable method of cloning native DNA sequences into desired plasmids are highly desired. Results This paper introduces ABI-REC, a novel strategy combining asymmetric bridge PCR with intramolecular homologous recombination in bacteria for native DNA cloning. ABI-REC was developed to precisely clone inserts into defined location in a directional manner within recipient plasmids. It featured an asymmetric 3-primer PCR performed in a single tube that could robustly amplify a chimeric insert-plasmid DNA sequence with homologous arms at both ends. Intramolecular homologous recombination occurred to the chimera when it was transformed into E.coli and produced the desired recombinant plasmids with high efficiency and fidelity. It is rapid, and does not involve any operational nucleotides. We proved the reliability of ABI-REC using a double-resistance reporter assay, and investigated the effects of homology and insert length upon its efficiency. We found that 15 bp homology was sufficient to initiate recombination, while 25 bp homology had the highest cloning efficiency. Inserts up to 4 kb in size could be cloned by this method. The utility and advantages of ABI-REC were demonstrated through a series of pig myostatin (MSTN) promoter and terminator reporter plasmids, whose transcriptional activity was assessed in mammalian cells. We finally used ABI-REC to construct a pig MSTN promoter-terminator cassette reporter and showed that it could work coordinately to express EGFP. Conclusions ABI-REC has the following advantages: (i) rapid and highly efficient; (ii) native DNA cloning without introduction of extra bases; (iii) restriction-free; (iv) easy positioning of directional and site-specific recombination owing to formulated primer design. ABI-REC is a novel approach to

  14. Reliability Optimization Design for Contact Springs of AC Contactors Based on Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhao, Sheng; Su, Xiuping; Wu, Ziran; Xu, Chengwen

    The paper illustrates the procedure of reliability optimization modeling for contact springs of AC contactors under nonlinear multi-constraint conditions. The adaptive genetic algorithm (AGA) is utilized to perform reliability optimization on the contact spring parameters of a type of AC contactor. A method that changes crossover and mutation rates at different times in the AGA can effectively avoid premature convergence, and experimental tests are performed after optimization. The experimental result shows that the mass of each optimized spring is reduced by 16.2%, while the reliability increases to 99.9% from 94.5%. The experimental result verifies the correctness and feasibility of this reliability optimization designing method.
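
    One common adaptive rule (in the spirit of the paper's time-varying rates, though the exact schedule there may differ) lowers crossover and mutation probabilities for above-average individuals and late generations while keeping them high for poor individuals, which delays premature convergence:

      def adaptive_rates(gen, max_gen, fit, mean_fit, max_fit,
                         pc_hi=0.9, pc_lo=0.5, pm_hi=0.1, pm_lo=0.01):
          """Per-individual crossover/mutation rates; all bounds are
          illustrative defaults, not the paper's tuned values."""
          anneal = 1.0 - gen / max_gen        # explore early, exploit late
          if fit >= mean_fit:                 # protect good individuals
              span = (max_fit - fit) / max(max_fit - mean_fit, 1e-12)
              pc = pc_lo + (pc_hi - pc_lo) * span * anneal
              pm = pm_lo + (pm_hi - pm_lo) * span * anneal
          else:                               # disrupt poor individuals
              pc, pm = pc_hi, pm_hi
          return pc, pm

      print(adaptive_rates(gen=10, max_gen=100,
                           fit=0.8, mean_fit=0.7, max_fit=0.9))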

  15. The development and reliability of a simple field based screening tool to assess core stability in athletes.

    PubMed

    O'Connor, S; McCaffrey, N; Whyte, E; Moran, K

    2016-07-01

    To adapt the trunk stability test to facilitate further sub-classification of higher levels of core stability in athletes for use as a screening tool. To establish the inter-tester and intra-tester reliability of this adapted core stability test. Reliability study. Collegiate athletic therapy facilities. Fifteen physically active male subjects (aged 19.46 ± 0.63 years) free from any orthopaedic or neurological disorders were recruited from a convenience sample of collegiate students. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were computed to establish inter-tester and intra-tester reliability. Excellent ICC values were observed in the adapted core stability test for inter-tester reliability (0.97), and good to excellent values for intra-tester reliability (0.73-0.90). While the 95% CIs were narrow for inter-tester reliability, the 95% CIs of Testers A and C were widely distributed compared to Tester B. The adapted core stability test developed in this study is a quick and simple field-based test to administer that can further subdivide athletes with high levels of core stability. The test demonstrated high inter-tester and intra-tester reliability. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Dynamic fatigue reliability of contact wires based on dynamic simulation of high-speed pantograph-catenary

    NASA Astrophysics Data System (ADS)

    Hu, Ping; Liang, Jie; Fan, Wenli

    2017-05-01

    This paper proposes a dynamic fatigue reliability method for contact wires based on dynamic simulation of a high-speed pantograph-catenary system. First, the Weibull distribution is adopted to describe the fatigue life of contact wires, building the fatigue reliability model of the contact wires. Ten finite element models of the elastic-chain-type contact network are then set up with EN50367 parameters, and the stress time history of the weakest unit on the contact wires is calculated through finite element simulation. Second, the mean value and amplitude of each stress cycle are calculated by means of the rain-flow counting method, and these values are put into the reliability model to draw the curve of contact-wire fatigue reliability over time. The numerical example shows that the fatigue reliability of the contact wires falls below 0.98 once the wires have been in use for more than 10 years. The method provided in this paper can estimate the fatigue reliability of contact wires more accurately, and it can serve as a reference for the reliability design of the catenary system and the formulation of preventive maintenance plans.
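
    The post-processing chain (rain-flow counting, Miner damage, Weibull reliability) can be sketched end to end; everything numeric below is illustrative (synthetic stress signal, assumed S-N constants, assumed traffic volume and Weibull shape), not the paper's calibrated values, and the third-party 'rainflow' package is assumed to be installed:

      import numpy as np
      import rainflow  # third-party package, assumed available

      # Synthetic stand-in for the simulated stress history (MPa) of the
      # weakest contact-wire element over one pantograph pass.
      rand = np.random.default_rng(2)
      t = np.linspace(0.0, 2.0, 2000)
      stress = (80 + 25 * np.sin(2 * np.pi * 5 * t)
                + 3 * rand.standard_normal(t.size))

      m, C = 3.0, 1.0e12                 # assumed Basquin S-N constants
      d_pass = sum(cnt * sr ** m / C     # Miner damage for one pass
                   for sr, cnt in rainflow.count_cycles(stress))
      passes_per_year = 2.0e5            # assumed traffic volume
      life_years = 1.0 / (d_pass * passes_per_year)

      years = np.arange(1, 21)
      eta, shape = life_years, 3.0       # Weibull scale/shape (assumed)
      R = np.exp(-(years / eta) ** shape)
      print(R[9])                        # reliability after 10 years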

  17. Post-illumination pupil response after blue light: Reliability of optimized melanopsin-based phototransduction assessment.

    PubMed

    van der Meijden, Wisse P; te Lindert, Bart H W; Bijlenga, Denise; Coppens, Joris E; Gómez-Herrero, Germán; Bruijel, Jessica; Kooij, J J Sandra; Cajochen, Christian; Bourgin, Patrice; Van Someren, Eus J W

    2015-10-01

    ± 3.6 yr) we examined the potential confounding effects of dark adaptation, time of the day (morning vs. afternoon), body posture (upright vs. supine position), and 24-h environmental light history on the PIPR assessment. Mixed effect regression models were used to analyze these possible confounders. A supine position caused larger PIPR-mm (β = 0.29 mm, SE = 0.10, p = 0.01) and PIPR-% (β = 4.34%, SE = 1.69, p = 0.02), which was due to an increase in baseline dark pupil diameter; this finding is of relevance for studies requiring a supine posture, as in functional Magnetic Resonance Imaging, constant routine protocols, and bed-ridden patients. There were no effects of dark adaptation, time of day, and light history. In conclusion, the presented method provides a reliable and robust assessment of the PIPR to allow for studies on individual differences in melanopsin-based phototransduction and effects of interventions.

  18. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    PubMed Central

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    Purpose This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL’s ball-and-stick approach. Results Monte Carlo experiments demonstrated that JARD’s volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  19. Reliable noninvasive genotyping based on excremental PCR of nuclear DNA purified with a magnetic bead protocol.

    PubMed

    Flagstad, O; Røed, K; Stacy, J E; Jakobsen, K S

    1999-05-01

    A new protocol for extraction of DNA from faeces is presented. The protocol involves gentle washing of the surface of the faeces followed by a very simple DNA extraction utilizing the wash supernatant as the source of DNA. Unlike most other protocols, it does not involve the use of proteinase K and/or organic extraction, but is instead based on adsorption of the DNA to magnetic beads. The protocol was tested by microsatellite genotyping across six loci for sheep and reindeer faeces. Comparison with DNA extracted from blood demonstrated that the protocol was very reliable, even when used on material stored for a long time. The protocol was compared with another simple, solid-phase DNA-binding protocol, with the result that the bead-based protocol gave a slightly better amplification success and a lower frequency of allelic drop-outs. Furthermore, our experiments showed that the surface wash prior to DNA extraction is a crucial step, not only for our protocol, but for other solid-phase protocols as well.

  20. New Atrophic Acne Scar Classification: Reliability of Assessments Based on Size, Shape, and Number.

    PubMed

    Kang, Sewon; Lozada, Vicente Torres; Bettoli, Vincenzo; Tan, Jerry; Rueda, Maria Jose; Layton, Alison; Petit, Lauren; Dréno, Brigitte

    2016-06-01

    Post-acne atrophic scarring is a major concern for which standardized outcome measures are needed. Traditionally, this type of scar has been classified based on shape; but a survey of practicing dermatologists has shown that atrophic scar morphology has not been defined well enough to allow good agreement in clinical classification. Reliance on clinical assessment is still needed at present, since objective tools are not yet available in routine practice.
    Evaluate classification for atrophic acne scars by shape, size, and facial location and establish reliability in assessments.
    We conducted a non-interventional study with dermatologists performing live clinical assessments of atrophic acne scars. To objectively compare identification of lesions, individual lesions were marked on a high-resolution photo of the patient that was displayed on a computer during the clinical evaluation. The Jacob clinical classification system was used to define three primary shapes of scars: 1) icepick, 2) boxcar, and 3) rolling. To determine agreement for classification by size, independent technicians assessed the investigators' markings on digital images. Identical localization of scars was denoted if the maximal distance between their centers was ≤ 60 pixels (approximately 3 mm). Raters assessed scars on the same patients twice (morning/afternoon). Aggregate models of rater assessments were created and analyzed for agreement.
    Mean scar counts per subject ranged from 15.75 to 40.25 across raters. Approximately 50% of scars were identified by all raters and ~75% of scars were identified by at least 2 of 3 raters (weak agreement, Kappa pairwise agreement 0.30). Agreement between consecutive counts was moderate, with Kappa index ranging from 0.26 to 0.47 (after exclusion of one outlier investigator who had significantly higher counts than all others). Shape classifications of icepick, boxcar, and rolling differed significantly between raters and even
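
    Pairwise rater agreement of the kind reported here can be checked with Cohen's kappa; a minimal sketch with scikit-learn, using hypothetical shape labels for the same matched scars rather than the study's data:

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical shape classifications of the same matched scars by two raters.
      rater_a = ["icepick", "boxcar", "rolling", "icepick", "boxcar"]
      rater_b = ["icepick", "rolling", "rolling", "boxcar", "boxcar"]

      kappa = cohen_kappa_score(rater_a, rater_b)
      print(f"pairwise Cohen's kappa: {kappa:.2f}")  # compare against the ~0.30 reported above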

  1. Optimal Preventive Maintenance Schedule based on Lifecycle Cost and Time-Dependent Reliability

    DTIC Science & Technology

    2011-11-10

    cost PC, the inspection cost IC, and an expected variable cost EVC [2, 32]. These costs are a function of quality and reliability. The lifecycle ... expected variable cost EVC is a function of the time-dependent reliability, which is used to estimate the expected present value of repairing and/or
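
    Reading around the elisions, the lifecycle cost in such models is the sum of a preventive maintenance cost, an inspection cost, and an expected variable cost driven by the time-dependent reliability; a plausible reconstruction in LaTeX (assumed notation, not the report's exact formula):

      C_{lifecycle} = PC + IC + EVC, \qquad EVC = f\big(R(t)\big)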

  2. Instrumentation and Control Needs for Reliable Operation of Lunar Base Surface Nuclear Power Systems

    NASA Technical Reports Server (NTRS)

    Turso, James; Chicatelli, Amy; Bajwa, Anupa

    2005-01-01

    needed to enable this critical functionality of autonomous operation. It will be imperative to consider instrumentation and control requirements in parallel to system configuration development so as to identify control-related, as well as integrated system-related, problem areas early to avoid potentially expensive work-arounds. This paper presents an overview of the enabling technologies necessary for the development of reliable, autonomous lunar base nuclear power systems with an emphasis on system architectures and off-the-shelf algorithms rather than hardware. Autonomy needs are presented in the context of a hypothetical lunar base nuclear power system. The scenarios and applications presented are hypothetical in nature, based on information from open-literature sources, and only intended to provoke thought and provide motivation for the use of autonomous, intelligent control and diagnostics.

  3. Is Teacher Assessment Reliable or Valid for High School Students under a Web-Based Portfolio Environment?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Wu, Bing-Hong

    2012-01-01

    This study explored the reliability and validity of teacher assessment under a Web-based portfolio assessment environment (or Web-based teacher portfolio assessment). Participants were 72 eleventh graders taking the "Computer Application" course. The students performed portfolio creation, inspection, self- and peer-assessment using the Web-based…

  4. Is Learner Self-Assessment Reliable and Valid in a Web-Based Portfolio Environment for High School Students?

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Liang, Chaoyun; Chen, Yi-Hui

    2013-01-01

    This study explored the reliability and validity of Web-based portfolio self-assessment. Participants were 72 senior high school students enrolled in a computer application course. The students created learning portfolios, viewed peers' work, and performed self-assessment on the Web-based portfolio assessment system. The results indicated: 1)…

  6. Use of viral promoters in mammalian cell-based bioassays: How reliable?

    PubMed Central

    Betrabet, Shrikant S; Choudhuri, Jyoti; Gill-Sharma, Manjit

    2004-01-01

    Cell-based bioassays have been suggested for screening of hormones and drug bioactivities. They are a plausible alternative to animal-based methods. The technique used is called a receptor/reporter system. The receptor/reporter system was initially developed as a research technique to understand gene function. Often reporter constructs containing viral promoters were used because they could be expressed with very 'high' magnitude in a variety of cell types in the laboratory. On the other hand, mammalian genes are expressed in a cell/tissue-specific manner, which makes them (i.e. cells/tissues) specialized for specific function in vivo. Therefore, if the receptor/reporter system is to be used as a cell-based screen for testing of hormones and drugs for human therapy, then the choice of cell line as well as the promoter in the reporter module is of prime importance so as to get a realistic measure of the bioactivities of 'test' compounds. We evaluated two conventionally used viral promoters and a natural mammalian promoter, regulated by the steroid hormone progesterone, in a cell-based receptor/reporter system. The promoters were spliced into vectors expressing the enzyme CAT (chloramphenicol acetyl transferase), which served as a reporter of their magnitudes and consistencies in controlling gene expression. They were introduced into breast cell lines T47D and MCF-7, which served as a cell-based source of progesterone receptors. The yardstick of their reliability was the highest magnitude as well as consistency in CAT expression on induction by sequential doses of progesterone. All the promoters responded to induction by progesterone doses ranging from 10⁻¹² to 10⁻⁶ molar by expressing CAT enzyme, albeit with varying magnitudes and consistencies. The natural mammalian promoter showed the most coherence in magnitude as well as dose-dependent expression profile in both cell lines. Our study casts doubt on the use of viral promoters in a cell-based bioassay for measuring bioactivities of

  7. A reliability assessment of constrained spherical deconvolution-based diffusion-weighted magnetic resonance imaging in individuals with chronic stroke.

    PubMed

    Snow, Nicholas J; Peters, Sue; Borich, Michael R; Shirzad, Navid; Auriat, Angela M; Hayward, Kathryn S; Boyd, Lara A

    2016-01-15

    Diffusion-weighted magnetic resonance imaging (DW-MRI) is commonly used to assess white matter properties after stroke. Novel work is utilizing constrained spherical deconvolution (CSD) to estimate complex intra-voxel fiber architecture unaccounted for with tensor-based fiber tractography. However, the reliability of CSD-based tractography has not been established in people with chronic stroke. The aim of this study was to establish the reliability of CSD-based DW-MRI in chronic stroke. High-resolution DW-MRI was performed in ten adults with chronic stroke during two separate sessions. Deterministic region of interest-based fiber tractography using CSD was performed by two raters. Mean fractional anisotropy (FA), apparent diffusion coefficient (ADC), tract number, and tract volume were extracted from reconstructed fiber pathways in the corticospinal tract (CST) and superior longitudinal fasciculus (SLF). Callosal fiber pathways connecting the primary motor cortices were also evaluated. Inter-rater and test-retest reliability were determined by intra-class correlation coefficients (ICCs). ICCs revealed excellent reliability for FA and ADC in ipsilesional (0.86-1.00; p<0.05) and contralesional hemispheres (0.94-1.00; p<0.0001), for CST and SLF fibers; and excellent reliability for all metrics in callosal fibers (0.85-1.00; p<0.05). ICC ranged from poor to excellent for tract number and tract volume in ipsilesional (-0.11 to 0.92; p≤0.57) and contralesional hemispheres (-0.27 to 0.93; p≤0.64), for CST and SLF fibers. Like other select DW-MRI approaches, CSD-based tractography is a reliable approach to evaluate FA and ADC in major white matter pathways in chronic stroke. Future work should address the reproducibility and utility of CSD-based metrics of tract number and tract volume. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. An Acetylcholinesterase-Based Chronoamperometric Biosensor for Fast and Reliable Assay of Nerve Agents

    PubMed Central

    Pohanka, Miroslav; Adam, Vojtech; Kizek, Rene

    2013-01-01

    The enzyme acetylcholinesterase (AChE) is an important part of the cholinergic nervous system, where it stops neurotransmission by hydrolysis of the neurotransmitter acetylcholine. It is sensitive to inhibition by organophosphate and carbamate insecticides, some Alzheimer disease drugs, secondary metabolites such as aflatoxins, and nerve agents used in chemical warfare. When immobilized on a sensor (physico-chemical transducer), it can be used for assay of these inhibitors. In the experiments described herein, an AChE-based electrochemical biosensor using screen-printed electrode systems was prepared. The biosensor was used for assay of nerve agents such as sarin, soman, tabun and VX. The limits of detection achieved in a measuring protocol lasting ten minutes were 7.41 × 10⁻¹² mol/L for sarin, 6.31 × 10⁻¹² mol/L for soman, 6.17 × 10⁻¹¹ mol/L for tabun, and 2.19 × 10⁻¹¹ mol/L for VX. The assay was reliable, with minor interferences caused by the organic solvents ethanol, methanol, isopropanol and acetonitrile. Isopropanol was chosen as a suitable medium for processing lipophilic samples. PMID:23999806

  9. Science-Based Simulation Model of Human Performance for Human Reliability Analysis

    SciTech Connect

    Dana L. Kelly; Ronald L. Boring; Ali Mosleh; Carol Smidts

    2011-10-01

    Human reliability analysis (HRA), a component of an integrated probabilistic risk assessment (PRA), is the means by which the human contribution to risk is assessed, both qualitatively and quantitatively. However, among the literally dozens of HRA methods that have been developed, most cannot fully model and quantify the types of errors that occurred at Three Mile Island. Furthermore, all of the methods lack a solid empirical basis, relying heavily on expert judgment or empirical results derived in non-reactor domains. Finally, all of the methods are essentially static, and are thus unable to capture the dynamics of an accident in progress. The objective of this work is to begin exploring a dynamic simulation approach to HRA, one whose models have a basis in psychological theories of human performance, and whose quantitative estimates have an empirical basis. This paper highlights a plan to formalize collaboration among the Idaho National Laboratory (INL), the University of Maryland, and The Ohio State University (OSU) to continue development of a simulation model initially formulated at the University of Maryland. Initial work will focus on enhancing the underlying human performance models with the most recent psychological research, and on planning follow-on studies to establish an empirical basis for the model, based on simulator experiments to be carried out at the INL and at the OSU.

  10. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance, robustness and reliability based designs are studied. By specifying the desired outputs in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  11. Reliability-Based Stability Analysis of Rock Slopes Using Numerical Analysis and Response Surface Method

    NASA Astrophysics Data System (ADS)

    Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.

    2017-08-01

    While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques to perform probabilistic stability analysis by considering the associated uncertainties in the analysis parameters. However, it is not possible to directly use FORM in numerical slope stability evaluations, as it requires definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is performed by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, at the cost of an error of 24% in accuracy.
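
    The efficiency trade-off between FORM and MCS is easy to see on a toy limit state. A minimal sketch, assuming a linear performance function g = R - S with independent normal resistance and load (illustrative numbers, not the Sumela wedge model):

      import numpy as np
      from scipy.stats import norm

      mu_R, sigma_R = 10.0, 1.5   # hypothetical resistance statistics
      mu_S, sigma_S = 6.0, 2.0    # hypothetical load statistics

      # FORM: for linear g with normal variables, the Hasofer-Lind reliability
      # index has a closed form, and Pf = Phi(-beta).
      beta = (mu_R - mu_S) / np.sqrt(sigma_R**2 + sigma_S**2)
      pf_form = norm.cdf(-beta)

      # Monte Carlo simulation of the same limit state for comparison.
      rng = np.random.default_rng(0)
      g = rng.normal(mu_R, sigma_R, 1_000_000) - rng.normal(mu_S, sigma_S, 1_000_000)
      pf_mcs = (g < 0).mean()

      print(f"beta = {beta:.3f}, Pf(FORM) = {pf_form:.2e}, Pf(MCS) = {pf_mcs:.2e}")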

  12. Shock reliability analysis and improvement of MEMS electret-based vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Renaud, M.; Fujita, T.; Goedbloed, M.; de Nooijer, C.; van Schaijk, R.

    2015-10-01

    Vibration energy harvesters can serve as a replacement solution to batteries for powering tire pressure monitoring systems (TPMS). Autonomous wireless TPMS powered by a microelectromechanical system (MEMS) electret-based vibration energy harvester have been demonstrated. The mechanical reliability of the MEMS harvester still has to be assessed in order to bring the harvester to the requirements of the consumer market. It should survive the mechanical shocks occurring in the tire environment. A testing procedure to quantify the shock resilience of harvesters is described in this article. Our first generation of harvesters has a shock resilience of 400 g, which is far from sufficient for the targeted application. In order to improve this, the first step is to understand the failure mechanism. Failure is found to occur in the form of fracture of the device’s springs. It results from impacts between the anchors of the springs when the harvester undergoes a shock. The shock resilience of the harvesters can be improved by redirecting these impacts to nonvital parts of the device. With this philosophy in mind, we design three types of shock absorbing structures and test their effect on the shock resilience of our MEMS harvesters. The solution leading to the best results consists of rigid silicon stoppers covered by a layer of Parylene. The shock resilience of the harvesters is brought above 2500 g. Results in the same range are also obtained with flexible silicon bumpers, which are simpler to manufacture.

  13. Autonomous, Decentralized Grid Architecture: Prosumer-Based Distributed Autonomous Cyber-Physical Architecture for Ultra-Reliable Green Electricity Networks

    SciTech Connect

    2012-01-11

    GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.

  14. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    NASA Astrophysics Data System (ADS)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods to improve the reliability of a system, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into optimum redundancy configuration for design optimization, in which reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of a critical aircraft system are computed. The results show that this method is convenient and workable, and, upon appropriate modifications, applicable to the redundancy configuration and optimization of various designs. The method thus has good practical value.
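
    The flavor of such a search can be conveyed with an exhaustive direct search over redundancy levels for a small series system; the reliabilities, costs, and budget below are hypothetical, and a brute-force search stands in for the paper's exact method:

      from itertools import product

      rel  = [0.90, 0.85, 0.95]   # hypothetical component reliabilities
      cost = [2.0, 3.0, 1.5]      # hypothetical unit costs
      budget = 15.0

      def system_reliability(n):
          # Series system of parallel-redundant component groups.
          r = 1.0
          for ri, ni in zip(rel, n):
              r *= 1.0 - (1.0 - ri) ** ni
          return r

      # Enumerate 1-3 copies of each component and keep the best feasible design.
      best = max(
          (n for n in product(range(1, 4), repeat=len(rel))
           if sum(c * k for c, k in zip(cost, n)) <= budget),
          key=system_reliability,
      )
      print(best, f"-> reliability {system_reliability(best):.4f}")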

  15. A GC/MS-based metabolomic approach for reliable diagnosis of phenylketonuria.

    PubMed

    Xiong, Xiyue; Sheng, Xiaoqi; Liu, Dan; Zeng, Ting; Peng, Ying; Wang, Yichao

    2015-11-01

    ), which showed that phenylacetic acid may be used as a reliable discriminator for the diagnosis of PKU. The low false positive rate (1-specificity, 0.064) can be eliminated or at least greatly reduced by simultaneously referring to other markers, especially phenylpyruvic acid, a unique marker in PKU. Additionally, this standard was obtained with high sensitivity and specificity in a less invasive manner for diagnosing PKU compared with the Phe/Tyr ratio. Therefore, we conclude that urinary metabolomic information based on the improved oximation-silylation method together with GC/MS may be reliable for the diagnosis and differential diagnosis of PKU.

  16. VLSI reliability

    SciTech Connect

    Sabnis, A.G.

    1990-01-01

    This book presents major topics in IC reliability from basic concepts to packaging issues. Other topics covered include failure analysis techniques, radiation effects, and reliability assurance and qualification. This book offers insight into the practical aspects of VLSI reliability.

  17. Reliability prediction for evolutionary product in the conceptual design phase using neural network-based fuzzy synthetic assessment

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Huang, Hong-Zhong; Ling, Dan

    2013-03-01

    Reliability prediction plays an important role in product lifecycle management. It has been used to assess various reliability indices (such as reliability, availability and mean time to failure) before a new product is physically built and/or put into use. In this article, a novel approach is proposed to facilitate reliability prediction for evolutionary products during their early design stages. Due to the lack of sufficient data in the conceptual design phase, reliability prediction is not a straightforward task. Taking account of the information from existing similar products and knowledge from domain experts, a neural network-based fuzzy synthetic assessment (FSA) approach is proposed to predict the reliability indices that a new evolutionary product could achieve. The proposed approach takes advantage of the capability of the back-propagation neural network in terms of constructing highly non-linear functional relationship and combines both the data sets from existing similar products and subjective knowledge from domain experts. It is able to reach a more accurate prediction than the conventional FSA method reported in the literature. The effectiveness and advantages of the proposed method are demonstrated via a case study of the fuel injection pump and a comparative study.
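
    The back-propagation building block of such an approach can be sketched with a small neural network regressor that maps product attributes and expert-derived scores to a reliability index; a minimal illustration with scikit-learn (synthetic data, hypothetical feature semantics, not the paper's model):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)
      # Rows: existing similar products; columns: normalized design attributes
      # and expert fuzzy-assessment scores (all hypothetical).
      X = rng.uniform(0.0, 1.0, size=(40, 5))
      y = 0.90 + 0.08 * X[:, 0] - 0.05 * X[:, 1]    # synthetic reliability index

      net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
      net.fit(X, y)

      new_design = np.array([[0.6, 0.3, 0.5, 0.7, 0.2]])  # hypothetical new product
      print(f"predicted reliability index: {net.predict(new_design)[0]:.3f}")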

  18. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  19. Reliability-based management of buried pipelines considering external corrosion defects

    NASA Astrophysics Data System (ADS)

    Miran, Seyedeh Azadeh

    Corrosion is one of the main deterioration mechanisms that degrade energy pipeline integrity, as pipelines transfer corrosive fluids or gases and interact with corrosive environments. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and are able to account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (a sub-system), where each sub-system is considered as a series system of detected and newly generated defects within that sub-system.
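
    The role of the power-law growth model in the failure-probability calculation can be sketched with plain Monte Carlo; the lognormal parameters below are illustrative stand-ins for the Bayesian posteriors fitted to the ILI data:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200_000

      # Power-law growth of maximum defect depth: d(t) = a * (t - t0)**b.
      a = rng.lognormal(mean=-1.5, sigma=0.4, size=n)   # hypothetical
      b = rng.lognormal(mean=-0.2, sigma=0.2, size=n)   # hypothetical
      t0, wall = 5.0, 10.0    # assumed initiation year and wall thickness (mm)

      for t in (10, 20, 30):
          depth = a * (t - t0) ** b
          pf = (depth > 0.8 * wall).mean()  # small-leak criterion: 80% of wall
          print(f"t = {t:>2} yr: P(small leak) ~ {pf:.4f}")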

  20. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys

    PubMed Central

    2015-01-01

    Background: The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries’ populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. Methods and Findings: We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. Conclusions: The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations. PMID:26131563

  1. Stochastic Analysis of Waterhammer and Applications in Reliability-Based Structural Design for Hydro Turbine Penstocks

    SciTech Connect

    Zhang, Qin Fen; Karney, Professor Byran W.; Suo, Prof. Lisheng; Colombo, Dr. Andrew

    2011-01-01

    Abstract: The randomness of transient events, and the variability in factors which influence the magnitudes of resultant pressure fluctuations, ensure that waterhammer and surges in a pressurized pipe system are inherently stochastic. To bolster and improve reliability-based structural design, a stochastic model of transient pressures is developed for water conveyance systems in hydropower plants. The statistical characteristics and probability distributions of key factors in boundary conditions, initial states and hydraulic system parameters are analyzed based on a large record of observed data from hydro plants in China; the statistical characteristics and probability distributions of annual maximum waterhammer pressures are then simulated using the Monte Carlo method and verified against an analytical probabilistic model for a simplified pipe system. In addition, the characteristics (annual occurrence, sustaining period and probability distribution) of hydraulic loads for both steady and transient states are discussed. Using an example of penstock structural design, it is shown that the total waterhammer pressure should be split into two individual random variable loads: the steady/static pressure and the waterhammer pressure rise during transients; and that different partial load factors should be applied to each individual load to reflect its unique physical and stochastic features. In particular, the normative load (usually the unfavorable value at the 95th percentile) for steady/static hydraulic pressure should be taken from the probability distribution of its maximum values during the pipe's design life, while for the waterhammer pressure rise, as the second variable load, the probability distribution of its annual maximum values is used to determine its normative load.
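
    The proposed split into two variable loads, each with its own normative value, can be illustrated with a small Monte Carlo; the distributions below are placeholders, not those fitted to the Chinese plant records:

      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000

      static = rng.normal(2.0, 0.1, n)      # steady/static pressure, MPa (placeholder)
      surge  = rng.gumbel(0.6, 0.15, n)     # annual-max waterhammer rise, MPa (placeholder)

      # Normative loads taken separately at the unfavorable 95th percentile,
      # rather than from a single distribution of the total pressure.
      print(f"static normative load: {np.percentile(static, 95):.2f} MPa")
      print(f"surge normative load:  {np.percentile(surge, 95):.2f} MPa")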

  2. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    PubMed

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and is therefore not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, the drawbacks associated with this approach are usually underestimated. We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and on unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection, which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
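
    The stratified random cross-validation idea maps directly onto standard tooling; a minimal sketch with scikit-learn, stratifying (hypothetical) measurements by experimental condition before each fit/validate split:

      import numpy as np
      from sklearn.model_selection import StratifiedKFold

      # Hypothetical dataset: one measurement per row, labeled by condition.
      y_obs = np.random.default_rng(3).normal(size=60)
      condition = np.repeat(["low_osmo", "mid_osmo", "high_osmo"], 20)

      skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      for fold, (train_idx, val_idx) in enumerate(skf.split(y_obs, condition)):
          # fit_ode_model / score_ode_model would be the user's parameter
          # estimation and predictive scoring routines (placeholders here).
          print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation points")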

  3. Reliability of a Holter-based methodology for evaluation of sleep apnoea syndrome.

    PubMed

    Szyszko, Ariel; Franceschini, Carlos; Gonzalez-Zuelgaray, Jorge

    2009-01-01

    Sleep apnoea has significant medical implications. A reliable non-invasive method (such as a regular Holter system with specific software) would be valuable for the screening of this condition in ambulatory patients. A total of 40 patients were divided into two groups: Group I, 20 patients with clinical suspicion of obstructive sleep apnoea (OSA) and an Epworth sleepiness score >or= 10, and Group II, 20 controls. In Group I, polysomnography was performed simultaneously with Holter (specific software to detect sleep apnoea). In Group II, Holter-based detection was utilized. A cutoff value of 10 for the apnoea-hypopnoea index (for polysomnography) or for the respiratory disturbance index (RDI) (for Holter) was considered abnormal. Sleep apnoea was confirmed by polysomnography in 14 patients (70%) in Group I. Holter recordings correctly identified OSA in 11 patients (r = 0.74 with polysomnography; P = 0.0002). Holter showed 78.5% sensitivity, 83.3% specificity, 91.6% positive predictive value, and 62.5% negative predictive value (with polysomnography as the gold standard). The RDI measured by Holter was 19.5 +/- 20 in Group I and 3.9 +/- 4.4 in controls (P < 0.005). Agreement between Holter and polysomnography (Bland-Altman method) was good (mean difference 4.7; limits of agreement -30.1 to 39.4), with a Pearson correlation coefficient (r) of 0.74 (P = 0.0002, 95% CI: 0.44-0.89). Holter-based software may constitute an accessible tool for the initial suspicion of OSA.

  4. Delay Analysis of Car-to-Car Reliable Data Delivery Strategies Based on Data Mulling with Network Coding

    NASA Astrophysics Data System (ADS)

    Park, Joon-Sang; Lee, Uichin; Oh, Soon Young; Gerla, Mario; Lun, Desmond Siumen; Ro, Won Woo; Park, Joonseok

    Vehicular ad hoc networks (VANETs) aim to enhance vehicle navigation safety by providing an early warning system: any risk of accident is communicated through wireless communication between vehicles. For the warning system to work, it is crucial that safety messages be reliably delivered to the target vehicles in a timely manner, and thus a reliable and timely data dissemination service is the key building block of a VANET. A data mulling technique combined with three strategies, network coding, erasure coding and repetition coding, is proposed for the reliable and timely data dissemination service. In particular, vehicles in the opposite direction on a highway are exploited as data mules, mobile nodes physically delivering data to destinations, to overcome intermittent network connectivity caused by sparse vehicle traffic. Using analytic models, we show that in such a highway data mulling scenario the network-coding-based strategy outperforms the erasure-coding and repetition-based strategies.
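
    The benefit of the network-coding strategy is visible already in a two-packet XOR example: a data mule carrying p1 XOR p2 lets any vehicle that holds either packet recover the other, halving the mule's payload. A minimal sketch (illustrative only, not the paper's analytic model):

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      p1 = b"warning:lane1"
      p2 = b"warning:lane2"

      coded = xor_bytes(p1, p2)          # single coded packet carried by the mule

      # A vehicle that already overheard p1 recovers p2 from the coded packet.
      recovered = xor_bytes(coded, p1)
      assert recovered == p2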

  5. CardioGuard: A Brassiere-Based Reliable ECG Monitoring Sensor System for Supporting Daily Smartphone Healthcare Applications

    PubMed Central

    Kwon, Sungjun; Kim, Jeehoon; Kang, Seungwoo; Lee, Youngki; Baek, Hyunjae

    2014-01-01

    We propose CardioGuard, a brassiere-based reliable electrocardiogram (ECG) monitoring sensor system, for supporting daily smartphone healthcare applications. It is designed to satisfy two key requirements for user-unobtrusive daily ECG monitoring: reliability of ECG sensing and usability of the sensor. The system is validated through extensive evaluations. The evaluation results showed that the CardioGuard sensor reliably measured the ECG during 12 representative daily activities including diverse movement levels; 89.53% of QRS peaks were detected on average. The questionnaire-based user study with 15 participants showed that the CardioGuard sensor was comfortable and unobtrusive. Additionally, the signal-to-noise ratio test and the washing durability test were conducted to show the high-quality sensing of the proposed sensor and its physical durability in practical use, respectively. PMID:25405527

  6. CardioGuard: a brassiere-based reliable ECG monitoring sensor system for supporting daily smartphone healthcare applications.

    PubMed

    Kwon, Sungjun; Kim, Jeehoon; Kang, Seungwoo; Lee, Youngki; Baek, Hyunjae; Park, Kwangsuk

    2014-12-01

    We propose CardioGuard, a brassiere-based reliable electrocardiogram (ECG) monitoring sensor system, for supporting daily smartphone healthcare applications. It is designed to satisfy two key requirements for user-unobtrusive daily ECG monitoring: reliability of ECG sensing and usability of the sensor. The system is validated through extensive evaluations. The evaluation results showed that the CardioGuard sensor reliably measured the ECG during 12 representative daily activities including diverse movement levels; 89.53% of QRS peaks were detected on average. The questionnaire-based user study with 15 participants showed that the CardioGuard sensor was comfortable and unobtrusive. Additionally, the signal-to-noise ratio test and the washing durability test were conducted to show the high-quality sensing of the proposed sensor and its physical durability in practical use, respectively.

  7. Reliability prediction of large fuel cell stack based on structure stress analysis

    NASA Astrophysics Data System (ADS)

    Liu, L. F.; Liu, B.; Wu, C. W.

    2017-09-01

    The aim of this paper is to improve the reliability of a Proton Exchange Membrane Fuel Cell (PEMFC) stack by designing the clamping force and the thickness difference between the membrane electrode assembly (MEA) and the gasket. The stack reliability is directly determined by the component reliability, which is affected by the material properties and contact stress. The component contact stress is a random variable because it is usually affected by many uncertain factors in the production and clamping process. We have investigated the influence of the parameter variation coefficient on the probability distribution of contact stress using an equivalent stiffness model and the first-order second moment method. The optimal contact stress that keeps the component at the highest reliability level is obtained by the stress-strength interference model. To obtain the optimal contact stress between the contact components, the optimal thickness of the component and the stack clamping force are optimally designed. Finally, a detailed description is given of how to design the MEA and gasket dimensions to obtain the highest stack reliability. This work can provide valuable guidance in the design of stack structure for a highly reliable fuel cell stack.
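
    For normally distributed stress and strength, the stress-strength interference model reduces to a closed form; a minimal sketch with hypothetical contact-stress numbers (not the paper's values):

      from math import sqrt
      from scipy.stats import norm

      mu_str, sd_str = 1.8, 0.15   # hypothetical component strength, MPa
      mu_ld,  sd_ld  = 1.2, 0.20   # hypothetical contact stress, MPa

      # R = P(strength > stress) = Phi((mu_str - mu_ld) / sqrt(sd_str^2 + sd_ld^2))
      beta = (mu_str - mu_ld) / sqrt(sd_str**2 + sd_ld**2)
      print(f"component reliability: {norm.cdf(beta):.4f}")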

  8. Assessing local instrument reliability and validity: a field-based example from northern Uganda.

    PubMed

    Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2009-08-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from alpha = 0.84-0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.

  9. Assessing local instrument reliability and validity: A field-based example from northern Uganda

    PubMed Central

    Betancourt, Theresa; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2008-01-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi Psychosocial Assessment Instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from α =0.84 to 0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco. PMID:19165403

  10. Reliable and simple spectrophotometric determination of sun protection factor: A case study using organic UV filter-based sunscreen products.

    PubMed

    Yang, Soo In; Liu, Shuanghui; Brooks, Geoffrey J; Lanctot, Yves; Gruber, James V

    2017-08-23

    The current in vitro SPF screening method for plant oil body (oleosome)-based SPF products suffers from significant inconsistency and low reliability in SPF rating. The primary objective of this study was to evaluate the reliability and reproducibility of the spectrophotometrically determined sun protection factor (SPF) of oleosome-based SPF products. The secondary objective was comparison of the spectrophotometric measurements against in vivo SPF testing to establish a reliable in vitro screening assay. Octyl methoxycinnamate (UVB filter) and avobenzone (UVA filter) were loaded into safflower oil bodies and formulated into oil-in-water emulsion-based finished products. To evaluate the agreement between the in vivo and spectrophotometric test methods, samples were dispatched to a clinical laboratory, and the reported SPF values were compared with the spectrophotometric test results. The observed SPF from the in vivo and spectrophotometric test results demonstrated a high correlation for SPF 30 products. A proportional correlation between the two evaluation methods was observed for SPF 15 and 50 products, with slightly lower accuracy given the smaller population tested in the clinical studies. A reliable spectrophotometric screening method for oil body-based SPF formulas has been developed using two broadly used organic UV sunscreen actives as a case study. The results demonstrated a high level of reproducibility and reliability compared to the US FDA-guided in vivo SPF testing method. © 2017 The Authors. Journal of Cosmetic Dermatology Published by Wiley Periodicals, Inc.
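
    In vitro SPF is conventionally computed from the measured absorbance of a thin product film; the standard Diffey-style expression is (assumed background, not quoted from the paper):

      SPF_{in\,vitro} = \frac{\sum_{\lambda=290}^{400} E(\lambda)\, I(\lambda)}
                             {\sum_{\lambda=290}^{400} E(\lambda)\, I(\lambda)\, 10^{-A_0(\lambda)}}

    where E is the erythema action spectrum, I the spectral irradiance of the source, and A_0 the measured absorbance at wavelength λ.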

  11. Modeling the reliability of a class of fault-tolerant VLSI/WSI systems based on multiple-level redundancy

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Yuan; Upadhyaya, Shambhu J.

    1994-06-01

    A class of fault-tolerant Very Large Scale Integration (VLSI) and Wafer Scale Integration (WSI) schemes, called multiple-level redundancy, which incorporates both hierarchical and element-level redundancy, has been proposed for the design of high-yield and high-reliability large-area array processors. The residual redundancy left unused after successfully reconfiguring and eliminating the manufacturing defects can be used to improve the operational reliability of a system. Since existing techniques for the analysis of the effect of residual redundancy on reliability improvement are not applicable, we present a new hierarchical model to estimate the reliability of the systems designed by our approach. Our model emphasizes the effect of support circuit (interconnection) failures on system reliability, leading to more accurate analysis. We discuss two area prediction models, one based on the regular WSI process, another based on the advanced WSI process, to estimate the area-related parameters. This analysis gives an insight into the practical implementations of fault-tolerant schemes in VLSI/WSI technology. Results of a computer experiment conducted to validate our models are also discussed.

  12. A reliability study using computer-based analysis of finger joint space narrowing in rheumatoid arthritis patients.

    PubMed

    Hatano, Katsuya; Kamishima, Tamotsu; Sutherland, Kenneth; Kato, Masaru; Nakagawa, Ikuma; Ichikawa, Shota; Kawauchi, Keisuke; Saitou, Shota; Mukai, Masaya

    2017-02-01

    The joint space difference index (JSDI) is a newly developed radiographic index which can quantitatively assess joint space narrowing progression in rheumatoid arthritis (RA) patients by using an image subtraction method on a computer. The aim of this study was to investigate the reliability of this method when used by non-experts for RA image evaluation. Four non-experts assessed JSDI for radiographic images of 510 metacarpophalangeal joints from 51 RA patients twice, with an interval of more than 2 weeks. Two rheumatologists and one radiologist as well as the four non-experts examined the joints by using the Sharp-van der Heijde Scoring (SHS) method. The radiologist and four non-experts repeated the scoring with an interval of more than 2 weeks. We calculated intra-/inter-observer reliability using intra-class correlation coefficients (ICC) for JSDI and SHS scoring, respectively. The intra-/inter-observer reliabilities for the computer-based method were almost perfect (inter-observer ICC, 0.966-0.983; intra-observer ICC, 0.954-0.996). In contrast, intra-/inter-observer reliability for SHS by experts was moderate to almost perfect (inter-observer ICC, 0.556-0.849; intra-observer ICC, 0.589-0.839). The results suggest that our computer-based method has high reliability to detect finger joint space narrowing progression in RA patients.

  13. Enhanced integrability of porous low-permittivity dielectrics for improved reliability in copper-based interconnects

    NASA Astrophysics Data System (ADS)

    Luo, Fu

    Achieving the aggressive device performance metrics demanded by the microelectronics industry dictates the use of low dielectric constant ('low-k') insulating materials to reduce the capacitive component of the interconnect-related RC signal propagation delay. In particular, to meet interconnect performance requirements for the 65 nm node and beyond, one approach is to introduce significant levels of porosity into the interlayer dielectric (ILD) films. However, the incorporation of porosity leads to a number of integration challenges, including increased reliability issues due to the open pores distributed on the sidewalls of vias/trenches. The research discussed in this paper demonstrates that it is possible to 'seal' the sidewalls of patterned porous dielectric layers using a specially designed deposition-etch passivation process. The concept of the process is to employ selected organosilicon precursors to deposit fully dense carbon-doped oxide (CDO) type films using plasma enhanced chemical vapor deposition (PECVD) on patterned porous dielectric structures, and then to preferentially plasma etch the material built up on the via floor. In order to ensure sufficient sealing results, several deposition-etch cycles are required. Based on this concept, a systematic process development project was carried out. The properties of the resulting CDO films are discussed. The integration characteristics of the CDO film with a candidate porous low-k material and with a subsequently deposited TaN barrier layer were also investigated. In addition, two unique approaches have been developed for the characterization of the sealing effectiveness of the cycled passivation process. These two approaches are based on spectroscopic ellipsometry and capacitance-voltage techniques. Both use the exposure of the passivated porous material to the vapor of an organic solvent to evaluate the responses of samples to the presence of the solvent vapor. Results from these experiments confirmed that

  14. Nanoparticle-based cancer treatment: can delivered dose and biological dose be reliably modeled and quantified?

    NASA Astrophysics Data System (ADS)

    Hoopes, P. Jack; Petryk, Alicia A.; Giustini, Andrew J.; Stigliano, Robert V.; D'Angelo, Robert N.; Tate, Jennifer A.; Cassim, Shiraz M.; Foreman, Allan; Bischof, John C.; Pearce, John A.; Ryan, Thomas

    2011-03-01

    Essential developments in the reliable and effective use of heat in medicine include: 1) the ability to model energy deposition and the resulting thermal distribution and tissue damage (Arrhenius models) over time in 3D, 2) the development of non-invasive thermometry and imaging for tissue damage monitoring, and 3) the development of clinically relevant algorithms for accurate prediction of the biological effect resulting from a delivered thermal dose in mammalian cells, tissues, and organs. The accuracy and usefulness of this information varies with the type of thermal treatment, sensitivity and accuracy of tissue assessment, and volume, shape, and heterogeneity of the tumor target and normal tissue. That said, without the development of an algorithm that has allowed the comparison and prediction of the effects of hyperthermia in a wide variety of tumor and normal tissues and settings (cumulative equivalent minutes, CEM), hyperthermia would never have achieved clinical relevance. A new hyperthermia technology, magnetic nanoparticle-based hyperthermia (mNPH), has distinct advantages over previous techniques: the ability to target the heat to individual cancer cells (with a nontoxic nanoparticle), and to excite the nanoparticles noninvasively with a noninjurious magnetic field, thus sparing associated normal cells and greatly improving the therapeutic ratio. As such, this modality has great potential as a primary and adjuvant cancer therapy. Although the targeted and safe nature of the noninvasive external activation (hysteretic heating) is a tremendous asset, the large number of therapy-based variables and the lack of an accurate and useful method for predicting, assessing and quantifying mNP dose and treatment effect is a major obstacle to moving the technology into routine clinical practice. Among other parameters, mNPH will require the accurate determination of specific nanoparticle heating capability, the total nanoparticle content and biodistribution in

  15. Determining Functional Reliability of Pyrotechnic Mechanical Devices

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Multhaup, Herbert A.

    1997-01-01

    This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as failure of the pyrotechnic energy source to initiate. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing functional margin are analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost improvements and understanding over go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
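
    Under a normality assumption, the margin-based prediction amounts to comparing the delivered- and required-energy distributions estimated from a small sample; a hedged sketch with hypothetical numbers, omitting the small-sample corrections such an analysis would apply:

      import numpy as np
      from scipy.stats import norm

      delivered = np.array([52.0, 55.1, 53.8, 54.2, 51.7, 53.0])  # energy, J (hypothetical)
      required  = np.array([31.5, 29.8, 30.9, 32.2, 30.1, 31.0])  # energy, J (hypothetical)

      margin = delivered.mean() - required.mean()
      sd = np.sqrt(delivered.var(ddof=1) + required.var(ddof=1))

      # Functional margin in standard deviations and the implied reliability.
      k = margin / sd
      print(f"margin = {margin:.1f} J ({k:.1f} sigma), reliability ~ {norm.cdf(k):.6f}")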

  16. Content Validity and Inter-Rater Reliability of the Halliwick-Concept-Based Instrument "Swimming with Independent Measure"

    ERIC Educational Resources Information Center

    Srsen, Katja Groleger; Vidmar, Gaj; Pikl, Masa; Vrecar, Irena; Burja, Cirila; Krusec, Klavdija

    2012-01-01

    The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine…

  17. Tool for Assessing Responsibility-Based Education (TARE): Instrument Development, Content Validity, and Inter-Rater Reliability

    ERIC Educational Resources Information Center

    Wright, Paul M.; Craig, Mark W.

    2011-01-01

    Numerous scholars have stressed the importance of personal and social responsibility in physical activity settings; however, there is a lack of instrumentation to study the implementation of responsibility-based teaching strategies. The development, content validity, and initial inter-rater reliability testing of the Tool for Assessing…

  18. A Reliable and Inexpensive Method of Nucleic Acid Extraction for the PCR-Based Detection of Diverse Plant Pathogens

    USDA-ARS?s Scientific Manuscript database

    A reliable extraction method is described for the preparation of total nucleic acids from several plant genera for subsequent detection of plant pathogens by PCR-based techniques. By the combined use of a modified CTAB (cetyltrimethylammonium bromide) extraction protocol and a semi-automatic homogen...

  20. A Critique of Raju and Oshima's Prophecy Formulas for Assessing the Reliability of Item Response Theory-Based Ability Estimates

    ERIC Educational Resources Information Center

    Wang, Wen-Chung

    2008-01-01

    Raju and Oshima (2005) proposed two prophecy formulas based on item response theory in order to predict the reliability of ability estimates for a test after a change in its length. The first prophecy formula is equivalent to the classical Spearman-Brown prophecy formula. The second prophecy formula is misleading because of an underlying false…
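
    For reference, the classical Spearman-Brown prophecy formula to which the first Raju-Oshima formula reduces, with k the factor by which test length changes and ρ the current reliability:

      \rho_k = \frac{k\,\rho}{1 + (k - 1)\,\rho}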

  1. The Development of the Functional Literacy Experience Scale Based upon Ecological Theory (FLESBUET) and Validity-Reliability Study

    ERIC Educational Resources Information Center

    Özenç, Emine Gül; Dogan, M. Cihangir

    2014-01-01

    This study aims to perform a validity-reliability test by developing the Functional Literacy Experience Scale based upon Ecological Theory (FLESBUET) for primary education students. The study group includes 209 fifth grade students at Sabri Taskin Primary School in the Kartal District of Istanbul, Turkey during the 2010-2011 academic year.…

  2. Improved Reliability of InGaN-Based Light-Emitting Diodes by HfO2 Passivation Layer.

    PubMed

    Park, Seung Hyun; Kim, Yoon Seok; Kim, Tae Hoon; Ryu, Sang Wan

    2016-02-01

    We utilized a passivation layer to improve the leakage current and reliability characteristics of GaN-based light-emitting diodes. The electrical and optical characteristics of the fabricated LEDs were characterized by current-voltage and optical power measurements. The HfO2 passivation layer showed no optical power degradation and suppressed leakage current. The low deposition temperature of sputtered HfO2 is responsible for the improved reliability of the LEDs because it suppresses the diffusion of hydrogen plasma into GaN to form harmful Mg-H complexes.

  3. Content validity and inter-rater reliability of the Halliwick-concept-based instrument 'Swimming with Independent Measure'.

    PubMed

    Sršen, Katja Groleger; Vidmar, Gaj; Pikl, Maša; Vrečar, Irena; Burja, Cirila; Krušec, Klavdija

    2012-06-01

    The Halliwick concept is widely used in different settings to promote joyful movement in water and swimming. To assess the swimming skills and progression of an individual swimmer, a valid and reliable measure should be used. The Halliwick-concept-based Swimming with Independent Measure (SWIM) was introduced for this purpose. We aimed to determine its content validity and inter-rater reliability. Fifty-four healthy children, 3.5-11 years old, from a mainstream swimming program participated in a content validity study. They were evaluated with SWIM and the national evaluation system of swimming abilities (classifying children into seven categories). To study the inter-rater reliability of SWIM, we included 37 children and youth from a Halliwick swimming program, aged 7-22 years, who were evaluated by two Halliwick instructors independently. The average SWIM score differed between national evaluation system categories and followed the expected order (P<0.001), whereby a ceiling effect was observed in the higher categories. High inter-rater reliability was found for all 11 SWIM items. The lowest reliability was observed for item G (sagittal rotation), although the estimates were still above 0.9. As expected, the highest reliability was observed for the total score (intraclass correlation 0.996). The validity of SWIM with respect to the national evaluation system of swimming abilities is high until the point where a swimmer is well adapted to water and already able to learn some swimming techniques. The inter-rater reliability of SWIM is very high; thus, we believe that SWIM can be used in further research and practice to follow the progress of swimmers.

  4. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  5. Reliability-centered maintenance for ground-based large optical telescopes and radio antenna arrays

    NASA Astrophysics Data System (ADS)

    Marchiori, G.; Formentin, F.; Rampini, F.

    2014-07-01

    In recent years, EIE GROUP has been increasingly involved in large optical telescope and radio antenna array projects. In this frame, the paper describes a fundamental aspect of the Logistic Support Analysis (LSA) process: the application of the Reliability-Centered Maintenance (RCM) methodology for the generation of maintenance plans for ground-based large optical telescopes and radio antenna arrays. This helps maintenance engineers to make sure that the telescopes continue to work properly, doing what their users require of them in their present operating conditions. The main objective of the RCM process is to establish the complete maintenance regime, with the safe minimum of required maintenance, carried out without any risk to personnel, telescope or subsystems. A correct application of RCM also increases cost effectiveness, telescope uptime and item availability, and provides a greater understanding of the level of risk that the organization is managing. At the same time, engineers must make a great effort from the initial phase of the project to obtain a telescope requiring easy maintenance activities and simple replacement of the major assemblies, taking special care with access design and item location, and with the implementation and design of special lifting equipment and handling devices for the heavy items. This maintenance engineering framework is based on seven points, which lead to the main steps of the RCM program. The initial steps of the RCM process consist of: system selection and data collection (MTBF, MTTR, etc.), definition of system boundaries and operating context, telescope description with the use of functional block diagrams, and the running of a FMECA to address the dominant causes of equipment failure and to lay down the Critical Items List. In the second part of the process the RCM logic is applied, which helps to determine the appropriate maintenance tasks for each identified failure mode. Once

  6. Reliable classifier to differentiate primary and secondary acute dengue infection based on IgG ELISA.

    PubMed

    Cordeiro, Marli Tenório; Braga-Neto, Ulisses; Nogueira, Rita Maria Ribeiro; Marques, Ernesto T A

    2009-01-01

    Dengue virus infection causes a wide spectrum of illness, ranging from sub-clinical to severe disease. Severe dengue is associated with sequential viral infections. A strict definition of primary versus secondary dengue infections requires a combination of several tests performed at different stages of the disease, which is not practical. We developed a simple method to classify dengue infections as primary or secondary based on the levels of dengue-specific IgG. A group of 109 dengue infection patients were classified as having primary or secondary dengue infection on the basis of a strict combination of results from assays of antigen-specific IgM and IgG, isolation of virus and detection of the viral genome by PCR tests performed on multiple samples, collected from each patient over a period of 30 days. The dengue-specific IgG levels of all samples from 59 of the patients were analyzed by linear discriminant analysis (LDA), and one- and two-dimensional classifiers were designed. The one-dimensional classifier was estimated by bolstered resubstitution error estimation to have 75.1% sensitivity and 92.5% specificity. The two-dimensional classifier was designed by taking also into consideration the number of days after the onset of symptoms, with an estimated sensitivity and specificity of 91.64% and 92.46%. The performance of the two-dimensional classifier was validated using an independent test set of standard samples from the remaining 50 patients. The classifications of the independent set of samples determined by the two-dimensional classifiers were further validated by comparing with two other dengue classification methods: hemagglutination inhibition (HI) assay and an in-house anti-dengue IgG-capture ELISA method. The decisions made with the two-dimensional classifier were in 100% accordance with the HI assay and 96% with the in-house ELISA. Once acute dengue infection has been determined, a 2-D classifier based on common dengue virus IgG kits can reliably

  7. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming to solve the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established to overcome these limitations. Properties of the new transformed function are discussed by considering failure severity and failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
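    The abstract does not reproduce the cubic transform itself, so the following sketch is only a hypothetical illustration of how transformed severity and occurrence scores could drive a failure-rate allocation; the function form, FMEA scores, subsystem names and rates are all assumptions.

```python
# Hypothetical FMEA-weighted reliability allocation sketch. The stand-in
# transform f(x) = x**3 on normalised scores merely illustrates a cubic
# shape; it is not the transform derived in the paper.
def cubic_transform(score, s_min=1, s_max=10):
    x = (score - s_min) / (s_max - s_min)   # normalise a 1-10 FMEA score
    return x ** 3

subsystems = {                 # (severity, occurrence) scores, assumed
    "spindle":   (8, 4),
    "turret":    (6, 5),
    "tailstock": (4, 3),
}

# Subsystems whose failures are more severe or more frequent receive a
# smaller share of the allowable system failure rate, i.e. a more
# demanding reliability target.
weights = {k: (1 - cubic_transform(s)) * (1 - cubic_transform(o))
           for k, (s, o) in subsystems.items()}
total = sum(weights.values())

lambda_system = 1e-4           # allowable system failure rate (per hour)
allocation = {k: lambda_system * w / total for k, w in weights.items()}
print(allocation)
```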

  8. Estimating and comparing the reliability of a suite of workplace-based assessments: an obstetrics and gynaecology setting.

    PubMed

    Homer, Matt; Setna, Zeryab; Jha, Vikram; Higham, Jenny; Roberts, Trudie; Boursicot, Katherine

    2013-08-01

    This paper reports on a study that compares estimates of the reliability of a suite of workplace-based assessment forms as employed to formatively assess the progress of trainee obstetricians and gynaecologists. The use of such forms of assessment is growing nationally and internationally in many specialties, but there is little research evidence comparing them by procedure/competency and form-type across an entire specialty. Generalisability theory combined with a multilevel modelling approach is used to estimate variance components, G-coefficients and standard errors of measurement across 13 procedures and three form-types (mini-CEX, OSATS and CbD). The main finding is that there are wide variations in the estimates of reliability across the forms, and that the guidance on assessment within the specialty therefore does not always allow for enough forms per trainee to ensure that the reliability of the process is adequate. There is, however, little evidence that reliability varies systematically by form-type. Methodologically, the problems of accurately estimating reliability in these contexts through the calculation of variance components and, crucially, their associated standard errors are considered. The importance of using appropriate methods in such calculations is emphasised, and the unavoidable limitations of research in naturalistic settings are discussed.
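    For readers unfamiliar with the machinery, a single-facet G-study (trainees p crossed with assessment forms f) conveys the flavour of the computation; the multilevel model used in the paper is richer than this sketch.

```latex
% Generalisability coefficient for the mean over n_f forms per trainee,
% with person variance \sigma^2_p and residual variance \sigma^2_{pf,e}:
\[
  E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pf,e}/n_f},
  \qquad
  \mathrm{SEM} \;=\; \sqrt{\sigma^2_{pf,e}/n_f}
\]
% Requiring E\rho^2 \ge 0.8 gives the number of forms needed per trainee:
\[
  n_f \;\ge\; \frac{0.8}{1 - 0.8}\cdot\frac{\sigma^2_{pf,e}}{\sigma^2_p}
        \;=\; 4\,\frac{\sigma^2_{pf,e}}{\sigma^2_p}
\]
```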

  9. Test–retest reliability of the prefrontal response to affective pictures based on functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Huang, Yuxia; Mao, Mengchai; Zhang, Zong; Zhou, Hui; Zhao, Yang; Duan, Lian; Kreplin, Ute; Xiao, Xiang; Zhu, Chaozhe

    2017-01-01

    Functional near-infrared spectroscopy (fNIRS) is being increasingly applied to affective and social neuroscience research; however, the reliability of this method is still unclear. This study aimed to evaluate the test-retest reliability of the fNIRS-based prefrontal response to emotional stimuli. Twenty-six participants viewed unpleasant and neutral pictures, and were simultaneously scanned by fNIRS in two sessions three weeks apart. The reproducibility of the prefrontal activation map was evaluated at three spatial scales (mapwise, clusterwise, and channelwise) at both the group and individual levels. The influence of the time interval was also explored and comparisons were made between longer (intersession) and shorter (intrasession) time intervals. The reliabilities of the activation map at the group level for the mapwise (up to 0.88, the highest value appeared in the intersession assessment) and clusterwise scales (up to 0.91, the highest appeared in the intrasession assessment) were acceptable, indicating that fNIRS may be a reliable tool for emotion studies, especially for a group analysis and under larger spatial scales. However, it should be noted that the individual-level and the channelwise fNIRS prefrontal responses were not sufficiently stable. Future studies should investigate which factors influence reliability, as well as the validity of fNIRS used in emotion studies.

  10. Development, construct validity and test-retest reliability of a field-based wheelchair mobility performance test for wheelchair basketball.

    PubMed

    de Witte, Annemarie M H; Hoozemans, Marco J M; Berger, Monique A M; van der Slikke, Rienk M A; van der Woude, Lucas H V; Veeger, Dirkjan H E J

    2017-01-16

    The aim of this study was to develop and describe a wheelchair mobility performance test in wheelchair basketball and to assess its construct validity and reliability. To mimic mobility performance of wheelchair basketball matches in a standardised manner, a test was designed based on observation of wheelchair basketball matches and expert judgement. Forty-six players performed the test to determine its validity and 23 players performed the test twice for reliability. Independent-samples t-tests were used to assess whether the times needed to complete the test were different for classifications, playing standards and sex. Intraclass correlation coefficients (ICC) were calculated to quantify reliability of performance times. Males performed better than females (P < 0.001, effect size [ES] = -1.26) and international men performed better than national men (P < 0.001, ES = -1.62). Performance time of low (≤2.5) and high (≥3.0) classification players was borderline not significant with a moderate ES (P = 0.06, ES = 0.58). The reliability was excellent for overall performance time (ICC = 0.95). These results show that the test can be used as a standardised mobility performance test to validly and reliably assess the capacity in mobility performance of elite wheelchair basketball athletes. Furthermore, the described methodology of development is recommended for use in other sports to develop sport-specific tests.

  11. Asymmetric programming: a highly reliable metadata allocation strategy for MLC NAND flash memory-based sensor systems.

    PubMed

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-10-10

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme.
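    A minimal sketch of the allocation idea follows. It assumes, purely for illustration, an MLC pairing in which even-numbered pages are the more reliable MSB pages; real pairing tables are device-specific, and the paper's address-mapping and garbage-collection strategies are not reproduced here.

```python
# Hypothetical Asymmetric-Programming-style page allocator: metadata is
# steered to MSB pages (lower bit error rate), user data to LSB pages.
class BlockAllocator:
    def __init__(self, pages_per_block=128):
        # Assumed pairing: even page numbers = MSB, odd = LSB.
        self.msb = [p for p in range(pages_per_block) if p % 2 == 0]
        self.lsb = [p for p in range(pages_per_block) if p % 2 == 1]

    def alloc(self, is_metadata):
        # Metadata prefers the reliable MSB pool and falls back to LSB
        # only when that pool is exhausted; user data does the opposite.
        pool = (self.msb or self.lsb) if is_metadata else (self.lsb or self.msb)
        return pool.pop(0)

a = BlockAllocator()
print(a.alloc(True), a.alloc(False))   # -> 0 (MSB page), 1 (LSB page)
```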

  12. Asymmetric Programming: A Highly Reliable Metadata Allocation Strategy for MLC NAND Flash Memory-Based Sensor Systems

    PubMed Central

    Huang, Min; Liu, Zhaoqing; Qiao, Liyan

    2014-01-01

    While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it's critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. PMID:25310473

  13. Kuhn-Tucker optimization based reliability analysis for probabilistic finite elements

    NASA Technical Reports Server (NTRS)

    Liu, W. K.; Besterfield, G.; Lawrence, M.; Belytschko, T.

    1988-01-01

    The fusion of probability finite element method (PFEM) and reliability analysis for fracture mechanics is considered. Reliability analysis with specific application to fracture mechanics is presented, and computational procedures are discussed. Explicit expressions for the optimization procedure with regard to fracture mechanics are given. The results show the PFEM is a very powerful tool in determining the second-moment statistics. The method can determine the probability of failure or fracture subject to randomness in load, material properties and crack length, orientation, and location.
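    As a sketch of the kind of second-moment computation involved, a first-order second-moment (FOSM) approximation for an illustrative fracture limit state can be written as follows; the paper's PFEM formulation is more general than this.

```latex
% Illustrative limit state g(X) = K_{Ic} - Y\sigma\sqrt{\pi a} with random
% fracture toughness K_{Ic}, stress \sigma and crack length a:
\[
  \mu_g \approx g(\mu_X), \qquad
  \sigma_g^2 \approx \sum_i
     \left(\frac{\partial g}{\partial X_i}\Big|_{\mu_X}\right)^{\!2}\sigma_{X_i}^2,
  \qquad
  \beta = \frac{\mu_g}{\sigma_g}, \qquad
  P_f \approx \Phi(-\beta)
\]
```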

  15. Hospital-based fall program measurement and improvement in high reliability organizations.

    PubMed

    Quigley, Patricia A; White, Susan V

    2013-05-31

    Falls and fall injuries in hospitals are the most frequently reported adverse events among adults in the inpatient setting. Advancing measurement and improvement in fall prevention in the hospital is important, as falls are a nurse-sensitive measure and nurses play a key role in this component of patient care. A framework for applying the concepts of high reliability organizations to fall prevention programs is described, including discussion of the core characteristics of such a model and determination of its impact at the patient, unit, and organizational levels. This article showcases the components of a patient safety culture and the integration of these components with fall prevention, the role of nurses, and high reliability.

  16. Test-Retest Reliability of an Automated Infrared-Assisted Trunk Accelerometer-Based Gait Analysis System.

    PubMed

    Hsu, Chia-Yu; Tsai, Yuh-Show; Yau, Cheng-Shiang; Shie, Hung-Hai; Wu, Chu-Ming

    2016-07-23

    The aim of this study was to determine the test-retest reliability of an automated infrared-assisted, trunk accelerometer-based gait analysis system for measuring gait parameters of healthy subjects in a hospital. Thirty-five participants (28 females; age range, 23-79 years) performed a 5-m walk twice using an accelerometer-based gait analysis system with infrared assist. Measurements of spatiotemporal gait parameters (walking speed, step length, and cadence) and trunk control (gait symmetry, gait regularity, acceleration root mean square (RMS), and acceleration root mean square ratio (RMSR)) were recorded in two separate walking tests conducted 1 week apart. Relative and absolute test-retest reliability were determined by calculating the intraclass correlation coefficient (ICC(3,1)) and smallest detectable difference (SDD), respectively. The test-retest reliability was excellent for walking speed (ICC = 0.87, 95% confidence interval = 0.74-0.93, SDD = 13.4%), step length (ICC = 0.81, 95% confidence interval = 0.63-0.91, SDD = 12.2%), cadence (ICC = 0.81, 95% confidence interval = 0.63-0.91, SDD = 10.8%), and trunk control (step and stride regularity in the anterior-posterior direction, acceleration RMS and acceleration RMSR in the medial-lateral direction, and acceleration RMS and stride regularity in the vertical direction). An automated infrared-assisted, trunk accelerometer-based gait analysis system is a reliable tool for measuring gait parameters in the hospital environment.
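    The SDD values quoted alongside each ICC follow from standard measurement-error identities; a small sketch with illustrative numbers (not the study's raw data):

```python
import math

# SEM = SD * sqrt(1 - ICC); for a test-retest design the smallest
# detectable difference at the 95% level is SDD = 1.96 * sqrt(2) * SEM.
def smallest_detectable_difference(sd, icc):
    sem = sd * math.sqrt(1.0 - icc)
    return 1.96 * math.sqrt(2.0) * sem

sd_speed = 0.15   # m/s, assumed between-subject SD for walking speed
icc = 0.87        # reported ICC for walking speed
print(f"SDD = {smallest_detectable_difference(sd_speed, icc):.3f} m/s")
```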

  17. The Reliability and Validity of the Complex Task Performance Assessment: A Performance-Based Assessment of Executive Function

    PubMed Central

    Wolf, Timothy J.; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2015-01-01

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): An ecologically-valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was 0.991. The ICCs for all of the sub scores of the CTPA were also high (0.889-0.977). The CTPA total score was significantly correlated to Condition 4 of the DKEFS Color-Word Interference Test (ρ = −0.425), and the Wechsler Test of Adult Reading (ρ = −0.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = 0.007) and all sub scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function. PMID:25939359

  18. The reliability and validity of the Complex Task Performance Assessment: A performance-based assessment of executive function.

    PubMed

    Wolf, Timothy J; Dahl, Abigail; Auen, Colleen; Doherty, Meghan

    2015-05-05

    The objective of this study was to evaluate the inter-rater reliability, test-retest reliability, concurrent validity, and discriminant validity of the Complex Task Performance Assessment (CTPA): an ecologically valid performance-based assessment of executive function. Community control participants (n = 20) and individuals with mild stroke (n = 14) participated in this study. All participants completed the CTPA and a battery of cognitive assessments at initial testing. The control participants completed the CTPA at two different times one week apart. The intra-class correlation coefficient (ICC) for inter-rater reliability for the total score on the CTPA was .991. The ICCs for all of the sub-scores of the CTPA were also high (.889-.977). The CTPA total score was significantly correlated with Condition 4 of the DKEFS Color-Word Interference Test (ρ = -.425) and the Wechsler Test of Adult Reading (ρ = -.493). Finally, there were significant differences between control subjects and individuals with mild stroke on the total score of the CTPA (p = .007) and all sub-scores except interpretation failures and total items incorrect. These results are also consistent with other current executive function performance-based assessments and indicate that the CTPA is a reliable and valid performance-based measure of executive function.

  19. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program.

    PubMed

    Sled, Elizabeth A; Sheehy, Lisa M; Felson, David T; Costigan, Patrick A; Lam, Miu; Cooke, T Derek V

    2011-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. (1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. (2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977-0.999 for computer analysis; 0.820-0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839-0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers.

  20. Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program

    PubMed Central

    Sled, Elizabeth A.; Sheehy, Lisa M.; Felson, David T.; Costigan, Patrick A.; Lam, Miu; Cooke, T. Derek V.

    2010-01-01

    The objective of the study was to evaluate the reliability of frontal plane lower limb alignment measures using a landmark-based method by (1) comparing inter- and intra-reader reliability between measurements of alignment obtained manually with those using a computer program, and (2) determining inter- and intra-reader reliability of computer-assisted alignment measures from full-limb radiographs. An established method for measuring alignment was used, involving selection of 10 femoral and tibial bone landmarks. 1) To compare manual and computer methods, we used digital images and matching paper copies of five alignment patterns simulating healthy and malaligned limbs drawn using AutoCAD. Seven readers were trained in each system. Paper copies were measured manually and repeat measurements were performed daily for 3 days, followed by a similar routine with the digital images using the computer. 2) To examine the reliability of computer-assisted measures from full-limb radiographs, 100 images (200 limbs) were selected as a random sample from 1,500 full-limb digital radiographs which were part of the Multicenter Osteoarthritis (MOST) Study. Three trained readers used the software program to measure alignment twice from the batch of 100 images, with two or more weeks between batch handling. Manual and computer measures of alignment showed excellent agreement (intraclass correlations [ICCs] 0.977 – 0.999 for computer analysis; 0.820 – 0.995 for manual measures). The computer program applied to full-limb radiographs produced alignment measurements with high inter- and intra-reader reliability (ICCs 0.839 – 0.998). In conclusion, alignment measures using a bone landmark-based approach and a computer program were highly reliable between multiple readers. PMID:19882339

  1. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-10-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  2. Reliability and structural integrity

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1976-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
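    One standard way to write the post-inspection update that the abstract alludes to uses a probability-of-detection (POD) curve; this is a textbook form, not necessarily the author's exact model.

```latex
% Cracks that survive an inspection are those missed with probability
% 1 - POD(a); Bayes' theorem renormalises the crack-size density:
\[
  f_A'(a) \;=\; \frac{\bigl[1-\mathrm{POD}(a)\bigr]\, f_A(a)}
                     {\displaystyle\int_0^{\infty}\bigl[1-\mathrm{POD}(s)\bigr]\, f_A(s)\,ds}
\]
```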

  3. Temporal Stability of Strength-Based Assessments: Test-Retest Reliability of Student and Teacher Reports

    ERIC Educational Resources Information Center

    Romer, Natalie; Merrell, Kenneth W.

    2013-01-01

    This study focused on evaluating the temporal stability of self-reported and teacher-reported perceptions of students' social and emotional skills and assets. We used a test-retest reliability procedure over repeated administrations of the child, adolescent, and teacher versions of the "Social-Emotional Assets and Resilience Scales".…

  4. Methodology for reliability based condition assessment. Application to concrete structures in nuclear plants

    SciTech Connect

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
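    The adaptive simulation in the report is more sophisticated, but a bare-bones Monte Carlo estimate of time-dependent failure probability under strength degradation looks like the sketch below; every distribution, parameter and degradation rate here is an assumption for illustration.

```python
import math
import random

def failure_probability(n_sims=20_000, years=40):
    """Crude Monte Carlo: annual-maximum load vs. linearly degrading strength."""
    failures = 0
    for _ in range(n_sims):
        r0 = random.lognormvariate(math.log(100.0), 0.10)  # initial strength
        degrade = random.uniform(0.2, 0.6)                 # strength loss/year
        for t in range(1, years + 1):
            load = random.gumbelvariate(40.0, 5.0)         # annual max load
            if load > r0 - degrade * t:                    # degraded strength
                failures += 1
                break
    return failures / n_sims

print(failure_probability())
```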

  5. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation.

    PubMed

    ter Horst, Arjan C; Koppen, Mathieu; Selen, Luc P J; Medendorp, W Pieter

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
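    The "weighting in proportion to reliability" being tested is the standard maximum-likelihood cue-integration model:

```latex
% Visual (v) and vestibular (b) displacement estimates combined with
% inverse-variance weights; the combined variance never exceeds the
% better single cue's variance.
\[
  \hat{d} = w_v\,\hat{d}_v + w_b\,\hat{d}_b,
  \qquad
  w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_b^2},
  \qquad
  w_b = 1 - w_v
\]
\[
  \sigma_{\mathrm{comb}}^2
    = \frac{\sigma_v^2\,\sigma_b^2}{\sigma_v^2 + \sigma_b^2}
    \;\le\; \min\!\bigl(\sigma_v^2,\ \sigma_b^2\bigr)
\]
```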

  6. Craniofacial landmarks in young children: how reliable are measurements based on 3-dimensional imaging?

    PubMed

    Metzler, Philipp; Bruegger, Lea S; Kruse Gujer, Astrid L; Matthews, Felix; Zemann, Wolfgang; Graetz, Klaus W; Luebbers, Heinz-Theo

    2012-11-01

    Different approaches for 3-dimensional (3D) data acquisition of the facial surface are common nowadays, and meticulous evaluation has established their precision and accuracy. However, the question remains which craniofacial landmarks, especially in young children, can be reliably identified in 3D images. Potential sources of error, aside from the system technology itself, need to be identified and addressed, and reliable and unreliable landmarks have to be distinguished. The 3dMDface System was used in a clinical setting to evaluate the intraobserver repeatability of 27 craniofacial landmarks in 7 young children between 6 and 18 months of age, with a total of 1134 measurements. The handling of the system was mostly unproblematic. The mean 3D repeatability error was 0.82 mm, with a range of 0.26 mm to 2.40 mm, depending on the landmark. Single landmarks that have been shown to be relatively imprecise in 3D analysis could still provide highly accurate data if only 1 of the 3 spatial planes was relevant. There were no statistical differences from 1 patient to another. Reliability in craniofacial measurements can be achieved with such 3D soft-tissue imaging techniques as the 3dMDface System, but one must always be aware that the degree of precision is strictly dependent on the landmark and axis in question. For further clinical investigations, the degree of reliability for each landmark evaluated must be addressed and taken into account.

  8. Optimum structural design based on reliability and proof-load testing

    NASA Technical Reports Server (NTRS)

    Shinozuka, M.; Yang, J. N.

    1969-01-01

    A proof-load test eliminates structures with strength less than the proof load and thereby improves the reliability value used in analysis. It truncates the distribution function of strength at the proof load, alleviating the need to verify a fitted distribution function at the lower tail, where data are usually nonexistent.
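    The truncation the abstract describes has a simple closed form: if F_R is the fitted strength distribution and L the proof load, the post-test strength distribution is

```latex
\[
  F_R(r \mid \text{survived proof load } L) \;=\;
  \begin{cases}
    0 & r < L,\\[4pt]
    \dfrac{F_R(r) - F_R(L)}{1 - F_R(L)} & r \ge L.
  \end{cases}
\]
```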

  9. Predictions of Crystal Structure Based on Radius Ratio: How Reliable Are They?

    ERIC Educational Resources Information Center

    Nathan, Lawrence C.

    1985-01-01

    Discussion of crystalline solids in undergraduate curricula often includes the use of radius ratio rules as a method for predicting which type of crystal structure is likely to be adopted by a given ionic compound. Examines this topic, establishing more definitive guidelines for the use and reliability of the rules. (JN)

  10. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

  11. RELIABILITY-BASED UNCERTAINTY ANALYSIS OF GROUNDWATER CONTAMINANT TRANSPORT AND REMEDIATION

    EPA Science Inventory

    This report presents a discussion of the application of the first- and second-order reliability methods (FORM and SORM, respectively) to ground-water transport and remediation, and to public health risk assessment. Using FORM and SORM allows the formal incorporation of parameter...

  12. Reliability-Based Weighting of Visual and Vestibular Cues in Displacement Estimation

    PubMed Central

    ter Horst, Arjan C.; Koppen, Mathieu; Selen, Luc P. J.; Medendorp, W. Pieter

    2015-01-01

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement. PMID:26658990

  15. Moving to a Higher Level for PV Reliability through Comprehensive Standards Based on Solid Science (Presentation)

    SciTech Connect

    Kurtz, S.

    2014-11-01

    PV reliability is a challenging topic because of the desired long life of PV modules, the diversity of use environments and the pressure on companies to rapidly reduce their costs. This presentation describes the challenges, examples of failure mechanisms that we know or don't know how to test for, and how a scientific approach is being used to establish international standards.

  16. A reliable and highly sensitive, digital PCR-based assay for early detection of citrus Huanglongbing

    USDA-ARS?s Scientific Manuscript database

    Huanglongbing (HLB) is caused by a phloem-limited bacterium, Ca. Liberibacter asiaticus (Las) in the United States. The bacterium is often present at a low concentration and unevenly distributed in the early stage of infection, making reliable and early diagnosis a challenge. We have developed a pro...

  17. Checking the reliability of a linear-programming based approach towards detecting community structures in networks.

    PubMed

    Chen, W Y C; Dress, A W M; Yu, W Q

    2007-09-01

    Here, the reliability of a recent approach that uses parameterised linear programming to detect community structures in networks has been investigated. Using a one-parameter family of objective functions, a number of "perturbation experiments" document that our approach works rather well. A real-life network and a family of benchmark networks are also analysed.

  18. Contributions to a reliable hydrogen sensor based on surface plasmon surface resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Morjan, Martin; Züchner, Harald; Cammann, Karl

    2009-06-01

    Hydrogen is seen as a potentially inexhaustible, clean power source. Direct hydrogen production and storage techniques that would eliminate carbon by-products and compete on cost are being accelerated in R&D due to the recent sharp increase in the price of crude oil. But hydrogen also carries certain risks of use, namely the danger of explosion if mixed with air, due to the very low energy needed for ignition, and the possibility of diminishing the ozone layer through undetected leaks. To reduce those risks, efficient, sensitive and very early warning systems are needed. This paper contributes to this challenge by adopting the optical method of Surface-Plasmon-Resonance (SPR) spectroscopy for sensitive detection of hydrogen concentrations well below the lower explosion limit. The technique of SPR performed with fiber optics would in principle allow remote monitoring without any electrical contacts in the potential explosion zone. A thin palladium metal layer has been studied as the sensing element. A simulation programme to find an optimum sensor design led to the conclusion that an Otto configuration is more advantageous under the intended "real world" measurement conditions than a Kretschmann configuration. This could be experimentally verified. The very small air gap in the Otto configuration could be successfully replaced by a several-hundred-nanometre-thick intermediate layer of MgF2 or SiO2 to ease the fabrication of hydrogen sensor chips based on glass slide substrates. It could be demonstrated that, by separate detection of the TM- and TE-polarized light fractions, the TE-polarized beam could be used as a reference signal, since the TE part does not excite surface plasmons and thus is not influenced by the presence of hydrogen. Choosing the measured TM/TE intensity ratio as the analytical signal, a sensor chip made from a BK7 glass slide with a 425 nm thick intermediate layer of SiO2 and a sensing layer of 50 nm Pd on top allowed a drift-free, reliable and reversible

  19. Potential of the Reliability-Resilience-Vulnerability (RRV) Based Drought Management Index (DMI)

    NASA Astrophysics Data System (ADS)

    Maity, R.; Chanda, K.; D, N. K.; Sharma, A.; Mehrotra, R.

    2014-12-01

    This paper highlights the findings from a couple of recent investigations aimed at characterizing and predicting the long-term drought propensity of a region for effective water management. A probabilistic index, named the Drought Management Index (DMI), was proposed for assessing the drought propensity on a multi-year scale at the chosen study area. The novelty of this index lay in the fact that it employed the Reliability-Resilience-Vulnerability (RRV) rationale, commonly used in water resources systems analysis, with the assumption that depletion of soil moisture across a vertical soil column is analogous to the operation of a water supply reservoir. This was the very first attempt to incorporate into a drought index the resilience of the soil moisture series, which denotes the readiness of soil moisture to bounce back from drought to a normal state. Further, the predictability of DMI was explored to assess future drought propensity, which is essential for adopting suitable drought management policies at any location. For computing DMI, the intermediate measures, i.e., RRV, were obtained using the Permanent Wilting Point (PWP) as the threshold, indicative of transition into water stress. The joint distribution of resilience and vulnerability of the soil moisture series was subsequently determined using a Plackett copula. The DMI was designed such that it increases with an increase in vulnerability as well as with a decrease in resilience, and vice versa. Thus, it was expressed as the joint probability of exceedence of resilience and non-exceedence of vulnerability of a soil moisture series. An assessment of the sensitivity of the DMI to the length of the data segments indicated that a 5-year temporal scale is optimum to obtain stable estimates of DMI. The ability of the DMI to reflect the spatio-temporal variation of drought propensity was illustrated using India as a test bed. Based on the observed behaviour of DMI series across India, on a climatological time scale, a DMI
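    For reference, the Plackett copula mentioned in the abstract, and one hedged reading of the DMI definition given there (exceedence of resilience jointly with non-exceedence of vulnerability), can be written as follows; the paper's exact construction may differ in detail.

```latex
% Plackett copula with association parameter \theta (\theta \neq 1):
\[
  C_\theta(u,v) = \frac{\bigl[1+(\theta-1)(u+v)\bigr]
      - \sqrt{\bigl[1+(\theta-1)(u+v)\bigr]^2 - 4\theta(\theta-1)\,uv}}
      {2(\theta-1)}
\]
% With u = F_{Res}(r) and v = F_{Vul}(w), a joint probability of the
% stated form follows from the copula identity P(U>u, V\le v) = v - C(u,v):
\[
  \mathrm{DMI} \;=\; P(\mathrm{Res} > r,\ \mathrm{Vul} \le w)
               \;=\; v - C_\theta(u,v)
\]
```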

  20. RELIABILITY AND VALIDITY OF A BIOMECHANICALLY BASED ANALYSIS METHOD FOR THE TENNIS SERVE

    PubMed Central

    Kibler, W. Ben; Lamborn, Leah; Smith, Belinda J.; English, Tony; Jacobs, Cale; Uhl, Tim L.

    2017-01-01

    Background: An observational tennis serve analysis (OTSA) tool was developed using previously established body positions from three-dimensional kinematic motion analysis studies. These positions, defined as nodes, have been associated with efficient force production and minimal joint loading. However, the tool had yet to be examined scientifically. Purpose: The primary purpose of this investigation was to determine the inter-observer reliability for each node between the two health care professionals (HCPs) who developed the OTSA, and secondarily to investigate the validity of the OTSA. Methods: Two separate studies were performed to meet these objectives. An inter-observer reliability study preceded the validity study by examining 28 videos of players serving. Two HCPs graded each video and scored the presence or absence of each node. Discriminant validity was determined in 33 tennis players using videotaped records of three first serves. Serve mechanics were graded using the OTSA, and players were categorized into those with good (≥5) and poor (≤4) mechanics. Participants performed a series of field tests to evaluate trunk flexibility, lower extremity and trunk power, and dynamic balance. Results: The group with good mechanics demonstrated greater backward trunk flexibility (p=0.02), greater rotational power (p=0.02), and a higher single-leg countermovement jump (p=0.05). Reliability of the OTSA ranged from K = 0.36-1.0, with the majority of the nodes displaying substantial reliability (K>0.61). Conclusion: This study provides HCPs with a valid and reliable field tool for assessing serve mechanics. Physical characteristics of trunk mobility and power appear to discriminate serve mechanics between players. Future intervention studies are needed to determine whether improvements in physical function contribute to improved serve mechanics. Level of Evidence: 3. PMID:28593098

  1. Reliable Magnetic Resonance Imaging Based Grading System for Cervical Intervertebral Disc Degeneration

    PubMed Central

    Chen, Antonia F.; Kang, James D.; Lee, Joon Y.

    2016-01-01

    Study Design: Observational. Purpose: To develop a simple and comprehensive grading system for cervical discs that precisely, consistently and meaningfully presents radiologic and morphologic data. Overview of Literature: The Thompson grading system is commonly used to classify the severity of degenerative lumbar discs on magnetic resonance imaging (MRI). Inherent differences in the morphological and physiological characteristics of cervical discs have hindered the development of precise classification systems. Other grading systems have been developed for degenerating cervical discs, but their versatility and feasibility in the clinical setting is suboptimal. Methods: MRIs of 46 human cervical discs were de-identified and displayed in PowerPoint format. Each slide depicted a single disc, with a normal (grade 0) disc displayed in the top right corner for reference. The presentation was given to 25 physicians comprising attending spine surgeons, spine fellows, orthopaedic residents, and two attending musculoskeletal radiologists. The grading system included grade 0 (normal height compared to C2–3, mid cleft still visible), grade 1 (dark disc, normal height), grade 2 (collapsed disc, few osteophytes), and grade 3 (collapsed disc, many osteophytes). The ease of use of the system was gauged among the participants and the interobserver reliability was calculated. Results: The intraclass correlation coefficient for interobserver reliability was 0.87, and 0.94 for intraobserver reliability, indicating excellent reliability. Ninety-five percent and 85 percent of the clinicians judged the grading system to be clinically feasible and useful in daily practice, respectively. Conclusions: The grading system is easy to use, has excellent reliability, and can be used for precise and consistent clinician communication. PMID:26949461

  2. Reliability of smartphone-based gait measurements for quantification of physical activity/inactivity levels.

    PubMed

    Ebara, Takeshi; Azuma, Ryohei; Shoji, Naoto; Matsukawa, Tsuyoshi; Yamada, Yasuyuki; Akiyama, Tomohiro; Kurihara, Takahiro; Yamada, Shota

    2017-08-24

    Objective measurements using built-in smartphone sensors that can measure physical activity/inactivity in daily working life have the potential to provide a new approach to assessing workers' health effects. The aim of this study was to elucidate the characteristics and reliability of built-in step counting sensors on smartphones for the development of an easy-to-use objective measurement tool that can be applied in ergonomics or epidemiological research. To evaluate the reliability of step counting sensors embedded in seven major smartphone models, the 6-minute walk test was conducted and the following analyses of sensor precision and accuracy were performed: 1) the relationship between the actual step count and the step count detected by the sensors, 2) reliability between smartphones of the same model, and 3) false detection rates when sitting during office work, while riding the subway, and while driving. On five of the seven models, the intraclass correlation coefficient (ICC(3,1)) showed high reliability, with a range of 0.956-0.993. The other two models, however, had ranges of 0.443-0.504, and the relative error ratios of the sensor-detected step count to the actual step count were ±48.7%-49.4%. The level of agreement between units of the same model was ICC(3,1) = 0.992-0.998. The false detection rates differed between the sitting conditions. These results suggest the need for appropriate adjustment of sensor-measured step counts, through means such as correction or calibration with a predictive model formula, in order to obtain the highly reliable measurement results that are sought in scientific investigation.
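    The "correction or calibration with a predictive model formula" suggested in the conclusion could be as simple as a linear recalibration against manually counted steps; the numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical paired observations: sensor-detected vs. manually counted steps.
sensor = np.array([520.0, 610.0, 480.0, 700.0, 655.0])
actual = np.array([600.0, 690.0, 560.0, 790.0, 745.0])

# Fit actual ~ a + b * sensor by ordinary least squares.
b, a = np.polyfit(sensor, actual, 1)   # polyfit returns [slope, intercept]
print(f"corrected_steps = {a:.1f} + {b:.3f} * sensor_steps")
```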

  3. Integrated avionics reliability

    NASA Technical Reports Server (NTRS)

    Alikiotis, Dimitri

    1988-01-01

    The integrated avionics reliability task is an effort to build credible reliability and/or performability models for multisensor integrated navigation and flight control. The research was initiated by the reliability analysis of a multisensor navigation system consisting of the Global Positioning System (GPS), the Long Range Navigation system (Loran C), and an inertial measurement unit (IMU). Markov reliability models were developed based on system failure rates and mission time.

  4. A human reliability based usability evaluation method for safety-critical software

    SciTech Connect

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.; Ragsdale, A.

    2006-07-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with that of the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis to heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  5. RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.

    PubMed

    Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang

    2014-01-01

    In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules, ensuring the reliability and real-time performance of the arena system. Easily configurable interfaces on a desktop computer, implemented as Python scripts, are provided to transmit the visual patterns to the LED distributor online and configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors, and will also be helpful to visual-motor research in other related fields.

  6. Support vector machine-based expert system for reliable heartbeat recognition.

    PubMed

    Osowski, Stanislaw; Hoai, Linh Tran; Markiewicz, Tomasz

    2004-04-01

    This paper presents a new solution to the expert system for reliable heartbeat recognition. The recognition system uses the support vector machine (SVM) working in the classification mode. Two different preprocessing methods for the generation of features are applied. One method involves higher-order statistics (HOS), while the second uses a Hermite characterization of the QRS complex of the registered electrocardiogram (ECG) waveform. Combining the SVM network with these preprocessing methods yields two neural classifiers, which have been combined into one final expert system. The combination of classifiers utilizes the least mean square method to optimize the weights of the weighted voting integrating scheme. The results of the performed numerical experiments for the recognition of 13 heart rhythm types on the basis of ECG waveforms confirmed the reliability and advantage of the proposed approach.
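
    A minimal sketch of the overall structure: two classifiers trained on different feature sets whose scores are combined with least-squares voting weights. The features here are synthetic stand-ins, not actual HOS or Hermite coefficients:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Synthetic stand-ins for the two feature sets (HOS vs. Hermite)
    X_hos, X_herm = rng.normal(size=(200, 6)), rng.normal(size=(200, 8))
    y = rng.integers(0, 2, size=200)
    X_hos[y == 1] += 1.0
    X_herm[y == 1] += 0.8

    clf_hos = SVC(probability=True).fit(X_hos[:150], y[:150])
    clf_herm = SVC(probability=True).fit(X_herm[:150], y[:150])

    # Least-squares fit of voting weights on held-out classifier scores
    P = np.column_stack([clf_hos.predict_proba(X_hos[150:])[:, 1],
                         clf_herm.predict_proba(X_herm[150:])[:, 1]])
    w, *_ = np.linalg.lstsq(P, y[150:], rcond=None)
    combined = (P @ w) > 0.5
    print("combined accuracy:", (combined == y[150:]).mean())
    ```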

  7. Optimal periodic proof test based on cost-effective and reliability criteria

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1976-01-01

    An exploratory study for the optimization of periodic proof tests for fatigue-critical structures is presented. The optimal proof load level and the optimal number of periodic proof tests are determined by minimizing the total expected (statistical average) cost, while the constraint on the allowable level of structural reliability is satisfied. The total expected cost consists of the expected cost of proof tests, the expected cost of structures destroyed by proof tests, and the expected cost of structural failure in service. It is demonstrated by numerical examples that significant cost saving and reliability improvement for fatigue-critical structures can be achieved by the application of the optimal periodic proof test. The present study is relevant to the establishment of optimal maintenance procedures for fatigue-critical structures.
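
    The structure of the optimization, minimizing a three-term expected cost over the proof load level and the number of tests subject to a reliability floor, can be sketched as below. The cost constants and probability models are invented placeholders, not the paper's fatigue formulation:

    ```python
    import numpy as np

    # Toy stand-in for the optimization structure: choose proof load ratio r
    # and number of proof tests n to minimize total expected cost, subject
    # to a reliability constraint. All models and numbers are illustrative.
    C_TEST, C_DESTROYED, C_FAILURE = 1.0, 50.0, 1e4

    def p_destroyed(r):           # chance a proof test destroys the structure
        return 0.002 * np.exp(3.0 * (r - 1.0))

    def p_service_failure(r, n):  # service failure prob., reduced by screening
        return 0.01 * np.exp(-0.8 * n * (r - 1.0))

    best = None
    for r in np.linspace(1.05, 1.5, 46):
        for n in range(1, 11):
            if 1.0 - p_service_failure(r, n) < 0.999:   # reliability floor
                continue
            cost = (n * C_TEST + n * p_destroyed(r) * C_DESTROYED
                    + p_service_failure(r, n) * C_FAILURE)
            if best is None or cost < best[0]:
                best = (cost, r, n)
    print(best)   # (expected cost, optimal proof load ratio, optimal n)
    ```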

  8. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
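
    For the time-terminated case, the maximum-likelihood estimators of the power-law (Weibull) intensity lambda*beta*t**(beta-1) have a well-known closed form (the Crow-AMSAA estimators). A minimal sketch with invented failure times; beta < 1 indicates reliability growth:

    ```python
    import numpy as np

    def crow_amsaa_mle(failure_times, T):
        """MLE for the power-law intensity lambda*beta*t**(beta-1),
        time-terminated test ending at time T."""
        t = np.asarray(failure_times, dtype=float)
        n = len(t)
        beta = n / np.sum(np.log(T / t))
        lam = n / T**beta
        return lam, beta

    # Illustrative cumulative failure times (hours) on one test article
    times = [50., 120., 300., 650., 1000., 1500., 2500.]
    lam, beta = crow_amsaa_mle(times, T=3000.)
    mtbf_now = 1.0 / (lam * beta * 3000.0**(beta - 1.0))  # instantaneous MTBF at T
    print(f"beta = {beta:.2f}, instantaneous MTBF at T = {mtbf_now:.0f} h")
    ```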

  9. Validity and reliability of intra-stroke kayak velocity and acceleration using a GPS-based accelerometer.

    PubMed

    Janssen, Ina; Sachlikidis, Alexi

    2010-03-01

    The aim of this study was to assess the validity and reliability of the velocity and acceleration measured by kayak-mounted GPS-based accelerometer units compared to video-derived measurements, and the effect of satellite configuration on velocity. Four GPS-based accelerometer units of varied accelerometer range (2 g or 6 g) were mounted on a kayak as the paddler performed 12 trials at three different stroke rates for each of three different testing sessions (two in the morning vs. one in the afternoon). The velocity and acceleration derived by the accelerometers were compared with the velocity and acceleration derived from high-speed video footage (100 Hz). Validity was measured using Bland and Altman plots, R2, and the root of the mean of the squared difference (RMSe), while reliability was calculated using the coefficient of variation, R2, and repeated measures analysis of variance (ANOVA) tests. The GPS-based accelerometers under-reported kayak velocity by 0.14-0.19 m/s and acceleration by 1.67 m/s2 when compared to the video-derived measurements. The afternoon session showed the smallest difference, indicating a time-of-day effect on the measured velocity. This study highlights the need for sports utilising GPS-based accelerometers, such as minimaxX, for intra-stroke measurements to conduct sport-specific validity and reliability studies to ensure the accuracy of their data.
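
    The agreement statistics named above (Bland-Altman bias and limits of agreement, RMSe) reduce to a few lines. A minimal sketch; the velocity data are simulated to mimic the reported under-reporting of roughly 0.16 m/s:

    ```python
    import numpy as np

    def bland_altman_stats(device, criterion):
        d = np.asarray(device) - np.asarray(criterion)
        bias = d.mean()
        loa = 1.96 * d.std(ddof=1)         # 95% limits of agreement half-width
        rmse = np.sqrt((d ** 2).mean())    # root mean squared difference (RMSe)
        return bias, (bias - loa, bias + loa), rmse

    rng = np.random.default_rng(1)
    video = rng.uniform(3.5, 5.5, size=36)           # criterion velocities (m/s)
    gps = video - 0.16 + rng.normal(0, 0.05, 36)     # device under-reports ~0.16 m/s
    print(bland_altman_stats(gps, video))
    ```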

  10. A Human Reliability Based Usability Evaluation Method for Safety-Critical Software

    SciTech Connect

    Phillippe Palanque; Regina Bernhaupt; Ronald Boring; Chris Johnson

    2006-04-01

    Recent years have seen an increasing use of sophisticated interaction techniques, including in the field of safety-critical interactive software [8]. The use of such techniques has been required in order to increase the bandwidth between users and systems and thus to help them deal efficiently with increasingly complex systems. These techniques come from research and innovation done in the field of human-computer interaction (HCI). A significant effort is currently being undertaken by the HCI community in order to apply and extend current usability evaluation techniques to these new kinds of interaction techniques. However, very little has been done to improve the reliability of software offering these kinds of interaction techniques. Even testing basic graphical user interfaces remains a challenge that has rarely been addressed in the field of software engineering [9]. However, the non-reliability of interactive software can jeopardize usability evaluation by showing unexpected or undesired behaviors. The aim of this SIG is to provide a forum for both researchers and practitioners interested in testing interactive software. Our goal is to define a roadmap of activities to cross-fertilize usability and reliability testing of these kinds of systems to minimize duplicate efforts in both communities.

  11. Reliability and Validity of a Magnetic Resonance-Based Volumetric Analysis of the Intrinsic Foot Muscles

    PubMed Central

    Cheuy, Victor A.; Commean, Paul K.; Hastings, Mary K.; Mueller, Michael J.

    2015-01-01

    Purpose To describe a semi-automated program that will segment subcutaneous fat, muscle, and adipose tissue in the foot using magnetic resonance (MR) imaging, determine the reliability of the program between and within raters, and determine the validity of the program using MR phantoms. Materials and Methods MR images were acquired from 19 subjects with and without diabetes and peripheral neuropathy. Two raters segmented and measured volumes from single MR slices at the forefoot, midfoot, and hindfoot at two different times. Intra- and inter-rater correlation coefficients were determined. Muscle and fat MR phantoms of known volumes were measured by the program. Results Most ICC reliability values were over 0.950. Validity estimates comparing MR estimates and known volumes resulted in r2 values above 0.970 for all phantoms. The root mean square error was less than 5% for all phantoms. Conclusion Subcutaneous fat, lean muscle, and adipose tissue volumes in the foot can be quantified in a reliable and valid way. This program can be applied in future studies investigating the relationship of these foot structures to function in important pathologies including the neuropathic foot or other musculoskeletal problems. PMID:23450691

  12. Based on Weibull Information Fusion Analysis Semiconductors Quality the Key Technology of Manufacturing Execution Systems Reliability

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Hui; Tang, Ying-Chun; Dai, Kai

    2016-05-01

    Semiconductor material and product qualification rates are directly related to manufacturing costs and to the survival of the enterprise. A dynamic reliability growth analysis method is applied to study the reliability growth of the manufacturing execution system and thereby improve product quality. Referring to the classical Duane model assumptions and to the TGP tracking-growth-forecast programming model, a Weibull distribution model is established from the failure data. Combining the median-rank and average-rank methods, Weibull information-fusion reliability growth curves are fitted by linear regression and least-squares estimation. This model overcomes a weakness of the Duane model, namely the low accuracy of its MTBF point estimates; analysis of the failure data shows that the method is essentially consistent with the test-and-evaluation modeling process of the instance considered. Median ranks are used in statistics to estimate the distribution function of a random variable, which makes them well suited to problems with limited sample sizes, such as complex systems. This method therefore has great engineering application value.
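
    The median-rank step can be made concrete: Benard's approximation supplies plotting positions, and least squares on the linearized Weibull CDF yields the shape and scale estimates. A minimal sketch with invented failure data:

    ```python
    import numpy as np

    def weibull_median_rank_fit(failure_times):
        """Benard median ranks + least squares on
        ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)."""
        t = np.sort(np.asarray(failure_times, dtype=float))
        n = len(t)
        i = np.arange(1, n + 1)
        F = (i - 0.3) / (n + 0.4)          # Benard's median-rank approximation
        x = np.log(t)
        y = np.log(-np.log(1.0 - F))
        beta, intercept = np.polyfit(x, y, 1)
        eta = np.exp(-intercept / beta)    # scale (characteristic life)
        return beta, eta

    times = [180., 420., 640., 900., 1300., 1800., 2600.]
    print(weibull_median_rank_fit(times))  # (shape beta, scale eta)
    ```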

  13. Reliable change indices and standardized regression-based change score norms for evaluating neuropsychological change in children with epilepsy.

    PubMed

    Busch, Robyn M; Lineweaver, Tara T; Ferguson, Lisa; Haut, Jennifer S

    2015-06-01

    Reliable change indices (RCIs) and standardized regression-based (SRB) change score norms permit evaluation of meaningful changes in test scores following treatment interventions, like epilepsy surgery, while accounting for test-retest reliability, practice effects, score fluctuations due to error, and relevant clinical and demographic factors. Although these methods are frequently used to assess cognitive change after epilepsy surgery in adults, they have not been widely applied to examine cognitive change in children with epilepsy. The goal of the current study was to develop RCIs and SRB change score norms for use in children with epilepsy. Sixty-three children with epilepsy (age range: 6-16; M=10.19, SD=2.58) underwent comprehensive neuropsychological evaluations at two time points an average of 12 months apart. Practice effect-adjusted RCIs and SRB change score norms were calculated for all cognitive measures in the battery. Practice effects were quite variable across the neuropsychological measures, with the greatest differences observed among older children, particularly on the Children's Memory Scale and Wisconsin Card Sorting Test. There was also notable variability in test-retest reliabilities across measures in the battery, with coefficients ranging from 0.14 to 0.92. Reliable change indices and SRB change score norms for use in assessing meaningful cognitive change in children following epilepsy surgery are provided for measures with reliability coefficients above 0.50. This is the first study to provide RCIs and SRB change score norms for a comprehensive neuropsychological battery based on a large sample of children with epilepsy. Tables to aid in evaluating cognitive changes in children who have undergone epilepsy surgery are provided for clinical use. An Excel sheet to perform all relevant calculations is also available to interested clinicians or researchers.
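
    A practice-effect-adjusted RCI of the kind described reduces to a one-line formula once the test-retest reliability and baseline SD are known. A minimal sketch with illustrative numbers; the normative values are invented, not taken from the study:

    ```python
    import math

    def reliable_change_index(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
        """Practice-effect-adjusted RCI.

        sd_baseline: SD of baseline scores in the normative sample
        r_xx: test-retest reliability coefficient
        """
        sem = sd_baseline * math.sqrt(1.0 - r_xx)   # standard error of measurement
        se_diff = math.sqrt(2.0) * sem              # standard error of the difference
        return (x2 - x1 - practice_effect) / se_diff

    # Illustrative: retest 8 points higher, 4-point mean practice effect
    rci = reliable_change_index(x1=95, x2=103, sd_baseline=15, r_xx=0.85,
                                practice_effect=4.0)
    print(rci, abs(rci) > 1.645)   # 90% criterion for "reliable" change
    ```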

  14. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs), based on designing a number of self-adequate autonomous sub-MGs by adopting an MG clustering approach. In doing so, a multi-objective optimization problem is developed in which power loss reduction, voltage profile improvement and reliability enhancement are the objective functions. To solve the optimization problem a hybrid algorithm, named HS-GA, is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controllers. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Reliability of a Novel CBCT-Based 3D Classification System for Maxillary Canine Impactions in Orthodontics: The KPG Index

    PubMed Central

    Visconti, Luca; Martin, Conchita

    2013-01-01

    The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG Index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for images evaluation. PMID:24235889

  16. Reliability of a novel CBCT-based 3D classification system for maxillary canine impactions in orthodontics: the KPG index.

    PubMed

    Dalessandri, Domenico; Migliorati, Marco; Rubiano, Rachele; Visconti, Luca; Contardo, Luca; Di Lenarda, Roberto; Martin, Conchita

    2013-01-01

    The aim of this study was to evaluate both intra- and interoperator reliability of a radiological three-dimensional classification system (KPG index) for the assessment of degree of difficulty for orthodontic treatment of maxillary canine impactions. Cone beam computed tomography (CBCT) scans of fifty impacted canines, obtained using three different scanners (NewTom, Kodak, and Planmeca), were classified using the KPG index by three independent orthodontists. Measurements were repeated one month later. Based on these two sessions, several recommendations on KPG Index scoring were elaborated. After a joint calibration session, these recommendations were explained to nine orthodontists and the two measurement sessions were repeated. There was a moderate intrarater agreement in the precalibration measurement sessions. After the calibration session, both intra- and interrater agreement were almost perfect. Indexes assessed with Kodak Dental Imaging 3D module software showed a better reliability in z-axis values, whereas indexes assessed with Planmeca Romexis software showed a better reliability in x- and y-axis values. No differences were found between the CBCT scanners used. Taken together, these findings indicate that the application of the instructions elaborated during this study improved KPG index reliability, which was nevertheless variously influenced by the use of different software for images evaluation.

  17. Web-based tools can be used reliably to detect patients with major depressive disorder and subsyndromal depressive symptoms

    PubMed Central

    Lin, Chao-Cheng; Bai, Ya-Mei; Liu, Chia-Yih; Hsiao, Mei-Chun; Chen, Jen-Yeu; Tsai, Shih-Jen; Ouyang, Wen-Chen; Wu, Chia-hsuan; Li, Yu-Chuan

    2007-01-01

    Background Although depression has been regarded as a major public health problem, many individuals with depression still remain undetected or untreated. Despite the potential for Internet-based tools to greatly improve the success rate of screening for depression, their reliability and validity has not been well studied. Therefore the aim of this study was to evaluate the test-retest reliability and criterion validity of a Web-based system, the Internet-based Self-assessment Program for Depression (ISP-D). Methods The ISP-D to screen for major depressive disorder (MDD), minor depressive disorder (MinD), and subsyndromal depressive symptoms (SSD) was developed in traditional Chinese. Volunteers, 18 years and older, were recruited via the Internet and then assessed twice on the online ISP-D system to investigate the test-retest reliability of the test. They were subsequently prompted to schedule face-to-face interviews. The interviews were performed by the research psychiatrists using the Mini-International Neuropsychiatric Interview and the diagnoses made according to DSM-IV diagnostic criteria were used for the statistics of criterion validity. Kappa (κ) values were calculated to assess test-retest reliability. Results A total of 579 volunteer subjects were administered the test. Most of the subjects were young (mean age: 26.2 ± 6.6 years), female (77.7%), single (81.6%), and well educated (61.9% college or higher). The distributions of MDD, MinD, SSD and no depression specified were 30.9%, 7.4%, 15.2%, and 46.5%, respectively. The mean time to complete the ISP-D was 8.89 ± 6.77 min. One hundred and eighty-four of the respondents completed the retest (response rate: 31.8%). Our analysis revealed that the 2-week test-retest reliability for ISP-D was excellent (weighted κ = 0.801). Fifty-five participants completed the face-to-face interview for the validity study. The sensitivity, specificity, positive, and negative predictive values for major depressive

  18. Web-based tools can be used reliably to detect patients with major depressive disorder and subsyndromal depressive symptoms.

    PubMed

    Lin, Chao-Cheng; Bai, Ya-Mei; Liu, Chia-Yih; Hsiao, Mei-Chun; Chen, Jen-Yeu; Tsai, Shih-Jen; Ouyang, Wen-Chen; Wu, Chia-hsuan; Li, Yu-Chuan

    2007-04-10

    Although depression has been regarded as a major public health problem, many individuals with depression still remain undetected or untreated. Despite the potential for Internet-based tools to greatly improve the success rate of screening for depression, their reliability and validity has not been well studied. Therefore the aim of this study was to evaluate the test-retest reliability and criterion validity of a Web-based system, the Internet-based Self-assessment Program for Depression (ISP-D). The ISP-D to screen for major depressive disorder (MDD), minor depressive disorder (MinD), and subsyndromal depressive symptoms (SSD) was developed in traditional Chinese. Volunteers, 18 years and older, were recruited via the Internet and then assessed twice on the online ISP-D system to investigate the test-retest reliability of the test. They were subsequently prompted to schedule face-to-face interviews. The interviews were performed by the research psychiatrists using the Mini-International Neuropsychiatric Interview and the diagnoses made according to DSM-IV diagnostic criteria were used for the statistics of criterion validity. Kappa (kappa) values were calculated to assess test-retest reliability. A total of 579 volunteer subjects were administered the test. Most of the subjects were young (mean age: 26.2 +/- 6.6 years), female (77.7%), single (81.6%), and well educated (61.9% college or higher). The distributions of MDD, MinD, SSD and no depression specified were 30.9%, 7.4%, 15.2%, and 46.5%, respectively. The mean time to complete the ISP-D was 8.89 +/- 6.77 min. One hundred and eighty-four of the respondents completed the retest (response rate: 31.8%). Our analysis revealed that the 2-week test-retest reliability for ISP-D was excellent (weighted kappa = 0.801). Fifty-five participants completed the face-to-face interview for the validity study. The sensitivity, specificity, positive, and negative predictive values for major depressive disorder were 81.8% and
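
    The weighted kappa used for the two-administration agreement can be computed directly. A minimal sketch with made-up ordinal outcomes (0 = none, 1 = SSD, 2 = MinD, 3 = MDD), using scikit-learn:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical test-retest category assignments for 12 respondents
    test   = [3, 0, 1, 2, 3, 0, 0, 1, 3, 2, 0, 1]
    retest = [3, 0, 1, 2, 2, 0, 0, 1, 3, 2, 1, 1]
    print(cohen_kappa_score(test, retest, weights="linear"))
    ```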

  19. Experience-based design for integrating the patient care experience into healthcare improvement: Identifying a set of reliable emotion words.

    PubMed

    Russ, Lauren R; Phillips, Jennifer; Brzozowicz, Keely; Chafetz, Lynne A; Plsek, Paul E; Blackmore, C Craig; Kaplan, Gary S

    2013-12-01

    Experience-based design is an emerging method used to capture the emotional content of patient and family member healthcare experiences, and can serve as the foundation for patient-centered healthcare improvement. However, a core tool-the experience-based design questionnaire-requires words with consistent emotional meaning. Our objective was to identify and evaluate an emotion word set reliably categorized across the demographic spectrum as expressing positive, negative, or neutral emotions for experience-based design improvement work. We surveyed 407 patients, family members, and healthcare workers in 2011. Participants designated each of 67 potential emotion words as positive, neutral, or negative based on their emotional perception of the word. Overall agreement was assessed using the kappa statistic. Words were selected for retention in the final emotion word set based on 80% simple agreement on classification of meaning across subgroups. The participants were 47.9% (195/407) patients, 19.4% (33/407) family members and 32.7% (133/407) healthcare staff. Overall agreement adjusted for chance was moderate (k=0.55). However, agreement for positive (k=0.69) and negative emotions (k=0.68) was substantially higher, while agreement in the neutral category was low (k=0.11). There were 20 positive, 1 neutral, and 14 negative words retained for the final experience-based design emotion word set. We identified a reliable set of emotion words for experience questionnaires to serve as the foundation for patient-centered, experience-based redesign of healthcare. Incorporation of patient and family member perspectives in healthcare requires reliable tools to capture the emotional content of care touch points. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. On Reliable and Efficient Data Gathering Based Routing in Underwater Wireless Sensor Networks

    PubMed Central

    Liaqat, Tayyaba; Akbar, Mariam; Javaid, Nadeem; Qasim, Umar; Khan, Zahoor Ali; Javaid, Qaisar; Alghamdi, Turki Ali; Niaz, Iftikhar Azim

    2016-01-01

    This paper presents a cooperative routing scheme to improve data reliability. The proposed protocol achieves its objective, however, at the cost of surplus energy consumption. Thus, sink mobility is introduced to minimize the energy consumption of nodes, as the sink directly collects data from the network nodes at a minimized communication distance. We also present delay- and energy-optimized versions of our proposed RE-AEDG to further enhance its performance. Simulation results prove the effectiveness of the proposed RE-AEDG in terms of the selected performance metrics. PMID:27589750

  1. Reliability of associative recall based on data manipulations in phase encoded volume holographic storage systems

    NASA Astrophysics Data System (ADS)

    Berger, G.; Stumpe, M.; Höhne, M.; Denz, C.

    2005-10-01

    We investigate the characteristics of correlation signals accomplished by content addressing in a phase encoded volume holographic storage system under different realistic conditions. In particular, we explore two crucial cases with respect to the structure of the data. The first one deals with a scenario where only partial or defective data are available for content addressing. The second case takes similarities among the stored data sets into account, which significantly differ from their statistical correlation. For both cases we provide, for the first time, a theoretical approach and present experimental results employing phase-code multiplexing. Finally, we discuss the reliability of the employed methods.

  2. Test-Retest Reliability of an Automated Infrared-Assisted Trunk Accelerometer-Based Gait Analysis System

    PubMed Central

    Hsu, Chia-Yu; Tsai, Yuh-Show; Yau, Cheng-Shiang; Shie, Hung-Hai; Wu, Chu-Ming

    2016-01-01

    The aim of this study was to determine the test-retest reliability of an automated infrared-assisted, trunk accelerometer-based gait analysis system for measuring gait parameters of healthy subjects in a hospital. Thirty-five participants (28 of them females; age range, 23–79 years) performed a 5-m walk twice using an accelerometer-based gait analysis system with infrared assist. Measurements of spatiotemporal gait parameters (walking speed, step length, and cadence) and trunk control (gait symmetry, gait regularity, acceleration root mean square (RMS), and acceleration root mean square ratio (RMSR)) were recorded in two separate walking tests conducted 1 week apart. Relative and absolute test-retest reliability was determined by calculating the intra-class correlation coefficient (ICC3,1) and smallest detectable difference (SDD), respectively. The test-retest reliability was excellent for walking speed (ICC = 0.87, 95% confidence interval = 0.74–0.93, SDD = 13.4%), step length (ICC = 0.81, 95% confidence interval = 0.63–0.91, SDD = 12.2%), cadence (ICC = 0.81, 95% confidence interval = 0.63–0.91, SDD = 10.8%), and trunk control (step and stride regularity in anterior-posterior direction, acceleration RMS and acceleration RMSR in medial-lateral direction, and acceleration RMS and stride regularity in vertical direction). An automated infrared-assisted, trunk accelerometer-based gait analysis system is a reliable tool for measuring gait parameters in the hospital environment. PMID:27455281
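
    The absolute-reliability metric used here, the smallest detectable difference, follows directly from the ICC and the pooled SD. A minimal sketch with illustrative numbers:

    ```python
    import math

    def smallest_detectable_difference(sd_pooled, icc):
        """SDD = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
        sem = sd_pooled * math.sqrt(1.0 - icc)
        return 1.96 * math.sqrt(2.0) * sem

    # Illustrative: walking-speed SD of 0.15 m/s and ICC = 0.87
    print(smallest_detectable_difference(0.15, 0.87))   # in m/s
    ```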

  3. Reliable Execution Based on CPN and Skyline Optimization for Web Service Composition

    PubMed Central

    Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality-of-service (QoS) properties are crucial for selecting the web services that take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that results from reducing individual scores to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets. PMID:23935431
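
    The skyline step is plain Pareto dominance over QoS vectors. A minimal sketch, assuming every dimension is oriented so that lower is better (e.g., response time and failure probability); the candidate values are illustrative:

    ```python
    def dominates(a, b):
        """a dominates b: no worse in every QoS dimension, strictly better
        in at least one. Convention: all dimensions are cost-like."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def skyline(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # Candidate services as (response time in ms, failure probability)
    candidates = [(120, 0.02), (90, 0.05), (200, 0.01), (95, 0.04), (300, 0.03)]
    print(skyline(candidates))   # (300, 0.03) is dominated and dropped
    ```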

  4. Reliable execution based on CPN and skyline optimization for Web service composition.

    PubMed

    Chen, Liping; Ha, Weitao; Zhang, Guojun

    2013-01-01

    With the development of SOA, complex problems can be solved by combining available individual services and ordering them to best suit the user's requirements. Web service composition is widely used in business environments. Given the inherent autonomy and heterogeneity of component web services, it is difficult to predict the behavior of the overall composite service. Therefore, transactional properties and nonfunctional quality-of-service (QoS) properties are crucial for selecting the web services that take part in the composition. Transactional properties ensure the reliability of the composite Web service, and QoS properties can identify the best candidate web services from a set of functionally equivalent services. In this paper we define a Colored Petri Net (CPN) model which involves the transactional properties of web services in the composition process. To ensure reliable and correct execution, unfolding processes of the CPN are followed. The execution of a transactional composite Web service (TCWS) is formalized by CPN properties. To identify the services with the best QoS properties from the candidate service sets formed in the TCSW-CPN, we use skyline computation to retrieve the dominant Web services. This overcomes the significant information loss that results from reducing individual scores to an overall similarity. We evaluate our approach experimentally using both real and synthetically generated datasets.

  5. Resistive switching memories based on metal oxides: mechanisms, reliability and scaling

    NASA Astrophysics Data System (ADS)

    Ielmini, Daniele

    2016-06-01

    With the explosive growth of digital data in the era of the Internet of Things (IoT), fast and scalable memory technologies are being researched for data storage and data-driven computation. Among the emerging memories, resistive switching memory (RRAM) raises strong interest due to its high speed, high density as a result of its simple two-terminal structure, and low cost of fabrication. The scaling projection of RRAM, however, requires a detailed understanding of switching mechanisms and there are potential reliability concerns regarding small device sizes. This work provides an overview of the current understanding of bipolar-switching RRAM operation, reliability and scaling. After reviewing the phenomenological and microscopic descriptions of the switching processes, the stability of the low- and high-resistance states will be discussed in terms of conductance fluctuations and evolution in 1D filaments containing only a few atoms. The scaling potential of RRAM will finally be addressed by reviewing the recent breakthroughs in multilevel operation and 3D architecture, making RRAM a strong competitor among future high-density memory solutions.

  6. Validity and reliability of the Omron HJ-303 tri-axial accelerometer-based pedometer.

    PubMed

    Steeves, Jeremy A; Tyo, Brian M; Connolly, Christopher P; Gregory, Douglas A; Stark, Nyle A; Bassett, David R

    2011-09-01

    This study compared the validity of the new Omron HJ-303 piezoelectric pedometer and 2 other pedometers (Sportline Traq and Yamax SW200). To examine the effect of speed, 60 subjects walked on a treadmill at 2, 3, and 4 mph. Twenty subjects also ran at 6, 7, and 8 mph. To test lifestyle activities, 60 subjects performed front-back-side-side stepping, elliptical machine exercise, and stair climbing/descending. Twenty others performed ballroom dancing. To test device reliability, sixty participants completed five 100-step trials while wearing 5 different sets of the devices. Actual steps were determined using a hand tally counter. Significant differences existed among pedometers (P < .05). For walking, the Omron pedometers were the most valid. The Sportline overestimated and the Yamax underestimated steps (P < .05). Worn on the waist or in a backpack, the Omron device and the Sportline were valid for running. The Omron was valid for 3 activities (elliptical machine, ascending stairs, and descending stairs). The Sportline overestimated all of these activities, and the Yamax was only valid for descending stairs. The Omron and Yamax were both valid and reliable in the 100-step trials. The Omron HJ-303, worn on the waist, appeared to be the most valid of the 3 pedometers.

  7. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    NASA Astrophysics Data System (ADS)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-12-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that the model was able to generate information that practicing maintenance engineers can apply in making more informed decisions on technicians' management.

  8. Multiplication factor for open ground storey buildings-a reliability based evaluation

    NASA Astrophysics Data System (ADS)

    Haran Pragalath, D. C.; Avadhoot, Bhosale; Robin, Davis P.; Pradip, Sarkar

    2016-06-01

    Open Ground Storey (OGS) framed buildings, where the ground storey is kept open without infill walls, mainly to facilitate parking, are increasingly common in urban areas. However, the vulnerability of this type of building has been exposed in past earthquakes. OGS buildings are conventionally designed by a bare frame analysis that ignores the stiffness of the infill walls present in the upper storeys, but doing so underestimates the inter-storey drift (ISD) and thereby the force demand in the ground storey columns. Therefore, a multiplication factor (MF) is introduced in various international codes to estimate the design forces (bending moments and shear forces) in the ground storey columns. This study focuses on the seismic performance of typical OGS buildings designed by means of MFs. The probabilistic seismic demand models, fragility curves, reliability and cost indices for various frame models, including bare frames and fully infilled frames, are developed. It is found that the MF scheme suggested by the Israel code is better than those of other international codes in terms of reliability and cost.

  9. Coloured Letters and Numbers (CLaN): a reliable factor-analysis based synaesthesia questionnaire.

    PubMed

    Rothen, Nicolas; Tsakanikos, Elias; Meier, Beat; Ward, Jamie

    2013-09-01

    Synaesthesia is a heterogeneous phenomenon, even when considering one particular sub-type. The purpose of this study was to design a reliable and valid questionnaire for grapheme-colour synaesthesia that captures this heterogeneity. By means of a large sample of 628 synaesthetes and a factor analysis, we created the Coloured Letters and Numbers (CLaN) questionnaire, with 16 items loading on 4 different factors (i.e., localisation, automaticity/attention, deliberate use, and longitudinal changes). These factors were externally validated with tests which are widely used in the field of synaesthesia research. The questionnaire showed good test-retest reliability and construct validity (i.e., internally and externally). Our findings are discussed in the light of current theories and new ideas in synaesthesia research. More generally, the questionnaire is a useful tool which can be widely used in synaesthesia research to reveal the influence of individual differences on various performance measures and will be useful in generating new hypotheses.

  10. Test-retest reliability and responsiveness of the Barthel Index-based Supplementary Scales in patients with stroke.

    PubMed

    Lee, Ya C; Yu, Wan H; Hsueh, I P; Chen, Sheng S; Hsieh, Ching L

    2017-02-08

    A lack of evidence on the test-retest reliability and responsiveness limits the utility of the BI-based Supplementary Scales (BI-SS) in both clinical and research settings. To examine the test-retest reliability and responsiveness of the BI-based Supplementary Scales (BI-SS) in patients with stroke. A repeated-assessments design (1 week apart) was used to examine the test-retest reliability of the BI-SS. For the responsiveness study, the participants were assessed with the BI-SS and BI (treated as an external criterion) at admission to and discharge from rehabilitation wards. Seven outpatient rehabilitation units and one inpatient rehabilitation unit. Eighty-four outpatients with chronic stroke participated in the test-retest reliability study. Fifty-seven inpatients completed baseline and follow-up assessments in the responsiveness study. For the test-retest reliability study, the values of the intra-class correlation coefficient and the overall percentage of minimal detectable change for the Ability Scale and Self-perceived Difficulty Scale were 0.97, 12.8%, and 0.78, 35.8%, respectively. For the responsiveness study, the standardized effect size and standardized response mean (representing internal responsiveness) of the Ability Scale and Self-perceived Difficulty Scale were 1.17 and 1.56, and 0.78 and 0.89, respectively. Regarding external responsiveness, the change in score of the Ability Scale had significant and moderate association with that of the BI (r=0.61, p<0.001). The change in score of the Self-perceived Difficulty Scale had non-significant and weak association with that of the BI (r=0.23, p=0.080). The Ability Scale of the BI-SS has satisfactory test-retest reliability and sufficient responsiveness for patients with stroke. However, the Self-perceived Difficulty Scale of the BI-SS has substantial random measurement error and insufficient external responsiveness, which may affect its utility in clinical settings. The findings of this study provide

  11. Reliability and Validity of a Novel Internet-Based Battery to Assess Mood and Cognitive Function in the Elderly

    PubMed Central

    Myers, Candice A.; Keller, Jeffrey N.; Allen, H. Raymond; Brouillette, Robert M.; Foil, Heather; Davis, Allison B.; Greenway, Frank L.; Johnson, William D.; Martin, Corby K.

    2016-01-01

    Dementia is a chronic condition in the elderly and depression is often a concurrent symptom. As populations continue to age, accessible and useful tools to screen for cognitive function and its associated symptoms in elderly populations are needed. The aim of this study was to test the reliability and validity of a new internet-based assessment battery for screening mood and cognitive function in an elderly population. Specifically, the Helping Hand Technology (HHT) assessments for depression (HHT-D) and global cognitive function (HHT-G) were evaluated in a sample of 57 elderly participants (22 male, 35 female) aged 59–85 years. The study sample was categorized into three groups: 1) dementia (n = 8; Mini-Mental State Exam (MMSE) score 10–24), 2) mild cognitive impairment (n = 24; MMSE score 25–28), and 3) control (n = 25; MMSE score 29–30). Test-retest reliability (Pearson correlation coefficient, r) and internal consistency reliability (Cronbach’s alpha, α) of the HHT-D and HHT-G were assessed. Validity of the HHT-D and HHT-G was tested via comparison (Pearson r) to commonly used pencil-and-paper based assessments: HHT-D versus the Geriatric Depression Scale (GDS) and HHT-G versus the MMSE. Good test-retest (r = 0.80; p < 0.0001) and acceptable internal consistency reliability (α = 0.73) of the HHT-D were established. Moderate support for the validity of the HHT-D was obtained (r = 0.60 between the HHT-D and GDS; p < 0.0001). Results indicated good test-retest (r = 0.87; p < 0.0001) and acceptable internal consistency reliability (α = 0.70) of the HHT-G. Validity of the HHT-G was supported (r = 0.71 between the HHT-G and MMSE; p < 0.0001). In summary, the HHT-D and HHT-G were found to be reliable and valid computerized assessments to screen for depression and cognitive status, respectively, in an elderly sample. PMID:27589529
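
    The internal-consistency statistic used here, Cronbach's alpha, is a short computation over an items matrix. A minimal sketch with simulated data standing in for a five-item scale:

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) score matrix."""
        x = np.asarray(items, dtype=float)
        k = x.shape[1]
        item_vars = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(2)
    latent = rng.normal(size=(60, 1))                  # shared construct
    items = latent + rng.normal(0, 0.7, size=(60, 5))  # 5 noisy indicators
    print(round(cronbach_alpha(items), 2))
    ```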

  12. A reliable low-cost wireless and wearable gait monitoring system based on a plastic optical fibre sensor

    NASA Astrophysics Data System (ADS)

    Bilro, L.; Oliveira, J. G.; Pinto, J. L.; Nogueira, R. N.

    2011-04-01

    A wearable and wireless system designed to evaluate quantitatively the human gait is presented. It allows knee sagittal motion monitoring over long distances and periods with a portable and low-cost package. It is based on the measurement of transmittance changes when a side-polished plastic optical fibre is bent. Four voluntary healthy subjects, on five different days, were tested in order to assess inter-day and inter-subject reliability. Results have shown that this technique is reliable, allows a one-time calibration and is suitable in the diagnosis and rehabilitation of knee injuries or for monitoring the performance of competitive athletes. Environmental testing was accomplished in order to study the influence of different temperatures and humidity conditions.

  13. Validity and reliability of smartphone magnetometer-based goniometer evaluation of shoulder abduction--A pilot study.

    PubMed

    Johnson, Linda B; Sumner, Sean; Duong, Tina; Yan, Posu; Bajcsy, Ruzena; Abresch, R Ted; de Bie, Evan; Han, Jay J

    2015-12-01

    Goniometers are commonly used by physical therapists to measure range-of-motion (ROM) in the musculoskeletal system. These measurements are used to assist in diagnosis and to help monitor treatment efficacy. With newly emerging technologies, smartphone-based applications are being explored for measuring joint angles and movement. This pilot study investigates the intra- and inter-rater reliability as well as the concurrent validity of a newly developed smartphone magnetometer-based goniometer (MG) application for measuring passive shoulder abduction in both sitting and supine positions, and compares it against the traditional universal goniometer (UG). This is a comparative study with a repeated-measurement design. Three physical therapists utilized both the smartphone MG and a traditional UG to measure various angles of passive shoulder abduction in a healthy subject, whose shoulder was positioned in eight different positions with pre-determined degrees of abduction while seated or supine. Each therapist was blinded to the measured angles. Concordance correlation coefficients (CCCs), Bland-Altman plotting methods, and analysis of variance (ANOVA) were used for statistical analyses. Both the traditional UG and the smartphone MG were reliable in repeated measures of standardized joint angle positions (average CCC > 0.997), with similar variability in both measurement tools (standard deviation (SD) ± 4°). Agreement between the UG and MG measurements was greater than 0.99 in all positions. Our results show that the smartphone MG has reliability equivalent to the traditional UG when measuring passive shoulder abduction ROM. With concordant measures and comparable reliability to the UG, the newly developed MG application shows potential as a useful tool to assess joint angles. Published by Elsevier Ltd.
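
    The headline statistic here, Lin's concordance correlation coefficient, penalizes both poor correlation and systematic offset. A minimal sketch with simulated paired angle measurements; the data are invented for illustration:

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient (sample version)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1)
                          + (x.mean() - y.mean()) ** 2)

    ug = np.array([30., 45., 60., 75., 90., 105., 120., 135.])  # universal goniometer
    mg = ug + np.random.default_rng(3).normal(0, 2, ug.size)    # smartphone app
    print(round(concordance_ccc(ug, mg), 3))
    ```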

  14. Structural reliability of alumina-, feldspar-, leucite-, mica- and zirconia-based ceramics.

    PubMed

    Tinschert, J; Zwez, D; Marx, R; Anusavice, K J

    2000-09-01

    The objective of this study was to test the hypothesis that industrially manufactured ceramic materials, such as Cerec Mark II and Zirconia-TZP, have a smaller range of fracture strength variation and therefore greater structural reliability than laboratory-processed dental ceramic materials. Thirty bar specimens per material were prepared and tested. The four-point bend test was used to determine the flexure strength of all ceramic materials. The fracture stress values were analyzed by Weibull analysis to determine the Weibull modulus values (m) and the 1 and 5% probabilities of failure. The mean flexure strength and standard deviation values (MPa +/- SD) were: Cerec Mark II, 86.3+/-4.3; Dicor, 70.3+/-12.2; In-Ceram Alumina, 429.3+/-87.2; IPS Empress, 83.9+/-11.3; Vitadur Alpha Core, 131.0+/-9.5; Vitadur Alpha Dentin, 60.7+/-6.8; Vita VMK 68, 82.7+/-10.0; and Zirconia-TZP, 913.0+/-50.2. There was no statistically significant difference among the flexure strengths of Cerec Mark II, Dicor, IPS Empress, Vitadur Alpha Dentin, and Vita VMK 68 ceramics (p>0.05). The highest Weibull moduli were associated with Cerec Mark II and Zirconia-TZP ceramics (23.6 and 18.4). Dicor glass-ceramic and In-Ceram Alumina had the lowest m values (5.5 and 5.7), whereas intermediate values were observed for IPS Empress, Vita VMK 68, Vitadur Alpha Dentin and Vitadur Alpha Core ceramics (8.6, 8.9, 10.0 and 13.0, respectively). Except for the In-Ceram Alumina, Vitadur Alpha and Zirconia-TZP core ceramics, most of the investigated ceramic materials fabricated under the conditions of a dental laboratory were not stronger or more structurally reliable than the Vita VMK 68 veneering porcelain. Only Cerec Mark II and Zirconia-TZP specimens, which were prepared from industrially optimized ceramic materials, exhibited m values greater than 18. Hence, we conclude that industrially prepared ceramics are more structurally reliable materials for dental applications although CAD

  15. MNOS stack for reliable, low optical loss, Cu based CMOS plasmonic devices.

    PubMed

    Emboras, Alexandros; Najar, Adel; Nambiar, Siddharth; Grosse, Philippe; Augendre, Emmanuel; Leroux, Charles; de Salvo, Barbara; de Lamaestre, Roch Espiau

    2012-06-18

    We study the electro-optical properties of a Metal-Nitride-Oxide-Silicon (MNOS) stack for use in CMOS-compatible plasmonic active devices. We show that the insertion of an ultrathin stoichiometric Si(3)N(4) layer in a MOS stack leads to an increase in the electrical reliability of a copper-gate MNOS capacitance from 50 to 95%, thanks to a diffusion barrier effect, while preserving the low optical losses brought by the use of copper as the plasmon-supporting metal. An experimental investigation is undertaken at wafer scale using CMOS standard processes of the LETI foundry. Optical transmission measurements conducted in an MNOS channel waveguide configuration coupled to standard silicon photonics circuitry confirm the very low optical losses (0.39 dB.μm(-1)), in good agreement with predictions using ellipsometric optical constants of Cu.

  16. A Novel Evaluation Method for Building Construction Project Based on Integrated Information Entropy with Reliability Theory

    PubMed Central

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes of the building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses the quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects. PMID:23533352
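
    One common way information entropy enters such index-weighting schemes is the entropy-weight method: indexes whose scores vary more across schemes carry more weight in the synthesis score. A minimal sketch under that assumption; the decision matrix is invented, and this is not necessarily the paper's exact formulation:

    ```python
    import numpy as np

    # Decision matrix: m schemes x n indexes (cost, progress, quality, safety),
    # already normalized so that higher is better. Values are illustrative.
    X = np.array([[0.8, 0.7, 0.9, 0.6],
                  [0.6, 0.9, 0.7, 0.8],
                  [0.9, 0.6, 0.8, 0.7]])

    P = X / X.sum(axis=0)                          # column-wise proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per index, in [0, 1]
    w = (1 - E) / (1 - E).sum()                    # entropy weights
    scores = X @ w                                 # weighted synthesis score
    print("weights:", w.round(3), "best scheme:", int(scores.argmax()) + 1)
    ```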

  17. A novel evaluation method for building construction project based on integrated information entropy with reliability theory.

    PubMed

    Bai, Xiao-ping; Zhang, Xi-wei

    2013-01-01

    Selecting construction schemes of the building engineering project is a complex multiobjective optimization decision process, in which many indexes need to be selected to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes, uses the quantitative method for the cost index, uses integrated qualitative and quantitative methodologies for the progress, quality, and safety indexes, and integrates engineering economics, reliability theories, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing score values of all order indexes, computing the synthesis score, sorting all selected schemes, and making the analysis and decision. The presented method can offer valuable references for risk computation in building construction projects.

  18. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid-1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  19. Reliability analysis for determining performance of barrage based on gates operation

    NASA Astrophysics Data System (ADS)

    Adiningrum, C.; Hadihardaja, I. K.

    2017-06-01

    Some rivers located on flat-slope topography, such as the Cilemahabang and Ciherang rivers in the Cilemahabang watershed, Bekasi regency, West Java, are susceptible to flooding. The inundation mostly happens near a barrage in the middle and downstream reaches of the Cilemahabang watershed, namely at the Cilemahabang and Caringin barrages. Barrages, or gated weirs, are difficult to operate since the gates must be maintained and operated properly under any circumstances. Therefore, a reliability analysis of the gate operation is necessary to determine the performance of the barrage with respect to the number of gates opened and the gate opening heights. The First Order Second Moment (FOSM) method was used to determine the performance via the reliability index (β) and the probability of failure (risk). It was found that for the Cilemahabang Barrage, the number of gates to be opened under a load (L), representing the peak discharge derived from various rainfall scenarios (P), is: one gate with opening height h = 1 m for Preal, two gates (h = 1 m and h = 1.5 m) for P50, and three gates (each with h = 2.5 m) for P100. For the Caringin Barrage, the results are a minimum of three gates opened (each with h = 2.5 m) for Preal, five gates (each with h = 2.5 m) for P50, and six gates (each with h = 2.5 m) for P100. It can be concluded that a greater load (L) needs a greater resistance (R) to counterbalance it. Resistance can be added by increasing the number of gates opened and the gate opening height. A higher number of gates opened leads to a decrease in the water level upstream of the barrage and less risk of overflow.
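
    In FOSM form, the reliability index for the margin Z = R - L with independent load and resistance reduces to a one-liner. A minimal sketch; the discharge statistics below are invented for illustration:

    ```python
    import math
    from statistics import NormalDist

    def fosm_reliability(mu_r, sigma_r, mu_l, sigma_l):
        """First Order Second Moment index for Z = R - L (independent R, L)."""
        beta = (mu_r - mu_l) / math.sqrt(sigma_r**2 + sigma_l**2)
        p_fail = NormalDist().cdf(-beta)
        return beta, p_fail

    # Illustrative: gate discharge capacity R vs. flood load L (m^3/s)
    print(fosm_reliability(mu_r=450.0, sigma_r=40.0, mu_l=330.0, sigma_l=55.0))
    ```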

  20. Reliability and validity of the inertial sensor-based Timed "Up and Go" test in individuals affected by stroke.

    PubMed

    Wüest, Seline; Massé, Fabien; Aminian, Kamiar; Gonzenbach, Roman; de Bruin, Eling D

    2016-01-01

    The instrumented Timed "Up and Go" test (iTUG) has the potential for playing an important role in providing clinically useful information regarding an individual's balance and mobility that cannot be derived from the original single-outcome Timed "Up and Go" test protocol. The purpose of this study was to determine the reliability and validity of the iTUG using body-fixed inertial sensors in people affected by stroke. For test-retest reliability analysis, 14 individuals with stroke and 25 nondisabled elderly patients were assessed. For validity analysis, an age-matched comparison of 12 patients with stroke and 12 nondisabled controls was performed. Out of the 14 computed iTUG metrics, the majority showed excellent test-retest reliability expressed by high intraclass correlation coefficients (range 0.431-0.994) together with low standard error of measurement and smallest detectable difference values. Bland-Altman plots demonstrated good agreement between two repeated measurements. Significant differences between patients with stroke and nondisabled controls were found in 9 of 14 iTUG parameters analyzed. Consequently, these results warrant the future application of the inertial sensor-based iTUG test for the assessment of physical deficits poststroke in longitudinal study designs.

  1. DEGRADATION SUSCEPTIBILITY METRICS AS THE BASES FOR BAYESIAN RELIABILITY MODELS OF AGING PASSIVE COMPONENTS AND LONG-TERM REACTOR RISK

    SciTech Connect

    Unwin, Stephen D.; Lowry, Peter P.; Toyooka, Michael Y.; Ford, Benjamin E.

    2011-07-17

    Conventional probabilistic risk assessments (PRAs) are not well-suited to addressing long-term reactor operations. Since passive structures, systems and components are among those for which refurbishment or replacement can be least practical, they might be expected to contribute increasingly to risk in an aging plant. Yet, passives receive limited treatment in PRAs. Furthermore, PRAs produce only snapshots of risk based on the assumption of time-independent component failure rates. This assumption is unlikely to be valid in aging systems. The treatment of aging passive components in PRA does present challenges. First, service data required to quantify component reliability models are sparse, and this problem is exacerbated by the greater data demands of age-dependent reliability models. A compounding factor is that there can be numerous potential degradation mechanisms associated with the materials, design, and operating environment of a given component. This deepens the data problem since the risk-informed management of materials degradation and component aging will demand an understanding of the long-term risk significance of individual degradation mechanisms. In this paper we describe a Bayesian methodology that integrates the metrics of materials degradation susceptibility being developed under the Nuclear Regulatory Commission's Proactive Management of Materials of Degradation Program with available plant service data to estimate age-dependent passive component reliabilities. Integration of these models into conventional PRA will provide a basis for materials degradation management informed by the predicted long-term operational risk.
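
    The Bayesian integration described is naturally sketched as a conjugate gamma-Poisson update, with the degradation-susceptibility judgment shaping the prior. The numbers below are invented for illustration, and a constant failure rate is assumed for brevity (the program's models are age-dependent):

    ```python
    # Conjugate gamma-Poisson update of a passive component's failure rate:
    # prior Gamma(a0, b0) encodes the susceptibility-metric judgment, and
    # sparse service data (n failures in T component-years) update it.
    a0, b0 = 0.5, 1000.0             # prior: mean a0/b0 = 5e-4 per year
    n_failures, T_years = 2, 1800.0  # pooled plant service data (illustrative)

    a, b = a0 + n_failures, b0 + T_years
    print("posterior mean failure rate:", a / b, "per year")
    ```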

  2. The Korean Version of the University of California San Diego Performance-based Skills Assessment: Reliability and Validity.

    PubMed

    Kim, Sung-Jin; Kim, Jung-Min; Shim, Joo-Cheol; Seo, Beom-Joo; Jung, Sung-Soo; Ryu, Jeoung-Whan; Seo, Young-Soo; Lee, Yu-Cheol; Moon, Jung-Joon; Jeon, Dong-Wook; Park, Kyoung-Duck; Jung, Do-Un

    2017-08-31

    The study's aim was to develop and standardize a Korean version of the University of California San Diego Performance-based Skills Assessment (K-UPSA), which is used to evaluate the daily living function of patients with schizophrenia. Study participants were 78 patients with schizophrenia and 27 demographically matched healthy controls. We evaluated the clinical states and cognitive functions to verify K-UPSA's reliability and validity. For clinical states, the Positive and Negative Syndrome Scale, Clinical Global Impression-Schizophrenia scale, and Social and Occupational Functioning Assessment Scale and Schizophrenia Quality of Life Scale-fourth revision were used. The Schizophrenia Cognition Rating Scale, Short-form of Korean-Wechsler Adult Intelligence Scale, and Wisconsin Card Sorting Test were used to assess cognitive function. The K-UPSA had statistically significant reliability and validity. The K-UPSA has high internal consistency (Cronbach's alpha, 0.837) and test-retest reliability (intra-class correlation coefficient, 0.381-0.792; p<0.001). The K-UPSA had significant discriminant validity (p<0.001). Significant correlations between the K-UPSA's scores and most of the scales and tests listed above demonstrated K-UPSA's concurrent validity (p<0.001). The K-UPSA is useful to evaluate the daily living function in Korean patients with schizophrenia.

  3. Scalability and reliability issues of Ti/HfOx-based 1T1R bipolar RRAM: Occurrence, mitigation, and solution

    NASA Astrophysics Data System (ADS)

    Rahaman, Sk. Ziaur; Lee, Heng-Yuan; Chen, Yu-Sheng; Lin, Yu-De; Chen, Pang-Shiu; Chen, Wei-Su; Wang, Pei-Hua

    2017-05-01

    Scalability and reliability issues are the dominant obstacles to the development of resistive switching memory (RRAM) technology. Owing to the excellent memory performance of Ti/HfOx-based filamentary-type bipolar RRAM and its process compatibility with current CMOS technology, its scalability and reliability issues are investigated here. Towards this goal, we demonstrate that there exists a clear correlation between the transistor and the memory cell, which ultimately limits scaling in terms of the operating current and the size of the transistor, as well as the performance of the Ti/HfOx-based 1T1R bipolar RRAM. Owing to the resemblance between complementary resistive switching (CRS) in a single memory stack and bipolar resistive switching, Ti/HfOx-based bipolar RRAM suffers from a resistance-pinning (RP) issue, whereby the minimum resistance during the first RESET operation is always pinned below 20 kΩ; this occurs through the interaction between the transistor and the memory cell during the FORMING process. However, a sufficiently low FORMING voltage can mitigate the RP issue, and an alternative Ta buffer layer over the HfOx dielectric is proposed to prevent the activation of self-CRS in the memory cell during the FORMING process.

  4. Reliable Mixed H∞ and Passivity-Based Control for Fuzzy Markovian Switching Systems With Probabilistic Time Delays and Actuator Failures.

    PubMed

    Sakthivel, Rathinasamy; Selvi, Subramaniam; Mathiyalagan, Kalidass; Shi, Peng

    2015-12-01

    This paper is concerned with the problem of reliable mixed H∞ and passivity-based control for a class of stochastic Takagi-Sugeno (TS) fuzzy systems with Markovian switching and probabilistic time-varying delays. Unlike existing works, the H∞ and passivity control problem with probabilistic occurrence of time-varying delays and actuator failures is considered in a unified framework, which is more general in some practical situations. The main aim of this paper is to design a reliable mixed H∞ and passivity-based controller such that the stochastic TS fuzzy system with Markovian switching is stochastically stable with a prescribed mixed H∞ and passivity performance level γ > 0. Based on a Lyapunov-Krasovskii functional (LKF) involving the lower and upper bounds of the probabilistic time delay and a convex combination technique, a new set of delay-dependent sufficient conditions in terms of linear matrix inequalities (LMIs) is established for obtaining the required result. Finally, a numerical example based on the modified truck-trailer model is given to demonstrate the effectiveness and applicability of the proposed design techniques.

  5. Nanowire growth process modeling and reliability models for nanodevices

    NASA Astrophysics Data System (ADS)

    Fathi Aghdam, Faranak

    This work is an early attempt that uses a physical-statistical modeling approach to study selective nanowire growth for the improvement of process yield. In the second research work, the reliability of nano-dielectrics is investigated. As electronic devices get smaller, reliability issues pose new challenges due to unknown underlying physics of failure (i.e., failure mechanisms and modes). This necessitates new reliability analysis approaches for nano-scale devices. One of the most important nano-devices is the transistor, which is subject to various failure mechanisms. Dielectric breakdown is known to be the most critical one and has become a major barrier to reliable circuit design at the nano-scale. Due to the need for aggressive downscaling of transistors, dielectric films are being made extremely thin, and this has led to the adoption of high-permittivity (k) dielectrics as an alternative to the widely used SiO2 in recent years. Since most time-dependent dielectric breakdown test data on bilayer stacks show significant deviations from a Weibull trend, we have proposed two new approaches to modeling the time to breakdown of bilayer high-k dielectrics. In the first approach, we have used a marked space-time self-exciting point process to model the defect generation rate. A simulation algorithm is used to generate defects within the dielectric space, and an optimization algorithm is employed to minimize the Kullback-Leibler divergence between the empirical distribution obtained from the real data and the one based on the simulated data, in order to find the best parameter values and to predict the total time to failure. The novelty of the presented approach lies in using a conditional intensity for trap generation in the dielectric that is a function of time, space, and the size of previous defects. In the second approach, a k-out-of-n system framework is proposed to estimate the total failure time after the generation of more than one soft breakdown.
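
    The k-out-of-n framework mentioned above lends itself to a compact illustration. A minimal sketch, with placeholder Weibull breakdown times standing in for the fitted defect-generation process: the stack fails once n - k + 1 soft breakdowns have accumulated.

    ```python
    import numpy as np

    # Hedged sketch: Monte Carlo estimate of total failure time for a
    # k-out-of-n dielectric stack. All parameters are hypothetical.
    rng = np.random.default_rng(0)
    n, k = 10, 8                   # system survives while >= k cells are intact
    shape, scale = 1.5, 100.0      # placeholder Weibull parameters (a.u.)

    def system_failure_time():
        cell_times = scale * rng.weibull(shape, size=n)
        # total failure at the (n - k + 1)-th earliest soft breakdown
        return np.sort(cell_times)[n - k]

    samples = np.array([system_failure_time() for _ in range(10_000)])
    print(f"median time to total breakdown ~ {np.median(samples):.1f} (a.u.)")
    ```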

  6. Reliability of video-based quantification of the knee- and hip angle at foot strike during running.

    PubMed

    Damsted, Camma; Nielsen, Rasmus Oestergaard; Larsen, Lars Henrik

    2015-04-01

    In clinical practice, joint kinematics during running are primarily quantified by two-dimensional (2D) video recordings and motion-analysis software. The applicability of this approach depends on the clinicians' ability to quantify kinematics in a reliable manner. The reliability of quantifying knee and hip angles at foot strike has not been investigated. The aim was to investigate the intra- and inter-rater reliability, within and between days, of clinicians' ability to quantify the knee and hip angles at foot strike during running. Eighteen recreational runners were recorded twice using a clinical 2D video setup during treadmill running. Two blinded raters quantified joint angles on each video twice with freeware motion-analysis software (Kinovea 0.8.15). The range from the lower to the upper limit of the 95% prediction interval varied from three to eight degrees (within day) and nine to 14 degrees (between days) for the knee angles. Similarly, the hip angles varied from three to seven degrees (within day) and nine to 11 degrees (between days). The intra- and inter-rater reliability of within- and between-day quantifications of the knee and hip angles based on a clinical 2D video setup is sufficient to encourage clinicians to keep using 2D motion-analysis techniques in clinical practice to quantify the knee and hip angles in healthy runners. However, the interpretation should include critical evaluation of the physical set-up of the 2D motion-analysis system prior to the recordings, and conclusions should take measurement variation (3-8 degrees within day and 9-14 degrees between days) into account.
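
    For readers unfamiliar with how such angles are extracted, the sketch below computes a 2D knee angle from three digitized landmarks, the kind of calculation a freeware tool like Kinovea performs internally; the pixel coordinates are hypothetical.

    ```python
    import numpy as np

    # Hedged sketch: angle at vertex b (degrees) from 2D landmarks a, b, c.
    def joint_angle(a, b, c):
        v1 = np.asarray(a, float) - np.asarray(b, float)
        v2 = np.asarray(c, float) - np.asarray(b, float)
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    hip, knee, ankle = (412, 220), (430, 380), (410, 540)   # pixel positions
    print(f"knee angle at foot strike: {joint_angle(hip, knee, ankle):.1f} deg")
    ```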

  7. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    PubMed

    Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2013-01-01

    While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is considerable interest in, and need for, the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001) and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001). The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming) or memory tasks (Logical Memory Test). Test-retest reliability was observed to be significant (r = 0.726; p = 0.02). Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. These findings are important for the establishment of smartphone-based assessment batteries of cognitive function in the elderly.

  8. Development of a Tablet-based symbol digit modalities test for reliably assessing information processing speed in patients with stroke.

    PubMed

    Tung, Li-Chen; Yu, Wan-Hui; Lin, Gong-Hong; Yu, Tzu-Ying; Wu, Chien-Te; Tsai, Chia-Yin; Chou, Willy; Chen, Mei-Hsiang; Hsieh, Ching-Lin

    2016-09-01

    To develop a Tablet-based Symbol Digit Modalities Test (T-SDMT) and to examine the test-retest reliability and concurrent validity of the T-SDMT in patients with stroke. The study had two phases. In the first phase, six experts, nine college students and five outpatients participated in the development and testing of the T-SDMT. In the second phase, 52 outpatients were evaluated twice (2 weeks apart) with the T-SDMT and SDMT to examine the test-retest reliability and concurrent validity of the T-SDMT. The T-SDMT was developed via expert input and college student/patient feedback. Regarding test-retest reliability, the practice effects of the T-SDMT and SDMT were both trivial (d=0.12) but significant (p≤0.015). The improvement in the T-SDMT (4.7%) was smaller than that in the SDMT (5.6%). The minimal detectable changes (MDC%) of the T-SDMT and SDMT were 6.7 (22.8%) and 10.3 (32.8%), respectively. The T-SDMT and SDMT were highly correlated with each other at the two time points (Pearson's r=0.90-0.91). The T-SDMT demonstrated good concurrent validity with the SDMT. Because the T-SDMT had a smaller practice effect and less random measurement error (superior test-retest reliability), it is recommended over the SDMT for assessing information processing speed in patients with stroke. Implications for Rehabilitation The Symbol Digit Modalities Test (SDMT), a common measure of information processing speed, showed a substantial practice effect and considerable random measurement error in patients with stroke. The Tablet-based SDMT (T-SDMT) has been developed to reduce the practice effect and random measurement error of the SDMT in patients with stroke. The T-SDMT had a smaller practice effect and less random measurement error than the SDMT, and can therefore provide more reliable assessments of information processing speed.

  9. GalaxyTBM: template-based modeling by building a reliable core and refining unreliable local regions.

    PubMed

    Ko, Junsu; Park, Hahnbeom; Seok, Chaok

    2012-08-10

    Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.

  10. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    PubMed

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI), and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered for both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between the LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of the LHK and SJT were poor, while the ICCs of the PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  11. Human reliability-based MC&A models for detecting insider theft.

    SciTech Connect

    Duran, Felicia Angelica; Wyss, Gregory Dane

    2010-06-01

    Material control and accounting (MC&A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC&A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC&A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC&A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.
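
    The core arithmetic of folding MC&A checks into path analysis can be sketched simply. Assuming independent detection opportunities (a simplification; the report's HRA-derived probabilities and path models are richer), detection probabilities along a path combine as follows:

    ```python
    # Hedged sketch: combine sensor and MC&A detection probabilities along
    # an insider's path. All numbers are hypothetical.
    sensor_detection = [0.30, 0.55]   # physical sensors crossed on the path
    mcna_detection = [0.40]           # HRA-derived MC&A check (e.g., inventory)

    p_nondetect = 1.0
    for p in sensor_detection + mcna_detection:
        p_nondetect *= (1.0 - p)      # independence assumed for illustration

    print(f"P(detect insider along path) = {1.0 - p_nondetect:.3f}")
    ```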

  12. Reliability of vibration energy harvesters of metal-based PZT thin films

    NASA Astrophysics Data System (ADS)

    Tsujiura, Y.; Suwa, E.; Kurokawa, F.; Hida, H.; Kanno, I.

    2014-11-01

    This paper describes the reliability of piezoelectric vibration energy harvesters (PVEHs) based on Pb(Zr,Ti)O3 (PZT) thin films on metal-foil cantilevers. The PZT thin films were deposited directly onto Pt-coated stainless-steel (SS430) cantilevers by rf-magnetron sputtering, and the aging behavior of their power-generation characteristics was observed under resonant vibration for three days. During the aging measurement, there was neither fatigue failure nor degradation of dielectric properties in our PVEHs (length: 13 mm, width: 5.0 mm, thickness: 104 μm), even under a large excitation acceleration of 25 m/s². However, we observed clear degradation of the generated voltage, which depended on the excitation acceleration: the decay rate of the output voltage was 5% from the start of the measurement at 25 m/s². The transverse piezoelectric coefficient (e31,f) degraded at almost the same rate as the output voltage, indicating that the degradation of output voltage was mainly caused by degradation of the piezoelectric properties. From the decay curves, the output powers are estimated to degrade by 7% at 15 m/s² and 36% at 25 m/s² if the PVEHs are excited continuously for 30 years.
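
    One common way to read such decay curves is a log-time fit extrapolated to the service life. A minimal sketch under that assumption, with hypothetical data points (not the paper's measurements):

    ```python
    import numpy as np

    # Hedged sketch: fit V(t) ~ a*log10(t) + b to short-term aging data and
    # extrapolate the normalized output to 30 years.
    t_hours = np.array([1, 5, 10, 24, 48, 72], dtype=float)
    v_norm = np.array([1.000, 0.985, 0.978, 0.968, 0.958, 0.952])

    A = np.vstack([np.log10(t_hours), np.ones_like(t_hours)]).T
    a, b = np.linalg.lstsq(A, v_norm, rcond=None)[0]

    t_30yr = 30 * 365.25 * 24   # hours
    print(f"extrapolated output after 30 years: "
          f"{a * np.log10(t_30yr) + b:.2f} of initial")
    ```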

  13. A logarithmic opinion pool based STAPLE algorithm for the fusion of segmentations with associated reliability weights.

    PubMed

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E; Warfield, Simon K

    2014-10-01

    Pelvic floor dysfunction is common in women after childbirth, and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment. However, because of the complexity of its structures, manual segmentation of the pelvic floor is challenging and suffers from high inter- and intra-rater variability among expert raters. Multiple-template fusion algorithms are promising segmentation techniques for these types of applications, but they have been limited by imperfections in the alignment of templates to the target and by template segmentation errors. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out fusion through local intensity-weighted voting schemes. This class of approach is a form of linear opinion pooling and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image, and we developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm against nine state-of-the-art segmentation methods and demonstrated that it achieves the highest performance.
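
    The distinction between linear and logarithmic pooling is easy to state in code. A minimal per-voxel sketch (hypothetical probabilities and weights, not the paper's estimation procedure):

    ```python
    import numpy as np

    # Hedged sketch: fuse template label probabilities as a weighted
    # geometric mean (logarithmic opinion pool), then renormalize.
    def log_opinion_pool(probs, weights):
        """probs: (n_templates, n_labels); weights: (n_templates,), sum to 1."""
        logp = np.sum(weights[:, None] * np.log(probs + 1e-12), axis=0)
        fused = np.exp(logp)
        return fused / fused.sum()

    probs = np.array([[0.7, 0.3],    # template 1: P(background), P(organ)
                      [0.6, 0.4],    # template 2
                      [0.2, 0.8]])   # poorly registered template
    weights = np.array([0.45, 0.45, 0.10])   # local reliability weights
    print(log_opinion_pool(probs, weights))  # fused label posterior
    ```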

  14. Tumor Heterogeneity: Mechanisms and Bases for a Reliable Application of Molecular Marker Design

    PubMed Central

    Diaz-Cano, Salvador J.

    2012-01-01

    Tumor heterogeneity is a confusing finding in the assessment of neoplasms, potentially resulting in inaccurate diagnostic, prognostic, and predictive tests. This heterogeneity is not always a random and unpredictable phenomenon, and knowledge of its mechanisms helps in designing better tests. The biologic reasons for intratumoral heterogeneity are therefore important for understanding both the natural history of neoplasms and the selection of test samples for reliable analysis. The main factors contributing to intratumoral heterogeneity, by inducing gene abnormalities or modifying gene expression, include: the ischemic gradient within neoplasms, the action of the tumor microenvironment (bidirectional interaction between tumor cells and stroma), mechanisms of intercellular transfer of genetic information (exosomes), and differential mechanisms of sequence-independent modification of genetic material and proteins. Intratumoral heterogeneity is at the origin of tumor progression and is also a byproduct of the selection process during progression. Any analysis of heterogeneity mechanisms must be integrated within the process of segregation of genetic changes in tumor cells during the clonal expansion and progression of neoplasms. The evaluation of these mechanisms must also consider the redundancy and pleiotropism of molecular pathways, for which appropriate surrogate markers would support the presence or absence of heterogeneous genetics and the main mechanisms responsible. This knowledge would constitute a solid scientific background for future therapeutic planning. PMID:22408433

  15. Reliability and Validity of Web-Based Portfolio Peer Assessment: A Case Study for a Senior High School's Students Taking Computer Course

    ERIC Educational Resources Information Center

    Chang, Chi-Cheng; Tseng, Kuo-Hung; Chou, Pao-Nan; Chen, Yi-Hui

    2011-01-01

    This study examined the reliability and validity of Web-based portfolio peer assessment. Participants were 72 second-grade students from a senior high school taking a computer course. The results indicated that: 1) there was a lack of consistency across various student raters on a portfolio, or inter-rater reliability; 2) two-thirds of the raters…

  16. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    A method and a computer program are presented to calculate the probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled includes any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.
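
    The mechanics reduce to two combination rules, before the report's dormancy and switching effects are layered on. A minimal sketch (generic series/parallel arithmetic, not the program itself):

    ```python
    # Hedged sketch: series blocks multiply reliabilities; active-parallel
    # blocks multiply unreliabilities.
    def series(*rs):
        out = 1.0
        for r in rs:
            out *= r
        return out

    def parallel(*rs):
        out = 1.0
        for r in rs:
            out *= (1.0 - r)
        return 1.0 - out

    # Example: unit A in series with two active-redundant units B1 and B2
    r_system = series(0.95, parallel(0.90, 0.90))
    print(f"P(system success) = {r_system:.4f}")   # 0.95 * (1 - 0.1**2) = 0.9405
    ```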

  17. Developing Evidence-Based Practice questionnaire for community health nurses: reliability and validity of a Spanish adaptation.

    PubMed

    Zabaleta-del-Olmo, Edurne; Subirana-Casacuberta, Mireia; Ara-Pérez, Ana; Escuredo-Rodríguez, Bibiana; Ríos-Rodríguez, María Ángeles; Carrés-Esteve, Lourdes; Jodar-Solà, Glòria; Lejardi-Estevez, Yolanda; Nuix-Baqué, Núria; Aguas-Lluch, Asunción; Ondiviela-Cariteu, Àngels; Blanco-Sánchez, Rafaela; Rosa García-Cerdán, María; Contel-Segura, Juan Carlos; Jurado-Campos, Jeroni; Juvinyà-Canal, Dolors

    2016-02-01

    This study aimed to translate the community nursing version of the Developing Evidence-Based Practice questionnaire, adapt the Spanish translation to the primary care context in Spain, and evaluate its reliability and validity. Instruments available in Spanish to date are not designed to rigorously evaluate barriers and incentives associated with evidence-based practice implementation in community health nursing. Classical Test Theory approach. The 49-item Developing Evidence-Based Practice questionnaire was translated, back-translated and pilot-tested. Two items were added to assess respondents' ability to read and understand the English language. During the first six months of 2010, 513 nurses from 255 primary health care centres in Catalunya (Spain) voluntarily participated in the study. Internal consistency and test-retest reliability were evaluated. Internal structure was analysed by principal component analysis. A randomized, controlled, parallel-design study was carried out to test scores' sensitivity to change with two groups, intervention and control. The intervention consisted of eight hours of in-person training, provided by experts in evidence-based practice. Of 513 nurses, 445 (86.7%) nurses responded to all 51 items. Factor analysis showed six components that explained 51% of the total variance. Internal consistency and test-retest reliability were satisfactory (Cronbach α and intraclass correlation coefficients >0.70). A total of 93 nurses participated in the sensitivity-to-change tests (42 in the intervention group, 51 controls). After the training session, overall score and the 'skills for evidence-based practice' component score showed a medium (Cohen d = 0.69) and large effect (Cohen d = 0.86), respectively. The Developing Evidence-Based Practice questionnaire adapted to community health nursing in the primary care setting in Spain has satisfactory psychometric properties. The Developing Evidence-Based Practice questionnaire is a useful

  18. Probing Reliability of Transport Phenomena Based Heat Transfer and Fluid Flow Analysis in Autogeneous Fusion Welding Process

    NASA Astrophysics Data System (ADS)

    Bag, S.; de, A.

    2010-09-01

    The transport-phenomena-based heat transfer and fluid flow calculations in a weld pool require a number of input parameters. Arc efficiency, effective thermal conductivity, and effective viscosity in the weld pool are some of these parameters, whose values are rarely known and difficult to assign a priori based on scientific principles alone. The present work reports a bi-directional three-dimensional (3-D) heat transfer and fluid flow model that is integrated with a real-number-based genetic algorithm. The bi-directional feature of the integrated model allows the identification of the values of a required set of uncertain model input parameters and, next, the design of process parameters to achieve a target weld pool dimension. The computed values are validated against measured results in linear gas-tungsten-arc (GTA) weld samples. Furthermore, a novel methodology to estimate the overall reliability of the computed solutions is also presented.
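
    A minimal sketch of that identification step, with a toy response surface standing in for the 3-D solver and SciPy's differential evolution standing in for the paper's real-number genetic algorithm (all values hypothetical):

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    target = np.array([6.2, 2.9])   # measured pool width, depth (mm), hypothetical

    def forward_model(x):
        eta, k_eff, mu_eff = x      # toy surrogate, not real weld physics
        width = 8.0 * eta / (1.0 + 0.3 * k_eff) * (1.0 + 0.05 / mu_eff)
        depth = 4.0 * eta / (1.0 + 0.5 * k_eff)
        return np.array([width, depth])

    def objective(x):
        return np.sum((forward_model(x) - target) ** 2)

    # arc efficiency, conductivity multiplier, viscosity multiplier
    bounds = [(0.5, 0.95), (1.0, 10.0), (1.0, 30.0)]
    res = differential_evolution(objective, bounds, seed=1, tol=1e-8)
    print("identified parameters:", np.round(res.x, 3), "residual:", f"{res.fun:.2e}")
    ```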

  19. Reliable emotion recognition system based on dynamic adaptive fusion of forehead biopotentials and physiological signals.

    PubMed

    Khezri, Mahdi; Firoozabadi, Mohammad; Sharafat, Ahmad Reza

    2015-11-01

    In this study, we proposed a new adaptive method for fusing multiple emotional modalities to improve the performance of the emotion recognition system. Three-channel forehead biosignals along with peripheral physiological measurements (blood volume pressure, skin conductance, and interbeat intervals) were utilized as emotional modalities. Six basic emotions, i.e., anger, sadness, fear, disgust, happiness, and surprise were elicited by displaying preselected video clips for each of the 25 participants in the experiment; the physiological signals were collected simultaneously. In our multimodal emotion recognition system, recorded signals with the formation of several classification units identified the emotions independently. Then the results were fused using the adaptive weighted linear model to produce the final result. Each classification unit is assigned a weight that is determined dynamically by considering the performance of the units during the testing phase and the training phase results. This dynamic weighting scheme enables the emotion recognition system to adapt itself to each new user. The results showed that the suggested method outperformed conventional fusion of the features and classification units using the majority voting method. In addition, a considerable improvement, compared to the systems that used the static weighting schemes for fusing classification units, was also shown. Using support vector machine (SVM) and k-nearest neighbors (KNN) classifiers, the overall classification accuracies of 84.7% and 80% were obtained in identifying the emotions, respectively. In addition, applying the forehead or physiological signals in the proposed scheme indicates that designing a reliable emotion recognition system is feasible without the need for additional emotional modalities.
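
    The adaptive weighted linear fusion step can be sketched compactly. Scores and accuracies below are hypothetical; the paper derives the weights dynamically from training- and testing-phase performance:

    ```python
    import numpy as np

    # Hedged sketch: each classification unit votes with a weight derived
    # from its estimated accuracy; the fused score picks the emotion.
    unit_scores = np.array([
        [0.10, 0.60, 0.30],   # unit 1 (e.g., forehead biosignals)
        [0.20, 0.50, 0.30],   # unit 2 (e.g., skin conductance)
        [0.05, 0.25, 0.70],   # unit 3 (e.g., interbeat intervals)
    ])
    unit_accuracy = np.array([0.80, 0.65, 0.55])

    weights = unit_accuracy / unit_accuracy.sum()   # dynamic weighting step
    fused = weights @ unit_scores                   # weighted linear opinion
    print("fused scores:", np.round(fused, 3), "-> class", int(np.argmax(fused)))
    ```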

  1. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability.

    PubMed

    Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J

    2016-01-01

    Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP-root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. All criteria for feasibility were achieved. Mean V'O2peak was 106±9% of predicted V'O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V'O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V'O2max
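
    The SEM and MDC quoted above follow from standard test-retest formulas; a worked sketch with hypothetical numbers (not the study's data):

    ```python
    import numpy as np

    # Hedged sketch: SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
    sd_between_subjects = 2.1   # between-subject SD of the outcome
    icc = 0.95                  # test-retest intraclass correlation

    sem = sd_between_subjects * np.sqrt(1 - icc)
    mdc95 = 1.96 * np.sqrt(2) * sem
    print(f"SEM = {sem:.2f}, MDC95 = {mdc95:.2f} (same units as the outcome)")
    ```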

  2. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  3. Reliable detection of fluence anomalies in EPID-based IMRT pretreatment quality assurance using pixel intensity deviations

    SciTech Connect

    Gordon, J. J.; Gardner, J. K.; Wang, S.; Siebers, J. V.

    2012-08-15

    Purpose: This work uses repeat images of intensity modulated radiation therapy (IMRT) fields to quantify fluence anomalies (i.e., delivery errors) that can be reliably detected in electronic portal images used for IMRT pretreatment quality assurance. Methods: Repeat images of 11 clinical IMRT fields are acquired on a Varian Trilogy linear accelerator at energies of 6 MV and 18 MV. Acquired images are corrected for output variations and registered to minimize the impact of linear accelerator and electronic portal imaging device (EPID) positioning deviations. Detection studies are performed in which rectangular anomalies of various sizes are inserted into the images. The performance of detection strategies based on pixel intensity deviations (PIDs) and gamma indices is evaluated using receiver operating characteristic analysis. Results: Residual differences between registered images are due to interfraction positional deviations of jaws and multileaf collimator leaves, plus imager noise. Positional deviations produce large intensity differences that degrade anomaly detection. Gradient effects are suppressed in PIDs using gradient scaling. Background noise is suppressed using median filtering. In the majority of images, PID-based detection strategies can reliably detect fluence anomalies of ≥5% in ≈1 mm² areas and ≥2% in ≈20 mm² areas. Conclusions: The ability to detect small dose differences (≤2%) depends strongly on the level of background noise. This in turn depends on the accuracy of image registration, the quality of the reference image, and field properties. The longer term aim of this work is to develop accurate and reliable methods of detecting IMRT delivery errors and variations. The ability to resolve small anomalies will allow the accuracy of advanced treatment techniques, such as image guided, adaptive, and arc therapies, to be quantified.
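
    A minimal sketch of PID-style detection on synthetic arrays: compare measured to reference, median-filter the deviations, and threshold (the paper's gradient scaling and ROC analysis are omitted):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)
    reference = np.full((100, 100), 100.0)              # reference EPID image
    measured = reference + rng.normal(0, 0.5, reference.shape)
    measured[40:44, 40:45] *= 0.95                      # inject a ~5% anomaly

    pid = (measured - reference) / reference            # pixel intensity deviations
    pid_smoothed = median_filter(pid, size=3)           # suppress background noise
    anomaly_mask = np.abs(pid_smoothed) > 0.02          # 2% detection threshold
    print(f"flagged pixels: {int(anomaly_mask.sum())}")
    ```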

  4. Hardware based redundant multi-threading inside a GPU for improved reliability

    SciTech Connect

    Sridharan, Vilas; Gurumurthi, Sudhanva

    2015-05-05

    A system and method for verifying computation output using computer hardware are provided. Instances of computation are generated and processed on hardware-based processors. As instances of computation are processed, each instance of computation receives a load accessible to other instances of computation. Instances of output are generated by processing the instances of computation. The instances of output are verified against each other in a hardware based processor to ensure accuracy of the output.
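
    In software terms, the scheme amounts to running identical computations redundantly and verifying the outputs against each other. A minimal sketch, with Python threads as a stand-in for the patent's hardware contexts:

    ```python
    import threading

    def computation(x):
        return sum(i * x for i in range(1000))   # stand-in workload

    results = [None, None]

    def run(slot, x):
        results[slot] = computation(x)           # redundant instance

    threads = [threading.Thread(target=run, args=(i, 7)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # verification step: outputs must agree, else flag a fault
    assert results[0] == results[1], "redundant outputs disagree: possible fault"
    print("outputs verified:", results[0])
    ```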

  5. Real-time reliability evaluation methodology based on dynamic Bayesian networks: A case study of a subsea pipe ram BOP system.

    PubMed

    Cai, Baoping; Liu, Yonghong; Ma, Yunpeng; Liu, Zengkai; Zhou, Yuming; Sun, Junhe

    2015-09-01

    A novel real-time reliability evaluation methodology is proposed by combining a root cause diagnosis phase based on Bayesian networks (BNs) with a reliability evaluation phase based on dynamic BNs (DBNs). The root cause diagnosis phase exactly locates the root cause of a complex mechatronic system failure in real time to increase diagnostic coverage, and is performed through backward analysis of BNs. The reliability evaluation phase calculates the real-time reliability of the entire system by forward inference of DBNs. The application of the proposed methodology is demonstrated using the case of a subsea pipe ram blowout preventer system. The value and variation trend of real-time system reliability when component faults occur are studied; the importance-degree sequence of components at different times is also determined using mutual information and belief variance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
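
    The forward-inference idea can be sketched with a two-state component model; the paper's DBNs are far richer, and everything below (transition rates, series structure) is hypothetical:

    ```python
    import numpy as np

    # Hedged sketch: each component is working/failed with an absorbing
    # failed state; system reliability is the series product of P(working).
    transition = np.array([[0.995, 0.005],    # working -> working/failed per step
                           [0.000, 1.000]])   # failed is absorbing

    components = [np.array([1.0, 0.0]) for _ in range(3)]   # all start working
    for _ in range(100):                                    # 100 time slices
        components = [b @ transition for b in components]

    r_system = np.prod([b[0] for b in components])          # series structure
    print(f"system reliability after 100 steps: {r_system:.3f}")
    ```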

  6. A Logarithmic Opinion Pool Based STAPLE Algorithm For The Fusion of Segmentations With Associated Reliability Weights

    PubMed Central

    Akhondi-Asl, Alireza; Hoyte, Lennox; Lockhart, Mark E.; Warfield, Simon K.

    2014-01-01

    Pelvic floor dysfunction is very common in women after childbirth and precise segmentation of magnetic resonance images (MRI) of the pelvic floor may facilitate diagnosis and treatment of patients. However, because of the complexity of the structures of pelvic floor, manual segmentation of the pelvic floor is challenging and suffers from high inter and intra-rater variability of expert raters. Multiple template fusion algorithms are promising techniques for segmentation of MRI in these types of applications, but these algorithms have been limited by imperfections in the alignment of each template to the target, and by template segmentation errors. In this class of segmentation techniques, a collection of templates is aligned to a target, and a new segmentation of the target is inferred. A number of algorithms sought to improve segmentation performance by combining image intensities and template labels as two independent sources of information, carrying out decision fusion through local intensity weighted voting schemes. This class of approach is a form of linear opinion pooling, and achieves unsatisfactory performance for this application. We hypothesized that better decision fusion could be achieved by assessing the contribution of each template in comparison to a reference standard segmentation of the target image and developed a novel segmentation algorithm to enable automatic segmentation of MRI of the female pelvic floor. The algorithm achieves high performance by estimating and compensating for both imperfect registration of the templates to the target image and template segmentation inaccuracies. The algorithm is a generalization of the STAPLE algorithm in which a reference segmentation is estimated and used to infer an optimal weighting for fusion of templates. A local image similarity measure is used to infer a local reliability weight, which contributes to the fusion through a novel logarithmic opinion pooling. We evaluated our new algorithm in comparison

  7. Validity and Reliability Evidence for a New Measure: The Evidence-Based Practice Knowledge Assessment in Nursing.

    PubMed

    Spurlock, Darrell; Wonder, Amy Hagedorn

    2015-11-01

    Studies of evidence-based practice (EBP) among nurses often focus on attitudes and beliefs about EBP and self-reported EBP knowledge. Because knowledge self-assessments can be highly inaccurate, the authors developed and tested a new objective measure of EBP knowledge--the Evidence-Based Practice Knowledge Assessment in Nursing (EKAN). Seven subject matter experts reviewed candidate items, resulting in a scale content validity index of 0.94. Rasch modeling was used to evaluate item-person performance on the proposed unidimensional trait of EBP knowledge. The candidate item pool was then tested among 200 undergraduate nursing students. Strong evidence of unidimensionality was confirmed by narrow item infit statistics centering on 1.0. The item separation index was 7.05, and the person separation index was 1.66. Item reliability was 0.98, and person reliability was 0.66. The 20-item EKAN showed strong psychometric properties for an instrument developed under the Rasch model and is available for use in research and educational contexts. Copyright 2015, SLACK Incorporated.
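
    For context, the Rasch model places persons and items on one logit scale; a one-line sketch with hypothetical abilities and difficulties (not EKAN item parameters):

    ```python
    import numpy as np

    # Hedged sketch: P(correct) = 1 / (1 + exp(-(theta - b))).
    def rasch_p(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    theta = 0.5                                        # person ability (logits)
    difficulties = np.array([-1.0, 0.0, 0.5, 1.5])     # item difficulties (logits)
    print(np.round(rasch_p(theta, difficulties), 3))   # expected success rates
    ```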

  8. Everyday functions and needs of individuals with disability: a reliability and validity study based on the principles of the ICF.

    PubMed

    Marton, Klára; Kövi, Zsuzsanna; Farkas, Lajos; Egri, Tímea

    2014-01-01

    The goal of this study was to develop and validate a questionnaire to measure everyday functions of individuals with disability based on the principles of the International Classification of Functioning, Disability, and Health (ICF). Participants consisted of 1116 individuals. The final sample was representative for the following criteria: disability, gender, age, and residence. The questionnaire consisted of 4 sections. In addition to general and demographic questions, we developed 258 statements about everyday functioning based on the items from the ICF. The Cronbach alphas showed adequate internal reliability for the different scales (main sample: .624 to .904; test-retest sample: .627 to .921). Correlations with validating scales were typically high. Individuals with disability showed lower mean scores in each area compared to controls, but the profiles of the different groups with disability varied across areas. The data also showed that physical status by itself does not determine everyday functioning. Several participants across groups showed that, despite severe physical disability, one may exhibit high values of everyday functioning and well-being. Our questionnaire is a valid and reliable method to measure everyday functioning in individuals with different disabilities. The various versions of the questionnaire (computerized, paper-pencil, easy to understand) ensure that everyone's functioning and well-being can be assessed.
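
    The internal-consistency figures quoted above come from Cronbach's alpha; a worked sketch on a hypothetical item-score matrix (not the study's data):

    ```python
    import numpy as np

    # Hedged sketch: alpha = k/(k-1) * (1 - sum(item var) / var(total score)).
    scores = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 4, 5, 5],
    ], dtype=float)           # 5 respondents x 4 items

    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.3f}")
    ```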

  9. Person Reliability

    ERIC Educational Resources Information Center

    Lumsden, James

    1977-01-01

    Person changes can be of three kinds: developmental trends, swells, and tremors. Person unreliability in the tremor sense (momentary fluctuations) can be estimated from person characteristic curves. Average person reliability for groups can be compared from item characteristic curves. (Author)

  10. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data and may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedule. Assessing and estimating the reliability of the final system is extremely difficult when some subsystems are tested and completed long before others. Non-parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.
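
    One concrete distribution-free option of the kind alluded to above is the Kaplan-Meier estimator; a minimal sketch on hypothetical failure data with right-censoring (not a method the report commits to):

    ```python
    import numpy as np

    # Hedged sketch: product-limit estimate of S(t) = P(no failure by t).
    times = np.array([5.0, 12.0, 12.0, 20.0, 31.0, 45.0, 45.0, 60.0])  # hours
    failed = np.array([1, 1, 0, 1, 1, 0, 1, 0])                        # 0 = censored

    survival = 1.0
    for t in np.unique(times[failed == 1]):
        at_risk = np.sum(times >= t)
        deaths = np.sum((times == t) & (failed == 1))
        survival *= 1.0 - deaths / at_risk
        print(f"t = {t:5.1f} h   S(t) = {survival:.3f}")
    ```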

  11. Reliability of Smartphone-Based Instant Messaging Application for Diagnosis, Classification, and Decision-making in Pediatric Orthopedic Trauma.

    PubMed

    Stahl, Ido; Katsman, Alexander; Zaidman, Michael; Keshet, Doron; Sigal, Amit; Eidelman, Mark

    2017-07-11

    Smartphones have the ability to capture and send images, and their use has become common in the emergency setting for transmitting radiographic images with the intent to consult an off-site specialist. Our objective was to evaluate the reliability of smartphone-based instant messaging applications for the evaluation of various pediatric limb traumas, as compared with the standard method of viewing images on a workstation-based picture archiving and communication system (PACS). X-ray images of 73 representative cases of pediatric limb trauma were captured and transmitted to 5 pediatric orthopedic surgeons via the WhatsApp instant messaging application on an iPhone 6 smartphone. Evaluators were asked to diagnose, classify, and determine the course of treatment for each case on their personal smartphones. Following a 4-week interval, re-evaluation was conducted using the PACS. Intraobserver agreement was calculated overall and per fracture site. The overall results indicate "near perfect agreement" between interpretations of the radiographs on smartphones and on the computer-based PACS, with κ of 0.84, 0.82, and 0.89 for diagnosis, classification, and treatment planning, respectively. Looking at the results per fracture site, we also found substantial to near perfect agreement. Smartphone-based instant messaging applications are reliable for the evaluation of a wide range of pediatric limb fractures. This method of obtaining an expert opinion from an off-site specialist is immediately accessible and inexpensive, making smartphones a powerful tool for doctors in the emergency department, primary care clinics, or remote medical centers, enabling timely and appropriate treatment for the injured child. This method is not a substitute for evaluation of the images in the standard manner on a computer-based PACS, which should be performed before final decision-making.

  12. Note: Reliable and non-contact 6D motion tracking system based on 2D laser scanners for cargo transportation

    SciTech Connect

    Kim, Young-Keun; Kim, Kyung-Soo

    2014-10-15

    Maritime transportation demands an accurate measurement system to track the motion of oscillating container boxes in real time. However, it is a challenge to design a sensor system that can provide both reliable and non-contact methods of 6-DOF motion measurements of a remote object for outdoor applications. In the paper, a sensor system based on two 2D laser scanners is proposed for detecting the relative 6-DOF motion of a crane load in real time. Even without implementing a camera, the proposed system can detect the motion of a remote object using four laser beam points. Because it is a laser-based sensor, the system is expected to be highly robust to sea weather conditions.

  13. Child and Adolescent Behaviorally Based Disorders: A Critical Review of Reliability and Validity

    ERIC Educational Resources Information Center

    Mallett, Christopher A.

    2014-01-01

    Objectives: The purpose of this study was to investigate the historical construction and empirical support of two child and adolescent behaviorally based mental health disorders: oppositional defiant and conduct disorders. Method: The study utilized a historiography methodology to review, from 1880 to 2012, these disorders' inclusion in…

  14. Reliability and Validity of a Computer-Based Knowledge Mapping System To Measure Content Understanding.

    ERIC Educational Resources Information Center

    Herl, H. E.; O'Neil, H. F., Jr.; Chung, G. K. W. K.; Schacter, J.

    1999-01-01

    Presents results from two computer-based knowledge-mapping studies developed by the National Center for Research on Evaluation, Standards, and Student Testing (CRESST): in one, middle and high school students constructed group maps while collaborating over a network, and in the second, students constructed individual maps while searching the Web.…

  15. The N of 1 in Arts-Based Research: Reliability and Validity

    ERIC Educational Resources Information Center

    Siegesmund, Richard

    2014-01-01

    N signifies the number of data samples in a study. Traditional research values numerous data samples as this reduces the variability created by extremes. Alternatively, arts-based research privileges the outlier, the N of 1. Oftentimes, what is unique and outside the norm is the focus. There are three approaches to the N of 1 in arts-based…

  17. From Fulcher to PLEVALEX: Issues in Interface Design, Validity and Reliability in Internet Based Language Testing

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus

    2007-01-01

    Interface design and ergonomics, while already studied in much of educational theory, have not until recently been considered in language testing (Fulcher, 2003). In this paper, we revise the design principles of PLEVALEX, a fully operational prototype Internet based language testing platform. Our focus here is to show PLEVALEX's interfaces and…

  18. Reliability and Validity of Authentic Assessment in a Web Based Course

    ERIC Educational Resources Information Center

    Olfos, Raimundo; Zulantay, Hildaura

    2007-01-01

    Web-based courses are promising in that they are effective and have the possibility of their instructional design being improved over time. However, the assessments of said courses are criticized in terms of their validity. This paper is an exploratory case study regarding the validity of the assessment system used in a semi presential web-based…

  19. Alpha, Dimension-Free, and Model-Based Internal Consistency Reliability

    ERIC Educational Resources Information Center

    Bentler, Peter M.

    2009-01-01

    As pointed out by Sijtsma ("in press"), coefficient alpha is inappropriate as a single summary of the internal consistency of a composite score. Better estimators of internal consistency are available. In addition to those mentioned by Sijtsma, an old dimension-free coefficient and structural equation model based coefficients are…

  20. A theory-based instrument to evaluate team communication in the operating room: balancing measurement authenticity and reliability.

    PubMed

    Lingard, Lorelei; Regehr, Glenn; Espin, Sherry; Whyte, Sarah

    2006-12-01

    Breakdown in communication among members of the healthcare team threatens the effective delivery of health services, and raises the risk of errors and adverse events. To describe the process of developing an authentic, theory-based evaluation instrument that measures communication among members of the operating room team by documenting communication failures. 25 procedures were viewed by 3 observers observing in pairs, and records of events on each communication failure observed were independently completed by each observer. Each record included the type and outcome of the failure (both selected from a checklist of options), as well as the time of occurrence and a description of the event. For each observer, records of events were compiled to create a profile for the procedure. At the level of identifying events in the procedure, mean inter-rater agreement was low (mean agreement across pairs 47.3%). However, inter-rater reliability regarding the total number of communication failures per procedure was reasonable (mean ICC across pairs 0.72). When observers recorded the same event, a strong concordance about the type of communication failure represented by the event was found. Reasonable inter-rater reliability was shown by the instrument in assessing the relative rate of communication failures displayed per procedure. The difficulties in identifying and interpreting individual communication events reflect the delicate balance between increased subtlety and increased error. Complex team communication does not readily reduce to mere observation of events; some level of interpretation is required to meaningfully account for communicative exchanges. Although such observer interpretation improves the subtlety and validity of the instrument, it necessarily introduces error, reducing reliability. Although we continue to work towards increasing the instrument's sensitivity at the level of individual categories, this study suggests that the instrument could be used to

  1. Reliability study of Zr and Al incorporated Hf based high-k dielectric deposited by advanced processing

    NASA Astrophysics Data System (ADS)

    Bhuyian, Md Nasir Uddin

    Hafnium-based high-κ dielectric materials have been successfully used in industry as a key replacement for SiO2-based gate dielectrics in order to continue CMOS device scaling to the 22-nm technology node. Further scaling according to the device roadmap requires the development of oxides with higher κ values in order to scale the equivalent oxide thickness (EOT) to 0.7 nm or below while achieving low defect densities. In addition, next-generation devices need to meet challenges like improved channel mobility, reduced gate leakage current, good control of threshold voltage, lower interface state density, and good reliability. In order to overcome these challenges, improvements of the high-κ film properties and deposition methods are highly desirable. In this dissertation, a detailed study of Zr- and Al-incorporated HfO2-based high-κ dielectrics is conducted to investigate improvements in electrical characteristics and reliability. To meet scaling requirements of the gate dielectric to sub-0.7 nm, Zr is added to HfO2 to form Hf1-xZrxO2 with x = 0, 0.31, and 0.8, where the dielectric film is deposited using various intermediate processing conditions: (i) DADA, intermediate thermal annealing in a cyclical deposition process; (ii) DSDS, a similar cyclical process with exposure to SPA Ar plasma; and (iii) As-Dep, the dielectric deposited without any intermediate step. MOSCAPs are formed with a TiN metal gate, and the reliability of these devices is investigated by subjecting them to a constant voltage stress in the gate injection mode. Stress-induced flat-band voltage shift (ΔVFB), stress-induced leakage current (SILC), and stress-induced interface state degradation are observed. DSDS samples demonstrate the superior characteristics, whereas the worst degradation is observed for DADA samples. Time-dependent dielectric breakdown (TDDB) shows that DSDS Hf1-xZrxO2 (x = 0.8) has the superior characteristics, with reduced oxygen vacancy, which is affiliated to

  2. How reliable is internet-based self-reported identity, socio-demographic and obesity measures in European adults?

    PubMed

    Celis-Morales, Carlos; Livingstone, Katherine M; Woolhead, Clara; Forster, Hannah; O'Donovan, Clare B; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Tsirigoti, Lydia; Efstathopoulou, Eirini; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Kolossa, Silvia; Klein, Ulla L; Hallmann, Jacqueline; Godlewska, Magdalena; Surwiłło, Agnieszka; Drevon, Christian A; Bouwman, Jildau; Grimaldi, Keith; Parnell, Laurence D; Manios, Yannis; Traczyk, Iwona; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Martinez, J Alfredo; Daniel, Hannelore; Saris, Wim H M; Gibney, Mike; Mathers, John C

    2015-09-01

    In e-health intervention studies, there are concerns about the reliability of internet-based, self-reported (SR) data and about the potential for identity fraud. This study introduced and tested a novel procedure for assessing the validity of internet-based, SR identity and validated anthropometric and demographic data via measurements performed face-to-face in a validation study (VS). Participants (n = 140) from seven European countries, participating in the Food4Me intervention study which aimed to test the efficacy of personalised nutrition approaches delivered via the internet, were invited to take part in the VS. Participants visited a research centre in each country within 2 weeks of providing SR data via the internet. Participants received detailed instructions on how to perform each measurement. Individuals' identities were checked visually and by repeated collection and analysis of buccal cell DNA for 33 genetic variants. Validation of identity using genomic information showed perfect concordance between SR and VS. Similar results were found for demographic data (age and sex verification). We observed strong intra-class correlation coefficients between SR and VS for anthropometric data (height 0.990, weight 0.994 and BMI 0.983). However, internet-based SR weight was under-reported (Δ -0.70 kg [-3.6 to 2.1], p < 0.0001) and, therefore, BMI was lower for SR data (Δ -0.29 kg m⁻² [-1.5 to 1.0], p < 0.0001). BMI classification was correct in 93% of cases. We demonstrate the utility of genotype information for detection of possible identity fraud in e-health studies and confirm the reliability of internet-based, SR anthropometric and demographic data collected in the Food4Me study. NCT01530139 ( http://clinicaltrials.gov/show/NCT01530139 ).

  3. Reliable and Rapid Identification of Listeria monocytogenes and Listeria Species by Artificial Neural Network-Based Fourier Transform Infrared Spectroscopy†

    PubMed Central

    Rebuffo, Cecilia A.; Schmitt, Jürgen; Wenning, Mareike; von Stetten, Felix; Scherer, Siegfried

    2006-01-01

    Differentiation of the species within the genus Listeria is important for the food industry, but only a few reliable methods are available so far. While a number of studies have used Fourier transform infrared (FTIR) spectroscopy to identify bacteria, the extraction of complex pattern information from the infrared spectra remains difficult. Here, we apply artificial neural network (ANN) technology, an advanced multivariate data-processing method of pattern analysis, to identify Listeria infrared spectra at the species level. A hierarchical classification system based on ANN analysis of Listeria FTIR spectra was created, based on a comprehensive reference spectral database including 243 well-defined reference strains of Listeria monocytogenes, L. innocua, L. ivanovii, L. seeligeri, and L. welshimeri. In parallel, a univariate FTIR identification model was developed. To evaluate the potential of these models, a set of 277 isolates of diverse geographical origins, not included in the reference database, was assembled and used as an independent external validation set for species discrimination. Univariate FTIR analysis allowed the correct identification of 85.2% of all strains and of 93% of the L. monocytogenes strains. ANN-based analysis enhanced differentiation success to 96% for all Listeria species, including a success rate of 99.2% for correct L. monocytogenes identification. The identity of the 277-strain test set was also determined with the standard phenotypical API Listeria system. This kit was able to identify 88% of the test isolates and 93% of L. monocytogenes strains. These results demonstrate the high reliability and strong potential of ANN-based FTIR spectrum analysis for identification of the five Listeria species under investigation. Starting from a pure culture, this technique allows the cost-efficient and rapid identification of Listeria species within 25 h and is suitable for use in a routine food microbiological laboratory.

  4. Test-Retest Reliability and Convergent Validity of a Computer Based Hand Function Test Protocol in People with Arthritis

    PubMed Central

    Srikesavan, Cynthia S.; Shay, Barbara; Szturm, Tony

    2015-01-01

    Objectives: A computer-based hand function assessment tool has been developed to provide a standardized method for quantifying task performance during manipulations of common objects/tools/utensils with diverse physical properties and grip/grasp requirements for handling. The study objectives were to determine the test-retest reliability and convergent validity of the test protocol in people with arthritis. Methods: Three different object manipulation tasks were evaluated twice in forty people with rheumatoid arthritis (RA) or hand osteoarthritis (HOA). Each object was instrumented with a motion sensor and moved in concert with a computer-generated visual target. Self-reported joint pain and stiffness levels were recorded before and after each task. Task performance was determined by comparing the object movement with the computer target motion. This was correlated with grip strength, the nine-hole peg test, the Disabilities of the Arm, Shoulder, and Hand (DASH) questionnaire, and the Health Assessment Questionnaire (HAQ) scores. Results: The test protocol indicated moderate to high test-retest reliability of performance measures for the three manipulation tasks, with intraclass correlation coefficients (ICCs) ranging from 0.5 to 0.84, p<0.05. The strength of association between task performance measures and self-reported activity/participation composite scores was low to moderate (Spearman rho <0.7). Low correlations (Spearman rho <0.4) were observed between task performance measures and grip strength, and among the three objects' performance measures. A significant reduction in pain and joint stiffness (p<0.05) was observed after performing each task. Conclusion: The study presents initial evidence on the test-retest reliability and convergent validity of a computer-based hand function assessment protocol in people with rheumatoid arthritis or hand osteoarthritis. The novel tool objectively measures overall task performance during a variety of object manipulation tasks done by tracking a

  5. Probabilistic and Reliability-Based Health Monitoring Strategies for High-Speed Naval Vessels

    DTIC Science & Technology

    2012-01-01

    spots (e.g., base plates, heat-affected zones near welds) are often instrumented with strain gages to record the hull response under cyclic wave...straight-line log-log scale representation of the S-N curve, the following can be used: N(S) = K·S^(-b) (21), where K and b are material constants...project, cycles are accumulated in a 2-byte memory slot, limiting the maximum number of cycles that can be accumulated for a specific strain amplitude

  6. Facile and Reliable in Situ Polymerization of Poly(Ethyl Cyanoacrylate)-Based Polymer Electrolytes toward Flexible Lithium Batteries.

    PubMed

    Cui, Yanyan; Chai, Jingchao; Du, Huiping; Duan, Yulong; Xie, Guangwen; Liu, Zhihong; Cui, Guanglei

    2017-03-15

    Polycyanoacrylate is a very promising matrix for polymer electrolytes, possessing the advantages of strong binding and high electrochemical stability owing to its functional nitrile groups. Herein, a facile and reliable in situ polymerization strategy for poly(ethyl cyanoacrylate) (PECA)-based gel polymer electrolytes (GPE), consisting of PECA and 4 M LiClO4 in carbonate solvents and formed via a highly efficient anionic polymerization, was introduced. The in situ polymerized PECA gel polymer electrolyte achieved an excellent ionic conductivity (2.7 × 10⁻³ S cm⁻¹) at room temperature and exhibited a considerable electrochemical stability window up to 4.8 V vs Li/Li⁺. The LiFePO4/PECA-GPE/Li and LiNi0.5Mn1.5O4/PECA-GPE/Li batteries using this in-situ-polymerized GPE delivered stable charge/discharge profiles, considerable rate capability, and excellent cycling performance. These results demonstrate that this reliable in situ polymerization process is a very promising strategy for preparing high-performance polymer electrolytes for flexible thin-film batteries, micropower lithium batteries, and deformable lithium batteries for special purposes.

  7. A Multi-Criteria Decision Analysis based methodology for quantitatively scoring the reliability and relevance of ecotoxicological data.

    PubMed

    Isigonis, Panagiotis; Ciffroy, Philippe; Zabeo, Alex; Semenzin, Elena; Critto, Andrea; Giove, Silvio; Marcomini, Antonio

    2015-12-15

    Ecotoxicological data are highly important for risk assessment processes and are used for deriving environmental quality criteria, which are enacted to assure the good quality of waters, soils or sediments and to achieve desirable environmental quality objectives. It is therefore highly important to evaluate the reliability of available data before they are used in these processes. A thorough analysis of currently available frameworks for the assessment of ecotoxicological data has led to the identification of significant flaws, but at the same time of various opportunities for improvement. In this context, a new methodology, based on Multi-Criteria Decision Analysis (MCDA) techniques, has been developed with the aim of analysing the reliability and relevance of ecotoxicological data (which are produced through laboratory biotests for individual effects) in a transparent, quantitative way, through the use of expert knowledge, multiple criteria and fuzzy logic. The proposed methodology can be used for the production of weighted Species Sensitivity Weighted Distributions (SSWD), as a component of the ecological risk assessment of chemicals in aquatic systems. The MCDA aggregation methodology is described in detail and demonstrated through examples in the article, and the hierarchically structured framework used for the evaluation and classification of ecotoxicological data is briefly discussed. The methodology is demonstrated for the aquatic compartment but can easily be tailored to other environmental compartments (soil, air, sediments).

  8. Design and reliability analysis of high-speed and continuous data recording system based on disk array

    NASA Astrophysics Data System (ADS)

    Jiang, Changlong; Ma, Cheng; He, Ning; Zhang, Xugang; Wang, Chongyang; Jia, Huibo

    2002-12-01

    Many real-time applications require sustained high-speed data recording. This paper proposes a high-speed, sustained data recording system based on complex-RAID 3+0. The system consists of an Array Controller Module (ACM), String Controller Modules (SCMs) and a Main Controller Module (MCM). The ACM, implemented on an FPGA chip, splits the high-speed incoming data stream into several lower-speed streams and synchronously generates one parity-code stream; it can also recover the original data stream during reading. The SCMs record the lower-speed streams from the ACM onto SCSI disk drives. In the SCM, dual-page buffer technology is adopted to implement the speed-matching function and satisfy the need for sustained recording. The MCM monitors the whole system and controls the ACM and SCMs to realize the data striping, reconstruction, and recovery functions. A method for determining the system scale is presented. Finally, two new schemes, Floating Parity Group (FPG) and full 2D-Parity Group (full 2D-PG), are proposed to improve system reliability and compared with the Traditional Parity Group (TPG). This recording system can be used conveniently in many areas of data recording, storage, playback and remote backup thanks to its high reliability.
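
    The stripe-and-parity idea at the heart of the ACM can be illustrated in a few lines. The sketch below (Python; the helper names are hypothetical, and this is not the authors' FPGA implementation) byte-interleaves one input stream across several data streams and derives an XOR parity stream, so that any single lost stream can be rebuilt from the survivors, as in RAID-3:

```python
import numpy as np

def stripe_with_parity(data: bytes, n_disks: int) -> list[bytes]:
    """Byte-interleave one input stream across n_disks data streams and
    derive one XOR parity stream (RAID-3 style)."""
    buf = np.frombuffer(data, dtype=np.uint8)
    pad = (-len(buf)) % n_disks
    buf = np.pad(buf, (0, pad))              # pad to a whole number of stripes
    streams = buf.reshape(-1, n_disks).T     # row i -> data stream i
    parity = np.bitwise_xor.reduce(streams, axis=0)
    return [s.tobytes() for s in streams] + [parity.tobytes()]

def recover_stream(streams: list[bytes], lost: int) -> bytes:
    """Rebuild one missing stream by XOR-ing all surviving streams,
    parity included -- the ACM's inverse-recovery function in miniature."""
    survivors = [np.frombuffer(s, dtype=np.uint8)
                 for i, s in enumerate(streams) if i != lost]
    return np.bitwise_xor.reduce(survivors).tobytes()

# round trip: stripe across 4 data streams, drop stream 1, rebuild it
parts = stripe_with_parity(b"sustained high-speed recording", 4)
assert recover_stream(parts, 1) == parts[1]
```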

  9. High power laser source for atom cooling based on reliable telecoms technology with all fibre frequency stabilisation

    NASA Astrophysics Data System (ADS)

    Legg, Thomas; Farries, Mark

    2017-02-01

    Cold atom interferometers are emerging as important tools for metrology. Designed into gravimeters, they can measure extremely small changes in the local gravitational field strength and be used for underground surveying to detect buried utilities, mineshafts and sinkholes prior to civil works. To create a cold atom interferometer, narrow-linewidth, frequency-stabilised lasers are required to cool the atoms and to set up and measure the atom interferometer. These lasers are commonly either GaAs diodes, Ti:sapphire lasers, or frequency-doubled InGaAsP diodes and fibre lasers. The InGaAsP DFB lasers are attractive because they are very reliable, mass-produced, frequency-controlled by injection current and simply amplified to high powers with fibre amplifiers. In this paper, a laser system suitable for Rb atom cooling, based on a 1560 nm DFB laser and an erbium-doped fibre amplifier, is described. The laser output is frequency doubled with fibre-coupled periodically poled LiNbO3 to a wavelength of 780 nm. The output power exceeds 1 W at 780 nm. The laser is stabilised at 1560 nm against a fibre Bragg resonator that is passively temperature compensated. Frequency tuning over a range of 1 GHz is achieved by locking the laser to sidebands of the resonator that are generated by a phase modulator. This laser design is attractive for field-deployable rugged systems because it uses all fibre-coupled components with long-term proven reliability.

  10. Reliable Alignment in Total Knee Arthroplasty by the Use of an iPod-Based Navigation System.

    PubMed

    Koenen, Paola; Schneider, Marco M; Fröhlich, Matthias; Driessen, Arne; Bouillon, Bertil; Bäthis, Holger

    2016-01-01

    Axial alignment is one of the main objectives in total knee arthroplasty (TKA). Computer-assisted surgery (CAS) is more accurate regarding limb alignment reconstruction than the conventional technique. The aim of this study was to analyse the precision of the innovative navigation system DASH® by Brainlab and to evaluate the reliability of intraoperatively acquired data. A retrospective analysis of 40 patients who underwent CAS TKA using the iPod-based navigation system DASH was performed. Pre- and postoperative axial alignment were measured on standardized radiographs by two independent observers. These data were compared with the navigation data. Furthermore, interobserver reliability was measured. The duration of surgery was monitored. The mean difference between the preoperative mechanical axis on X-ray and the first intraoperatively measured limb axis from the navigation system was 2.4°. The postoperative X-rays showed a mean difference of 1.3° compared to the final navigation measurement. According to radiographic measurements, 88% of arthroplasties had a postoperative limb axis within ±3°. The mean additional time needed for navigation was 5 minutes. We demonstrated very good precision for the DASH system, which is comparable to established navigation devices, with only a negligible expenditure of time compared to conventional TKA.

  11. Reliable Alignment in Total Knee Arthroplasty by the Use of an iPod-Based Navigation System

    PubMed Central

    Koenen, Paola; Schneider, Marco M.; Fröhlich, Matthias; Driessen, Arne; Bouillon, Bertil; Bäthis, Holger

    2016-01-01

    Axial alignment is one of the main objectives in total knee arthroplasty (TKA). Computer-assisted surgery (CAS) is more accurate regarding limb alignment reconstruction than the conventional technique. The aim of this study was to analyse the precision of the innovative navigation system DASH® by Brainlab and to evaluate the reliability of intraoperatively acquired data. A retrospective analysis of 40 patients who underwent CAS TKA using the iPod-based navigation system DASH was performed. Pre- and postoperative axial alignment were measured on standardized radiographs by two independent observers. These data were compared with the navigation data. Furthermore, interobserver reliability was measured. The duration of surgery was monitored. The mean difference between the preoperative mechanical axis on X-ray and the first intraoperatively measured limb axis from the navigation system was 2.4°. The postoperative X-rays showed a mean difference of 1.3° compared to the final navigation measurement. According to radiographic measurements, 88% of arthroplasties had a postoperative limb axis within ±3°. The mean additional time needed for navigation was 5 minutes. We demonstrated very good precision for the DASH system, which is comparable to established navigation devices, with only a negligible expenditure of time compared to conventional TKA. PMID:27313898

  12. Reliable bearing fault diagnosis using Bayesian inference-based multi-class support vector machines.

    PubMed

    Islam, M M Manjurul; Kim, Jaeyoung; Khan, Sheraz A; Kim, Jong-Myon

    2017-02-01

    This letter presents a multi-fault diagnosis scheme for bearings using hybrid features extracted from their acoustic emissions and a Bayesian inference-based one-against-all support vector machine (Bayesian OAASVM) for multi-class classification. The conventional OAASVM, a standard multi-class extension of the binary support vector machine, results in ambiguously labeled regions in the input space that degrade its classification performance. The proposed Bayesian OAASVM instead formulates the feature space as an appropriate Gaussian process prior, interprets the decision value of the OAASVM as a maximum a posteriori evidence function, and uses Bayesian inference to label unknown samples.
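
    For orientation, the sketch below shows the plain one-against-all SVM baseline that the letter builds on (Python/scikit-learn; the synthetic features stand in for the acoustic-emission hybrid features, and the Bayesian evidence layer described above is not reproduced):

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))            # stand-in hybrid AE feature vectors
y = rng.integers(0, 4, size=300)          # four bearing-condition classes

# One binary SVM per class; each learns "this fault vs. the rest".
clf = make_pipeline(StandardScaler(),
                    OneVsRestClassifier(SVC(kernel="rbf", C=10.0)))
clf.fit(X, y)

scores = clf.decision_function(X[:5])     # raw per-class decision values
labels = clf.predict(X[:5])               # arg-max over the per-class scores
# Ambiguity arises when several entries of `scores` sit near zero at once;
# the letter's Bayesian layer replaces this arg-max with posterior evidence.
```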

  13. Multi-immunoreaction-based dual-color capillary electrophoresis for enhanced diagnostic reliability of thyroid gland disease.

    PubMed

    Woo, Nain; Kim, Su-Kang; Kang, Seong Ho

    2017-08-04

    Thyroid-stimulating hormone (TSH) secretion plays a critical role in regulating thyroid gland function and circulating thyroid hormones (i.e., thyroxine (T4) and triiodothyronine (T3)). A novel multi-immunoreaction-based dual-color capillary electrophoresis (CE) technique was investigated in this study to assess its reliability in diagnosing thyroid gland disease via simultaneous detection of TSH, T3, and T4 in a single run of CE. Compared to the conventional immunoreaction technique, multi-immunoreaction of biotinylated streptavidin antibodies increased the selectivity and sensitivity for individual hormones in human blood samples. Dual-color laser-induced fluorescence (LIF) detection-based CE performed in a running buffer of 25 mM Na2B4O7-NaOH (pH 9.3) allowed for fast, simultaneous quantitative analysis of the three target thyroid hormones using different excitation wavelengths within 3.2 min. This process had excellent sensitivity, with detection limits of 0.05-5.32 fM. The results showed 1000-100,000 times higher detection sensitivity than previous methods. Method validation against enzyme-linked immunosorbent assay for application to human blood samples showed that the CE method was not significantly different at the 98% confidence level. Therefore, the developed CE-LIF method has the advantages of high detection sensitivity, faster analysis time, and smaller sample amounts compared to conventional methods. The combined multi-immunoreaction and dual-color CE-LIF method should offer increased diagnostic reliability for thyroid gland disease compared to conventional methods, based on its highly sensitive detection of thyroid hormones using a single injection and high-throughput screening.

  14. Feasibility and reliability of classifying gross motor function among children with cerebral palsy using population-based record surveillance.

    PubMed

    Benedict, Ruth E; Patz, Jean; Maenner, Matthew J; Arneson, Carrie L; Yeargin-Allsopp, Marshalyn; Doernberg, Nancy S; Van Naarden Braun, Kim; Kirby, Russell S; Durkin, Maureen S

    2011-01-01

    For conditions with wide-ranging consequences, such as cerebral palsy (CP), population-based surveillance provides an estimate of the prevalence of case status but only the broadest understanding of the impact of the condition on children, families or society. Beyond case status, information regarding health, functional skills and participation is necessary to fully appreciate the consequences of the condition. The purpose of this study was to assess the feasibility and reliability of enhancing population-based surveillance by classifying gross motor function (GMF) from information available in medical records of children with CP. We assessed inter-rater reliability of two GMF classification methods, one the Gross Motor Function Classification System (GMFCS) and the other a 3-category classification of walking ability: (1) independently, (2) with handheld mobility device, or (3) limited or none. Two qualified clinicians independently reviewed abstracted evaluations from medical records of 8-year-old children residing in southeast Wisconsin, USA who were identified as having CP (n = 154) through the Centers for Disease Control and Prevention's Autism and Developmental Disabilities Monitoring Network. Ninety per cent (n = 138) of the children with CP had information in the record after age 4 years and 108 (70%) had adequate descriptions of gross motor skills to classify using the GMFCS. Agreement was achieved on 75.0% of the GMFCS ratings (simple kappa = 0.67 [95% CI 0.57, 0.78], weighted kappa = 0.83 [95% CI 0.77, 0.89]). Among case children for whom walking ability could be classified (n = 117), approximately half walked independently without devices and one-third had limited or no walking ability. Across walking ability categories, agreement was reached for 94% (simple kappa = 0.90 [95% CI 0.82, 0.96], weighted kappa = 0.94 [95% CI 0.89, 0.98]). Classifying GMF in the context of active records-based surveillance is feasible and reliable.
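
    The agreement statistics reported above are straightforward to reproduce. A minimal sketch (Python/scikit-learn, with illustrative ratings rather than the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# GMFCS levels (I-V coded as 1-5) assigned by two independent clinicians
# to the same abstracted records; the ratings below are illustrative only.
rater_a = [1, 2, 2, 3, 4, 5, 1, 3, 2, 4]
rater_b = [1, 2, 3, 3, 4, 4, 1, 3, 2, 5]

simple_kappa = cohen_kappa_score(rater_a, rater_b)
# Weighted kappa discounts near-miss disagreements (e.g. level 2 vs. 3)
# relative to distant ones, which is why the study's weighted value (0.83)
# exceeds its simple value (0.67).
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(simple_kappa, weighted_kappa)
```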

  15. Reliability of office-based narrow-band imaging-guided flexible laryngoscopic tissue samplings.

    PubMed

    Chang, Catherine; Lin, Wan-Ni; Hsin, Li-Jen; Lee, Li-Ang; Lin, Chien-Yu; Li, Hsueh-Yu; Liao, Chun-Ta; Fang, Tuan-Jen

    2016-12-01

    Direct suspension laryngoscopic biopsy performed under general anesthesia is the conventional approach for obtaining a pathological diagnosis for neoplasms of the larynx, oropharynx, and hypopharynx. Since the development of distal chip laryngoscopy and digital imaging systems, transnasal flexible laryngoscopic tissue sampling has gained popularity as an office-based procedure. Additional assessment with narrow-band imaging (NBI) can help to increase the diagnostic yield. The aim of the study was to evaluate the accuracy, sensitivity, and specificity of a novel diagnostic tool: office-based NBI (OB-NBI) flexible laryngoscopic tissue sampling. A retrospective chart review was performed in a tertiary referral medical center in Taiwan. From January 2010 to February 2013, 90 consecutive patients received OB-NBI biopsies. The accuracies of the OB-NBI biopsies were compared among locations, tumor sizes, head and neck cancer histories, and other factors. All patients completed the procedure without life-threatening complications. The overall sensitivity and specificity were 97.2% and 100%, respectively, with a diagnostic accuracy of 98.9%. Accuracy was not affected by tumor size, location, learning curve, or previous head and neck cancer history. We present an integrated technique that merges the safety and versatility of flexible laryngoscopy with the diagnostic power of NBI to produce a promising method of high accuracy and minimal morbidity.

  16. Psychometric Properties of Performance-based Measurements of Functional Capacity: Test-Retest Reliability, Practice Effects, and Potential Sensitivity to Change

    PubMed Central

    Leifker, Feea R.; Patterson, Thomas L.; Bowie, Christopher R.; Mausbach, Brent T.; Harvey, Philip D.

    2010-01-01

    Performance-based measures of the ability to perform social and everyday living skills are being more widely used to assess functional capacity in people with serious mental illnesses such as schizophrenia and bipolar disorder. Since they are also being used as outcome measures in pharmacological and cognitive remediation studies aimed at cognitive impairments in schizophrenia, understanding their measurement properties and potential sensitivity to change is important. In this study, the test-retest reliability, practice effects, and reliable change indices of two different performance-based functional capacity (FC) measures, the UCSD Performance-Based Skills Assessment (UPSA) and the Social Skills Performance Assessment (SSPA), were examined over several different retest intervals in two different samples of people with schizophrenia (n = 238 and n = 116) and a healthy comparison (HC) sample (n = 109). These psychometric properties were compared to those of a neuropsychological (NP) assessment battery. Test-retest reliabilities of the long form of the UPSA ranged from r=.63 to r=.80 over follow-up periods of up to 36 months in people with schizophrenia, while brief UPSA reliabilities ranged from r=.66 to r=.81. Test-retest reliability of the NP performance scores ranged from r=.77 to r=.79. Test-retest reliabilities of the UPSA were lower in healthy controls, while NP performance was slightly more reliable. SSPA test-retest reliability was lower. Practice effect sizes ranged from .05 to .16 for the UPSA and from .07 to .19 for the NP assessment in patients, with the HC group showing larger practice effects. Reliable change intervals were consistent across the NP and both FC measures, indicating equal potential for detection of change. These performance-based measures of functional capacity appear to have similar potential to be sensitive to change compared to NP performance in people with schizophrenia. PMID:20399613
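
    The reliable change indices mentioned above can be computed directly from test-retest reliability. A minimal sketch, assuming the common Jacobson-Truax formulation (Python; the numbers are illustrative, not the study's):

```python
import numpy as np

def reliable_change_index(x1, x2, sd_baseline, r_retest):
    """Jacobson-Truax reliable change index: observed change divided by the
    standard error of the difference; |RCI| > 1.96 suggests change beyond
    measurement error at the 5% level."""
    se_diff = sd_baseline * np.sqrt(2.0) * np.sqrt(1.0 - r_retest)
    return (x2 - x1) / se_diff

# e.g. an 8-point gain on a UPSA-like scale, baseline SD 12, retest r = .78
print(reliable_change_index(75, 83, 12, 0.78))   # ~1.0 -> within error band
```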

  17. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of the EDF. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
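
    A minimal sketch of the estimation idea (Python/SciPy, with synthetic failure data; Powell's method as in the paper, and the Kolmogorov-Smirnov statistic as the objective — the Anderson-Darling variant would simply swap in a different objective function):

```python
import numpy as np
from scipy.stats import weibull_min, kstest
from scipy.optimize import minimize

# synthetic three-parameter Weibull failure data (shape, location, scale)
failures = weibull_min.rvs(2.0, loc=10.0, scale=50.0, size=60, random_state=1)

def ks_statistic(params, data):
    shape, loc, scale = params
    # the location parameter must lie below the smallest observed failure
    if shape <= 0 or scale <= 0 or loc >= data.min():
        return np.inf
    # discrepancy between the EDF of the data and the candidate Weibull CDF
    return kstest(data, weibull_min(shape, loc=loc, scale=scale).cdf).statistic

res = minimize(ks_statistic,
               x0=[1.0, 0.9 * failures.min(), failures.std()],
               args=(failures,), method="Powell")
shape, loc, scale = res.x   # goodness-of-fit-optimal Weibull parameters
```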

  18. Reliable gate stack and substrate parameter extraction based on C-V measurements for 14 nm node FDSOI technology

    NASA Astrophysics Data System (ADS)

    Mohamad, B.; Leroux, C.; Rideau, D.; Haond, M.; Reimbold, G.; Ghibaudo, G.

    2017-02-01

    Effective work function and equivalent oxide thickness are fundamental parameters for technology optimization. In this work, a comprehensive study is performed on a large set of FDSOI devices. The extraction of the gate stack parameters is carried out by fitting experimental C-V characteristics to quantum simulations based on the self-consistent solution of the one-dimensional Poisson and Schrödinger equations. A reliable methodology for gate stack parameter extraction is proposed and validated. This study distinguishes the process modules that directly impact the effective work function from those that only affect the device threshold voltage due to the device architecture. Moreover, the relative impacts of various process modules on channel thickness and gate oxide thickness are evidenced.

  19. Copper-based micro-channel cooler reliably operated using solutions of distilled-water and ethanol as a coolant

    NASA Astrophysics Data System (ADS)

    Chin, A. K.; Nelson, A.; Chin, R. H.; Bertaska, R.; Jacob, J. H.

    2015-03-01

    Copper-based micro-channel coolers (Cu-MCC) are the lowest-thermal-resistance heat-sinks for high-power laser-diode (LD) bars. Presently, the resistivity, pH and oxygen content of the de-ionized water coolant must be actively controlled to minimize cooler failure by corrosion and electro-corrosion. Additionally, the water must be constantly exposed to ultraviolet radiation to limit the growth of micro-organisms that may clog the micro-channels. In this study, we report the reliable, care-free operation of LD bars attached to Cu-MCCs using a solution of distilled water and ethanol as the coolant. This coolant meets the storage requirements of Mil-Std 810G, e.g. exposure to a storage temperature as low as -51°C and no growth of micro-organisms during passive storage.

  20. Reliability of neuronal information conveyed by unreliable neuristor-based leaky integrate-and-fire neurons: a model study

    PubMed Central

    Lim, Hyungkwang; Kornijcuk, Vladimir; Seok, Jun Yeong; Kim, Seong Keun; Kim, Inho; Hwang, Cheol Seong; Jeong, Doo Seok

    2015-01-01

    We conducted simulations of the neuronal behavior of neuristor-based leaky integrate-and-fire (NLIF) neurons. A phase-plane analysis of the NLIF neuron highlights its spiking dynamics, which are determined by two nullclines conditional on the variables of the plane. Particular emphasis was placed on the operational noise arising from the variability of the threshold switching behavior of the neuron on each switching event. As a consequence, we found that the NLIF neuron exhibits Poisson-like noise in spiking, delimiting the reliability of the information conveyed by individual NLIF neurons. To highlight neuronal information coding at a higher level, a population of noisy NLIF neurons was analyzed with regard to the probability of successful information decoding given the Poisson-like noise of each neuron. The result demonstrates a high probability of successful decoding in spite of the large variability of individual neurons caused by the threshold switching behavior. PMID:25966658
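
    The effect of threshold-switching variability on spike regularity is easy to reproduce in a toy model. A minimal sketch (Python/NumPy; all parameters are illustrative assumptions, not values from the paper) in which the firing threshold is redrawn after every spike:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, tau = 1e-3, 20e-3                 # time step (s), membrane time constant
theta_mu, theta_sd = 1.0, 0.08        # threshold mean / spread (assumed)
drive = 1.2                           # constant suprathreshold input

v, theta, spikes = 0.0, rng.normal(theta_mu, theta_sd), []
for step in range(int(2.0 / dt)):     # 2 s of simulated time
    v += (dt / tau) * (-v + drive)    # leaky integration toward `drive`
    if v >= theta:                    # threshold crossing -> spike
        spikes.append(step * dt)
        v = 0.0                       # reset
        theta = rng.normal(theta_mu, theta_sd)  # switching variability

isi = np.diff(spikes)
print("CV of ISI:", isi.std() / isi.mean())  # ->1 would be fully Poisson-like
```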

  1. Observer-based reliable stabilization of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances.

    PubMed

    Fan, Jinhua; Zhang, Youmin; Zheng, Zhiqiang

    2013-11-01

    A matrix inequality approach is proposed to reliably stabilize a class of uncertain linear systems subject to actuator faults, saturation, and bounded system disturbances. The system states are assumed to be unavailable for measurement, and a classical observer is incorporated to enable estimate-based feedback control. Both the stability and the stabilization of the closed-loop system are discussed, and the closed-loop domain of attraction is estimated by an ellipsoidal invariant set. The resulting stabilization conditions, in the form of matrix inequalities, enable simultaneous optimization of both the observer gain and the feedback controller gain, which is realized by converting the non-convex optimization problem into an unconstrained nonlinear programming problem. The effectiveness of the proposed design techniques is demonstrated on a linearized model of the F-18 HARV around an operating point.
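
    The observer-plus-feedback structure at the core of the paper can be sketched in a few lines. Below is a minimal discrete-time simulation (Python/NumPy) of a generic second-order plant — not the F-18 model — with hand-picked gains rather than gains from the matrix-inequality synthesis, and with a simple saturation on the actuator:

```python
import numpy as np

# Generic plant x' = Ax + Bu, y = Cx (illustrative, not the F-18 HARV model)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-4.0, -2.0]])          # feedback gain (hand-picked, assumed)
L = np.array([[3.0], [5.0]])          # observer gain (hand-picked, assumed)

dt = 1e-3
x = np.array([[1.0], [0.0]])          # true state (not measurable)
xhat = np.zeros((2, 1))               # observer estimate

for _ in range(5000):
    u = np.clip(K @ xhat, -1.0, 1.0)  # feedback from the estimate + saturation
    y = C @ x                         # only the output y is measurable
    x = x + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + L @ (y - C @ xhat))  # observer

print(np.linalg.norm(x), np.linalg.norm(x - xhat))  # both shrink toward zero
```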

  2. Performance and reliability of HfAlOx-based interpoly dielectrics for floating-gate Flash memory

    NASA Astrophysics Data System (ADS)

    Govoreanu, B.; Wellekens, D.; Haspeslagh, L.; Brunco, D. P.; De Vos, J.; Aguado, D. Ruiz; Blomme, P.; van der Zanden, K.; Van Houdt, J.

    2008-04-01

    This paper discusses the performance and reliability of aggressively scaled HfAlOx-based interpoly dielectric (IPD) stacks in combination with high-workfunction metal gates for sub-45 nm non-volatile memory technologies. It is shown that an IPD stack with an EOT of less than 5 nm can provide a large program/erase (P/E) window while operating at moderate voltages, and has very good retention, with an extrapolated 10-year retention window of about 3 V at 150 °C. The impact of the process sequence and metal gate material is discussed. The viability of the material is considered in view of the demands of various Flash memory technologies, and directions for further improvement are discussed.

  3. [Reliability of the individual age assessment at the time of death based on sternal rib end morphology in Balkan population].

    PubMed

    Donić, Danijela; Durić, Marija; Babić, Dragan; Popović, Dorde

    2005-06-01

    This paper analyzes the reliability of Iscan's sternal rib-end phase method for the assessment of individual age at the time of death in the Balkan population. The method is based on morphological age changes of the sternal rib ends. The tested sample consisted of 65 ribs from autopsy cases at the Institute for Forensic Medicine, University of Belgrade, during 1999-2002 (23 females and 42 males of various ages, ranging from 17 to 91 years according to the forensic documents). Significant differences between the real chronological age of the individuals and the values established by Iscan's method were found, especially in the older categories (phases 6 and 7), in both males and females. The results of the discriminative analysis showed the values of the highest diagnostic relevance for the assessment of age in our population: the change of the depth of the articular fossa, the thickness of its walls, and the quality of the bone.

  4. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles.

    PubMed

    Rodríguez-Molina, Jesús; Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran

    2017-08-05

    Major challenges are presented when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass the ones found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, an unreliable transport medium, and high heterogeneity of data representation and hardware). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined operation between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer), where other technologies are also interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and in a deployment scenario, are provided as a way to assess the quality of the system and its satisfactory performance.

  5. SAR-based sea traffic monitoring: a reliable approach for maritime surveillance

    NASA Astrophysics Data System (ADS)

    Renga, Alfredo; Graziano, Maria D.; D'Errico, M.; Moccia, A.; Cecchini, A.

    2011-11-01

    Maritime surveillance problems are drawing the attention of multiple institutional actors. National and international security agencies are interested in matters like maritime traffic security, maritime pollution control, monitoring migration flows and detection of illegal fishing activities. Satellite imaging is a good way to identify ships, but because of the large swaths involved, the imaged scenes are likely to contain a large number of ships, with the vast majority, hopefully, performing legal activities. Therefore, the imaging system needs a supporting system that identifies legal ships and limits the number of potential alarms to be further monitored by patrol boats or aircraft. In this framework, spaceborne Synthetic Aperture Radar (SAR) sensors, terrestrial AIS and the ongoing satellite AIS systems can represent a great potential synergy for maritime security. Starting from this idea, the paper develops different designs for an AIS constellation able to reduce the time lag between SAR image and AIS data acquisition. An analysis of SAR-based ship detection algorithms is also reported and candidate algorithms are identified.

  6. Improved Membrane-Based Sensor Network for Reliable Gas Monitoring in the Subsurface

    PubMed Central

    Lazik, Detlef; Ebert, Sebastian

    2012-01-01

    A conceptually improved sensor network to monitor the partial pressure of CO2 in different soil horizons was designed. Consisting of five membrane-based linear sensors (line-sensors), each 10 m long, the set-up enables us to integrate over the locally fluctuating CO2 concentrations (typically below 5%vol) up to the meter scale, yielding valuable concentration means with a repetition time of about 1 min. Preparatory tests in the laboratory resulted in an unexpectedly high accuracy of better than 0.03%vol, compared with the previously published 0.08%vol. The statistical uncertainties (standard deviations) of the line-sensors and the reference sensor (a nondispersive infrared CO2 sensor) were close to each other. Whereas the uncertainty of the reference increases with the measurement value, the line-sensors show an inverse uncertainty trend, resulting in a comparatively enhanced accuracy for concentrations >1%vol. Furthermore, a method for in situ maintenance was developed, enabling a proof of sensor quality and effective calibration without demounting the line-sensors from the soil, which would disturb the established structures and ongoing processes. PMID:23235447

  7. An Optimized, Data Distribution Service-Based Solution for Reliable Data Exchange Among Autonomous Underwater Vehicles

    PubMed Central

    Bilbao, Sonia; Martínez, Belén; Frasheri, Mirgita; Cürüklü, Baran

    2017-01-01

    Major challenges are presented when managing a large number of heterogeneous vehicles that have to communicate underwater in order to complete a global mission in a cooperative manner. In this kind of application domain, sending data through the environment presents issues that surpass the ones found in other overwater, distributed, cyber-physical systems (i.e., low bandwidth, an unreliable transport medium, and high heterogeneity of data representation and hardware). This manuscript presents a Publish/Subscribe-based semantic middleware solution for unreliable scenarios and vehicle interoperability across cooperative and heterogeneous autonomous vehicles. The middleware relies on different iterations of the Data Distribution Service (DDS) software standard and their combined operation between autonomous maritime vehicles and a control entity. It also uses several components with different functionalities deemed mandatory for a semantic middleware architecture oriented to maritime operations (device and service registration, context awareness, access to the application layer), where other technologies are also interwoven with the middleware (wireless communications, acoustic networks). Implementation details and test results, both in a laboratory and in a deployment scenario, are provided as a way to assess the quality of the system and its satisfactory performance. PMID:28783049

  8. Improving the reliability of road materials based on micronized sulfur composites

    NASA Astrophysics Data System (ADS)

    Abdrakhmanova, K. K.

    2015-01-01

    The work presents the results of a nano-structural modification of sulfur that prevents polymorphic transformations from degrading the properties of sulfur composites: the sulfur is kept in a thermodynamically stable state that precludes destruction in service. It has been established that the properties of sulfur-based composite materials can be significantly improved by modifying the sulfur and structuring the sulfur binder with nano-dispersed fiber particles and a filler in an ultra-dispersed state. The paper shows the possibility of modifying Tengiz sulfur by fragmentation, which ensures that the structured sulfur is structurally changed and stabilized through reinforcement by ultra-dispersed fiber particles, multiplying the phase contact area. Interaction between the nano-dispersed chrysotile asbestos fibers and sulfur allows the mechanical properties of the chrysotile asbestos tubes to be realized in the reinforced composite, and its integrity is preserved provided that the surfaces of the chrysotile asbestos tubes are well wetted by molten sulfur and there is high adhesion between the tubes and the matrix, which, in addition to sulfur, contains limestone microparticles. The ability to serve under severe operating conditions, with exposure to both aggressive media and mechanical loads, makes the produced sulfur composites attractive to the road construction industry.

  9. A high reliability detection algorithm for wireless ECG systems based on compressed sensing theory.

    PubMed

    Balouchestani, Mohammadreza; Raahemifar, Kaamran; Krishnan, Sridhar

    2013-01-01

    Wireless Body Area Networks (WBANs) consist of small intelligent biomedical wireless sensors attached on or implanted in the body to collect vital biomedical data from the human body, providing Continuous Health Monitoring Systems (CHMS). WBANs promise to be a key element in next-generation wireless electrocardiogram (ECG) systems. ECG signals are widely used in health care systems as a noninvasive technique for the diagnosis of heart conditions. However, the use of conventional ECG systems is restricted by the patient's mobility, transmission capacity, and physical size. This highlights the need for, and the advantage of, wireless ECG systems with a low sampling rate and low power consumption. With this in mind, the Compressed Sensing (CS) procedure, as a new sampling approach, is combined with Shannon Energy Transformation (SET) and Peak Finding Schemes (PFS) to provide a robust, low-complexity detection algorithm for gateways and access points in hospitals and medical centers with high probability and sufficient accuracy. Advanced wireless ECG systems based on our approach will be able to deliver healthcare not only to patients in hospitals and medical centers but also at their homes and workplaces, thus offering cost savings and improving the quality of life. Our simulation results show an improvement of 0.1% in sensitivity as well as 1.5% in prediction level and detection accuracy.
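
    The sub-Nyquist sampling idea behind such schemes can be illustrated with a generic compressed-sensing round trip. A minimal sketch (Python with NumPy/SciPy/scikit-learn; a synthetic DCT-sparse signal stands in for an ECG segment, and orthogonal matching pursuit stands in for the paper's reconstruction stage):

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n, m, k = 256, 64, 8                    # signal length, measurements, sparsity

D = idct(np.eye(n), axis=0, norm="ortho")   # DCT synthesis basis
s = np.zeros(n)
s[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x = D @ s                               # synthetic "ECG segment", sparse in DCT

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix, 4x sub-Nyquist
y = Phi @ x                             # compressed samples taken at the sensor

# Gateway-side reconstruction: find the sparse coefficients that explain y
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ D, y)
x_rec = D @ omp.coef_
print("relative error:", np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```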

  10. The Needs-Based Assessment of Parental (Guardian) Support: a test of its validity and reliability.

    PubMed

    Bolen, Rebecca M; Leah Lamb, J; Gradante, Jennifer

    2002-10-01

    The purpose of this paper is to present a newly developed measure of guardian support, the Needs-Based Assessment of Parental (Guardian) Support (NAPS), an empirical evaluation of that measure, and its comparison with another measure of guardian support. The theoretical model underlying this measure applies humanistic theory and Maslow's hierarchy of needs to the understanding of guardian support. The study employed a cross-sectional, nonexperimental survey design using 183 nonoffending guardians who accompanied children presenting for a medical/forensic examination for sexual abuse. The NAPS and an existing measure of guardian support were administered during the hospital outpatient visit, and basic information concerning the child and the abuse situation was gathered. The NAPS had robust psychometric properties and was culturally sensitive. Tests of specific hypotheses supported the construct validity of the measure and a conceptualization of guardian support as hierarchical, with four stages of support. The brevity and ease of administration of the NAPS for both the clinician and the guardian suggest that it is a viable assessment tool. The strong support for the NAPS' underlying theoretical model suggests that nonoffending guardians' available resources need to be considered when assessing guardian support.

  11. Effect of Inner Electrode on Reliability of (Zn,Mg)TiO3-Based Multilayer Ceramic Capacitor

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Hsi; Su, Chi-Yi; Lee, Ying Chieh; Yang, Jackey; Yang, Tong; PinLin, Shih

    2006-07-01

    In this study, different proportions of silver-palladium alloy acting as the inner electrode were applied to a (Zn,Mg)TiO3-based multilayer ceramic capacitor (MLCC) sintered at 925 °C for 2 h in order to evaluate the effect of the inner electrode on reliability. The main results show that the lifetime is inversely proportional to the Ag content of the Pd/Ag inner electrode. Ag⁺ diffusion into the (Zn,Mg)TiO3-based MLCC during cofiring at 925 °C for 2 h and Ag⁺ migration at 140 °C under 200 V are both responsible for the short lifetime of the (Zn,Mg)TiO3-based MLCC, particularly the latter factor. A (Zn,Mg)TiO3-based MLCC with a high Ag content in the inner electrode (Ag/Pd=99/01) exhibits the shortest lifetime (13 h), and the effect of Ag⁺ migration is markedly enhanced because the activation energy of the (Zn,Mg)TiO3 dielectric is greatly lowered by the excessive formation of oxygen vacancies and the semiconducting Zn2TiO4 phase when Ag⁺ substitutes for Zn²⁺ during co-firing.

  12. Observational Measures of Implementer Fidelity for a School-based Preventive Intervention: Development, Reliability and Validity

    PubMed Central

    Cross, Wendi; West, Jennifer; Wyman, Peter A.; Schmeelk-Cone, Karen; Xia, Yinglin; Tu, Xin; Teisl, Michael; Brown, C. Hendricks; Forgatch, Marion

    2014-01-01

    Current measures of implementer fidelity often fail to adequately measure core constructs of adherence and competence, and their relationship to outcomes can be mixed. To address these limitations, we used observational methods to assess these constructs and their relationships to proximal outcomes in a randomized trial of a school-based preventive intervention (Rochester Resilience Project) designed to strengthen emotion self-regulation skills in 1st–3rd graders with elevated aggressive-disruptive behaviors. Within the intervention group (n = 203), a subsample (n = 76) of students was selected to reflect the overall sample. Implementers were 10 paraprofessionals. Videotaped observations of three lessons from Year 1 of the intervention (14 lessons) were coded for each implementer-child dyad on Adherence (content) and Competence (quality). Using multi-level modeling we examined how much of the variance in the fidelity measures was attributed to implementer and to the child within implementer. Both measures had large and significant variance accounted for by implementer (Competence, 68%; Adherence, 41%); child within implementer did not account for significant variance indicating that ratings reflected stable qualities of the implementer rather than the child. Raw Adherence and Competence scores shared 46% of variance (r = .68). Controlling for baseline differences and age, the amount (Adherence) and quality (Competence) of program delivered predicted children’s enhanced response to the intervention on both child and parent reports after six months, but not on teacher report of externalizing behavior. Our findings support the use of multiple observations for measuring fidelity and that adherence and competence are important components of fidelity which could be assessed by many programs using these methods. PMID:24736951

  13. Reliability of GFR formulas based on serum creatinine, with special reference to the MDRD Study equation.

    PubMed

    Coresh, Josef; Auguste, Priscilla

    2008-01-01

    Estimation of glomerular filtration rate (GFR) is central to the diagnosis, evaluation and management of chronic kidney disease (CKD). This review summarizes data on the performance of equations using serum creatinine to estimate GFR, particularly the Modification of Diet in Renal Disease (MDRD) Study equation. The size of studies evaluating GFR estimation equations and their level of sophistication in estimating bias, precision, validity and sensitivity to the source population have improved over the past decade. We update our review from 2006, which included 7 studies with over 500 individuals and 12 studies with 50-499 individuals with measured GFR evaluating the MDRD Study and Cockcroft-Gault equations. More recent studies include an individual-level pooled analysis of 5504 participants in 10 studies, which showed that creatinine calibration to reference methods improved the performance of the MDRD Study equation but increased bias for the Cockcroft-Gault equation. The MDRD Study equation had a bias of 3.0%, an interquartile range of 29.0% and a percentage of estimates within 30% of the measured GFR value (P30) of 82% for estimates below 60 mL/min/1.73 m². Above this value, the bias was greater (8.7%) and estimates are less useful, since a 30% error is a large absolute error in GFR. Results vary across studies but are generally similar, with disappointing performance in the high-GFR range, which is of particular interest in early diabetic nephropathy. New equations using serum creatinine can reduce the bias present in the high-GFR range but are unlikely to dramatically improve precision, suggesting a need for additional markers. Finally, algorithms are needed to tailor clinical practice based on data from GFR estimates and other participant characteristics, including the source population and level of proteinuria.
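
    For reference, the 4-variable MDRD Study equation itself is compact enough to state as code. A minimal sketch (Python; the 175 coefficient assumes IDMS-traceable, calibrated serum creatinine — the calibration issue discussed above):

```python
def egfr_mdrd(scr_mg_dl: float, age_years: float,
              female: bool, black: bool) -> float:
    """4-variable MDRD Study equation, eGFR in mL/min/1.73 m^2.
    The 175 coefficient assumes IDMS-calibrated serum creatinine
    (186 is used with non-standardized assays)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# Example: 60-year-old woman, serum creatinine 1.1 mg/dL -> ~51
print(round(egfr_mdrd(1.1, 60, female=True, black=False), 1))
```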

  14. Reliability of information-based integration of EEG and fMRI data: a simulation study.

    PubMed

    Assecondi, Sara; Ostwald, Dirk; Bagshaw, Andrew P

    2015-02-01

    Most studies involving simultaneous electroencephalographic (EEG) and functional magnetic resonance imaging (fMRI) data rely on the first-order, affine-linear correlation of EEG and fMRI features within the framework of the general linear model. An alternative is the use of information-based measures such as mutual information and entropy, which can also detect higher-order correlations present in the data. The estimation of information-theoretic quantities may be influenced by several parameters, such as the sample size, the amount of correlation between variables, and the discretization (or binning) strategy of choice. While these issues have been investigated for invasive neurophysiological data and a number of bias-corrected estimators have been developed, there has been no attempt to systematically examine the accuracy of information estimates for the multivariate distributions arising in the context of EEG-fMRI recordings. This is especially important given the differences between electrophysiological and EEG-fMRI recordings. In this study, we drew random samples from simulated bivariate and trivariate distributions mimicking the statistical properties of EEG-fMRI data. We compared the estimated information shared by the simulated random variables with its true numerical value and found that the interaction between the binning strategy and the estimation method influences the accuracy of the estimate. Conditional on the simulation assumptions, we found that the equipopulated binning strategy yields the best and most consistent results across distributions and bias correction methods. We also found that among the bias correction techniques, the asymptotically debiased (TPMC), jackknife debiased (JD), and best upper bound (BUB) approaches give similar results, consistent across distributions.
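
    The equipopulated (quantile) binning strategy favoured above is simple to implement. A minimal plug-in sketch (Python/NumPy; the bin count is an assumed parameter, and none of the bias corrections named in the abstract are applied here):

```python
import numpy as np

def mi_equipopulated(x, y, n_bins=8):
    """Plug-in mutual information estimate (bits) with equipopulated
    (quantile) binning. No bias correction (TPMC/JD/BUB) applied."""
    ix = np.clip(np.searchsorted(np.quantile(x, np.linspace(0, 1, n_bins + 1)),
                                 x, side="right") - 1, 0, n_bins - 1)
    iy = np.clip(np.searchsorted(np.quantile(y, np.linspace(0, 1, n_bins + 1)),
                                 y, side="right") - 1, 0, n_bins - 1)
    pxy = np.histogram2d(ix, iy, bins=(n_bins, n_bins),
                         range=((0, n_bins), (0, n_bins)))[0] / len(x)
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# correlated Gaussian pair standing in for an EEG feature and a BOLD feature
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = 0.6 * x + 0.8 * rng.normal(size=2000)
print(mi_equipopulated(x, y))   # true MI for r = 0.6 is ~0.32 bits
```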

  15. Reliability physics

    NASA Technical Reports Server (NTRS)

    Cuddihy, E. F.; Ross, R. G., Jr.

    1984-01-01

    Speakers whose topics relate to the reliability physics of solar arrays are listed and their topics briefly reviewed. Nine reports are reviewed ranging in subjects from studies of photothermal degradation in encapsulants and polymerizable ultraviolet stabilizers to interface bonding stability to electrochemical degradation of photovoltaic modules.

  16. Calibration of an autocorrelation-based method for determining amplitude histogram reliability and quantal size.

    PubMed

    Stratford, K J; Jack, J J; Larkman, A U

    1997-12-01

    1. We describe a method, based on autocorrelation and Monte Carlo simulation, for determining the likelihood that peaks in synaptic amplitude frequency histograms could have been a result of finite sampling from parent distributions that were unimodal. 2. The first step was to calculate an 'autocorrelation score' for the histogram to be tested. A unimodal distribution was fitted to the test histogram and subtracted from it. The resulting difference function was smoothed and its autocorrelation function calculated. The amplitude of the first (non-zero lag) peak in this autocorrelation function was taken as the autocorrelation score for that histogram. The score depends on the sharpness of the histogram peaks, the equality of their spacing and the number of trials. 3. The second stage was to generate large numbers of random samples, each of the same number of trials as the histogram, from a unimodal generator distribution of similar shape. The autocorrelation score was calculated for each sample and the proportion of samples with scores greater than the histogram gave the likelihood that the histogram peaks could have arisen by sampling artifact. 4. The method was calibrated using simulated non-quantal and quantal histograms with different signal-to-noise ratios and numbers of trials. For a quantal distribution with four peaks and a signal-to-noise ratio of 3, a sample size of about 500 trials was needed for 95% of samples to be distinguished from a non-quantal distribution. 5. The ability of the autocorrelation method to distinguish quantal from non-quantal distributions was compared against two conventional statistical tests, the chi 2 and the Kolmogorov-Smirnov goodness of fit tests. The autocorrelation method was more specific in extracting quantized responses. The Kolmogorov-Smirnov test in particular could not distinguish quantal distributions with multiple peaks even if the peaks were very sharp. 6. The improved discrimination of the autocorrelation method
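
    Steps 1-2 of the method translate directly into code. A minimal sketch of the scoring step (Python/NumPy; a Gaussian fit stands in for whatever unimodal parent distribution is fitted, and the smoothing window is an assumed parameter). The Monte Carlo step then draws many same-size samples from the fitted unimodal curve and asks how often their scores exceed the observed one:

```python
import numpy as np

def autocorrelation_score(amplitudes, bin_width, smooth=3):
    """Score the regularity of peaks in an amplitude histogram: fit a
    unimodal curve (here a Gaussian, as a stand-in), subtract it, smooth
    the residual, and return the first non-zero-lag autocorrelation peak."""
    edges = np.arange(amplitudes.min(), amplitudes.max() + 2 * bin_width,
                      bin_width)
    hist, _ = np.histogram(amplitudes, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mu, sd = amplitudes.mean(), amplitudes.std()
    unimodal = (hist.sum() * bin_width / (sd * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((centers - mu) / sd) ** 2))
    resid = np.convolve(hist - unimodal, np.ones(smooth) / smooth, mode="same")
    ac = np.correlate(resid, resid, mode="full")[len(resid) - 1:]
    ac = ac / ac[0]                       # normalise the zero-lag peak to 1
    for lag in range(1, len(ac) - 1):     # first local maximum at lag > 0
        if ac[lag - 1] < ac[lag] >= ac[lag + 1]:
            return float(ac[lag])
    return 0.0

# quantal test case: 4 peaks spaced one "quantum" apart, plus Gaussian noise
rng = np.random.default_rng(2)
amps = rng.integers(1, 5, 500) + rng.normal(0, 0.15, 500)
print(autocorrelation_score(amps, bin_width=0.1))
```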

  17. The rating reliability calculator

    PubMed Central

    Solomon, David J

    2004-01-01

    Background: Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago, Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use, Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods: The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program uploads them to the server to calculate the reliability and other statistics describing the ratings. Results: When the program is run, it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion: This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program. PMID:15117416
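
    The Spearman-Brown prophecy formula mentioned above is a one-liner. A minimal sketch (Python; the example numbers are illustrative):

```python
def spearman_brown(r_single: float, k: float) -> float:
    """Predicted reliability of the mean of k ratings per subject, given
    the reliability r_single of a single rating (Spearman-Brown formula)."""
    return k * r_single / (1.0 + (k - 1.0) * r_single)

# e.g. single-rating reliability of 0.45, averaged over 4 judges -> ~0.77
print(spearman_brown(0.45, 4))
```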

  18. Approach for the use of MSW settlement predictions in the assessment of landfill capacity based on reliability analysis.

    PubMed

    Sivakumar Babu, G L; Chouksey, Sandeep Kumar; Reddy, Krishna R

    2013-10-01

    In the analysis and design of municipal solid waste (MSW) landfills, there are many uncertainties associated with the properties of MSW during and after its placement. Several studies involving different laboratory and field tests have been performed to understand the complex behavior and properties of MSW, and based on these studies, different models have been proposed for analyzing the time-dependent settlement response of MSW. For such analysis, it is important to account for the variability of the model parameters that represent the different processes involved: primary compression under loading, mechanical creep and biodegradation. In this paper, regression equations based on the response surface method (RSM) are used to represent the complex behavior of MSW described by a newly developed constitutive model. An approach to assessing landfill capacities and developing landfill closure plans based on predicted landfill settlements is proposed. The variability associated with the model parameters for primary compression, mechanical creep and biodegradation is propagated through a reliability analysis framework to examine its influence on MSW settlement, and the influence of the individual parameters is estimated through sensitivity analysis.
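
    As a rough illustration of the parameter-variability propagation described above, the Python sketch below runs a Monte Carlo reliability check on a placeholder settlement model with primary-compression, creep and biodegradation terms. The model form, parameter distributions and limit value are all assumptions for illustration, not the paper's constitutive model or data.

        import numpy as np

        rng = np.random.default_rng(0)

        def settlement(H, Cc, Ck, Edg, t):
            # Placeholder MSW settlement model (illustrative only)
            primary = Cc * H                             # primary compression
            creep = Ck * H * np.log1p(t)                 # mechanical creep
            biodeg = Edg * H * (1.0 - np.exp(-0.1 * t))  # biodegradation
            return primary + creep + biodeg

        n = 10_000
        Cc = rng.normal(0.20, 0.03, n)     # assumed parameter variability
        Ck = rng.normal(0.03, 0.006, n)
        Edg = rng.normal(0.15, 0.03, n)
        S = settlement(H=20.0, Cc=Cc, Ck=Ck, Edg=Edg, t=30.0)  # 20 m, 30 yr

        limit = 8.0                        # assumed settlement limit, m
        print("P(settlement > limit):", np.mean(S > limit))

    Sensitivity of the settlement to each parameter can then be read off by, for example, correlating each sampled parameter vector with S.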

  19. New Flexible Silicone-Based EEG Dry Sensor Material Compositions Exhibiting Improvements in Lifespan, Conductivity, and Reliability

    PubMed Central

    Yu, Yi-Hsin; Chen, Shih-Hsun; Chang, Che-Lun; Lin, Chin-Teng; Hairston, W. David; Mrozek, Randy A.

    2016-01-01

    This study investigates alternative material compositions for flexible silicone-based dry electroencephalography (EEG) electrodes to improve the performance lifespan while maintaining high-fidelity transmission of EEG signals. Electrode materials were fabricated with varying concentrations of silver-coated silica and silver flakes to evaluate their electrical, mechanical, and EEG transmission performance. Scanning electron microscope (SEM) analysis of the initial electrode development identified some weak points in the sensors’ construction, including particle pull-out and ablation of the silver coating on the silica filler. The newly-developed sensor materials achieved significant improvement in EEG measurements while maintaining the advantages of previous silicone-based electrodes, including flexibility and non-toxicity. The experimental results indicated that the proposed electrodes maintained suitable performance even after exposure to temperature fluctuations, 85% relative humidity, and enhanced corrosion conditions, demonstrating improved environmental stability. Fabricated flat (forehead) and acicular (hairy-site) electrodes composed of the optimum identified formulation exhibited low impedance and reliable EEG measurement; initial human experiments demonstrate the feasibility of using these silicone-based electrodes for typical lab data collection applications. PMID:27809260

  20. Single crystalline silicon-based surface micromachining for high-precision inertial sensors: technology and design for reliability

    NASA Astrophysics Data System (ADS)

    Knechtel, Roy

    2009-05-01

    In this paper, a foundry process for surface-micromachined inertial sensors such as accelerometers or gyroscopes is introduced, with special attention to reliability aspects. Reliability was a major focus during the development phase, leading to the choice of the single crystalline silicon layer of an SOI device wafer as the mechanically active material. Glass frit wafer bonding is used for capping and hermetic sealing, but beyond these fundamental choices, many further influences on reliability must be considered, such as the risk of sticking, local stress concentration, electrical effects, and the defined limits on mechanical movement that arise from the interaction of design and technology. Reliability test results, as well as measures for improving reliability and performance, are discussed in this paper.